Every LOGIQ server configuration file must have one or more destinations for the data to be written to.
```yaml
destinations:
  - name: t_webservers
    s3:
      bucket: production
      prefix: webservers/
      credential: production
    groupings:
      - name: Environment
        value: Production
      - name: Tier
        value: Web
```
NOTE: A missing destination in a rule is equivalent to evaluating the rule but dropping the incoming packet once the rule is matched. However, other rules with properly configured destinations may still process and successfully write the incoming packet. This also implies that an incoming data message can go to multiple destinations simultaneously.
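For example, two rules matching the same source send each incoming message to both destinations. The rule syntax below follows the examples in this document; the destination name t_archive is illustrative:

```yaml
rules:
  - source: debug_instances
    destination: t_debug      # message is written to t_debug ...
  - source: debug_instances
    destination: t_archive    # ... and also to t_archive
```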
Destinations are defined using the "destinations" keyword in the configuration file. Every destination definition must have a "name", a destination-specific section, and an optional "partition" section defining the data partitioning scheme.
The config validator will flag an error for destination names that are referenced in rules but not defined. In addition, depending on the type of the destination, e.g. AWS S3 or on-prem S3, additional mandatory fields may be required when defining a destination.
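The cross-check the validator performs can be sketched as follows. This is a hypothetical standalone sketch, not LOGIQ's actual implementation; the config structure mirrors the YAML examples in this document:

```python
# Hypothetical sketch of the validator's destination cross-check (not
# LOGIQ's actual code): every destination referenced by a rule must be
# defined under "destinations".

def check_destination_refs(config: dict) -> list:
    """Return error messages for rules referencing undefined destinations."""
    defined = {d.get("name") for d in config.get("destinations", [])}
    errors = []
    for rule in config.get("rules", []):
        dest = rule.get("destination")
        if dest is not None and dest not in defined:
            errors.append(f"rule for source {rule.get('source')!r} "
                          f"references undefined destination {dest!r}")
    return errors

config = {
    "destinations": [{"name": "t_debug", "s3": {"bucket": "debug"}}],
    "rules": [
        {"source": "debug_instances", "destination": "t_debug"},
        {"source": "web_instances", "destination": "t_webservers"},
    ],
}

# One error: t_webservers is referenced but never defined.
print(check_destination_refs(config))
```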
This key is used to specify a unique name for a destination definition.
```yaml
- name: qa_lab
  ipv4_list: 192.168.1.1, 192.168.1.10
```
The destination for an incoming message is specified in the message rules by name.
```yaml
rules:
  - source: debug_instances
    destination: t_debug
```
An optional partition reference can be provided, which tells the LOGIQ server how to organize the data at the destination.
In the example below, two destinations are defined: t_webservers and t_debug. A partition reference p_by_day is specified for the destination t_webservers but not for t_debug. p_by_day is a named reference to a partition definition. See the section on Partitions for how to specify a partitioning scheme for destination data.
```yaml
destinations:
  - name: t_webservers
    partition: p_by_day
    s3:
      bucket: production
      prefix: webservers/
      credential: production
    groupings:
      - name: Environment
        value: Production
      - name: Tier
        value: Web
  - name: t_debug
    s3:
      bucket: debug
      credential: debug
```
A destination definition must specify one of the types defined below. A destination with no subsection for one of these types is not a valid destination, and the config parser will issue an error.
An S3 destination is used for AWS S3 and AWS S3 compatible object stores.
NOTE: For an AWS S3-compatible object store, the AWS SDK should work against the object store and support bucket listing, creation, and multi-part uploads.
Below are two examples. The first destination, t_webservers, defines a bucket in AWS S3. The second destination, t_minio_webservers, defines a bucket hosted in an S3-compatible object store. An S3-compatible object store supports an optional endpoint key in the s3 definition that points to the HTTP endpoint where the object store is hosted.
```yaml
- name: t_webservers
  partition: p_by_day
  s3:
    bucket: logiqf978a
    region: us-east-1
    prefix: webservers
    credential: athena_and_s3_access
  groupings:
    - name: Environment
      value: Production
    - name: Tier
      value: Web
- name: t_minio_webservers
  partition: p_by_day
  s3:
    bucket: logiq-f978b
    endpoint: http://10.0.1.49:9000
    region: us-east-1
    prefix: webservers
    credential: compatible_s3
  groupings:
    - name: Environment
      value: Development
    - name: Tier
      value: Web
```
The bucket key specifies the bucket in the S3 compatible store where LOGIQ will upload the log data. This is a required parameter and the bucket must exist. LOGIQ currently doesn't support creating the bucket automatically.
Every S3 destination must provide a credential to use. The credential key is a reference to a credential name that is defined in the credentials section of the config. Please refer to the Credentials section for more details on how to specify a credential.
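A minimal sketch of how the reference resolves, assuming a credentials section of the shape below; the field names access_key and secret_key are illustrative placeholders, and the Credentials section of this documentation is the authoritative schema:

```yaml
credentials:
  - name: production
    access_key: "..."        # illustrative placeholder field
    secret_key: "..."        # illustrative placeholder field

destinations:
  - name: t_webservers
    s3:
      bucket: production
      credential: production # refers to the credential named above
```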
If you want to write data to an S3-compatible object store that is not AWS S3, e.g. Nutanix Objects, Cloudian, or Azure Blob, you need to specify the endpoint for the S3-compatible object store. The endpoint URL will be either an http or an https URL, along with a port, e.g. http://10.1.0.49:9000.
If an endpoint is not provided, it is implicitly assumed that we are talking with AWS S3.
Groupings provide a way to group incoming data streams that share similar characteristics. E.g. a user may want to tag all traffic coming from the 10.0.20.0/24 subnet as a Production data stream. See the section on Groupings for more details.
The prefix key is an optional key that specifies a prefix to use for all the objects being created in S3. E.g. if the bucket is foo and the prefix is bar/, all of the uploaded data objects will be under foo/bar/.
NOTE: If a prefix is not specified, the destination name is used as a prefix.
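Putting the two cases together (bucket and credential names are illustrative):

```yaml
destinations:
  - name: t_webservers
    s3:
      bucket: foo
      prefix: bar/           # objects land under foo/bar/
      credential: production
  - name: t_debug
    s3:
      bucket: foo            # no prefix: objects land under foo/t_debug/
      credential: debug
```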