The LOGIQ server needs a configuration file that describes how to handle incoming data and where to store it. A minimal configuration to get started is shown below. In this example, LOGIQ ingests data into an AWS-backed, S3-compatible store, and the stored data is partitioned by day.
NOTE: Adjust the configuration below to match your environment. If you deployed on AWS using CloudFormation, the configuration file is generated automatically during installation. If you run LOGIQ on a Kubernetes cluster, the configuration is passed in as a ConfigMap.
```yaml
options:
  ca: "/etc/ssl/logiq/certs/logIQ.crt"
  cert: "/etc/ssl/logiq/certs/logiq-server.crt"
  key: "/etc/ssl/logiq/certs/logiq-server.key"
  host: 0.0.0.0
credentials:
  - name: aws_logging_bucket
    s3:
      secret_key: logiq_secret
      access_key: logiq_access
partitions:
  - name: p_by_day
    fields:
      - year
      - month
      - day
destinations:
  - name: default_log_store
    partition: p_by_day
    s3:
      bucket: logiq-bucket-1fbc-1
      region: us-east-1
      credential: aws_logging_bucket
rules:
  - destination: default_log_store
```
The above configuration defines a single rule: all incoming data is routed to the destination default_log_store. The sections reference one another by name — the rule names a destination, and the destination names a partition (p_by_day) and a credential (aws_logging_bucket).
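To illustrate how these named references fit together, here is a small Python sketch (not LOGIQ code — the `validate` helper and the dict layout mirroring the YAML are illustrative assumptions) that checks that every rule, destination, partition, and credential reference resolves to a defined object:

```python
# Illustrative sketch: the config below mirrors the YAML example above
# as a Python dict. validate() checks the name-based cross-references:
# rules -> destinations, destinations -> partitions and credentials.
config = {
    "credentials": [
        {"name": "aws_logging_bucket",
         "s3": {"secret_key": "logiq_secret", "access_key": "logiq_access"}},
    ],
    "partitions": [
        {"name": "p_by_day", "fields": ["year", "month", "day"]},
    ],
    "destinations": [
        {"name": "default_log_store",
         "partition": "p_by_day",
         "s3": {"bucket": "logiq-bucket-1fbc-1",
                "region": "us-east-1",
                "credential": "aws_logging_bucket"}},
    ],
    "rules": [
        {"destination": "default_log_store"},
    ],
}

def validate(cfg):
    """Return a list of dangling name references (empty if all resolve)."""
    creds = {c["name"] for c in cfg.get("credentials", [])}
    parts = {p["name"] for p in cfg.get("partitions", [])}
    dests = {d["name"] for d in cfg.get("destinations", [])}
    errors = []
    for d in cfg.get("destinations", []):
        if d.get("partition") not in parts:
            errors.append(f"destination {d['name']}: unknown partition")
        if d.get("s3", {}).get("credential") not in creds:
            errors.append(f"destination {d['name']}: unknown credential")
    for r in cfg.get("rules", []):
        if r.get("destination") not in dests:
            errors.append(f"rule: unknown destination {r.get('destination')}")
    return errors

print(validate(config))  # prints [] when every reference resolves
```

A hypothetical check like this makes typos visible early: renaming p_by_day in the partitions section without updating the destination would surface as a dangling reference rather than a runtime failure.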
See the section on LOGIQ server configuration for additional configuration options.