input {
  file {
    path => "/var/log/syslog"
    type => "syslog"
    start_position => "beginning"
  }
}

filter {
  uuid {
    target => "uuid"
  }
}

output {
  syslog {
    appname => "my-awesome-app"
    host => "logiq-server-dns.my-domain.com"
    protocol => "ssl-tcp"
    msgid => "%{uuid}"
    ssl_cert => "client.crt"
    ssl_key => "client.key"
    ssl_cacert => "logiq.crt"
    ssl_verify => true
    port => "7514"
    rfc => "rfc5424"
    id => "%{uuid}"
  }
  stdout { codec => rubydebug }
}
NOTE: Change "host", "appname", "ssl_cert", "ssl_key", and "ssl_cacert" above to suit your configuration.
output {
  http {
    url => "https://logiq-dns-or-ip/v1/json_batch"
    headers => { "Authorization" => "Bearer <Auth token>" }
    http_method => "post"
    format => "json_batch"
    content_type => "json_batch"
    pool_max => 300
    pool_max_per_route => 100
    socket_timeout => 60
  }
}
You can additionally control how the data is organized by specifying extra fields in a filter:
filter {
  mutate {
    add_field => { "cluster_id" => "demo-http-test" }
    add_field => { "namespace" => "namespace_name" }
    add_field => { "app_name" => "application_name" }
    add_field => { "proc_id" => "process_or_pod_identifier" }
  }
}
You can generate the Bearer token using logiqctl:
$ logiqctl get httpingestkey
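As a quick check, you can exercise the HTTP ingest endpoint directly with curl. This is a minimal sketch, assuming the endpoint accepts a JSON array of events (the json_batch format used by the Logstash output above); the sample fields simply mirror the ones added in the filter:

curl -X POST "https://logiq-dns-or-ip/v1/json_batch" \
  -H "Authorization: Bearer <Auth token>" \
  -H "Content-Type: application/json" \
  -d '[{"message": "hello from curl", "namespace": "namespace_name", "app_name": "application_name", "cluster_id": "demo-http-test"}]'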
Please see below for how to configure Rsyslog to send data to the LOGIQ server. Rsyslog can send data to LOGIQ using either the TCP transport or the RELP transport. The RELP module for Rsyslog is called omrelp, and the TCP forwarding module is called omfwd.
LOGIQ strongly recommends sending data using the RELP transport to ensure packets are not lost or dropped. RELP relies on acknowledgements from the receiver to make sure each packet is delivered. LOGIQ, for its part, only sends an acknowledgement back once the data is written to the persistent store.
Update the syslog config in /etc/rsyslog.conf or /etc/rsyslog.d/50-default.conf:
*.* action(type="omfwd"
           queue.type="LinkedList"
           action.resumeRetryCount="-1"
           queue.size="10000"
           queue.saveOnShutdown="on"
           target="logiq-server-syslog-host"
           port="514"
           protocol="tcp")
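After updating the config, restart rsyslog and send a test message. Both commands below are standard: logger writes to the local syslog socket, which rsyslog then forwards to the target:

sudo systemctl restart rsyslog
logger "test message from rsyslog to LOGIQ"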
Rsyslog is installed by default on most modern OSes, but it needs the omrelp module to send data to a RELP-aware endpoint such as LOGIQ. To enable RELP, install the packages listed below:
rsyslog-relp - enables the RELP protocol for rsyslog
rsyslog-gnutls - enables rsyslog to communicate over a secure socket
For Debian/Ubuntu, use apt to install:
sudo apt update
sudo apt install rsyslog-gnutls rsyslog-relp
For Red Hat/CentOS/Fedora, use yum to install:
yum install rsyslog-gnutls rsyslog-relp
Update the syslog config in /etc/rsyslog.conf or /etc/rsyslog.d/50-default.conf:
module(load="omrelp")
action(type="omrelp"
       target="logiq-server-relp-host"
       port="2514"
       tls="on"
       tls.caCert="/etc/ssl/LOGIQ/certs/LOGIQ.crt"
       tls.myCert="/etc/ssl/LOGIQ/certs/client.crt"
       tls.myPrivKey="/etc/ssl/LOGIQ/certs/client.key"
       tls.authMode="fingerprint"
       tls.PermittedPeer=["SHA1:BF:46:AB:9F:A3:77:46:AF:6B:D2:EC:A4:30:72:F1:CC:0E:17:C9:42"]
       action.reportSuspensionContinuation="on"
       action.resumeRetryCount="-1"
       action.resumeInterval="10"
       queue.type="fixedArray"
       queue.size="250000"
       queue.dequeueBatchSize="1024"
       queue.workerThreads="4"
       queue.workerThreadMinimumMessages="50000"
       queue.spoolDirectory="/var/log/rsyslog"
       queue.fileName="XXX_sock"
       queue.maxFileSize="16m"
       queue.maxDiskSpace="2G"
       queue.highWaterMark="200000"
       queue.lowWaterMark="100000"
       queue.checkpointInterval="30000"
       queue.saveOnShutdown="on"
       queue.timeoutEnqueue="1")
NOTE: Change "target", "port", "tls.caCert", "tls.myCert", "tls.myPrivKey", and "tls.PermittedPeer" above to suit your configuration. For a non-TLS config, set the "tls" parameter to "off" and remove all tls.* parameters from the config above. E.g. target="ec2-34-213-110-235.us-west-2.compute.amazonaws.com"
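To obtain the SHA1 fingerprint value for tls.PermittedPeer, you can read it off the server certificate with openssl (a sketch; adjust the certificate path to your setup). Prefix the printed fingerprint with "SHA1:" as shown in the config above:

openssl x509 -in /etc/ssl/LOGIQ/certs/LOGIQ.crt -noout -fingerprint -sha1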
Update the syslog config in /etc/rsyslog.conf or /etc/rsyslog.d/50-default.conf:
Note that Rsyslog non-TLS RELP uses port 20514, whereas TLS RELP uses port 2514.
module(load="omrelp")
action(type="omrelp"
       target="logiq-server-relp-host"
       port="20514"
       tls="off"
       action.reportSuspensionContinuation="on"
       action.resumeRetryCount="-1"
       action.resumeInterval="10"
       queue.type="fixedArray"
       queue.size="250000"
       queue.dequeueBatchSize="1024"
       queue.workerThreads="4"
       queue.workerThreadMinimumMessages="50000"
       queue.spoolDirectory="/var/log/rsyslog"
       queue.fileName="XXX_sock"
       queue.maxFileSize="16m"
       queue.maxDiskSpace="2G"
       queue.highWaterMark="200000"
       queue.lowWaterMark="100000"
       queue.checkpointInterval="30000"
       queue.saveOnShutdown="on"
       queue.timeoutEnqueue="1")
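With either variant, you can validate the rsyslog configuration before restarting; the -N flag runs a config check without starting the daemon:

sudo rsyslogd -N1
sudo systemctl restart rsyslog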
If you are running a K8S cluster, you can use fluentd to send data to the LOGIQ server. Please see below for instructions.
When deploying a fluentd daemonset on K8S clusters, we recommend you use the fluentd daemonset container provided by LOGIQ. It is available at https://hub.docker.com/repository/docker/logiqai/fluentd-remote-syslog. It allows the administrator to pass a human-readable CLUSTER_ID, or cluster identifier, with all the log data.
Providing a CLUSTER_ID allows LOGIQ to separate namespaces that may conflict across two separate K8S clusters. It is also easier for the administrator to use human-readable names than for LOGIQ to rely on UUIDs and similar identifiers it detects from the incoming stream.
Clone the repository to get the kubectl YAML files needed to start your daemonset:
git clone https://bitbucket.org/logiqcloud/client-integrations.git
The files needed are under the fluentd folder:
$ cd client-integrations/
$ cd fluentd/
$ ls -la
total 32
drwxr-xr-x  6 user  staff   192 Oct 30 14:47 .
drwxr-xr-x  7 user  staff   224 Oct 30 14:47 ..
-rw-r--r--  1 user  staff   645 Oct 30 14:47 README.md
-rw-r--r--  1 user  staff  1373 Oct 30 14:47 fluentd-logiq.yaml
-rw-r--r--  1 user  staff  1373 Oct 30 14:47 fluentd-logiq_non_tls.yaml
-rw-r--r--  1 user  staff   590 Oct 30 14:47 fluentd_rbac.yaml
-rw-r--r--  1 user  staff   210 Oct 30 14:47 secret.yaml
Edit the fluentd/secret.yaml file to include your CA and client public/private keys in base64-encoded format.
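To produce the base64-encoded values, you can use the base64 utility; the file names below are illustrative and should match your actual certificate and key files:

base64 -w0 ca.crt      # -w0 disables line wrapping (GNU coreutils)
base64 -w0 client.crt
base64 -w0 client.key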
Edit the fluentd/fluentd-logiq.yaml file and add your LOGIQ cluster IP/DNS. Also configure your CLUSTER_ID (e.g. RC, Prod, Dev-Test, QA).
....
- env:
  - name: SYSLOG_HOST
    value: "YOUR_LOGIQ_SERVER_IP"
  - name: CLUSTER_ID
    value: "YOUR_CLUSTER_ID"
....
Run the kubectl commands to create the kube-logging namespace. You can choose a different namespace as well; in that case, edit the YAML files to set the correct namespace before applying them:
kubectl create namespace kube-logging
kubectl apply -f fluentd_rbac.yaml
kubectl apply -f secret.yaml
kubectl apply -f fluentd-logiq.yaml
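You can verify that the daemonset rolled out before moving on; pod names will vary per node:

kubectl get daemonset -n kube-logging
kubectl get pods -n kube-logging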
Edit the fluentd/fluentd-logiq_non_tls.yaml file and add your LOGIQ cluster IP/DNS. Also configure your CLUSTER_ID (e.g. RC, Prod, Dev-Test, QA).
....
- env:
  - name: SYSLOG_HOST
    value: "YOUR_LOGIQ_SERVER_IP"
  - name: CLUSTER_ID
    value: "YOUR_CLUSTER_ID"
....
Run the kubectl commands to create the kube-logging namespace. You can choose a different namespace as well; in that case, edit the YAML files to set the correct namespace before applying them:
kubectl create namespace kube-logging
kubectl apply -f fluentd_rbac.yaml
kubectl apply -f fluentd-logiq_non_tls.yaml
If you are running a K8S cluster, you can use fluent-bit to send data to the LOGIQ server. Please see below for instructions.
LOGIQ provides its own fluent-bit daemonset for deployment on K8S clusters. It is available at https://bitbucket.org/logiqcloud/client-integrations/src/master/fluent-bit/. It allows the administrator to pass a human-readable CLUSTER_ID, or cluster identifier, with all the log data.
Providing a CLUSTER_ID allows LOGIQ to separate namespaces that may conflict across two separate K8S clusters. It is also easier for the administrator to use human-readable names than for LOGIQ to rely on UUIDs and similar identifiers it detects from the incoming stream.
Clone the repository to get the kubectl YAML files needed to start your daemonset:
git clone https://bitbucket.org/logiqcloud/client-integrations.git
The files needed are under the fluent-bit folder:
$ cd client-integrations/
$ cd fluent-bit/
$ ls -la
total 64
drwxr-xr-x  8 user  staff   256 Aug  9 05:47 .
drwxr-xr-x  9 user  staff   288 Aug  9 05:20 ..
-rw-r--r--  1 user  staff  2446 Aug  9 05:47 README.md
-rw-r--r--  1 user  staff  8688 Aug  9 05:32 fluent-bit-config-logiq-forward.yml
-rw-r--r--  1 user  staff  1670 Aug  9 05:29 fluent-bit-daemonset-logiq-forward.yml
-rw-r--r--  1 user  staff   269 Aug  9 05:26 fluent-bit-role-binding.yaml
-rw-r--r--  1 user  staff   194 Aug  9 04:49 fluent-bit-role.yaml
-rw-r--r--  1 user  staff    86 Aug  9 05:25 fluent-bit-service-account.yaml
To get started, run the following commands to create the namespace, service account, and role setup:
$ kubectl create namespace logiq-logging
$ kubectl create -f fluent-bit-service-account.yaml
$ kubectl create -f fluent-bit-role-binding.yaml
$ kubectl create -f fluent-bit-role.yaml
The next step is to create a ConfigMap that will be used by the Fluent Bit DaemonSet:
$ kubectl create -f fluent-bit-config-logiq-forward.yml
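To confirm the ConfigMap was created (assuming it lands in the logiq-logging namespace set up above):

$ kubectl get configmap -n logiq-logging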
The Fluent Bit DaemonSet is now ready to be used with LOGIQ on a regular Kubernetes cluster. Configure the following in the daemonset file fluent-bit-daemonset-logiq-forward.yml:
- name: LOGIQ_HOST
  value: "YOUR_LOGIQ_SERVER_IP"
- name: LOGIQ_PORT
  value: "24224"
- name: CLUSTER_ID
  value: "YOUR_CLUSTER_ID"
For Kubernetes versions < 1.17, change the apiVersion from "apps/v1" to "extensions/v1beta1" and remove the selector attached to the DaemonSet spec:

selector:
  matchLabels:
    k8s-app: fluent-bit-logging
$ kubectl create -f fluent-bit-daemonset-logiq-forward.yml
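Once created, you can verify the daemonset is running and streaming logs; the label below comes from the DaemonSet spec shown above:

$ kubectl get pods -n logiq-logging -l k8s-app=fluent-bit-logging
$ kubectl logs -n logiq-logging -l k8s-app=fluent-bit-logging --tail=20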
Using the docker syslog driver to send logs to LOGIQ is quite simple. Details about the docker syslog driver can be found here: https://docs.docker.com/config/containers/logging/syslog/
LOGIQ supports both TLS and non-TLS syslog ports using TCP.
The following fields are required options to pass to the log driver:
tag - Use a human-readable string for better readability; otherwise the first 12 characters of the container ID will be used
syslog-format=rfc3164
syslog-address
syslog-tls-cert
syslog-tls-key
syslog-tls-ca-cert
syslog-tls-skip-verify
Sending data from docker to LOGIQ over TCP on the non-TLS port can be done as below. In this example, we are going to run a MySQL container and have all its logs go to the LOGIQ server hosted at logiqserver-devtest.example.com
docker run --log-driver syslog \
    --log-opt syslog-address=tcp://logiqserver-devtest.example.com:514 \
    --log-opt syslog-format=rfc3164 --log-opt tag=mysql --name mysql3 -d mysql
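You can confirm the container picked up the logging options with docker inspect; the Go template below prints only the log configuration:

docker inspect -f '{{.HostConfig.LogConfig}}' mysql3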
When connecting to the LOGIQ TLS port in a secured setup, pass the client certificates used to connect to the server:
docker run --log-driver syslog \
    --log-opt syslog-address=tcp://logiqserver-devtest.example.com:514 \
    --log-opt syslog-tls-cert=client.pem --log-opt syslog-tls-key=key.pem \
    --log-opt syslog-tls-ca-cert=ca.pem --log-opt syslog-format=rfc3164 \
    --log-opt tag=mysql --name mysql3 -d mysql
Fluentd out-forward
The out-forward Buffered Output plugin forwards events to other fluentd nodes, and LOGIQ can act as one of those nodes.
The code block below defines the minimal changes to add to the fluentd configuration to start sending log events to LOGIQ. It is important to keep the record transformations in place while sending the data to LOGIQ.
<source>
  @type tail
  path /var/log/*.log
  pos_file /var/log/tty.txt.pos
  <parse>
    @type none
  </parse>
</source>

<filter>
  @type record_transformer
  enable_ruby
  <record>
    hostname "#{Socket.gethostname}"
    namespace "#{Socket.gethostname}"
    cluster_id "hadoop-master"
    log ${record["message"]}
  </record>
</filter>

<match>
  @type forward
  send_timeout 10s
  recover_wait 10s
  hard_timeout 20s

  <format>
    @type msgpack
    time_type unixtime
    utc
  </format>

  <buffer time,tag,message>
    @type memory
    timekey 2s
    timekey_wait 1s
    flush_mode interval
    flush_interval 1s
    retry_max_interval 2s
    retry_timeout 10s
  </buffer>

  <server>
    name logiq
    host development.logiq.ai
    port 24224
    weight 100
  </server>

  <secondary>
    @type secondary_file
    directory /var/log/forward-failed
  </secondary>
</match>
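You can sanity-check the merged configuration before restarting the daemon; fluentd's --dry-run flag parses the config without starting it (adjust the path to wherever your config actually lives):

fluentd --dry-run -c /etc/fluent/fluentd.conf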
Forward is the protocol used by Fluentd to route messages between peers. The forward output plugin provides interoperability between compatible systems, LOGIQ being one of them.
The code block below defines the minimal changes to add to the fluent-bit configuration to start sending log events to LOGIQ.
[INPUT]
    Name             tail
    Path             /var/log/*
    Path_Key         On
    Tag              logiq
    Buffer_Max_Size  1024k
    Read_from_Head   On

[FILTER]
    Name    record_modifier
    Match   logiq
    Record  cluster_id flash

[FILTER]
    Name    record_modifier
    Match   logiq
    Record  namespace mesos

[FILTER]
    Name    record_modifier
    Match   logiq
    Record  app_name fluentbit

[OUTPUT]
    Name    forward
    Match   logiq
    Host    localhost
    Port    24224
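To try the configuration locally before rolling it out, you can run fluent-bit against it directly; the binary name and config path vary by install method:

fluent-bit -c /etc/fluent-bit/fluent-bit.conf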