Filebeat
Filebeat is a lightweight shipper for forwarding and centralizing log data. Installed as an agent on your servers, Filebeat monitors the log files or locations that you specify, collects log events, and forwards them either to Elasticsearch or Logstash for indexing.
LOGIQ supports data ingestion from Filebeat. The following section describes the steps for configuring log forwarding from Filebeat to LOGIQ by modifying the Filebeat configuration file.

Configuring Filebeat to forward logs to LOGIQ

To configure Filebeat, modify the Filebeat configuration file, filebeat.yml. The location of this file varies depending on your platform. Locate the configuration file and modify it as described below.

Ingest Ports

LOGIQ.AI hosts the Lumberjack protocol on ports 25224 (non-TLS) and 25225 (TLS). These ports are configurable and can be changed if needed.

Understanding Filebeat configuration

Filebeat configuration consists of two key sections: input and output.
The input section describes which files to process. The output section describes how and where to send the collected data. In addition, the input section can contain processors that modify data before it is sent to the output. We will look at each of these below.
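Putting the two sections together, a minimal filebeat.yml skeleton looks like the following. This is only a sketch; the log path and endpoint are placeholders you must replace with your own values.

```yaml
filebeat:
  inputs:                 # what to collect
    - type: filestream
      enabled: true
      paths:
        - /var/log/*.log
output:                   # where to send it
  logstash:
    hosts: ["<Logiq-endpoint>:25224"]
```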

INPUT

The input section will typically contain the list of file paths that Filebeat should monitor.
```yaml
inputs:
  - enabled: true
    paths:
      - /var/log/*.log
    type: filestream
```
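The paths list accepts multiple entries and glob patterns, so a single input can cover several locations. For example (the paths below are illustrative, not part of the LOGIQ defaults):

```yaml
inputs:
  - enabled: true
    type: filestream
    paths:
      - /var/log/app/*.log        # illustrative paths; adjust to your layout
      - /var/log/nginx/access.log
```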

OUTPUT

The output section describes how to send data to LOGIQ. The LOGIQ ingest layer supports the Lumberjack v1 and v2 protocols and is compatible with Logstash. Use the logstash output type.
```yaml
output:
  logstash:
    hosts: ["<Logiq-endpoint>:25224"]
```
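The logstash output also accepts multiple hosts; with loadbalance enabled, Filebeat distributes events across them. A sketch assuming you have two LOGIQ endpoints (the endpoint names are placeholders):

```yaml
output:
  logstash:
    hosts: ["<Logiq-endpoint-1>:25224", "<Logiq-endpoint-2>:25224"]
    loadbalance: true   # distribute events across the listed hosts
```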

PROCESSORS

Under the input section, include the processors section in addition to your existing Filebeat configuration, along with namespace, application, and cluster_id fields. Doing so ensures that the logs you forward to LOGIQ can be identified and partitioned easily.
We strongly recommend partitioning your data for better organization and query performance. Each Cluster, Namespace, and Application tuple becomes a partition in LOGIQ and can be used to organize logically related data.
```yaml
inputs:
  - enabled: true
    paths:
      - <path to your log>
    type: filestream
    processors:
      - add_fields:
          target: logiq
          fields:
            namespace: myproject
            application: test
            cluster_id: foo
```
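Because processors live under each input, different log sources can carry different partition tuples. The following sketch uses two hypothetical inputs that share a cluster but are separated by application (all names are illustrative):

```yaml
inputs:
  - enabled: true
    type: filestream
    paths:
      - /var/log/nginx/*.log
    processors:
      - add_fields:
          target: logiq
          fields:
            namespace: production
            application: nginx       # partition: web-cluster/production/nginx
            cluster_id: web-cluster
  - enabled: true
    type: filestream
    paths:
      - /var/log/myapp/*.log
    processors:
      - add_fields:
          target: logiq
          fields:
            namespace: production
            application: myapp       # partition: web-cluster/production/myapp
            cluster_id: web-cluster
```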

Example

The following is a complete Filebeat configuration for forwarding logs to LOGIQ without SSL.
```yaml
filebeat:
  inputs:
    - enabled: true
      paths:
        - <path to your log>
      type: filestream
      processors:
        - add_fields:
            target: logiq
            fields:
              namespace: myproject
              application: test
              cluster_id: foo

output:
  # console:
  #   pretty: true
  logstash:
    hosts: ["<Logiq-endpoint>:25224"]

processors:
  - add_host_metadata:
      when:
        not:
          contains:
            tags: forwarded
  - add_cloud_metadata: null
  - add_docker_metadata: null
  - add_kubernetes_metadata: null

setup:
  kibana: null
  template:
    settings:
      index:
        number_of_shards: 1
```

Using SSL/TLS

To configure Filebeat to use SSL while forwarding logs, specify the key/SSL certificate information under the ssl field. You must also specify 25225 as the port for your LOGIQ endpoint in the output section.
The following is an example of a Filebeat configuration file with processors and ssl configured to forward logs to LOGIQ using SSL.
```yaml
filebeat:
  inputs:
    - enabled: true
      paths:
        - <path to your log>
      type: filestream
      processors:
        - add_fields:
            target: logiq
            fields:
              namespace: myproject
              application: test
              cluster_id: foo

output:
  # console:
  #   pretty: true
  logstash:
    hosts: ["<Logiq-endpoint>:25225"]
    ssl.certificate_authorities: ["/etc/ssl/LOGIQ/certs/LOGIQ.crt"]
    ssl.certificate: "/etc/ssl/LOGIQ/certs/client.crt"
    ssl.key: "/etc/ssl/LOGIQ/certs/client.key"

processors:
  - add_host_metadata:
      when:
        not:
          contains:
            tags: forwarded
  - add_cloud_metadata: null
  - add_docker_metadata: null
  - add_kubernetes_metadata: null

setup:
  kibana: null
  template:
    settings:
      index:
        number_of_shards: 1
```
Your logs will now be forwarded from Filebeat into LOGIQ and will start appearing in the LOGIQ UI.