AWS CloudWatch
You can forward CloudWatch logs to LOGIQ in one of two ways:
  • via the LOGIQ CloudWatch exporter Lambda function
  • by running Logstash on a VM (or in Docker)

LOGIQ CloudWatch exporter Lambda function

You can export AWS CloudWatch logs to LOGIQ using an AWS Lambda function. A CloudWatch log group acts as the trigger for the Lambda function, invoking it whenever new log events arrive.
This guide explains the process for setting up an AWS Lambda function and configuring an AWS CloudWatch trigger to forward CloudWatch logs to LOGIQ.
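When CloudWatch Logs invokes a subscribed Lambda function, the log events arrive gzip-compressed and base64-encoded in the event's awslogs.data field, so an exporter must reverse both steps before forwarding anything. The sketch below illustrates that decoding with a hypothetical sample payload (in a real invocation, the encoded string is handed to the Lambda function by CloudWatch):

```shell
# Hypothetical sample of the JSON that CloudWatch Logs compresses into awslogs.data
payload='{"logGroup":"/aws/lambda/demo","logStream":"2024/01/01/[$LATEST]abc","logEvents":[{"message":"hello"}]}'

# CloudWatch Logs delivers this gzip-compressed and base64-encoded ...
encoded=$(printf '%s' "$payload" | gzip -c | base64 -w0)

# ... so the exporter decodes base64, then decompresses, to recover the log events
printf '%s' "$encoded" | base64 -d | gunzip
```

The final command prints the original JSON, which is what the exporter function parses before shipping events to LOGIQ.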

Creating a Lambda function

LOGIQ provides CloudFormation templates to create the LOGIQ CloudWatch exporter Lambda function.
Depending on the type of logs you'd like to export, use the appropriate CloudFormation template from the following list.

Exporting Lambda Function logs

Use the following CloudFormation template to export AWS Lambda function logs to LOGIQ.
https://logiqcf.s3.amazonaws.com/cloudwatch-exporter/logiq-cloudwatch-lambda-logs-exporter.yaml

Exporting CloudTrail Logs

Use the following CloudFormation template to export CloudTrail logs to LOGIQ.
https://logiqcf.s3.amazonaws.com/cloudwatch-exporter/logiq-cloudwatch-cloudtrail-exporter.yaml

Exporting AWS VPC Flow Logs

Use the following CloudFormation template to export VPC Flow Logs to LOGIQ.
https://logiqcf.s3.amazonaws.com/cloudwatch-exporter/logiq-cloudwatch-flowlogs-exporter.yaml

Exporting CloudWatch logs from other services

Use the following CloudFormation template to export CloudWatch logs from any other service.
https://logiqcf.s3.amazonaws.com/cloudwatch-exporter/logiq-cloudwatch-exporter.yaml
This CloudFormation stack creates a Lambda function and the permissions it needs. You must configure the following parameters.
  • APPNAME: Application name; a readable name for LOGIQ to partition logs.
  • CLUSTERID: Cluster ID; a readable name for LOGIQ to partition logs.
  • NAMESPACE: Namespace; a readable name for LOGIQ to partition logs.
  • LOGIQHOST: IP address or hostname of the LOGIQ server, without the http:// or https:// prefix.
  • INGESTTOKEN: JWT token to securely ingest logs. See the LOGIQ documentation for generating an ingest token.
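As an alternative to the CloudFormation console, the stack can also be created from the AWS CLI. A sketch, using the general-purpose exporter template above; all parameter values shown are placeholders you must replace, and --capabilities CAPABILITY_IAM acknowledges that the stack creates IAM permissions for the Lambda function:

```shell
aws cloudformation create-stack \
  --stack-name logiq-cloudwatch-exporter \
  --template-url https://logiqcf.s3.amazonaws.com/cloudwatch-exporter/logiq-cloudwatch-exporter.yaml \
  --capabilities CAPABILITY_IAM \
  --parameters \
    ParameterKey=APPNAME,ParameterValue=my-app \
    ParameterKey=CLUSTERID,ParameterValue=my-cluster \
    ParameterKey=NAMESPACE,ParameterValue=production \
    ParameterKey=LOGIQHOST,ParameterValue=logiq.example.com \
    ParameterKey=INGESTTOKEN,ParameterValue=<your-ingest-token>
```

You can watch stack creation progress with `aws cloudformation describe-stacks --stack-name logiq-cloudwatch-exporter`.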

Configuring the CloudWatch trigger

Once the CloudFormation stack is created, navigate to the AWS Lambda function (logiq-cloudwatch-exporter) and add a trigger.
On the Add trigger page, select CloudWatch Logs as the trigger type, and then select the CloudWatch Logs log group you want to export.
Once this configuration is complete, any new logs coming to the configured CloudWatch Log group will be streamed to the LOGIQ cluster.
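The same trigger can be wired up from the AWS CLI. A sketch under the assumption that the function keeps its default name: logs.amazonaws.com needs permission to invoke the function, and the log group needs a subscription filter pointing at it (region, account ID, and log group name below are placeholders):

```shell
# Allow CloudWatch Logs to invoke the exporter function
aws lambda add-permission \
  --function-name logiq-cloudwatch-exporter \
  --statement-id cloudwatch-logs-trigger \
  --principal logs.amazonaws.com \
  --action lambda:InvokeFunction \
  --source-arn arn:aws:logs:<region>:<account-id>:log-group:<log-group-name>:*

# Subscribe the log group to the function (an empty filter pattern matches all events)
aws logs put-subscription-filter \
  --log-group-name <log-group-name> \
  --filter-name logiq-exporter \
  --filter-pattern "" \
  --destination-arn arn:aws:lambda:<region>:<account-id>:function:logiq-cloudwatch-exporter
```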

Create the Logstash VM (or Docker)

Install Logstash on an Ubuntu virtual machine as shown below.
wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
echo "deb https://artifacts.elastic.co/packages/6.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-6.x.list
sudo apt-get update
sudo apt-get install logstash
# Install the logstash-input-cloudwatch_logs plugin (provides the cloudwatch_logs input used below)
cd /usr/share/logstash
sudo bin/logstash-plugin install logstash-input-cloudwatch_logs
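If you prefer Docker over a VM, a roughly equivalent setup is to bake the plugin into an image and mount your pipeline configuration. This is a sketch; the Logstash 6.x image tag and mount paths are assumptions, and logstash.conf and flattenJSON.rb are the files described in the Configure Logstash section:

```shell
# Extend the official Logstash 6.x image with the CloudWatch Logs input plugin
cat > Dockerfile <<'EOF'
FROM docker.elastic.co/logstash/logstash:6.8.23
RUN bin/logstash-plugin install logstash-input-cloudwatch_logs
EOF

docker build -t logstash-cloudwatch .

# Mount the pipeline config and the flattenJSON.rb script from the current directory
docker run -d --name logstash-cloudwatch \
  -v "$PWD/logstash.conf:/usr/share/logstash/pipeline/logstash.conf" \
  -v "$PWD/flattenJSON.rb:/usr/share/logstash/flattenJSON.rb" \
  logstash-cloudwatch
```

If you use the container path above, set the ruby filter's path accordingly in logstash.conf.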

Configure Logstash

Logstash ships with no default configuration. Create a new file /etc/logstash/conf.d/logstash.conf with the following contents, modifying values as needed.
Before running Logstash, download the flattenJSON.rb script and place it on the machine at the path referenced by the ruby filter below.
input {
  cloudwatch_logs {
    access_key_id => "<access-key>"
    secret_access_key => "<secret-access-key>"
    region => "<region>"
    log_group => ["<cloudwatch-log-group>"]
    log_group_prefix => true
    codec => plain
    start_position => end
    interval => 30
  }
}

filter {
  ruby {
    path => "/home/<custom-path>/flattenJSON.rb"
    script_params => { "field" => "cloudwatch_logs" }
  }
  mutate {
    gsub => ["cloudwatch_logs.log_group", "\/", "-"]
    gsub => ["cloudwatch_logs.log_group", "^-", ""]
    add_field => { "namespace" => "<custom-namespace>" }
    add_field => { "cluster_id" => "<custom-cluster-id>" }
    add_field => { "app_name" => "%{[cloudwatch_logs.log_group]}" }
    add_field => { "proc_id" => "%{[cloudwatch_logs.log_stream]}" }
  }
}

output {
  http {
    url => "http://<logiq-endpoint>/v1/json_batch"
    headers => { "Authorization" => "Bearer <SECURE_INGEST_TOKEN>" }
    http_method => "post"
    format => "json_batch"
    content_type => "json_batch"
    pool_max => 2000
    pool_max_per_route => 100
    socket_timeout => 300
  }
}
You can obtain an ingest token from the LOGIQ UI as described here. You can customize the namespace and cluster_id in the Logstash configuration based on your needs.
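Before starting Logstash, you can smoke-test connectivity to the ingest endpoint with curl. This is a sketch: the endpoint and token are the same placeholders as in the output section, and the JSON field names mirror the fields the Logstash filter adds (namespace, cluster_id, app_name):

```shell
curl -s -X POST "http://<logiq-endpoint>/v1/json_batch" \
  -H "Authorization: Bearer <SECURE_INGEST_TOKEN>" \
  -H "Content-Type: application/json" \
  -d '[{"namespace":"test","cluster_id":"test","app_name":"curl-smoke-test","message":"hello from curl"}]'
```

If the token and host are correct, the test event should appear in the LOGIQ UI under the test namespace.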
Your AWS CloudWatch logs will now be forwarded to your LOGIQ instance. See the Explore section to view the logs.