Deploying LOGIQ PaaS on Kubernetes
This page describes deploying LOGIQ on a Kubernetes cluster using HELM 3 charts.
- Kubernetes 1.18, 1.19 or 1.20
- Helm 3.2.0+
- Dynamic PV provisioner support in the underlying infrastructure
- ReadWriteMany volumes for deployment scaling
LOGIQ K8S components are made available as helm charts. Please read and agree to the EULA before proceeding.
helm repo add logiq-repo https://logiqai.github.io/helm-charts
The HELM repository will be named logiq-repo. When installing charts from this repository, make sure to use the repository name as the prefix, e.g. helm install <deployment_name> logiq-repo/<chart_name>
You can now run helm search repo logiq-repo to see the available helm charts:
$ helm search repo logiq-repo
NAME CHART VERSION APP VERSION DESCRIPTION
logiq-repo/logiq v3.0.9 v3.5.9.1 LOGIQ Observability HELM chart for Kubernetes
If you already added LOGIQ's HELM repository in the past, you can get updated software releases using
helm repo update
$ helm repo update
$ helm search repo logiq-repo
NAME CHART VERSION APP VERSION DESCRIPTION
logiq-repo/logiq v3.0.9 v3.5.9.1 LOGIQ Observability HELM chart for Kubernetes
NOTE: Namespace name cannot be more than 15 characters in length
kubectl create namespace logiq
This will create a namespace logiq where we will deploy the LOGIQ Log Insights stack. If you choose a different name for the namespace, please remember to use the same namespace for the remainder of the steps.
Sample YAML files for small, medium, large cluster configurations can be downloaded at the following links.
- values.small.yaml
- values.medium.yaml
- values.large.yaml
These YAML files can be used for deployment with the -f parameter, as shown below.
helm install logiq --namespace logiq \
--set global.persistence.storageClass=<storage class name> logiq-repo/logiq -f values.small.yaml
Please refer to Section 3.10 for sizing your LOGIQ cluster as specified in these YAML files.
helm install logiq --namespace logiq \
--set global.persistence.storageClass=<storage class name> logiq-repo/logiq
This will install LOGIQ and expose the LOGIQ services and UI on the ingress IP. If you plan to use an AWS S3 bucket, please refer to Section 3.2 before running this step. Please refer to Section 3.4 for details about the storage class. Service ports are described in the Port details section. You should now be able to go to http://ingress-ip/
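To verify the deployment and find the ingress address, standard kubectl commands can be used (the logiq namespace is assumed):
# Check that the LOGIQ pods are up
kubectl get pods -n logiq
# Find the external IP/hostname of the ingress controller service (TYPE LoadBalancer)
kubectl get svc -n logiq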
The default login is [email protected] and the password is flash-password. You can change these in the UI once logged in. The HELM chart can override the default admin settings as well; see Section 3.7 on customizing the admin settings.
Logiq Insights Login UI
The LOGIQ server provides ingest, log tailing, data indexing, query, and search capabilities.
Besides the web-based UI, LOGIQ also offers logiqctl, the LOGIQ CLI, for accessing the above features.
helm install logiq --namespace logiq \
--set global.domain=logiq.my-domain.com \
--set ingress.tlsEnabled=true \
--set kubernetes-ingress.controller.defaultTLSSecret.enabled=true \
--set global.persistence.storageClass=<storage class name> logiq-repo/logiq
You should now be able to log in to the LOGIQ UI at your domain using https://logiq.my-domain.com, which you set in the ingress, after you have updated your DNS server to point to the Ingress controller service IP. The default login is [email protected] and the password is flash-password. You can change these in the UI once logged in. The logiq.my-domain.com domain also fronts all the LOGIQ service ports as described in the port details section.
HELM Option | Description | Defaults |
global.domain | DNS domain where the LOGIQ service will be running. This is required for HTTPS | No default |
ingress.tlsEnabled | Enable the ingress controller to front HTTPS for services | false |
kubernetes-ingress.controller.defaultTLSSecret.enabled | Specify if a default certificate is enabled for the ingress gateway | false |
kubernetes-ingress.controller.defaultTLSSecret.secret | Specify the name of a TLS Secret for the ingress gateway. If this is not specified, a secret is automatically generated if the option kubernetes-ingress.controller.defaultTLSSecret.enabled above is enabled. | |
If you want to pass your own ingress secret, you can do so when installing the HELM chart, as shown below.
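The secret you reference should already exist in the deployment namespace. A typical way to create it from an existing certificate and key (the file names here are placeholders) is:
kubectl create secret tls <secret_name> --cert=tls.crt --key=tls.key -n logiq
With the secret in place, pass its name to the chart: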
helm install logiq --namespace logiq \
--set global.domain=logiq.my-domain.com \
--set ingress.tlsEnabled=true \
--set kubernetes-ingress.controller.defaultTLSSecret.enabled=true \
--set kubernetes-ingress.controller.defaultTLSSecret.secret=<secret_name> \
--set global.persistence.storageClass=<storage class name> logiq-repo/logiq
Depending on your requirements, you may want to host your storage in your own K8S cluster or create a bucket in a cloud provider like AWS.
Please note that cloud providers may charge data transfer costs between regions. It is important that the LOGIQ cluster be deployed in the same region where the S3 bucket is hosted.
Go to the AWS IAM console and create an access key and secret key that can be used to create your bucket and manage access to the bucket for writing and reading your log files.
Make sure to pass your AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY and give a bucket name. The S3 gateway acts as a caching gateway and helps reduce API costs.
Create a bucket in AWS S3 with a unique bucket name in the region where you plan to host the deployment. You will need to create the S3 bucket manually along with access and secret keys to access the bucket. Check to make sure the access and secret keys work with the newly created bucket.
Once the bucket is created and access/secret is verified, provide the bucket name and access credentials in the step below.
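One way to create and verify the bucket is with the AWS CLI; the bucket name and region below are placeholders (for us-east-1, omit the LocationConstraint):
# Create the bucket in the region where LOGIQ will be deployed
aws s3api create-bucket --bucket <bucket_name> --region <region> \
  --create-bucket-configuration LocationConstraint=<region>
# Verify that the access/secret key pair can write to and read from the bucket
aws s3 cp ./test.txt s3://<bucket_name>/test.txt
aws s3 ls s3://<bucket_name>/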
Additionally, provide a valid Amazon service endpoint for S3, else the configuration will default to https://s3.us-east-1.amazonaws.com.
helm install logiq --namespace logiq --set global.domain=logiq.my-domain.com \
--set global.environment.s3_bucket=<bucket_name> \
--set global.environment.awsServiceEndpoint=https://s3.<region>.amazonaws.com \
--set global.environment.s3_region=<region> \
--set global.environment.AWS_ACCESS_KEY_ID=<access_key> \
--set global.environment.AWS_SECRET_ACCESS_KEY=<secret_key> \
--set global.persistence.storageClass=<storage class name> logiq-repo/logiq
HELM Option | Description | Defaults |
global.cloudProvider | This helm option specifies the supported cloudProvider that is hosting the S3 compatible bucket. Right now only aws is supported. | aws |
global.environment.s3_bucket | Name of the S3 bucket in AWS | logiq |
global.environment.awsServiceEndpoint | S3 service endpoint: https://s3.<region>.amazonaws.com | |
global.environment.AWS_ACCESS_KEY_ID | AWS Access key for accessing the bucket | No default |
global.environment.AWS_SECRET_ACCESS_KEY | AWS Secret key for accessing the bucket | No default |
global.environment.s3_region | AWS Region where the bucket is hosted | us-east-1 |
S3 providers may have restrictions on bucket names; e.g. AWS S3 bucket names are globally unique.
LOGIQ supports TLS for all ingest. We also enable non-TLS ports by default. It is however recommended that non-TLS ports not be used unless running in a secure VPC or cluster. The certificates can be provided to the cluster using K8S secrets. Replace the template sections below with your Base64 encoded secret files.
If you skip this step, the LOGIQ server automatically generates a CA and a pair of client and server certificates for you to use. You can get them from the ingest server pods under the folder /flash/certs.
apiVersion: v1
kind: Secret
metadata:
name: logiq-certs
type: Opaque
data:
ca.crt: {{ .Files.Get "certs/ca.crt.b64" }}
syslog.crt: {{ .Files.Get "certs/syslog.crt.b64" }}
syslog.key: {{ .Files.Get "certs/syslog.key.b64" }}
Save the secret file, e.g. as logiq-certs.yaml, and install the secret in the same namespace where you want to deploy LOGIQ.
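A possible workflow, assuming you paste the base64-encoded file contents into the manifest in place of the Helm .Files.Get expressions shown above:
# Base64-encode the certificate files (GNU coreutils; on macOS use base64 without -w 0)
base64 -w 0 ca.crt > ca.crt.b64
base64 -w 0 syslog.crt > syslog.crt.b64
base64 -w 0 syslog.key > syslog.key.b64
# Paste the .b64 contents into logiq-certs.yaml, then create the secret
kubectl apply -f logiq-certs.yaml -n logiq
The secret can now be passed into the LOGIQ deployment: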
helm install logiq --namespace logiq --set global.domain=logiq.my-domain.com \
--set logiq-flash.secrets_name=logiq-certs \
--set global.persistence.storageClass=<storage class name> logiq-repo/logiq
HELM Option | Description | Defaults |
logiq-flash.secrets_name | TLS certificate key pair and CA cert for TLS transport | No default |
If you are planning on using a specific storage class for your volumes, you can customize it for the LOGIQ deployment. By default, LOGIQ uses the standard storage class. It is quite possible that your environment uses a different storage class name for the provisioner; in that case, please use the appropriate storage class name. E.g. if a user creates a storage class ebs-volume for the EBS provisioner for their cluster, you can use ebs-volume instead of gp2 as suggested below.
Cloud Provider | K8S StorageClassName | Default Provisioner |
AWS | gp3 | EBS |
Azure | UltraSSD_LRS | Azure Ultra disk |
GCP | standard | pd-standard |
Digital Ocean | do-block-storage | Block Storage Volume |
Oracle | oci | Block Volume |
Microk8s | microk8s-hostpath | |
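To see which storage classes are available in your cluster before overriding the default, you can list them:
kubectl get storageclass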
helm upgrade --namespace logiq \
--set global.persistence.storageClass=<storage class name> \
logiq logiq-repo/logiq
- Provisioning GP3 CSI Driver on AWS EKS - https://docs.logiq.ai/deploying-logiq/logiq-paas-deployment/deploying-logiq-eks-on-aws-using-cloudformation#5.3-enable-gp3-storage-class-for-eks
To use an external AWS RDS Postgres database for your LOGIQ deployment, execute the following command.
helm install logiq --namespace logiq \
--set global.chart.postgres=false \
--set global.environment.postgres_host=<postgres-host-ip/dns> \
--set global.environment.postgres_user=<username> \
--set global.environment.postgres_password=<password> \
--set global.persistence.storageClass=<storage class name> logiq-repo/logiq
HELM Option | Description | Default |
global.chart.postgres | Deploy Postgres which is needed for LOGIQ metadata. Set this to false if an external Postgres cluster is being used | true |
global.environment.postgres_host | Host IP/DNS for external Postgres | postgres |
global.environment.postgres_user | Postgres admin user | postgres |
global.environment.postgres_password | Postgres admin user password | postgres |
global.environment.postgres_port | Host Port for external Postgres | 5432 |
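Before pointing LOGIQ at an external Postgres instance, you may want to confirm that it is reachable from inside the cluster. One way is a throwaway client pod (the image tag and credentials below are placeholders):
kubectl run psql-check -it --rm -n logiq --image=postgres:13 \
  --env="PGPASSWORD=<password>" -- \
  psql -h <postgres-host-ip/dns> -U <username> -d postgres -c '\conninfo'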
While configuring RDS, create a new parameter group that sets autovacuum to true (the value "1"), and associate this parameter group with your RDS instance.
Autovacuum automates the execution of the VACUUM and ANALYZE (to gather statistics) commands. It checks for bloated tables in the database and reclaims the space for reuse.
The deployment described above comes with a 30-day trial license. Send an e-mail to
[email protected]
to obtain a professional license. After obtaining the license, use the logiqctl tool to apply the license to the deployment. Please refer to the logiqctl details at https://logiqctl.logiq.ai/. You will need an API token from the LOGIQ UI, as shown below.
Logiq Insights Login Api-token
Set up your LOGIQ cluster endpoint
- logiqctl config set-cluster logiq.my-domain.com
Set your LOGIQ UI API token
- logiqctl config set-token api_token
Upload your LOGIQ deployment license
- logiqctl license set -f=license.jws
View License information
- logiqctl license get
helm install logiq --namespace logiq \
--set global.environment.admin_name="LOGIQ Administrator" \
--set global.environment.admin_password="admin_password" \
--set global.environment.admin_email="[email protected]" \
--set global.persistence.storageClass=<storage class name> logiq-repo/logiq
HELM Option | Description | Default |
global.environment.admin_name | LOGIQ Administrator name | |
global.environment.admin_password | LOGIQ Administrator password | flash-password |
global.environment.admin_email | LOGIQ Administrator e-mail |
To use an external Redis instance for your LOGIQ deployment, execute the following command.
NOTE: At this time LOGIQ only supports connecting to a Redis cluster in a local VPC without authentication. If you are using an AWS Elasticache instance, do not turn on encryption-in-transit or cluster mode.
helm install logiq --namespace logiq \
--set global.chart.redis=false \
--set global.environment.redis_host=<redis-host-ip/dns> \
--set global.persistence.storageClass=<storage class name> logiq-repo/logiq
HELM Option | Description | Default |
global.chart.redis | Deploy Redis which is needed for log tailing. Set this to false if an external Redis cluster is being used | true |
global.environment.redis_host | Host IP/DNS of the external Redis cluster | redis-master |
global.environment.redis_port | Host Port where external Redis service is exposed | 6379 |
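As with Postgres, you can check Redis connectivity from inside the cluster before the install; a throwaway client pod is one option (the image tag and host are placeholders):
kubectl run redis-check -it --rm -n logiq --image=redis:6 -- \
  redis-cli -h <redis-host-ip/dns> -p 6379 ping
A PONG response confirms that the Redis instance is reachable.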
When deploying LOGIQ, configure the cluster ID to monitor your own LOGIQ deployment. For details about cluster_id, refer to the section Managing multiple K8S clusters.
helm install logiq --namespace logiq \
--set global.environment.cluster_id=<cluster id> \
--set global.persistence.storageClass=<storage class name> logiq-repo/logiq
HELM Option | Description | Default |
global.environment.cluster_id | Cluster Id being used for the K8S cluster running LOGIQ. See Section on Managing multiple K8S clusters for more details. | LOGIQ |
When deploying LOGIQ, size your infrastructure to provide appropriate vCPU and memory resources. We recommend the following minimum sizes for the small, medium, and large cluster specifications from the Section 1.3 values YAML files.
LOGIQ Cluster | vCPU | Memory | NodeCount |
small | 24 | 32 GB | 3 |
medium | 40 | 64 GB | 5 |
large | 64 | 128 GB | 8 |
The service type configurations are exposed in values.yaml as below
flash-coffee:
service:
type: ClusterIP
logiq-flash:
service:
type: NodePort
kubernetes-ingress:
controller:
service:
type: LoadBalancer
For example, if you are running on bare metal and want an external LB to front LOGIQ, configure all services as NodePort:
helm install logiq -n logiq -f values.yaml \
--set flash-coffee.service.type=NodePort \
--set logiq-flash.service.type=NodePort \
--set kubernetes-ingress.controller.service.type=NodePort \
logiq-repo/logiq
The LOGIQ stack deployment can be optimized using node labels and node selectors to place various components of the stack optimally
logiq.ai/node=ingest
The node label logiq.ai/node above can be used to control the placement of ingest pods for log data onto ingest-optimized nodes. This allows for managing cost and instance sizing effectively. The various nodeSelectors are defined in the globals section of the values.yaml file:
globals:
nodeSelectors:
enabled: true
ingest: ingest
infra: common
other: common
db: db
cache: cache
ingest_sync: sync
In the example above, five different node pools are being used: ingest, common, db, cache, and sync. Node selectors are enabled by setting enabled to true under globals.nodeSelectors.
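Assuming the selector key is logiq.ai/node as in the example label above, the node pools can be labeled with standard kubectl commands (node names are placeholders):
# Place ingest pods on ingest-optimized nodes
kubectl label node <ingest-node-name> logiq.ai/node=ingest
# Label the remaining pools referenced by the nodeSelectors above
kubectl label node <infra-node-name> logiq.ai/node=common
kubectl label node <db-node-name> logiq.ai/node=db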
The LOGIQ stack includes Grafana as part of the deployment as an optional component. To enable Grafana in your cluster, follow the steps below
helm upgrade --install logiq --namespace logiq \
--set global.chart.grafana=true \
--set global.persistence.storageClass=<storage class name> logiq-repo/logiq
The Grafana instance is exposed at port 3000 on the ingress controller and uses the same credentials as the LOGIQ UI.
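To confirm that the Grafana component came up after the upgrade, you can check for its pod (the pod name is assumed to contain "grafana"):
kubectl get pods -n logiq | grep -i grafana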
LOGIQ creates an Ingress resource in the namespace it is deployed.
- Creating an OIDC provider for your EKS cluster - https://docs.aws.amazon.com/eks/latest/userguide/enable-iam-roles-for-service-accounts.htm
- Please refer to the EKS configuration on how to automatically provision an ALB here - https://docs.aws.amazon.com/eks/latest/userguide/alb-ingress.html
If and when you want to decommission the installation, use the following commands:
helm delete logiq --namespace logiq
helm repo remove logiq-repo
kubectl delete namespace logiq
If you followed the installation steps in section 3.1 - Using an AWS S3 bucket, you may want to delete the S3 bucket that was specified at deployment time.
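A possible cleanup command, assuming the AWS CLI is configured with credentials that can delete the bucket (this removes all objects and is irreversible):
aws s3 rb s3://<bucket_name> --force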