Set up the JMX Exporter and send metrics to an Apica Ascent endpoint
JMX to Prometheus exporter: a collector that can configurably scrape and expose mBeans of a JMX target. This exporter is intended to be run as a Java agent, exposing an HTTP server and serving metrics of the local JVM. It can also be run as a standalone HTTP server that scrapes remote JMX targets, but this has various disadvantages, such as being harder to configure and being unable to expose process metrics (e.g., memory and CPU usage). Running the exporter as a Java agent is therefore strongly encouraged.
This tutorial explains how to run the JMX exporter as a Java agent.
- 1. Download the `jmx_prometheus_javaagent-0.17.2.jar` file from the JMX exporter releases.
- 2. Move this JAR file to the same location where your service JAR file is located.
- 3. Create a minimal `config.yaml` file in the same directory:

```yaml
rules:
- pattern: ".*"
```

- 4. Now start your JAR file with the following command, which runs the jmx_exporter JAR as a Java agent and exposes metrics on port 12345:

```
java -javaagent:./jmx_prometheus_javaagent-0.17.2.jar=12345:config.yaml -jar yourJar.jar
```
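Once the service is up, the agent serves the local JVM's metrics at `http://localhost:12345/metrics` in the Prometheus text exposition format. The sketch below filters a captured sample of that format; the sample lines are illustrative, not real output from any particular JVM, and the same filter works against a live endpoint via `curl`:

```shell
# Illustrative sample of the Prometheus text exposition format served by
# the JMX exporter agent (made-up example lines, not real JVM output).
sample='# HELP jvm_threads_current Current thread count of a JVM
# TYPE jvm_threads_current gauge
jvm_threads_current 42.0
jvm_memory_bytes_used{area="heap"} 123456.0'

# Drop the HELP/TYPE comment lines, keeping only the metric samples.
# Against a running agent you would instead do:
#   curl -s http://localhost:12345/metrics | grep -v '^#'
printf '%s\n' "$sample" | grep -v '^#'
```

Seeing JVM metric names such as `jvm_threads_current` in the output is a quick sanity check that the agent is attached and serving.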
Now that the metrics are exposed at a local endpoint, we can scrape them with Prometheus and send them to the Apica Ascent endpoint via remote_write.
Locate the Prometheus configuration file and add a job in `scrape_configs` for `java_metrics` like this:
```yaml
# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: 'prometheus'
    static_configs:
      - targets: ['localhost:9090']
  - job_name: 'java_metrics' # <------- Added configuration for scraping localhost:12345/metrics --------
    static_configs:
      - targets: ['localhost:12345']
        labels:
          group: '<GROUP NAME>'
          cluster_id: '<CLUSTER ID>'
```
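To actually ship the scraped series to Ascent, the same Prometheus configuration also needs a `remote_write` section at the top level, as a sibling of `scrape_configs`. A minimal sketch follows; the URL is a placeholder, so substitute the remote-write endpoint provided by your Apica Ascent deployment:

```yaml
# Placeholder URL: replace with the remote-write endpoint of your
# Apica Ascent deployment.
remote_write:
  - url: 'https://<APICA ASCENT ENDPOINT>/<REMOTE_WRITE PATH>'
```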
Save the Prometheus config file and restart (or start) the Prometheus systemd service with one of these commands:

```
sudo systemctl restart prometheus
```

```
sudo systemctl start prometheus
```
After a short while, you will see the metrics populate in the Query editor at your Apica Ascent endpoint.