In our previous guides, we saw how to install Kafka on both Ubuntu 20 and CentOS 8, with a brief introduction to Kafka and what it generally does. As you continue to use Kafka, you will soon want to monitor the internals of your Kafka server. This is especially important for keeping track of your producers, consumers, and other metrics such as topics and partitions. Moreover, monitoring the host server where Kafka is installed is beneficial, so that you have an idea of its resources and can be on the lookout before things get out of hand. To satisfy that need, this guide focuses on how to monitor Kafka using familiar tools, namely Prometheus and Grafana.
Before starting this setup, a few things need to be up and running. First, we will need a Kafka cluster, which you can set up by referring to the following guides:
- Install and Configure Apache Kafka on Ubuntu
- Install and Configure Apache Kafka with CMAK on CentOS 8
The other requirement is a Prometheus setup. In case you do not have Prometheus installed, fret not, because we already have a beautiful guide that will help you get one installed fast. Kindly follow the How to install Prometheus guide to get it installed.
Secondly, we are going to need Grafana running as well. If Grafana is not installed, we gladly have other guides that will get that sorted. Follow one of the guides below to get it up and running:
- How To Install Grafana on CentOS 7
- How To Install Grafana on CentOS 8 / RHEL 8
- Install Grafana on Ubuntu | Debian
Step 1: Download Prometheus JMX Exporter
Prometheus is a powerful and popular open source time series tool and database that stores and exposes metrics and statistics. The exposed data can be used by tools such as Grafana as a data source to create beautiful and insightful graphs and charts for better visibility of your applications and servers. Apache Kafka is developed in Java, and we are therefore going to need a Java agent to scrape (extract) its metrics so that Prometheus can consume, store and expose them.
Prometheus exporters are used to extract and export data metrics to your Prometheus instance. One of those exporters is the Java Management Extensions (JMX) Exporter, which focuses on Java applications. It gives developers the ability to expose the metrics, statistics, and basic operations of a Java application in a standard way that Prometheus understands. For this reason, we will download and install the JMX exporter so that we can pull Kafka's metrics. Visit Maven's prometheus jmx-exporter repository to get the jar file. On your server, you can use wget or curl to download it as follows:
cd ~
wget https://repo1.maven.org/maven2/io/prometheus/jmx/jmx_prometheus_javaagent/0.16.1/jmx_prometheus_javaagent-0.16.1.jar
After you have the JMX exporter downloaded, we will proceed to copy it to Kafka's lib directory, where Kafka stores its jar files. In our previous guide, we copied Kafka's files into the /usr/local/kafka-server/ directory; therefore, we shall copy the jmx_prometheus_javaagent jar file to /usr/local/kafka-server/libs/. Make sure you know where your Kafka home directory is, since that is where you will find the libs directory.
sudo cp jmx_prometheus_javaagent-0.16.1.jar /usr/local/kafka-server/libs/
Step 2: Configure our Exporter
Next, we will have to configure the JMX exporter so that it knows what to extract from Kafka. To explain this briefly, the configuration is a collection of regular expressions that name and filter the metrics for Prometheus. Thankfully, Prometheus provides sample configurations in this GitHub repository. We will use the kafka-2_0_0.yml sample configuration in this setup.
Copy its contents into a file inside the config directory within Kafka's home directory:

cd /usr/local/kafka-server/config/
sudo nano sample_jmx_exporter.yml
lowercaseOutputName: true

rules:
# Special cases and very specific rules
- pattern : kafka.server<type=(.+), name=(.+), clientId=(.+), topic=(.+), partition=(.*)><>Value
  name: kafka_server_$1_$2
  type: GAUGE
  labels:
    clientId: "$3"
    topic: "$4"
    partition: "$5"
- pattern : kafka.server<type=(.+), name=(.+), clientId=(.+), brokerHost=(.+), brokerPort=(.+)><>Value
  name: kafka_server_$1_$2
  type: GAUGE
  labels:
    clientId: "$3"
    broker: "$4:$5"
- pattern : kafka.coordinator.(\w+)<type=(.+), name=(.+)><>Value
  name: kafka_coordinator_$1_$2_$3
  type: GAUGE

# Generic per-second counters with 0-2 key/value pairs
- pattern: kafka.(\w+)<type=(.+), name=(.+)PerSec\w*, (.+)=(.+), (.+)=(.+)><>Count
  name: kafka_$1_$2_$3_total
  type: COUNTER
  labels:
    "$4": "$5"
    "$6": "$7"
- pattern: kafka.(\w+)<type=(.+), name=(.+)PerSec\w*, (.+)=(.+)><>Count
  name: kafka_$1_$2_$3_total
  type: COUNTER
  labels:
    "$4": "$5"
- pattern: kafka.(\w+)<type=(.+), name=(.+)PerSec\w*><>Count
  name: kafka_$1_$2_$3_total
  type: COUNTER

- pattern: kafka.server<type=(.+), client-id=(.+)><>([a-z-]+)
  name: kafka_server_quota_$3
  type: GAUGE
  labels:
    resource: "$1"
    clientId: "$2"
- pattern: kafka.server<type=(.+), user=(.+), client-id=(.+)><>([a-z-]+)
  name: kafka_server_quota_$4
  type: GAUGE
  labels:
    resource: "$1"
    user: "$2"
    clientId: "$3"

# Generic gauges with 0-2 key/value pairs
- pattern: kafka.(\w+)<type=(.+), name=(.+), (.+)=(.+), (.+)=(.+)><>Value
  name: kafka_$1_$2_$3
  type: GAUGE
  labels:
    "$4": "$5"
    "$6": "$7"
- pattern: kafka.(\w+)<type=(.+), name=(.+), (.+)=(.+)><>Value
  name: kafka_$1_$2_$3
  type: GAUGE
  labels:
    "$4": "$5"
- pattern: kafka.(\w+)<type=(.+), name=(.+)><>Value
  name: kafka_$1_$2_$3
  type: GAUGE

# Emulate Prometheus 'Summary' metrics for the exported 'Histogram's.
#
# Note that these are missing the '_sum' metric!
- pattern: kafka.(\w+)<type=(.+), name=(.+), (.+)=(.+), (.+)=(.+)><>Count
  name: kafka_$1_$2_$3_count
  type: COUNTER
  labels:
    "$4": "$5"
    "$6": "$7"
- pattern: kafka.(\w+)<type=(.+), name=(.+), (.+)=(.*), (.+)=(.+)><>(\d+)thPercentile
  name: kafka_$1_$2_$3
  type: GAUGE
  labels:
    "$4": "$5"
    "$6": "$7"
    quantile: "0.$8"
- pattern: kafka.(\w+)<type=(.+), name=(.+), (.+)=(.+)><>Count
  name: kafka_$1_$2_$3_count
  type: COUNTER
  labels:
    "$4": "$5"
- pattern: kafka.(\w+)<type=(.+), name=(.+), (.+)=(.*)><>(\d+)thPercentile
  name: kafka_$1_$2_$3
  type: GAUGE
  labels:
    "$4": "$5"
    quantile: "0.$6"
- pattern: kafka.(\w+)<type=(.+), name=(.+)><>Count
  name: kafka_$1_$2_$3_count
  type: COUNTER
- pattern: kafka.(\w+)<type=(.+), name=(.+)><>(\d+)thPercentile
  name: kafka_$1_$2_$3
  type: GAUGE
  labels:
    quantile: "0.$4"
Save the file and on to the next step.
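If you prefer not to paste the configuration by hand, you can fetch the sample file directly from the jmx_exporter repository. Note that the raw URL below is an assumption based on the repository's current layout; adjust the branch or path if the file has moved.

```shell
cd /usr/local/kafka-server/config/
# Download the kafka-2_0_0.yml sample and save it under the name used in this guide.
sudo wget -O sample_jmx_exporter.yml \
  https://raw.githubusercontent.com/prometheus/jmx_exporter/main/example_configs/kafka-2_0_0.yml
```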
Step 3: Configure Kafka Broker to use the JMX exporter
Thus far we have everything that we need to start extracting Kafka metrics. The only thing remaining is to link the JMX exporter to our Kafka broker. Without delay, let us get that done immediately. Open the Kafka broker start-up script and add the JMX configuration just above the final exec line, as shown below. A line placed after exec would never run, because exec replaces the shell with the Java process. All of the scripts are in the bin directory within Kafka's home folder.
$ cd /usr/local/kafka-server/bin/
$ sudo vim kafka-server-start.sh

#!/bin/bash
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

if [ $# -lt 1 ]; then
    echo "USAGE: $0 [-daemon] server.properties [--override property=value]*"
    exit 1
fi
base_dir=$(dirname $0)

if [ "x$KAFKA_LOG4J_OPTS" = "x" ]; then
    export KAFKA_LOG4J_OPTS="-Dlog4j.configuration=file:$base_dir/../config/log4j.properties"
fi

if [ "x$KAFKA_HEAP_OPTS" = "x" ]; then
    export KAFKA_HEAP_OPTS="-Xmx1G -Xms1G"
fi

EXTRA_ARGS=${EXTRA_ARGS-'-name kafkaServer -loggc'}

COMMAND=$1
case $COMMAND in
  -daemon)
    EXTRA_ARGS="-daemon "$EXTRA_ARGS
    shift
    ;;
  *)
    ;;
esac

### ADD THE LINE BELOW ###
export KAFKA_OPTS='-javaagent:/usr/local/kafka-server/libs/jmx_prometheus_javaagent-0.16.1.jar=7075:/usr/local/kafka-server/config/sample_jmx_exporter.yml'

exec $base_dir/kafka-run-class.sh $EXTRA_ARGS kafka.Kafka "$@"
If you are using systemd, add the line to kafka’s systemd file under [Service] section as an Environment as shown below:
[Service]
Type=simple
Environment="JAVA_HOME=/usr/lib/jvm/java-11-openjdk-amd64"
##Add the line below
Environment="KAFKA_OPTS=-javaagent:/usr/local/kafka-server/libs/jmx_prometheus_javaagent-0.16.1.jar=7075:/usr/local/kafka-server/config/sample_jmx_exporter.yml"
ExecStart=/usr/local/kafka-server/bin/kafka-server-start.sh /usr/local/kafka-server/config/server.properties
ExecStop=/usr/local/kafka-server/bin/kafka-server-stop.sh
Restart=on-abnormal
After adding the line at the end of the kafka-server-start.sh script or in the systemd file, restart Kafka broker.
sudo systemctl restart kafka.service
Check that the service started by confirming that the configured port is listening. If you have a firewall running and your Prometheus server is on a different host, you should consider allowing access to this port.
$ sudo ss -tunelp | grep 7075
tcp   LISTEN   0   3   [::]:7075   [::]:*   users:(("java",pid=31609,fd=100)) uid:1000 ino:5391132 sk:ffff977c74f86b40 v6only:0 <->
Allow port on Firewall
### Ubuntu ###
sudo ufw allow 7075

### CentOS ###
sudo firewall-cmd --permanent --add-port=7075/tcp
sudo firewall-cmd --reload
Open your browser and point it to the IP or FQDN of your server and the port, that is http://[IP or FQDN]:7075. You should see data metrics as shown below.
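If your server has no GUI, you can perform the same check from the shell. This is a quick sketch that assumes you run it on the broker host and that the exporter listens on port 7075 as configured above:

```shell
# Fetch the exporter's metrics page and show a few of the Kafka metrics.
# The '^kafka_' filter keeps only the renamed Kafka metrics and drops the
# JVM/process metrics that the agent also exposes.
curl -s http://localhost:7075/metrics | grep '^kafka_' | head -n 5
```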
Good stuff! Our JMX exporter is working as expected. Now let us move on to adding the exposed data to Prometheus.
Step 4: Add Kafka data to Prometheus
Log into your Prometheus server and let us configure this new source as a scrape target. If you followed our guide to install Prometheus on Debian | Ubuntu or on RHEL 8 | CentOS 8, then its configuration file is in /etc/prometheus/prometheus.yml. Kindly locate the configuration file, open it and edit it as illustrated below.
$ sudo vim /etc/prometheus/prometheus.yml

# my global config
global:
  scrape_interval:     15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
  evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
  # scrape_timeout is set to the global default (10s).

# Alertmanager configuration
alerting:
  alertmanagers:
  - static_configs:
    - targets:
      # - alertmanager:9093

# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
  # - "first_rules.yml"
  # - "second_rules.yml"

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  ##### CHANGE THE JOB NAME TO KAFKA AS BELOW #######
  - job_name: 'kafka'

    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.

    ##### CHANGE THE TARGET TO THE IP AND PORT OF THE JMX SERVICE JUST INSTALLED #######
    static_configs:
    - targets: ['10.38.83.154:7075']
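Before restarting Prometheus, it is good practice to validate the edited file. The promtool utility ships with Prometheus itself; the sketch below assumes the default configuration path and a systemd unit named prometheus.

```shell
# Validate the configuration file syntax; exits non-zero on errors.
promtool check config /etc/prometheus/prometheus.yml

# Restart Prometheus to pick up the new scrape target.
sudo systemctl restart prometheus
```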
You can confirm that this target has been successfully added in your Prometheus web interface. Open it up using your browser then click on Status > Targets. If successfully added, you should see it as illustrated below.
Its state should be “UP”
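You can also confirm the target from the command line through Prometheus' HTTP API. This is a sketch that assumes Prometheus listens on localhost:9090; the metric name queried below is one the sample rules should generate from Kafka's ReplicaManager MBean, but verify it against your own /metrics output.

```shell
# List scrape targets and show their health fields.
curl -s http://localhost:9090/api/v1/targets | grep -o '"health":"[a-z]*"'

# Query a sample Kafka metric, e.g. the partition count per broker.
curl -s 'http://localhost:9090/api/v1/query?query=kafka_server_replicamanager_partitioncount'
```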
This is awesome thus far. Next, we are going to use the data Prometheus will store as Grafana’s data source so that we can view our metrics in style.
Step 5: Add Kafka metrics to Grafana
Now we are on the last and the best part. Here, we shall add Prometheus as our data source, then visualize it all with beautiful graphs and charts. Log into your Grafana web interface and proceed as follows. If you do not have Grafana installed, kindly use the guides below to get it up quickly.
- How To Install Grafana on CentOS 8 / RHEL 8
- Install Grafana on Ubuntu | Debian
- How To Install Grafana on CentOS 7
Once you are in the Grafana web interface, click on the settings gear icon then choose “Data Sources” option from the drop-down list.
This will open the Data Sources menu where you can add more. Click on the “Add data source” button.
As you may guess, we will choose Prometheus, since that is what we have already configured.
After picking the Prometheus data source, we will have to tell Grafana where to find the Prometheus server. Issue a cute name, then enter the IP and port where Prometheus is running in the URL field.
You can further add the Scrape Interval, Query Timeout and HTTP method. After that, click on the “Save and Test” button. If all goes well, the green message should appear. In case of errors, make sure your Prometheus server is running and reachable. Open its port in case it is behind a firewall.
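If you prefer automation over clicking through the UI, the same data source can be created with Grafana's HTTP API. This is a sketch that assumes the default admin credentials, Grafana on port 3000, and the Prometheus address used earlier in this guide; adjust all three to your environment.

```shell
# Create a Prometheus data source via the Grafana API.
curl -s -X POST http://admin:admin@localhost:3000/api/datasources \
  -H 'Content-Type: application/json' \
  -d '{
        "name": "Prometheus",
        "type": "prometheus",
        "url": "http://10.38.83.154:9090",
        "access": "proxy",
        "isDefault": true
      }'
```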
After we are done adding the data source, we shall go on and add a dashboard that will visualize what is in the data source. While still in Grafana, click on the + button, then select Import, because we are going to use a ready-made dashboard created by Robust Perception. Its ID is 721.
On the import page, enter the ID 721, then click on the “Load” button.
The next page will ask you for a name; then you should pick the data source we added from the drop-down at the bottom of the page. Once done, simply click on “Import”.
And you should have your metrics wonderfully displayed as shared below.
You now have your Kafka metrics well displayed on Grafana, giving you deeper visibility into your topics and more. We appreciate your continued support and we hope the guide was helpful. We thank all the creators of the tools used in this guide for making the lives of developers and administrators better.