All applications generate information while running, and this information is stored as logs. As a system administrator, you need to monitor these logs to ensure the proper functioning of the system and thereby prevent risks and errors. These logs are normally scattered across servers, and managing them becomes harder as the data volume increases.
Graylog is a free and open-source log management tool that can be used to capture, centralize and view real-time logs from several devices across a network. It can be used to analyze both structured and unstructured logs. The Graylog setup consists of MongoDB, Elasticsearch, and the Graylog server. The server receives data from the clients installed on several servers and displays it on the web interface.
Below is a diagram illustrating the Graylog architecture.
Graylog offers the following features:
- Log Collection – Graylog’s modern log-focused architecture can accept nearly any type of structured data, including log messages and network traffic from: syslog (TCP, UDP, AMQP, Kafka), AWS (AWS Logs, FlowLogs, CloudTrail), JSON Path from HTTP API, Beats/Logstash, Plain/Raw Text (TCP, UDP, AMQP, Kafka), and more.
- Log analysis – Graylog really shines when exploring data to understand what is happening in your environment. It provides enhanced search, search workflows, and dashboards.
- Extracting data – whenever a log management system is in operation, there will be summary data that needs to be passed elsewhere in your Operations Center. Graylog offers several options for this, including scheduled reports, a correlation engine, the REST API, and data forwarders.
- Enhanced security and performance – Graylog often contains sensitive, regulated data, so it is critical that the system itself is secure, accessible, and fast. This is achieved using role-based access control, archiving, fault tolerance, and more.
- Extendable – with its phenomenal open-source community, extensions are built and made available in the Graylog Marketplace to improve the functionality of Graylog.
This guide will walk you through how to run the Graylog Server in Docker containers. This method is preferred since it lets you run and configure Graylog together with its dependencies, Elasticsearch and MongoDB, in a single bundle.
Setup Prerequisites
Before we begin, you need to update the system and install the required packages.
## On Debian/Ubuntu
sudo apt update && sudo apt upgrade
sudo apt install curl vim git
## On RHEL/CentOS/RockyLinux 8
sudo yum -y update
sudo yum -y install curl vim git
## On Fedora
sudo dnf update
sudo dnf -y install curl vim git
1. Install Docker and Docker-Compose on Linux
Of course, you need the Docker Engine to run the containers. To install it, use the dedicated guide below:
Once installed, check the installed version.
$ docker -v
Docker version 20.10.13, build a224086
You also need to add your system user to the docker group. This will allow you to run docker commands without using sudo.
sudo usermod -aG docker $USER
newgrp docker
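To confirm that Docker now works without sudo, you can optionally run a quick test container (this assumes the host has internet access to pull the small hello-world image):
docker run --rm hello-world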
With Docker installed, proceed to install docker-compose using the guide below:
Verify the installation.
$ docker-compose version
Docker Compose version v2.3.3
Now start and enable docker to run automatically on system boot.
sudo systemctl start docker && sudo systemctl enable docker
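You can optionally verify that the service is running and enabled on boot:
## Should print "enabled"
sudo systemctl is-enabled docker
## Should show the service as active (running)
sudo systemctl status docker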
2. Provision the Graylog Container
The Graylog container will consist of the Graylog server, Elasticsearch, and MongoDB. To be able to achieve this, we will capture the information and settings in a YAML file.
Create the YAML file as below:
vim docker-compose.yml
In the file, add the below lines:
version: '2'
services:
  # MongoDB: https://hub.docker.com/_/mongo/
  mongodb:
    image: mongo:4.2
    networks:
      - graylog
    #DB in share for persistence
    volumes:
      - /mongo_data:/data/db
  # Elasticsearch: https://www.elastic.co/guide/en/elasticsearch/reference/7.10/docker.html
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch-oss:7.10.2
    #data folder in share for persistence
    volumes:
      - /es_data:/usr/share/elasticsearch/data
    environment:
      - http.host=0.0.0.0
      - transport.host=localhost
      - network.host=0.0.0.0
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    mem_limit: 1g
    networks:
      - graylog
  # Graylog: https://hub.docker.com/r/graylog/graylog/
  graylog:
    image: graylog/graylog:4.2
    #journal and config directories in local NFS share for persistence
    volumes:
      - /graylog_journal:/usr/share/graylog/data/journal
    environment:
      # CHANGE ME (must be at least 16 characters)!
      - GRAYLOG_PASSWORD_SECRET=somepasswordpepper
      # Password: StrongPassw0rd
      - GRAYLOG_ROOT_PASSWORD_SHA2=e1b24204830484d635d744e849441b793a6f7e1032ea1eef40747d95d30da592
      - GRAYLOG_HTTP_EXTERNAL_URI=http://192.168.205.4:9000/
    entrypoint: /usr/bin/tini -- wait-for-it elasticsearch:9200 -- /docker-entrypoint.sh
    networks:
      - graylog
    links:
      - mongodb:mongo
      - elasticsearch
    restart: always
    depends_on:
      - mongodb
      - elasticsearch
    ports:
      # Graylog web interface and REST API
      - 9000:9000
      # Syslog TCP
      - 1514:1514
      # Syslog UDP
      - 1514:1514/udp
      # GELF TCP
      - 12201:12201
      # GELF UDP
      - 12201:12201/udp
# Volumes for persisting data, see https://docs.docker.com/engine/admin/volumes/volumes/
volumes:
  mongo_data:
    driver: local
  es_data:
    driver: local
  graylog_journal:
    driver: local
networks:
  graylog:
    driver: bridge
In the file, replace:
- GRAYLOG_PASSWORD_SECRET with your own random secret, which must be at least 16 characters long
- GRAYLOG_ROOT_PASSWORD_SHA2 with the SHA-256 hash of your admin password, which you can generate with the command below (see also the example after this list):
echo -n "Enter Password: " && head -1 </dev/stdin | tr -d '\n' | sha256sum | cut -d" " -f1
- GRAYLOG_HTTP_EXTERNAL_URI with the IP address of your server.
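If you prefer a non-interactive way to generate the hash, you can pipe the password straight into sha256sum. For example, with the sample password StrongPassw0rd used later in this guide (substitute your own):
echo -n "StrongPassw0rd" | sha256sum | cut -d" " -f1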
You can also pass additional Graylog configuration options to the container as environment variables prefixed with GRAYLOG_. For example, to enable SMTP for sending alerts:
.......
  graylog:
    ......
    environment:
      - GRAYLOG_TRANSPORT_EMAIL_ENABLED=true
      - GRAYLOG_TRANSPORT_EMAIL_HOSTNAME=smtp
      - GRAYLOG_TRANSPORT_EMAIL_PORT=25
      - GRAYLOG_TRANSPORT_EMAIL_USE_AUTH=false
      - GRAYLOG_TRANSPORT_EMAIL_USE_TLS=false
      - GRAYLOG_TRANSPORT_EMAIL_USE_SSL=false
.....
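Whenever you edit docker-compose.yml, it is a good idea to validate the syntax before starting the stack. Assuming you run this from the directory containing the file:
## Prints the resolved configuration, or an error if the YAML is invalid
docker-compose config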
3. Create Persistent Volumes
To persist the data, you will use directories on the host that are mapped into the containers. We have already mapped these paths in the YAML file above. Create the three directories for MongoDB, Elasticsearch, and Graylog as below:
sudo mkdir /mongo_data
sudo mkdir /es_data
sudo mkdir /graylog_journal
Set the right permissions:
sudo chmod 777 -R /mongo_data
sudo chmod 777 -R /es_data
sudo chmod 777 -R /graylog_journal
On RHEL-based systems, you need to set SELinux to permissive mode for the paths to be accessible.
sudo setenforce 0
sudo sed -i 's/^SELINUX=.*/SELINUX=permissive/g' /etc/selinux/config
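You can confirm the current mode with getenforce. As a less drastic alternative, if you prefer to keep SELinux enforcing, relabeling just the data directories with the container file context is usually sufficient (this assumes the container-selinux policy that ships with Docker on RHEL-based systems is installed):
## Check the current SELinux mode
getenforce
## Alternative: keep SELinux enforcing and relabel only the data directories
sudo chcon -Rt container_file_t /mongo_data /es_data /graylog_journal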
4. Run the Graylog Server in Docker Containers
With the services defined in the compose file, we can now spin up the containers using the command:
docker-compose up -d
Sample Output:
[+] Running 30/30
⠿ graylog Pulled 23.2s
⠿ f7a1c6dad281 Pull complete 8.0s
⠿ ea8366d5a4a5 Pull complete 9.7s
⠿ 3c38647db2f9 Pull complete 10.2s
⠿ 8c1622fde1b3 Pull complete 12.9s
⠿ a51becc643cd Pull complete 17.6s
⠿ a363c7a2d0d7 Pull complete 18.5s
⠿ 208d9143b0ee Pull complete 19.1s
⠿ c30263374f43 Pull complete 19.4s
⠿ mongodb Pulled 17.5s
⠿ cf06a7c31611 Pull complete 2.2s
⠿ 5e8cbd051978 Pull complete 2.5s
⠿ 22d2e18323fe Pull complete 3.0s
⠿ ea17d81261d5 Pull complete 3.6s
⠿ ec6d044e0932 Pull complete 3.9s
.......
Once all the images have been pulled and containers started, check the status as below:
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
7f969caa48f5 graylog/graylog:4.2 "/usr/bin/tini -- wa…" 30 seconds ago Up 27 seconds (health: starting) 0.0.0.0:1514->1514/tcp, :::1514->1514/tcp, 0.0.0.0:9000->9000/tcp, 0.0.0.0:1514->1514/udp, :::9000->9000/tcp, :::1514->1514/udp, 0.0.0.0:12201->12201/tcp, 0.0.0.0:12201->12201/udp, :::12201->12201/tcp, :::12201->12201/udp thor-graylog-1
1a21d2de4439 docker.elastic.co/elasticsearch/elasticsearch-oss:7.10.2 "/tini -- /usr/local…" 31 seconds ago Up 28 seconds 9200/tcp, 9300/tcp thor-elasticsearch-1
1b187f47d77e mongo:4.2 "docker-entrypoint.s…" 31 seconds ago Up 28 seconds 27017/tcp thor-mongodb-1
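Graylog can take a minute or two to fully start. Optionally, you can follow the server logs from the directory containing docker-compose.yml and wait until Graylog reports that it is up and running:
docker-compose logs -f graylog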
If you have a firewall enabled, allow the Graylog service port through it.
##For Firewalld
sudo firewall-cmd --zone=public --add-port=9000/tcp --permanent
sudo firewall-cmd --reload
##For UFW
sudo ufw allow 9000/tcp
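The rule above only opens the web interface. If you will be sending logs from other hosts, as in the test later in this guide, you may also need to open the input ports defined in the compose file (1514 and 12201, TCP and UDP):
##For Firewalld
sudo firewall-cmd --zone=public --add-port=1514/tcp --add-port=1514/udp --permanent
sudo firewall-cmd --zone=public --add-port=12201/tcp --add-port=12201/udp --permanent
sudo firewall-cmd --reload
##For UFW
sudo ufw allow 1514
sudo ufw allow 12201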
5. Access the Graylog Web UI
Now open the Graylog web interface using the URL http://IP_address:9000.
Log in with the username admin and the password (StrongPassw0rd in this guide) whose SHA-256 hash you set in the YAML file.
On the dashboard, let’s create the first input to receive logs by navigating to the System tab and selecting Inputs.
Now search for Raw/Plaintext TCP and click Launch new input.
Once launched, a pop-up window will appear as below. You only need to set a name for the input, the port (1514), and select the node (or “Global”) as the location for the input. Leave the other details as they are.
Save the input and try sending a plain-text message to the Graylog Raw/Plaintext TCP input on port 1514.
echo 'First log message' | nc localhost 1514
##OR from another server##
echo 'First log message' | nc 192.168.205.4 1514
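Optionally, to have more data to work with in the dashboard steps below, you can send a handful of test messages in a loop (adjust the IP address to match your server):
for i in $(seq 1 5); do echo "Test log message $i" | nc -w1 192.168.205.4 1514; done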
On the running Raw/Plaintext input, click Show received messages.
The received message should be displayed as below.
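Besides the web interface, the messages are also reachable through the Graylog REST API mentioned earlier. A minimal sketch using the admin credentials and server IP from this guide (the legacy relative-search endpoint shown here may vary between Graylog versions):
curl -s -u admin:StrongPassw0rd \
  -H 'Accept: application/json' \
  'http://192.168.205.4:9000/api/search/universal/relative?query=%2A&range=300&limit=5'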
You can as well export this to a dashboard as below.
Create the dashboard by providing the required information.
You will have the dashboard appear under the dashboards tab.
Conclusion
That is it!
We have successfully walked through how to run the Graylog Server in Docker containers. You can now monitor and access logs from several servers with ease. I hope this guide was helpful.