With the rise of increasingly sophisticated cyber threats, real-time monitoring and analysis of systems is essential to detect threats early and respond accordingly.
Wazuh is a free and open-source security monitoring solution used for threat detection, system integrity monitoring, and incident response. It provides lightweight, OS-level security through multi-platform agents. With Wazuh, you can collect, aggregate, index, and analyze security data to detect intrusions and anomalies across your systems.
The Wazuh server can be used for:
- Cloud security
- Container security
- Log analysis
- Vulnerability detection
- Security analysis
This guide demonstrates how to run the Wazuh server in Docker containers. There are two deployment options for Wazuh:
- All-in-one deployment: Both Wazuh and Open Distro for Elasticsearch are installed on a single host.
- Distributed deployment: The components are installed on separate hosts as a single-node or multi-node cluster. This method is preferred for large environments since it provides high availability and scalability.
During the Wazuh installation, one can choose between two options:
- Unattended installation – Wazuh is installed using an automated script that performs health checks to verify that the available system resources meet the minimum requirements.
- Step-by-step installation – Wazuh is installed manually, with a detailed description of each step.
Docker is an open-source engine that automates the deployment of applications inside software containers. In this guide, we will install the Wazuh all-in-one deployment in Docker containers. The Docker deployment includes:
- Wazuh Manager
- Filebeat
- Elasticsearch
- Kibana
- Nginx and Open Distro for Elasticsearch
Let’s dive in!
Getting Started.
Prepare your system for installation by updating the available packages and installing required packages.
## On Debian/Ubuntu
sudo apt update && sudo apt upgrade
sudo apt install curl vim git
## On RHEL/CentOS/RockyLinux 8
sudo yum -y update
sudo yum -y install curl vim git
## On Fedora
sudo dnf update
sudo dnf -y install curl vim git
Step 1 – Docker Installation on Linux
The first step is to install Docker and Docker Compose if you do not already have them. Docker can be installed on any Linux system by following the official installation guide for your distribution.
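If Docker is not yet installed, the Docker convenience script is a quick option for a lab or test system (review the script before running it; for production hosts, prefer your distribution's official Docker packages):
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh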
Once installed, start and enable docker.
sudo systemctl start docker && sudo systemctl enable docker
Also, add your system user to the docker group.
sudo usermod -aG docker $USER
newgrp docker
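To confirm that Docker works without sudo after the group change, run a quick test:
docker --version
docker run --rm hello-world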
With docker installed, install docker-compose using the below commands:
curl -s https://api.github.com/repos/docker/compose/releases/latest | grep browser_download_url | grep docker-compose-linux-x86_64 | cut -d '"' -f 4 | wget -qi -
chmod +x docker-compose-linux-x86_64
sudo mv docker-compose-linux-x86_64 /usr/local/bin/docker-compose
Verify the installation.
$ docker-compose version
Docker Compose version v2.3.0
Step 2 – Provision the Wazuh Server
Before we proceed, you need to make the following settings:
- Increase max_map_count on your host
sudo sysctl -w vm.max_map_count=262144
If this is not set, Elasticsearch may fail to start.
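Note that sysctl -w only applies until the next reboot. To make the setting persistent, write it to a sysctl configuration file (the file name below is an arbitrary choice):
echo 'vm.max_map_count=262144' | sudo tee /etc/sysctl.d/99-wazuh.conf
sudo sysctl --system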
- Configure SELinux on RHEL-based systems
On RHEL-based systems, SELinux may block the containers from accessing the project files. Either set SELinux to permissive mode or relabel the cloned project directory (run this after cloning the repository in the next step):
sudo chcon -R system_u:object_r:admin_home_t:s0 wazuh-docker/
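Alternatively, SELinux can be switched to permissive mode; setenforce takes effect immediately but lasts only until reboot, while editing /etc/selinux/config makes the change persistent:
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config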
All the required Wazuh and Open Distro for Elasticsearch components are defined in the wazuh-docker repository, which can be cloned as below:
$ cd ~
$ git clone https://github.com/wazuh/wazuh-docker.git -b v4.2.5 --depth=1
Now navigate into the directory.
cd wazuh-docker
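A quick listing should show the files used throughout this guide, including docker-compose.yml, production-cluster.yml, generate-opendistro-certs.yml, and the production_cluster directory:
ls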
Step 3 – Run the Wazuh Container
In the directory, there is a docker-compose.yml used for the demo deployment. Run the containers in the background as below.
docker-compose up -d
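Optionally, follow the startup logs while the stack initializes (press Ctrl+C to stop following; the containers keep running in the background):
docker-compose logs -f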
Check if the containers are running:
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d64698a06cc4 wazuh/wazuh-kibana-odfe:4.2.5 "/bin/sh -c ./entryp…" 38 seconds ago Up 36 seconds 0.0.0.0:443->5601/tcp, :::443->5601/tcp wazuh-docker-kibana-1
2bb0d8088b0f amazon/opendistro-for-elasticsearch:1.13.2 "/usr/local/bin/dock…" 48 seconds ago Up 37 seconds 9300/tcp, 9600/tcp, 0.0.0.0:9200->9200/tcp, :::9200->9200/tcp, 9650/tcp wazuh-docker-elasticsearch-1
7eed74a2a2ae wazuh/wazuh-odfe:4.2.5 "/init" 48 seconds ago Up 36 seconds 0.0.0.0:1514-1515->1514-1515/tcp, :::1514-1515->1514-1515/tcp, 0.0.0.0:514->514/udp, :::514->514/udp, 0.0.0.0:55000->55000/tcp, :::55000->55000/tcp, 1516/tcp wazuh-docker-wazuh-1
At this point, the Wazuh interface can be accessed on port 443. This setup is intended for demo deployments; for a production deployment, several additional configurations are required.
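Before opening a browser, you can confirm that something is answering on port 443; the -k flag skips certificate verification since the demo deployment uses self-signed certificates:
curl -k -I https://localhost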
Production deployment
The production deployment is based on the production-cluster.yml file. Before running the containers, a few configurations are needed.
Data Persistence.
Create persistent volumes for the containers:
sudo mkdir /wazuh_logs
cd /wazuh_logs
sudo mkdir ossec-api-configuration ossec-etc ossec-logs ossec-queue \
  ossec-var-multigroups ossec-integrations ossec-active-response \
  ossec-agentless ossec-wodles filebeat-etc filebeat-var
sudo mkdir worker-ossec-api-configuration worker-ossec-etc worker-ossec-logs \
  worker-ossec-queue worker-ossec-var-multigroups worker-ossec-integrations \
  worker-ossec-active-response worker-ossec-agentless worker-ossec-wodles \
  worker-filebeat-etc worker-filebeat-var
sudo mkdir elastic-data-1 elastic-data-2 elastic-data-3
To persist data on the local machine, edit the volumes in production-cluster.yml to match the paths created above.
cd ~/wazuh-docker/
sudo vim production-cluster.yml
For example, for the Wazuh master container, set the paths as below:
volumes:
...
- /wazuh_logs/ossec-api-configuration:/var/ossec/api/configuration
- /wazuh_logs/ossec-etc:/var/ossec/etc
- /wazuh_logs/ossec-logs:/var/ossec/logs
- /wazuh_logs/ossec-queue:/var/ossec/queue
- /wazuh_logs/ossec-var-multigroups:/var/ossec/var/multigroups
- /wazuh_logs/ossec-integrations:/var/ossec/integrations
- /wazuh_logs/ossec-active-response:/var/ossec/active-response/bin
- /wazuh_logs/ossec-agentless:/var/ossec/agentless
- /wazuh_logs/ossec-wodles:/var/ossec/wodles
- /wazuh_logs/filebeat-etc:/etc/filebeat
- /wazuh_logs/filebeat-var:/var/lib/filebeat
....
Do this for the other containers by substituting the corresponding directories, as sketched below.
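For instance, the worker container's volumes point at the worker-* directories, and each Elasticsearch node's data directory maps to one of the elastic-data directories. A partial sketch, assuming the container-side paths already defined in production-cluster.yml:
wazuh-worker:
  ...
  volumes:
    - /wazuh_logs/worker-ossec-etc:/var/ossec/etc
    - /wazuh_logs/worker-ossec-logs:/var/ossec/logs
    ...
elasticsearch:
  ...
  volumes:
    - /wazuh_logs/elastic-data-1:/usr/share/elasticsearch/data
    ...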
Secure Traffic.
The bundled demo certificates need to be replaced for each node in the cluster. Use the command below to generate new certificates using generate-opendistro-certs.yml:
docker-compose -f generate-opendistro-certs.yml run --rm generator
Sample output:
[+] Running 15/15
⠿ generator Pulled 16.8s
⠿ d6ff36c9ec48 Pull complete 4.7s
⠿ c958d65b3090 Pull complete 5.2s
⠿ edaf0a6b092f Pull complete 5.6s
⠿ 80931cf68816 Pull complete 8.3s
⠿ bf04b6bbed0c Pull complete 9.3s
⠿ 8bf847804f9e Pull complete 9.5s
⠿ 6bf89641a7f2 Pull complete 13.2s
⠿ 040f240573da Pull complete 13.4s
⠿ ac14183eb55b Pull complete 13.8s
⠿ debf0fc68082 Pull complete 14.1s
⠿ 62fb2ae4a19e Pull complete 14.3s
⠿ d3aeb8473c73 Pull complete 14.4s
⠿ 939b8ae6540a Pull complete 14.6s
⠿ f8b27a6da615 Pull complete 14.8s
Root certificate and signing certificate have been sucessfully created.
Created 4 node certificates.
Created 1 client certificates.
Success! Exiting.
At this point, you will have the certificates saved at production_cluster/ssl_certs.
$ ls -al production_cluster/ssl_certs
total 88
drwxr-xr-x 2 thor thor 4096 Mar 5 04:26 .
drwxr-xr-x 7 thor thor 4096 Mar 5 02:56 ..
-rw-r--r-- 1 root root 1704 Mar 5 04:26 admin.key
-rw-r--r-- 1 root root 3022 Mar 5 04:26 admin.pem
-rw-r--r-- 1 thor thor 888 Mar 5 04:26 certs.yml
-rw-r--r-- 1 root root 294 Mar 5 04:26 client-certificates.readme
-rw-r--r-- 1 root root 1158 Mar 5 04:26 filebeat_elasticsearch_config_snippet.yml
-rw-r--r-- 1 root root 1704 Mar 5 04:26 filebeat.key
-rw-r--r-- 1 root root 3067 Mar 5 04:26 filebeat.pem
-rw-r--r-- 1 root root 1801 Mar 5 04:26 intermediate-ca.key
-rw-r--r-- 1 root root 1497 Mar 5 04:26 intermediate-ca.pem
-rw-r--r-- 1 root root 1149 Mar 5 04:26 node1_elasticsearch_config_snippet.yml
-rw-r--r-- 1 root root 1704 Mar 5 04:26 node1.key
-rw-r--r-- 1 root root 3075 Mar 5 04:26 node1.pem
-rw-r--r-- 1 root root 1149 Mar 5 04:26 node2_elasticsearch_config_snippet.yml
-rw-r--r-- 1 root root 1704 Mar 5 04:26 node2.key
-rw-r--r-- 1 root root 3075 Mar 5 04:26 node2.pem
-rw-r--r-- 1 root root 1149 Mar 5 04:26 node3_elasticsearch_config_snippet.yml
-rw-r--r-- 1 root root 1704 Mar 5 04:26 node3.key
-rw-r--r-- 1 root root 3075 Mar 5 04:26 node3.pem
-rw-r--r-- 1 root root 1700 Mar 5 04:26 root-ca.key
-rw-r--r-- 1 root root 1330 Mar 5 04:26 root-ca.pem
Now in the production-cluster.yml file, set up the SSL certs for:
- Wazuh container
For the Wazuh-master container, set the SSL certificates as below.
......
environment:
.....
- FILEBEAT_SSL_VERIFICATION_MODE=full
- SSL_CERTIFICATE_AUTHORITIES=/etc/ssl/root-ca.pem
- SSL_CERTIFICATE=/etc/ssl/filebeat.pem
- SSL_KEY=/etc/ssl/filebeat.key
volumes:
- ./production_cluster/ssl_certs/root-ca.pem:/etc/ssl/root-ca.pem
- ./production_cluster/ssl_certs/filebeat.pem:/etc/ssl/filebeat.pem
- ./production_cluster/ssl_certs/filebeat.key:/etc/ssl/filebeat.key
......
- Elasticsearch Container
The Elasticsearch cluster has three nodes here; configure each of them as below:
elasticsearch:
....
volumes:
...
- ./production_cluster/ssl_certs/root-ca.pem:/usr/share/elasticsearch/config/root-ca.pem
- ./production_cluster/ssl_certs/node1.key:/usr/share/elasticsearch/config/node1.key
- ./production_cluster/ssl_certs/node1.pem:/usr/share/elasticsearch/config/node1.pem
- ./production_cluster/ssl_certs/admin.pem:/usr/share/elasticsearch/config/admin.pem
- ./production_cluster/ssl_certs/admin.key:/usr/share/elasticsearch/config/admin.key
- ./production_cluster/elastic_opendistro/elasticsearch-node1.yml:/usr/share/elasticsearch/config/elasticsearch.yml
- ./production_cluster/elastic_opendistro/internal_users.yml:/usr/share/elasticsearch/plugins/opendistro_security/securityconfig/internal_users.yml
For elasticsearch-2, the configuration is similar to the above:
elasticsearch-2:
...
volumes:
- ./production_cluster/ssl_certs/root-ca.pem:/usr/share/elasticsearch/config/root-ca.pem
- ./production_cluster/ssl_certs/node2.key:/usr/share/elasticsearch/config/node2.key
- ./production_cluster/ssl_certs/node2.pem:/usr/share/elasticsearch/config/node2.pem
- ./production_cluster/elastic_opendistro/elasticsearch-node2.yml:/usr/share/elasticsearch/config/elasticsearch.yml
- ./production_cluster/elastic_opendistro/internal_users.yml:/usr/share/elasticsearch/plugins/opendistro_security/securityconfig/internal_users.yml
For elasticsearch-3:
elasticsearch-3:
...
volumes:
- ./production_cluster/ssl_certs/root-ca.pem:/usr/share/elasticsearch/config/root-ca.pem
- ./production_cluster/ssl_certs/node3.key:/usr/share/elasticsearch/config/node3.key
- ./production_cluster/ssl_certs/node3.pem:/usr/share/elasticsearch/config/node3.pem
- ./production_cluster/elastic_opendistro/elasticsearch-node3.yml:/usr/share/elasticsearch/config/elasticsearch.yml
- ./production_cluster/elastic_opendistro/internal_users.yml:/usr/share/elasticsearch/plugins/opendistro_security/securityconfig/internal_users.yml
- Kibana Container
Generate self-signed certificates for Kibana using the command:
bash ./production_cluster/kibana_ssl/generate-self-signed-cert.sh
Sample Output:
Generating a RSA private key
...............................................+++++
.........................................................................................................................................+++++
writing new private key to 'key.pem'
-----
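The script writes cert.pem and key.pem into ./production_cluster/kibana_ssl/; you can confirm they were created:
ls ./production_cluster/kibana_ssl/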
Now enable SSL for Kibana in production-cluster.yml and provide the paths to these certificates:
environment:
- SERVER_SSL_ENABLED=true
- SERVER_SSL_CERTIFICATE=/usr/share/kibana/config/cert.pem
- SERVER_SSL_KEY=/usr/share/kibana/config/key.pem
...
volumes:
- ./production_cluster/kibana_ssl/cert.pem:/usr/share/kibana/config/cert.pem
- ./production_cluster/kibana_ssl/key.pem:/usr/share/kibana/config/key.pem
- Nginx Container
The Nginx load balancer also requires certificates at ./production_cluster/nginx/ssl/. You can generate self-signed certificates using the command:
bash ./production_cluster/nginx/ssl/generate-self-signed-cert.sh
Add the certificates path for the container:
nginx:
....
volumes:
- ./production_cluster/nginx/nginx.conf:/etc/nginx/nginx.conf:ro
- ./production_cluster/nginx/ssl:/etc/nginx/ssl:ro
The ./production_cluster/nginx/nginx.conf file holds the Nginx configuration used by the load balancer container.
Now you should have the production-cluster.yml configured with the SSL certificates as above.
Stop and remove the demo containers started earlier, then bring up the production deployment.
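If the demo stack from Step 3 is still up, bring it down from the same directory first (this assumes it was started with the default docker-compose.yml):
docker-compose down
Then start the production cluster: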
docker-compose -f production-cluster.yml up -d
Check if the containers are running:
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
42d2b8882740 nginx:stable "/docker-entrypoint.…" 2 minutes ago Up About a minute 0.0.0.0:80->80/tcp, :::80->80/tcp, 0.0.0.0:443->443/tcp, :::443->443/tcp, 0.0.0.0:1514->1514/tcp, :::1514->1514/tcp wazuh-docker-nginx-1
9395abddd27c wazuh/wazuh-kibana-odfe:4.2.5 "/bin/sh -c ./entryp…" 2 minutes ago Up 2 minutes 0.0.0.0:5601->5601/tcp, :::5601->5601/tcp wazuh-docker-kibana-1
53aaa86606b6 amazon/opendistro-for-elasticsearch:1.13.2 "/usr/local/bin/dock…" 2 minutes ago Up 2 minutes 9300/tcp, 9600/tcp, 0.0.0.0:9200->9200/tcp, :::9200->9200/tcp, 9650/tcp wazuh-docker-elasticsearch-1
771a5d5d6aaf wazuh/wazuh-odfe:4.2.5 "/init" 2 minutes ago Up 2 minutes 1514-1516/tcp, 514/udp, 55000/tcp wazuh-docker-wazuh-worker-1
327e32da3e61 wazuh/wazuh-odfe:4.2.5 "/init" 2 minutes ago Up About a minute 1514/tcp, 0.0.0.0:1515->1515/tcp, :::1515->1515/tcp, 0.0.0.0:514->514/udp, :::514->514/udp, 1516/tcp, 0.0.0.0:55000->55000/tcp, :::55000->55000/tcp wazuh-docker-wazuh-master-1
67da0a98a5a6 amazon/opendistro-for-elasticsearch:1.13.2 "/usr/local/bin/dock…" 2 minutes ago Up 2 minutes 9200/tcp, 9300/tcp, 9600/tcp, 9650/tcp wazuh-docker-elasticsearch-3-1
8874fa896370 amazon/opendistro-for-elasticsearch:1.13.2 "/usr/local/bin/dock…" 2 minutes ago Up 2 minutes 9200/tcp, 9300/tcp, 9600/tcp, 9650/tcp wazuh-docker-elasticsearch-2-1
All seven containers are now running, with the web interface exposed through the Nginx container.
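Optionally, query Elasticsearch directly to confirm that all three nodes have joined the cluster, using the Elasticsearch credentials defined in the compose files (admin / SecretPassword in this guide; adjust if you changed internal_users.yml):
curl -k -u admin:SecretPassword 'https://localhost:9200/_cluster/health?pretty'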
Step 4 – Access the Wazuh Kibana Interface
The Kibana interface can be accessed on port 443 exposed by Nginx. If you have a firewall enabled, allow this port through it.
##For Firewalld
sudo firewall-cmd --add-port=443/tcp --permanent
sudo firewall-cmd --reload
##For UFW
sudo ufw allow 443/tcp
Now access the Kibana web interface using the URL https://IP_address or https://domain_name.
Log in using the Elasticsearch credentials set in the compose file:
ELASTICSEARCH_USERNAME=admin
ELASTICSEARCH_PASSWORD=SecretPassword
Wazuh will initialize, and the Wazuh dashboard with its modules will be displayed. You can now create and view dashboards in Kibana.
That is it!
You now have the Wazuh server set up for real-time monitoring and analysis, helping you detect threats early and respond in time. I hope this guide was helpful.