Configure Pod Logging in Kubernetes using Sidecar container


Kubernetes, also known as K8s or Kube, is widely used to manage containerized workloads. This portable tool automates the deployment, scaling, and management of containers. Lately, the popularity of Kubernetes and its ecosystem has grown immensely, thanks to its rich set of design patterns and workload types.

One of its useful patterns is the sidecar. In Kubernetes, the smallest deployable units are called pods. In most scenarios, a pod contains a single container. However, there are situations where encapsulating multiple containers in one pod is required, mostly when two containers are tightly coupled and need to share resources.

A sidecar is a separate container that runs alongside the application container in the same pod. Normally, a sidecar offloads auxiliary functions required by the application. Containers in a pod can share storage volumes and network interfaces.

The main use cases of Sidecar containers are:

  • Keeping Application Configuration Up to Date
  • Applications Designed to Share Storage or Networks
  • Main Application and Logging Application
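
As a minimal sketch of the pattern (all names here are illustrative, not part of the setup built later in this guide), a pod can run an application container and a logging sidecar that share an emptyDir volume:

```yaml
# Illustrative only: a minimal sidecar pod sharing an emptyDir volume.
apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar      # hypothetical name
spec:
  containers:
    - name: app               # main application writes to a shared log file
      image: busybox
      command: ["sh", "-c", "while true; do date >> /var/log/app/app.log; sleep 5; done"]
      volumeMounts:
        - name: logs
          mountPath: /var/log/app
    - name: log-sidecar       # sidecar streams the shared log file to STDOUT
      image: busybox
      command: ["sh", "-c", "tail -n +1 -f /var/log/app/app.log"]
      volumeMounts:
        - name: logs
          mountPath: /var/log/app
  volumes:
    - name: logs
      emptyDir: {}
```

The example we build below follows the same shape, but backs the shared volume with a Persistent Volume so that logs survive pod restarts.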

In this guide, we will use a sidecar container to configure Pod logging in Kubernetes. In this setup, a single pod runs both the main application container and the sidecar container. The main application writes logs to a file, and the sidecar continuously reads the log file and sends its contents to STDOUT.

Below is an illustration of Sidecar Pod Logging in Kubernetes.

[Image: Configure Pod Logging in Kubernetes using Sidecar container]

Now let’s dive in!

Getting Started

This guide assumes that you already have a Kubernetes cluster set up.

Once the Kubernetes cluster is up, proceed as below.

Configure Pod Logging in Kubernetes using Sidecar container

Now you can easily configure Pod logging in Kubernetes using the steps below. In this guide, we will set up a Persistent Volume Claim for the log storage.

1. Create a StorageClass

We will begin by creating a StorageClass with volumeBindingMode set to WaitForFirstConsumer, as below:

vim storageClass.yml

Paste the below lines into the file.

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: my-local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer

Create the StorageClass using the command:

$ kubectl create -f storageClass.yml
storageclass.storage.k8s.io/my-local-storage created

2. Create a Persistent Volume

On the local machine, create a persistent volume with the storage class above.

vim logs-pv.yml

The file will have the lines below:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-local-pv
spec:
  capacity:
    storage: 2Gi
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: my-local-storage
  local:
    path: /mnt/disk/logs
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - node1

Now, on the specified node (node1), create the directory that backs the volume:

DIRNAME="logs"
sudo mkdir -p /mnt/disk/$DIRNAME
# Set the SELinux context so containers can access the directory
# (only needed on SELinux-enabled hosts).
sudo chcon -Rt svirt_sandbox_file_t /mnt/disk/$DIRNAME
sudo chmod 777 /mnt/disk/$DIRNAME

Now create the Persistent Volume:

kubectl create -f logs-pv.yml

3. Create a Persistent Volume Claim

Now we can create a Persistent Volume Claim that references the StorageClass created above.

vim logs-pvc.yml

Add the below lines to the file.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  # This name uniquely identifies the PVC. This is used in deployment.
  name: logs-pvc-claim
spec:
  # Read more about access modes here: http://kubernetes.io/docs/user-guide/persistent-volumes/#access-modes
  storageClassName: my-local-storage
  accessModes:
    # The volume is mounted as read-write by multiple nodes
    - ReadWriteMany
  resources:
    # This is the request for storage. Should be available in the cluster.
    requests:
      storage: 2Gi

Create the Persistent Volume Claim.

kubectl create -f logs-pvc.yml

Verify that the PV is available:

$ kubectl get pv
NAME          CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS       REASON   AGE
my-local-pv   2Gi        RWX            Retain           Available           my-local-storage            34s

4. Implement Kubernetes Logging using a Sidecar Container

In this guide, we will configure logging for a web server (Nginx) using a sidecar container.

Create the configuration file.

vim app.yaml

Add the below lines to it.

kind: Pod
apiVersion: v1
metadata:
  name: simple-webapp
  labels:
    app: webapp
spec:
  containers:
    - name: main-application
      image: nginx
      volumeMounts:
        - name: shared-logs
          mountPath: /var/log/nginx
    - name: sidecar-container
      image: busybox
      command: ["sh","-c","while true; do cat /var/log/nginx/access.log; sleep 30; done"]
      volumeMounts:
        - name: shared-logs
          mountPath: /var/log/nginx
  volumes:
    - name: shared-logs
      persistentVolumeClaim:
        claimName: logs-pvc-claim

---

# Service Configuration
# --------------------
apiVersion: v1
kind: Service
metadata:
  name: simple-webapp
  labels:
    run: simple-webapp
spec:
  ports:
    - name: http
      port: 80
      protocol: TCP
  selector:
    app: webapp
  type: LoadBalancer

The added Service configuration simply exposes the running Nginx application through a LoadBalancer. The sidecar above streams the Nginx access logs. You can also configure the sidecar to stream error logs by replacing the line:

      command: ["sh","-c","while true; do cat /var/log/nginx/access.log; sleep 30; done"]

with the line:

      command: ["sh","-c","while true; do cat /var/log/nginx/error.log; sleep 30; done"]
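
Note that the cat loop re-prints the entire file every 30 seconds, so each request shows up repeatedly in the sidecar output. A common alternative is to follow the logs with tail instead; this is a sketch, not part of the original setup, and it assumes the image's tail supports -F and multiple files (BusyBox builds may vary):

```yaml
      command: ["sh", "-c", "tail -n +1 -F /var/log/nginx/access.log /var/log/nginx/error.log"]
```

With tail, each line is printed once as it arrives, and both access and error logs reach kubectl logs from a single sidecar.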

Apply the configuration.

$ kubectl create -f app.yaml
pod/simple-webapp created
service/simple-webapp created

Verify that the pod is running:

$ kubectl get pods
NAME            READY   STATUS    RESTARTS   AGE
simple-webapp   2/2     Running   0          118s

This shows that both the main application and the sidecar are running. The PV should now be bound:

$ kubectl get pv
NAME          CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                    STORAGECLASS       REASON   AGE
my-local-pv   2Gi        RWX            Retain           Bound    default/logs-pvc-claim   my-local-storage            2m26s

First, we need to test that the web server is running. Obtain the port on which the service is exposed:

$ kubectl get svc
NAME            TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
kubernetes      ClusterIP      10.96.0.1      <none>        443/TCP        10m
simple-webapp   LoadBalancer   10.102.10.15   <pending>     80:30979/TCP   4m49s

5. Obtain Pod Logs in Kubernetes using a Sidecar Container

Now access the application in the browser using the exposed port. In this case, the port is 30979, so the URL will be http://IP_address:30979.

[Image: Nginx welcome page served on the exposed port]

Now get the logs using the command:

$ kubectl logs -f simple-webapp sidecar-container
192.168.205.11 - - [24/Apr/2022:13:41:46 +0000] "GET / HTTP/1.1" 200 615 "-" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/100.0.4896.127 Safari/537.36" "-"
192.168.205.11 - - [24/Apr/2022:13:41:47 +0000] "GET /favicon.ico HTTP/1.1" 404 555 "http://192.168.205.11:31943/" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/100.0.4896.127 Safari/537.36" "-"
192.168.205.11 - - [24/Apr/2022:13:41:46 +0000] "GET / HTTP/1.1" 200 615 "-" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/100.0.4896.127 Safari/537.36" "-"
192.168.205.11 - - [24/Apr/2022:13:41:47 +0000] "GET /favicon.ico HTTP/1.1" 404 555 "http://192.168.205.11:31943/" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/100.0.4896.127 Safari/537.36" "-"
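
The repeated lines above are expected: the sidecar's cat loop re-reads the whole file on every iteration, so earlier entries are printed again. A quick local simulation (plain shell, no cluster required; the file is a throwaway temp file) shows the effect:

```shell
# Simulate the sidecar's "cat" loop locally: each iteration
# re-reads the whole log file, so earlier lines appear again.
LOG=$(mktemp)
echo "line1" >> "$LOG"
OUT1=$(cat "$LOG")      # first iteration sees only line1
echo "line2" >> "$LOG"
OUT2=$(cat "$LOG")      # second iteration sees line1 again, plus line2
printf '%s\n---\n%s\n' "$OUT1" "$OUT2"
rm -f "$LOG"
```

This prints line1, then a separator, then line1 and line2 again, mirroring how each request reappears in the sidecar output every 30 seconds.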

If you configured the sidecar to stream error logs instead, the same command displays them:

$ kubectl logs -f simple-webapp sidecar-container
2022/04/24 13:40:52 [notice] 1#1: using the "epoll" event method
2022/04/24 13:40:52 [notice] 1#1: nginx/1.21.6
2022/04/24 13:40:52 [notice] 1#1: built by gcc 10.2.1 20210110 (Debian 10.2.1-6) 
2022/04/24 13:40:52 [notice] 1#1: OS: Linux 5.10.0-13-amd64
2022/04/24 13:40:52 [notice] 1#1: getrlimit(RLIMIT_NOFILE): 1024:1048576
2022/04/24 13:40:52 [notice] 1#1: start worker processes
2022/04/24 13:40:52 [notice] 1#1: start worker process 31
2022/04/24 13:41:47 [error] 31#31: *1 open() "/usr/share/nginx/html/favicon.ico" failed (2: No such file or directory), client: 192.168.205.11, server: localhost, request: "GET /favicon.ico HTTP/1.1", host: "192.168.205.11:31943", referrer: "http://192.168.205.11:31943/"
2022/04/24 13:46:57 [notice] 1#1: signal 3 (SIGQUIT) received, shutting down
2022/04/24 13:46:57 [notice] 31#31: gracefully shutting down
2022/04/24 13:46:57 [notice] 31#31: exiting
2022/04/24 13:46:57 [notice] 31#31: exit
2022/04/24 13:46:57 [notice] 1#1: signal 17 (SIGCHLD) received from 31
2022/04/24 13:46:57 [notice] 1#1: worker process 31 exited with code 0
2022/04/24 13:46:57 [notice] 1#1: exit
2022/04/24 13:47:49 [notice] 1#1: using the "epoll" event method
2022/04/24 13:47:49 [notice] 1#1: nginx/1.21.6
2022/04/24 13:47:49 [notice] 1#1: built by gcc 10.2.1 20210110 (Debian 10.2.1-6) 
2022/04/24 13:47:49 [notice] 1#1: OS: Linux 5.10.0-13-amd64

The log files should also be persisted on the node's local PV storage:

$ ls -al /mnt/disk/logs/
total 16
drwxrwxrwx 2 root root 4096 Apr 24 09:40 .
drwxr-xr-x 3 root root 4096 Apr 24 09:38 ..
-rw-r--r-- 1 root root 1245 Apr 24 09:55 access.log
-rw-r--r-- 1 root root 2944 Apr 24 09:55 error.log

That was enough learning!

Using the knowledge gathered here, you can now configure Pod logging in Kubernetes using a sidecar container. I hope this guide was helpful.


A systems engineer with excellent skills in systems administration, cloud computing, systems deployment, virtualization, containers, and a certified ethical hacker.