Dynamic hostPath PV Creation in Kubernetes using Local Path Provisioner


Kubernetes pods, where most of your applications will ultimately run, are ephemeral in nature. Once you delete a pod and let a new one launch, you will have lost all of the data the previous pod generated. If you do not mind losing that data, then your application can run stateless and you will be fine. But if losing that data would have you writing incident reports, then deep down you know you have to look for a way to persist the data your pods generate. There are several solutions out there that you can leverage to persist your stateful applications’ data.

PV Creation on Kubernetes using Local Path Provisioner

And in this guide, we are going to look at one of them. We will be exploring “Local Path Provisioner”.

To kick us off, ice breaking and a good acquaintance always do the trick. So let us get to know what this solution is all about, then embark on the core business.

Local Path Provisioner provides a way for Kubernetes users to utilize the local storage on each node. Based on the user configuration, it will automatically create a hostPath-based persistent volume on the node. It builds on the Kubernetes Local Persistent Volume feature, but offers a simpler solution than the built-in local volume feature in Kubernetes.

One amazing feature about “Local Path Provisioner” is that you can dynamically provision persistent local storage using hostPath via StorageClasses for your applications as we will see in the course of this guide.
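For reference, the dynamic provisioning works through a StorageClass named “local-path” that the deploy manifest creates for you. At the time of writing, the upstream definition looks roughly like the sketch below; check the manifest you download in Step 1, as fields may have changed.

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-path
provisioner: rancher.io/local-path       # the Local Path Provisioner registers under this name
volumeBindingMode: WaitForFirstConsumer  # the PV is created only once a pod using the PVC is scheduled
reclaimPolicy: Delete                    # volume data is removed when the PVC is deleted
```

PVCs simply reference this class by name (storageClassName: local-path), as we will do later in this guide.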

Advantages of Local Path Provisioner

This project has the following advantages served on your table:

  • Dynamic provisioning of volumes using hostPath: currently, the built-in Kubernetes local volume provisioner cannot dynamically provision local volumes.

Disadvantages of Local Path Provisioner

You will have to endure the following con that Local Path Provisioner has:

  • No support for the volume capacity limit currently: The capacity limit will be ignored for now.

Project Requirements

For us to move forward, we assume that the following are already met:

  • A running Kubernetes Cluster
  • Access to the cluster
  • kubectl installed if accessing k8s from local machine/laptop.

Once all of that is met, we can now proceed and install the provisioner and get to see what it is all about. We hope you enjoy it!

Step 1: Installation of Local Path Provisioner

In this setup, the directory “/opt/local-path-provisioner” will be used across all the nodes as the path for provisioning (that is, where the persistent volume data is stored). This can be edited to fit the requirements of your environment through its ConfigMap, as we shall see later. For now, note that the provisioner will be installed in the “local-path-storage” namespace by default. To get it installed, open up your terminal where you have access to your cluster and do the following:

cd ~
wget https://raw.githubusercontent.com/rancher/local-path-provisioner/master/deploy/local-path-storage.yaml

That will fetch the manifest that will deploy the provisioner. You can have a good look at it before installing it if you would like to. If everything is okay or if you have done the edits you want, you can go ahead and apply it in your cluster as follows:

kubectl apply -f local-path-storage.yaml

If the installation goes successfully, you should see something like the following:

$ kubectl -n local-path-storage get pod
NAME                                     READY   STATUS    RESTARTS   AGE
local-path-provisioner-d744ccf98-xfcbk   1/1     Running   0          7m

I personally noticed an issue where the provisioner was not able to grant permissions on the directory where it stores pod data. If you notice the same, just grant permissions on the directory (on each node):

sudo chmod 0777 /opt/local-path-provisioner -R
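As mentioned earlier, the provisioning path can be changed through the provisioner’s ConfigMap. The sketch below shows the relevant part of the “local-path-config” ConfigMap as it appears in the upstream manifest at the time of writing; your copy may differ, so edit the manifest you downloaded rather than pasting this verbatim.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: local-path-config
  namespace: local-path-storage
data:
  config.json: |-
    {
      "nodePathMap": [
        {
          "node": "DEFAULT_PATH_FOR_NON_LISTED_NODES",
          "paths": ["/opt/local-path-provisioner"]
        }
      ]
    }
```

After editing, re-apply the manifest; volumes provisioned from then on will land under the paths you specified.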

Step 2: Deploying Sample Application with PVC

In order to take the Local Path Provisioner we have just installed for a test drive, we are going to install WordPress and a MariaDB database, then confirm that the data we create remains persisted in the cluster once we delete the pods and they are re-created.

Part 1: Create persistent volume claims for the two applications.

Create a new file with the contents below.

$ vim pvcs.yaml
## PVC for MariaDB
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mariadb-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-path
  resources:
    requests:
      storage: 3Gi
---
## PVC for WordPress
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: wordpress-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-path
  resources:
    requests:
      storage: 1Gi

Once the resources are the way you would wish as far as naming and namespaces are concerned, you can create them:

$ kubectl apply -f pvcs.yaml
persistentvolumeclaim/mariadb-pvc created
persistentvolumeclaim/wordpress-pvc created

Part 2: Create MariaDB and WordPress Deployment Files.

We shall then create deployment files for MariaDB and WordPress, where we will reference the volumes we created above, plus their respective images.

MariaDB Database

Before we proceed, let us create a good password for our MariaDB, encoded in base64, as follows:

$ echo -n 'StrongPassword' | base64
U3Ryb25nUGFzc3dvcmQ=

Copy the value you get to the secret section as shown below. Make sure you use a strong password here.
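If you want to double-check the value before pasting it into the Secret, you can round-trip it in your shell. The password below is just the sample from above; substitute your own.

```shell
# Encode the password for the Secret's data field (printf avoids a trailing newline)
encoded=$(printf '%s' 'StrongPassword' | base64)
echo "$encoded"

# Decode it again to confirm nothing extra slipped in
printf '%s' "$encoded" | base64 --decode
```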

Create MariaDB manifest as follows:

$ vim mariadb.yaml
---
apiVersion: v1
kind: Secret
metadata:
  name: mariadb-secret
type: Opaque
data:
  password: U3Ryb25nUGFzc3dvcmQ= ## base64-encoded password from above
---
apiVersion: v1
kind: Service
metadata:
  name: mariadb
  labels:
    app: mariadb
spec:
  ports:
    - port: 3306
  selector:
    app: mariadb
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mariadb
spec:
  selector:
    matchLabels:
      app: mariadb
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  replicas: 1
  template:
    metadata:
      labels:
        app: mariadb
    spec:
      containers:
      - name: mariadb
        image: mariadb
        imagePullPolicy: "IfNotPresent"
        env:
        - name: MARIADB_ROOT_PASSWORD
          valueFrom:
             secretKeyRef:
              name: mariadb-secret
              key: password
        - name: MARIADB_USER
          value: "wordpress"
        - name: MARIADB_PASSWORD
          valueFrom:
             secretKeyRef:
              name: mariadb-secret
              key: password
        ports:
        - containerPort: 3306
          name: mariadb
        volumeMounts:
        - mountPath: /var/lib/mysql
          name: mariadb-data
      volumes:
      - name: mariadb-data
        persistentVolumeClaim:
          claimName: mariadb-pvc

You can go ahead and create the MariaDB resources:

$ kubectl apply -f mariadb.yaml
secret/mariadb-secret created
service/mariadb created
deployment.apps/mariadb created

WordPress

Create WordPress manifest as follows:

$ vim wordpress.yaml
apiVersion: v1
kind: Service
metadata:
  name: wordpress
  labels:
    app: wordpress
spec:
  ports:
    - port: 80
  selector:
    app: wordpress
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress
  labels:
    app: wordpress
spec:
  selector:
    matchLabels:
      app: wordpress
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: wordpress
    spec:
      containers:
      - image: wordpress
        imagePullPolicy: "IfNotPresent"
        name: wordpress
        env: 
        - name: WORDPRESS_DB_HOST
          value: "mariadb"
        - name: WORDPRESS_DB_NAME
          value: "wordpress"
        - name: WORDPRESS_DB_USER
          value: "root"
        - name: WORDPRESS_DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mariadb-secret
              key: password
        ports:
        - containerPort: 80
          name: wordpress
        volumeMounts:
        - name: wordpress-persistent-storage
          mountPath: /var/www/html
      volumes:
      - name: wordpress-persistent-storage
        persistentVolumeClaim:
          claimName: wordpress-pvc

After you are done editing the manifest files, let us first create the WordPress database in the MariaDB pod we created earlier.

$ kubectl exec -it mariadb-8579dc69cc-4ldvz -- /bin/sh
$ mysql -uroot -pStrongPassword
CREATE DATABASE wordpress;
GRANT ALL PRIVILEGES ON wordpress.* TO 'wordpress'@'%' WITH GRANT OPTION;
FLUSH PRIVILEGES;

Lastly, go ahead and create the WordPress resources:

kubectl apply -f wordpress.yaml

Let us check if our pods are faring well under the hood

$ kubectl get pods
NAME                         READY   STATUS    RESTARTS   AGE
mariadb-8579dc69cc-4ldvz     1/1     Running   0          30m
wordpress-688dffc569-f4bbb   1/1     Running   0          13m

We can see that our pods are ready.

Check the Persistent Volumes

$ kubectl get pvc
NAME            STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
mariadb-pvc     Bound    pvc-1bd50ae9-e77f-4c01-ba31-e5433a98801d   3Gi        RWO            local-path     17m
wordpress-pvc   Bound    pvc-75a397fc-5726-4b4b-9ae0-258141605f75   1Gi        RWO            local-path     17m

Check the Services

$ kubectl get svc
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP          58d
mariadb      NodePort    10.104.60.255    <none>        3306:32269/TCP   32m
wordpress    NodePort    10.109.101.187   <none>        80:31691/TCP     3h53m

It seems like we are faring well so far!

Step 3: Login to WordPress and create new data

In this step, we are going to add some data to our WordPress application so that we can later test whether the settings we add persist in the database as well as in the WordPress files. Login to your WordPress instance by pointing your browser at any of your nodes, on the NodePort that has been assigned to the WordPress service.

http://node-ip-or-node-domain-name:NodePort
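If you are not sure which node IP and NodePort to use, you can read both from the cluster. This is just a sketch: it assumes your kubectl context points at the cluster and that the first node’s InternalIP is reachable from your machine.

```shell
# Grab the InternalIP of the first node in the cluster
NODE_IP=$(kubectl get nodes -o jsonpath='{.items[0].status.addresses[?(@.type=="InternalIP")].address}')

# Grab the NodePort assigned to the wordpress Service
NODE_PORT=$(kubectl get svc wordpress -o jsonpath='{.spec.ports[0].nodePort}')

echo "http://${NODE_IP}:${NODE_PORT}"
```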

You should see the familiar WordPress Setup page below. Complete the setup details then create a new post.

Choose the language you like:

wordpress-language-1-1024x477

Enter the details being requested and click “Install WordPress”:

wordpress-questions-2-1024x542

Click on “Log In”:

wordpress-installsuccess-3

Enter your credentials and login:

wordpress-login-4

You will be ushered into the dashboard similar to the one below:

wordpress-dashboard-5-1024x478

Create a new post in the new dashboard by clicking on “Posts”, then “Add New”:

wordpress-newpost-6-1024x425

Do a post and save it:

wordpress-postdone-7-1024x464

wordpress-postconfirm-8

When the post has been created, populated and saved, we shall proceed to delete all of the pods we created so that we can verify that our data is actually being persisted.

$ kubectl delete pod wordpress-688dffc569-f4bbb
pod "wordpress-688dffc569-f4bbb" deleted

$ kubectl delete pod mariadb-8579dc69cc-4ldvz
pod "mariadb-8579dc69cc-4ldvz" deleted

Then let new ones be automatically created by Kubernetes.

$ kubectl get pods
NAME                         READY   STATUS    RESTARTS   AGE
mariadb-8579dc69cc-v85sz     1/1     Running   0          26s
wordpress-688dffc569-k7spr   1/1     Running   0          34s

Now let us login once more and confirm that our post is still there.

wordpress-postafterrestart-9

And our post is still there!!

Conclusion

It has been a journey and we hope the sceneries, the landscapes and the amazing experience fulfilled your wishes. It is time to dock and let the tides dance away in their remarkable song. Have a wonderful one as we appreciate your support and comments that make it all worthwhile.


A systems engineer with excellent skills in systems administration, cloud computing, systems deployment, virtualization, containers, and a certified ethical hacker.