Deploy and Use OpenEBS Container Storage on Kubernetes


Managing storage and volumes on a Kubernetes cluster can be challenging for many engineers. Setting up persistent volumes and provisioning them dynamically is made much easier by the tool we are going to explore today: OpenEBS.

OpenEBS is a cloud-native storage project originally created by MayaData that runs on a Kubernetes cluster and allows stateful applications to consume dynamic Local PVs and/or replicated PVs. OpenEBS runs on any Kubernetes platform, including managed offerings such as GKE and AKS, and can use cloud object storage such as AWS S3 for backups.

OpenEBS adopts the Container Attached Storage (CAS) architecture in such a way that the volumes provisioned through OpenEBS are containerized.

OpenEBS utilizes disks attached to the worker nodes, external mount-points and local host paths on the nodes.

Features of OpenEBS

1. Containerized storage

OpenEBS volumes are always containerized, as it uses the Container Attached Storage architecture.

2. Synchronous replication

OpenEBS can synchronously replicate data volumes when used with cStor, Jiva or Mayastor, providing high availability for stateful applications.

3. Snapshots

Snapshots are created instantaneously when using cStor. This makes it easy to migrate data within the Kubernetes cluster.

4. Backup and restore

The backup and restore feature works with Kubernetes solutions such as Velero. You can back up data to object storage such as AWS S3 or GCP.

5. Prometheus Metrics

OpenEBS volumes are configured to generate granular metrics such as throughput, latency and IOPS. These can easily be shipped via a Prometheus exporter and displayed on a Grafana dashboard to monitor cluster health, disk failures and utilization.


OpenEBS Architecture

OpenEBS uses the CAS model. This means that each volume has a dedicated controller pod and a set of replica pods.

OpenEBS has the following components:

The data plane is responsible for the actual IO path of the persistent volume. You can choose between the three storage engines discussed below, depending on your workloads and preferences.

  1. cStor – This is the preferred storage engine for OpenEBS, as it offers enterprise-grade features such as snapshots, clones, thin provisioning, data consistency and capacity scalability. This in turn allows Kubernetes stateful deployments to run with high availability. cStor is designed to use three replicas, with data written synchronously to each, so pods retain their data when terminated and rescheduled.
  2. Jiva – Jiva runs exclusively in the user space with block storage capabilities such as synchronous replication. This option is ideal in situations where you have applications running on nodes that might not be able to add more block storage devices. This however is not ideal for mission-critical applications that require high performance storage capabilities.
  3. LocalPV – This is the simplest storage engine of the three. A Local Persistent Volume is a directly-attached volume to a Kubernetes node. OpenEBS can make use of a locally attached disk or a path (mount-point) to provision persistent volumes to the k8s cluster. This is ideal in situations where you are running applications that do not require advanced storage capabilities such as snapshots and clones.

The table below highlights the features available on each storage engine discussed above.

Feature                     cStor   Jiva   LocalPV
Synchronous replication     Yes     Yes    No
Snapshots and clones        Yes     No     No
Thin provisioning           Yes     No     No

The control plane is responsible for volume operations such as provisioning volumes, making clones, exporting volume metrics and enforcing volume policies.


  • Node Disk Manager (NDM) – Used for discovery, monitoring and management of the media/disks attached to Kubernetes nodes.

Node Disk Manager is the tool used to manage persistent storage in Kubernetes for stateful applications. It brings flexibility to the management of the storage stack by unifying disks, creating pools and identifying them as Kubernetes objects.

NDM discovers, provisions, manages and monitors the underlying disks for PV provisioners such as OpenEBS, and exposes disk metrics that can be scraped by Prometheus.

How to Setup OpenEBS on Kubernetes Cluster

This article will discuss how to set up OpenEBS on a Kubernetes cluster. By the end of this article, we shall have covered the following:

  1. Setup OpenEBS on Kubernetes
  2. Provision Persistent Volumes on Kubernetes using OpenEBS
  3. Provision Storage classes (SC) and Persistent Volume Claims (PVC) on Kubernetes.

Installing OpenEBS on Kubernetes

Before we can start the installation, we have to make sure that the iSCSI client is installed and running on all the nodes. This is required for Jiva and cStor setups.

Verify that the iscsid service is running on your nodes; otherwise, install it as shown below.


On Debian/Ubuntu:

sudo apt-get update
sudo apt-get install open-iscsi
sudo systemctl enable --now iscsid
systemctl status iscsid


On RHEL/CentOS:

sudo yum install -y iscsi-initiator-utils
sudo systemctl enable --now iscsid
systemctl status iscsid

Method 1: Install OpenEBS on Kubernetes using Helm

We can deploy OpenEBS through Helm charts. First, check the version of Helm installed on your system.

$ helm version
version.BuildInfo{Version:"v3.6.1", GitCommit:"61d8e8c4a6f95540c15c6a65f36a6dd0a45e7a2f", GitTreeState:"clean", GoVersion:"go1.16.5"}

For Helm 2, install the OpenEBS chart using the commands below:

helm repo add openebs https://openebs.github.io/charts
helm repo update
helm install --namespace openebs --name openebs openebs/openebs

For Helm 3, we need to create the openebs namespace before we can deploy the chart:

$ kubectl create ns openebs
namespace/openebs created

Deploy OpenEBS from the Helm chart:

helm repo add openebs https://openebs.github.io/charts
helm repo update
helm install --namespace openebs openebs openebs/openebs

Method 2: Install OpenEBS on Kubernetes through Kubectl

We can also use kubectl to install OpenEBS.

Create the openebs namespace:

kubectl create ns openebs

Install OpenEBS by applying the operator manifest:

kubectl apply -f https://openebs.github.io/charts/openebs-operator.yaml

Verify Pods

After a successful installation, verify that the pods are up:

kubectl get pods -n openebs



Verify Storage Classes (SC)

Ensure that the default storage classes (SC) have been created.

$ kubectl get sc

You will see storage classes created.

root@bazenga:~# kubectl get sc
NAME                        PROVISIONER                                                RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
openebs-device              openebs.io/local                                           Delete          WaitForFirstConsumer   false                  11m
openebs-hostpath            openebs.io/local                                           Delete          WaitForFirstConsumer   false                  11m
openebs-jiva-default        openebs.io/provisioner-iscsi                               Delete          Immediate              false                  11m
openebs-snapshot-promoter   volumesnapshot.external-storage.k8s.io/snapshot-promoter   Delete          Immediate              false                  11m

Verify Block Device CRs

The OpenEBS NDM DaemonSet identifies the available block devices on the nodes and creates a custom resource (CR) for each. All disks available on the nodes will be identified unless you specified an exclusion in the vendor-filter or path-filter of the NDM ConfigMap.
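For illustration, a sketch of such exclusions in the openebs-ndm-config ConfigMap is shown below. The filterconfigs layout follows the defaults shipped with the OpenEBS operator; the exclude lists here are examples and should be adjusted for your environment:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: openebs-ndm-config
  namespace: openebs
data:
  node-disk-manager.config: |
    filterconfigs:
      # Skip devices from specific vendors
      - key: vendor-filter
        name: vendor filter
        state: true
        include: ""
        exclude: "CLOUDBYT,OpenEBS"
      # Skip device paths that should never be pooled
      - key: path-filter
        name: path filter
        state: true
        include: ""
        exclude: "loop,/dev/fd0,/dev/sr0,/dev/ram,/dev/dm-,/dev/md"
```

Any device matching an exclude entry will not get a BlockDevice CR created for it.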

$ kubectl get blockdevice -n openebs


root@bazenga:~# kubectl get blockdevice 
NAME                                           NODENAME   SIZE          CLAIMSTATE   STATUS   AGE
blockdevice-4f749fd37d6bfc5613c9272b2eb75cc0   node02     10736352768   Unclaimed    Active   15m
blockdevice-59c0818b5f8b2e56028959d921221af2   node03     10736352768   Unclaimed    Active   15m
blockdevice-79b8a6c83ee34a7e4b55e8d23f14323d   node03     21473771008   Unclaimed    Active   15m

To verify which node a device CR belongs to, run the describe command.

$ kubectl describe blockdevice <blockdevice-name> -n openebs


$ kubectl describe blockdevice blockdevice-4f749fd37d6bfc5613c9272b2eb75cc0 -n openebs
    Logical Sector Size:   512
    Physical Sector Size:  512
    Storage:               10736352768
    Device Type:           partition
    Drive Type:            SSD
    Firmware Revision:     
    Hardware Sector Size:  512
    Logical Block Size:    512
    Physical Block Size:   512
  Node Attributes:
    Node Name:  node02
  Partitioned:  No
  Path:         /dev/xvdb1
  Claim State:  Unclaimed
  State:        Active

Working with Storage Engines

As discussed earlier, OpenEBS provides three storage engines to choose from.

The engines are:

  • cStor
  • Jiva
  • Local PV

We shall discuss how to use the three storage engines.

Persistent volumes using cStor

For cStor, there are a number of operations needed to provision a persistent volume that utilizes this feature. These are:

  1. Create cStor storage pools
  2. Create cStor storage classes
  3. Provision a cStor volume

We will go through the steps to achieve the above.

Step 1 – Create cStor Storage pool

The storage pool is created through specifying the block devices on the nodes. Use the steps below to create a cStor storage pool.

  • Get the details of the block devices attached to the k8s cluster:

$ kubectl get blockdevice -n openebs -o jsonpath='{range .items[*]}{.metadata.name} {.spec.path}{"\n"}{end}'

Identify the unclaimed blockdevices:

$ kubectl get blockdevice -n openebs | grep Unclaimed


root@bazenga:~# kubectl get blockdevice | grep Unclaimed
blockdevice-4f749fd37d6bfc5613c9272b2eb75cc0   node02     10736352768   Unclaimed    Active   82m
blockdevice-59c0818b5f8b2e56028959d921221af2   node03     10736352768   Unclaimed    Active   82m
blockdevice-79b8a6c83ee34a7e4b55e8d23f14323d   node03     21473771008   Unclaimed    Active   82m
  • Create a StoragePoolClaim YAML file specifying the PoolResourceRequests and PoolResourceLimits. These values set the minimum and maximum resources that will be allocated to the pool, depending on the available resources on the nodes. You will also list the block devices in your cluster under blockDeviceList.
$ vim cstor-pool1-config.yaml

Add content below, replacing the block devices with yours.

apiVersion: openebs.io/v1alpha1
kind: StoragePoolClaim
metadata:
  name: cstor-disk-pool
  annotations:
    cas.openebs.io/config: |
      - name: PoolResourceRequests
        value: |-
            memory: 2Gi
      - name: PoolResourceLimits
        value: |-
            memory: 4Gi
spec:
  name: cstor-disk-pool
  type: disk
  poolSpec:
    poolType: striped
  blockDevices:
    blockDeviceList:
    - blockdevice-4f749fd37d6bfc5613c9272b2eb75cc0
    - blockdevice-59c0818b5f8b2e56028959d921221af2
  • Apply the configuration.
$ kubectl apply -f cstor-pool1-config.yaml
  • Verify that a cStor pool configuration has been created.
$ kubectl get spc

Desired output:

root@bazenga:~# kubectl get spc
NAME              AGE
cstor-disk-pool   76s

Verify that the cStor pool was successfully created:

$ kubectl get csp


root@bazenga:~# kubectl get csp
NAME                   ALLOCATED   FREE    CAPACITY   STATUS    READONLY   TYPE      AGE
cstor-disk-pool-4wgo   101K        9.94G   9.94G      Healthy   false      striped   35s
cstor-disk-pool-v4sh   101K        9.94G   9.94G      Healthy   false      striped   35s

Verify that the cStor pool pods are running on nodes.

$ kubectl get pod -n openebs | grep -i cstor-disk-pool


root@bazenga:~# kubectl get pod -n openebs | grep cstor-disk-pool
cstor-disk-pool-4wgo-bd646764d-7f82v          3/3     Running   0          12m
cstor-disk-pool-v4sh-78dc8c4c7c-7gwhx         3/3     Running   0          12m

We can now use the cStor storage pools to provision cStor volumes.

Step 2 – Create cStor StorageClass

We need to provision a StorageClass out of the StoragePool we created. This will be used for the volume claims.

In the StorageClass, you will also be required to determine the replicaCount for the application that will use the cStor volume.

The example below is for a deployment with two replicas.

$ vim openebs-sc-rep.yaml

Add below content:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-sc-statefulset
  annotations:
    openebs.io/cas-type: cstor
    cas.openebs.io/config: |
      - name: StoragePoolClaim
        value: "cstor-disk-pool"
      - name: ReplicaCount
        value: "2"
provisioner: openebs.io/provisioner-iscsi

Apply the configuration

$ kubectl apply -f openebs-sc-rep.yaml

Step 3 – Create cStor Volume

We will then create a PersistentVolumeClaim file called openebs-cstor-pvc.yaml using the StorageClass we defined above.

$ vim openebs-cstor-pvc.yaml

Add content below in the file:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: cstor-pvc
spec:
  storageClassName: openebs-sc-statefulset
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi

Apply the configuration:

kubectl apply -f openebs-cstor-pvc.yaml

Check if the PVC has been created:

$ kubectl get pvc

Sample output:

root@bazenga:~# kubectl get pvc
NAME                      STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS             AGE
cstor-pvc                 Bound     pvc-8929c731-706d-4813-be6b-05099bc80df0   2Gi        RWO            openebs-sc-statefulset   12s

At this point, you can now deploy an application that will use the PersistentVolume under the storage class created.

root@bazenga:~# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                           STORAGECLASS             REASON   AGE
pvc-8929c731-706d-4813-be6b-05099bc80df0   2Gi        RWO            Delete           Bound    default/cstor-pvc               openebs-sc-statefulset            10m
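To illustrate, a minimal BusyBox pod mounting the cstor-pvc claim could look like the sketch below. The pod name, image and mount path are hypothetical placeholders; only the claimName must match the PVC created above:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox-cstor
spec:
  containers:
    - name: busybox
      image: busybox
      # Write a file into the cStor-backed volume, then stay alive
      command: ["sh", "-c", "echo hello > /mnt/store/hello.txt && sleep 3600"]
      volumeMounts:
        - name: data
          mountPath: /mnt/store
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: cstor-pvc
```

Because the data is replicated across the cStor pool, the file survives if the pod is rescheduled to another node.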

Provision Volumes using Jiva

Jiva is the second alternative for OpenEBS storage engines.

The required operations to create a Jiva volume are:

  1. Create a Jiva pool
  2. Create a storage class
  3. Create the persistent volume.

Step 1. Create a Jiva Pool

Jiva runs on disks that have been prepared (formatted) and mounted on the nodes. This means that you have to provision the disks on the nodes before setting up a Jiva pool.

The steps below will guide you through preparing a Jiva disk.

  • Create a partition on the disk (here /dev/sdb; replace with your device) using fdisk:
sudo fdisk /dev/sdb
  • Make a filesystem on the new partition:
sudo mkfs.ext4 /dev/sdb1
  • Create a mount point and mount the partition:
sudo mkdir /home/openebs-gpd
sudo mount /dev/sdb1 /home/openebs-gpd

Proceed to create a Jiva pool using a jiva-gpd-pool.yaml file as below.

$ vim jiva-gpd-pool.yaml

Add content below:

apiVersion: openebs.io/v1alpha1
kind: StoragePool
metadata:
  name: gpdpool
  type: hostdir
spec:
  path: "/home/openebs-gpd"

Apply the configuration file to create your pool.

kubectl apply -f jiva-gpd-pool.yaml 

Step 2 – Create Jiva StorageClass

Create a StorageClass that will be used for persistent volume claims.

vim jiva-gpd-2repl-sc.yaml

Add content below:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-jiva
  annotations:
    openebs.io/cas-type: jiva
    cas.openebs.io/config: |
      - name: ReplicaCount
        value: "2"
      - name: StoragePool
        value: gpdpool
provisioner: openebs.io/provisioner-iscsi

Apply the configuration:

kubectl apply -f jiva-gpd-2repl-sc.yaml

Step 3 – Create Volume from Jiva

Create a jiva PVC with the config file below:

vim jiva-pvc.yaml

Add content below:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: jiva-pvc
spec:
  storageClassName: openebs-jiva
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi

Then deploy the PVC:

kubectl apply -f jiva-pvc.yaml

Provision Volumes using LocalPV

LocalPV utilizes host paths and locally attached disks. OpenEBS comes with default StorageClasses for hostpath and device. You can, however, create your own SC if you want to specify a separate path on your host. This can be configured in the YAML file below:

vim custom-local-hostpath-sc.yaml

Add content below:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-hostpath
  annotations:
    openebs.io/cas-type: local
    cas.openebs.io/config: |
      - name: StorageType
        value: hostpath
      - name: BasePath
        value: /var/local-hostpath
provisioner: openebs.io/local
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer

You will be required to specify the BasePath value for the hostpath.

Apply the configuration:

kubectl apply -f custom-local-hostpath-sc.yaml

To verify that the LocalPV SC is running:

kubectl get sc

Create a PVC for LocalPV

Create a pvc yaml file that uses the storage class created above or the default SC.

vim local-hostpath-pvc.yaml

Add the following:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: local-hostpath-pvc
spec:
  storageClassName: openebs-hostpath
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5G

Apply the configuration:

kubectl apply -f local-hostpath-pvc.yaml
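Note that the hostpath StorageClass uses volumeBindingMode: WaitForFirstConsumer, so the PVC will remain Pending until a pod that consumes it is scheduled. A minimal pod to trigger the binding could look like the sketch below; the pod name, image and mount path are hypothetical placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hello-local-hostpath
spec:
  containers:
    - name: busybox
      image: busybox
      # Write a marker file into the local volume, then stay alive
      command: ["sh", "-c", "echo local > /mnt/store/greet.txt && sleep 3600"]
      volumeMounts:
        - name: local-storage
          mountPath: /mnt/store
  volumes:
    - name: local-storage
      persistentVolumeClaim:
        claimName: local-hostpath-pvc
```

Once this pod is scheduled, `kubectl get pvc` should show local-hostpath-pvc transition from Pending to Bound.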

That’s all that is needed to provision volumes for OpenEBS using the three available storage engines. You can find more detailed information in the official OpenEBS documentation.
