Install MicroK8s Kubernetes on Rocky Linux 9 / AlmaLinux 9


Kubernetes is a free and open-source orchestration tool that has become widely adopted in modern software development. It automates the deployment, scaling, and management of applications. Applications normally run in containers, with the workloads distributed across the cluster. Containers suit the microservices architecture well, since they are immutable, portable, and optimized for resource usage. Kubernetes has several distributions, which include:

  • OpenShift: this is a Kubernetes distribution developed by RedHat. It can be run both on-premise and in the cloud.
  • Google Kubernetes Engine: This is a simple and flexible Kubernetes distribution that runs on Google Cloud.
  • Azure Kubernetes Service: This is a cloud-only Kubernetes distribution for the Azure cloud.
  • Rancher: This Kubernetes distribution has a key focus on multi-cluster Kubernetes deployments. This distribution is similar to OpenShift but it integrates Kubernetes with several other tools.
  • Canonical Kubernetes: This Kubernetes distribution is developed by Canonical (the company behind Ubuntu Linux). It is an umbrella for two CNCF-certified Kubernetes distributions, MicroK8s and Charmed Kubernetes. It can be run both on-premise and in the cloud.

In this guide, we will be learning how to install MicroK8s Kubernetes on Rocky Linux 9 / AlmaLinux 9. MicroK8s is a powerful and lightweight enterprise-grade Kubernetes distribution. It has a small disk and memory footprint but still offers innumerable add-ons, including Knative, Cilium, Istio, Grafana, etc. It is billed as the fastest multi-node Kubernetes and works on Windows, Linux, and macOS systems. MicroK8s can be used to reduce the complexity and time involved in deploying a Kubernetes cluster.

Microk8s is preferred due to the following reasons:

  • Simplicity: It is simple to install and manage, shipping as a single snap package with all dependencies bundled.
  • Secure: Updates are provided for all security issues and can be applied immediately or scheduled to suit your maintenance cycle.
  • Small: This is the smallest Kubernetes distro and can be installed on a laptop or home workstation. When run on Ubuntu, it is compatible with Amazon EKS, Google GKE, and Azure AKS.
  • Comprehensive: It includes a large collection of manifests for common Kubernetes capabilities such as Ingress, DNS, Dashboard, Clustering, and Monitoring, plus updates to the latest Kubernetes version.
  • Current: It tracks upstream and releases beta, RC, and final bits on the same day as upstream K8s.

Now let’s plunge in!

Step 1 – Install Snapd on Rocky Linux 9 / AlmaLinux 9

Microk8s is a snap package and so snapd is required on the Rocky Linux 9 / AlmaLinux 9 system. The below commands can be used to install snapd on Rocky Linux 9 / AlmaLinux 9.

Enable the EPEL repository.

sudo dnf install epel-release

Install snapd:

sudo dnf install snapd

Once installed, you need to create a symbolic link for classic snap support.

sudo ln -s /var/lib/snapd/snap /snap

Add the snap binaries directory to your $PATH via a profile script (e.g. /etc/profile.d/snapd.sh):

echo 'export PATH=$PATH:/var/lib/snapd/snap/bin' | sudo tee -a /etc/profile.d/snapd.sh
source /etc/profile.d/snapd.sh

Start and enable the service:

sudo systemctl enable --now snapd.socket

Verify if the service is running:

$ systemctl status snapd.socket
 snapd.socket - Socket activation for snappy daemon
     Loaded: loaded (/usr/lib/systemd/system/snapd.socket; enabled; vendor preset: disabled)
     Active: active (listening) since Tue 2022-07-26 09:58:46 CEST; 7s ago
      Until: Tue 2022-07-26 09:58:46 CEST; 7s ago
   Triggers: ● snapd.service
     Listen: /run/snapd.socket (Stream)
             /run/snapd-snap.socket (Stream)
      Tasks: 0 (limit: 23441)
     Memory: 0B
        CPU: 324us
     CGroup: /system.slice/snapd.socket

Set SELinux in permissive mode:

sudo setenforce 0
sudo sed -i 's/^SELINUX=.*/SELINUX=permissive/g' /etc/selinux/config
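To confirm the change took effect, a quick check (assuming SELinux is present on the system):

```shell
# Runtime mode should now report "Permissive", and the persisted
# config should carry the change across reboots.
getenforce
grep '^SELINUX=' /etc/selinux/config
```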

Step 2 – Install Microk8s on Rocky Linux 9 / AlmaLinux 9

Once Snapd has been installed, you can easily install Microk8s by issuing the command:

$ sudo snap install microk8s --classic 
2022-07-26T10:00:17+02:00 INFO Waiting for automatic snapd restart...
microk8s (1.24/stable) v1.24.3 from Canonical✓ installed

To be able to execute the commands smoothly, you need to set the below permissions:

sudo usermod -a -G microk8s $USER
sudo chown -f -R $USER ~/.kube

For the changes to apply, run the command:

newgrp microk8s

Now verify the installation by checking the MicroK8s status:

$ microk8s status
microk8s is running
high-availability: no
  datastore master nodes:
  datastore standby nodes: none
    ha-cluster           # (core) Configure high availability on the current node
    community            # (core) The community addons repository
    dashboard            # (core) The Kubernetes dashboard
    dns                  # (core) CoreDNS
    gpu                  # (core) Automatic enablement of Nvidia CUDA
    helm                 # (core) Helm 2 - the package manager for Kubernetes
    helm3                # (core) Helm 3 - Kubernetes package manager
    host-access          # (core) Allow Pods connecting to Host services smoothly
    hostpath-storage     # (core) Storage class; allocates storage from host directory

Get the available nodes:

$ microk8s kubectl get nodes
master   Ready       3m38s   v1.24.3-2+63243a96d1c393

Step 3 – Install and Configure kubectl for MicroK8s

Microk8s comes with its own kubectl version to avoid interference with any version available on the system. This is used on the terminal as:

microk8s kubectl
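If you prefer typing plain kubectl, a common convenience is a shell alias for the bundled client. This is an optional sketch; adjust the rc file to your shell:

```shell
# Alias the bundled client in the current shell...
alias kubectl='microk8s kubectl'
# ...and persist it for future bash sessions.
echo "alias kubectl='microk8s kubectl'" >> ~/.bashrc
```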

However, Microk8s can be configured to work with your host’s kubectl. First, obtain the MicroK8s config using the command:

$ microk8s config
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUREekNDQWZlZ0F3SUJBZ0lVWlZURndTSVFhOU13Rm1VdmR1S09pM0ErY3hvd0RRWUpLb1pJaHZjTkFRRUwKQlFBd0Z6...
  name: microk8s-cluster
contexts:
- context:
    cluster: microk8s-cluster
    user: admin
  name: microk8s
current-context: microk8s

Install kubectl on Rocky Linux 9 / AlmaLinux 9 using the command:

curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
sudo chmod +x kubectl
sudo mv kubectl /usr/local/bin/

Generate the required config:

cd $HOME
mkdir -p ~/.kube
microk8s config > ~/.kube/config

Get the available nodes:

$ kubectl get nodes
master   Ready       5m35s   v1.24.3-2+63243a96d1c393

Step 4 – Add Nodes to the Microk8s Cluster

For improved performance and high availability, you can add nodes to the Kubernetes cluster.

On the master node, allow the required ports through the firewall:

sudo firewall-cmd --permanent --add-port={25000,16443,12379,10250,10255,10257,10259}/tcp
sudo firewall-cmd --reload

Also, generate the command to be used by the nodes to join the cluster;

$ microk8s add-node
microk8s join

Use the '--worker' flag to join a node as a worker not running the control plane, eg:
microk8s join --worker

If the node you are adding is not reachable through the default interface you can use one of the following:
microk8s join

Install and configure Microk8s on the Nodes

You need to install Microk8s on the nodes just as we did in steps 1 and 2. After installing Microk8s on the nodes, run the following commands:

export OPENSSL_CONF=/var/lib/snapd/snap/microk8s/current/etc/ssl/openssl.cnf
sudo firewall-cmd --permanent --add-port={25000,10250,10255}/tcp
sudo firewall-cmd --reload

Now use the generated command on the master to join the nodes to the Microk8s cluster.

$ microk8s join --worker
Contacting cluster at

The node has joined the cluster and will appear in the nodes list in a few seconds.

Currently this worker node is configured with the following kubernetes API server endpoints:
    - and port 16443, this is the cluster node contacted during the join operation.

If the above endpoints are incorrect, incomplete or if the API servers are behind a loadbalancer please update

Once added, check the available nodes:

$ kubectl get nodes
master   Ready       41m     v1.24.3-2+63243a96d1c393
node1    Ready       7m52s   v1.24.3-2+63243a96d1c393

To remove a node from a cluster, run the command below on the node:

microk8s leave
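
If a node has already gone offline and cannot run the command itself, it can instead be purged from the control plane. A sketch, run on the master, using the node name reported by kubectl get nodes (node1 in the output above):

```shell
# Remove a departed node from the cluster's datastore; --force skips
# the check that the node has already left on its own.
microk8s remove-node node1 --force
```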

Step 5 – Deploy an Application with Microk8s

Deploying an application in Microk8s is similar to other Kubernetes distros. To demonstrate this, we will deploy the Nginx application as shown:

$ kubectl create deployment webserver --image=nginx
deployment.apps/webserver created

Verify the deployment:

$ kubectl get pods
NAME                         READY   STATUS    RESTARTS   AGE
webserver-566b9f9975-cwck4   1/1     Running   0          28s

Step 6 – Deploy Kubernetes Services on Microk8s

For the deployed application to be accessible, we will expose our created pod using NodePort as shown:

$ kubectl expose deployment webserver --type="NodePort" --port 80
service/webserver exposed
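
Before opening a browser, you can check from the node itself that the service answers. A quick sketch, assuming the deployment and service above:

```shell
# Read the assigned NodePort from the service spec, then fetch the
# Nginx welcome page through it.
NODE_PORT=$(kubectl get svc webserver -o jsonpath='{.spec.ports[0].nodePort}')
curl -s "http://localhost:${NODE_PORT}" | grep -i "nginx"
```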

Get the service port:

$ kubectl get svc webserver
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
webserver    NodePort           80:30281/TCP   29s

Try accessing the application using the exposed port via the web.


Step 7 – Scaling applications on Microk8s

Scaling means creating replicas of pods/deployments for high availability. This feature is a core Kubernetes capability, allowing an application to handle as many requests as possible.

To create replicas, use the command with the syntax below:

$ kubectl scale deployment webserver --replicas=4
deployment.apps/webserver scaled

Get the pods:

$ kubectl get pods
NAME                         READY   STATUS    RESTARTS   AGE
webserver-566b9f9975-cwck4   1/1     Running   0          8m40s
webserver-566b9f9975-ts2rz   1/1     Running   0          28s
webserver-566b9f9975-t656s   1/1     Running   0          28s
webserver-566b9f9975-7z6zq   1/1     Running   0          28s

It is that simple!
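
A fixed replica count can also be replaced with automatic scaling. A sketch using the Horizontal Pod Autoscaler, which relies on metrics-server (enabled alongside the dashboard in Step 8):

```shell
# Let Kubernetes size the deployment between 2 and 6 replicas,
# targeting 80% average CPU utilisation.
kubectl autoscale deployment webserver --min=2 --max=6 --cpu-percent=80
kubectl get hpa webserver
```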

Step 8 – Enabling the microk8s Dashboard

The dashboard provides an easy way to manage the Kubernetes cluster. Since it is an add-on, we need to enable it by issuing the command:

$ microk8s enable dashboard dns
Infer repository core for addon dashboard
Infer repository core for addon dns
Enabling Kubernetes Dashboard
Infer repository core for addon metrics-server
Enabling Metrics-Server
serviceaccount/metrics-server created
...

Create the token to be used to access the dashboard.

kubectl create token default

Verify this:

$ kubectl get services -n kube-system
NAME                        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                  AGE
metrics-server              ClusterIP           443/TCP                  77s
kubernetes-dashboard        ClusterIP           443/TCP                  58s
dashboard-metrics-scraper   ClusterIP            8000/TCP                 58s
kube-dns                    ClusterIP            53/UDP,53/TCP,9153/TCP   53

Allow the port (10443) through the firewall:

sudo firewall-cmd --permanent --add-port=10443/tcp
sudo firewall-cmd --reload

Now forward the dashboard service to the local port (10443) using the command:

kubectl port-forward -n kube-system service/kubernetes-dashboard --address 0.0.0.0 10443:443

Now access the dashboard using the URL https://IP_Address:10443. In some browsers such as Chrome, you may hit an invalid-certificate error when accessing the dashboard remotely; on Firefox, accept the risk and continue past the warning.


Provide the generated token to sign in. On successful login, you will see the MicroK8s dashboard.


From the above dashboard, you can easily manage your Kubernetes cluster.

Step 9 – Enable In-built storage on Microk8s

Microk8s comes with an in-built storage add-on that allows quick creation of PVCs. To enable it and make this storage available for use by pods, execute the command below:

microk8s enable hostpath-storage

Once enabled, verify that the hostpath provisioner is running as a pod.

$ kubectl -n kube-system get pods
NAME                                         READY   STATUS    RESTARTS       AGE
calico-kube-controllers-7f85f9c7b9-v7lk5     1/1     Running   0              3h42m
metrics-server-5f8f64cb86-82nn2              1/1     Running   1 (165m ago)   165m
calico-node-hljcb                            1/1     Running   0              3h13m
calico-node-sjzd2                            1/1     Running   0              3h9m
coredns-66bcf65bb8-m6x44                     1/1     Running   0              163m
dashboard-metrics-scraper-6b6f796c8d-scwtx   1/1     Running   0              163m
kubernetes-dashboard-765646474b-256qb        1/1     Running   0              163m
hostpath-provisioner-f57964d5f-sh4wj         1/1     Running   0              24s

Also, confirm that a storage class has been created:

$ kubectl get sc
microk8s-hostpath (default)   Delete          WaitForFirstConsumer   false                  83s

Now we can use the storage class above to create PVCs.

Create a Persistent Volume

To demonstrate if the storage class is working properly, create a PV using it.

$ vim sample-pv.yml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: sampe-pv
spec:
  # Here we are asking to use our custom storage class
  storageClassName: microk8s-hostpath
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    # Should be created upfront
    path: '/data/demo'

Create the hostpath with the required permissions.

sudo mkdir -p /data/demo
sudo chmod 777 /data/demo
sudo chcon -Rt svirt_sandbox_file_t /data/demo

Create the PV:

kubectl create -f sample-pv.yml

Verify the creation:

$ kubectl get pv
sampe-pv   5Gi        RWO            Retain           Available           microk8s-hostpath            7s

Create a Persistent Volume Claim

Once the PV has been created, now create the PVC using the StorageClass:

vim sample-pvc.yml

Add the below content to the file:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
  namespace: default
spec:
  # Once again our custom storage class here
  storageClassName: microk8s-hostpath
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi

Apply the manifest:

kubectl create -f sample-pvc.yml

Verify the creation:

$ kubectl get pvc
my-pvc   Pending                                      microk8s-hostpath   13s

Deploy an application that uses the PVC.

$ vim pod.yml
apiVersion: v1
kind: Pod
metadata:
  name: task-pv-pod
spec:
  volumes:
    - name: task-pv-storage
      persistentVolumeClaim:
        claimName: my-pvc
  containers:
    - name: task-pv-container
      image: nginx
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: task-pv-storage

Apply the manifest:

kubectl create -f pod.yml

Now verify if the PVC is bound:

$ kubectl get pv
sampe-pv   5Gi        RWO            Retain           Bound    default/my-pvc   microk8s-hostpath            7m23s

$ kubectl get pvc
my-pvc   Bound    sampe-pv   5Gi        RWO            microk8s-hostpath   98s
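
To confirm the volume is actually wired through, you can drop a file into the hostPath and read it back through the pod. A sketch assuming the pod and paths above:

```shell
# The PV is backed by /data/demo on the host, and the pod mounts the
# claim at Nginx's web root, so a file written here is visible in the pod.
echo "hello from the PV" | sudo tee /data/demo/index.html
kubectl exec task-pv-pod -- cat /usr/share/nginx/html/index.html
```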

Step 10 – Enable Monitoring With Prometheus and Grafana

Microk8s has a Prometheus add-on that can be enabled. This tool collects cluster metrics and offers their visualization through the Grafana interface.

To enable the add-on, execute:

$ microk8s enable prometheus
Infer repository core for addon prometheus
Adding argument --authentication-token-webhook to nodes.
Configuring node
Restarting nodes.
Configuring node
Infer repository core for addon dns
Addon core/dns is already enabled

After a few minutes, verify that the required pods are up:

$ kubectl get pods -n monitoring
NAME                                   READY   STATUS    RESTARTS      AGE
prometheus-adapter-85455b9f55-w975k    1/1     Running   0             89s
node-exporter-jnmmk                    2/2     Running   0             89s
grafana-789464df6b-kt5hr               1/1     Running   0             89s
prometheus-adapter-85455b9f55-2g9rs    1/1     Running   0             89s
blackbox-exporter-84c68b59b8-5lkw4     3/3     Running   0             89s
prometheus-k8s-0                       2/2     Running   1 (43s ago)   77s
node-exporter-dzj66                    2/2     Running   0             89s
prometheus-operator-65cdb77c59-gfk4v   2/2     Running   0             89s
kube-state-metrics-55b87f58f6-m6rnv    3/3     Running   0             89s
alertmanager-main-0                    2/2     Running   0             78s

To access the Prometheus and Grafana services, you need to forward them:

$ kubectl get services -n monitoring
NAME                    TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
prometheus-operator     ClusterIP   None                     8443/TCP                     2m31s
alertmanager-main       ClusterIP           9093/TCP                     2m22s
blackbox-exporter       ClusterIP           9115/TCP,19115/TCP           2m21s
grafana                 ClusterIP           3000/TCP                     2m20s
kube-state-metrics      ClusterIP   None                     8443/TCP,9443/TCP            2m20s
node-exporter           ClusterIP   None                     9100/TCP                     2m20s
prometheus-adapter      ClusterIP           443/TCP                      2m20s
prometheus-k8s          ClusterIP           9090/TCP                     2m19s
alertmanager-operated   ClusterIP   None                     9093/TCP,9094/TCP,9094/UDP   93s
prometheus-operated     ClusterIP   None                     9090/TCP                     93s

Allow the ports intended to be used through the firewall:

sudo firewall-cmd --permanent --add-port={9090,3000}/tcp
sudo firewall-cmd --reload

Now expose the ports:

kubectl port-forward -n monitoring service/prometheus-k8s --address 0.0.0.0 9090:9090

Access Prometheus using the URL http://IP_Address:9090
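
You can also verify scraping from the terminal. A sketch against Prometheus' HTTP API while the port-forward is running:

```shell
# The "up" metric is 1 for every target Prometheus scrapes successfully;
# a "success" status in the JSON reply confirms the server is answering.
curl -s 'http://localhost:9090/api/v1/query?query=up'
```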


For Grafana, you also need to expose the port:

kubectl port-forward -n monitoring service/grafana --address 0.0.0.0 3000:3000

Now access the service using the URL http://IP_Address:3000


Login with the default credentials:

  • Username: admin
  • Password: admin

Once logged in, change the password.


Now access the dashboard and visualize graphs. Navigate to Dashboards -> Manage -> Default and select the dashboard to load.


For Kubernetes API


For the Kubernetes Namespace Networking

Final Thoughts

That marks the end of this detailed guide on how to install MicroK8s Kubernetes on Rocky Linux 9 / AlmaLinux 9. You are also equipped with the required knowledge on how to use Microk8s to set up and manage a Kubernetes cluster.


A systems engineer with excellent skills in systems administration, cloud computing, systems deployment, virtualization, containers, and a certified ethical hacker.