In recent years, the popularity of Kubernetes and its ecosystem has increased immensely thanks to the platform's extensibility, its rich design patterns, and the variety of workload types it supports. Kubernetes, also known as K8s, is open-source software used to orchestrate deployments and to scale and manage containerized applications across a server farm. It achieves this by distributing the workload across a cluster of servers. Furthermore, it works continuously to maintain the desired state of container applications, allocating storage, persistent volumes, etc.
The cluster of servers in Kubernetes has two types of nodes:
- Control plane: it makes global decisions about the cluster (such as scheduling) and detects and responds to cluster events, such as starting up a new pod. It consists of several components, including:
- kube-apiserver: it is used to expose the Kubernetes API
- etcd: it stores the cluster data
- kube-scheduler: it watches for the newly created Pods with no assigned node, and selects a node for them to run on.
- Worker nodes: they run the containerized workloads. They host the pods that are the basic components of an application. A cluster must consist of at least one worker node.
The smallest deployable unit in Kubernetes is known as a pod. A pod may be made up of one or many containers, each with its own configuration.
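As a sketch of a multi-container pod, the manifest below runs a web server alongside a sidecar container that shares the pod's network and lifecycle (all names and images here are illustrative, not from this guide):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar      # hypothetical pod name
spec:
  containers:
  - name: web                 # main container serving traffic
    image: nginx:1.25
    ports:
    - containerPort: 80
  - name: log-sidecar         # sidecar container in the same pod
    image: busybox:1.36
    command: ["sh", "-c", "while true; do date; sleep 60; done"]
```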
There are 3 different resources provided when deploying pods in Kubernetes:
- Deployments: this is the most used and easiest resource to deploy. They are usually used for stateless applications. However, the application can be made stateful by attaching a persistent volume to it.
- StatefulSets: this resource is used to manage the deployment and scaling of a set of Pods. It provides guarantees about the ordering and uniqueness of these Pods.
- DaemonSets: this ensures a copy of a pod runs on all (or selected) nodes of the cluster. When a node is added to or removed from the cluster, the DaemonSet automatically adds or removes the pod.
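For instance, a minimal DaemonSet that keeps one copy of a pod on every node might look like this (a sketch; the name and image are placeholders):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-agent            # hypothetical per-node agent
spec:
  selector:
    matchLabels:
      app: node-agent
  template:
    metadata:
      labels:
        app: node-agent       # must match the selector above
    spec:
      containers:
      - name: agent
        image: busybox:1.36
        command: ["sh", "-c", "sleep infinity"]
```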
There are several methods to deploy a Kubernetes cluster on Linux. These include tools such as Minikube, Kubeadm, Kubernetes on AWS (Kube-AWS), Amazon EKS, etc. In this guide, we will learn how to deploy a k0s Kubernetes cluster on Rocky Linux 9 using k0sctl.
What is k0s?
K0s is an open-source, simple, solid, and certified Kubernetes distribution that can be deployed on any infrastructure. It offers the simplest way to set up a Kubernetes cluster with all the required features. Due to its design and flexibility, it can be used on bare metal, in the cloud, and on edge and IoT devices.
K0s ships as a single binary with no dependencies apart from the host OS kernel. This reduces the complexity and time involved in setting up a Kubernetes cluster.
The other features associated with k0s are:
- It is certified and 100% upstream Kubernetes
- It has multiple installation methods such as single-node, multi-node, airgap and Docker.
- It offers automatic lifecycle management with k0sctl where you can upgrade, backup and restore.
- Flexible deployment options with control plane isolation as default
- It offers scalability from a single node to large, highly available clusters.
- Supports a variety of datastore backends: etcd is the default for multi-node clusters and SQLite for single-node clusters; MySQL and PostgreSQL can be used as well.
- Supports x86-64, ARM64 and ARMv7
- It includes the Konnectivity service, CoreDNS, and Metrics Server
- Minimum CPU requirements (1 vCPU, 1 GB RAM)
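As an example of the datastore flexibility listed above, a k0s configuration can point the control plane at MySQL through kine instead of etcd. The fragment below is a sketch only; the DSN host, credentials, and database name are placeholders:

```yaml
spec:
  storage:
    type: kine
    kine:
      # placeholder DSN; adjust host, credentials and database name
      dataSource: mysql://k0s:password@tcp(db.example.com:3306)/k0sdb
```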
k0sctl is a command-line tool used for bootstrapping and managing k0s clusters. Normally, it connects to the hosts using SSH and collects information about them. The information gathered is then used to create a cluster by configuring the hosts, deploying k0s, and then connecting them together.
The image below demonstrates how k0sctl works.
Using k0sctl is the recommended way to create a k0s cluster for production, since it lets you create multi-node clusters in an easy and automated manner.
Now let’s dive in!
For this guide, we will have the 4 Rocky Linux 9 servers configured as shown:
The other Rocky Linux 9 server is my workspace, on which I will install k0sctl and create the cluster on the above nodes.
Once the hostnames have been set, edit /etc/hosts on the Workspace as shown:
$ sudo vi /etc/hosts

192.168.205.16 master.computingpost.com master
192.168.205.17 worker1.computingpost.com worker1
192.168.205.18 worker2.computingpost.com worker2
Since k0sctl uses SSH to access the hosts, we will generate SSH keys on the Workspace as shown:
$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/rocky9/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/rocky9/.ssh/id_rsa
Your public key has been saved in /home/rocky9/.ssh/id_rsa.pub
The key fingerprint is:
SHA256:wk0LRhNDWM1PA2pm9RZ1EDFdx9ZXvhh4PB99mrJypeU [email protected]
The key's randomart image is:
+---[RSA 3072]----+
|    +B+o...*=.o*|
|   .. =o.o.oo..B|
|     B .ooo = o=|
|    * + o. . =o+|
|     o S ..=o   |
|      . B       |
|     . + E      |
|      o         |
|                |
+----[SHA256]-----+
Ensure root login is permitted on the 3 nodes by editing /etc/ssh/sshd_config as below
# Authentication:
PermitRootLogin yes
Save the file and restart the SSH service:
sudo systemctl restart sshd
Copy the keys to the 3 nodes.
ssh-copy-id [email protected]
ssh-copy-id [email protected]
ssh-copy-id [email protected]
Once copied, verify if you can log in to any of the nodes without a password:
$ ssh [email protected]
Activate the web console with: systemctl enable --now cockpit.socket

Last login: Sat Aug 20 11:38:29 2022
[[email protected] ~]# exit
Step 1 – Install the k0sctl tool on Rocky Linux 9
The k0sctl tool can be installed on the Rocky Linux 9 Workspace by downloading the file from the GitHub release page.
You can also use wget to pull the binary. First, obtain the latest version tag:
VER=$(curl -s https://api.github.com/repos/k0sproject/k0sctl/releases/latest | grep tag_name | cut -d '"' -f 4)
echo $VER
Now download the latest file for your system:
### For 64-bit ###
wget https://github.com/k0sproject/k0sctl/releases/download/$VER/k0sctl-linux-x64 -O k0sctl

### For ARM ###
wget https://github.com/k0sproject/k0sctl/releases/download/$VER/k0sctl-linux-arm -O k0sctl
Once the file has been downloaded, make it executable and copy it to your PATH:
chmod +x k0sctl
sudo cp k0sctl /usr/local/bin/
Verify the installation:
$ k0sctl version
version: v0.13.2
commit: 7116025
To enable shell completions, use the commands:
### Bash ###
sudo sh -c 'k0sctl completion > /etc/bash_completion.d/k0sctl'

### Zsh ###
sudo sh -c 'k0sctl completion > /usr/local/share/zsh/site-functions/_k0sctl'

### Fish ###
k0sctl completion > ~/.config/fish/completions/k0sctl.fish
Step 2 – Configure the k0s Kubernetes Cluster
We will create a configuration file for the cluster. To generate the default configuration, we will use the command:
k0sctl init > k0sctl.yaml
Now modify the generated config file to suit your environment, as shown:
apiVersion: k0sctl.k0sproject.io/v1beta1
kind: Cluster
metadata:
  name: k0s-cluster
spec:
  hosts:
  - ssh:
      address: master.computingpost.com
      user: root
      port: 22
      keyPath: /home/$USER/.ssh/id_rsa
    role: controller
  - ssh:
      address: worker1.computingpost.com
      user: root
      port: 22
      keyPath: /home/$USER/.ssh/id_rsa
    role: worker
  - ssh:
      address: worker2.computingpost.com
      user: root
      port: 22
      keyPath: /home/$USER/.ssh/id_rsa
    role: worker
  k0s:
    dynamicConfig: false
We have a configuration file with 1 control plane and 2 worker nodes. It is also possible to have a single node deployment where you have a single server to act as a control plane and worker node as well:
In that case, the configuration file will appear as shown:
apiVersion: k0sctl.k0sproject.io/v1beta1
kind: Cluster
metadata:
  name: k0s-cluster
spec:
  hosts:
  - ssh:
      address: IP_Address
      user: root
      port: 22
      keyPath: /home/$USER/.ssh/id_rsa
    role: controller+worker
  k0s:
    dynamicConfig: false
Step 3 – Create the k0s Kubernetes Cluster on Rocky Linux 9 using k0sctl
Once the configuration has been made, you can start the cluster by applying the configuration file:
First, allow the Kubernetes API port through the firewall on the control plane:
sudo firewall-cmd --add-port=6443/tcp --permanent
sudo firewall-cmd --reload
Now apply the config:
k0sctl apply --config k0sctl.yaml
⠀⣿⣿⡇⠀⠀⢀⣴⣾⣿⠟⠁⢸⣿⣿⣿⣿⣿⣿⣿⡿⠛⠁⠀⢸⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⠀█████████ █████████ ███
⠀⣿⣿⡇⣠⣶⣿⡿⠋⠀⠀⠀⢸⣿⡇⠀⠀⠀⣠⠀⠀⢀⣠⡆⢸⣿⣿⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀███          ███    ███
⠀⣿⣿⣿⣿⣟⠋⠀⠀⠀⠀⠀⢸⣿⡇⠀⢰⣾⣿⠀⠀⣿⣿⡇⢸⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⠀███          ███    ███
⠀⣿⣿⡏⠻⣿⣷⣤⡀⠀⠀⠀⠸⠛⠁⠀⠸⠋⠁⠀⠀⣿⣿⡇⠈⠉⠉⠉⠉⠉⠉⠉⠉⢹⣿⣿⠀███          ███    ███
⠀⣿⣿⡇⠀⠀⠙⢿⣿⣦⣀⠀⠀⠀⣠⣶⣶⣶⣶⣶⣶⣿⣿⡇⢰⣶⣶⣶⣶⣶⣶⣶⣶⣾⣿⣿⠀█████████    ███    ██████████
k0sctl v0.13.2 Copyright 2021, k0sctl authors.
Anonymized telemetry of usage will be sent to the authors.
By continuing to use k0sctl you agree to these terms:
https://k0sproject.io/licenses/eula

INFO ==> Running phase: Connect to hosts
INFO [ssh] master:22: connected
INFO [ssh] worker1:22: connected
INFO [ssh] worker2:22: connected
INFO ==> Running phase: Detect host operating systems
INFO [ssh] master:22: is running Rocky Linux 9.0 (Blue Onyx)
INFO [ssh] worker1:22: is running Rocky Linux 9.0 (Blue Onyx)
INFO [ssh] worker2:22: is running Rocky Linux 9.0 (Blue Onyx)
INFO ==> Running phase: Acquire exclusive host lock
INFO ==> Running phase: Prepare hosts
INFO ==> Running phase: Gather host facts
.........
INFO [ssh] worker2:22: validating api connection to https://192.168.205.16:6443
INFO [ssh] master:22: generating token
INFO [ssh] worker1:22: writing join token
INFO [ssh] worker2:22: writing join token
INFO [ssh] worker1:22: installing k0s worker
INFO [ssh] worker2:22: installing k0s worker
INFO [ssh] worker1:22: starting service
INFO [ssh] worker2:22: starting service
INFO [ssh] worker1:22: waiting for node to become ready
INFO [ssh] worker2:22: waiting for node to become ready
Once complete, you will see this:
You may need to install kubectl on the workspace to help you manage the cluster with ease.
Download the binary file and install it with the command:
curl -LO "https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl"
chmod +x kubectl
sudo mv kubectl /usr/local/bin/
Verify the installation:
$ kubectl version --client
Client Version: version.Info{Major:"1", Minor:"24", GitVersion:"v1.24.4", GitCommit:"95ee5ab382d64cfe6c28967f36b53970b8374491", GitTreeState:"clean", BuildDate:"2022-08-17T18:54:23Z", GoVersion:"go1.18.5", Compiler:"gc", Platform:"linux/amd64"}
Kustomize Version: v4.5.4
To be able to access the cluster with kubectl, you need to get the kubeconfig file and set the environment variable:
k0sctl kubeconfig > kubeconfig
export KUBECONFIG=$PWD/kubeconfig
Now get the nodes in the cluster:
$ kubectl get nodes
NAME                        STATUS   ROLES    AGE     VERSION
worker1.computingpost.com   Ready    <none>   7m59s   v1.24.3+k0s
worker2.computingpost.com   Ready    <none>   7m59s   v1.24.3+k0s
The above command will only list the worker nodes. This is because K0s ensures that the controllers and workers are isolated.
Get all the pods running:
$ kubectl get pods --all-namespaces
NAMESPACE     NAME                              READY   STATUS    RESTARTS   AGE
kube-system   coredns-88b745646-djcjh           1/1     Running   0          11m
kube-system   coredns-88b745646-v9vfn           1/1     Running   0          9m34s
kube-system   konnectivity-agent-8bm85          1/1     Running   0          9m36s
kube-system   konnectivity-agent-tsllr          1/1     Running   0          9m37s
kube-system   kube-proxy-cdvjv                  1/1     Running   0          9m37s
kube-system   kube-proxy-n6ncx                  1/1     Running   0          9m37s
kube-system   kube-router-fhm65                 1/1     Running   0          9m37s
kube-system   kube-router-v5srj                 1/1     Running   0          9m36s
kube-system   metrics-server-7d7c4887f4-gv94g   0/1     Running   0          10m
Step 4 – Advanced K0sctl File Configurations
Once a cluster has been deployed, a default configuration file for the cluster is created. To generate and view the file, use the command below on the control plane:
# k0s default-config > /etc/k0s/k0s.yaml
The file looks as shown:
# cat /etc/k0s/k0s.yaml
# generated-by-k0sctl 2022-08-20T11:57:29+02:00
apiVersion: k0s.k0sproject.io/v1beta1
kind: ClusterConfig
metadata:
  creationTimestamp: null
  name: k0s
spec:
  api:
    address: 192.168.205.16
    k0sApiPort: 9443
    port: 6443
    sans:
    - 192.168.205.16
    - fe80::e4f8:8ff:fede:e1a5
    - master
    - 127.0.0.1
    tunneledNetworkingMode: false
  controllerManager: {}
  extensions:
    helm:
      charts: null
      repositories: null
    storage:
      create_default_storage_class: false
      type: external_storage
  images:
    calico:
      cni:
        image: docker.io/calico/cni
        version: v3.23.3
      kubecontrollers:
        image: docker.io/calico/kube-controllers
        version: v3.23.3
      node:
        image: docker.io/calico/node
        version: v3.23.3
    coredns:
      image: k8s.gcr.io/coredns/coredns
      version: v1.7.0
    default_pull_policy: IfNotPresent
    konnectivity:
      image: quay.io/k0sproject/apiserver-network-proxy-agent
      version: 0.0.32-k0s1
    kubeproxy:
      image: k8s.gcr.io/kube-proxy
      version: v1.24.3
    kuberouter:
      cni:
        image: docker.io/cloudnativelabs/kube-router
        version: v1.4.0
      cniInstaller:
        image: quay.io/k0sproject/cni-node
        version: 1.1.1-k0s.0
    metricsserver:
      image: k8s.gcr.io/metrics-server/metrics-server
      version: v0.5.2
    pushgateway:
      image: quay.io/k0sproject/pushgateway-ttl
      version: [email protected]:7031f6bf6c957e2fdb496161fe3bea0a5bde3de800deeba7b2155187196ecbd9
  installConfig:
    users:
      etcdUser: etcd
      kineUser: kube-apiserver
      konnectivityUser: konnectivity-server
      kubeAPIserverUser: kube-apiserver
      kubeSchedulerUser: kube-scheduler
  konnectivity:
    adminPort: 8133
    agentPort: 8132
  network:
    calico: null
    clusterDomain: cluster.local
    dualStack: {}
    kubeProxy:
      mode: iptables
    kuberouter:
      autoMTU: true
      mtu: 0
      peerRouterASNs: ""
      peerRouterIPs: ""
    podCIDR: 10.244.0.0/16
    provider: kuberouter
    serviceCIDR: 10.96.0.0/12
  podSecurityPolicy:
    defaultPolicy: 00-k0s-privileged
  scheduler: {}
  storage:
    etcd:
      externalCluster: null
      peerAddress: 192.168.205.16
    type: etcd
  telemetry:
    enabled: true
status: {}
You can modify the file as desired and then apply the changes made with the command:
sudo k0s install controller -c /etc/k0s/k0s.yaml
The file can be modified while the cluster is running, but for the changes to apply, restart the cluster with the commands:
sudo k0s stop
sudo k0s start
Configure Cloud Providers
K0s-managed Kubernetes doesn't include a built-in cloud provider service. You need to configure and add support for one manually. There are two ways of doing this:
- Using K0s Cloud Provider
K0s provides its own lightweight cloud provider that can be used to assign static external IPs to expose the worker nodes. It is enabled with the following flags:
# Worker
sudo k0s worker --enable-cloud-provider=true

# Controller
sudo k0s controller --enable-k0s-cloud-provider=true
After this, you can add the IPv4 and IPv6 static node IPs:
kubectl annotate node <node> k0sproject.io/node-ip-external=<external-ip>
- Using Built-in Cloud Manifest
Manifests allow one to run the cluster with preferred extensions. Normally, the controller reads the manifests from /var/lib/k0s/manifests
This can be verified from the control node:
$ ls -l /var/lib/k0s/
total 12
drwxr-xr-x.  2 root root  120 Aug 20 11:57 bin
drwx------.  3 etcd root   20 Aug 20 11:57 etcd
-rw-r--r--.  1 root root  241 Aug 20 11:57 konnectivity.conf
drwxr-xr-x. 15 root root 4096 Aug 20 11:57 manifests
drwxr-x--x.  3 root root 4096 Aug 20 11:57 pki
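As an illustration of how the manifest deployer works, a minimal manifest dropped into a subdirectory such as /var/lib/k0s/manifests/demo/ (a hypothetical directory name) would be picked up and applied by the controller automatically:

```yaml
# demo.yaml — a namespace the manifest deployer would create on its own
apiVersion: v1
kind: Namespace
metadata:
  name: demo
```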
With this option, you need to create a manifest with the below syntax:
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: cloud-controller-manager
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:cloud-controller-manager
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: cloud-controller-manager
  namespace: kube-system
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  labels:
    k8s-app: cloud-controller-manager
  name: cloud-controller-manager
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: cloud-controller-manager
  template:
    metadata:
      labels:
        k8s-app: cloud-controller-manager
    spec:
      serviceAccountName: cloud-controller-manager
      containers:
      - name: cloud-controller-manager
        # for in-tree providers we use k8s.gcr.io/cloud-controller-manager
        # this can be replaced with any other image for out-of-tree providers
        image: k8s.gcr.io/cloud-controller-manager:v1.8.0
        command:
        - /usr/local/bin/cloud-controller-manager
        - --cloud-provider=[YOUR_CLOUD_PROVIDER]  # Add your own cloud provider here!
        - --leader-elect=true
        - --use-service-account-credentials
        # these flags will vary for every cloud provider
        - --allocate-node-cidrs=true
        - --configure-cloud-routes=true
        - --cluster-cidr=172.17.0.0/16
      tolerations:
      # this is required so CCM can bootstrap itself
      - key: node.cloudprovider.kubernetes.io/uninitialized
        value: "true"
        effect: NoSchedule
      # this is to have the daemonset runnable on master nodes
      # the taint may vary depending on your cluster setup
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      # this is to restrict CCM to only run on master nodes
      # the node selector may vary depending on your cluster setup
      nodeSelector:
        node-role.kubernetes.io/master: ""
Step 5 – Deploy an Application on k0s
To test if the cluster is working as desired, we will create a deployment for the Nginx application:
The command below can be used to create and apply the manifest:
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
EOF
To verify if the pod is running, use the command:
$ kubectl get pods
NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-544dc8b7c4-frprq   1/1     Running   0          14s
nginx-deployment-544dc8b7c4-rgdqz   1/1     Running   0          14s
Step 6 – Deploy Kubernetes Service on k0s
In order for the application to be accessed, you need to expose the deployment with a Kubernetes service. The service can be deployed as NodePort, ClusterIP, or LoadBalancer.
For this guide, we will expose the application using the NodePort service:
$ kubectl expose deployment nginx-deployment --type=NodePort --port=80
service/nginx-deployment exposed
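The expose command above is roughly equivalent to applying a NodePort Service manifest like the sketch below, assuming the deployment's pods carry the label app: nginx (an assumption here); the node port is assigned randomly unless specified:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-deployment
spec:
  type: NodePort
  selector:
    app: nginx           # assumed pod label on the deployment
  ports:
  - port: 80             # cluster-internal service port
    targetPort: 80       # container port in the pods
```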
Verify if the service is running:
$ kubectl get svc
NAME               TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes         ClusterIP   10.96.0.1        <none>        443/TCP        16m
nginx-deployment   NodePort    10.101.222.227   <none>        80:32309/TCP   8s
Now you can access the service using the node IP and the port to which the service has been exposed. In this case, the port is 32309.
Allow this port through the firewall on the worker nodes:
sudo firewall-cmd --add-port=32309/tcp --permanent
sudo firewall-cmd --reload
Now access the service in a web browser.
It is also possible to deploy an Ingress service with the routing rules into the Kubernetes environment.
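A minimal Ingress routing traffic to the service created earlier might look like the sketch below. The hostname and ingress class are placeholders, and an ingress controller must be installed in the cluster separately:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress          # hypothetical name
spec:
  ingressClassName: nginx      # placeholder; depends on the installed controller
  rules:
  - host: app.example.com      # placeholder hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx-deployment
            port:
              number: 80
```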
Step 7 – Destroy the k0s Kubernetes cluster
To completely remove the k0s Kubernetes cluster, you can use the command below:
k0sctl reset -c k0sctl.yaml
k0sctl v0.13.2 Copyright 2021, k0sctl authors.
Anonymized telemetry of usage will be sent to the authors.
By continuing to use k0sctl you agree to these terms:
https://k0sproject.io/licenses/eula

? Going to reset all of the hosts, which will destroy all configuration and data, Are you sure? (y/N) y
The cluster will be removed as shown:
That is it!
There are many other k0sctl commands; to view them, see the help:
$ k0sctl help
NAME:
   k0sctl - k0s cluster management tool

USAGE:
   k0sctl [global options] command [command options] [arguments...]

COMMANDS:
   version     Output k0sctl version
   apply       Apply a k0sctl configuration
   kubeconfig  Output the admin kubeconfig of the cluster
   init        Create a configuration template
   reset       Remove traces of k0s from all of the hosts
   backup      Take backup of existing clusters state
   config      Configuration related sub-commands
   completion
   help, h     Shows a list of commands or help for one command

GLOBAL OPTIONS:
   --debug, -d  Enable debug logging (default: false) [$DEBUG]
   --trace      Enable trace logging (default: false) [$TRACE]
   --no-redact  Do not hide sensitive information in the output (default: false)
   --help, -h   show help (default: false)
Today we have learned the easiest way to deploy a k0s Kubernetes cluster on Rocky Linux 9 using k0sctl. I hope this guide was helpful.