Deploy HA Kubernetes Cluster on Rocky Linux 8 using RKE2


Kubernetes, abbreviated as K8s, is an open-source tool used to orchestrate containerized workloads across a cluster of hosts. It automates the deployment, scaling, and management of containerized applications.

Normally, Kubernetes distributes workloads across the cluster and automates the container networking needs. It also allocates storage and persistent volumes and works continuously to maintain the desired state of container applications.

There are several tools one can use to set up a Kubernetes cluster. These tools include Minikube, Kubeadm, Kubernetes on AWS (Kube-AWS), Amazon EKS, etc. In this guide, we will walk through how to deploy an HA Kubernetes cluster on Rocky Linux 8 using RKE2.

What is RKE2?

RKE stands for Rancher Kubernetes Engine. RKE2, also known as RKE Government, combines the best of RKE1 and K3s. It inherits usability, ease of operations, and a lightweight deployment model from K3s, and close alignment with upstream Kubernetes from RKE1. Unlike RKE1, RKE2 does not rely on Docker; it launches the control plane components as static pods managed by the kubelet.

The diagram below will help you understand the RKE2 cluster topology.

[Image: RKE2 cluster topology diagram]

RKE2 ships a number of open-source components that include:

  • K3s
    • Helm Controller
  • K8s
    • API Server
    • Controller Manager
    • Kubelet
    • Scheduler
    • Proxy
  • etcd
  • containerd/cri
  • runc
  • Helm
  • Metrics Server
  • NGINX Ingress Controller
  • CoreDNS
  • CNI: Canal (Calico & Flannel), Cilium or Calico

System Requirements

Use systems that meet the requirements below:

  • RAM: 4GB Minimum (we recommend at least 8GB)
  • CPU: 2 Minimum (we recommend at least 4CPU)
  • 3 Rocky Linux 8 Nodes
  • Zero or more agent nodes designated to run your apps and services
  • A load balancer to direct front-end traffic to the three server nodes
  • A DNS record that maps a URL to the load balancer

Step 1 – Set up Rocky Linux 8 Nodes

For this guide, we will use 3 Rocky Linux server nodes, a load balancer, and RKE2 agent nodes (1 or more).

TASK           HOSTNAME                    IP ADDRESS
Server Node 1  server1.computingpost.com   192.168.205.2
Server Node 2  server2.computingpost.com   192.168.205.3
Server Node 3  server3.computingpost.com   192.168.205.33
Load Balancer  rke.computingpost.com       192.168.205.9
Agent Node 1   agent1.computingpost.com    192.168.205.43
Agent Node 2   agent2.computingpost.com    192.168.205.44

Set the hostnames as shown:

##On Node1
sudo hostnamectl set-hostname server1.computingpost.com

##On Node2
sudo hostnamectl set-hostname server2.computingpost.com

##On Node3
sudo hostnamectl set-hostname server3.computingpost.com

##On Loadbalancer(Node4)
sudo hostnamectl set-hostname rke.computingpost.com

##On Node5
sudo hostnamectl set-hostname agent1.computingpost.com

##On Node6
sudo hostnamectl set-hostname agent2.computingpost.com

Add the hostnames to /etc/hosts on each node

$ sudo vim /etc/hosts
192.168.205.2 server1.computingpost.com
192.168.205.3 server2.computingpost.com
192.168.205.33 server3.computingpost.com
192.168.205.43 agent1.computingpost.com
192.168.205.44 agent2.computingpost.com
192.168.205.9 rke.computingpost.com

Configure the firewall on all the nodes as shown:

sudo systemctl stop firewalld
sudo systemctl disable firewalld
sudo systemctl start nftables
sudo systemctl enable nftables

Step 2 – Configure the Fixed Registration Address

To achieve high availability, you are required to set up an odd number of server nodes (these run etcd, the Kubernetes API, and other control plane services). The other server nodes and the agent nodes need a fixed registration address they can use to register against: an IP address or domain name that reaches the control plane nodes. An odd number of servers maintains etcd quorum, so the cluster can afford to lose connection with one of the nodes without impacting the functionality of the cluster.

This can be achieved using the following:

  • A layer 4 (TCP) load balancer
  • Round-robin DNS
  • Virtual or elastic IP addresses
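As a quick sanity check on the odd-server-count rule: an etcd cluster of N servers keeps quorum as long as a majority is reachable, so it tolerates floor((N-1)/2) failures. A small shell sketch of the arithmetic:

```shell
# Quorum math for an etcd cluster: N servers tolerate floor((N-1)/2) failures.
for n in 1 3 5 7; do
  echo "$n servers -> tolerates $(( (n - 1) / 2 )) failure(s)"
done
```

This is why 3 servers is the minimum for HA: a 2-server cluster tolerates zero failures, the same as a single server, while 3 servers survive the loss of one.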

In this guide, we will configure NGINX as a layer 4 (TCP) load balancer to forward the connection to one of the RKE nodes.

Install and configure Nginx on Node4 (the load balancer):

sudo yum -y install nginx

Create a config file:

sudo mv /etc/nginx/nginx.conf /etc/nginx/nginx.conf.bak
sudo vim /etc/nginx/nginx.conf

Create a new Nginx file with the below lines replacing where required:

user nginx;
worker_processes 4;
worker_rlimit_nofile 40000;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

# Load dynamic modules. See /usr/share/doc/nginx/README.dynamic.
include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 8192;
}

stream {
    upstream backend {
        least_conn;
        server 192.168.205.2:9345 max_fails=3 fail_timeout=5s;
        server 192.168.205.3:9345 max_fails=3 fail_timeout=5s;
        server 192.168.205.33:9345 max_fails=3 fail_timeout=5s;
    }

    # This server accepts all traffic to port 9345 and passes it to the upstream.
    # Notice that the upstream name and the proxy_pass need to match.
    server {
        listen 9345;
        proxy_pass backend;
    }

    upstream rancher_api {
        least_conn;
        server 192.168.205.2:6443 max_fails=3 fail_timeout=5s;
        server 192.168.205.3:6443 max_fails=3 fail_timeout=5s;
        server 192.168.205.33:6443 max_fails=3 fail_timeout=5s;
    }

    server {
        listen 6443;
        proxy_pass rancher_api;
    }

    upstream rancher_http {
        least_conn;
        server 192.168.205.2:80 max_fails=3 fail_timeout=5s;
        server 192.168.205.3:80 max_fails=3 fail_timeout=5s;
        server 192.168.205.33:80 max_fails=3 fail_timeout=5s;
    }

    server {
        listen 80;
        proxy_pass rancher_http;
    }

    upstream rancher_https {
        least_conn;
        server 192.168.205.2:443 max_fails=3 fail_timeout=5s;
        server 192.168.205.3:443 max_fails=3 fail_timeout=5s;
        server 192.168.205.33:443 max_fails=3 fail_timeout=5s;
    }

    server {
        listen 443;
        proxy_pass rancher_https;
    }
}

Save the file, set SELinux to permissive mode, and restart Nginx:

sudo setenforce 0
sudo systemctl restart nginx

Step 3 – Download installer script on Rocky Linux 8 Nodes

All the Rocky Linux 8 nodes intended for this use need the RKE2 installer script, which configures the repositories that provide the required packages. Install the curl tool on your system:

sudo yum -y install curl vim wget

Using curl, download the script used to install the RKE2 server on your Rocky Linux 8 servers:

curl -sfL https://get.rke2.io --output install.sh

Make the script executable:

chmod +x install.sh

To see script usage options run:

less ./install.sh 

Once downloaded, you can use the script to install and configure both the RKE2 server and agent on the desired nodes.

Step 4 – Set up the First Server Node (Master Node)

Install RKE2 server:

sudo INSTALL_RKE2_TYPE=server ./install.sh

Expected output:

[INFO]  finding release for channel stable
[INFO]  using 1.23 series from channel stable
Rocky Linux 8 - AppStream                                                                                                                                              19 kB/s | 4.8 kB     00:00
Rocky Linux 8 - AppStream                                                                                                                                              11 MB/s | 9.6 MB     00:00
Rocky Linux 8 - BaseOS                                                                                                                                                 18 kB/s | 4.3 kB     00:00
Rocky Linux 8 - BaseOS                                                                                                                                                 11 MB/s | 6.7 MB     00:00
Rocky Linux 8 - Extras                                                                                                                                                 13 kB/s | 3.5 kB     00:00
Rocky Linux 8 - Extras                                                                                                                                                 41 kB/s |  11 kB     00:00
Rancher RKE2 Common (stable)                                                                                                                                          1.7 kB/s | 1.7 kB     00:00
Rancher RKE2 1.23 (stable)                                                                                                                                            4.8 kB/s | 4.6 kB     00:00
Dependencies resolved.
======================================================================================================================================================================================================
.......

Transaction Summary
======================================================================================================================================================================================================
Install  5 Packages

Total download size: 34 M
Installed size: 166 M
Downloading Packages:
.....

Once installed, you need to create a config file manually. The config file contains the tls-san parameter, which avoids certificate errors when connecting through the fixed registration address.

The config file can be created with the command:

sudo vim /etc/rancher/rke2/config.yaml

Add the below lines to the file replacing where required.

write-kubeconfig-mode: "0644"
tls-san:
  - rke.computingpost.com
  - 192.168.205.9

Replace rke.computingpost.com with your fixed registration address and 192.168.205.9 with its IP address.

Save the file and start the service:

sudo systemctl start rke2-server
sudo systemctl enable rke2-server

Confirm status of the service after starting it:

$ systemctl status rke2-server
● rke2-server.service - Rancher Kubernetes Engine v2 (server)
   Loaded: loaded (/usr/lib/systemd/system/rke2-server.service; disabled; vendor preset: disabled)
   Active: active (running) since Sat 2022-08-27 10:17:17 UTC; 1min 32s ago
     Docs: https://github.com/rancher/rke2#readme
  Process: 3582 ExecStartPre=/sbin/modprobe overlay (code=exited, status=0/SUCCESS)
  Process: 3576 ExecStartPre=/sbin/modprobe br_netfilter (code=exited, status=0/SUCCESS)
  Process: 3573 ExecStartPre=/bin/sh -xc ! /usr/bin/systemctl is-enabled --quiet nm-cloud-setup.service (code=exited, status=0/SUCCESS)
 Main PID: 3587 (rke2)
    Tasks: 163
   Memory: 1.8G
   CGroup: /system.slice/rke2-server.service
           ├─3587 /usr/bin/rke2 server
....

Install kubectl

curl -LO "https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl"
chmod +x kubectl
sudo mv kubectl /usr/local/bin

Export the config:

$ vim ~/.bashrc
#Add line below
export PATH=$PATH:/var/lib/rancher/rke2/bin
export KUBECONFIG=/etc/rancher/rke2/rke2.yaml

#Source bashrc file
$ source ~/.bashrc

After some time, check if the node and pods are up:

kubectl get nodes
kubectl get pods -A

Sample Output:

[Image: output of kubectl get nodes and kubectl get pods -A]

Obtain the token:

$ sudo cat /var/lib/rancher/rke2/server/node-token
K1079187d01ac73b1a17261a475cb1b8486144543fc59a189e0c4533ef252a26450::server:33f5c1a2b7721992be25e340ded19cac
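The token appears to follow the pattern <CA-hash>::<role>:<secret>. If you ever need to split it for scripting, shell parameter expansion is enough; a sketch using the sample token above:

```shell
# Split an RKE2 node token of the form <ca-hash>::<role>:<secret>.
token='K1079187d01ac73b1a17261a475cb1b8486144543fc59a189e0c4533ef252a26450::server:33f5c1a2b7721992be25e340ded19cac'
ca_hash="${token%%::*}"   # part before "::", identifying the cluster CA
rest="${token#*::}"       # role:secret
role="${rest%%:*}"        # "server"
secret="${rest#*:}"       # the shared secret
echo "role=$role secret=$secret"
```

Either the full token or just the secret part can be used when joining nodes; the full form also lets joining nodes validate the server's CA.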

Accessing the Cluster from Outside with kubectl

Copy /etc/rancher/rke2/rke2.yaml to a machine located outside the cluster as ~/.kube/config. Then replace 127.0.0.1 with the IP or hostname of your RKE2 server. kubectl can now manage your RKE2 cluster.

scp /etc/rancher/rke2/rke2.yaml [email protected]:~/.kube/config
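The 127.0.0.1 replacement step can be done with sed. A self-contained sketch (it builds a throwaway kubeconfig fragment so you can see the substitution; the hostname is this guide's load balancer, substitute your own):

```shell
# Demonstrate rewriting the API endpoint in a copied kubeconfig.
kubeconfig=$(mktemp)
cat > "$kubeconfig" <<'EOF'
clusters:
- cluster:
    server: https://127.0.0.1:6443
EOF

# Point the kubeconfig at the fixed registration address instead of localhost.
sed -i 's/127\.0\.0\.1/rke.computingpost.com/' "$kubeconfig"
grep 'server:' "$kubeconfig"
```

On the real file you would run the same sed against ~/.kube/config. Using the load balancer address here keeps kubectl working even if one server node is down.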

Step 5 – Set up additional Server Nodes (Master Nodes)

Now install RKE2 on the other two server nodes:

curl -sfL https://get.rke2.io --output install.sh
chmod +x install.sh
sudo INSTALL_RKE2_TYPE=server ./install.sh

Once installed, create the config file:

sudo vim /etc/rancher/rke2/config.yaml

Add the below lines to the file

server: https://rke.computingpost.com:9345
token: [token from /var/lib/rancher/rke2/server/node-token on server node 1]
write-kubeconfig-mode: "0644"
tls-san:
  - rke.computingpost.com

If you don’t have a DNS server, map an A record for the load balancer in the /etc/hosts file:

$ sudo vi /etc/hosts
192.168.205.9 rke.computingpost.com

Save the file, then start and enable the rke2-server service on each node, one at a time:

sudo systemctl start rke2-server
sudo systemctl enable rke2-server

After some time, check the status of the nodes:

[Image: kubectl get nodes output showing three server nodes]

We have 3 master nodes configured.

Step 6 – Set up Agent Nodes (Worker Nodes)

To set up an agent node, install the RKE2 agent package using the commands below:

curl -sfL https://get.rke2.io --output install.sh
chmod +x install.sh
sudo INSTALL_RKE2_TYPE=agent ./install.sh

If you don’t have a DNS server, map an A record for the load balancer in the /etc/hosts file:

$ sudo vi /etc/hosts
192.168.205.9 rke.computingpost.com

Create and modify the configuration file to suit your environment:

$ sudo vim /etc/rancher/rke2/config.yaml
server: https://rke.computingpost.com:9345
token: [token from /var/lib/rancher/rke2/server/node-token on server node 1]

Start and enable the service:

sudo systemctl start rke2-agent
sudo systemctl enable rke2-agent

Check the nodes:

[Image: kubectl get nodes output showing the agent node]

From the output, we have one agent node added to the cluster.

Check the pods:

[Image: kubectl get pods -A output]

This output lists all the running pods; the RKE2 ingress controller and Metrics Server pods are deployed by default in the kube-system namespace.

Step 7 – Deploy an Application

Once the above configurations have been made, deploy an application on your cluster. For this guide, we will deploy a demo Nginx application.

kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
EOF

Check if the pod is up:

$ kubectl get pods
NAME                               READY   STATUS    RESTARTS   AGE
nginx-deployment-cc7df4f8f-frv65   1/1     Running   0          13s
nginx-deployment-cc7df4f8f-l9xdb   1/1     Running   0          13s

Now expose the service:

$ kubectl expose deployment nginx-deployment --type=NodePort --port=80
service/nginx-deployment exposed

Obtain the port to which the service has been exposed:

$ kubectl get svc
NAME               TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
kubernetes         ClusterIP   10.43.0.1       <none>        443/TCP          85m
nginx-deployment   NodePort    10.43.135.164   <none>        80:31042/TCP     2s

In my case, the service has been exposed on port 31042. Access the application using any server or agent node IP address with the syntax http://IP_Address:31042
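If you want to script this lookup rather than read the table by eye, the NodePort can be pulled out of the PORT(S) column. A sketch against the sample line above (with a live cluster you would pipe kubectl get svc output instead; the node IP below is server1 from this guide):

```shell
# Extract the NodePort (second number in "80:31042/TCP") from a kubectl get svc line.
svc_line='nginx-deployment   NodePort    10.43.135.164   <none>   80:31042/TCP   2s'
node_port=$(echo "$svc_line" | awk '{split($5, a, "[:/]"); print a[2]}')
echo "http://192.168.205.2:$node_port"
```

With kubectl available, `kubectl get svc nginx-deployment -o jsonpath='{.spec.ports[0].nodePort}'` gets the same value without text parsing.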

[Image: Nginx welcome page served from the NodePort]

That is it for now.

You have successfully deployed HA Kubernetes Cluster on Rocky Linux 8 using RKE2.


A systems engineer with excellent skills in systems administration, cloud computing, systems deployment, virtualization, containers, and a certified ethical hacker.