Deploy Kubernetes Cluster on AlmaLinux 8 with Kubeadm


Kubernetes, also known as Kube or K8s, is one of the most widely used container orchestration tools, originally developed by Google. It automates the deployment, administration, and scaling of containerized applications. In other words, Kubernetes handles tasks such as:

  • Automatically load balancing requests among multiple instances of an application.
  • Controlling and managing the resources an application uses.
  • Monitoring resource usage against resource limits to stop applications from consuming excessive resources, and resuming them afterwards.
  • Automatically making extra resources available when a new host is added to the cluster.
  • Moving an application instance from one host to another when a host dies or is exhausted.

Kubeadm is a tool used to create and manage Kubernetes clusters, offering commands such as kubeadm init and kubeadm join. Aside from this, it also supports other cluster life-cycle functions such as upgrades and bootstrap tokens.

This guide demonstrates how to deploy a Kubernetes cluster on AlmaLinux 8 servers with Kubeadm. You will require the following:

  • A minimum of 2 GB of memory per host
  • 2 CPUs per host
  • AlmaLinux 8 hosts to be used as below:

TASK          IP_ADDRESS      HOSTNAME
Master        192.168.205.3   master.computingpost.com
Worker node1  192.168.205.13  worker1.computingpost.com
Worker node2  192.168.205.23  worker2.computingpost.com
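These requirements can be sanity-checked up front. Below is a small sketch using /proc/meminfo and nproc; the 2-CPU check mirrors kubeadm's own preflight check, and the memory threshold sits slightly below 2 GB to allow for kernel-reserved memory.

```shell
# Report this host's memory and CPU count, then compare them against
# the minimums above. 1800000 kB (~1.8 GB) leaves headroom for memory
# the kernel reserves before /proc/meminfo is populated.
mem_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
cpus=$(nproc)
echo "memory_kb=${mem_kb} cpus=${cpus}"
if [ "${mem_kb}" -ge 1800000 ] && [ "${cpus}" -ge 2 ]; then
  echo "host meets the kubeadm minimums"
else
  echo "host is below the kubeadm minimums"
fi
```

Run it on each of the three hosts before continuing.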

Update your system.

sudo yum update

With the requirements above met, proceed as below.

Step 1 – Configure Hostnames

On all three systems, set the appropriate hostnames as below.

##On Master Node##
sudo hostnamectl set-hostname master.computingpost.com

##On Worker Node1##
sudo hostnamectl set-hostname worker1.computingpost.com

##On Worker Node2##
sudo hostnamectl set-hostname worker2.computingpost.com

Also, remember to edit the /etc/hosts file on the servers as below.

sudo vi /etc/hosts

In the file, add the hostnames and IP addresses of all the cluster nodes.

192.168.205.3  master.computingpost.com
192.168.205.13 worker1.computingpost.com
192.168.205.23 worker2.computingpost.com

Step 2 – Configure SELinux and Firewall on AlmaLinux 8

For the systems to be able to access the required file systems, networks, and other pod services, we need to disable SELinux or set it to permissive mode on AlmaLinux 8. This can be achieved using the command:

sudo setenforce 0
sudo sed -i --follow-symlinks 's/SELINUX=enforcing/SELINUX=permissive/g' /etc/sysconfig/selinux
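If you want to preview what the substitution does before touching the real config file, the same sed expression can be dry-run over sample text:

```shell
# Harmless dry run of the SELinux sed rule: only the SELINUX= line is
# rewritten; other lines (e.g. SELINUXTYPE=) pass through untouched.
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' \
  | sed 's/SELINUX=enforcing/SELINUX=permissive/g'
# Prints:
# SELINUX=permissive
# SELINUXTYPE=targeted
```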

Check the SELinux status:

$ sestatus
SELinux status:                 enabled
SELinuxfs mount:                /sys/fs/selinux
SELinux root directory:         /etc/selinux
Loaded policy name:             targeted
Current mode:                   permissive
Mode from config file:          permissive
Policy MLS status:              enabled
Policy deny_unknown status:     allowed
Memory protection checking:     actual (secure)
Max kernel policy version:      33

You also need to allow the required ports through the firewall.

  • On the master node
sudo firewall-cmd --permanent --add-port=6443/tcp
sudo firewall-cmd --permanent --add-port=2379-2380/tcp
sudo firewall-cmd --permanent --add-port=10250/tcp
sudo firewall-cmd --permanent --add-port=10251/tcp
sudo firewall-cmd --permanent --add-port=10259/tcp
sudo firewall-cmd --permanent --add-port=10257/tcp
sudo firewall-cmd --permanent --add-port=179/tcp
sudo firewall-cmd --permanent --add-port=4789/udp
sudo firewall-cmd --reload
  • On the worker nodes
sudo firewall-cmd --permanent --add-port=179/tcp
sudo firewall-cmd --permanent --add-port=10250/tcp
sudo firewall-cmd --permanent --add-port=30000-32767/tcp
sudo firewall-cmd --permanent --add-port=4789/udp
sudo firewall-cmd --reload

Step 3 – Disable Swap

On both the master and worker nodes, disable swap; otherwise, the kubelet service will not start.

sudo swapoff -a
sudo sed -i '/ swap / s/^/#/' /etc/fstab
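The sed expression comments out only lines containing " swap ", leaving other mounts untouched. A dry run over sample fstab content (the device names here are made up for illustration) shows the effect:

```shell
# Dry run of the fstab edit: the swap entry gains a leading '#',
# while the root mount line is left unchanged.
printf '%s\n' '/dev/mapper/al-root /    xfs  defaults 0 0' \
              '/dev/mapper/al-swap swap swap defaults 0 0' \
  | sed '/ swap / s/^/#/'
# Prints:
# /dev/mapper/al-root /    xfs  defaults 0 0
# #/dev/mapper/al-swap swap swap defaults 0 0
```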

Step 4 – Install Docker on AlmaLinux 8

Docker will provide the runtime for the containers managed by Kubernetes. To install Docker on AlmaLinux 8, first add the repository.

sudo yum install -y yum-utils
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

Then install Docker CE.

sudo yum install -y docker-ce docker-ce-cli containerd.io --allowerasing

Start and enable docker.

sudo systemctl start docker
sudo systemctl enable docker

Edit the file below to change the Docker cgroup driver to systemd, which is the driver the kubelet expects.

sudo vi /etc/docker/daemon.json

Add the lines below to the file.


{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
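A malformed daemon.json will prevent Docker from starting after the restart, so it is worth validating the syntax first. One option is python3's bundled json.tool module, which pretty-prints valid JSON and exits non-zero on a syntax error; it is shown here against an inline copy of the settings, while on the server you would run python3 -m json.tool /etc/docker/daemon.json.

```shell
# Validate the Docker daemon settings; any missing brace or comma makes
# json.tool exit with an error instead of printing the document.
python3 -m json.tool <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF
```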

Reload the system daemons and restart docker.

sudo systemctl daemon-reload
sudo systemctl restart docker

Also install cri-dockerd, a shim that exposes a CRI-compatible interface for Docker Engine so that the kubelet can manage Docker through the Container Runtime Interface.
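cri-dockerd is distributed as release tarballs with accompanying systemd units. The following is a sketch of a typical manual install; the version number is an example and the URLs follow the upstream Mirantis/cri-dockerd release naming, so verify the current version on the project's releases page before running it.

```shell
# Example version; pick the latest from https://github.com/Mirantis/cri-dockerd/releases
VER=0.2.6
curl -LO "https://github.com/Mirantis/cri-dockerd/releases/download/v${VER}/cri-dockerd-${VER}.amd64.tgz"
tar xvf "cri-dockerd-${VER}.amd64.tgz"
sudo mv cri-dockerd/cri-dockerd /usr/local/bin/

# Install the systemd units and point them at the binary location used above
curl -LO https://raw.githubusercontent.com/Mirantis/cri-dockerd/master/packaging/systemd/cri-docker.service
curl -LO https://raw.githubusercontent.com/Mirantis/cri-dockerd/master/packaging/systemd/cri-docker.socket
sudo mv cri-docker.service cri-docker.socket /etc/systemd/system/
sudo sed -i 's,/usr/bin/cri-dockerd,/usr/local/bin/cri-dockerd,' /etc/systemd/system/cri-docker.service
sudo systemctl daemon-reload
sudo systemctl enable --now cri-docker.socket
```

Once the socket unit is active, /run/cri-dockerd.sock should exist; that is the CRI socket path passed to kubeadm init later in this guide.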

Step 5 – Install Kubernetes and Kubeadm on AlmaLinux 8

Begin by adding the Kubernetes repository on the Master and Worker nodes.

cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
EOF

With the repository added, install kubeadm, kubelet, kubectl on the Master and Worker nodes.

sudo yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes

Enable and start the kubelet on all the servers. It will keep restarting until the node is initialized or joined, which is expected at this stage.

sudo systemctl enable kubelet
sudo systemctl start kubelet

Step 6 – Initialize Kubernetes on AlmaLinux 8

Start the Kubernetes cluster on the master node.

sudo kubeadm init

# With a specific CRI socket path (e.g. when using cri-dockerd)
sudo kubeadm init --cri-socket /run/cri-dockerd.sock

Execution output:

[init] Using Kubernetes version: v1.23.3
[preflight] Running pre-flight checks
	[WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master.computingpost.com] and IPs [10.96.0.1 192.168.205.3]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
.......
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.205.3:6443 --token 8ltbm7.0mp5iinzu33hx692 \
	--discovery-token-ca-cert-hash sha256:9afc96eb0ae2ad75c9f9739b342742820be1130b43ac7b75d75b8f94982b824a 

Set up the kubeconfig for your user as instructed in the output above.

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Check the status of the nodes.

$ kubectl get nodes
NAME                       STATUS     ROLES                  AGE     VERSION
master.computingpost.com   NotReady   control-plane,master   3m45s   v1.23.3

As seen, the master node is NotReady because the pod network is not yet configured.

Step 7 – Configure the Pod Network

In Kubernetes, the pod network connects the pods running across the cluster nodes. It can be implemented in several ways; in this guide, we will use the Calico network plugin.

On the master node, download the Calico manifest:

curl https://docs.projectcalico.org/manifests/calico.yaml -O

Once downloaded, apply the YAML file.

kubectl apply -f calico.yaml

Sample Output:

configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
....
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
Warning: policy/v1beta1 PodDisruptionBudget is deprecated in v1.21+, unavailable in v1.25+; use policy/v1 PodDisruptionBudget
poddisruptionbudget.policy/calico-kube-controllers created

Now check the status of the pods.

kubectl get pods -n kube-system

The output of the command:

NAME                                               READY   STATUS    RESTARTS   AGE
calico-kube-controllers-566dc76669-f87pj           1/1     Running   0          36s
calico-node-gg87m                                  1/1     Running   0          36s
coredns-64897985d-shv9j                            1/1     Running   0          7m20s
coredns-64897985d-w645x                            1/1     Running   0          7m20s
etcd-master.computingpost.com                      1/1     Running   0          7m26s
kube-apiserver-master.computingpost.com            1/1     Running   0          7m23s
kube-controller-manager-master.computingpost.com   1/1     Running   0          7m23s
kube-proxy-w9s4d                                   1/1     Running   0          7m20s
kube-scheduler-master.computingpost.com            1/1     Running   0          7m23s

Also, verify the status of the master node.

$ kubectl get nodes
NAME                       STATUS   ROLES                  AGE     VERSION
master.computingpost.com   Ready    control-plane,master   9m11s   v1.23.3

Step 8 – Join the Worker Nodes

With the master node ready, join the two worker nodes using the kubeadm join command generated in Step 6.

sudo kubeadm join 192.168.205.3:6443 --token 8ltbm7.0mp5iinzu33hx692 \
	--discovery-token-ca-cert-hash sha256:9afc96eb0ae2ad75c9f9739b342742820be1130b43ac7b75d75b8f94982b824a

Output:

[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

Once joined successfully, the nodes should be available as below.

$ kubectl get nodes
NAME                        STATUS   ROLES                  AGE   VERSION
master.computingpost.com    Ready    control-plane,master   25m   v1.23.3
worker1.computingpost.com   Ready    <none>                 91s   v1.23.3
worker2.computingpost.com   Ready    <none>                 91s   v1.23.3

Now all pods can be viewed.

$ kubectl get pods --all-namespaces
NAMESPACE     NAME                                               READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-566dc76669-f87pj           1/1     Running   0          18m
kube-system   calico-node-gg87m                                  1/1     Running   0          18m
kube-system   calico-node-r86ms                                  1/1     Running   0          2m1s
kube-system   calico-node-sf2t6                                  1/1     Running   0          2m1s
kube-system   coredns-64897985d-shv9j                            1/1     Running   0          25m
kube-system   coredns-64897985d-w645x                            1/1     Running   0          25m
kube-system   etcd-master.computingpost.com                      1/1     Running   0          25m
kube-system   kube-apiserver-master.computingpost.com            1/1     Running   0          25m
kube-system   kube-controller-manager-master.computingpost.com   1/1     Running   0          25m
kube-system   kube-proxy-ntjcp                                   1/1     Running   0          2m1s
kube-system   kube-proxy-rq5qf                                   1/1     Running   0          2m1s
kube-system   kube-proxy-w9s4d                                   1/1     Running   0          25m
kube-system   kube-scheduler-master.computingpost.com            1/1     Running   0          25m

Generate a new token if the previous one has expired:

kubeadm token create

You can also run kubeadm token create --print-join-command to print a complete kubeadm join command for new worker nodes.

List generated tokens:

$ kubeadm token list
TOKEN                     TTL         EXPIRES                USAGES                   DESCRIPTION                                                EXTRA GROUPS
8ltbm7.0mp5iinzu33hx692   23h         2022-02-05T14:11:46Z   authentication,signing   The default bootstrap token generated by 'kubeadm init'.   system:bootstrappers:kubeadm:default-node-token

Also check out our guide on how to install and configure MetalLB on Kubernetes.

Conclusion

That is it! By following this guide to the end, you should be able to deploy a Kubernetes cluster on AlmaLinux 8 servers using Kubeadm.


A systems engineer with excellent skills in systems administration, cloud computing, systems deployment, virtualization, containers, and a certified ethical hacker.