In this guide, as the title suggests, we shall focus on setting up a highly available Kubernetes cluster with HAProxy and Keepalived to ensure that all services continue as usual in case any of the master nodes runs into technical difficulties. We shall be leveraging the power of Kubespray to make our work as simple as possible.
As for the architecture, the figure below the installation pre-requisites section makes it all clear. We shall install HAProxy and Keepalived on the three master nodes to co-exist with etcd and the api-server. Moreover, in this setup, we are going to use containerd as the container runtime in place of Docker.
With this, you can continue building your images with Docker while Kubernetes pulls and runs them using containerd.
In order for this deployment to start and succeed, we are going to need an extra server or computer to be used as the installation server. This machine will hold the Kubespray files, connect to the servers where Kubernetes will be installed, and proceed to set it up on them. The deployment architecture is illustrated by the diagram below with three masters, three etcd instances and two worker nodes.
- prod-master1 10.38.87.251
- prod-master2 10.38.87.252
- prod-master3 10.38.87.253
- prod-worker1 10.38.87.254
- prod-worker2 10.38.87.249
- Virtual IP for Keepalived: 10.38.87.250
Make sure you generate SSH keys and copy your public key to all of the CentOS 7 servers where Kubernetes will be built.
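If the keys are not on the nodes yet, a small loop can push them out. This is only a sketch: the `centos` remote user is an assumption, and the IP list is taken from the addresses above. The commands are printed with `echo` so you can review them first; drop the `echo` to run them for real.

```shell
# Print (not run) one ssh-copy-id command per node.
# Assumptions: remote user "centos", node IPs as listed in this guide.
NODE_IPS="10.38.87.251 10.38.87.252 10.38.87.253 10.38.87.254 10.38.87.249"
for ip in $NODE_IPS; do
  echo "ssh-copy-id centos@${ip}"
done
```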
Step 1: Prepare your servers
Preparing your servers is a crucial step that ensures every aspect of the deployment runs smoothly till the very end. In this step, we shall do simple updates, install HAProxy and Keepalived on the master nodes and make sure that important packages are in place. Issue the commands below on each of your servers to kick everything off.
sudo yum -y update
On the master nodes, install HAProxy and Keepalived as follows:
sudo yum install epel-release
sudo yum install haproxy keepalived -y
Configure SELinux as Permissive on all master and worker nodes as follows
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
Step 2: Configure Keepalived
From its GitHub page, Keepalived implements a set of checkers to dynamically and adaptively maintain and manage a load balanced server pool according to their health. High availability, on the other hand, is achieved by the Virtual Router Redundancy Protocol (VRRP).
On the first master, configure Keepalived as follows:
$ sudo vim /etc/keepalived/keepalived.conf

vrrp_script chk_haproxy {
  script "killall -0 haproxy"
  interval 2
  weight 2
}

vrrp_instance VI_1 {
  interface eth0
  state MASTER
  advert_int 3
  virtual_router_id 51
  priority 101
  unicast_src_ip 10.38.87.251   ##Master 1 IP Address
  unicast_peer {
    10.38.87.252                ##Master 2 IP Address
    10.38.87.253                ##Master 3 IP Address
  }
  virtual_ipaddress {
    10.38.87.250                ##Shared Virtual IP address
  }
  track_script {
    chk_haproxy
  }
}
On the second master, configure Keepalived as follows:
$ sudo vim /etc/keepalived/keepalived.conf

vrrp_script chk_haproxy {
  script "killall -0 haproxy"
  interval 2
  weight 2
}

vrrp_instance VI_1 {
  interface eth0
  state BACKUP
  advert_int 3
  virtual_router_id 51
  priority 100
  unicast_src_ip 10.38.87.252   ##Master 2 IP Address
  unicast_peer {
    10.38.87.253                ##Master 3 IP Address
    10.38.87.251                ##Master 1 IP Address
  }
  virtual_ipaddress {
    10.38.87.250                ##Shared Virtual IP address
  }
  track_script {
    chk_haproxy
  }
}
On the third master, configure Keepalived as follows:
$ sudo vim /etc/keepalived/keepalived.conf

vrrp_script chk_haproxy {
  script "killall -0 haproxy"
  interval 2
  weight 2
}

vrrp_instance VI_1 {
  interface eth0
  state BACKUP
  advert_int 3
  virtual_router_id 51
  priority 99
  unicast_src_ip 10.38.87.253   ##Master 3 IP Address
  unicast_peer {
    10.38.87.251                ##Master 1 IP Address
    10.38.87.252                ##Master 2 IP Address
  }
  virtual_ipaddress {
    10.38.87.250                ##Shared Virtual IP address
  }
  track_script {
    chk_haproxy
  }
}
- vrrp_instance defines an individual instance of the VRRP protocol running on an interface. This one has been arbitrarily named VI_1.
- state defines the initial state that the instance should start in.
- interface defines the interface that VRRP runs on.
- virtual_router_id is the unique identifier of the VRRP instance; it must be the same on all nodes participating in that instance.
- priority is the advertised priority of the node within the instance; the node with the highest priority takes the MASTER state. Priorities can be adjusted at runtime.
- advert_int specifies the interval at which advertisements are sent (3 seconds in this case).
- authentication specifies the information necessary for servers participating in VRRP to authenticate with each other. It has not been configured in this setup.
- virtual_ipaddress defines the IP addresses (there can be multiple) that VRRP is responsible for.
Start and Enable keepalived
After the configuration is done on each of the master nodes, start and enable Keepalived as follows:
sudo systemctl start keepalived
sudo systemctl enable keepalived
Once Keepalived is running, you should see the virtual IP added to the interface of the node currently holding the MASTER state:
$ ip addr show eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 52:54:00:f2:92:fd brd ff:ff:ff:ff:ff:ff
    inet 10.38.87.252/24 brd 10.38.87.255 scope global noprefixroute eth0
       valid_lft forever preferred_lft forever
    inet 10.38.87.250/32 scope global eth0
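If you want to script the same check, namely which node currently holds the VIP, you can parse the `ip addr` output. This is a minimal sketch: the `has_vip` helper name is ours, interface `eth0` and the VIP match this guide, and the sample text stands in for live output.

```shell
# has_vip: succeed if the supplied `ip addr` output contains the shared VIP.
VIP="10.38.87.250"
has_vip() {
  printf '%s\n' "$1" | grep -q "inet ${VIP}/"
}

# On a real node you would feed it live output: has_vip "$(ip -4 addr show eth0)"
sample="    inet 10.38.87.252/24 brd 10.38.87.255 scope global noprefixroute eth0
    inet 10.38.87.250/32 scope global eth0"
if has_vip "$sample"; then
  echo "this node holds the VIP"
fi
```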
Step 3: Configure HAproxy
HAProxy is a free, very fast and reliable solution offering high availability, load balancing, and proxying for TCP and HTTP-based applications. It is particularly suited for very high traffic web sites and powers quite a number of the world’s most visited ones. Over the years it has become the de-facto standard open-source load balancer and is now shipped with most mainstream Linux distributions.
We shall configure HAProxy in the three master nodes as follows:
$ sudo vim /etc/haproxy/haproxy.cfg

global
    log         127.0.0.1 local2
    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon

    # turn on stats unix socket
    stats socket /var/lib/haproxy/stats

#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option http-server-close
    option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000

#---------------------------------------------------------------------
# apiserver frontend which proxys to the masters
#---------------------------------------------------------------------
frontend apiserver
    bind *:8443
    mode tcp
    option tcplog
    default_backend apiserver

#---------------------------------------------------------------------
# round robin balancing for apiserver
#---------------------------------------------------------------------
backend apiserver
    option httpchk GET /healthz
    http-check expect status 200
    mode tcp
    option ssl-hello-chk
    balance roundrobin
    server prod-master1 10.38.87.251:6443 check
    server prod-master2 10.38.87.252:6443 check
    server prod-master3 10.38.87.253:6443 check
After saving the configuration, simply allow the configured port on your firewall then start and enable the haproxy service.
sudo firewall-cmd --permanent --add-port=8443/tcp && sudo firewall-cmd --reload
sudo systemctl restart haproxy
sudo systemctl enable haproxy
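The `option httpchk GET /healthz` line in the configuration means HAProxy only keeps an api-server in rotation while it answers HTTP 200. The sketch below mimics that decision in shell so you can reason about it by hand; `check_ok` is a hypothetical helper of ours, not an HAProxy command, and the status lines are canned examples.

```shell
# check_ok: succeed only when an HTTP status line reports 200,
# mirroring HAProxy's "http-check expect status 200".
check_ok() {
  case "$1" in
    *" 200 "*) return 0 ;;
    *) return 1 ;;
  esac
}

check_ok "HTTP/1.1 200 OK" && echo "backend healthy"
check_ok "HTTP/1.1 503 Service Unavailable" || echo "backend down"
```

On a live master you could feed it the status line returned by querying the api-server's /healthz endpoint instead of the canned strings.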
Step 4: Clone Kubespray Git repository and add configurations
In this step, we are going to fetch the Kubespray files to our local machine (the installer machine) then make the necessary configurations by choosing containerd as the container runtime as well as populating the requisite files with the details of our servers (etcd, masters, workers).
cd ~
git clone https://github.com/kubernetes-sigs/kubespray.git

Cloning into 'kubespray'...
Change to the project directory:
$ cd kubespray
This directory contains the inventory files and playbooks used to deploy Kubernetes.
Step 5: Prepare Local machine
On the local machine from which you will run the deployment, you need to install the pip Python package manager.
curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py
python3 get-pip.py --user
Step 6: Create Kubernetes Cluster inventory file and Install dependencies
The inventory is composed of 3 groups:
- kube-node : list of kubernetes nodes where the pods will run.
- kube-master : list of servers where kubernetes master components (apiserver, scheduler, controller) will run.
- etcd: list of servers to compose the etcd cluster. You should have at least 3 servers for failover purposes.
There are also two special groups:
- calico-rr : route reflector group for advanced Calico networking cases
- bastion : configure a bastion host if your nodes are not directly reachable
Create an inventory file:
cp -rfp inventory/sample inventory/mycluster
Define your inventory with your servers’ IP addresses and map them to the correct node purpose.
$ vim inventory/mycluster/inventory.ini

master0 ansible_host=10.38.87.251 ip=10.38.87.251
master1 ansible_host=10.38.87.252 ip=10.38.87.252
master2 ansible_host=10.38.87.253 ip=10.38.87.253
worker1 ansible_host=10.38.87.254 ip=10.38.87.254
worker2 ansible_host=10.38.87.249 ip=10.38.87.249

# ## configure a bastion host if your nodes are not directly reachable
# bastion ansible_host=x.x.x.x ansible_user=some_user

[kube-master]
master0
master1
master2

[etcd]
master0
master1
master2

[kube-node]
worker1
worker2

[calico-rr]

[k8s-cluster:children]
kube-master
kube-node
calico-rr
Add host entries to /etc/hosts on your workstation (IP address first, then the hostname).

$ sudo vim /etc/hosts

10.38.87.251 master0
10.38.87.252 master1
10.38.87.253 master2
10.38.87.254 worker1
10.38.87.249 worker2
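As a sanity check on the ordering (the hosts file format puts the IP first, then the name), the entries can be generated from the same name/IP pairs used in the inventory. `hosts_lines` is just an illustrative helper of ours:

```shell
# Emit /etc/hosts lines (IP first, then hostname) from "name IP" pairs.
hosts_lines() {
  while read -r name ip; do
    printf '%s %s\n' "$ip" "$name"
  done
}

hosts_lines <<'EOF'
master0 10.38.87.251
master1 10.38.87.252
master2 10.38.87.253
worker1 10.38.87.254
worker2 10.38.87.249
EOF
```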
If your private SSH key has a passphrase, add it to the SSH agent before starting the deployment.
$ eval `ssh-agent -s` && ssh-add
Agent pid 4516
Enter passphrase for /home/centos/.ssh/id_rsa:
Identity added: /home/tech/.ssh/id_rsa (/home/centos/.ssh/id_rsa)
Install the dependencies from requirements.txt:
# Python 2.x
sudo pip install --user -r requirements.txt

# Python 3.x
sudo pip3 install -r requirements.txt
Confirm ansible installation.
$ ansible --version
ansible 2.9.6
  config file = /etc/ansible/ansible.cfg
  configured module search path = ['/home/tech/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python3/dist-packages/ansible
  executable location = /usr/bin/ansible
  python version = 3.8.5 (default, Jan 28 2021, 12:59:40) [GCC 9.3.0]
Review and change parameters under inventory/mycluster/group_vars
We shall review and change parameters under inventory/mycluster/group_vars to ensure that Kubespray uses containerd.
$ vim inventory/mycluster/group_vars/k8s-cluster/k8s-cluster.yml

## Change from docker to containerd at around line 176 and add the two lines below
container_manager: containerd
etcd_deployment_type: host
kubelet_deployment_type: host
Then in the “inventory/mycluster/group_vars/all/all.yml” file, make the following changes:
$ vim inventory/mycluster/group_vars/all/all.yml

## Add Load Balancer Details at around line 20
apiserver_loadbalancer_domain_name: "haproxy.computingforgeeks.com"
loadbalancer_apiserver:
  address: 10.38.87.250
  port: 8443

## Deactivate Internal loadbalancers for apiservers at around line 26
loadbalancer_apiserver_localhost: false
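Before running the playbook it is worth grepping the edited files to confirm the values took. The sketch below simulates the edited k8s-cluster.yml under /tmp purely so the check is self-contained; on your installer you would point grep at the real files under inventory/mycluster/group_vars.

```shell
# Simulated copy of the three edited lines (assumption: stands in for
# inventory/mycluster/group_vars/k8s-cluster/k8s-cluster.yml).
cat > /tmp/k8s-cluster.yml <<'EOF'
container_manager: containerd
etcd_deployment_type: host
kubelet_deployment_type: host
EOF

# containerd should be selected, and both deployment types should be "host".
grep -q '^container_manager: containerd$' /tmp/k8s-cluster.yml && echo "containerd selected"
grep -c ': host$' /tmp/k8s-cluster.yml
```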
Make sure the Load Balancer Domain name can be resolved by your nodes.
Step 7: Allow requisite Kubernetes ports on the firewall
Kubernetes uses many ports for its different services, so we need to allow them through the firewall as follows.
On the three master nodes, allow the ports as follows
sudo firewall-cmd --permanent --add-port=6443/tcp --add-port=2379-2380/tcp --add-port=10250-10252/tcp --add-port=179/tcp --add-port=4789/udp
sudo firewall-cmd --reload
On the worker nodes, allow the requisite ports as follows:
sudo firewall-cmd --permanent --add-port=10250/tcp --add-port=30000-32767/tcp --add-port=179/tcp --add-port=4789/udp
sudo firewall-cmd --reload
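The same rules can be expressed as a loop, which is handy if you maintain the port list in one place. The commands are printed with `echo` for review; drop the `echo` to apply them on a node running firewalld (the list here is the master-node set from above):

```shell
# Master-node ports; one --add-port option per entry (firewalld does not
# accept a comma-separated list inside a single --add-port).
MASTER_PORTS="6443/tcp 2379-2380/tcp 10250-10252/tcp 179/tcp 4789/udp"
for p in $MASTER_PORTS; do
  echo sudo firewall-cmd --permanent --add-port=$p
done
echo sudo firewall-cmd --reload
```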
Then allow IP forwarding on all nodes as follows:
sudo modprobe br_netfilter
sudo sh -c "echo '1' > /proc/sys/net/bridge/bridge-nf-call-iptables"
sudo sh -c "echo '1' > /proc/sys/net/ipv4/ip_forward"
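Note that the echo commands above do not survive a reboot. A common way to persist them (standard sysctl practice, not something Kubespray requires; the file name 99-kubernetes.conf is our choice) is a drop-in under /etc/sysctl.d, loaded with `sudo sysctl --system`:

```
# /etc/sysctl.d/99-kubernetes.conf
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
```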
Step 8: Deploy Kubernetes Cluster with Kubespray Ansible Playbook
Now execute the playbook to deploy Production ready Kubernetes with Ansible. Please note that the target servers must have access to the Internet in order to pull images.
Start the deployment by running the command:
ansible-playbook -i inventory/mycluster/inventory.ini --become \
  --user=tech --become-user=root cluster.yml
Replace “tech” with the remote user Ansible will connect to the nodes as. You should not get any failed tasks during execution. The very last messages will look like the screenshot shared below.
Once the playbook executes to the tail end, login to the master node and check cluster status.
$ sudo kubectl cluster-info
Kubernetes master is running at https://haproxy.computingforgeeks.com:8443

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
You can also check the nodes:
$ sudo kubectl get nodes
NAME      STATUS   ROLES    AGE   VERSION
master0   Ready    master   33h   v1.19.5
master1   Ready    master   29h   v1.19.5
master2   Ready    master   29h   v1.19.5
worker1   Ready    <none>   29h   v1.19.5
worker2   Ready    <none>   29h   v1.19.5
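If you script your post-install checks, the same node listing can be verified programmatically. This is a sketch with an illustrative `not_ready` helper of ours, run here against canned sample text rather than a live cluster:

```shell
# not_ready: print the name of every node whose STATUS column is not "Ready".
# Expects output shaped like `kubectl get nodes --no-headers`.
not_ready() {
  printf '%s\n' "$1" | awk '$2 != "Ready" {print $1}'
}

sample="master0 Ready master 33h v1.19.5
worker1 NotReady <none> 29h v1.19.5"
not_ready "$sample"    # prints: worker1
```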
Step 9: Install Kubernetes Dashboard (Optional)
This is an optional step in case you do not have another way to access your Kubernetes cluster through a friendly interface such as Lens. To get the dashboard installed, follow the detailed guide below.
How To Install Kubernetes Dashboard with NodePort
And once it is working, you will need to create an admin user to access your cluster. Use the guide below to fix that:
Create Admin User to Access Kubernetes Dashboard
Kubespray makes the deployment of Kubernetes a cinch. Thanks to the team that developed the playbooks involved in achieving this complex deployment, we now have a ready platform just waiting for the applications that will serve the world. In case you have a bigger cluster you intend to set up, simply place the various components (etcd, masters, workers etc.) in the inventory and Kubespray will handle the rest. May your year flourish, your endeavours bear good fruits and your investments pay off. Let us face it with fortitude, with laughter, hard work and grace.