Kubernetes is arguably the most advanced and widely adopted container orchestration platform, powering millions of applications in production environments. One big challenge for most new Linux and Kubernetes users is setting up the cluster itself. Though we have a number of guides on the installation and configuration of Kubernetes clusters, this is our first guide on setting up a Kubernetes cluster in the AWS cloud with Amazon EKS.
For users new to Amazon EKS: it is a managed service that makes it easy for you to run Kubernetes on AWS without needing to install, operate, and maintain your own Kubernetes control plane or nodes. It runs Kubernetes control plane instances across multiple Availability Zones to ensure high availability. Since Amazon EKS is fully compatible with the community version of Kubernetes, you can easily migrate any standard Kubernetes application to Amazon EKS without any code modification.
Amazon EKS eliminates headaches around high availability by automatically detecting and replacing unhealthy control plane instances. It also makes it easy to perform upgrades in an automated fashion. Amazon EKS is integrated with many AWS services to provide scalability and security for your applications, including the following:
- Amazon ECR for container images
- Elastic Load Balancing for load distribution
- IAM for authentication
- Amazon VPC for isolation
How To Deploy EKS Kubernetes Cluster on AWS
The next sections dive deeper into the installation of a Kubernetes cluster on AWS with the Amazon EKS managed service.
Step 1: Install and Configure AWS CLI Tool
We need to set up the AWS CLI tooling since our installation will be command-line based. This is done on your local workstation. The installation steps below cover both Linux and macOS.
--- Install AWS CLI on macOS ---
curl "https://awscli.amazonaws.com/AWSCLIV2.pkg" -o "AWSCLIV2.pkg"
sudo installer -pkg AWSCLIV2.pkg -target /
--- Install AWS CLI on Linux ---
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install
You can then confirm the version of AWS CLI that you have installed with the command below.
$ aws --version
aws-cli/2.1.38 Python/3.8.8 Darwin/20.3.0 exe/x86_64 prompt/off
Configure AWS CLI credentials
After installation we need to configure our AWS CLI credentials. We’ll use the aws configure command to set up the AWS CLI for general use.
$ aws configure
AWS Access Key ID [None]:
AWS Secret Access Key [None]:
Default region name [None]:
Default output format [None]: json
Your AWS CLI details will be saved in the ~/.aws directory:
$ ls ~/.aws
config
credentials
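For reference, the two files typically look like the snippets below. The access key values are the AWS documentation placeholders, not real credentials; yours will contain the keys generated for your own IAM user.

$ cat ~/.aws/credentials
[default]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY

$ cat ~/.aws/config
[default]
region = eu-west-1
output = json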
Step 2: Install eksctl on Linux | macOS
eksctl is a simple CLI tool for creating EKS clusters on AWS. The tool is written in Go and uses CloudFormation. With it, you can have a running cluster in minutes.
It has the following features as of this writing:
- Create, get, list and delete clusters
- Create, drain and delete nodegroups
- Scale a nodegroup
- Update a cluster
- Use custom AMIs
- Configure VPC Networking
- Configure access to API endpoints
- Support for GPU nodegroups
- Spot instances and mixed instances
- IAM Management and Add-on Policies
- List cluster Cloudformation stacks
- Install coredns
- Write kubeconfig file for a cluster
Install eksctl tool on Linux or macOS machine with the commands below.
--- Linux ---
curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
sudo mv /tmp/eksctl /usr/local/bin
--- macOS ---
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install.sh)"
brew tap weaveworks/tap
brew install weaveworks/tap/eksctl
brew upgrade eksctl && brew link --overwrite eksctl # When upgrading
Test that your installation was successful with the following command.
$ eksctl version
0.45.0
Enable Shell Completion:
--- Bash ---
echo ". <(eksctl completion bash)" >> ~/.bashrc
--- Zsh ---
mkdir -p ~/.zsh/completion/
eksctl completion zsh > ~/.zsh/completion/_eksctl
# and put the following in ~/.zshrc:
fpath=($fpath ~/.zsh/completion)
# Note if you're not running a distribution like oh-my-zsh you may first have to enable autocompletion:
autoload -U compinit
compinit
Step 3: Install and configure kubectl on Linux | macOS
The kubectl command-line tool is used to control Kubernetes clusters. Install it by running the following commands in your terminal.
--- Linux ---
curl -o kubectl https://amazon-eks.s3.us-west-2.amazonaws.com/1.19.6/2021-01-05/bin/linux/amd64/kubectl
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin
--- macOS ---
curl -o kubectl https://amazon-eks.s3.us-west-2.amazonaws.com/1.19.6/2021-01-05/bin/darwin/amd64/kubectl
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin
After you install kubectl, you can verify its version with the following command:
$ kubectl version --short --client
Client Version: v1.19.6-eks-49a6c0
The kubectl tool looks for a file named config in the $HOME/.kube directory. You can also specify a different kubeconfig file by setting the KUBECONFIG environment variable or by passing the --kubeconfig flag.
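For example, either of the approaches below points kubectl at an alternative kubeconfig file; the path shown is simply the location eksctl writes to later in this guide.

export KUBECONFIG=$HOME/.kube/eksctl/clusters/prod-eks-cluster
kubectl get nodes

# Or pass the flag on a per-command basis
kubectl --kubeconfig=$HOME/.kube/eksctl/clusters/prod-eks-cluster get nodes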
Step 4: Create an Amazon EKS cluster and compute
With all the dependencies setup, we can now create an Amazon EKS cluster with a compute option to run our microservice applications. We’ll be performing an installation of the latest Kubernetes version available in Amazon EKS so we can take advantage of the latest EKS features.
You can create a cluster with one compute option and add any of the other options after the cluster is created. There are two standard compute options:
- AWS Fargate: Create a cluster that only runs Linux applications on AWS Fargate. You can only use AWS Fargate with Amazon EKS in some regions
- Managed nodes: If you want to run Linux applications on Amazon EC2 instances.
In this setup we’ll be installing an EKS cluster running Kubernetes version 1.19 with managed EC2 compute nodes. These are the cluster details:
- Region: Ireland (eu-west-1)
- Cluster name: prod-eks-cluster
- Version: 1.19 – See all available EKS versions
- Node type: t3.medium – See all AWS Node types available
- Total number of nodes (for a static ASG): 2
- Maximum nodes in ASG: 3
- Minimum nodes in ASG: 1
- SSH public key to use for nodes (import from local path, or use existing EC2 key pair): ~/.ssh/eks.pub
- Make nodegroup networking private
- Let eksctl manage cluster credentials under the ~/.kube/eksctl/clusters directory.
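If the ~/.ssh/eks.pub key referenced above does not exist yet, you can generate a key pair first. A minimal sketch (the filename and empty passphrase are just illustrative choices):

ssh-keygen -t rsa -b 4096 -f ~/.ssh/eks -N ""

With the key in place, create the cluster: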
eksctl create cluster \
--version 1.19 \
--name prod-eks-cluster \
--region eu-west-1 \
--nodegroup-name eks-ec2-linux-nodes \
--node-type t3.medium \
--nodes 2 \
--nodes-min 1 \
--nodes-max 3 \
--ssh-access \
--ssh-public-key ~/.ssh/eks.pub \
--managed \
--auto-kubeconfig \
--node-private-networking \
--verbose 3
You can also use a config file instead of flags to create the cluster. Read the config file schema documentation for details on how to write one.
$ vim eks-cluster.yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: prod-eks-cluster
  region: eu-west-1
  version: "1.19"

managedNodeGroups:
  - name: eks-ec2-linux-nodes
    instanceType: t3.medium
    desiredCapacity: 2
    minSize: 1
    maxSize: 3
    volumeSize: 80
    ssh:
      allow: true # will use ~/.ssh/id_rsa.pub as the default ssh key
      publicKeyPath: ~/.ssh/eks.pub
    privateNetworking: true
$ eksctl create cluster -f eks-cluster.yaml
The eksctl installer will automatically create and configure a VPC, Internet gateway, NAT gateway, and routing tables for you.
Be patient as the installation may take some time.
[ℹ] eksctl version 0.25.0
[ℹ] using region eu-west-1
[ℹ] setting availability zones to [eu-west-1a eu-west-1c eu-west-1b]
[ℹ] subnets for eu-west-1a - public:192.168.0.0/19 private:192.168.96.0/19
[ℹ] subnets for eu-west-1c - public:192.168.32.0/19 private:192.168.128.0/19
[ℹ] subnets for eu-west-1b - public:192.168.64.0/19 private:192.168.160.0/19
[ℹ] using SSH public key "/Users/jkmutai/.cheat/.ssh/eks.pub" as "eksctl-prod-eks-cluster-nodegroup-eks-ec2-linux-nodes-52:ad:b5:4f:a6:01:10:b6:c1:6b:ba:eb:5a:fb:0c:b2"
[ℹ] using Kubernetes version 1.19
[ℹ] creating EKS cluster "prod-eks-cluster" in "eu-west-1" region with managed nodes
[ℹ] will create 2 separate CloudFormation stacks for cluster itself and the initial managed nodegroup
[ℹ] if you encounter any issues, check CloudFormation console or try 'eksctl utils describe-stacks --region=eu-west-1 --cluster=prod-eks-cluster'
[ℹ] CloudWatch logging will not be enabled for cluster "prod-eks-cluster" in "eu-west-1"
[ℹ] you can enable it with 'eksctl utils update-cluster-logging --region=eu-west-1 --cluster=prod-eks-cluster'
[ℹ] Kubernetes API endpoint access will use default of publicAccess=true, privateAccess=false for cluster "prod-eks-cluster" in "eu-west-1"
[ℹ] 2 sequential tasks: create cluster control plane "prod-eks-cluster", 2 sequential sub-tasks: no tasks, create managed nodegroup "eks-ec2-linux-nodes"
[ℹ] building cluster stack "eksctl-prod-eks-cluster-cluster"
[ℹ] deploying stack "eksctl-prod-eks-cluster-cluster"
[ℹ] building managed nodegroup stack "eksctl-prod-eks-cluster-nodegroup-eks-ec2-linux-nodes"
[ℹ] deploying stack "eksctl-prod-eks-cluster-nodegroup-eks-ec2-linux-nodes"
[ℹ] waiting for the control plane availability...
[✔] saved kubeconfig as "/Users/jkmutai/.kube/eksctl/clusters/prod-eks-cluster"
[ℹ] no tasks
[✔] all EKS cluster resources for "prod-eks-cluster" have been created
[ℹ] nodegroup "eks-ec2-linux-nodes" has 4 node(s)
[ℹ] node "ip-192-168-21-191.eu-west-1.compute.internal" is ready
[ℹ] node "ip-192-168-35-129.eu-west-1.compute.internal" is ready
[ℹ] node "ip-192-168-49-234.eu-west-1.compute.internal" is ready
[ℹ] node "ip-192-168-78-146.eu-west-1.compute.internal" is ready
[ℹ] waiting for at least 1 node(s) to become ready in "eks-ec2-linux-nodes"
[ℹ] nodegroup "eks-ec2-linux-nodes" has 4 node(s)
[ℹ] node "ip-192-168-21-191.eu-west-1.compute.internal" is ready
[ℹ] node "ip-192-168-35-129.eu-west-1.compute.internal" is ready
[ℹ] node "ip-192-168-49-234.eu-west-1.compute.internal" is ready
[ℹ] node "ip-192-168-78-146.eu-west-1.compute.internal" is ready
[ℹ] kubectl command should work with "/Users/jkmutai/.kube/eksctl/clusters/prod-eks-cluster", try 'kubectl --kubeconfig=/Users/jkmutai/.kube/eksctl/clusters/prod-eks-cluster get nodes'
[✔] EKS cluster "prod-eks-cluster" in "eu-west-1" region is ready
To list available clusters use the command below:
$ eksctl get cluster
NAME REGION
prod-eks-cluster eu-west-1
Use the generated kubeconfig file to confirm that the installation was successful.
$ kubectl --kubeconfig=/Users/jkmutai/.kube/eksctl/clusters/prod-eks-cluster get nodes
NAME STATUS ROLES AGE VERSION
ip-192-168-21-191.eu-west-1.compute.internal Ready 18m v1.19.6-eks-49a6c0
ip-192-168-35-129.eu-west-1.compute.internal Ready 14m v1.19.6-eks-49a6c0
ip-192-168-78-146.eu-west-1.compute.internal Ready 14m v1.19.6-eks-49a6c0
$ kubectl --kubeconfig=/Users/jkmutai/.kube/eksctl/clusters/prod-eks-cluster get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system aws-node-254fk 1/1 Running 0 19m
kube-system aws-node-nmjwd 1/1 Running 0 14m
kube-system aws-node-z47mq 1/1 Running 0 15m
kube-system coredns-6987776bbd-8s5ct 1/1 Running 0 14m
kube-system coredns-6987776bbd-bn5js 1/1 Running 0 14m
kube-system kube-proxy-79bcs 1/1 Running 0 14m
kube-system kube-proxy-bpznt 1/1 Running 0 15m
kube-system kube-proxy-xchxs 1/1 Running 0 19m
Get info about the node group in use:
$ eksctl get nodegroup --cluster prod-eks-cluster
CLUSTER NODEGROUP CREATED MIN SIZE MAX SIZE DESIRED CAPACITY INSTANCE TYPE IMAGE ID
prod-eks-cluster eks-ec2-linux-nodes 2020-08-11T19:21:46Z 1 4 3 t3.medium
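If you later need to resize the node group, eksctl can scale it in place. A minimal sketch using the names from this guide (adjust the desired count as needed):

$ eksctl scale nodegroup --cluster=prod-eks-cluster --name=eks-ec2-linux-nodes --nodes=3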
List created stacks:
$ eksctl utils describe-stacks --cluster prod-eks-cluster
To obtain cluster credentials at any point in time from an EKS cluster deployed with eksctl, run:
$ eksctl utils write-kubeconfig --cluster=<name> [--kubeconfig=<path>] [--set-kubeconfig-context=<bool>]
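For example, to write credentials for the cluster created in this guide into your default ~/.kube/config (the flag values simply reuse the names from this article):

$ eksctl utils write-kubeconfig --cluster=prod-eks-cluster --region=eu-west-1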
Create a cluster from an existing VPC:
When using an existing VPC, create a configuration file similar to the one shown below.
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: prod-eks-cluster
  region: eu-west-1
  version: "1.19"

cloudWatch:
  clusterLogging:
    enableTypes: ["*"]

vpc:
  clusterEndpoints:
    publicAccess: false # true if you want to enable public access
    privateAccess: true
  subnets:
    private:
      eu-west-1a:
        id: subnet-03b817aa79a015507
      eu-west-1b:
        id: subnet-099fb0be9b96334d7

managedNodeGroups:
  - name: node-group-01
    labels:
      role: workers
    instanceType: t3.medium
    privateNetworking: true
    desiredCapacity: 2
    minSize: 1
    maxSize: 3
    volumeSize: 80
    ssh:
      allow: true # will use ~/.ssh/id_rsa.pub as the default ssh key
      publicKeyPath: ~/.ssh/eks.pub
Step 5: Install Kubernetes Metrics Server
Kubernetes Metrics Server is a scalable, efficient source of container resource metrics for Kubernetes built-in autoscaling pipelines. We have a separate guide on how you can install Metrics Server in an EKS Kubernetes cluster.
Install Kubernetes Metrics Server on Amazon EKS Cluster
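The dedicated guide above covers the EKS-specific details. As a rough sketch, the upstream Metrics Server manifest can typically be applied directly and then verified; remember to point kubectl at the eksctl-generated kubeconfig (or merge it into your default config) first.

kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
kubectl get deployment metrics-server -n kube-system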
Step 6: Enable Control Plane Logging (Optional)
Amazon EKS control plane logging provides audit and diagnostic logs directly from the Amazon EKS control plane to CloudWatch Logs in your account, which makes it easy to secure and run your clusters.
Follow the guide below for more details on setting it up.
Enable CloudWatch logging in EKS Kubernetes Cluster
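The eksctl output from the cluster creation step already hints at the relevant command. As a minimal sketch, assuming you want every control plane log type enabled for the cluster used in this guide:

$ eksctl utils update-cluster-logging --enable-types=all --region=eu-west-1 --cluster=prod-eks-cluster --approve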
Deleting EKS cluster
If you have active services in your cluster that are associated with a load balancer, you must delete those services before deleting the cluster, so that the load balancers are deleted properly and you don’t end up with orphaned resources in your VPC that prevent you from deleting the VPC.
List all services running in your cluster:
$ kubectl get svc -A
NAMESPACE     NAME             TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
default       kubernetes       ClusterIP   10.100.0.1      <none>        443/TCP         33h
kube-system   kube-dns         ClusterIP   10.100.0.10     <none>        53/UDP,53/TCP   33h
kube-system   metrics-server   ClusterIP   10.100.221.28   <none>        443/TCP         18h
Delete any services that have an associated EXTERNAL-IP value. These services are fronted by an Elastic Load Balancing load balancer, and you must delete them in Kubernetes to allow the load balancer and associated resources to be properly released.
kubectl delete svc service-name
You can then delete the cluster and its associated nodes, replacing eu-west-1 with the correct cluster region and prod-eks-cluster with the name of your cluster.
$ eksctl delete cluster --region=eu-west-1 --name=prod-eks-cluster
The removal process will have an output similar to one shown below.
[ℹ] eksctl version 0.25.0
[ℹ] using region eu-west-1
[ℹ] deleting EKS cluster "prod-eks-cluster"
[ℹ] deleted 0 Fargate profile(s)
[ℹ] cleaning up AWS load balancers created by Kubernetes objects of Kind Service or Ingress
[ℹ] 2 sequential tasks: delete nodegroup "eks-ec2-linux-nodes", delete cluster control plane "prod-eks-cluster" [async]
[ℹ] will delete stack "eksctl-prod-eks-cluster-nodegroup-eks-ec2-linux-nodes"
[ℹ] waiting for stack "eksctl-prod-eks-cluster-nodegroup-eks-ec2-linux-nodes" to get deleted
[ℹ] will delete stack "eksctl-prod-eks-cluster-cluster"
[✔] all cluster resources were deleted
We will keep updating this article with additional settings to round out the Kubernetes cluster setup on AWS using the EKS service.