By default, your Kubernetes cluster will not schedule Pods on the control-plane node for security reasons. It is recommended you keep it this way, but for test environments you may want to schedule Pods on the control-plane node to maximize resource usage.
If you want to be able to schedule Pods on the Kubernetes control-plane node, you need to remove a taint from the control-plane (master) nodes.
kubectl taint nodes --all node-role.kubernetes.io/master-
kubectl taint nodes --all node-role.kubernetes.io/control-plane-
The output will look something like:
node/k8smaster01.computingpost.com untainted
taint "node-role.kubernetes.io/master" not found
taint "node-role.kubernetes.io/master" not found
taint "node-role.kubernetes.io/master" not found
These commands remove the node-role.kubernetes.io/master and node-role.kubernetes.io/control-plane taints from any nodes that have them, including the control plane node, meaning that the scheduler will then be able to schedule Pods everywhere.
For a single node, specify the node name instead of --all:
kubectl taint nodes <node-name> node-role.kubernetes.io/master-
kubectl taint nodes <node-name> node-role.kubernetes.io/control-plane-
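To confirm the taint is gone, describe the node and check its Taints field; it should report <none> once removed. The node name below is just the control plane node from this example cluster, so replace it with yours:
kubectl describe node k8smaster01.computingpost.com | grep Taints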
Testing Pod Scheduling on Kubernetes Control Plane Node(s)
I have a cluster with three worker nodes and one control plane node.
$ kubectl get nodes
NAME                            STATUS   ROLES    AGE   VERSION
k8smaster01.computingpost.com   Ready    master   12h   v1.24.3
k8snode01.computingpost.com     Ready    <none>   12h   v1.24.3
k8snode02.computingpost.com     Ready    <none>   12h   v1.24.3
k8snode03.computingpost.com     Ready    <none>   9h    v1.24.3
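If you are not sure which node is the control plane, you can also filter nodes by the role label kubeadm applies (older clusters may use node-role.kubernetes.io/master instead):
kubectl get nodes -l node-role.kubernetes.io/control-plane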
Create a demo namespace:
kubectl create namespace demo
We will create a Deployment with 5 replicas, along with a NodePort Service to expose it. Create the manifest file:
vim nginx-deployment.yaml
Paste and save the following contents:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  namespace: demo
  labels:
    app: nginx
    color: green
spec:
  replicas: 5
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
        color: green
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        imagePullPolicy: IfNotPresent
        ports:
        - name: http
          protocol: TCP
          containerPort: 80
        resources:
          limits:
            cpu: "200m"
            memory: "256Mi"
          requests:
            cpu: 100m
            memory: 128Mi
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  namespace: demo
spec:
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
  selector:
    app: nginx
  sessionAffinity: None
  type: NodePort
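Note that removing the taint only allows Pods on the control plane node; the scheduler still spreads them across all schedulable nodes. If you wanted to pin a workload to the control plane node specifically, one option is a nodeSelector added under the Pod template's spec (at the same level as containers), sketched here using the label kubeadm sets on control plane nodes (older clusters use node-role.kubernetes.io/master):
      nodeSelector:
        node-role.kubernetes.io/control-plane: ""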
Apply the manifest:
kubectl apply -f nginx-deployment.yaml
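Optionally, wait for the rollout to finish before checking Pod placement:
kubectl -n demo rollout status deployment/nginx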
Check if a Pod was scheduled on the control plane node:
$ kubectl get pods -n demo -o wide
NAME                     READY   STATUS    RESTARTS   AGE   IP                NODE                            NOMINATED NODE   READINESS GATES
nginx-675bf5bc87-666jg   1/1     Running   0          17m   192.168.213.131   k8snode01.computingpost.com     <none>           <none>
nginx-675bf5bc87-mc6px   1/1     Running   0          17m   192.168.94.13     k8smaster01.computingpost.com   <none>           <none>
nginx-675bf5bc87-v5q87   1/1     Running   0          17m   192.168.144.129   k8snode03.computingpost.com     <none>           <none>
nginx-675bf5bc87-vctqm   1/1     Running   0          17m   192.168.101.195   k8snode02.computingpost.com     <none>           <none>
nginx-675bf5bc87-w5pmh   1/1     Running   0          17m   192.168.213.130   k8snode01.computingpost.com     <none>           <none>
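To list only the Pods that landed on the control plane node, you can also filter by node name (again using this example cluster's control plane node name):
kubectl get pods -n demo -o wide --field-selector spec.nodeName=k8smaster01.computingpost.com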
Either way, we can see there is a Pod running on the control plane node. Confirm the service is live:
$ kubectl get svc -n demo
NAME            TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
nginx-service   NodePort   10.96.184.67   <none>        80:31098/TCP   21m
Since we’re using NodePort, we should be able to access the service on any cluster node IP on port 31098.
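For example, a quick test with curl against one of the nodes (replace the hostname with any node IP or hostname in your cluster; the port 31098 comes from the output above and will differ on yours):
curl http://k8snode01.computingpost.com:31098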
We can now clean up the demo objects:
$ kubectl delete -f nginx-deployment.yaml
deployment.apps "nginx" deleted
service "nginx-service" deleted
$ kubectl get pods,svc -n demo
No resources found in demo namespace.
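If the namespace is no longer needed, you can delete it as well. And should you later want to restore the default behavior of keeping regular workloads off the control plane, re-apply the taint (node name from this example cluster; use node-role.kubernetes.io/master on older clusters):
kubectl delete namespace demo
kubectl taint nodes k8smaster01.computingpost.com node-role.kubernetes.io/control-plane:NoSchedule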
That’s all on how to schedule Pods on the Kubernetes control plane node(s).