This example illustrates how the placement of scheduled Pods can be influenced. For these examples, we assume that you have Minikube installed and running as described here.
The simplest way to influence the scheduling process is to use a nodeSelector.
Apply our simple example random-generator application with a node selector:
```shell
kubectl create -f https://k8spatterns.com/AutomatedPlacement/node-selector.yml
```

You will notice that this Pod does not get scheduled because the nodeSelector can't find any node with the label `disktype=ssd`:

```shell
kubectl describe pod node-selector
....
Events:
  Type     Reason            Age              From               Message
  ----     ------            ----             ----               -------
  Warning  FailedScheduling  8s (x2 over 8s)  default-scheduler  0/1 nodes are available: 1 node(s) didn't match node selector.
```
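For reference, the relevant part of such a manifest might look like the following sketch. Only the `nodeSelector` entry `disktype: ssd` is taken from the example; the Pod name and image are assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: random-generator
spec:
  containers:
  - name: random-generator
    image: k8spatterns/random-generator:1.0   # image name assumed
  nodeSelector:
    disktype: ssd   # Pod stays Pending until some node carries this label
```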
Let’s change this:
```shell
kubectl label node minikube disktype=ssd
kubectl get pods
NAME               READY   STATUS    RESTARTS   AGE
random-generator   1/1     Running   0          65s
```
Let’s now use Node affinity rules for scheduling our Pod:
```shell
kubectl create -f https://k8spatterns.com/AutomatedPlacement/node-affinity.yml
```

Again, our Pod will not be scheduled, as no node fulfills the affinity rules. We can change this with:

```shell
kubectl label node minikube numberCores=4
```

Does the Pod start up now? What happens if you set the number of cores to 2 instead of 4?
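Given that labeling the node with `numberCores=4` unblocks the Pod while a value of 2 presumably would not, the affinity rule likely uses a numeric `Gt` comparison. A hedged sketch of what node-affinity.yml could contain (the threshold, expression, and image are assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: random-generator
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: numberCores
            operator: Gt      # numeric greater-than comparison on the label value
            values: ["3"]     # threshold assumed: 4 matches, 2 would not
  containers:
  - name: random-generator
    image: k8spatterns/random-generator:1.0   # image name assumed
```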
To test Pod affinity, we need a second Pod for our Pod to be co-located with. We create both Pods with:
```shell
kubectl create -f https://k8spatterns.com/AutomatedPlacement/pod-affinity.yml
kubectl get pods
NAME                READY   STATUS    RESTARTS   AGE
confidential-high   1/1     Running   0          22s
pod-affinity        0/1     Pending   0          22s
```
"confidential-high" is a placeholder Pod carrying a label that our "pod-affinity" Pod matches. However, our node still lacks the label used as the topology key. We can add it with:
```shell
kubectl label --overwrite node minikube security-zone=high
kubectl get pods
NAME                READY   STATUS    RESTARTS   AGE
confidential-high   1/1     Running   0          9m39s
pod-affinity        1/1     Running   0          9m39s
```
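A sketch of how the pod-affinity Pod could express this co-location requirement. The topology key comes from the `security-zone` node label used above; the matched Pod label, names, and image are assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-affinity
spec:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            confidential: high        # label assumed to be set on the confidential-high Pod
        topologyKey: security-zone    # node label added above serves as the topology domain
  containers:
  - name: pod-affinity
    image: k8spatterns/random-generator:1.0   # image name assumed
```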
For testing taints and tolerations, we first have to taint our Minikube node so that, by default, no Pods are scheduled on it:
```shell
kubectl taint nodes minikube node-role.kubernetes.io/master="":NoSchedule
```

You can check that this taint works by reapplying the previous pod-affinity.yml example and seeing that now even the confidential-high Pod is not scheduled:

```shell
kubectl delete -f https://k8spatterns.com/AutomatedPlacement/pod-affinity.yml
kubectl create -f https://k8spatterns.com/AutomatedPlacement/pod-affinity.yml
kubectl get pods
NAME                READY   STATUS    RESTARTS   AGE
confidential-high   0/1     Pending   0          2s
pod-affinity        0/1     Pending   0          2s
```
The Pod defined in tolerations.yml, however, can be scheduled, as it tolerates this new taint on Minikube:
```shell
kubectl create -f https://k8spatterns.com/AutomatedPlacement/tolerations.yml
kubectl get pods
NAME                READY   STATUS    RESTARTS   AGE
confidential-high   0/1     Pending   0          2m51s
pod-affinity        0/1     Pending   0          2m51s
tolerations         1/1     Running   0          4s
```
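The toleration that lets this Pod bypass the taint presumably looks like the following sketch. Only the taint key and effect come from the `kubectl taint` command above; the operator choice, Pod name, and image are assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: tolerations
spec:
  tolerations:
  - key: node-role.kubernetes.io/master
    operator: Exists     # tolerate the taint regardless of its (empty) value
    effect: NoSchedule
  containers:
  - name: tolerations
    image: k8spatterns/random-generator:1.0   # image name assumed
```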