Taints, Tolerations, and Node Affinity in Kubernetes
- maheshkamineni35
- Jun 11, 2024
- 4 min read

Taints and tolerations in Kubernetes are mechanisms used to ensure that pods are not scheduled onto inappropriate nodes. This feature helps in creating constraints and control over pod placement based on specific node characteristics. Here's a detailed explanation with examples:
Taints
A taint is applied to a node to mark it as having special properties or constraints. It consists of three components:
Key: The identifier of the taint.
Value: The value associated with the taint key.
Effect: Determines what happens to pods that do not tolerate the taint. There are three possible effects:
NoSchedule: Pods that do not tolerate the taint will not be scheduled on the node.
PreferNoSchedule: Kubernetes will try to avoid placing a pod that does not tolerate the taint on the node but will not guarantee it.
NoExecute: Pods that do not tolerate the taint will be evicted if they are already running on the node and will not be scheduled onto the node.
Example
kubectl taint nodes node1 key=value:NoSchedule

This command taints node1 with key=value and sets the effect to NoSchedule. This means that any pod that does not tolerate this taint will not be scheduled on node1.
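To undo a taint later, append a trailing hyphen to the same specification; kubectl interprets this as removal:

kubectl taint nodes node1 key=value:NoSchedule-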
Tolerations
A toleration is applied to pods to indicate that they can be scheduled on nodes with specific taints. It allows the pod to "tolerate" a node's taints.
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  tolerations:
  - key: "key"
    operator: "Equal"
    value: "value"
    effect: "NoSchedule"
  containers:
  - name: mycontainer
    image: myimage

In this YAML file, the pod mypod has a toleration for the taint key=value:NoSchedule. This means it can be scheduled on nodes with this taint.
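Besides Equal, tolerations support the Exists operator, which matches any value for the given key. As a minimal sketch, this pod-spec fragment tolerates every NoSchedule taint with the key "key", whatever its value:

  tolerations:
  - key: "key"
    operator: "Exists"
    effect: "NoSchedule"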
Practical Use Cases
1. Dedicated Nodes for Specific Workloads
Scenario: You have a set of nodes dedicated to running high-priority workloads.
Taint:
kubectl taint nodes high-priority-node dedicated=high-priority:NoSchedule

Toleration:
apiVersion: v1
kind: Pod
metadata:
  name: high-priority-pod
spec:
  tolerations:
  - key: "dedicated"
    operator: "Equal"
    value: "high-priority"
    effect: "NoSchedule"
  containers:
  - name: mycontainer
    image: myimage

In this example, only pods with the toleration for dedicated=high-priority will be scheduled on high-priority-node.
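Note that the toleration only permits high-priority-pod to land on high-priority-node; the scheduler may still place it on other, untainted nodes. To truly dedicate the nodes, a common pattern is to also label them (assuming a dedicated=high-priority label here) and give the pod a matching node affinity rule, as described in the Node Affinity section below:

kubectl label nodes high-priority-node dedicated=high-priority

  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: dedicated
            operator: In
            values:
            - high-priority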
2. Avoiding Nodes with Special Conditions
Scenario: Certain nodes are known to have issues or are undergoing maintenance, and you want to prevent new pods from being scheduled on them.
Taint:
kubectl taint nodes problematic-node maintenance=true:NoSchedule

Toleration: (Optional, if you want specific pods to still be scheduled on these nodes)
apiVersion: v1
kind: Pod
metadata:
  name: special-pod
spec:
  tolerations:
  - key: "maintenance"
    operator: "Equal"
    value: "true"
    effect: "NoSchedule"
  containers:
  - name: mycontainer
    image: myimage

Here, problematic-node is tainted with maintenance=true:NoSchedule, preventing new pods from being scheduled unless they have the corresponding toleration.
3. Evicting Pods from Nodes
Scenario: You want to evict all non-critical pods from a node for maintenance but allow critical ones to remain.
Taint:
kubectl taint nodes node-under-maintenance maintenance=true:NoExecute

Toleration for Critical Pods:
apiVersion: v1
kind: Pod
metadata:
  name: critical-pod
spec:
  tolerations:
  - key: "maintenance"
    operator: "Equal"
    value: "true"
    effect: "NoExecute"
  containers:
  - name: mycontainer
    image: myimage

In this setup, node-under-maintenance will evict all pods without the toleration for maintenance=true:NoExecute, but critical pods with the toleration will stay running on the node.
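A toleration for a NoExecute taint can also specify tolerationSeconds, which bounds how long the pod may remain on the node after the taint is applied before being evicted. A minimal sketch (the one-hour window is an arbitrary example):

  tolerations:
  - key: "maintenance"
    operator: "Equal"
    value: "true"
    effect: "NoExecute"
    tolerationSeconds: 3600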
By using taints and tolerations effectively, you can fine-tune pod placement across your Kubernetes cluster to meet various operational requirements.
Node Affinity
Node affinity in Kubernetes is a set of rules used to influence the scheduling of pods to specific nodes based on node labels. Unlike taints and tolerations, which are more about prohibiting certain pods from being scheduled on certain nodes, node affinity is about guiding the scheduler to prefer or require scheduling on nodes that match specific criteria.
Node affinity is specified in the pod specification and includes two types: requiredDuringSchedulingIgnoredDuringExecution and preferredDuringSchedulingIgnoredDuringExecution.
Types of Node Affinity
requiredDuringSchedulingIgnoredDuringExecution:
This is a hard requirement. The pod will only be scheduled on nodes that match the specified criteria.
If no nodes match the criteria, the pod will remain unscheduled (Pending).
preferredDuringSchedulingIgnoredDuringExecution:
This is a soft preference. The scheduler will try to place the pod on nodes that match the specified criteria, but if none are available, it will still schedule the pod on other nodes.
Consider a scenario where you want to schedule certain pods on nodes labeled with disktype=ssd. First, label a node:
kubectl label nodes <node-name> disktype=ssd

Specifying Node Affinity in a Pod
Now, define a pod with node affinity:
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: disktype
            operator: In
            values:
            - ssd
  containers:
  - name: mycontainer
    image: myimage

In this YAML file:
The pod mypod will only be scheduled on nodes with the label disktype=ssd because of the requiredDuringSchedulingIgnoredDuringExecution rule.
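To verify the placement, check which node the pod was actually assigned to:

kubectl get pod mypod -o wide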
Combining Required and Preferred Affinity
You can also combine both types of node affinity to specify hard requirements and preferences.
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: disktype
            operator: In
            values:
            - ssd
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1
        preference:
          matchExpressions:
          - key: anotherkey
            operator: In
            values:
            - somevalue
  containers:
  - name: mycontainer
    image: myimage

In this YAML file:
The pod mypod must be scheduled on nodes with the label disktype=ssd (hard requirement).
The scheduler will prefer nodes with the label anotherkey=somevalue if available, but this is a soft preference.
Node affinity is a powerful feature in Kubernetes that provides fine-grained control over pod scheduling based on node labels. It enhances the flexibility and control you have over the deployment of your applications, ensuring they run on the most appropriate nodes according to your specified criteria.
Practical Use Cases
1. High-Performance Workloads
You may have nodes with high-performance SSDs for certain workloads. Using node affinity, you can ensure these workloads are scheduled on SSD-equipped nodes.
2. Geographical Node Selection
If your nodes are spread across multiple data centers or geographical regions, you can use node labels to indicate the location and node affinity to ensure pods are scheduled in the desired locations.
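As a sketch, assuming your nodes carry the well-known topology.kubernetes.io/zone label (the zone names below are placeholders), an affinity rule that pins a pod to specific zones could look like this pod-spec fragment:

  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: topology.kubernetes.io/zone
            operator: In
            values:
            - us-east-1a
            - us-east-1b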
3. Resource Segregation
You might want to segregate different types of workloads, such as separating development and production environments by using node labels and node affinity rules.
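A minimal sketch, assuming a hypothetical environment label applied with kubectl label nodes <node-name> environment=production, that keeps a pod on production nodes only:

  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: environment
            operator: In
            values:
            - production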

