Descheduler
Scope: Cluster
Descheduler is a description of a single descheduler instance.
- array of objects
List of label expressions that a node should have to qualify for the filter condition.
Example:
matchExpressions:
- key: tier
  operator: In
  values:
  - production
  - staging
- key: tier
  operator: NotIn
  values:
  - production
- object
Limiting the nodes which are processed to fit evicted pods, by labels in set representation. If set, nodeSelector must not be set.
- array of objects
List of label expressions that a node should have to qualify for the filter condition.
Example:
matchExpressions:
- key: tier
  operator: In
  values:
  - production
  - staging
- key: tier
  operator: NotIn
  values:
  - production
- array of objects
List of label expressions that a node should have to qualify for the filter condition.
Example:
matchExpressions:
- key: tier
  operator: In
  values:
  - production
  - staging
- key: tier
  operator: NotIn
  values:
  - production
- object
Limiting the pods which are processed by priority class. Only pods under the threshold can be evicted.
You can specify either the name of the priority class (priorityClassThreshold.name) or the actual value of the priority class (priorityClassThreshold.value).
By default, this threshold is set to the value of the system-cluster-critical priority class.
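For instance, a threshold set by priority class name could look like this (a minimal sketch, showing only the field itself):

priorityClassThreshold:
  name: system-cluster-critical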
- object
This strategy finds nodes that are underutilized and evicts pods from them in the hope that these pods will be scheduled compactly onto fewer nodes. When combined with node auto-scaling, it helps reduce the number of underutilized nodes. The strategy works with the MostAllocated scheduler scoring strategy.
In GKE, you cannot configure the default scheduler, but you can use the optimize-utilization strategy or deploy a second custom scheduler.
Node resource usage takes into account extended resources and is based on pod requests and limits, not actual consumption.
- object
Sets threshold values used to identify underutilized nodes.
If a node's resource usage is below all threshold values, the node is considered underutilized.
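A minimal sketch of this strategy with thresholds set. The highNodeUtilization key and the cpu/memory resource names are assumptions borrowed from the upstream descheduler naming (the document does not name the strategy key); the values are percentages of the node's allocatable resources:

strategies:
  highNodeUtilization:
    thresholds:
      cpu: 50
      memory: 50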
- object
This strategy identifies underutilized nodes and evicts pods from other, overutilized nodes. The strategy assumes that the evicted pods will be recreated on the underutilized nodes (following normal scheduler behavior).
Underutilized node: a node whose resource usage is below all the threshold values specified in the thresholds section.
Overutilized node: a node whose resource usage exceeds at least one of the threshold values specified in the targetThresholds section.
Node resource usage takes into account extended resources and is based on pod requests and limits, not actual consumption.
- object
Sets threshold values used to identify overutilized nodes.
If a node's resource usage exceeds at least one of the threshold values, the node is considered overutilized.
- object
Sets threshold values used to identify underutilized nodes.
If a node's resource usage is below all threshold values, the node is considered underutilized.
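A minimal sketch of this strategy with both sections set. The lowNodeUtilization key and the cpu/memory resource names are assumptions based on the upstream descheduler naming; only the thresholds and targetThresholds sections come from the description above:

strategies:
  lowNodeUtilization:
    thresholds:
      cpu: 20
      memory: 20
    targetThresholds:
      cpu: 70
      memory: 70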
- object
The strategy ensures that no more than one pod of the same ReplicaSet, ReplicationController, StatefulSet, or Job is running on the same node. If there are two or more such pods, the module evicts the excess pods so that they are better distributed across the cluster.
- object
The strategy ensures that pods violating inter-pod affinity and anti-affinity rules are evicted from nodes.
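If these two strategies are exposed under keys matching the upstream descheduler names (an assumption, the document does not name them), enabling them with default parameters might look like this, where an empty object stands for "enabled with defaults" in this sketch:

strategies:
  removeDuplicates: {}
  removePodsViolatingInterPodAntiAffinity: {}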
- object
The strategy makes sure all pods violating node affinity are eventually removed from nodes.
Essentially, depending on the settings of the nodeAffinityType parameter, the strategy temporarily treats the requiredDuringSchedulingIgnoredDuringExecution rule of the pod's node affinity as requiredDuringSchedulingRequiredDuringExecution, and the preferredDuringSchedulingIgnoredDuringExecution rule as preferredDuringSchedulingPreferredDuringExecution.
- array of strings
Defines the list of node affinity rules used.
Default:
["requiredDuringSchedulingIgnoredDuringExecution"]
Deprecated resource. Support for the resource might be removed in a later release.
Descheduler is a description of a single descheduler instance.
- object
List of strategies with corresponding parameters for a given Descheduler instance.
- object
This strategy finds nodes that are underutilized and evicts Pods from them in the hope that these Pods will be scheduled compactly onto fewer nodes.
- object
This strategy finds nodes that are underutilized and evicts Pods, if possible, from other nodes in the hope that the evicted Pods, once recreated, will be scheduled on these underutilized nodes.
- object
This strategy makes sure that there is only one Pod associated with a ReplicaSet (RS), ReplicationController (RC), StatefulSet, or Job running on the same node.
- object
This strategy evicts Pods that are in the failed status phase.
- object
This strategy makes sure that Pods having too many restarts are removed from nodes.
- object
This strategy makes sure that Pods violating interpod anti-affinity are removed from nodes.
- object
This strategy makes sure all Pods violating node affinity are eventually removed from nodes.
- object
This strategy makes sure that Pods violating NoSchedule taints on nodes are removed.
- object
This strategy makes sure that Pods violating topology spread constraints are evicted from nodes.
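A sketch of the deprecated format with a couple of strategies switched on. The spec layout (deschedulerPolicy, strategies, enabled) and the strategy keys are assumptions, shown only to illustrate the per-strategy toggles described above:

spec:
  deschedulerPolicy:
    strategies:
      removeDuplicates:
        enabled: true
      removePodsViolatingNodeTaints:
        enabled: true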