Pod Topology Spread Constraints allow you to control how pods are distributed across your cluster based on topology domains like zones, regions, or nodes. They're especially useful for high-availability applications where you want to avoid having too many pods in a single failure domain.
The key fields are:

- maxSkew: The maximum allowed difference between the number of pods in any two topology domains.
- topologyKey: The node label key that defines the topology domain (such as "zone" or "region").
- whenUnsatisfiable: What to do when the constraint can't be satisfied (DoNotSchedule or ScheduleAnyway).
- labelSelector: Which pods the constraint applies to.
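To make these fields concrete, here is a minimal sketch of a pod spec that uses all four. The pod name, app label, and container image are hypothetical placeholders; also note that real clusters typically expose zones through the well-known node label topology.kubernetes.io/zone, whereas the quiz options below use the shorthand "zone".

```yaml
# Minimal sketch of a topology spread constraint in a pod spec.
# Pod name, app label, and image are hypothetical placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: web-pod
  labels:
    app: web
spec:
  topologySpreadConstraints:
    - maxSkew: 1                               # at most a 1-pod difference between any two zones
      topologyKey: topology.kubernetes.io/zone # well-known node label defining the domain
      whenUnsatisfiable: DoNotSchedule         # refuse to schedule rather than allow skew
      labelSelector:
        matchLabels:
          app: web                             # only pods with this label are counted
  containers:
    - name: web
      image: nginx:1.25
```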
C) This option has maxSkew: 1, which ensures minimal imbalance (at most a 1-pod difference between any two zones). It uses topologyKey: "zone", correctly targeting zone-level distribution. Most importantly, whenUnsatisfiable: "DoNotSchedule" ensures that if the constraint can't be met, new pods won't be scheduled, preserving the balanced distribution. For a critical application where high availability is essential, this strict approach is best.
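Based on the values described above, option C's constraint would look roughly like the following sketch. The labelSelector is an assumption, since the explanation doesn't specify which pods the constraint selects.

```yaml
# Reconstruction of option C from the description; the labelSelector is assumed.
topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: "zone"                # shorthand label as written in the quiz
    whenUnsatisfiable: "DoNotSchedule" # block scheduling rather than break the spread
    labelSelector:
      matchLabels:
        app: critical-app              # hypothetical selector
```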
Why the other options fall short:
A) This sets maxSkew: 1, which is good: zones will differ by at most 1 pod. It uses topologyKey: "zone", which correctly targets the zones we want to spread across. However, whenUnsatisfiable: "ScheduleAnyway" means that if the constraint can't be met, pods will be scheduled anyway, potentially creating an imbalance. For a critical application, this is risky.
B) This uses topologyKey: "region" instead of "zone". Regions typically contain multiple zones, so this doesn't ensure distribution across the zones within a region. Also, maxSkew: 2 allows more imbalance than needed.
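To check which zone and region labels the nodes in a cluster actually carry (and therefore which topologyKey values will match anything), you can list them with kubectl's label-columns flag; the well-known label names here are standard, though a cluster may also define custom keys like "zone".

```sh
kubectl get nodes -L topology.kubernetes.io/zone -L topology.kubernetes.io/region
```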
D) This has maxSkew: 3, which allows a significant imbalance: one zone could have 3 more pods than another. Like option A, it uses ScheduleAnyway, which could compromise availability if zones become too imbalanced.