What steps did you take and what happened?
While testing the MachineDeployment rollout feature with clusterctl, I noticed that when a MachineSet's metadata.creationTimestamp exactly equals the MachineDeployment's spec.rollout.after time, the MachineSet is incorrectly marked as not up-to-date, causing the MachineDeployment's RollingOut condition to stay True forever.
This seems to be a boundary-condition bug in the timestamp comparison logic in cluster-api/internal/controllers/machineset/machineset_controller.go (line 639 at 5226d3f):

if s.owningMachineDeployment.Spec.Rollout.After.Time.Before(s.reconciliationTime) && !s.machineSet.CreationTimestamp.After(s.owningMachineDeployment.Spec.Rollout.After.Time) {

When the two timestamps are equal, After() returns false, so the negated check evaluates to true and the MachineSet is treated as needing a rollout. Using Before() instead of !After() would resolve the issue, since Before() also returns false at equality but with the opposite effect on the check.
What did you expect to happen?
I expected the MachineDeployment's RollingOut condition to report the correct status.
Cluster API version
v1.11.4
Kubernetes version
v1.34.3
Anything else you would like to add?
Output showing the issue:
k get ms k8s-ppi-md-0-fqqrs-bmsxw
NAME                       CLUSTER   DESIRED   CURRENT   READY   AVAILABLE   UP-TO-DATE   AGE   VERSION
k8s-ppi-md-0-fqqrs-bmsxw   k8s-ppi   1         1         1       1           0            71m   v1.34.3
k get ms k8s-ppi-md-0-fqqrs-bmsxw -o jsonpath='{"creationTimestamp:"}{.metadata.creationTimestamp}{"\n"}{"MachinesUpToDate:"}{.status.conditions[?(@.type=="MachinesUpToDate")]}{"\n"}{"owner md:"}{.metadata.ownerReferences[?(@.kind=="MachineDeployment")].name}'
creationTimestamp:2025-12-22T14:11:22Z
MachinesUpToDate:{"lastTransitionTime":"2025-12-22T14:11:25Z","message":"* Machine k8s-ppi-md-0-fqqrs-bmsxw-67b7f:\n * MachineDeployment spec.rolloutAfter expired","observedGeneration":1,"reason":"NotUpToDate","status":"False","type":"MachinesUpToDate"}
owner md:k8s-ppi-md-0-fqqrs
k get md k8s-ppi-md-0-fqqrs
NAME                 CLUSTER   AVAILABLE   DESIRED   CURRENT   READY   AVAILABLE   UP-TO-DATE   PHASE     AGE     VERSION
k8s-ppi-md-0-fqqrs   k8s-ppi   True        1         1         1       1           0            Running   5d23h   v1.34.3
k get md k8s-ppi-md-0-fqqrs -o jsonpath='{"rolloutAfter:"}{.spec.rollout.after}{"\n"}{"RollingOut:"}{.status.conditions[?(@.type=="RollingOut")]}{"\n"}'
rolloutAfter:2025-12-22T14:11:22Z
RollingOut:{"lastTransitionTime":"2025-12-22T14:11:23Z","message":"Rolling out 1 not up-to-date replicas\n* MachineDeployment spec.rolloutAfter expired","observedGeneration":12,"reason":"RollingOut","status":"True","type":"RollingOut"}
Label(s) to be applied
/kind bug
One or more /area label. See https://github.com/kubernetes-sigs/cluster-api/labels?q=area for the list of labels.