
OCPBUGS-82144: Remove EnsureMemberRemoved from graceful shutdown test #30981

Open

fonta-rh wants to merge 2 commits into openshift:main from fonta-rh:OCPBUGS-82144-remove-ensure-member-removed

Conversation

@fonta-rh
Contributor

@fonta-rh fonta-rh commented Apr 9, 2026

Summary

  • Remove EnsureMemberRemoved assertion from the graceful shutdown recovery test — it's incompatible with the force-new-cluster recovery path where member removal + re-addition happens in ~1s (invisible to the test's 5s polling)
  • Replace with a post-recovery journal check that logs which recovery path was taken (path=clean-leave vs path=force-new-cluster) as a structured [RecoveryPath] line
  • Remove unused memberHasLeftTimeout constant

The race between clean-leave and force-new-cluster

When shutdown -r 1 is issued, systemd sets a shutdown inhibitor that pacemaker detects immediately (~1s) via the shutdown attribute — it does not wait for the 1-minute reboot delay. Pacemaker then schedules a transition to stop etcd:0 on the departing node, which runs the RA's podman_stop() → leave_etcd_member_list() sequence.

At the same time, the node is in the process of shutting down. knet (corosync's network layer) monitors the link to the departing node with ~84ms detection granularity, and TOTEM has a 3000ms token timeout.
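Back-of-the-envelope, using those numbers: once the departing node's links actually drop, knet notices within ~84ms and the surviving node declares the peer dead once the 3000ms token timeout expires, roughly 3.1s in total. The RA therefore has only the time until the network stack is torn down, plus about 3s, to finish etcdctl member remove; if networking goes first, it has no window at all.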

The race: podman_stop() must complete etcdctl member remove before knet detects the link is down and TOTEM expires. If it wins, the departing node cleanly removes itself from the etcd cluster (clean-leave path). If it loses — because the node's network stack goes down before the RA finishes — the surviving node loses quorum and falls back to force-new-cluster.

| Path | When it fires | Member-absent window | Observable at 5s polling? |
| --- | --- | --- | --- |
| Clean leave | podman_stop() completes before knet drops | ~28s (until manage_peer_membership() re-adds as learner) | Yes |
| Force-new-cluster | Node dies before RA stop completes | ~1s (implicit removal + immediate re-add) | No |
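To make the last column concrete, here is a toy illustration, not the test's code, of why a 5s poll can miss a ~1s state change entirely (the timeline is invented):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Hypothetical timeline: the member is absent only during [11s, 12s),
	// mimicking the ~1s force-new-cluster remove/re-add window.
	memberAbsent := func(t time.Duration) bool {
		return t >= 11*time.Second && t < 12*time.Second
	}

	// Sample every 5s, the interval the removed assertion polled at.
	for t := time.Duration(0); t <= 30*time.Second; t += 5 * time.Second {
		fmt.Printf("t=%-3v absent=%v\n", t, memberAbsent(t))
	}
	// No 5s tick lands inside [11s, 12s), so the removal is never observed;
	// a ~28s clean-leave window, by contrast, spans several consecutive ticks.
}
```

With a 1s window and a 5s period, a sample lands inside the window only about one time in five, and only by luck of phase; the ~28s clean-leave window is always caught.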

In validation across 6 runs with a patched payload, clean-leave won 5 of 6 times, though historical CI data shows it winning far less often. The one failure took the force-new-cluster path: podman_stop() never ran, knet dropped the link 2s after pacemaker detected the shutdown attribute, and the test's EnsureMemberRemoved assertion then waited 1800s for a member removal that had already happened and been reversed within 1 second.

This is not a regression — it's a pre-existing race inherent to the graceful shutdown mechanism. The EnsureMemberRemoved assertion tested a guarantee the system does not provide.

What this PR does

Removes the flaky assertion and replaces it with a logRecoveryPath() call that runs after recovery completes. It reads the surviving node's journal and emits a structured log line:

```
[sig-etcd][RecoveryPath] node=master-0 path=clean-leave
[sig-etcd][RecoveryPath] node=master-0 path=force-new-cluster journal="..."
```

This gives us parseable data to track how often each path fires in CI without failing the test.
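For reference, here is a minimal sketch of what logRecoveryPath plausibly looks like, pieced together from the review walkthrough and comments below; the exact signature, lookback window, and error handling in the real test may differ:

```go
import (
	"strings"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/kubernetes/test/e2e/framework"

	exutil "github.com/openshift/origin/test/extended/util"
)

// Sketch only: grep the surviving node's journal for the force-new-cluster
// marker and log which recovery path fired. The helper and its signature are
// taken from the review comments; treat the details as assumptions.
func logRecoveryPath(oc *exutil.CLI, survivedNode, targetNode corev1.Node) {
	output, err := exutil.DebugNodeRetryWithOptionsAndChroot(oc, survivedNode.Name, "openshift-etcd",
		"bash", "-c", "journalctl --since '30 min ago' --no-pager | grep -m1 'force.new.cluster' || true")
	if err != nil {
		framework.Logf("[sig-etcd][RecoveryPath] node=%s path=unknown err=%v", targetNode.Name, err)
		return
	}
	if match := strings.TrimSpace(output); match != "" {
		framework.Logf("[sig-etcd][RecoveryPath] node=%s path=force-new-cluster journal=%q", targetNode.Name, match)
	} else {
		framework.Logf("[sig-etcd][RecoveryPath] node=%s path=clean-leave", targetNode.Name)
	}
}
```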

Test plan

  • e2e-metal-ovn-two-node-fencing-ipv6-recovery-techpreview passes the graceful shutdown test
  • Ungraceful, network disruption, sequential, and all other recovery tests unchanged
  • No other test files modified

🤖 Generated with Claude Code

The EnsureMemberRemoved assertion is incompatible with the
force-new-cluster recovery path, where the member is removed and
re-added within ~1s — invisible to the test's polling interval.

Replace the assertion with a post-recovery journal check that logs
which recovery path was taken (clean-leave vs force-new-cluster)
as a structured [RecoveryPath] line for CI tracking.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@openshift-ci-robot

Pipeline controller notification
This repo is configured to use the pipeline controller. Second-stage tests will be triggered either automatically or after the lgtm label is added, depending on the repository configuration. The pipeline controller will automatically detect which contexts are required and will use /test Prow commands to trigger the second stage.

For optional jobs, comment /test ? to see a list of all defined jobs. To manually trigger all second-stage jobs, use the /pipeline required command.

This repository is configured in: automatic mode

@openshift-ci-robot added the jira/valid-reference and jira/valid-bug labels Apr 9, 2026
@openshift-ci-robot

@fonta-rh: This pull request references Jira Issue OCPBUGS-82144, which is valid. The bug has been moved to the POST state.

3 validation(s) were run on this bug
  • bug is open, matching expected state (open)
  • bug target version (4.22.0) matches configured target version for branch (4.22.0)
  • bug is in the state ASSIGNED, which is one of the valid states (NEW, ASSIGNED, POST)

The bug has been updated to refer to the pull request using the external bug tracker.


Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.

@openshift-ci
Contributor

openshift-ci bot commented Apr 9, 2026

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: fonta-rh
Once this PR has been reviewed and has the lgtm label, please assign jeff-roche for approval. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@coderabbitai

coderabbitai bot commented Apr 9, 2026

Walkthrough

A helper was added to inspect openshift-etcd journal logs and log the etcd recovery path; the test removed the explicit etcd-member-removal wait and now emits recovery-path logs after recovery validation.

Changes

| Cohort / File(s) | Summary |
| --- | --- |
| Test Recovery Logging (test/extended/two_node/tnf_recovery.go) | Added strings import and logRecoveryPath(oc, survivedNode, targetNode) to parse journalctl for force.new.cluster and emit [sig-etcd][RecoveryPath] logs (path=force-new-cluster, path=clean-leave, or path=unknown). Removed the explicit member-removal wait/assertion and deleted the unused memberHasLeftTimeout constant. |

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes


Warning

There were issues while running some tools. Please review the errors and either fix the tool's configuration or disable the tool if it's a critical failure.

🔧 golangci-lint (2.11.4)

```
Error: can't load config: unsupported version of the configuration: "" See https://golangci-lint.run/docs/product/migration-guide for migration instructions
The command is terminated due to an error: can't load config: unsupported version of the configuration: ""
```


Comment @coderabbitai help to get the list of available commands and usage tips.

@openshift-ci openshift-ci bot requested review from eggfoobar and jaypoulz April 9, 2026 05:39
@openshift-ci-robot

@fonta-rh: This pull request references Jira Issue OCPBUGS-82144, which is valid.

3 validation(s) were run on this bug
  • bug is open, matching expected state (open)
  • bug target version (4.22.0) matches configured target version for branch (4.22.0)
  • bug is in the state POST, which is one of the valid states (NEW, ASSIGNED, POST)


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@test/extended/two_node/tnf_recovery.go`:
- Around line 54-55: The command invocation using
exutil.DebugNodeRetryWithOptionsAndChroot silently swallows failures and may
miss logs due to the fixed 30-minute window; update the command string (the call
that sets output, err for survivedNode.Name) to (1) remove the redirection
"2>/dev/null" and the "|| true" so errors are returned to err, (2) extend or
make the lookback configurable (e.g., use an hour or a test-configurable window
instead of "30 min ago"), and (3) after the call, check err and the output
explicitly to classify recovery path instead of assuming success when output is
empty; ensure the modifications are applied to the other similar calls around
lines 60-64 as well.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 6910c19c-6a79-4d2b-bc0e-a5e7a471f85a

📥 Commits

Reviewing files that changed from the base of the PR and between e03cfa1 and f80c1de.

📒 Files selected for processing (1)
  • test/extended/two_node/tnf_recovery.go

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

@coderabbitai coderabbitai bot left a comment


♻️ Duplicate comments (1)
test/extended/two_node/tnf_recovery.go (1)

54-57: ⚠️ Potential issue | 🟠 Major

Do not swallow journal command failures in recovery-path detection

Line 55 appends || true, which masks failures and can misclassify a failed journal read as path=clean-leave when output is empty. That weakens CI recovery-path telemetry.

Suggested fix:

```diff
 output, err := exutil.DebugNodeRetryWithOptionsAndChroot(oc, survivedNode.Name, "openshift-etcd",
-	"bash", "-c", "journalctl --since '60 min ago' --no-pager | grep -m1 'force.new.cluster' || true")
+	"bash", "-c", "journalctl --since '60 min ago' --no-pager | grep -m1 -E 'force[.-]new[.-]cluster'; rc_j=${PIPESTATUS[0]}; rc_g=${PIPESTATUS[1]}; if [ $rc_j -ne 0 ]; then exit $rc_j; fi; if [ $rc_g -eq 1 ]; then exit 0; fi; exit $rc_g")
```

Verification script:

```bash
#!/bin/bash
# Verify that the recovery-path command still suppresses errors with `|| true`
# and does not use PIPESTATUS-based handling.
rg -n -C3 "logRecoveryPath|journalctl --since '60 min ago'|\|\| true|PIPESTATUS" test/extended/two_node/tnf_recovery.go
```

Expected result: the command in logRecoveryPath shows || true and lacks explicit PIPESTATUS handling, confirming masked failures are possible.

As per coding guidelines, "Focus on major issues impacting performance, readability, maintainability and security. Avoid nitpicks and avoid verbosity."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/extended/two_node/tnf_recovery.go` around lines 54 - 57, The
recovery-path journal command currently swallows errors by appending "|| true"
in the call to DebugNodeRetryWithOptionsAndChroot inside logRecoveryPath
(tnf_recovery.go); remove the "|| true" and run the command under bash with "set
-o pipefail" so journalctl failures propagate (e.g. "bash -c 'set -o pipefail;
journalctl --since ... --no-pager | grep -m1 \"force.new.cluster\"'"), then let
the existing error handling around DebugNodeRetryWithOptionsAndChroot/reporting
handle non-zero exits so true failures aren't misclassified as clean-leave.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Duplicate comments:
In `@test/extended/two_node/tnf_recovery.go`:
- Around line 54-57: The recovery-path journal command currently swallows errors
by appending "|| true" in the call to DebugNodeRetryWithOptionsAndChroot inside
logRecoveryPath (tnf_recovery.go); remove the "|| true" and run the command
under bash with "set -o pipefail" so journalctl failures propagate (e.g. "bash
-c 'set -o pipefail; journalctl --since ... --no-pager | grep -m1
\"force.new.cluster\"'"), then let the existing error handling around
DebugNodeRetryWithOptionsAndChroot/reporting handle non-zero exits so true
failures aren't misclassified as clean-leave.
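Assembled in the test, the suggested handling might look like the following sketch; the key assumption is that grep exiting 1 (no match) must still be treated as success so an empty journal classifies as clean-leave rather than an error:

```go
// Sketch of the review's suggested fix: surface journalctl failures as err
// while mapping grep's "no match" exit code (1) back to success.
cmd := "journalctl --since '60 min ago' --no-pager | grep -m1 'force.new.cluster'; " +
	"rc_j=${PIPESTATUS[0]}; rc_g=${PIPESTATUS[1]}; " +
	"if [ $rc_j -ne 0 ]; then exit $rc_j; fi; " +
	"if [ $rc_g -eq 1 ]; then exit 0; fi; exit $rc_g"
output, err := exutil.DebugNodeRetryWithOptionsAndChroot(oc, survivedNode.Name, "openshift-etcd",
	"bash", "-c", cmd)
if err != nil {
	// A genuine journal-read failure now reaches this branch instead of
	// being silently reported as path=clean-leave.
	framework.Logf("[sig-etcd][RecoveryPath] node=%s path=unknown err=%v", targetNode.Name, err)
	return
}
// ...classify output as force-new-cluster vs clean-leave as before.
```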

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: ed9cf105-0686-4c33-9f7b-d74bff318ede

📥 Commits

Reviewing files that changed from the base of the PR and between f80c1de and a3a1d33.

📒 Files selected for processing (1)
  • test/extended/two_node/tnf_recovery.go

@openshift-ci-robot

Scheduling required tests:
/test e2e-aws-csi
/test e2e-aws-ovn-fips
/test e2e-aws-ovn-microshift
/test e2e-aws-ovn-microshift-serial
/test e2e-aws-ovn-serial-1of2
/test e2e-aws-ovn-serial-2of2
/test e2e-gcp-csi
/test e2e-gcp-ovn
/test e2e-gcp-ovn-upgrade
/test e2e-metal-ipi-ovn-ipv6
/test e2e-vsphere-ovn
/test e2e-vsphere-ovn-upi

@openshift-ci
Contributor

openshift-ci bot commented Apr 9, 2026

@fonta-rh: all tests passed!

Full PR test history. Your PR dashboard.


Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.

@fonta-rh
Contributor Author

fonta-rh commented Apr 9, 2026

/payload-job periodic-ci-openshift-release-main-nightly-4.22-e2e-metal-ovn-two-node-fencing-ipv6-recovery-techpreview

@openshift-ci
Contributor

openshift-ci bot commented Apr 9, 2026

@fonta-rh: trigger 1 job(s) for the /payload-(with-prs|job|aggregate|job-with-prs|aggregate-with-prs) command

  • periodic-ci-openshift-release-main-nightly-4.22-e2e-metal-ovn-two-node-fencing-ipv6-recovery-techpreview

See details on https://pr-payload-tests.ci.openshift.org/runs/ci/79753760-3402-11f1-8b14-d39fd3e90f64-0


Labels

  • jira/valid-bug: Indicates that a referenced Jira bug is valid for the branch this PR is targeting.
  • jira/valid-reference: Indicates that this PR references a valid Jira ticket of any type.
