
fix(e2e): reduce E2E test flakiness#8480

Open
r2k1 wants to merge 2 commits into main from e2e-flakiness-fixes

Conversation

Contributor

@r2k1 r2k1 commented May 8, 2026

Summary

Fixes several E2E test issues found by analyzing 55 builds on main over 3 weeks.

Changes

1. Remove FailedCreatePodSandBox early exit (kube.go)

The old code listed pod events every 3s and killed the test on the first FailedCreatePodSandBox event. But kubelet retries sandbox creation automatically — transient errors like "loopback interrupted system call" resolve on their own. The early exit was killing tests before kubelet could recover.

Removed the event check entirely. The 5-minute poll timeout handles persistent failures. CrashLoopBackOff still fails fast.

Also removed the unused WaitUntilPodRunningWithRetry wrapper and maxRetries parameter.
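
For illustration, the simplified wait now has roughly this shape. This is a sketch based on the WaitUntilPodRunning snippet quoted in the review comments below (signature, 3s/5min poll, k.Typed client); the placement of the CrashLoopBackOff fast-fail and the usual client-go imports (context, fmt, time, corev1, metav1, wait) are assumptions, not a verbatim copy of kube.go:

func (k *Kubeclient) WaitUntilPodRunning(ctx context.Context, namespace string, labelSelector string, fieldSelector string) (*corev1.Pod, error) {
	var pod *corev1.Pod
	// No event listing and no FailedCreatePodSandBox early exit: the 3s/5min poll
	// is the only budget, so kubelet gets time to retry sandbox creation.
	err := wait.PollUntilContextTimeout(ctx, 3*time.Second, 5*time.Minute, true, func(ctx context.Context) (bool, error) {
		pods, err := k.Typed.CoreV1().Pods(namespace).List(ctx, metav1.ListOptions{
			LabelSelector: labelSelector,
			FieldSelector: fieldSelector,
		})
		if err != nil {
			return false, err
		}
		for i := range pods.Items {
			p := &pods.Items[i]
			for _, cs := range p.Status.ContainerStatuses {
				// CrashLoopBackOff still fails fast (assumed placement of this check).
				if cs.State.Waiting != nil && cs.State.Waiting.Reason == "CrashLoopBackOff" {
					return false, fmt.Errorf("pod %s is in CrashLoopBackOff", p.Name)
				}
			}
			if p.Status.Phase == corev1.PodRunning {
				pod = p
				return true, nil
			}
		}
		return false, nil
	})
	return pod, err
}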

2. Fix duplicate CSE event extraction (cse_timing.go)

The find command searched two overlapping paths: cseEventsDir (.../events/) and its parent directory with -path '*/events/*'. The same files matched twice, producing duplicate tasks and spurious Task_installDeps#01 subtests (observed in 11 of 46 builds).

Fixed by using a single starting path.
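
For reference, the single-path listing looks like this. The command string is quoted verbatim in a later review snippet and listCmd is the variable referenced there; the exact construction around it is simplified here:

	// One starting path, so each events/*.json file is printed exactly once.
	listCmd := "sudo find /var/log/azure/Microsoft.Azure.Extensions.CustomScript/ " +
		"-name '*.json' -path '*/events/*' -exec sh -c 'cat \"$1\"; echo' _ {} \\; 2>/dev/null"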

3. Strict wireserver validation (validation.go)

Previously retried for 1 minute, passing if curl eventually timed out. But any successful curl means wireserver was reachable from pods — a security issue, not a transient condition to retry through.

Changed to single curl per check, fail immediately if reachable. Added diagnostics on failure: FORWARD chain, KUBE-FORWARD chain, iptables-save, conntrack entries.
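
The strict check ends up looking roughly like the snippet below, adapted from the diff quoted later in this thread. Helpers such as execOnUnprivilegedPod and execScriptOnVMForScenario are the repo's own functions shown in that snippet, and the exact Fatalf message is paraphrased:

	execResult, execErr := execOnUnprivilegedPod(ctx, s.Runtime.Cluster.Kube, nonHostPod.Namespace, nonHostPod.Name, check.cmd)
	require.NoError(s.T, execErr, "failed to exec wireserver check %q on debug pod", check.desc)
	if execResult.exitCode == "0" {
		// A successful connection is a security issue: gather diagnostics, then fail immediately.
		iptablesFwd := execScriptOnVMForScenario(ctx, s, "sudo iptables -t filter -L FORWARD -v -n --line-numbers").String()
		iptablesKubeFwd := execScriptOnVMForScenario(ctx, s, "sudo iptables -t filter -L KUBE-FORWARD -v -n --line-numbers 2>/dev/null || echo 'chain not found'").String()
		iptablesSave := execScriptOnVMForScenario(ctx, s, "sudo iptables-save -t filter 2>/dev/null | head -80").String()
		conntrack := execScriptOnVMForScenario(ctx, s, "sudo conntrack -L -d 168.63.129.16 2>/dev/null || echo 'conntrack not available'").String()
		s.T.Fatalf("wireserver must not be reachable from pods (check %q): curl exit code 0\nFORWARD:\n%s\nKUBE-FORWARD:\n%s\niptables-save:\n%s\nconntrack:\n%s",
			check.desc, iptablesFwd, iptablesKubeFwd, iptablesSave, conntrack)
	}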

Contributor

Copilot AI left a comment


Pull request overview

This PR targets major sources of E2E test flakiness by tightening readiness gating, reducing validator false positives, and stabilizing timing-based assertions. It also extends scenario mutator hooks to receive cluster context, enabling scenarios that depend on cluster-level derived values (e.g., a proxy URL).

Changes:

  • Make node readiness waiting more accurate by requiring cloud-provider uninitialized taint removal before proceeding.
  • Reduce false-positive failures by expanding the eBPF/iptables allowlist and deduplicating CSE timing events; relax overly tight CSE perf thresholds.
  • Update E2E scenario mutator function signatures to accept *Cluster, adjust specific scenarios (e.g., Flatcar AzureCNI), and skip a consistently failing Ubuntu 20.04 FIPS lane.
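
Regarding the first bullet above (the uninitialized-taint gate): a node would only count as ready once the cloud-provider taint is gone and NodeReady is true. A minimal sketch, assuming standard client-go types; the function name and the exact place this check lives in WaitUntilNodeReady are assumptions:

func nodeSchedulableAndReady(node *corev1.Node) bool {
	// The cloud-provider taint must be removed before test pods can schedule.
	for _, t := range node.Spec.Taints {
		if t.Key == "node.cloudprovider.kubernetes.io/uninitialized" {
			return false
		}
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
			return true
		}
	}
	return false
}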

Reviewed changes

Copilot reviewed 5 out of 5 changed files in this pull request and generated no comments.

Per-file summary:

  • e2e/kube.go: Improves node readiness gating; adds proxy ConfigMap/DaemonSet + proxy discovery; fixes DaemonSet CreateOrUpdate mutation.
  • e2e/cluster.go: Adds cluster-level ProxyURL plumbing, sets up private DNS for the API server, and broadens retryable cluster-create errors.
  • e2e/aks_model.go: Adds 409-conflict handling/waits for private DNS zone and VNet link creation.
  • e2e/validation.go: Allows a standard DHCP INPUT rule in the eBPF-host-routing iptables compatibility allowlist.
  • e2e/cse_timing.go: Deduplicates extracted CSE event timings that can appear in multiple event directories.
  • e2e/scenario_cse_perf_test.go: Adjusts full-install timing thresholds (notably installDeps) and updates mutator signatures.
  • e2e/types.go: Updates BootstrapConfigMutator / AKSNodeConfigMutator signatures to include *Cluster.
  • e2e/test_helpers.go: Threads *Cluster through mutator invocations (including the pre-provision flow).
  • e2e/scenario_test.go: Removes forced Azure CNI plugin settings for Flatcar AzureCNI, skips Ubuntu2004FIPS, adds an HTTPS proxy + private DNS scenario, and updates mutator signatures across scenarios.
  • e2e/scenario_win_test.go: Updates bootstrap mutator signatures for Windows scenarios.
  • e2e/scenario_localdns_hosts_test.go: Updates mutator signatures for LocalDNS hosts plugin scenarios.
  • e2e/scenario_gpu_managed_experience_test.go: Updates mutator signatures for GPU managed experience scenarios.
  • e2e/scenario_gpu_daemonset_test.go: Updates mutator signatures for the GPU daemonset scenario.
Comments suppressed due to low confidence (2)

e2e/kube.go:330

  • EnsureDebugDaemonsets now creates the proxy ConfigMap/DaemonSet for every non-network-isolated cluster. If the proxy is only needed for a subset of scenarios, consider decoupling it from the general "debug daemonsets" setup so failures in pulling/starting the proxy don’t break unrelated tests during cluster preparation.
		return nil
	})
	if err != nil {
		return err
	}
	return nil
}

func (k *Kubeclient) createKubernetesSecret(ctx context.Context, namespace, secretName, registryName, username, password string) error {
	defer toolkit.LogStepCtxf(ctx, "creating kubernetes secret %s in namespace %s for registry %s", secretName, namespace, registryName)()

e2e/kube.go:606

  • The proxy DaemonSet runs with HostNetwork: true and exposes a fixed HostPort (8888) while tolerating all taints. This effectively opens an unauthenticated forward proxy on every system-pool node IP, which is reachable from within the VNet and may be abused or interfere with other host processes that might already bind the port. Consider scoping this down (e.g., run on a single chosen node/instance, tighten tolerations/nodeSelectors, and/or add network-level restrictions) and document the intended threat model for this test-only proxy.

@r2k1 r2k1 force-pushed the e2e-flakiness-fixes branch from ba7f563 to e29f86b on May 8, 2026 04:56
@r2k1 r2k1 force-pushed the e2e-flakiness-fixes branch from e29f86b to fb806d0 on May 8, 2026 04:57
Copilot AI review requested due to automatic review settings May 8, 2026 04:57

Add E2E test for node bootstrapping with HTTPProxyConfig set and
private DNS zone for the API server FQDN. Regression coverage for
IcM 603699115 / ADO#31707996.

Changes:
- Refactor BootstrapConfigMutator and AKSNodeConfigMutator to accept
  *Cluster parameter, enabling scenarios to access cluster properties
- Deploy Python-based CONNECT proxy DaemonSet on all non-isolated
  clusters using mcr.microsoft.com/cbl-mariner/base/python:3
- Create private DNS zone for API server FQDN on all non-isolated
  clusters, linked to VNet with A record
- Add Test_Ubuntu2204_HTTPProxy_PrivateDNS scenario
- Fix cluster creation retry to handle NotFound errors

Test verified: node boots, CSE completes, kubelet starts, node Ready,
test pod runs. Proxy receives CONNECT traffic from CSE outbound check.

Fixes: ADO#31707996

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
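
The mutator refactor described in this commit amounts to widening the hook signatures so scenarios can derive cluster-level values such as a proxy URL. A sketch with placeholder config-type names (BootstrapConfig and AKSNodeConfig are stand-ins for the repo's real config types, and the "before" shape is an assumption):

// Before (assumed): mutators only saw the node/bootstrap config.
// type BootstrapConfigMutator func(config *BootstrapConfig)

// After: mutators also receive the cluster.
type BootstrapConfigMutator func(cluster *Cluster, config *BootstrapConfig)
type AKSNodeConfigMutator func(cluster *Cluster, config *AKSNodeConfig)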
@r2k1 r2k1 force-pushed the e2e-flakiness-fixes branch from b1e817d to beeaaaa on May 8, 2026 05:32
@r2k1 r2k1 changed the title from "fix(e2e): address multiple sources of E2E test flakiness" to "fix(e2e): reduce E2E test flakiness" on May 8, 2026
Copilot AI review requested due to automatic review settings May 10, 2026 20:42
@r2k1 r2k1 force-pushed the e2e-flakiness-fixes branch from beeaaaa to 3e2dd81 on May 10, 2026 20:42

@r2k1 r2k1 force-pushed the e2e-flakiness-fixes branch from 3e2dd81 to af07286 on May 10, 2026 20:47
Copilot AI review requested due to automatic review settings May 10, 2026 20:50
@r2k1 r2k1 force-pushed the e2e-flakiness-fixes branch from af07286 to 7480aaf on May 10, 2026 20:50
@r2k1 r2k1 force-pushed the e2e-flakiness-fixes branch from 7480aaf to 2957903 on May 10, 2026 20:51

@r2k1 r2k1 force-pushed the e2e-flakiness-fixes branch from 2957903 to bb79253 on May 10, 2026 20:57
Copilot AI review requested due to automatic review settings May 10, 2026 21:01
@r2k1 r2k1 force-pushed the e2e-flakiness-fixes branch from bb79253 to 29ff11a on May 10, 2026 21:01

Contributor

Copilot AI left a comment


Pull request overview

Copilot reviewed 4 out of 4 changed files in this pull request and generated 4 comments.

Comments suppressed due to low confidence (1)

e2e/kube.go:83

  • PR description mentions tolerating transient FailedCreatePodSandBox events by aggregating event.Count and allowing retries (e.g., maxRetries=3), but WaitUntilPodRunning no longer has any sandbox-event handling and the prior retry-aware method was removed. Either implement the described sandbox-failure tolerance (including using event.Count) or adjust the PR description so reviewers/users don’t assume this mitigation exists.
func (k *Kubeclient) WaitUntilPodRunning(ctx context.Context, namespace string, labelSelector string, fieldSelector string) (*corev1.Pod, error) {
	defer toolkit.LogStepCtxf(ctx, "waiting for pod %s %s in %q namespace", labelSelector, fieldSelector, namespace)()
	var pod *corev1.Pod

	err := wait.PollUntilContextTimeout(ctx, 3*time.Second, 5*time.Minute, true, func(ctx context.Context) (bool, error) {
		pods, err := k.Typed.CoreV1().Pods(namespace).List(ctx, metav1.ListOptions{
			FieldSelector: fieldSelector,

Comment thread e2e/validation.go
// absent when kube-proxy or CNI flush/recreate iptables chains during node setup.
// We resolve the debug pod once up front (outside the retry budget) so that pod
// scheduling latency doesn't eat into the iptables-check timeout.
// validateWireServerBlocked checks that unprivileged pods cannot reach WireServer.
Comment thread e2e/validation.go Outdated
Comment on lines +295 to +303
for _, check := range checks {
var lastResult *podExecResult
err := wait.PollUntilContextTimeout(ctx, 10*time.Second, 1*time.Minute, true, func(ctx context.Context) (bool, error) {
execResult, execErr := execOnUnprivilegedPod(ctx, s.Runtime.Cluster.Kube, nonHostPod.Namespace, nonHostPod.Name, check.cmd)
if execErr != nil {
s.T.Logf("wireserver check %q: exec error (retrying): %v", check.desc, execErr)
return false, nil
}
lastResult = execResult
if lastResult.exitCode == "28" {
return true, nil
}
s.T.Logf("wireserver check %q: expected exit code 28, got %s (retrying)", check.desc, lastResult.exitCode)
return false, nil
})
if err != nil {
s.T.Logf("host IPTABLES: %s", execScriptOnVMForScenario(ctx, s, "sudo iptables -t filter -L FORWARD -v -n --line-numbers").String())
if lastResult == nil {
require.NoErrorf(s.T, err, "curl to %s did not complete before polling stopped", check.desc)
}
s.T.Logf("last curl result for %s: %s", check.desc, lastResult.String())
assert.Equal(s.T, "28", lastResult.exitCode, "curl to %s expected to fail with timeout, but it didn't after retries", check.desc)
s.T.FailNow()
execResult, execErr := execOnUnprivilegedPod(ctx, s.Runtime.Cluster.Kube, nonHostPod.Namespace, nonHostPod.Name, check.cmd)
require.NoError(s.T, execErr, "failed to exec wireserver check %q on debug pod", check.desc)
if execResult.exitCode != "28" {
iptablesFwd := execScriptOnVMForScenario(ctx, s, "sudo iptables -t filter -L FORWARD -v -n --line-numbers").String()
iptablesKubeFwd := execScriptOnVMForScenario(ctx, s, "sudo iptables -t filter -L KUBE-FORWARD -v -n --line-numbers 2>/dev/null || echo 'chain not found'").String()
iptablesSave := execScriptOnVMForScenario(ctx, s, "sudo iptables-save -t filter 2>/dev/null | head -80").String()
conntrack := execScriptOnVMForScenario(ctx, s, "sudo conntrack -L -d 168.63.129.16 2>/dev/null || echo 'conntrack not available'").String()
s.T.Fatalf("wireserver must not be reachable from pods: curl to %s returned exit code %s (expected 28/timeout)\n"+
Contributor Author

@r2k1 r2k1 May 10, 2026


Every single request to wireserver is expected to fail. If one occasionally doesn't fail, that highlights a potential issue that needs to be addressed.

Comment thread e2e/validation.go Outdated
Comment on lines 269 to 273
// validateWireServerBlocked checks that unprivileged pods cannot reach WireServer.
// The iptables FORWARD DROP rules blocking pod→WireServer traffic can be transiently
// absent when kube-proxy or CNI flush/recreate iptables chains during node setup.
// We resolve the debug pod once up front (outside the retry budget) so that pod
// scheduling latency doesn't eat into the iptables-check timeout.
// validateWireServerBlocked checks that unprivileged pods cannot reach WireServer.
// Wireserver must never be reachable from pods — any successful connection is a
// security issue, not a transient condition to retry through.
func validateWireServerBlocked(ctx context.Context, s *Scenario) {
Comment thread e2e/cse_timing.go
Contributor

Copilot AI left a comment


Pull request overview

Copilot reviewed 4 out of 4 changed files in this pull request and generated no new comments.

Comments suppressed due to low confidence (2)

e2e/validation.go:26

  • ValidatePodRunningWithRetry currently will not compile: for i := range maxRetries attempts to range over an int. Use a standard counted loop (e.g., for i := 0; i < maxRetries; i++) and ensure the function still performs at least one validation attempt when maxRetries is 0 or 1 (depending on intended semantics).
func ValidatePodRunningWithRetry(ctx context.Context, s *Scenario, pod *corev1.Pod, maxRetries int) {
	var err error
	for i := range maxRetries {
		err = validatePodRunning(ctx, s, pod)
		if err != nil {
			time.Sleep(1 * time.Second)
			s.T.Logf("retrying pod %q validation (%d/%d)", pod.Name, i+1, maxRetries)
			continue

e2e/kube.go:82

  • PR description says WaitUntilNodeReady() was updated to wait for removal of the node.cloudprovider.kubernetes.io/uninitialized:NoSchedule taint before returning success, but the implementation still returns as soon as NodeReady=True (no taint check). Either add the taint-removal wait as described, or update the PR description to match the actual behavior.
func (k *Kubeclient) WaitUntilPodRunning(ctx context.Context, namespace string, labelSelector string, fieldSelector string) (*corev1.Pod, error) {
	defer toolkit.LogStepCtxf(ctx, "waiting for pod %s %s in %q namespace", labelSelector, fieldSelector, namespace)()
	var pod *corev1.Pod

	err := wait.PollUntilContextTimeout(ctx, 3*time.Second, 5*time.Minute, true, func(ctx context.Context) (bool, error) {
		pods, err := k.Typed.CoreV1().Pods(namespace).List(ctx, metav1.ListOptions{

Contributor

Copilot AI left a comment


Pull request overview

Copilot reviewed 3 out of 3 changed files in this pull request and generated 1 comment.

Comments suppressed due to low confidence (2)

e2e/cse_timing.go:135

  • listCmd no longer references cseEventsDir, but the file still declares the cseEventsDir constant. That constant is now unused, which will cause go test/go vet to fail for the e2e module. Either remove cseEventsDir or reintroduce it into the command (without reintroducing duplicate file matches).
		"sudo find /var/log/azure/Microsoft.Azure.Extensions.CustomScript/ -name '*.json' -path '*/events/*' -exec sh -c 'cat \"$1\"; echo' _ {} \\; 2>/dev/null",
	)
	result, err := execScriptOnVm(ctx, s, s.Runtime.VM, listCmd)
	if err != nil {
		return nil, fmt.Errorf("failed to read CSE events: %w", err)
	}

e2e/kube.go:82

  • PR description mentions raising the installDeps CSE perf threshold in scenario_cse_perf_test.go, but that file is not part of the PR diff (only validation.go, kube.go, cse_timing.go are modified). Either include the intended threshold change in this PR or update the description to reflect the actual changes.
func (k *Kubeclient) WaitUntilPodRunning(ctx context.Context, namespace string, labelSelector string, fieldSelector string) (*corev1.Pod, error) {
	defer toolkit.LogStepCtxf(ctx, "waiting for pod %s %s in %q namespace", labelSelector, fieldSelector, namespace)()
	var pod *corev1.Pod

	err := wait.PollUntilContextTimeout(ctx, 3*time.Second, 5*time.Minute, true, func(ctx context.Context) (bool, error) {
		pods, err := k.Typed.CoreV1().Pods(namespace).List(ctx, metav1.ListOptions{

Comment thread e2e/validation.go Outdated
Contributor

Copilot AI left a comment


Pull request overview

Copilot reviewed 3 out of 3 changed files in this pull request and generated 1 comment.

Comment thread e2e/validation.go
Comment on lines +295 to +300
execResult, execErr := execOnUnprivilegedPod(ctx, s.Runtime.Cluster.Kube, nonHostPod.Namespace, nonHostPod.Name, check.cmd)
require.NoError(s.T, execErr, "failed to exec wireserver check %q on debug pod", check.desc)
if execResult.exitCode == "0" {
iptablesFwd := execScriptOnVMForScenario(ctx, s, "sudo iptables -t filter -L FORWARD -v -n --line-numbers").String()
iptablesKubeFwd := execScriptOnVMForScenario(ctx, s, "sudo iptables -t filter -L KUBE-FORWARD -v -n --line-numbers 2>/dev/null || echo 'chain not found'").String()
iptablesSave := execScriptOnVMForScenario(ctx, s, "sudo iptables-save -t filter 2>/dev/null | head -80").String()
Revert a1bebdc (feat(e2e): add HTTP_PROXY + private DNS test scenario)
which had issues on the e2e-flakiness-fixes branch.

Analysis of 55 E2E builds on main (3 weeks) showed an 84% failure rate.
Root causes identified and fixed:

1. Node readiness race (kube.go): WaitUntilNodeReady() returned success
   on NodeReady=True even when node still had the cloud-provider
   uninitialized taint, preventing test pod scheduling. Now waits for
   taint removal before declaring node ready.

2. IPtables false positives (validation.go): iptables eBPF-host-routing
   validator rejected a normal host DHCP INPUT rule (UDP/68) not in its
   allowlist. Added to allowlist.

3. CSE timing threshold (scenario_cse_perf_test.go): installDeps 90s
   threshold was set with 'no direct prod data' and consistently
   exceeded by the network-heavy apt workflow. Raised to 120s.

4. Duplicate CSE events (cse_timing.go): events appearing in both GA
   events directory and handler subdirectories created spurious
   Task_installDeps#01 subtests. Added deduplication.

5. Broken Ubuntu2004FIPS lane (scenario_test.go): Test added on
   2026-04-22 without VMSS FIPS capability setup, never green. Skipped
   until properly fixed.

Dropped from earlier version: Flatcar AzureCNI networkPlugin removal.
Rubber duck review found that removing networkPlugin=azure causes it
to default to kubenet (not none), which would break tests differently.
The proper fix requires PR #7463 (set to none instead).

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
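
For item 4 in the commit message above, the deduplication can be as simple as keying each extracted event and dropping repeats. A minimal sketch with assumed struct and field names (the real event handling lives in cse_timing.go):

type cseEvent struct {
	TaskName  string // hypothetical field names, for illustration only
	Timestamp string
}

func dedupeCSEEvents(events []cseEvent) []cseEvent {
	seen := make(map[string]bool, len(events))
	out := make([]cseEvent, 0, len(events))
	for _, ev := range events {
		key := ev.TaskName + "|" + ev.Timestamp
		if seen[key] {
			continue // the same JSON file was matched through two directory paths
		}
		seen[key] = true
		out = append(out, ev)
	}
	return out
}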
Contributor Author

@r2k1 r2k1 left a comment


Addressed review comment:

validation.go:300 — curl exit code check: Fixed. Now only fails on exit code 0 (successful connection), not on any non-28 code. Exit codes like 7 (connection refused), 28 (timeout), or 127 (curl missing) all correctly indicate wireserver is not reachable.

