
tests: Add virtio-net tests + supporting testing framework improvements#603

Draft
mtjhrc wants to merge 18 commits into containers:main from mtjhrc:network-testing

Conversation

@mtjhrc
Collaborator

@mtjhrc mtjhrc commented Mar 24, 2026

Changes:

  • Move network namespace isolation (unshare) from a single global wrapper to per-test isolation in the runner
  • Test runner can now clean up background processes (e.g. gvproxy, vmnet-helper) after each test
  • Add configurable per-test timeout to prevent hanging the suite
  • Tests can now specify a Containerfile to build a guest rootfs via podman — the runner builds the image, exports it, and mounts it as the guest's virtiofs root. Used by iperf3 tests to get a Fedora-based rootfs with iperf3 pre-installed.
  • Introduce TestOutcome::Report so tests can produce structured output (terminal text + GitHub-flavored markdown for CI summaries) instead of just pass/fail
  • Introduce functional virtio-net tests for passt, tap, gvproxy, and vmnet-helper backends using guest DHCP
  • Introduce parametrized iperf3 performance tests that run iperf3 client in the guest against a host server and report throughput

.arg("--fd")
.arg(helper_fd.to_string())
.arg("--enable-tso")
.arg("--enable-checksum-offload")
Contributor


It will be useful to test with and without offloading. Offloading does not work for some use cases, for example when the client is in a Kubernetes pod network. We get very low bandwidth (1000x slower) and a huge number of retransmits.

https://github.com/nirs/vmnet-helper/blob/615b5bb5dc4def4e9453e7f4caaa7ff9d7cdd3b3/performance/2026-02/M2/minikube/report.md#pod-network---iperf3-on-vm
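One way to act on this suggestion is to make offloading a test parameter instead of always passing the flags (a sketch: `vmnet_helper_cmd` and the `offload` knob are hypothetical; the flag names come from the quoted diff):

```rust
use std::process::Command;

/// Build the vmnet-helper command line, with offloading as a parameter
/// so tests can run both variants. Helper name and fd plumbing are
/// illustrative, not the PR's actual code.
fn vmnet_helper_cmd(helper_fd: i32, offload: bool) -> Command {
    let mut cmd = Command::new("vmnet-helper");
    cmd.arg("--fd").arg(helper_fd.to_string());
    if offload {
        // Flags from the quoted diff context above.
        cmd.arg("--enable-tso").arg("--enable-checksum-offload");
    }
    cmd
}
```

A parametrized test can then spawn one instance per variant and report both throughput numbers side by side.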

Collaborator Author


Hmm, makes sense.

Also, I am still unsure how I want to go about the performance testing: should we have many fixed variants of the tests here, or should we make them more parametrized and let the user pick?
The thing is, we probably won't be running these performance tests in CI for the foreseeable future anyway (no full macOS CI, and I'm not sure how consistent the performance baseline is on Linux...).

I put the performance tests here mostly because it's a really convenient way to run them locally for me when making changes to libkrun code (and reviewing PRs).

Contributor


You can check the vmnet-helper benchmarking infrastructure. It uses yaml to describe benchmark parameters:
https://github.com/nirs/vmnet-helper/tree/main/benchmarks

and plot parameters:
https://github.com/nirs/vmnet-helper/blob/main/plots/offloading.yaml

The bench run command reads the yaml file and runs all the benchmarks, saving results to json files:
https://github.com/nirs/vmnet-helper/blob/main/bench

The bench plot command reads the plot yaml files describing the plots and reads the data generated by bench run.

The benchmarks are very noisy since we run multiple VMs and we cannot control what macOS runs in the background.

To compare results for a PR, you need to run the same benchmark twice: once with the previous version and once with the change. The runs must be long enough to mitigate random noise.

Collaborator Author


Yes, the graphs and everything are nice! But it makes me wonder whether it is necessary to replicate the whole testing infrastructure here (though I guess we want that for the other back-ends on Linux).
Anyway, such improvements are out of scope for this PR, but they can be added later.

Contributor


Network benchmarks for different programs and network proxies sound like a separate project. This could be used by vfkit, krunkit, qemu, lima, minikube, vmnet-helper, gvproxy, passt and more.

slp and others added 5 commits March 25, 2026 13:22
If there's an eth0 interface present, configure it with DHCP.

Signed-off-by: Sergio Lopez <slp@redhat.com>
Replace the temporary link-local address (169.254.1.1) workaround with
SO_BINDTODEVICE.

The temp address caused the kernel to use 169.254.1.1 as the source IP in DHCP
packets; gvproxy then tried to reply to that address and failed with
"no route to host". With this change the source IP should be 0.0.0.0, which is
what RFC 2131 requires for DHCPDISCOVER.

Signed-off-by: Matej Hrica <mhrica@redhat.com>
When a server answers DHCPDISCOVER with DHCPOFFER instead of an immediate ACK,
send a DHCPREQUEST for the offered address and wait for the final ACK.

This makes DHCP work on macOS hosts when using gvproxy for networking.

Signed-off-by: Matej Hrica <mhrica@redhat.com>
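The handshake this commit implements follows the RFC 2131 exchange (DISCOVER, OFFER, REQUEST, ACK). As a toy state machine (names and shape are illustrative, not the guest's actual DHCP code):

```rust
// Client states: DISCOVER has already been broadcast when we are in
// Selecting; this function only models reactions to server messages.
#[derive(Debug, PartialEq)]
enum DhcpState { Init, Selecting, Requesting, Bound }

#[derive(Debug, Clone, Copy)]
enum ServerMsg { Offer, Ack, Nak }

fn step(state: DhcpState, msg: ServerMsg) -> DhcpState {
    match (state, msg) {
        // An OFFER means we must send DHCPREQUEST for the offered address.
        (DhcpState::Selecting, ServerMsg::Offer) => DhcpState::Requesting,
        // Some servers ACK immediately after DISCOVER; either way an ACK
        // completes the exchange.
        (DhcpState::Selecting, ServerMsg::Ack)
        | (DhcpState::Requesting, ServerMsg::Ack) => DhcpState::Bound,
        // A NAK restarts the exchange from scratch.
        (_, ServerMsg::Nak) => DhcpState::Init,
        (s, _) => s,
    }
}
```

The macOS/gvproxy case in this commit is the Selecting to Requesting to Bound path; the previous code only handled the immediate-ACK shortcut.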
Signed-off-by: Matej Hrica <mhrica@redhat.com>
Instead of wrapping the entire test runner in a single unshare
namespace from run.sh, perform per-test network namespace isolation
directly in the runner when spawning each test subprocess.

On Linux, each test is wrapped with `unshare --user --map-root-user
--net` and loopback is brought up inside the namespace. If unshare
is unavailable, tests run without isolation (with a warning).
On macOS, tests run directly without unshare.

Signed-off-by: Matej Hrica <mhrica@redhat.com>
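The per-test wrapping described in this commit might look roughly like this (a sketch: the helper name is made up; the command line follows the commit message):

```rust
/// Build the command line used to launch one test. On Linux, wrap it in
/// a user+network namespace and bring loopback up first; elsewhere (or
/// if unshare is unavailable) run the test binary directly.
fn wrap_for_isolation(test_bin: &str) -> Vec<String> {
    if cfg!(target_os = "linux") {
        vec![
            "unshare".into(),
            "--user".into(),
            "--map-root-user".into(),
            "--net".into(),
            "--".into(),
            "sh".into(),
            "-c".into(),
            // lo starts down in a fresh netns; tests need it up.
            format!("ip link set lo up && exec {test_bin}"),
        ]
    } else {
        vec![test_bin.to_string()]
    }
}
```

Doing this per test (instead of once around the whole runner) means one test's leaked sockets or interfaces cannot affect the next test.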
mtjhrc added 12 commits March 26, 2026 12:56
Buildah creates rootfs content with mapped UIDs, so the self-hosted
runner needs sudo to remove the test directory before the next run.

Signed-off-by: Matej Hrica <mhrica@redhat.com>
Rootfs directories contain files with mapped UIDs that the runner
can't read, breaking the artifact zip upload.

Signed-off-by: Matej Hrica <mhrica@redhat.com>
Use `buildah unshare -- unshare --net` instead of
`unshare --user --map-root-user --net` to get proper
UIDs/GIDs inside the test namespace via /etc/subuid and /etc/subgid.

Signed-off-by: Matej Hrica <mhrica@redhat.com>
Signed-off-by: Matej Hrica <mhrica@redhat.com>
Move TestOutcome from the runner into test_cases so individual tests
can return their own outcome from check(). The runner now uses the
returned value directly instead of relying solely on catch_unwind to
distinguish pass from fail.

Signed-off-by: Matej Hrica <mhrica@redhat.com>
Add a Report variant to TestOutcome that carries a ReportImpl trait
object, allowing tests to produce structured output (text for the
terminal, GitHub-flavored markdown for CI summaries) instead of a
simple pass/fail.

Signed-off-by: Matej Hrica <mhrica@redhat.com>
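The shape described here might look like the following (a sketch: variant and trait names follow the commit message; the `Throughput` report is a made-up example, not the PR's actual type):

```rust
/// A report renders itself twice: plain text for the terminal and
/// GitHub-flavored markdown for the CI step summary.
trait ReportImpl {
    fn text(&self) -> String;
    fn markdown(&self) -> String;
}

enum TestOutcome {
    Pass,
    Fail,
    Report(Box<dyn ReportImpl>),
}

/// Example report an iperf3 test could return (illustrative only).
struct Throughput { gbits: f64 }

impl ReportImpl for Throughput {
    fn text(&self) -> String {
        format!("throughput: {:.2} Gbit/s", self.gbits)
    }
    fn markdown(&self) -> String {
        format!("| throughput | {:.2} Gbit/s |", self.gbits)
    }
}
```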
Register background process PIDs (gvproxy, vmnet-helper) for automatic
cleanup after each test. The runner sends SIGTERM, waits up to 5s, then
SIGKILL any survivors.

Signed-off-by: Matej Hrica <mhrica@redhat.com>
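A minimal sketch of this cleanup policy, shelling out to the `kill` binary to stay within std (the real runner may signal via libc instead; note that `kill -0` also reports unreaped zombies as alive, so the parent should still reap any children it spawned):

```rust
use std::process::Command;
use std::time::{Duration, Instant};

/// Probe whether a PID still exists (`kill -0` sends no signal).
fn alive(pid: u32) -> bool {
    Command::new("kill")
        .args(["-0", &pid.to_string()])
        .status()
        .map(|s| s.success())
        .unwrap_or(false)
}

/// SIGTERM everything, wait up to `grace`, then SIGKILL survivors.
fn cleanup(pids: &[u32], grace: Duration) {
    for pid in pids {
        let _ = Command::new("kill").arg(pid.to_string()).status(); // SIGTERM
    }
    let deadline = Instant::now() + grace;
    while Instant::now() < deadline && pids.iter().any(|&p| alive(p)) {
        std::thread::sleep(Duration::from_millis(100));
    }
    for &pid in pids.iter().filter(|&&p| alive(p)) {
        let _ = Command::new("kill").args(["-9", &pid.to_string()]).status(); // SIGKILL
    }
}
```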
Each test has a configurable timeout (default 15s). If the child process
doesn't exit within the deadline, the runner kills it, dumps any captured
stdout, cleans up registered PIDs, and reports FAIL.

Signed-off-by: Matej Hrica <mhrica@redhat.com>
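Since std has no wait-with-timeout, the deadline can be enforced by polling `try_wait()` (a sketch of the policy described in this commit, not the runner's actual code):

```rust
use std::process::{Child, Command};
use std::time::{Duration, Instant};

/// Returns Ok(true) if the child exited within `timeout`,
/// Ok(false) if it had to be killed.
fn wait_with_timeout(child: &mut Child, timeout: Duration) -> std::io::Result<bool> {
    let deadline = Instant::now() + timeout;
    loop {
        if child.try_wait()?.is_some() {
            return Ok(true); // exited in time
        }
        if Instant::now() >= deadline {
            child.kill()?; // deadline hit: kill and report FAIL upstream
            child.wait()?; // reap so no zombie is left behind
            return Ok(false);
        }
        std::thread::sleep(Duration::from_millis(50));
    }
}
```

The runner would then dump the captured stdout and run the PID cleanup before reporting FAIL for the timed-out test.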
Stream test stdout directly to stdout.txt in the test artifacts
directory instead of buffering in memory. Read it back for check().
This ensures raw output (e.g. iperf3 JSON) is always available in
artifacts, and shows where the test got stuck if it times out.

Signed-off-by: Matej Hrica <mhrica@redhat.com>
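Streaming the child's stdout straight to a file instead of buffering in memory can be done by handing the open file to `Stdio` (a sketch; the helper name and path are illustrative):

```rust
use std::fs::File;
use std::path::Path;
use std::process::{Child, Command, Stdio};

/// Spawn a test with its stdout redirected to stdout.txt in the
/// artifacts directory, so raw output survives even if the test
/// times out or panics.
fn spawn_streaming(cmd: &mut Command, artifacts_dir: &Path) -> std::io::Result<Child> {
    let out = File::create(artifacts_dir.join("stdout.txt"))?;
    cmd.stdout(Stdio::from(out)).spawn()
}
```

After the child exits, check() can simply read stdout.txt back, e.g. to parse iperf3's JSON.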
Add tests for passt, tap, gvproxy, and vmnet-helper using
guest DHCP setup across the supported network backends.

Signed-off-by: Matej Hrica <mhrica@redhat.com>
Add parametrized performance tests for each virtio-net backend
(passt, tap, gvproxy, vmnet-helper) in both upload and download
directions. Each test starts an iperf3 server on the host, runs
the iperf3 client inside a Fedora-based guest VM, and reports
throughput results as structured text/markdown via the Report
outcome.

Tests require IPERF_DURATION to be set at compile time and use a
podman-built rootfs with iperf3 pre-installed. They are skipped
when prerequisites are unavailable.

Signed-off-by: Matej Hrica <mhrica@redhat.com>
- Install buildah for namespace isolation in tests
- Build passt from source (Ubuntu 24.04 apt version is too old)
- Install dnsmasq and iperf3 for tap and perf tests

Signed-off-by: Matej Hrica <mhrica@redhat.com>
Build with NET=1 and run network/iperf3 tests in CI.

Signed-off-by: Matej Hrica <mhrica@redhat.com>
