In system-tests, a scenario is a set of:
- a tested architecture, which can be a set of docker containers, a single container, or even nothing
- a list of setup functions executed on this tested architecture
- and a list of tests
Every scenario is identified by a unique identifier in capital letters, like APPSEC_IP_BLOCKING_FULL_DENYLIST. To specify a scenario, simply use its name after run.sh:
```bash
./run.sh APPSEC_IP_BLOCKING_FULL_DENYLIST
```

If no scenario is specified, the DEFAULT scenario is executed.
A scenario's architecture is defined in Python, in the file utils/_context/scenarios/__init__.py. Most scenarios are based on the EndToEndScenario class, which spawns a container with a weblog (shipping a Datadog tracer), a container with a Datadog agent, and a proxy that spies on everything coming from the tracer and the agent. Optionally, other containers can be added (mostly databases).
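For illustration, declaring a new scenario might look like the minimal sketch below. The constructor arguments shown are assumptions for readability, not the exact EndToEndScenario signature; the real definitions live in utils/_context/scenarios/__init__.py.

```python
# Minimal sketch of a scenario declaration. The argument names below are
# illustrative assumptions, not the exact EndToEndScenario signature.
my_scenario = EndToEndScenario(
    "MY_SCENARIO",                           # identifier used on the run.sh command line
    weblog_env={"DD_SOME_FEATURE": "true"},  # extra environment for the weblog container
    doc="What this scenario exercises",
)
```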
A setup is a class method paired with a test method: it carries the same name, with a setup_ prefix in place of test_. All setup methods are executed before any test starts. If a test is not executed (whatever the reason), its setup method won't be executed either.
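In practice, a setup/test pair looks roughly like the sketch below (weblog and interfaces come from the utils module; the exact assertion helper shown is illustrative):

```python
from utils import weblog, interfaces

class Test_Feature:
    def setup_main(self):
        # Executed before any test starts: send a request to the weblog
        self.r = weblog.get("/some/endpoint")

    def test_main(self):
        # Executed during the test phase: assert on what the proxy captured
        interfaces.library.assert_trace_exists(self.r)
```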
The scenarios singleton is available under the utils module. It exposes one decorator per scenario. Simply decorate your test class/method with it:
```python
@scenarios.my_scenario
class Test_Something:
    ...
```

System-tests contains various testing scenarios; the two most commonly used are called "End-To-End" and "Parametric."
Based on the EndToEndScenario class, they spawn a "weblog" HTTP server designed to mimic customer applications with automatic instrumentation, a "test-agent" to mimic the Datadog Agent, and route all communication with the Datadog backend through a proxy. The DEFAULT scenario is the main scenario of system tests, and belongs to this family.
End-To-End scenarios are good for testing real-world behavior -- they support the full lifecycle of a trace (hence the name, "End-To-End"). Use End-To-End scenarios to test tracing integrations, security products, profiling, dynamic instrumentation, and more. When in doubt, use End-To-End.
Parametric scenarios are designed to validate tracer and span interfaces. They are more lightweight and support testing features with many input parameters. They should be used to test operations such as creating spans, setting tags, setting links, injecting/extracting HTTP headers, getting tracer configurations, etc. You can find dedicated parametric instructions in parametric.md.
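Following the run.sh convention shown above, the parametric scenario is selected by its name (see parametric.md for library selection and test filtering options):

```bash
./run.sh PARAMETRIC
```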
Automatic library injection simplifies the APM onboarding experience for customers deploying Java, Node.js, .NET, Python, and Ruby applications in VMs and containerized environments. Datadog software installed on the machine intercepts the startup of your application and injects the tracer library automatically. The Onboarding scenarios reproduce different environments and check that the library injection is done correctly. The SSI scenarios can be split into two groups:
- AWS SSI tests: run an AWS EC2 instance and install a provision. A provision is usually a Datadog SSI software installation plus a weblog installation. We check that the weblog is auto-instrumented. More detailed documentation can be found here.
- Docker SSI tests: run on Docker and install a provision. A provision is usually a Datadog SSI software installation plus a weblog installation. We check that the weblog is auto-instrumented. More detailed documentation can be found here.
The lib-injection project is a feature that allows injecting the Datadog library into a customer's application container without requiring them to modify their application images.
This feature enables applications written in Java, Node.js, Python, .NET or Ruby running in Kubernetes to be automatically instrumented with the corresponding Datadog APM libraries. More detailed documentation can be found here.
The AI_GUARD scenario tests the AI Guard SDK integration across tracer libraries. It uses a VCR cassettes container to replay pre-recorded AI Guard API responses, validating evaluation actions (ALLOW, DENY, ABORT), span metadata, sensitive data scanning, and multi-modal content handling. See ai_guard.md for details.
The IPV6 scenario sets up an IPv6 Docker network and uses an IPv6 address as DD_AGENT_HOST to verify that the library can communicate with the agent over IPv6. No proxy is placed between the library and the agent, to avoid interfering at any point, so all assertions must be made on the traffic going out of the agent.
Please note that it requires the Docker daemon to support IPv6. This should be fine on Linux CI and macOS, but has not been tested on Windows.
One user saw their laptop's networking altered after running it on Linux (still to be investigated). If this happens, docker network prune may solve the issue.
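For reference, IPv6 is typically enabled on a Linux Docker daemon in /etc/docker/daemon.json, followed by a daemon restart. This is a minimal sketch based on Docker's documented options; the subnet below is an arbitrary example from the documentation range:

```json
{
  "ipv6": true,
  "fixed-cidr-v6": "2001:db8:1::/64"
}
```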
```mermaid
flowchart LR
    %% Nodes
    A("Test runner")
    B("Proxy (Envoy or HAProxy)")
    C("Go security processor")
    D("HTTP app")
    E("Proxy")
    F("Agent")
    G("Backend")
    %% Edge connections between nodes
    A --> B --> D
    B --> C --> B
    C --> E --> F --> G
```
System tests spawn several services before starting. Here is the lifecycle:
- Starts agent proxy and library proxy
- Starts runner => the runner spies on communication through the proxies, and starts the tests only when all components are up and running
- Starts agent
- Starts weblog
- Executes tests
- Exports all containers' logs
- End of process
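The same lifecycle, as a diagram:

```mermaid
flowchart LR
    P("Start proxies") --> R("Start runner") --> A("Start agent") --> W("Start weblog") --> T("Execute tests") --> L("Export logs") --> E("End of process")
```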
- How to run a scenario -- running tests and selecting scenarios
- How to add a new scenario -- creating a new scenario
- Architecture overview -- how the test components fit together
- Weblogs -- the test applications used across scenarios
- Back to documentation index