This directory contains integration testing utilities for NetGraph scenarios. The framework provides modular utilities for validating network topologies, blueprint expansions, failure policies, traffic demands, and flow results.
- ScenarioTestHelper: Main validation class with modular test methods
- NetworkExpectations: Structured expectations for network validation
- ScenarioDataBuilder: Builder pattern for programmatic scenario creation
- ScenarioValidationConfig: Configuration for selective validation control
- SCENARIO_*_EXPECTATIONS: Predefined expectations for each test scenario
- Validation constants: Reusable constants for consistent validation
- Helper functions: Calculations for topology expectations
- NetworkTemplates: Common topology patterns (linear, star, mesh, ring, tree)
- BlueprintTemplates: Reusable blueprint patterns for hierarchies
- FailurePolicyTemplates: Standard failure scenario configurations
- TrafficDemandTemplates: Traffic demand patterns and distributions
- WorkflowTemplates: Common analysis workflow configurations
- ScenarioTemplateBuilder: High-level builder for complete scenarios
- CommonScenarios: Pre-built scenarios for typical use cases
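The topology-expectation helper functions mentioned above reduce to small pure functions. The names below are hypothetical sketches, but the arithmetic matches the expectations used elsewhere in this document (each undirected physical link expands to two directed edges):

```python
def expected_directed_edges(physical_links: int) -> int:
    """Each undirected physical link expands to two directed graph edges."""
    return physical_links * 2


def full_mesh_link_count(node_count: int) -> int:
    """Undirected links in a full mesh of n nodes: n * (n - 1) / 2."""
    return node_count * (node_count - 1) // 2
```

For example, a 6-node scenario with 10 physical links yields an expected `edge_count` of 20.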
**Scenario 1**
- Tests: Network parsing, link definitions, traffic matrices, single failure policies
- Scale: 6 nodes, 10 links, 4 traffic demands
- Requirements: Basic YAML parsing, graph construction

**Scenario 2**
- Tests: Blueprint expansion, parameter overrides, mesh patterns, hierarchical naming
- Scale: 15+ nodes from blueprint expansion, nested hierarchies 3 levels deep
- Requirements: Blueprint system, DSL parsing, mesh connectivity algorithms

**Scenario 3**
- Tests: Deep blueprint nesting, capacity probing, node/link overrides, flow analysis
- Scale: 20+ nodes, 3-tier hierarchy, regex pattern matching
- Requirements: Clos topology knowledge, capacity probe workflow, override systems

**Scenario 4**
- Tests: Variable expansion, component system, multi-tier hierarchies, workflow transforms
- Scale: 80+ nodes, 4+ hierarchy levels, multiple data centers
- Requirements: Component library, variable expansion, workflow transforms
Each scenario uses two test patterns:
- Detailed validation: Tests network structure, blueprint expansions, traffic matrices, flow results
- Modular structure: Each test method focuses on specific functionality
- Fixtures: Shared scenario setup and graph construction
- Examples: `test_network_structure_validation()`, `test_blueprint_expansion_validation()`
- Basic validation: Verifies scenario parsing and execution without errors
- Fast execution: Minimal overhead for CI/CD pipelines
- Baseline checks: Ensures scenarios load and run successfully
- Error detection: Catches parsing failures and execution errors
When to use each approach:
- Smoke tests: Quick validation and CI checks
- Class-based tests: Detailed validation and debugging
```python
helper = ScenarioTestHelper(scenario)
helper.set_graph(built_graph)
helper.validate_network_structure(expectations)
helper.validate_topology_semantics()
helper.validate_flow_results("step_name", "flow_label", expected_value)
```

```python
SCENARIO_1_EXPECTATIONS = NetworkExpectations(
    node_count=6,
    edge_count=20,  # 10 physical links * 2 directed edges
    specific_nodes={"SEA", "SFO", "DEN", "DFW", "JFK", "DCA"},
    blueprint_expansions={},  # No blueprints in scenario 1
)
```

```python
scenario = (
    ScenarioTemplateBuilder("test_network", "1.0")
    .with_linear_backbone(["A", "B", "C"], link_capacity=100.0)
    .with_uniform_traffic(["A", "C"], demand_value=50.0)
    .with_single_link_failures()
    .with_capacity_analysis("A", "C")
    .build()
)
```

- Malformed YAML handling
- Blueprint reference validation
- Traffic demand correctness
- Failure policy configuration
- Edge case coverage
- Use fixtures for common scenario setups
- Validate incrementally from basic structure to flows
- Group related tests in focused test classes
- Provide clear error messages with context
- Start with structural validation (node/edge counts)
- Verify specific elements (expected nodes/links)
- Check semantic correctness (topology properties)
- Validate business logic (flow results, policies)
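The progression above can be illustrated with a toy adjacency map (plain dicts standing in for the framework's graph type; the final business-logic step is framework-specific and omitted):

```python
# Toy directed adjacency map standing in for a built graph.
graph = {"A": ["B"], "B": ["A", "C"], "C": ["B"]}

# 1. Structural validation: node and edge counts.
assert len(graph) == 3
assert sum(len(nbrs) for nbrs in graph.values()) == 4

# 2. Specific elements: expected nodes are present.
assert {"A", "B", "C"} <= graph.keys()

# 3. Semantic correctness: every edge endpoint is a known node.
assert all(n in graph for nbrs in graph.values() for n in nbrs)
```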
- Prefer templates over manual scenario construction
- Compose templates for scenarios
- Use constants for configuration values
- Document template parameters clearly
- Module and class docstrings
- Parameter and return value documentation
- Usage examples in docstrings
- Clear error message context
- Type annotations for all functions
- Optional parameter handling
- Generic type usage where appropriate
- Union types for flexible interfaces
- Descriptive error messages with context
- Input validation with clear feedback
- Graceful handling of edge cases
- Appropriate exception types
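For instance, an input check in this style (a hypothetical helper, not part of the framework) pairs a specific exception type with a message that carries context:

```python
def validate_demand(source: str, sink: str, value: float) -> float:
    """Reject non-positive traffic demands with a contextual error message."""
    if value <= 0:
        raise ValueError(
            f"Traffic demand {source}->{sink} must be positive, got {value}"
        )
    return value
```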
- Constants for magic numbers
- Modular, focused methods
- Consistent naming conventions
- Separated concerns (validation vs data creation)
```python
def test_my_scenario():
    scenario = load_scenario_from_file("my_scenario.yaml")
    scenario.run()

    helper = create_scenario_helper(scenario)
    graph = scenario.results.get("build_graph", "graph")
    helper.set_graph(graph)

    # Validate structure
    expectations = NetworkExpectations(node_count=5, edge_count=8)
    helper.validate_network_structure(expectations)

    # Validate semantics
    helper.validate_topology_semantics()
```

```python
def test_custom_topology():
    builder = ScenarioDataBuilder()
    scenario = (
        builder
        .with_simple_nodes(["Hub", "Spoke1", "Spoke2"])
        .with_simple_links([("Hub", "Spoke1", 10), ("Hub", "Spoke2", 10)])
        .with_traffic_demand("Spoke1", "Spoke2", 5.0)
        .with_workflow_step("BuildGraph", "build_graph")
        .build_scenario()
    )
    scenario.run()
    # ... validation ...
```

```python
def test_blueprint_expansion():
    helper = create_scenario_helper(scenario)
    helper.set_graph(built_graph)

    # Validate blueprint created expected nodes
    helper.validate_blueprint_expansions(NetworkExpectations(
        blueprint_expansions={
            "datacenter_east/spine/": 4,
            "datacenter_east/leaf/": 8,
        }
    ))
```

- `expectations.py`: Test expectations and validation constants
- `helpers.py`: Core validation utilities and test helpers
- `test_data_templates.py`: Template builders for programmatic scenario creation
- `test_scenario_*.py`: Integration tests for specific scenarios
- Node count thresholds for topology validation
- Link capacity ranges for flow analysis
- Traffic demand bounds for matrix validation
- Timeout values for workflow execution
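Centralized this way, the constants might look like the following sketch (illustrative names and values, not the actual contents of `expectations.py`):

```python
# Illustrative validation constants; real names/values live in expectations.py.
MAX_NODE_COUNT = 1_000                # topology validation threshold
LINK_CAPACITY_RANGE = (0.0, 400.0)    # acceptable capacity bounds for flow analysis
MAX_TOTAL_DEMAND = 10_000.0           # traffic matrix sanity bound
WORKFLOW_TIMEOUT_SECONDS = 60.0       # workflow execution timeout
```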
- `ScenarioDataBuilder`: Programmatic scenario construction
- `NetworkTemplates`: Common topology patterns (star, mesh, tree)
- `ErrorInjectionTemplates`: Invalid configuration builders
- Network size limits to prevent test timeout
When adding new test scenarios or validation methods:
- Follow naming conventions established in existing code
- Add documentation with usage examples
- Include type annotations for all new functions
- Write focused, modular tests that can be easily understood
- Update expectations in the dedicated expectations.py file
- Add templates for reusable patterns
Run all integration tests:

```bash
pytest tests/integration/ -v
```

Run specific scenario tests:

```bash
pytest tests/integration/test_scenario_1.py -v
```

Run template examples:

```bash
pytest tests/integration/test_template_examples.py -v
```

The integration test framework follows a hybrid approach for template usage:
- Primary: Use `load_scenario_from_file()` with static YAML files
- Rationale: These serve as integration references and demonstrate real-world usage
- Template Variants: Also include template-based variants for testing different configurations
- Primary: Use `ScenarioDataBuilder` and template builders consistently
- Rationale: Easier to create invalid configurations programmatically
- Raw YAML: Only for syntax errors that builders cannot create
- Primary: Full template system usage with all template classes
- Rationale: Demonstrates template capabilities and validates template system
| Test Type | Recommended Approach | Example |
|---|---|---|
| Basic Integration | YAML files + template variants | `test_scenario_1.py` |
| Error Cases | Template builders | `ErrorInjectionTemplates.missing_nodes_builder()` |
| Edge Cases | Template builders | `EdgeCaseTemplates.empty_network_builder()` |
| Performance Tests | Template builders | `PerformanceTestTemplates.large_star_network_builder()` |
| Parameterized Tests | Template builders | `ScenarioTemplateBuilder` with loops |
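For the parameterized case, a loop over scales can generate scenario data directly; the dict schema below is a simplified stand-in for what the builders emit, with hypothetical names:

```python
def star_scenario_data(leaf_count: int) -> dict:
    """Build a minimal star-topology scenario dict (simplified schema)."""
    leaves = [f"Leaf{i}" for i in range(leaf_count)]
    return {
        "network": {
            "nodes": ["Hub"] + leaves,
            "links": [("Hub", leaf, 10.0) for leaf in leaves],
        }
    }

# One scenario per scale tier, suitable for a parameterized test.
scenarios = {n: star_scenario_data(n) for n in (5, 50, 500)}
```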
```python
# For testing invalid configurations
builder = ErrorInjectionTemplates.circular_blueprint_builder()
scenario = builder.build_scenario()
with pytest.raises((ValueError, RecursionError)):
    scenario.run()
```

```python
# For boundary conditions and edge cases
builder = EdgeCaseTemplates.zero_capacity_links_builder()
scenario = builder.build_scenario()
scenario.run()  # Should handle gracefully
```

```python
# For stress testing and performance validation
builder = PerformanceTestTemplates.large_star_network_builder(leaf_count=500)
scenario = builder.build_scenario()
scenario.run()  # Performance test
```

```python
# For high-level scenario composition
scenario_yaml = (
    ScenarioTemplateBuilder("test", "1.0")
    .with_linear_backbone(["A", "B", "C"])
    .with_uniform_traffic(["A", "C"], 25.0)
    .with_single_link_failures()
    .build()
)
```

- ✅ Error case testing with invalid configurations
- ✅ Parameterized tests with different scales
- ✅ Edge case and boundary condition testing
- ✅ Performance and stress testing
- ✅ Rapid prototyping of test scenarios
- ❌ Replacing existing YAML-based integration tests
- ❌ Simple one-off tests where YAML is clearer
- ❌ Tests that need exact YAML syntax validation
```python
# Combine multiple template categories
def test_complex_error_scenario():
    builder = ErrorInjectionTemplates.negative_demand_builder()
    # Add additional edge case conditions
    builder.data["network"]["links"].extend(
        EdgeCaseTemplates.zero_capacity_links_builder().data["network"]["links"]
    )
    scenario = builder.build_scenario()
    # Test error handling with multiple conditions
```

```python
# Standard pattern for error case tests
def test_missing_blueprint():
    builder = ErrorInjectionTemplates.circular_blueprint_builder()
    with pytest.raises((ValueError, RecursionError)):
        scenario = builder.build_scenario()
        scenario.run()
```

- Keep existing YAML-based tests as integration references
- Add template-based variants for parameterized testing
- Migrate error cases to use template builders
- Start with appropriate template builder
- Use `ScenarioTemplateBuilder` for high-level composition
- Use specialized templates for specific test categories
- Choose appropriate template class (Error/EdgeCase/Performance)
- Follow existing naming conventions (`*_builder()` methods)
- Return `ScenarioDataBuilder` instances for consistency
- Add docstrings with usage examples
- Each template should have validation tests
- Test both successful scenario building and execution
- Verify template produces expected network structures