This document describes the test architecture and strategy for SLIM CLI. SLIM CLI employs a comprehensive testing approach that combines traditional pytest unit testing with an innovative YAML-based integration test framework. Our testing philosophy emphasizes selective test execution, parameterized testing, and comprehensive coverage of all best practice scenarios through both isolated unit tests and end-to-end integration tests.
We use the following tests to ensure SLIM CLI's quality, performance, and reliability:
- Unit Tests
- Integration Tests
- Security Tests
Location: tests/jpl/slim/
Purpose: Test individual modules, functions, and classes in isolation to ensure each component works correctly on its own.
Using UV:

```bash
# UV automatically handles dependencies and virtual environment
# No need to install pytest separately

# Run ALL tests (unit + integration)
uv run pytest tests/

# Run all unit tests only
uv run pytest tests -m "unit"

# Run specific test modules
uv run pytest tests/jpl/slim/utils/test_git_utils.py
uv run pytest tests/jpl/slim/cli/test_models_command.py

# Run with verbose output
uv run pytest tests/jpl/slim/ -v -s
```

Using pip (traditional setup):

```bash
# Install test dependencies
pip install pytest

# Run ALL tests (unit + integration)
pytest tests/

# Run all unit tests only
pytest tests -m "unit"

# Run specific test modules
pytest tests/jpl/slim/utils/test_git_utils.py
pytest tests/jpl/slim/cli/test_models_command.py

# Run with verbose output
pytest tests/jpl/slim/ -v -s
```
- Trigger: Not yet automated (run manually)
- Frequency: Not yet scheduled (run on demand)
- Where to see results: Local terminal output from pytest
Testing Framework: pytest 7.x with fixtures and mocking
Tips for Contributing:
- Follow the naming convention `test_*.py` for test files
- Use pytest fixtures for setup and teardown
- Mock external dependencies (API calls, file systems, git operations)
- Test both success and failure paths
- Include docstrings explaining what each test verifies
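The tips above can be sketched in a single test module. The function under test here (`clone_or_reuse`) is a hypothetical stand-in, not an actual SLIM CLI API; the pattern is what matters: `test_*` naming, a pytest fixture (`tmp_path`), mocking the external git dependency, covering both success and failure paths, and docstrings stating what each test verifies.

```python
import pytest
from unittest.mock import MagicMock

def clone_or_reuse(repo_url, dest, git_clone):
    """Tiny illustrative stand-in for a git utility under test."""
    if not repo_url.startswith("https://"):
        raise ValueError("unsupported URL scheme")
    git_clone(repo_url, dest)
    return dest

class TestCloneOrReuse:
    def test_success_path(self, tmp_path):
        """Verifies the helper delegates to git and returns the destination."""
        fake_clone = MagicMock()  # mock the external git operation
        result = clone_or_reuse("https://example.com/repo.git", tmp_path, fake_clone)
        fake_clone.assert_called_once_with("https://example.com/repo.git", tmp_path)
        assert result == tmp_path

    def test_failure_path(self, tmp_path):
        """Verifies bad URLs are rejected rather than passed to git."""
        with pytest.raises(ValueError):
            clone_or_reuse("ftp://example.com/repo.git", tmp_path, MagicMock())
```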
Location: tests/integration/best_practices_test_commands.yaml
Purpose: Test complete command workflows and interactions between components using a YAML-based configuration system.
Using UV:

```bash
# Run ALL tests (unit + integration)
uv run pytest tests/

# Run all YAML-configured integration tests
uv run pytest tests/integration/test_best_practice_commands.py

# Run with verbose output to see individual YAML commands
uv run pytest -v tests/integration/test_best_practice_commands.py

# Enable/disable specific tests via YAML configuration
# Edit tests/integration/best_practices_test_commands.yaml
```

Using pip (traditional setup):

```bash
# Run ALL tests (unit + integration)
pytest tests/

# Run all YAML-configured integration tests
pytest tests/integration/test_best_practice_commands.py

# Run with verbose output to see individual YAML commands
pytest -v tests/integration/test_best_practice_commands.py

# Enable/disable specific tests via YAML configuration
# Edit tests/integration/best_practices_test_commands.yaml
```

YAML Configuration Example:
```yaml
readme:
  enabled: true  # Enable/disable entire practice
  commands:
    - command: "slim apply --best-practice-ids readme --repo-dir {temp_git_repo}"
      enabled: true  # Enable/disable individual commands
    - command: "slim deploy --best-practice-ids readme --repo-dir {temp_git_repo_with_remote}"
      enabled: false
```

Template Variables:

- `{temp_git_repo}` - Temporary git repository path
- `{temp_git_repo_with_remote}` - Git repo with configured remote
- `{temp_dir}` - Generic temporary directory
- `{temp_urls_file}` - File containing repository URLs
- `{test_ai_model}` - AI model for testing
- `{custom_remote}` - Custom git remote URL
- Trigger: Not yet automated (run manually)
- Frequency: Not yet scheduled (run on demand)
- Where to see results: Local terminal output from pytest
Testing Framework: YAML-based test configuration with pytest runner
Tips for Contributing:
- Add new practices to `tests/integration/best_practices_test_commands.yaml`
- Include both success and error scenarios
- Use template variables for dynamic values
- Test with different AI models when applicable
- Use the enable/disable toggles for development flexibility
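Conceptually, the pytest runner consumes this YAML by skipping disabled practices and commands and substituting the template variables before execution. The sketch below (assuming PyYAML is available) illustrates that collection step only; it is not the actual SLIM CLI runner, and the `EXAMPLE` config and paths are illustrative:

```python
import yaml  # PyYAML, assumed available in the test environment

def collect_enabled_commands(yaml_text, variables):
    """Return enabled commands with {template_variables} substituted."""
    config = yaml.safe_load(yaml_text)
    commands = []
    for practice, spec in config.items():
        if not spec.get("enabled", False):
            continue  # a disabled practice skips all of its commands
        for entry in spec.get("commands", []):
            if entry.get("enabled", False):
                commands.append(entry["command"].format(**variables))
    return commands

EXAMPLE = """
readme:
  enabled: true
  commands:
    - command: "slim apply --best-practice-ids readme --repo-dir {temp_git_repo}"
      enabled: true
    - command: "slim deploy --best-practice-ids readme --repo-dir {temp_git_repo_with_remote}"
      enabled: false
"""

# Only the enabled apply command survives, with its path filled in.
print(collect_enabled_commands(EXAMPLE, {"temp_git_repo": "/tmp/repo"}))
```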
Adding a New Practice:

```yaml
new-practice:
  enabled: true
  commands:
    - command: "slim apply --best-practice-ids new-practice --repo-dir {temp_git_repo}"
      enabled: true
    - command: "slim deploy --best-practice-ids new-practice --repo-dir {temp_git_repo_with_remote}"
      enabled: true
    # Error scenarios
    - command: "slim apply --best-practice-ids new-practice --repo-dir /nonexistent/path"
      enabled: true
```

Location: SonarQube Cloud integration
Purpose: Automated security vulnerability scanning and code quality analysis.
SonarQube Cloud analysis is automatically integrated and cannot be run manually.
- Trigger: Every pull request automatically
- Frequency: On every PR submission and update
- Where to see results: SonarQube Cloud dashboard and PR status checks
Testing Framework: SonarQube Cloud automated scanning
Tips for Contributing:
- SonarQube Cloud automatically scans all code changes
- Address any security vulnerabilities flagged in PR checks
- Maintain security rating above B grade
- Review SonarQube Cloud report before merging PRs
- No manual setup required - fully automated
When adding new test functionality to SLIM CLI:
- Write unit tests for new modules and functions
- Add integration tests to `tests/integration/best_practices_test_commands.yaml`
- Add AI integration tests to `tests/integration` under a new or existing best practice class. See the existing tests for reference. NOTE: make sure your tests read output from `caplog.text` and properly filter the string to separate AI-generated logs from your debugging logs (i.e. `--logging DEBUG`)
- Include error scenarios and edge cases
- Document test purpose with clear docstrings
- Run the full test suite before submitting a PR by running `uv run pytest` (or `pytest` if using traditional setup)
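The `caplog.text` filtering mentioned above can be as simple as keeping only lines that carry a known marker. The marker string, logger layout, and the commented-out test names below are hypothetical; adapt them to however your test's AI output is actually logged:

```python
def extract_ai_lines(caplog_text, marker="AI response:"):
    """Return only the AI-generated portions of captured log text,
    dropping unrelated debug noise (e.g. from --logging DEBUG)."""
    return [
        line.split(marker, 1)[1].strip()
        for line in caplog_text.splitlines()
        if marker in line
    ]

# In a pytest test this would run against the caplog fixture, e.g.:
#
# def test_readme_generation(caplog):
#     run_slim_apply(...)                      # hypothetical helper
#     ai_lines = extract_ai_lines(caplog.text)
#     assert any("Overview" in line for line in ai_lines)
```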
For specific guidance on testing new best practices and commands you're adding to SLIM CLI, see the Testing Your Extensions section in CONTRIBUTING.md.
For detailed contribution guidelines, see CONTRIBUTING.md.