
🧪 Testing Guide - NumPyMasterPro

This document describes the testing infrastructure and best practices for NumPyMasterPro.


📋 Overview

NumPyMasterPro includes a comprehensive test suite covering its core utility modules, with unit tests, integration tests, and CI/CD automation.

Test Coverage

  • ✅ Array Utilities - test_array_utils.py
  • ✅ Logical Utilities - test_logical_utils.py
  • ✅ K-Means Utilities - test_kmeans_utils.py
  • ✅ Math Utilities - test_math_utils.py
  • 🔄 Additional modules can be tested similarly

🚀 Quick Start

Prerequisites

Install development dependencies:

```bash
pip install -r requirements_dev.txt
```

Or use the Makefile:

```bash
make install-dev
```

Running Tests

Run all tests:

```bash
pytest
```

Run tests with coverage:

```bash
pytest --cov=scripts --cov-report=term-missing
```

Run a specific test file:

```bash
pytest tests/test_logical_utils.py -v
```

Run tests matching a pattern:

```bash
pytest -k "test_any_condition" -v
```

📊 Using Makefile Commands

We provide convenient Makefile commands for common tasks:

```bash
make test              # Run all tests
make test-coverage     # Run tests with HTML coverage report
make test-verbose      # Run tests with detailed output
make lint              # Check code quality
make format            # Auto-format code with black & isort
make clean             # Remove cache and build artifacts
make all               # Run complete check (clean, install, test, lint)
```
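The real recipes live in the repository's Makefile; a minimal sketch of what a few of these targets might look like (illustrative only, not the actual file):

```make
# Illustrative sketch; see the repository's Makefile for the real recipes
test:
	pytest

test-coverage:
	pytest --cov=scripts --cov-report=html

clean:
	rm -rf .pytest_cache htmlcov .coverage
```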

🎯 Test Structure

Directory Layout

```text
tests/
├── __init__.py              # Test package initialization
├── conftest.py              # Shared fixtures and configuration
├── test_array_utils.py      # Tests for array utilities
├── test_logical_utils.py    # Tests for logical operations
├── test_kmeans_utils.py     # Tests for K-Means algorithm
└── test_math_utils.py       # Tests for math operations
```

Test Organization

Each test file follows this structure:

```python
class TestFeatureName:
    """Group related tests together"""

    def test_basic_case(self):
        """Test description"""
        # Arrange
        arr = np.array([1, 2, 3])

        # Act
        result = function_under_test(arr)

        # Assert
        assert result == expected_value
```
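One caveat for the Assert step: comparing NumPy arrays with `==` yields an elementwise boolean array, which raises an ambiguity error inside a bare `assert`. NumPy's testing helpers handle this cleanly; a minimal sketch, with `scale` standing in as a hypothetical function under test:

```python
import numpy as np

def scale(arr, factor):
    """Toy function under test (illustrative only)."""
    return arr * factor

def test_scale_matches_expected():
    result = scale(np.array([1.0, 2.0, 3.0]), 2.0)
    # Exact elementwise equality, suitable for integer-like results
    np.testing.assert_array_equal(result, np.array([2.0, 4.0, 6.0]))
    # Tolerance-based comparison, preferred for floating-point math
    np.testing.assert_allclose(result / 3.0, np.array([2.0, 4.0, 6.0]) / 3.0)
```

`assert_allclose` accepts `rtol`/`atol` keyword arguments when the default tolerances are too strict.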

🧩 Shared Fixtures

Common test fixtures are defined in conftest.py:

  • sample_1d_array - Simple 1D array
  • sample_2d_array - Simple 2D array
  • random_array - Random array with fixed seed
  • array_with_nans - Array containing NaN values
  • array_with_infs - Array with infinite values
  • clustering_data - Synthetic clustering dataset
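The actual definitions live in conftest.py; a minimal sketch of what a few of these fixtures might look like (shapes, seeds, and values here are illustrative assumptions, not the repository's real data):

```python
import numpy as np
import pytest

@pytest.fixture
def sample_1d_array():
    # Simple 1D array; length 5 to match shape checks in tests
    return np.array([1, 2, 3, 4, 5])

@pytest.fixture
def random_array():
    # Fixed seed keeps randomized tests reproducible across runs
    rng = np.random.default_rng(42)
    return rng.random((10, 10))

@pytest.fixture
def array_with_nans():
    # Mixed finite and NaN values for edge-case tests
    return np.array([1.0, np.nan, 3.0, np.nan])
```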

Usage:

```python
def test_with_fixture(sample_1d_array):
    result = some_function(sample_1d_array)
    assert result.shape == (5,)
```

📈 Coverage Reports

Terminal Coverage

```bash
pytest --cov=scripts --cov-report=term-missing
```

HTML Coverage Report

```bash
pytest --cov=scripts --cov-report=html
open htmlcov/index.html  # View in browser
```

XML Coverage (for CI/CD)

```bash
pytest --cov=scripts --cov-report=xml
```

🏷️ Test Markers

Use markers to categorize tests:

```python
@pytest.mark.slow
def test_large_dataset():
    """Test with large dataset (takes time)"""
    pass

@pytest.mark.unit
def test_single_function():
    """Unit test for isolated function"""
    pass

@pytest.mark.integration
def test_workflow():
    """Integration test across modules"""
    pass
```

Run tests by marker:

```bash
pytest -m "not slow"      # Skip slow tests
pytest -m "unit"          # Run only unit tests
pytest -m "integration"   # Run only integration tests
```
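Because the suite runs with `--strict-markers` (see Configuration below), custom markers must be registered or pytest will error out on unknown markers. A sketch of the corresponding `pytest.ini` entries (descriptions are illustrative):

```ini
[pytest]
markers =
    slow: tests that take a long time to run
    unit: unit tests for isolated functions
    integration: integration tests across modules
```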

🔧 Configuration

Test configuration is defined in pytest.ini:

```ini
[pytest]
testpaths = tests
addopts = -v --strict-markers --cov=scripts
```

🤖 Continuous Integration

Tests run automatically via GitHub Actions on:

  • ✅ Push to main or develop branches
  • ✅ Pull requests
  • ✅ Manual workflow dispatch

CI Workflow Includes:

  1. Multi-platform testing (Ubuntu, macOS, Windows)
  2. Python version matrix (3.10, 3.11, 3.12)
  3. Code linting (flake8, black, isort)
  4. Notebook validation
  5. Docker build verification
  6. Security scanning (safety, bandit)
  7. Coverage reporting (Codecov)
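A trimmed sketch of how the triggers and test matrix above might be declared in the workflow file (the file name and step details are assumptions, not the repository's actual workflow):

```yaml
# .github/workflows/ci.yml (illustrative sketch)
name: CI
on:
  push:
    branches: [main, develop]
  pull_request:
  workflow_dispatch:

jobs:
  test:
    runs-on: ${{ matrix.os }}
    strategy:
      matrix:
        os: [ubuntu-latest, macos-latest, windows-latest]
        python-version: ["3.10", "3.11", "3.12"]
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: ${{ matrix.python-version }}
      - run: pip install -r requirements_dev.txt
      - run: pytest --cov=scripts --cov-report=xml
```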

✍️ Writing New Tests

Step 1: Create Test File

```bash
touch tests/test_new_module.py
```

Step 2: Import Module and Fixtures

```python
import pytest
import numpy as np
from scripts.new_module import function_to_test
```

Step 3: Write Test Classes

```python
class TestNewFunction:
    def test_basic_behavior(self):
        result = function_to_test([1, 2, 3])
        assert result == expected  # replace with the value you expect

    def test_edge_case(self):
        with pytest.raises(ValueError):
            function_to_test([])
```

Step 4: Run and Verify

```bash
pytest tests/test_new_module.py -v
```
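Putting the steps together, a complete test file for a hypothetical `clip_positive` helper might look like this (the function is defined inline here only so the example is self-contained; in a real test it would be imported from `scripts.new_module`):

```python
import numpy as np
import pytest

def clip_positive(arr):
    """Hypothetical helper: clamp negative entries to zero; reject empty input."""
    arr = np.asarray(arr)
    if arr.size == 0:
        raise ValueError("empty input")
    return np.where(arr < 0, 0, arr)

class TestClipPositive:
    def test_basic_behavior(self):
        result = clip_positive([-1, 2, -3])
        np.testing.assert_array_equal(result, np.array([0, 2, 0]))

    def test_edge_case(self):
        # Failure path: empty input should raise, not silently return
        with pytest.raises(ValueError):
            clip_positive([])
```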

πŸ“ Best Practices

βœ… Test one thing per test - Keep tests focused
βœ… Use descriptive names - test_returns_empty_array_for_missing_values
βœ… Test edge cases - Empty arrays, NaN, inf, negative values
βœ… Use fixtures - Reuse common test data
βœ… Check both success and failure - Use pytest.raises() for exceptions
βœ… Aim for high coverage - Target 80%+ code coverage
βœ… Keep tests fast - Mark slow tests with @pytest.mark.slow
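For example, edge-case tests following these naming conventions might look like this (`drop_nans` is a hypothetical helper, not a function from the repository):

```python
import numpy as np

def drop_nans(arr):
    # Hypothetical helper: keep only finite entries (drops NaN and +/-inf)
    arr = np.asarray(arr, dtype=float)
    return arr[np.isfinite(arr)]

def test_returns_empty_array_for_all_nan_input():
    # Edge case: nothing survives filtering
    result = drop_nans([np.nan, np.nan])
    assert result.size == 0

def test_drops_infs_but_keeps_finite_values():
    result = drop_nans([1.0, np.inf, -np.inf, 2.0])
    np.testing.assert_array_equal(result, np.array([1.0, 2.0]))
```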


πŸ› Debugging Tests

Run with verbose output:

```bash
pytest -vv -s
```

Run with pdb debugger:

```bash
pytest --pdb
```

Show local variables on failure:

```bash
pytest -l
```

Run last failed tests only:

```bash
pytest --lf
```

📚 Additional Resources


© 2025 Satvik Praveen – NumPyMasterPro Testing Guide