Feat/benchmarks #567

Open

FBumann wants to merge 4 commits into PyPSA:master from FBumann:feat/benchmarks

Conversation

FBumann (Collaborator) commented Feb 2, 2026

Internal performance benchmarks

Adds a benchmarks/ directory for tracking linopy's own build time, LP write speed, matrix generation, and peak memory across problem sizes.

Tools

pytest-benchmark for timing, pytest-memray for peak memory measurement.

Models

| Model | Description | Sizes |
|---|---|---|
| basic | Dense N×N model, 2N^2 vars/cons | 10 – 1600 |
| knapsack | N binary variables, 1 constraint | 100 – 1M |
| expression_arithmetic | Broadcasting, scaling, summation across dims | 10 – 1000 |
| sparse_network | Ring network with mismatched bus/line coords | 10 – 1000 |
| pypsa_scigrid | Real power system (requires pypsa) | 10 – 200 snapshots |

Timing phases

| Phase | File | What it measures |
|---|---|---|
| Build | test_build.py | Model construction (add_variables, add_constraints, add_objective) |
| LP write | test_lp_write.py | Writing the model to an LP file |
| Matrices | test_matrices.py | Generating sparse matrices (A, b, c, bounds) from the model |

Memory benchmarks

memory.py runs each test in a separate process with pytest-memray for accurate per-test peak memory. Results are saved as JSON for cross-branch comparison.

```sh
python benchmarks/memory.py save master
python benchmarks/memory.py save my-feature
python benchmarks/memory.py compare master my-feature
```

Quick start

```sh
pip install -e ".[benchmark]"

# Timing
pytest benchmarks/ --quick
pytest benchmarks/test_build.py --benchmark-save=master

# Memory
python benchmarks/memory.py save master --quick
```

See benchmarks/README.md for full details.

Checklist

  • Code changes are sufficiently documented
  • A note for the release notes doc/release_notes.rst is included
  • I consent to the release of this PR's code under the MIT license

FBumann (Collaborator, Author) commented Mar 12, 2026

Maybe we should merge something like this before merging #591.

FBumann requested a review from lkstrp on March 12, 2026 at 10:12
FBumann force-pushed the feat/benchmarks branch 2 times, most recently from 54ca82f to 769616e on March 12, 2026 at 12:14
Adds benchmarks/ directory with pytest-benchmark for timing and
pytest-memray for peak memory measurement across problem sizes.

Models: basic (dense N*N), knapsack (N binary vars), expression
arithmetic (broadcasting/scaling), sparse network (ring topology),
and pypsa_scigrid (real power system).

Timing phases: build (test_build.py), LP write (test_lp_write.py),
matrix generation (test_matrices.py). Memory benchmarks (memory.py)
measure the build phase only — memray tracks all allocations within
a test including setup, so other phases would conflate build and
phase-specific memory.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
FBumann and others added 3 commits on March 13, 2026 at 09:16
Benchmarks are not run in CI and should not affect coverage metrics.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Prevents false failures from minor coverage fluctuations when adding
non-library files like benchmarks or config changes.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
The codecov/project failure is a pre-existing repo-wide issue (multiple
open PRs fail the same check), not caused by this PR.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>