This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
NeetCode is a scalable Python practice framework for algorithm learning and interview preparation. It provides:
- Knowledge graph-driven learning with interconnected patterns and API kernels
- Industrial-strength testing infrastructure (random test generation, custom judges, benchmarking)
- AI-powered mind maps for pattern discovery
- VS Code integration for one-click testing and debugging
Create a new problem:

```bash
# Windows
scripts\new_problem.bat <leetcode_id>
scripts\new_problem.bat <leetcode_id> --with-tests
# Linux/macOS
./scripts/new_problem.sh <leetcode_id>
./scripts/new_problem.sh <leetcode_id> --with-tests
# Advanced options
scripts\new_problem.bat 104 --solve-mode tiered  # For tree/linked list problems
```

Run tests:

```bash
# Run all tests for a problem
python runner/test_runner.py <problem_name>
# Run specific test case
python runner/case_runner.py <problem_name> <case_number>
# Benchmark
python runner/test_runner.py <problem_name> --benchmark
# Compare all solutions
python runner/test_runner.py <problem_name> --all --benchmark
# Generate random tests
python runner/test_runner.py <problem_name> --generate 10
python runner/test_runner.py <problem_name> --generate 10 --seed 42
# Estimate complexity
python runner/test_runner.py <problem_name> --estimate
```

Run the framework's own unit tests:

```bash
# Activate virtual environment first
leetcode\Scripts\activate # Windows
source leetcode/bin/activate # Linux/macOS
# Run all unit tests
python -m pytest .dev/tests -v
```

Repository layout:

- `solutions/` - Solution files (one per LeetCode problem)
- `tests/` - Test cases (`.in`/`.out` files)
- `generators/` - Random test generators (optional)
- `runner/` - Test execution engine
- `src/` - Core packages (see below)
- `tools/` - Standalone tools (mindmaps, patterndocs, review-code)
- `ontology/` - Algorithm ontology (TOML files)
- `meta/` - Problem and pattern metadata
- `docs/` - MkDocs documentation
Core packages under `src/`:

```
leetcode_datasource ←── codegen ──→ practice_workspace
```
| Package | Purpose |
|---|---|
| `leetcode_datasource` | LeetCode API + SQLite cache, problem metadata |
| `codegen` | Solution/practice skeleton generation, test extraction |
| `practice_workspace` | Practice file history and restore |
Dependency rule: `tools/` → `src/` only, never the reverse.
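As an illustration of the rule (module and symbol names below are hypothetical, not actual exports):

```python
# tools/mindmaps/build.py (hypothetical) -- tools may depend on src:
from leetcode_datasource import ProblemStore  # hypothetical import; OK: tools -> src

# src/leetcode_datasource/store.py must never do the reverse:
# from tools.mindmaps import build            # forbidden: src -> tools
```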
Solutions follow a standardized polymorphic pattern:
```python
# solutions/XXXX_problem_name.py
from typing import List

from _runner import get_solver

SOLUTIONS = {
    "default": {
        "class": "Solution",
        "method": "methodName",
        "complexity": "O(n) time, O(n) space",
        "description": "Brief description",
    },
}


class Solution:
    def methodName(self, param1, param2):
        # Implementation
        pass


def solve():
    import sys
    import json

    lines = sys.stdin.read().strip().split('\n')

    # Parse input (canonical JSON format)
    param1 = json.loads(lines[0])
    param2 = json.loads(lines[1])

    # Get solver and call method
    solver = get_solver(SOLUTIONS)
    result = solver.methodName(param1, param2)

    # Output canonical JSON
    print(json.dumps(result, separators=(',', ':')))


if __name__ == "__main__":
    solve()
```

Test files use canonical JSON literal format (one value per line):
Input file (`tests/XXXX_problem_name_N.in`):

```
[2,7,11,15]
9
```

Output file (`tests/XXXX_problem_name_N.out`):

```
[0,1]
```
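This sample case is Two Sum (LeetCode 1). For concreteness, a method body consistent with it — a sketch of the standard hash-map approach, with an illustrative method name, not repository code:

```python
from typing import List  # already present in the solution template


class Solution:
    def twoSum(self, nums: List[int], target: int) -> List[int]:
        # Map each value to its index; emit indices once the complement appears
        seen = {}
        for i, x in enumerate(nums):
            if target - x in seen:
                return [seen[target - x], i]
            seen[x] = i
        return []
```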
Requirements:
- Line ending: LF (Unix format)
- Encoding: UTF-8
- Single newline at end of file
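If you script test-file creation, a helper like the following keeps files conformant (a sketch; the helper name and paths are illustrative):

```python
import json


def write_test_case(path: str, *values) -> None:
    """Write canonical JSON values, one per line: LF endings, UTF-8, single trailing newline."""
    with open(path, "w", encoding="utf-8", newline="\n") as f:
        for v in values:
            f.write(json.dumps(v, separators=(",", ":")) + "\n")


write_test_case("tests/0001_two_sum_1.in", [2, 7, 11, 15], 9)
write_test_case("tests/0001_two_sum_1.out", [0, 1])
```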
For problems with multiple approaches:
```python
SOLUTIONS = {
    "default": {...},
    "approach1": {
        "class": "SolutionApproach1",
        "method": "solve",
        ...
    },
    "approach2": {
        "class": "SolutionApproach2",
        "method": "solve",
        ...
    },
}
```

Run with `--method approach1`, or `--all` to compare approaches.
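For example (problem name illustrative):

```bash
python runner/test_runner.py 0001_two_sum --method approach1
python runner/test_runner.py 0001_two_sum --all --benchmark
```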
For problems with multiple valid answers, define a custom judge:
```python
def judge(actual, expected, input_data: str) -> bool:
    """Custom validation logic."""
    # Return True if actual is valid
    return is_valid(actual)


JUDGE_FUNC = judge
```
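For instance, a judge for a variant where any index pair summing to the target is acceptable — a hypothetical sketch assuming the Two Sum input format shown earlier:

```python
import json


def judge(actual, expected, input_data: str) -> bool:
    """Accept any valid index pair; expected is unused since many answers pass."""
    nums_line, target_line = input_data.strip().split("\n")
    nums, target = json.loads(nums_line), json.loads(target_line)
    i, j = actual
    return i != j and nums[i] + nums[j] == target


JUDGE_FUNC = judge
```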
Or use simple comparison modes:

```python
COMPARE_MODE = "sorted"  # Options: "exact" | "sorted" | "set"
```
Create `generators/XXXX_problem_name.py`:

```python
import json
import random
from typing import Iterator, Optional


def generate(count: int = 10, seed: Optional[int] = None) -> Iterator[str]:
    if seed is not None:
        random.seed(seed)
    for _ in range(count):
        # Generate test case (param1/param2 are problem-specific values built here)
        yield f"{json.dumps(param1)}\n{json.dumps(param2)}"


def generate_for_complexity(n: int) -> str:
    """Generate test case with specific size n for complexity estimation."""
    return generate_case_of_size(n)
```
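A filled-in sketch for Two Sum (value ranges and sizes are illustrative, not prescribed):

```python
import json
import random
from typing import Iterator, Optional


def _case(n: int) -> str:
    # Build an array guaranteed to contain a valid pair, in canonical JSON
    nums = [random.randint(-1000, 1000) for _ in range(n)]
    i, j = random.sample(range(n), 2)
    target = nums[i] + nums[j]
    return (json.dumps(nums, separators=(",", ":"))
            + "\n" + json.dumps(target, separators=(",", ":")))


def generate(count: int = 10, seed: Optional[int] = None) -> Iterator[str]:
    if seed is not None:
        random.seed(seed)
    for _ in range(count):
        yield _case(random.randint(2, 20))


def generate_for_complexity(n: int) -> str:
    """Case of size n so --estimate can fit a growth curve."""
    return _case(max(n, 2))
```

Then `python runner/test_runner.py 0001_two_sum --generate 10 --seed 42` produces reproducible cases.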
VS Code integration:

- `Ctrl+Shift+B` - Run all tests for current file
- `F5` - Debug with test case #1
- Tasks available via `Ctrl+Shift+P` → "Tasks: Run Task"
Key documentation files:
- `docs/contracts/solution-contract.md` - Solution file specification
- `docs/contracts/test-file-format.md` - Test file format
- `docs/contracts/generator-contract.md` - Generator specification
- `docs/runner/README.md` - Test runner reference
- `docs/packages/codegen/README.md` - CodeGen reference
- Python 3.11 (matching the official LeetCode environment)
- Virtual environment: `leetcode/` - activate with `leetcode\Scripts\activate` (Windows) or `source leetcode/bin/activate` (Linux/macOS)
- Problem files: `XXXX_problem_name.py` (4-digit zero-padded LeetCode ID)
- Test files: `XXXX_problem_name_N.in`/`.out` (N = case number starting from 1)
- Documentation: kebab-case for markdown files
- Always use `json.dumps(result, separators=(',', ':'))` for output (no spaces)
- Test files must end with a single newline
- Use LF line endings, not CRLF
- The `_runner.py` module provides `get_solver()` for polymorphic dispatch
- Complexity in SOLUTIONS is declared metadata; use `--estimate` for empirical verification
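To make the dispatch concrete, `get_solver()` could plausibly work like this — a minimal sketch only, not the actual `_runner.py` implementation:

```python
import argparse
import sys


def get_solver(solutions: dict):
    """Instantiate the solution class selected by --method (default: "default")."""
    parser = argparse.ArgumentParser()
    parser.add_argument("--method", default="default")
    args, _ = parser.parse_known_args()  # ignore runner flags we don't own
    entry = solutions[args.method]
    # Solution classes are defined in the solution script, which runs as __main__
    cls = getattr(sys.modules["__main__"], entry["class"])
    return cls()
```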