Build optimal LLM context from your codebase in one command.
Stop manually copying files into ChatGPT. contextpack analyzes your codebase, scores file relevance to your task, respects token budgets, and outputs a clean context bundle ready for any LLM.
```bash
contextpack "Add rate limiting to the API endpoints"
```

- Smart file selection - Keyword matching + import graph analysis rank files by relevance to your specific task. Not just "dump everything."
- Token-aware - Accurate token counting (tiktoken for OpenAI, estimates for Claude/Gemini) with configurable budgets. Never hit context limits again.
- Multi-format output - Markdown, JSON, XML (Claude's preferred format), or straight to clipboard. Works with every LLM.
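Token counting for models without a public tokenizer typically falls back to a character-based estimate. A minimal sketch of that idea; the ~4-characters-per-token ratio and the function names here are illustrative assumptions, not contextpack's actual implementation:

```python
def estimate_tokens(text: str, chars_per_token: float = 4.0) -> int:
    """Rough token estimate for models without a public tokenizer.

    ~4 characters/token is a common heuristic for English text;
    real counts vary with each model's vocabulary.
    """
    return max(1, round(len(text) / chars_per_token))


def fits_budget(files: dict[str, str], budget: int) -> bool:
    """Check whether a set of file contents fits a token budget."""
    return sum(estimate_tokens(body) for body in files.values()) <= budget
```

For OpenAI models, an exact count via tiktoken's `encoding_for_model(...).encode(text)` replaces the heuristic.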
```bash
pip install contextpack
```

```bash
# Basic usage - analyze current directory
contextpack "Fix the authentication bug"

# Target a specific model with its optimal budget
contextpack "Refactor the database layer" --model claude

# Output as XML (Claude-optimized) to a file
contextpack "Add WebSocket support" -f xml -o context.xml

# Copy directly to clipboard
contextpack "Write tests for the user service" -f clipboard

# Analyze a specific directory with custom budget
contextpack "Optimize the build pipeline" -d ./src --budget 80000
```

- Intelligent scoring - Files scored by: keyword match in filename/content, import graph proximity, file type, project structure position
- Import graph analysis - Traces `import`/`require`/`use` statements across Python, JS/TS, Go, Rust, Java, Ruby
- Respects .gitignore - Plus `.contextpackignore` for additional exclusions
- Model-aware budgets - Auto-configures token budget per model (GPT-4: 64k, Claude: 100k, Gemini: 500k)
- Rich terminal output - Color-coded tables showing selected files, token counts, and relevance scores
- Multiple output formats - Markdown, JSON, XML, clipboard
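Import tracing can be done with per-language patterns over source text. A sketch of the approach; these regexes and the `IMPORT_PATTERNS` table are simplified illustrations, not contextpack's real parser (which must handle aliases, multi-line imports, and more):

```python
import re

# Illustrative patterns keyed by file extension (assumed structure).
IMPORT_PATTERNS = {
    ".py": re.compile(r"^\s*(?:from|import)\s+([\w.]+)", re.MULTILINE),
    ".js": re.compile(r"""(?:import .*? from|require\()\s*['"]([^'"]+)""", re.MULTILINE),
    ".go": re.compile(r'^\s*import\s+"([^"]+)"', re.MULTILINE),
    ".rs": re.compile(r"^\s*use\s+([\w:]+)", re.MULTILINE),
}


def extract_imports(path: str, source: str) -> list[str]:
    """Return the modules a source file imports, chosen by extension."""
    for ext, pattern in IMPORT_PATTERNS.items():
        if path.endswith(ext):
            return pattern.findall(source)
    return []
```

Edges found this way form the import graph used for proximity scoring: files imported by (or importing) already-relevant files rank higher.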
```
Task Description
        |
        v
Keyword Extraction -----> File Discovery (respects .gitignore)
        |                         |
        v                         v
Relevance Scoring <------- Import Graph Analysis
        |
        v
Token Budget Fitting (knapsack-style)
        |
        v
Format & Output (md / json / xml / clipboard)
```
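The budget-fitting step in the pipeline above can be approximated greedily: rank candidates by relevance per token and add files until the budget is spent. A sketch under assumed data shapes (the tuple layout and function name are illustrative, not contextpack's internals):

```python
def fit_budget(candidates: list[tuple[str, float, int]], budget: int) -> list[str]:
    """Greedy knapsack approximation.

    candidates: (path, relevance_score, token_count) tuples.
    Picks files with the best relevance-per-token ratio until the
    token budget is exhausted; smaller files can still slip in
    after a large one is skipped.
    """
    selected, spent = [], 0
    ranked = sorted(candidates, key=lambda c: c[1] / max(c[2], 1), reverse=True)
    for path, _score, tokens in ranked:
        if spent + tokens <= budget:
            selected.append(path)
            spent += tokens
    return selected
```

Greedy selection is not optimal in the strict knapsack sense, but it is fast and close enough when no single file dominates the budget.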
Create a `.contextpackignore` in your project root to exclude additional files:

```gitignore
# Exclude test fixtures
tests/fixtures/

# Exclude generated code
generated/

# Exclude specific large files
data/*.csv
```
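An ignore file like this can be honored with glob-style matching. A minimal sketch, assuming fnmatch-style semantics with trailing-slash directory patterns; contextpack's exact matching rules may differ (full gitignore syntax has negation, anchoring, and `**`):

```python
from fnmatch import fnmatch


def load_patterns(text: str) -> list[str]:
    """Parse ignore-file text, skipping comments and blank lines."""
    return [ln.strip() for ln in text.splitlines()
            if ln.strip() and not ln.strip().startswith("#")]


def is_ignored(path: str, patterns: list[str]) -> bool:
    """Trailing-slash patterns match any path under that directory;
    other patterns are globbed against the full relative path."""
    for pat in patterns:
        if pat.endswith("/") and path.startswith(pat):
            return True
        if fnmatch(path, pat):
            return True
    return False
```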
```text
Usage: contextpack [OPTIONS] TASK

Options:
  -d, --dir DIRECTORY                   Root directory to analyze (default: .)
  -m, --model TEXT                      Target model (gpt-4, claude, gemini, default)
  -f, --format [md|json|xml|clipboard]  Output format (default: md)
  -b, --budget INTEGER                  Token budget override
  -o, --output PATH                     Output file (default: stdout)
  -i, --include TEXT                    Include glob patterns (repeatable)
  -e, --exclude TEXT                    Exclude glob patterns (repeatable)
  --tree / --no-tree                    Show file tree in output
  -v, --verbose                         Show scoring details
  --max-file-size INTEGER               Max file size in bytes (default: 100KB)
  --help                                Show this message and exit.
```
- Fork the repository
- Create a feature branch: `git checkout -b feature/amazing-feature`
- Install dev dependencies: `pip install -e ".[dev]"`
- Make your changes and add tests
- Run tests: `pytest`
- Submit a pull request
MIT - see LICENSE for details.