This guide covers how to set up your development environment, install dependencies, and build the LLM Interactive Proxy.
- Python 3.10 or higher
- pip: Python package installer (usually included with Python)
- Git: For cloning the repository
- Virtual environment: Recommended for isolation
```bash
git clone https://github.com/matdev83/llm-interactive-proxy.git
cd llm-interactive-proxy
```

```bash
# Create virtual environment
python -m venv .venv

# Activate virtual environment
# On Windows:
.venv\Scripts\activate
# On Linux/macOS:
source .venv/bin/activate
```

```bash
# Install the package in editable mode with development dependencies
./.venv/Scripts/python.exe -m pip install -e .[dev]

# Install with optional OAuth connector package
./.venv/Scripts/python.exe -m pip install -e .[dev,oauth]
```

The `oauth` extra installs the extracted `llm-proxy-oauth-connectors` package, which owns the OAuth connector implementations and plugin entry points.
This installs:

- The proxy package in editable mode (`-e`)
- All runtime dependencies
- All development dependencies (`[dev]`)
- Optionally, the extracted OAuth connectors via `[oauth]`
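If you installed the `[oauth]` extra, you can confirm the connector distribution actually landed using standard-library package metadata (a quick sanity check, not a project-provided tool; the distribution name comes from the extra described above):

```python
from importlib import metadata

def oauth_connector_status(pkg: str = "llm-proxy-oauth-connectors") -> str:
    """Report whether the optional OAuth connector distribution is installed."""
    try:
        return f"{pkg} {metadata.distribution(pkg).version} is installed"
    except metadata.PackageNotFoundError:
        return f"{pkg} is not installed; re-run: pip install -e .[dev,oauth]"

print(oauth_connector_status())
```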
All dependencies are managed through `pyproject.toml`. Never install packages directly with `pip install <package>`.
Edit `pyproject.toml` and add the package to the `dependencies` list:

```toml
[project]
dependencies = [
    "fastapi>=0.104.0",
    "httpx>=0.25.0",
    # Add your new dependency here
    "new-package>=1.0.0",
]
```

Edit `pyproject.toml` and add the package to the `dev` optional dependencies:
```toml
[project.optional-dependencies]
dev = [
    "pytest>=7.4.0",
    "ruff>=0.1.0",
    # Add your new dev dependency here
    "new-dev-tool>=1.0.0",
]
```

After modifying `pyproject.toml`, reinstall the package:
```bash
./.venv/Scripts/python.exe -m pip install -e .[dev]

# Or with optional OAuth connectors:
./.venv/Scripts/python.exe -m pip install -e .[dev,oauth]
```

```text
llm-interactive-proxy/
├── .venv/              # Virtual environment (created by you)
├── src/                # Source code
├── tests/              # Test suite
├── config/             # Configuration files
├── docs/               # Documentation
├── scripts/            # Utility scripts
├── var/                # Runtime data (logs, captures)
├── pyproject.toml      # Project metadata and dependencies
├── setup.py            # Package setup
└── dev/README.md       # Project overview
```
```bash
# Install in editable mode with dev dependencies
./.venv/Scripts/python.exe -m pip install -e .[dev]

# Install in editable mode with dev + optional OAuth connectors
./.venv/Scripts/python.exe -m pip install -e .[dev,oauth]

# Install in editable mode without dev dependencies
./.venv/Scripts/python.exe -m pip install -e .

# Install from source (non-editable)
./.venv/Scripts/python.exe -m pip install .
```

```bash
# Check installed packages
./.venv/Scripts/python.exe -m pip list

# Verify proxy can be imported
./.venv/Scripts/python.exe -c "import src.core.cli; print('Success!')"
```

```bash
# Format code with black
./.venv/Scripts/python.exe -m black .

# Check formatting without making changes
./.venv/Scripts/python.exe -m black --check .
```

```bash
# Run ruff linter
./.venv/Scripts/python.exe -m ruff check .

# Run ruff with auto-fix
./.venv/Scripts/python.exe -m ruff check --fix .
```

```bash
# Run mypy type checker
./.venv/Scripts/python.exe -m mypy src/
```

```bash
# Run with default settings
./.venv/Scripts/python.exe -m src.core.cli

# Run with specific backend
./.venv/Scripts/python.exe -m src.core.cli --default-backend openai

# Run with configuration file
./.venv/Scripts/python.exe -m src.core.cli --config config/config.example.yaml
```

```bash
# Bind to specific host and port
./.venv/Scripts/python.exe -m src.core.cli --host 127.0.0.1 --port 8000

# Disable authentication (local only)
./.venv/Scripts/python.exe -m src.core.cli --disable-auth

# Enable wire capture (See [Wire Capture](../user_guide/debugging/wire-capture.md))
./.venv/Scripts/python.exe -m src.core.cli --capture-file var/wire_captures_json/capture.json

# Enable CBOR wire capture (See [CBOR Capture](../user_guide/debugging/cbor-capture.md))
./.venv/Scripts/python.exe -m src.core.cli --cbor-capture-file var/wire_captures_cbor/capture.cbor
```

Set environment variables for the backends you plan to use:
```bash
# [OpenAI](../user_guide/backends/openai.md)
export OPENAI_API_KEY="sk-..."

# [Anthropic](../user_guide/backends/anthropic.md)
export ANTHROPIC_API_KEY="sk-ant-..."

# [Gemini](../user_guide/backends/gemini.md)
export GEMINI_API_KEY="AIza..."

# [OpenRouter](../user_guide/backends/openrouter.md)
export OPENROUTER_API_KEY="sk-or-..."

# [ZAI](../user_guide/backends/zai.md)
export ZAI_API_KEY="..."

# Minimax
export MINIMAX_API_KEY="..."

# For Gemini GCP backend
export GOOGLE_CLOUD_PROJECT="your-project-id"
export GOOGLE_APPLICATION_CREDENTIALS="/path/to/service-account.json"
```
```bash
# Enable features
export ENABLE_SANDBOXING=true  # See [File Access Sandboxing](../user_guide/features/file-access-sandboxing.md)
export DANGEROUS_COMMAND_PREVENTION_ENABLED=true  # See [Dangerous Command Protection](../user_guide/features/dangerous-command-protection.md)
export FIX_THINK_TAGS_ENABLED=true  # See [Think Tags Fix](../user_guide/features/think-tags-fix.md)

# [LLM Assessment](../user_guide/features/llm-assessment.md)
export LLM_ASSESSMENT_ENABLED=true
export LLM_ASSESSMENT_BACKEND=openai
export LLM_ASSESSMENT_MODEL=gpt-4o-mini

# [Quality Verifier](../user_guide/features/quality-verifier.md)
export QUALITY_VERIFIER_MODEL="openai:gpt-4o-mini"
export QUALITY_VERIFIER_FREQUENCY=1
```

```bash
# Database URL (SQLite is default, no configuration needed)
export DATABASE_URL="sqlite+aiosqlite:///./var/db/proxy.db"

# For PostgreSQL (production)
export DATABASE_URL="postgresql+asyncpg://user:pass@localhost:5432/llm_proxy"

# Connection pool settings (PostgreSQL only)
export DATABASE_POOL_SIZE=5
export DATABASE_MAX_OVERFLOW=10
```

The proxy uses Alembic for database migrations. By default, migrations run automatically on startup.
```bash
# Check current migration revision
./.venv/Scripts/python.exe -m alembic current

# Run pending migrations manually
./.venv/Scripts/python.exe -m alembic upgrade head

# Create a new migration (after model changes)
./.venv/Scripts/python.exe -m alembic revision --autogenerate -m "Add new_feature table"

# Rollback one migration
./.venv/Scripts/python.exe -m alembic downgrade -1

# Show migration history
./.venv/Scripts/python.exe -m alembic history
```

```bash
# Delete SQLite database and let it recreate
rm -f var/db/proxy.db

# Or reset PostgreSQL (caution: destroys data)
dropdb llm_proxy && createdb llm_proxy
./.venv/Scripts/python.exe -m alembic upgrade head
```

For detailed database configuration, see the Database Configuration Guide.
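The `DATABASE_URL` values above follow SQLAlchemy's `dialect+driver://user:password@host:port/database` convention. As a rough sketch of how such a URL breaks down (illustrative only; the proxy and SQLAlchemy handle the real parsing and connection):

```python
from urllib.parse import urlsplit

def describe_database_url(url: str) -> dict[str, str]:
    """Split a SQLAlchemy-style database URL into its main parts."""
    parts = urlsplit(url)
    return {
        "dialect+driver": parts.scheme,
        "host": parts.hostname or "(none, file-based)",
        "database": parts.path.lstrip("/"),
    }

print(describe_database_url("sqlite+aiosqlite:///./var/db/proxy.db"))
print(describe_database_url("postgresql+asyncpg://user:pass@localhost:5432/llm_proxy"))
```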
On Windows:

- Use `.venv\Scripts\activate` to activate the virtual environment
- Use `./.venv/Scripts/python.exe` for all Python commands
- Use backslashes (`\`) for paths in commands

On Linux/macOS:

- Use `source .venv/bin/activate` to activate the virtual environment
- Use `./.venv/bin/python` for all Python commands
- Use forward slashes (`/`) for paths in commands
When using WSL, still use the Windows Python interpreter:

```bash
# Even in WSL, use the Windows Python executable
./.venv/Scripts/python.exe -m pip install -e .[dev]
```

Problem: Virtual environment not activating

Solution:

```bash
# Recreate virtual environment
rm -rf .venv
python -m venv .venv
./.venv/Scripts/python.exe -m pip install -e .[dev]
```

Problem: Dependency version conflicts

Solution:

```bash
# Clear pip cache and reinstall
./.venv/Scripts/python.exe -m pip cache purge
./.venv/Scripts/python.exe -m pip install --force-reinstall -e .[dev]
```

Problem: Cannot import modules from `src`

Solution:

```bash
# Ensure package is installed in editable mode
./.venv/Scripts/python.exe -m pip install -e .

# Verify installation
./.venv/Scripts/python.exe -c "import src; print(src.__file__)"
```

Problem: Permission denied when installing packages

Solution:

```bash
# On Windows, run as administrator or use --user flag
./.venv/Scripts/python.exe -m pip install --user -e .[dev]

# On Linux/macOS, check virtual environment ownership
sudo chown -R $USER .venv
```

The build process generates several artifacts:
- `.venv/`: Virtual environment directory
- `src/llm_interactive_proxy.egg-info/`: Package metadata
- `__pycache__/`: Python bytecode cache
- `.mypy_cache/`: Mypy type checking cache
- `.ruff_cache/`: Ruff linting cache
- `.pytest_cache/`: Pytest cache

```bash
# Remove Python cache files
find . -type d -name "__pycache__" -exec rm -rf {} +
find . -type f -name "*.pyc" -delete

# Remove build artifacts
rm -rf src/llm_interactive_proxy.egg-info
rm -rf .mypy_cache .ruff_cache .pytest_cache

# Remove virtual environment (if needed)
rm -rf .venv
```

The project uses GitHub Actions for CI/CD:
- CI Workflow: Runs tests, linting, and type checking on every push
- Architecture Check: Validates architectural constraints
- Coverage: Tracks code coverage with Codecov
See `.github/workflows/` for workflow definitions.
- Testing: See `testing.md` for running tests
- Contributing: See `contributing.md` for contribution workflow
- Code Organization: See `code-organization.md` for project structure
- Coding Standards: See `AGENTS.md` for coding standards