
Building

This guide covers how to set up your development environment, install dependencies, and build the LLM Interactive Proxy.

Prerequisites

  • Python 3.10 or higher: earlier versions are not supported
  • pip: Python package installer (usually included with Python)
  • Git: For cloning the repository
  • Virtual environment: Recommended for isolation

Quick Start

1. Clone the Repository

git clone https://github.com/matdev83/llm-interactive-proxy.git
cd llm-interactive-proxy

2. Create Virtual Environment

# Create virtual environment
python -m venv .venv

# Activate virtual environment
# On Windows:
.venv\Scripts\activate

# On Linux/macOS:
source .venv/bin/activate

3. Install Dependencies

# Install the package in editable mode with development dependencies
./.venv/Scripts/python.exe -m pip install -e .[dev]

# Install with optional OAuth connector package
./.venv/Scripts/python.exe -m pip install -e .[dev,oauth]

The oauth extra installs the extracted llm-proxy-oauth-connectors package, which owns OAuth connector implementations and plugin entry points.

This installs:

  • The proxy package in editable mode (-e)
  • All runtime dependencies
  • All development dependencies ([dev])
  • Optionally, extracted OAuth connectors via [oauth]
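To double-check that the editable install registered the distribution, you can query the standard library's importlib.metadata. The distribution name below is assumed from the repository name; confirm it against the [project] name field in pyproject.toml if it differs:

```python
from importlib import metadata

# Distribution name assumed from the repository name; verify in pyproject.toml.
try:
    print(metadata.version("llm-interactive-proxy"))
except metadata.PackageNotFoundError:
    print("llm-interactive-proxy is not installed in this environment")
```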

Dependency Management

Adding Dependencies

All dependencies are managed through pyproject.toml. Never install packages directly with pip install <package>.

Adding Runtime Dependencies

Edit pyproject.toml and add the package to the dependencies list:

[project]
dependencies = [
    "fastapi>=0.104.0",
    "httpx>=0.25.0",
    # Add your new dependency here
    "new-package>=1.0.0",
]

Adding Development Dependencies

Edit pyproject.toml and add the package to the dev optional dependencies:

[project.optional-dependencies]
dev = [
    "pytest>=7.4.0",
    "ruff>=0.1.0",
    # Add your new dev dependency here
    "new-dev-tool>=1.0.0",
]

Installing After Adding Dependencies

After modifying pyproject.toml, reinstall the package:

./.venv/Scripts/python.exe -m pip install -e .[dev]
# Or with optional OAuth connectors:
./.venv/Scripts/python.exe -m pip install -e .[dev,oauth]

Project Structure

llm-interactive-proxy/
├── .venv/                  # Virtual environment (created by you)
├── src/                    # Source code
├── tests/                  # Test suite
├── config/                 # Configuration files
├── docs/                   # Documentation
├── scripts/                # Utility scripts
├── var/                    # Runtime data (logs, captures)
├── pyproject.toml          # Project metadata and dependencies
├── setup.py                # Package setup
└── dev/README.md           # Project overview

Build Commands

Install Package

# Install in editable mode with dev dependencies
./.venv/Scripts/python.exe -m pip install -e .[dev]

# Install in editable mode with dev + optional OAuth connectors
./.venv/Scripts/python.exe -m pip install -e .[dev,oauth]

# Install in editable mode without dev dependencies
./.venv/Scripts/python.exe -m pip install -e .

# Install from source (non-editable)
./.venv/Scripts/python.exe -m pip install .

Verify Installation

# Check installed packages
./.venv/Scripts/python.exe -m pip list

# Verify proxy can be imported
./.venv/Scripts/python.exe -c "import src.core.cli; print('Success!')"

Development Tools

Code Formatting

# Format code with black
./.venv/Scripts/python.exe -m black .

# Check formatting without making changes
./.venv/Scripts/python.exe -m black --check .

Linting

# Run ruff linter
./.venv/Scripts/python.exe -m ruff check .

# Run ruff with auto-fix
./.venv/Scripts/python.exe -m ruff check --fix .

Type Checking

# Run mypy type checker
./.venv/Scripts/python.exe -m mypy src/

Running the Proxy

Basic Usage

# Run with default settings
./.venv/Scripts/python.exe -m src.core.cli

# Run with specific backend
./.venv/Scripts/python.exe -m src.core.cli --default-backend openai

# Run with configuration file
./.venv/Scripts/python.exe -m src.core.cli --config config/config.example.yaml

Common Options

# Bind to specific host and port
./.venv/Scripts/python.exe -m src.core.cli --host 127.0.0.1 --port 8000

# Disable authentication (local only)
./.venv/Scripts/python.exe -m src.core.cli --disable-auth

# Enable wire capture (See [Wire Capture](../user_guide/debugging/wire-capture.md))
./.venv/Scripts/python.exe -m src.core.cli --capture-file var/wire_captures_json/capture.json

# Enable CBOR wire capture (See [CBOR Capture](../user_guide/debugging/cbor-capture.md))
./.venv/Scripts/python.exe -m src.core.cli --cbor-capture-file var/wire_captures_cbor/capture.cbor

Environment Variables

Required for Backends

Set environment variables for the backends you plan to use:

# [OpenAI](../user_guide/backends/openai.md)
export OPENAI_API_KEY="sk-..."

# [Anthropic](../user_guide/backends/anthropic.md)
export ANTHROPIC_API_KEY="sk-ant-..."

# [Gemini](../user_guide/backends/gemini.md)
export GEMINI_API_KEY="AIza..."

# [OpenRouter](../user_guide/backends/openrouter.md)
export OPENROUTER_API_KEY="sk-or-..."

# [ZAI](../user_guide/backends/zai.md)
export ZAI_API_KEY="..."

# Minimax
export MINIMAX_API_KEY="..."

# For Gemini GCP backend
export GOOGLE_CLOUD_PROJECT="your-project-id"
export GOOGLE_APPLICATION_CREDENTIALS="/path/to/service-account.json"

Optional Configuration

# Enable features
export ENABLE_SANDBOXING=true             # See [File Access Sandboxing](../user_guide/features/file-access-sandboxing.md)
export DANGEROUS_COMMAND_PREVENTION_ENABLED=true  # See [Dangerous Command Protection](../user_guide/features/dangerous-command-protection.md)
export FIX_THINK_TAGS_ENABLED=true        # See [Think Tags Fix](../user_guide/features/think-tags-fix.md)

# [LLM Assessment](../user_guide/features/llm-assessment.md)
export LLM_ASSESSMENT_ENABLED=true
export LLM_ASSESSMENT_BACKEND=openai
export LLM_ASSESSMENT_MODEL=gpt-4o-mini

# [Quality Verifier](../user_guide/features/quality-verifier.md)
export QUALITY_VERIFIER_MODEL="openai:gpt-4o-mini"
export QUALITY_VERIFIER_FREQUENCY=1

Database Configuration

# Database URL (SQLite is default, no configuration needed)
export DATABASE_URL="sqlite+aiosqlite:///./var/db/proxy.db"

# For PostgreSQL (production)
export DATABASE_URL="postgresql+asyncpg://user:pass@localhost:5432/llm_proxy"

# Connection pool settings (PostgreSQL only)
export DATABASE_POOL_SIZE=5
export DATABASE_MAX_OVERFLOW=10
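The URL format above follows SQLAlchemy's dialect+driver scheme. A standard-library sketch shows which backend and driver a given DATABASE_URL selects:

```python
import os
from urllib.parse import urlsplit

# Falls back to the documented SQLite default when DATABASE_URL is unset.
url = os.environ.get("DATABASE_URL", "sqlite+aiosqlite:///./var/db/proxy.db")
scheme = urlsplit(url).scheme            # e.g. "sqlite+aiosqlite"
dialect, _, driver = scheme.partition("+")
print(f"dialect={dialect} driver={driver or '(default)'}")
```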

Database Management

Migrations

The proxy uses Alembic for database migrations. By default, migrations run automatically on startup.

# Check current migration revision
./.venv/Scripts/python.exe -m alembic current

# Run pending migrations manually
./.venv/Scripts/python.exe -m alembic upgrade head

# Create a new migration (after model changes)
./.venv/Scripts/python.exe -m alembic revision --autogenerate -m "Add new_feature table"

# Rollback one migration
./.venv/Scripts/python.exe -m alembic downgrade -1

# Show migration history
./.venv/Scripts/python.exe -m alembic history

Development Database Reset

# Delete SQLite database and let it recreate
rm -f var/db/proxy.db

# Or reset PostgreSQL (caution: destroys data)
dropdb llm_proxy && createdb llm_proxy
./.venv/Scripts/python.exe -m alembic upgrade head

For detailed database configuration, see the Database Configuration Guide.

Platform-Specific Notes

Windows

  • Use .venv\Scripts\activate to activate the virtual environment
  • Use ./.venv/Scripts/python.exe for all Python commands
  • Backslashes (\) are the native Windows path separator, though forward slashes also work in most commands (as used throughout this guide)

Linux/macOS

  • Use source .venv/bin/activate to activate the virtual environment
  • Use ./.venv/bin/python for all Python commands
  • Use forward slashes (/) for paths in commands
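When scripting against the virtual environment, the platform split above can be folded into one helper. A minimal sketch:

```python
import platform
from pathlib import Path

def venv_python(venv: str = ".venv") -> Path:
    """Return the virtual environment's interpreter path for the current OS."""
    if platform.system() == "Windows":
        return Path(venv) / "Scripts" / "python.exe"
    return Path(venv) / "bin" / "python"

print(venv_python())
```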

WSL (Windows Subsystem for Linux)

When using WSL, still use the Windows Python interpreter:

# Even in WSL, use the Windows Python executable
./.venv/Scripts/python.exe -m pip install -e .[dev]

Troubleshooting

Virtual Environment Issues

Problem: Virtual environment not activating

Solution:

# Recreate virtual environment
rm -rf .venv
python -m venv .venv
./.venv/Scripts/python.exe -m pip install -e .[dev]

Dependency Conflicts

Problem: Dependency version conflicts

Solution:

# Clear pip cache and reinstall
./.venv/Scripts/python.exe -m pip cache purge
./.venv/Scripts/python.exe -m pip install --force-reinstall -e .[dev]

Import Errors

Problem: Cannot import modules from src

Solution:

# Ensure package is installed in editable mode
./.venv/Scripts/python.exe -m pip install -e .

# Verify installation
./.venv/Scripts/python.exe -c "import src; print(src.__file__)"

Permission Errors

Problem: Permission denied when installing packages

Solution:

# On Windows, run the shell as administrator and retry (note that pip's
# --user flag is not available inside a virtual environment)
./.venv/Scripts/python.exe -m pip install -e .[dev]

# On Linux/macOS, check virtual environment ownership
sudo chown -R $USER .venv

Build Artifacts

Generated Files

The build process generates several artifacts:

  • .venv/: Virtual environment directory
  • src/llm_interactive_proxy.egg-info/: Package metadata
  • __pycache__/: Python bytecode cache
  • .mypy_cache/: Mypy type checking cache
  • .ruff_cache/: Ruff linting cache
  • .pytest_cache/: Pytest cache

Cleaning Build Artifacts

# Remove Python cache files
find . -type d -name "__pycache__" -exec rm -rf {} +
find . -type f -name "*.pyc" -delete

# Remove build artifacts
rm -rf src/llm_interactive_proxy.egg-info
rm -rf .mypy_cache .ruff_cache .pytest_cache

# Remove virtual environment (if needed)
rm -rf .venv

Continuous Integration

The project uses GitHub Actions for CI/CD:

  • CI Workflow: Runs tests, linting, and type checking on every push
  • Architecture Check: Validates architectural constraints
  • Coverage: Tracks code coverage with Codecov

See .github/workflows/ for workflow definitions.

Related Documentation