A Claude Code proxy server rewritten in Go, providing a complete frontend and backend that converts Claude API requests into OpenAI API calls.
- Complete Claude API Compatibility: Supports the full `/v1/messages` endpoint
- Multi-Provider Support: OpenAI, Azure OpenAI, local models (Ollama), and any OpenAI-compatible API
- Intelligent Model Mapping: Configure large and small models via environment variables
- Function Calling: Complete tool usage support and conversion
- Streaming Response: Real-time SSE streaming support
- Image Support: Base64 encoded image input
- Web Management Interface: Apple-style design management panel
- Database Support: Data persistence using ent and SQLite3
- User Management: Complete user authentication and authorization system
- Request Logging: Detailed API request logging and analysis
- System Monitoring: Real-time system performance monitoring and health checks
- Cache Optimization: Intelligent cache system for performance improvement
- Error Handling: Comprehensive error handling and logging
```bash
# Ensure Go 1.21+ is installed
go version

# Download dependencies
go mod tidy
```

```bash
# Copy configuration file (optional)
cp .env.example .env

# Edit .env file to set the data storage directory and security keys
# All API and service configurations are managed through the web interface
```

Using published images:
```bash
# Using the Docker Hub image with data persistence
docker run -d -p 8082:8082 \
  -v ccany_data:/app/data \
  -v ccany_logs:/app/logs \
  -e OPENAI_API_KEY=your_openai_api_key \
  czyt/ccany:latest

# Or using the GitHub Container Registry image with data persistence
docker run -d -p 8082:8082 \
  -v ccany_data:/app/data \
  -v ccany_logs:/app/logs \
  -e OPENAI_API_KEY=your_openai_api_key \
  ghcr.io/ca-x/ccany:latest
```

Using Docker Compose (Production):
```bash
# Copy environment configuration
cp .env.example .env

# Edit .env file to set the required API keys

# Use the production compose file (with published images)
docker-compose -f docker-compose.production.yml up -d

# Or use the full compose file (includes monitoring services)
docker-compose up -d
```

```bash
# Start the server
go run cmd/server/main.go
```

```bash
# After starting the server, initial setup is required on first access
# Visit http://localhost:8082/setup to create an admin account and configure API keys

# Or use the deployment script for automated deployment:
chmod +x scripts/deploy.sh
./scripts/deploy.sh start
```

After logging into the web interface, configure the following in the management panel:

- OpenAI API key and base URL
- Claude API key and base URL
- Model configuration
- Performance parameters

```bash
ANTHROPIC_BASE_URL=http://localhost:8082 ANTHROPIC_AUTH_TOKEN="some-api-key" claude
```

Open your browser and visit http://localhost:8082 to view the management panel.
System Configuration:
- `CLAUDE_PROXY_DATA_PATH` - Data storage directory (default: `./data`)
- `CLAUDE_PROXY_MASTER_KEY` - Master key for encrypting sensitive configurations (recommended for production)
- `JWT_SECRET` - JWT key for user authentication (recommended for production)
API Configuration:
- OpenAI API key and base URL
- Claude API key and base URL
- Azure API version (optional)
Model Configuration:
- Large model (default: `gpt-4o`)
- Small model (default: `gpt-4o-mini`)
Server Settings:
- Server host (default: `0.0.0.0`)
- Server port (default: `8082`)
- Log level (default: `info`)
Performance Configuration:
- Maximum token limit (default: `4096`)
- Minimum token limit (default: `100`)
- Request timeout in seconds (default: `90`)
- Maximum retry attempts (default: `2`)
- Temperature parameter (default: `0.7`)
- Streaming responses (default: `true`)
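Taken together, the performance defaults above can be pictured as a plain Go struct. This is only an illustrative sketch; the field names and constructor are hypothetical, not necessarily the proxy's actual types in `internal/config`:

```go
package main

import "fmt"

// PerformanceConfig mirrors the performance settings listed above.
// Field names are illustrative; the proxy's real types may differ.
type PerformanceConfig struct {
	MaxTokens      int
	MinTokens      int
	TimeoutSeconds int
	MaxRetries     int
	Temperature    float64
	Streaming      bool
}

// defaultPerformanceConfig returns the documented default values.
func defaultPerformanceConfig() PerformanceConfig {
	return PerformanceConfig{
		MaxTokens:      4096,
		MinTokens:      100,
		TimeoutSeconds: 90,
		MaxRetries:     2,
		Temperature:    0.7,
		Streaming:      true,
	}
}

func main() {
	cfg := defaultPerformanceConfig()
	fmt.Printf("%+v\n", cfg)
}
```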
💡 Note: All API and service configurations are now managed through the web management interface; environment variables are no longer required. Visit the `/setup` page for initial setup on the first run.
The proxy maps Claude model requests to your configured models:
| Claude Request | Mapped To | Default |
|---|---|---|
| Models containing "haiku" | `SMALL_MODEL` | `gpt-4o-mini` |
| Models containing "sonnet" or "opus" | `BIG_MODEL` | `gpt-4o` |
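The substring rules above can be sketched as a small Go helper. This is a simplified illustration (the function name is hypothetical, and the proxy's actual routing also weighs complexity and token count):

```go
package main

import (
	"fmt"
	"strings"
)

// mapClaudeModel maps an incoming Claude model name to the configured
// OpenAI-compatible model, following the substring rules in the table above.
// bigModel and smallModel correspond to the BIG_MODEL/SMALL_MODEL settings.
func mapClaudeModel(requested, bigModel, smallModel string) string {
	name := strings.ToLower(requested)
	switch {
	case strings.Contains(name, "haiku"):
		return smallModel
	case strings.Contains(name, "sonnet"), strings.Contains(name, "opus"):
		return bigModel
	default:
		// Fallback for unrecognized names (an illustrative choice,
		// not necessarily what the proxy does).
		return bigModel
	}
}

func main() {
	fmt.Println(mapClaudeModel("claude-3-5-haiku-20241022", "gpt-4o", "gpt-4o-mini"))  // gpt-4o-mini
	fmt.Println(mapClaudeModel("claude-3-5-sonnet-20241022", "gpt-4o", "gpt-4o-mini")) // gpt-4o
}
```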
Configure through web interface:
- OpenAI API Key: `sk-your-openai-key`
- OpenAI Base URL: `https://api.openai.com/v1`
- Large Model: `gpt-4o`
- Small Model: `gpt-4o-mini`
Configure through web interface:
- OpenAI API Key: `your-azure-key`
- OpenAI Base URL: `https://your-resource.openai.azure.com/openai/deployments/your-deployment`
- Azure API Version: `2024-02-01`
- Large Model: `gpt-4`
- Small Model: `gpt-35-turbo`
Configure through web interface:
- OpenAI API Key: `dummy-key` (required but can be a placeholder)
- OpenAI Base URL: `http://localhost:11434/v1`
- Large Model: `llama3.1:70b`
- Small Model: `llama3.1:8b`
Configure through web interface:
- Claude API Key: `sk-ant-your-claude-key`
- Claude Base URL: `https://api.anthropic.com`
Access http://localhost:8082 to use the web management interface, which includes:
- Dashboard: View service status and statistics
- Request Logs: View detailed records and analysis of all API requests
- System Monitoring: Real-time system performance monitoring and health checks
- User Management: User account management and permission control
- Configuration Management: View and modify system configuration parameters
- API Testing: Test connections and send test messages
The interface features an Apple-style design with a responsive layout and frosted-glass effects, and a complete authentication system secures access to the management panel.
```
ccany/
├── cmd/
│   └── server/
│       └── main.go          # Main program entry
├── internal/
│   ├── app/                 # Application configuration management
│   ├── auth/                # Authentication service
│   ├── cache/               # Cache service
│   ├── claudecode/          # Claude Code compatibility services
│   ├── client/              # OpenAI client
│   ├── config/              # Configuration management
│   ├── converter/           # Request/response converter
│   ├── crypto/              # Encryption service
│   ├── database/            # Database management
│   ├── handlers/            # HTTP handlers
│   ├── logging/             # Request logging
│   ├── middleware/          # Middleware
│   ├── models/              # Data models
│   └── monitoring/          # System monitoring
├── ent/
│   ├── schema/              # Database schema definitions
│   └── ...                  # Generated ORM code
├── tests/
│   └── basic_test.go        # Basic test file
├── scripts/
│   └── deploy.sh            # Deployment script
├── web/
│   ├── index.html           # Main page
│   ├── setup.html           # Setup page
│   └── static/
│       ├── css/             # Style files
│       └── js/              # JavaScript files
├── .env.example             # Configuration template
├── go.mod                   # Go module file
├── Makefile                 # Build script
└── README.md                # This file
```
- `POST /v1/messages` - Create message
- `POST /v1/messages/count_tokens` - Count tokens
- `GET /` - Web management interface
- `GET /health` - Health check
- `GET /ready` - Readiness check
- `GET /setup` - Initialization setup interface
- `POST /setup` - Submit initialization setup
- `POST /admin/auth/login` - Admin login
- `POST /admin/auth/logout` - Admin logout
- `GET /admin/auth/profile` - Get user information
- `GET /admin/users` - Get user list
- `POST /admin/users` - Create user
- `PUT /admin/users/:id` - Update user
- `DELETE /admin/users/:id` - Delete user
- `GET /admin/config` - Get configuration information
- `PUT /admin/config` - Update configuration
- `GET /admin/request-logs` - Get request logs
- `GET /admin/request-logs/stats` - Get request statistics
- `DELETE /admin/request-logs` - Clear request logs
- `GET /admin/monitoring/health` - System health check
- `GET /admin/monitoring/metrics` - System metrics
- `GET /admin/monitoring/system` - System information
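As a minimal sketch, an Anthropic-style request to the proxy's `/v1/messages` endpoint could be constructed like this. The helper name, model string, token count, and the `x-api-key` header choice are illustrative assumptions; adapt them to your deployment and actually send the request only against a running server:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// message mirrors the Anthropic-style message shape accepted by /v1/messages.
type message struct {
	Role    string `json:"role"`
	Content string `json:"content"`
}

// newMessagesRequest builds a POST /v1/messages request for the proxy.
// baseURL and apiKey are placeholders for your deployment's values.
func newMessagesRequest(baseURL, apiKey string) (*http.Request, error) {
	body, err := json.Marshal(map[string]interface{}{
		"model":      "claude-3-5-sonnet-20241022", // mapped to the big model by the proxy
		"max_tokens": 1024,
		"messages": []message{
			{Role: "user", Content: "Hello!"},
		},
	})
	if err != nil {
		return nil, err
	}
	req, err := http.NewRequest(http.MethodPost, baseURL+"/v1/messages", bytes.NewReader(body))
	if err != nil {
		return nil, err
	}
	req.Header.Set("Content-Type", "application/json")
	req.Header.Set("x-api-key", apiKey) // Anthropic-style auth header (assumption)
	return req, nil
}

func main() {
	req, err := newMessagesRequest("http://localhost:8082", "some-api-key")
	if err != nil {
		panic(err)
	}
	fmt.Println(req.Method, req.URL.String())
}
```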
```bash
# Build the executable
go build -o ccany cmd/server/main.go

# Cross-compile
GOOS=linux GOARCH=amd64 go build -o ccany-linux cmd/server/main.go
GOOS=windows GOARCH=amd64 go build -o ccany.exe cmd/server/main.go
```

```bash
# Run all tests
go test ./...

# Run tests for a specific package
go test ./internal/converter

# Run integration tests (requires the server to be running)
go test -v ./tests/

# Run benchmarks
go test -bench=. ./tests/

# Generate a test coverage report
go test -coverprofile=coverage.out ./...
go tool cover -html=coverage.out -o coverage.html
```

```bash
# Format code
go fmt ./...

# Vet code
go vet ./...
```

- Concurrent Processing: High concurrency using goroutines
- Connection Pool: Efficient HTTP connection management
- Streaming Support: Real-time response streaming
- Intelligent Caching: Multi-strategy cache system (LRU, LFU, TTL)
- Request Logging: Asynchronous logging without performance impact
- System Monitoring: Low-overhead real-time performance monitoring
- Database Optimization: Connection pooling and query optimization
- Memory Management: Graceful memory usage and garbage collection
- Configurable Timeouts: Tunable request timeouts and retry limits
- Intelligent Error Handling: Detailed logging
This proxy is designed to work seamlessly with Claude Code CLI. The enhanced version includes complete Claude Code compatibility support:
```bash
# Start the enhanced proxy using the deployment script
./scripts/deploy.sh start

# Use Claude Code with the proxy
ANTHROPIC_BASE_URL=http://localhost:8082 claude

# Or set it permanently
export ANTHROPIC_BASE_URL=http://localhost:8082
claude
```

- ✅ Complete SSE Event Sequence: Support for `message_start`, `content_block_start`, `ping`, `content_block_delta`, `content_block_stop`, `message_delta`, `message_stop` events
- ✅ Request Cancellation Support: Client disconnect detection and graceful request cancellation
- ✅ Claude Configuration Automation: Automatic creation of the `~/.claude.json` configuration file
- ✅ Thinking Mode: Support for the `thinking` field and intelligent model routing
- ✅ Enhanced Tool Calls: Tool call streaming with incremental JSON parsing
- ✅ Cache Tokens: Support for `cache_read_input_tokens` usage reporting
- ✅ Smart Routing: Intelligent model selection based on complexity and token count
```bash
# Basic deployment
./scripts/deploy.sh start

# Deployment with monitoring
./scripts/deploy.sh monitoring

# Deployment with Nginx
./scripts/deploy.sh nginx

# Test Claude Code compatibility
./scripts/deploy.sh test

# Show help
./scripts/deploy.sh help
```

```bash
# Copy environment configuration
cp .env.example .env

# Edit .env file to configure API keys

# Basic deployment
docker-compose up -d

# Deployment with monitoring
docker-compose --profile monitoring up -d

# Deployment with Nginx
docker-compose --profile nginx up -d

# Test Claude Code compatibility
docker-compose --profile test up --build test-claude-code
```

```bash
# Automated deployment (recommended)
./scripts/deploy.sh start

# Deployment with monitoring stack
./scripts/deploy.sh monitoring

# Check service status
./scripts/deploy.sh status

# View logs
./scripts/deploy.sh logs
```

For a detailed deployment guide, please refer to: Deployment Documentation
MIT License
Issues and Pull Requests are welcome!
- ✅ Complete Claude Code compatibility support
- ✅ Enhanced SSE event sequence (`message_start`, `content_block_start`, `ping`, `content_block_delta`, `content_block_stop`, `message_delta`, `message_stop`)
- ✅ Request cancellation and client disconnect detection
- ✅ Claude configuration auto-initialization (`~/.claude.json`)
- ✅ Thinking mode support and intelligent model routing
- ✅ Enhanced tool call streaming
- ✅ Cache token usage reporting
- ✅ Enhanced Docker and Docker Compose configuration
- ✅ GitHub Actions CI/CD pipeline
- ✅ Complete deployment scripts and monitoring support
- ✅ Enhanced Nginx configuration and performance optimization
- Complete backend management system
- User authentication and authorization
- Request logging and analysis
- System monitoring and health checks
- Intelligent cache system
- Complete test suite
- Enhanced documentation and deployment guide
- Database integration and ORM support
- Configuration management system
- Secure configuration storage
- Database migration support
- Initial Go version release
- Complete Claude API compatibility
- Web management interface
- Database support
- Apple-style design UI