This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
## Development Commands

```bash
# Install dependencies
npm install

# Run the server in development mode
npm run dev

# Build the project
npm run build

# Start the compiled server
npm start

# Run tests
npm test

# Run type checking
npm run type-check

# Run linting
npm run lint

# Auto-fix lint issues and format code
npm run lint:fix
```

## Quick Start

```bash
# Start the proxy server with the CLI
npx anthropic-proxy-nextgen start --port 8080 --base-url=http://localhost:4000 --big-model-name=github-copilot-claude-sonnet-4 --small-model-name=github-copilot-claude-3.5-sonnet --openai-api-key=sk-your-key --log-level=DEBUG

# Test with Claude Code
ANTHROPIC_BASE_URL=http://localhost:8080 claude
```

## Architecture

This is a TypeScript Express.js application that serves as a proxy between Anthropic's Claude API and OpenAI-compatible APIs. The codebase is modular, with well-separated concerns.
### Core Modules

- src/cli.ts: Command-line interface using Commander.js
- src/server.ts: Express server setup with routing and middleware
- src/types.ts: TypeScript types and Zod schemas for validation
- src/converter.ts: Request/response conversion between Anthropic and OpenAI formats
- src/streaming.ts: Server-Sent Events streaming response handler
- src/tokenizer.ts: Token counting using tiktoken
- src/logger.ts: Structured JSON logging with Winston
- src/errors.ts: Error handling and mapping
### Key Features

- API Translation: Converts between the Anthropic Messages API and the OpenAI Chat Completions format (a request-side sketch follows this list)
- Dynamic Model Selection: Maps Claude model names (Opus/Sonnet → big model, Haiku → small model)
- Streaming Support: Handles SSE streaming with proper content block indexing for mixed text/tool_use content
- Tool/Function Translation: Converts between Anthropic's tool system and OpenAI's function calling
- Comprehensive Error Handling: Maps OpenAI errors to Anthropic-compatible error formats
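The snippet below sketches the request-side translation for the simple all-text case. It is a simplified illustration, not the actual logic in src/converter.ts, which also handles content blocks, tools, and tool results:

```typescript
// Sketch of the request-side translation for the all-text case only.
interface SimpleAnthropicRequest {
  model: string;
  max_tokens: number;
  system?: string;
  messages: { role: 'user' | 'assistant'; content: string }[];
}

function toOpenAIChatRequest(req: SimpleAnthropicRequest, targetModel: string) {
  // Anthropic carries the system prompt as a top-level field; OpenAI expects
  // it as the first message in the messages array.
  const messages = [
    ...(req.system ? [{ role: 'system' as const, content: req.system }] : []),
    ...req.messages,
  ];
  return { model: targetModel, max_tokens: req.max_tokens, messages };
}
```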
### Model Mapping Logic

```typescript
// In converter.ts
if (clientModelLower.includes('opus') || clientModelLower.includes('sonnet')) {
  targetModel = bigModelName;
} else if (clientModelLower.includes('haiku')) {
  targetModel = smallModelName;
} else {
  targetModel = smallModelName; // default
}
```

## Configuration

All configuration is handled through:
- CLI arguments (primary)
- Environment variables (fallback)
- .env file (development)
Required configuration:

- baseUrl: Target OpenAI-compatible API endpoint
- openaiApiKey: API key for the target service
- bigModelName / smallModelName: Model mapping configuration
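The snippet below sketches the CLI-first, environment-fallback resolution described above. The environment variable names (PROXY_BASE_URL, OPENAI_API_KEY, BIG_MODEL_NAME, SMALL_MODEL_NAME) are illustrative assumptions, not documented variables of this proxy:

```typescript
// Sketch of CLI-first, environment-fallback config resolution.
// Environment variable names here are assumed for illustration.
interface ProxyConfig {
  baseUrl: string;
  openaiApiKey: string;
  bigModelName: string;
  smallModelName: string;
}

function resolveConfig(cli: Partial<ProxyConfig>): ProxyConfig {
  const config = {
    baseUrl: cli.baseUrl ?? process.env.PROXY_BASE_URL,
    openaiApiKey: cli.openaiApiKey ?? process.env.OPENAI_API_KEY,
    bigModelName: cli.bigModelName ?? process.env.BIG_MODEL_NAME,
    smallModelName: cli.smallModelName ?? process.env.SMALL_MODEL_NAME,
  };
  // Every field listed above is required; fail fast if any is missing.
  for (const [key, value] of Object.entries(config)) {
    if (!value) throw new Error(`Missing required configuration: ${key}`);
  }
  return config as ProxyConfig;
}
```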
## API Endpoints

- POST /v1/messages: Main Anthropic Messages API compatible endpoint
- POST /v1/messages/count_tokens: Token counting utility
- GET /: Health check and server info
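A quick way to exercise the main endpoint against a locally running proxy. The request body follows the standard Anthropic Messages API shape; the model name is only an example alias:

```typescript
// Smoke-test the main endpoint of a proxy running on localhost:8080.
const res = await fetch('http://localhost:8080/v1/messages', {
  method: 'POST',
  headers: { 'content-type': 'application/json' },
  body: JSON.stringify({
    model: 'claude-3-5-haiku-latest', // mapped to the configured small model
    max_tokens: 128,
    messages: [{ role: 'user', content: 'Reply with one short sentence.' }],
  }),
});
console.log(res.status, await res.json());
```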
## Request Validation

Uses Zod schemas for runtime validation of:
- Anthropic request/response formats
- Configuration objects
- Internal data structures
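As an illustrative fragment of this approach (the project's real schemas live in src/types.ts; field names below follow the public Anthropic Messages API, not necessarily the project's exact schema):

```typescript
import { z } from 'zod';

// Illustrative fragment only; the real schemas live in src/types.ts.
const MessageSchema = z.object({
  role: z.enum(['user', 'assistant']),
  content: z.union([z.string(), z.array(z.unknown())]), // plain string or content blocks
});

const MessagesRequestSchema = z.object({
  model: z.string(),
  max_tokens: z.number().int().positive(),
  messages: z.array(MessageSchema).min(1),
  stream: z.boolean().optional(),
});

// safeParse returns a result object instead of throwing on invalid input.
const result = MessagesRequestSchema.safeParse({
  model: 'claude-3-5-haiku-latest',
  max_tokens: 128,
  messages: [{ role: 'user', content: 'hello' }],
});
if (!result.success) console.error(result.error.issues);
```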
## Streaming

The streaming handler (src/streaming.ts) maintains:
- Content block indexing for mixed text/tool_use responses
- Tool state tracking during streaming
- Proper SSE event formatting for Anthropic compatibility
- Token counting during stream processing
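For reference, the Anthropic-compatible SSE sequence for a mixed text/tool_use response looks roughly like this. This is a sketch of the event framing using the documented event names, not the project's actual emitter:

```typescript
import type { Response } from 'express';

// Write one SSE event in the "event: ...\ndata: ...\n\n" wire format.
function sendEvent(res: Response, event: string, data: unknown): void {
  res.write(`event: ${event}\ndata: ${JSON.stringify(data)}\n\n`);
}

// Sketch: one text block (index 0) followed by one tool_use block (index 1).
function streamExample(res: Response): void {
  sendEvent(res, 'message_start', { type: 'message_start', message: { role: 'assistant', content: [] } });

  sendEvent(res, 'content_block_start', { type: 'content_block_start', index: 0, content_block: { type: 'text', text: '' } });
  sendEvent(res, 'content_block_delta', { type: 'content_block_delta', index: 0, delta: { type: 'text_delta', text: 'Looking that up…' } });
  sendEvent(res, 'content_block_stop', { type: 'content_block_stop', index: 0 });

  sendEvent(res, 'content_block_start', { type: 'content_block_start', index: 1, content_block: { type: 'tool_use', id: 'toolu_01', name: 'get_weather', input: {} } });
  sendEvent(res, 'content_block_delta', { type: 'content_block_delta', index: 1, delta: { type: 'input_json_delta', partial_json: '{"city":"Paris"}' } });
  sendEvent(res, 'content_block_stop', { type: 'content_block_stop', index: 1 });

  sendEvent(res, 'message_delta', { type: 'message_delta', delta: { stop_reason: 'tool_use' }, usage: { output_tokens: 12 } });
  sendEvent(res, 'message_stop', { type: 'message_stop' });
  res.end();
}
```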
## Error Handling

Comprehensive error mapping with:
- OpenAI API error extraction and conversion
- Provider-specific error details preservation
- Structured logging for debugging
- Proper HTTP status code mapping
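A minimal sketch of the status-to-type mapping. The Anthropic error envelope below is the documented shape; the mapping table itself is illustrative, and the project's real logic lives in src/errors.ts:

```typescript
// Illustrative only; the real mapping lives in src/errors.ts.
// Anthropic error responses use the documented envelope:
// { type: 'error', error: { type: string, message: string } }
function toAnthropicError(status: number, message: string) {
  const typeByStatus: Record<number, string> = {
    400: 'invalid_request_error',
    401: 'authentication_error',
    403: 'permission_error',
    404: 'not_found_error',
    429: 'rate_limit_error',
    500: 'api_error',
    529: 'overloaded_error',
  };
  return {
    status,
    body: { type: 'error', error: { type: typeByStatus[status] ?? 'api_error', message } },
  };
}
```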
## Logging

Structured JSON logging with:
- Request/response tracking
- Performance metrics
- Error context preservation
- File and console output support
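A sketch of a Winston setup covering the features above; the project's actual configuration lives in src/logger.ts, and the log filename here is an assumption:

```typescript
import winston from 'winston';

// Sketch only; the project's real setup lives in src/logger.ts.
const logger = winston.createLogger({
  level: process.env.LOG_LEVEL?.toLowerCase() ?? 'info',
  format: winston.format.combine(winston.format.timestamp(), winston.format.json()),
  transports: [
    new winston.transports.Console(),
    new winston.transports.File({ filename: 'proxy.log' }), // assumed filename
  ],
});

// Structured context travels as JSON fields alongside the message.
logger.info('request completed', { method: 'POST', path: '/v1/messages', durationMs: 412 });
```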
## Development Notes

- Package Manager: Uses npm (could be migrated to yarn/pnpm if needed)
- Build System: TypeScript compiler with standard tsconfig.json
- Code Quality: ESLint + TypeScript strict mode
- Testing: Jest framework (tests need to be implemented)
- CLI Distribution: Published as npm package with binary
## Deployment

The application compiles to a standalone Node.js application that can be:

- Installed globally: `npm install -g anthropic-proxy-nextgen`
- Run directly: `npx anthropic-proxy-nextgen@latest start [options]`
- Used programmatically: imported as a TypeScript/JavaScript module (see the sketch after this list)
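The package's exported API isn't documented in this file, so the entry point below (`startServer`) and its option names are hypothetical; treat this as a shape sketch, not the package's actual surface:

```typescript
// Hypothetical programmatic usage; 'startServer' and its options are
// assumptions, not taken from the package's documented exports.
import { startServer } from 'anthropic-proxy-nextgen';

await startServer({
  port: 8080,
  baseUrl: 'http://localhost:4000',
  openaiApiKey: process.env.OPENAI_API_KEY ?? '',
  bigModelName: 'github-copilot-claude-sonnet-4',
  smallModelName: 'github-copilot-claude-3.5-sonnet',
});
```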
## Relationship to the Python Version

This TypeScript version maintains 100% API compatibility with the original Python FastAPI version while adding:
- CLI interface for easy deployment
- npm package distribution
- Better type safety
- Modular architecture for easier maintenance