DS2API converts DeepSeek Web chat capability into OpenAI-compatible, Claude-compatible, and Gemini-compatible APIs. The backend is a pure Go implementation, with a React WebUI admin panel (source in webui/, build output auto-generated to static/admin during deployment).
Documentation entry: Docs Index / Architecture / API Reference
Important Disclaimer
This repository is provided for learning, research, personal experimentation, and internal validation only. It does not grant any commercial authorization and comes with no warranty of fitness, stability, or results.
The author and repository maintainers are not responsible for any direct or indirect loss, account suspension, data loss, legal risk, or third-party claims arising from use, modification, distribution, deployment, or reliance on this project.
Do not use this project in ways that violate service terms, agreements, laws, or platform rules. Before any commercial use, review the LICENSE, the relevant terms, and confirm that you have the author's written permission.
- Architecture Overview (Summary)
- Key Capabilities
- Platform Compatibility Matrix
- Model Support
- Quick Start
- Configuration
- Authentication Modes
- Concurrency Model
- Tool Call Adaptation
- Local Dev Packet Capture
- Documentation Index
- Testing
- Release Artifact Automation (GitHub Actions)
- Disclaimer
```mermaid
flowchart LR
Client["🖥️ Clients / SDKs\n(OpenAI / Claude / Gemini)"]
Upstream["☁️ DeepSeek API"]
subgraph DS2API["DS2API 4.x (Modular HTTP Surface + PromptCompat Core)"]
Router["chi Router + Middleware\n(RequestID / RealIP / Logger / Recoverer / CORS)"]
subgraph HTTP["HTTP API Surface"]
OA["OpenAI\nchat / responses / files / embeddings"]
CA["Claude\n/anthropic/* + /v1/messages"]
GA["Gemini\n/v1beta/models/* + /v1/models/*"]
Admin["Admin API\nresource packages"]
WebUI["WebUI\n/admin (static hosting)"]
Vercel["Vercel Node Stream\n/v1/chat/completions"]
end
subgraph Runtime["Runtime + Core Capabilities"]
Compat["PromptCompat\n(API -> web-chat plain text context)"]
Chat["Chat / Responses Runtime\n(unified tools + stream semantics)"]
Auth["Auth Resolver\n(API key / bearer / x-goog-api-key)"]
Pool["Account Pool + Queue\n(in-flight slots + wait queue)"]
DSClient["DeepSeek Client\n(session / auth / completion / files)"]
Pow["PoW Solver\n(Pure Go)"]
Tool["Tool Sieve\n(Go/Node semantic parity)"]
History["History Split\n(long history as files)"]
end
end
Client --> Router
Router --> OA & CA & GA
Router --> Admin
Router --> WebUI
Router --> Vercel
OA --> Compat
CA & GA --> Compat
Compat --> Chat
Compat -.long history.-> History
Vercel -.Go prepare.-> Chat
Vercel -.Node SSE.-> Tool
Chat --> Auth
Chat -.account rotation.-> Pool
Chat -.tool-call parsing.-> Tool
Chat -.PoW solving.-> Pow
Auth --> DSClient
DSClient --> Upstream
Upstream --> DSClient
Chat --> Client
Vercel --> Client
```
For the full module-by-module architecture and directory responsibilities, see docs/ARCHITECTURE.en.md.
- Backend: Go (`cmd/ds2api/`, `api/`, `internal/`), no Python runtime
- Frontend: React admin panel (`webui/`), served as a static build at runtime
- Deployment: local run, Docker, Vercel serverless, Linux systemd
| Capability | Details |
|---|---|
| OpenAI compatible | GET /v1/models, GET /v1/models/{id}, POST /v1/chat/completions, POST /v1/responses, GET /v1/responses/{response_id}, POST /v1/embeddings, POST /v1/files |
| Claude compatible | GET /anthropic/v1/models, POST /anthropic/v1/messages, POST /anthropic/v1/messages/count_tokens (plus shortcut paths /v1/messages, /messages) |
| Gemini compatible | POST /v1beta/models/{model}:generateContent, POST /v1beta/models/{model}:streamGenerateContent (plus /v1/models/{model}:* paths) |
| Unified CORS compatibility | /v1/*, /anthropic/*, /v1beta/models/*, and /admin/* share one CORS policy; on Vercel, the Node Runtime for /v1/chat/completions mirrors the same relaxed preflight behavior for third-party clients |
| Multi-account rotation | Auto token refresh, email/mobile dual login |
| Concurrency control | Per-account in-flight limit + waiting queue, dynamic recommended concurrency |
| DeepSeek PoW | Pure Go high-performance solver (DeepSeekHashV1), ms-level response |
| Tool Calling | Anti-leak handling: non-code-block feature match, early delta.tool_calls, structured incremental output |
| Admin API | Config management, runtime settings hot-reload, proxy management, account testing/batch test, session cleanup, import/export, Vercel sync, version check |
| WebUI Admin Panel | SPA at /admin (bilingual Chinese/English, dark mode, with server-side conversation history) |
| Health Probes | GET /healthz (liveness), GET /readyz (readiness) |
OpenAI /v1/* routes remain canonical, and DS2API also accepts root shortcuts such as /models, /chat/completions, /responses, /embeddings, and /files for clients configured with the bare service URL.
| Tier | Platform | Status |
|---|---|---|
| P0 | Codex CLI/SDK (wire_api=chat / wire_api=responses) | ✅ |
| P0 | OpenAI SDK (JS/Python, chat + responses) | ✅ |
| P0 | Vercel AI SDK (openai-compatible) | ✅ |
| P0 | Anthropic SDK (messages) | ✅ |
| P0 | Google Gemini SDK (generateContent) | ✅ |
| P1 | LangChain / LlamaIndex / OpenWebUI (OpenAI-compatible integration) | ✅ |
| Family | Model ID | thinking | search |
|---|---|---|---|
| default | deepseek-v4-flash | enabled by default, request-controlled | ❌ |
| expert | deepseek-v4-pro | enabled by default, request-controlled | ❌ |
| default | deepseek-v4-flash-search | enabled by default, request-controlled | ✅ |
| expert | deepseek-v4-pro-search | enabled by default, request-controlled | ✅ |
| vision | deepseek-v4-vision | enabled by default, request-controlled | ❌ |
Besides native IDs, DS2API also accepts common aliases as input (for example gpt-4.1, gpt-5, gpt-5-codex, o3, claude-*, gemini-*), but /v1/models returns normalized DeepSeek native model IDs. The complete alias behavior is documented in API.en.md and config.example.json.
Current upstream vision support exposes only the vision lane and does not provide a separate search-enabled vision variant.
| Current common model | Default Mapping |
|---|---|
| claude-sonnet-4-6 | deepseek-v4-flash |
| claude-haiku-4-5 (compatible with claude-3-5-haiku-latest) | deepseek-v4-flash |
| claude-opus-4-6 | deepseek-v4-pro |
Override mapping via the global model_aliases config.
Besides the primary aliases above, /anthropic/v1/models also returns Claude 4.x snapshots plus historical 3.x IDs and common aliases for legacy client compatibility.
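As a hedged sketch, a `model_aliases` override in config.json might look like the fragment below. The alias keys and targets are taken from the mapping table above, but confirm the exact shape against config.example.json, which is the authoritative template:

```
{
  "model_aliases": {
    "claude-opus-4-6": "deepseek-v4-pro",
    "claude-sonnet-4-6": "deepseek-v4-flash",
    "claude-haiku-4-5": "deepseek-v4-flash"
  }
}
```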
- Set `ANTHROPIC_BASE_URL` to the DS2API root URL (for example `http://127.0.0.1:5001`). Claude Code sends requests to `/v1/messages?beta=true`.
- `ANTHROPIC_API_KEY` must match an entry in `keys` from `config.json`. Keeping both a regular key and an `sk-ant-*`-style key improves client compatibility.
- If your environment has proxy variables, set `NO_PROXY=127.0.0.1,localhost,<your_host_ip>` for DS2API to avoid proxy interception of local traffic.
- If tool calls are rendered as plain text and not executed, first verify the model output uses the recommended DSML block: `<|DSML|tool_calls><|DSML|invoke name="..."><|DSML|parameter name="...">...`. DS2API also accepts legacy canonical XML: `<tool_calls><invoke name="..."><parameter name="...">...`; legacy `<tools>`/`<tool_call>`/`<tool_name>`/`<param>`, `<function_call>`, `tool_use`, or standalone JSON `tool_calls` are not executed.
The Gemini adapter maps model names to DeepSeek native models via model_aliases or built-in heuristics, supporting both generateContent and streamGenerateContent call patterns with full Tool Calling support (functionDeclarations → functionCall output).
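The lookup-then-heuristic step can be sketched as follows. This is an illustrative assumption, not DS2API's actual code: only explicit `model_aliases` entries are documented behavior, and the "pro"-name fallback below is a guess at what a built-in heuristic could look like.

```go
package main

import (
	"fmt"
	"strings"
)

// resolveGeminiModel is a hypothetical sketch of the mapping step:
// explicit model_aliases entries win, and an assumed heuristic
// ("pro"-class names map to the expert lane) covers the rest.
func resolveGeminiModel(name string, aliases map[string]string) string {
	if target, ok := aliases[name]; ok {
		return target // explicit alias from config wins
	}
	if strings.Contains(name, "pro") {
		return "deepseek-v4-pro" // assumed heuristic, not the real rule
	}
	return "deepseek-v4-flash" // assumed default lane
}

func main() {
	aliases := map[string]string{"gemini-2.5-pro": "deepseek-v4-pro"}
	fmt.Println(resolveGeminiModel("gemini-2.5-pro", aliases))  // deepseek-v4-pro
	fmt.Println(resolveGeminiModel("gemini-2.5-flash", aliases)) // deepseek-v4-flash
}
```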
Recommended order when choosing a deployment method:
- Download and run release binaries: the easiest path for most users because the artifacts are already built.
- Docker / GHCR image deployment: suitable for containerized, orchestrated, or cloud environments.
- Vercel deployment: suitable if you already use Vercel and accept its platform constraints.
- Run from source / build locally: suitable for development, debugging, or when you need to modify the code yourself.
Use config.json as the single source of truth (recommended):
```
cp config.example.json config.json
# Edit config.json
```

Recommended per deployment mode:
- Local run: read `config.json` directly
- Docker / Vercel: generate Base64 from `config.json` and inject it as `DS2API_CONFIG_JSON`, or paste raw JSON directly
The WebUI admin panel’s “Full configuration template” is loaded from the same config.example.json, so updating that file keeps the frontend template in sync.
GitHub Actions automatically builds multi-platform archives on each Release:
```
# After downloading the archive for your platform
tar -xzf ds2api_<tag>_linux_amd64.tar.gz
cd ds2api_<tag>_linux_amd64
cp config.example.json config.json
# Edit config.json
./ds2api
```

```
# Pull prebuilt image
docker pull ghcr.io/cjackhwang/ds2api:latest
# Or run a pinned version
# docker pull ghcr.io/cjackhwang/ds2api:v3.0.0

# Prepare env file and config file
cp .env.example .env
cp config.example.json config.json

# Start with compose
docker-compose up -d
```

The default `docker-compose.yml` uses `ghcr.io/cjackhwang/ds2api:latest` and maps host port 6011 to container port 5001. If you want 5001 exposed directly, set `DS2API_HOST_PORT=5001` (or adjust the ports mapping).
It also mounts ./config.json to /data/config.json and sets DS2API_CONFIG_PATH=/data/config.json by default, which avoids runtime token persistence failures caused by read-only /app.
Rebuild after updates: docker-compose up -d --build
- Click the “Deploy on Zeabur” button above to deploy.
- After deployment, open `/admin` and log in with the `DS2API_ADMIN_KEY` shown in the Zeabur env/template instructions.
- Import / edit config in the Admin UI (it will be written and persisted to `/data/config.json`).
Note: when Zeabur builds directly from the repo Dockerfile, you do not need to pass BUILD_VERSION. The image prefers that build arg when provided, and automatically falls back to the repo-root VERSION file when it is absent.
- Fork this repo to your GitHub account
- Import the project on Vercel
- Set environment variables (minimum: `DS2API_ADMIN_KEY`; recommended to also set `DS2API_CONFIG_JSON`)
- Deploy
Recommended first step in repo root:
```
cp config.example.json config.json
# Edit config.json
```

Recommended: convert `config.json` to Base64 locally, then paste it into `DS2API_CONFIG_JSON` to avoid JSON formatting mistakes:

```
base64 < config.json | tr -d '\n'
```

Streaming note: `/v1/chat/completions` on Vercel is routed to `api/chat-stream.js` (Node Runtime) for real-time SSE. Auth, account selection, and session/PoW preparation are still handled by the Go internal prepare endpoint; streaming output (including `tools`) is assembled on Node with Go-aligned anti-leak handling. This is the only interface family currently routed through Node, and its CORS allow behavior is kept aligned with the Go router so third-party preflight handling stays unified.
For detailed deployment instructions, see the Deployment Guide.
Prerequisites: Go 1.26+, Node.js 20.19+ or 22.12+ (only if building WebUI locally)
```
# 1. Clone
git clone https://github.com/CJackHwang/ds2api.git
cd ds2api

# 2. Configure
cp config.example.json config.json
# Edit config.json with your DeepSeek account info and API keys

# 3. Start
go run ./cmd/ds2api
```

Default local URL: http://127.0.0.1:5001
The server actually binds to 0.0.0.0:5001, so devices on the same LAN can usually reach it through your private IP as well.
WebUI auto-build: on first local startup, if `static/admin` is missing, DS2API auto-runs `npm ci` (only when dependencies are missing) and `npm run build -- --outDir static/admin --emptyOutDir` (requires Node.js). You can also build manually: `./scripts/build-webui.sh`
README keeps only the onboarding path. Use config.example.json as the field template, and see the deployment guide plus API configuration notes for full details.
Common fields:
- `keys` / `api_keys`: client API keys; `api_keys` adds `name` and `remark` metadata while `keys` remains compatible.
- `accounts`: managed DeepSeek accounts, supporting `email` or `mobile` login plus proxy/name/remark metadata.
- `model_aliases`: one shared alias map for OpenAI / Claude / Gemini model names.
- `runtime`: account concurrency, queueing, and token refresh behavior, hot-reloadable via Admin Settings.
- `auto_delete.mode`: remote session cleanup after each request, supporting `none` / `single` / `all`.
- `history_split`: legacy multi-turn history split field, now ignored and kept only for backward-compatible config loading.
- `current_input_file`: the only active split mode; enabled by default, it uploads the full context as a `history.txt` context file once the character threshold is reached.
- If you turn off `current_input_file`, requests pass through directly without uploading any split context file.
For the full environment variable list, see docs/DEPLOY.en.md. For auth behavior, see API.en.md.
For business endpoints (/v1/*, /anthropic/*, Gemini routes), DS2API supports two modes:
| Mode | Description |
|---|---|
| Managed account | Use a key from config.keys via Authorization: Bearer ... or x-api-key; DS2API auto-selects an account |
| Direct token | If the token is not in config.keys, DS2API treats it as a DeepSeek token directly |
Optional header `X-Ds2-Target-Account`: pin a specific managed account (the value is an email or mobile number).
Gemini routes also accept `x-goog-api-key`, or `?key=` / `?api_key=` when no auth header is present.
```
Per-account inflight    = DS2API_ACCOUNT_MAX_INFLIGHT (default 2)
Recommended concurrency = account_count × per_account_inflight
Queue limit             = DS2API_ACCOUNT_MAX_QUEUE (default = recommended concurrency)
429 threshold           = inflight + queue ≈ account_count × 4
```
- When inflight slots are full, requests enter a waiting queue — no immediate 429
- 429 is returned only when total load exceeds inflight + queue capacity
- `GET /admin/queue/status` returns real-time concurrency state
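The capacity arithmetic above can be checked with a small calculation using the documented defaults (per-account inflight 2, queue limit equal to the recommended concurrency); the `capacity` helper is ours, for illustration only:

```go
package main

import "fmt"

// capacity reproduces the concurrency formulas from the docs:
// recommended = accounts × per-account inflight, queue defaults to the
// same value, and 429s start past inflight + queue (≈ accounts × 4).
func capacity(accountCount, perAccountInflight int) (recommended, queueLimit, threshold429 int) {
	recommended = accountCount * perAccountInflight
	queueLimit = recommended // default DS2API_ACCOUNT_MAX_QUEUE
	threshold429 = recommended + queueLimit
	return
}

func main() {
	rec, queue, limit := capacity(3, 2)
	fmt.Printf("recommended=%d queue=%d 429-threshold=%d\n", rec, queue, limit)
	// With 3 accounts: recommended=6 queue=6 429-threshold=12 (= 3 × 4)
}
```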
When tools is present in the request, DS2API performs anti-leak handling:
- Toolcall feature matching is enabled only in non-code-block context (fenced examples are ignored)
- The parser treats the DSML shell as the recommended executable tool-calling syntax: `<|DSML|tool_calls>` → `<|DSML|invoke name="...">` → `<|DSML|parameter name="...">`; it also accepts legacy canonical XML `<tool_calls>` → `<invoke name="...">` → `<parameter name="...">`. DSML is a shell alias and internal parsing remains XML-based; legacy `<tools>`/`<tool_call>`/`<tool_name>`/`<param>`, `<function_call>`, `tool_use`, antml variants, and standalone JSON `tool_calls` payloads are treated as plain text
- Streaming strictly uses official item lifecycle events (`response.output_item.*`, `response.content_part.*`, `response.function_call_arguments.*`)
- `responses` supports and enforces `tool_choice` (auto / none / required / forced function); `required` violations return `422` for non-stream and `response.failed` for stream
- The output protocol follows the client request (OpenAI / Claude / Gemini native shapes); model-side prompting can prefer XML, and the compatibility layer handles the protocol-specific translation
Note: the current parser still prioritizes “parse successfully whenever possible”; hard allow-list rejection for undeclared tool names is not enabled yet.
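The "non-code-block feature match" idea can be illustrated with a minimal sketch (not the actual DS2API sieve, which is considerably more involved): the tool-call marker only counts when it appears outside fenced code blocks, so fenced examples in model output stay plain text.

```go
package main

import (
	"fmt"
	"strings"
)

// containsToolCallMarker is an illustrative sketch of the anti-leak rule:
// the DSML tool-call marker is recognized only outside fenced code blocks.
func containsToolCallMarker(text string) bool {
	inFence := false
	for _, line := range strings.Split(text, "\n") {
		if strings.HasPrefix(strings.TrimSpace(line), "```") {
			inFence = !inFence // entering or leaving a fenced block
			continue
		}
		if !inFence && strings.Contains(line, "<|DSML|tool_calls>") {
			return true
		}
	}
	return false
}

func main() {
	fenced := "Here is an example:\n```\n<|DSML|tool_calls>\n```\n"
	live := "<|DSML|tool_calls><|DSML|invoke name=\"get_weather\">"
	fmt.Println(containsToolCallMarker(fenced)) // false: fenced example ignored
	fmt.Println(containsToolCallMarker(live))   // true: executable tool call
}
```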
This is for debugging issues such as Responses reasoning streaming and tool-call handoff. When enabled, DS2API stores the latest N DeepSeek conversation payload pairs (request body + upstream response body), defaulting to 20 entries with auto-eviction; each response body is capped at 5 MB by default.
Enable example:
```
DS2API_DEV_PACKET_CAPTURE=true \
DS2API_DEV_PACKET_CAPTURE_LIMIT=20 \
go run ./cmd/ds2api
```

Inspect/clear (Admin JWT required):
- `GET /admin/dev/captures`: list captured items (newest first)
- `DELETE /admin/dev/captures`: clear captured items
- `GET /admin/dev/raw-samples/query?q=keyword&limit=20`: search current in-memory captures by prompt keyword and group `completion + continue` by `chat_session_id`
- `POST /admin/dev/raw-samples/save`: persist a selected capture chain as `tests/raw_stream_samples/<sample-id>/`
Response fields include:
- `request_body`: full payload sent to DeepSeek
- `response_body`: concatenated raw upstream stream body text
- `response_truncated`: whether body-size truncation happened
The save endpoint can target a chain by query, chain_key, or capture_id. Example:
```
{"query":"Guangzhou weather","sample_id":"gz-weather-from-memory"}
```

| Document | Description |
|---|---|
| API.md / API.en.md | API reference with request/response examples |
| DEPLOY.md / DEPLOY.en.md | Deployment guide (local/Docker/Vercel/systemd) |
| CONTRIBUTING.md / CONTRIBUTING.en.md | Contributing guide |
| TESTING.md | Testsuite guide |
For the full testing guide, see docs/TESTING.md.
Quick commands:
```
# Local PR gates
./scripts/lint.sh
./tests/scripts/check-refactor-line-gate.sh
./tests/scripts/run-unit-all.sh
npm run build --prefix webui

# Live end-to-end tests (real accounts, full request/response logs)
./tests/scripts/run-live.sh
```

Workflow: `.github/workflows/release-artifacts.yml`
- Trigger: only on GitHub Release `published` (normal pushes do not trigger builds)
- Outputs: multi-platform archives (`linux/amd64`, `linux/arm64`, `linux/armv7`, `darwin/amd64`, `darwin/arm64`, `windows/amd64`, `windows/arm64`) + `sha256sums.txt`
- Container publishing: GHCR only (`ghcr.io/cjackhwang/ds2api`)
- Each archive includes: the `ds2api` executable, `static/admin`, the WASM file (with embedded fallback support), a `config.example.json`-based config template, README, LICENSE
This project is built through reverse engineering and is provided for learning, research, personal experimentation, and internal validation only. No commercial authorization is granted, and no warranty of stability, fitness, or results is provided. The author and repository maintainers are not responsible for any direct or indirect loss, account suspension, data loss, legal risk, or third-party claims arising from use, modification, distribution, deployment, or reliance on this project.
Do not use this project in ways that violate service terms, agreements, laws, or platform rules. Before any commercial use, review the LICENSE, the relevant terms, and confirm that you have the author's written permission.