# Help Wanted: Community Provider Testing & Contributions
English
Claudex supports routing Claude Code to multiple AI providers through its translation proxy. However, we don't have API keys for every platform and can't test all providers ourselves.
## What We Need
We're looking for community members who can help with:
- Testing existing providers — Verify that profiles in `config.example.toml` work correctly with real API keys
- Adding new providers — Contribute profile configs and translation fixes for untested platforms
- OAuth subscription testing — Test `claudex auth login` with different subscription plans (ChatGPT Plus/Pro, Gemini Pro, GitHub Copilot, etc.)
- Fixing translation edge cases — Some providers have slight API differences that need special handling in `src/proxy/translation.rs`
## Currently Tested (v0.2.0)
| Provider | Route | Status | Notes |
| --- | --- | --- | --- |
| OpenRouter → Claude | `openrouter-claude` | ✅ Working | Correctly identifies as Claude by Anthropic |
| OpenRouter → GPT | `openrouter-gpt` | ✅ Working | Responds via Claude Code system prompt |
| OpenRouter → Gemini | `openrouter-gemini` | ✅ Working | Correctly identifies as Gemini by Google |
| OpenRouter → DeepSeek | `openrouter-deepseek` | ✅ Working | Correctly identifies as DeepSeek-V3 |
| OpenRouter → Grok | `openrouter-grok` | ✅ Working | Correctly identifies as Grok by xAI |
| OpenRouter → Qwen | `openrouter-qwen` | ✅ Working | Correctly identifies as Qwen3 by Alibaba |
| OpenRouter → Llama | `openrouter-llama` | ✅ Working | Correctly identifies model as llama-4-maverick |
| MiniMax (Claude proxy) | `minimax` | ✅ Working | DirectAnthropic passthrough |
| OpenAI (direct API) | `openai` | ✅ Working | Requires `max_tokens = 16384` in profile config |
| Mistral (direct API) | `mistral` | ✅ Working | `mistral-large-latest` via api.mistral.ai |
| Cohere (direct API) | `cohere` | ✅ Working | `command-a-03-2025` via compatibility API, requires `max_tokens = 8192` |
| Z.AI / GLM (direct API) | `glm` | ✅ Working | GLM-4.6 via api.z.ai |
| Kimi / Moonshot (direct API) | `kimi` | ✅ Working | Kimi-K2 via api.moonshot.ai |
| Ollama (local) | `ollama` | ✅ Working | Tested with gpt-oss:20b, localhost:11434 |
| OpenAI (Codex CLI OAuth) | `codex-sub` | ✅ Working | OAuth subscription, auto token refresh |
| Anthropic (direct API) | `anthropic` | ❌ Key Issue | Organization disabled (not a Claudex bug) |
## Needs Testing / Contributions
Ranked by developer adoption (based on real-world usage data).
High Priority — Most requested by developers, large user base:
| Provider | Type | Base URL | Notes |
| --- | --- | --- | --- |
| Groq | OpenAICompatible | `https://api.groq.com/openai/v1` | Ultra-fast inference, free tier available |
| Google AI Studio / Gemini | OpenAICompatible | `https://generativelanguage.googleapis.com/v1beta/openai` | Direct Gemini API (not via OpenRouter) |
| Azure OpenAI | OpenAICompatible | `https://{resource}.openai.azure.com/openai/deployments/{model}/v1` | Enterprise deployments |
| Perplexity AI | OpenAICompatible | `https://api.perplexity.ai` | Search-augmented generation |
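As a concrete starting point for one of these, a hypothetical Groq profile following the template at the bottom of this page might look like the following. The model name is a placeholder, not a tested value — verify it against Groq's current model list before use:

```toml
[[profiles]]
name = "groq"
provider_type = "OpenAICompatible"
base_url = "https://api.groq.com/openai/v1"
api_key = "your-groq-api-key"
default_model = "llama-3.3-70b-versatile"  # placeholder; check Groq's model list
priority = 70
enabled = true
```

If a provider needs a lower `max_tokens` cap or custom headers, note that in your test report so it can be documented alongside the profile.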
Medium Priority — Growing adoption, active developer communities:
| Provider | Type | Base URL | Notes |
| --- | --- | --- | --- |
| Cerebras | OpenAICompatible | `https://api.cerebras.ai/v1` | Ultra-fast inference hardware |
| Together AI | OpenAICompatible | `https://api.together.xyz/v1` | Open model hosting, fine-tuning |
| Fireworks AI | OpenAICompatible | `https://api.fireworks.ai/inference/v1` | Fast open model inference |
| GitHub Models | OpenAICompatible | `https://models.inference.ai.azure.com` | Free tier with GitHub account |
Low Priority — Niche or region-specific:
| Provider | Type | Base URL | Notes |
| --- | --- | --- | --- |
| Amazon Bedrock | OpenAICompatible | Requires SDK adapter | Enterprise AWS integration |
| Cloudflare Workers AI | OpenAICompatible | `https://api.cloudflare.com/client/v4/accounts/{id}/ai/v1` | Edge inference |
| Nvidia NIM | OpenAICompatible | `https://integrate.api.nvidia.com/v1` | GPU-optimized inference |
| Yi / 零一万物 | OpenAICompatible | `https://api.lingyiwanwu.com/v1` | Yi-Lightning |
| Baichuan / 百川 | OpenAICompatible | `https://api.baichuan-ai.com/v1` | Chinese LLM |
| Volcengine / 豆包 | OpenAICompatible | `https://ark.cn-beijing.volces.com/api/v3` | ByteDance Doubao |
| SiliconFlow | OpenAICompatible | `https://api.siliconflow.cn/v1` | Chinese model aggregator |
## Known Issues & Tips
- OpenAI direct API: GPT-4o only supports `max_tokens = 16384`. Add `max_tokens = 16384` to the profile config (v0.1.2+).
- Cohere: `command-a-03-2025` only supports `max_tokens = 8192`. Use `base_url = "https://api.cohere.ai/compatibility/v1"` (the OpenAI-compatible endpoint).
- `--no-chrome`: Claudex v0.1.2+ automatically injects `--no-chrome` to avoid Chrome integration conflicts.
- OAuth token refresh: OpenAI (Codex CLI) tokens auto-refresh when expired (v0.1.2+).
- Ollama: Use `api_key = "ollama"` (any non-empty value works; Ollama doesn't validate keys).
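Provider-specific caps like the OpenAI and Cohere limits above are exactly the kind of edge case handled in `src/proxy/translation.rs`. The sketch below shows the general idea only; `ProviderQuirks` and `quirks_for` are hypothetical names, not Claudex's real API:

```rust
// Hypothetical sketch of a per-provider request tweak; not Claudex's
// actual implementation. Each provider can declare a max_tokens cap,
// and the proxy clamps outgoing requests instead of letting them fail.
struct ProviderQuirks {
    max_tokens_cap: Option<u32>,
}

fn quirks_for(profile: &str) -> ProviderQuirks {
    match profile {
        "openai" => ProviderQuirks { max_tokens_cap: Some(16_384) },
        "cohere" => ProviderQuirks { max_tokens_cap: Some(8_192) },
        _ => ProviderQuirks { max_tokens_cap: None },
    }
}

fn clamp_max_tokens(requested: u32, quirks: &ProviderQuirks) -> u32 {
    match quirks.max_tokens_cap {
        Some(cap) => requested.min(cap),
        None => requested,
    }
}

fn main() {
    // Claude Code may ask for more tokens than a provider accepts.
    let quirks = quirks_for("cohere");
    println!("{}", clamp_max_tokens(32_000, &quirks)); // prints 8192
}
```

Translation fixes of this shape (cap a field, rename a field, drop an unsupported parameter) are the most common contribution for new providers.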
## How to Contribute
- Quick test: Add a profile to your config, then run `claudex run <profile> -p "hello" --dangerously-skip-permissions --no-session-persistence --no-chrome --disable-slash-commands --tools "" --output-format text`
- Config PR: Add a working profile to `config.example.toml`
- Translation fix: Fix API format differences in `src/proxy/translation.rs`
- OAuth flow: Test `claudex auth login <provider>` and report results
## Profile Config Template
```toml
[[profiles]]
name = "provider-name"
provider_type = "OpenAICompatible"
base_url = "https://api.example.com/v1"
api_key = "your-api-key"
default_model = "model-name"
# max_tokens = 16384  # uncomment if provider has a lower limit
priority = 70
enabled = true

[profiles.models]
haiku = "small-model"
sonnet = "default-model"
opus = "large-model"

[profiles.custom_headers]

[profiles.extra_env]
```
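The `[profiles.models]` table maps Claude Code's three model tiers (haiku/sonnet/opus) to provider-specific models. A minimal sketch of how such a lookup behaves, using hypothetical names rather than Claudex's actual types:

```rust
use std::collections::HashMap;

// Illustrative sketch of the tier-to-model mapping configured in
// [profiles.models]; not Claudex's actual implementation.
fn model_map() -> HashMap<&'static str, &'static str> {
    HashMap::from([
        ("haiku", "small-model"),
        ("sonnet", "default-model"),
        ("opus", "large-model"),
    ])
}

// Fall back to the profile's default_model when a tier is unmapped.
fn resolve(map: &HashMap<&str, &str>, tier: &str, default_model: &str) -> String {
    map.get(tier)
        .map(|m| m.to_string())
        .unwrap_or_else(|| default_model.to_string())
}

fn main() {
    let map = model_map();
    println!("{}", resolve(&map, "sonnet", "model-name")); // prints default-model
    println!("{}", resolve(&map, "custom-tier", "model-name")); // prints model-name
}
```

When contributing a profile, fill in all three tiers if the provider offers models of different sizes; otherwise mapping every tier to the same model is fine.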
See CONTRIBUTING.md for development setup.
# 征集社区提供商测试与贡献
中文
Claudex 通过翻译代理支持将 Claude Code 路由到多个 AI 提供商。但是,我们没有所有平台的 API Key,无法独自测试所有提供商。
## 已测试 (v0.2.0)
| 提供商 | Profile 名 | 状态 | 备注 |
| --- | --- | --- | --- |
| OpenRouter → Claude | `openrouter-claude` | ✅ 正常 | 正确识别为 Claude / Anthropic |
| OpenRouter → GPT | `openrouter-gpt` | ✅ 正常 | 通过 Claude Code 系统提示词响应 |
| OpenRouter → Gemini | `openrouter-gemini` | ✅ 正常 | 正确识别为 Gemini / Google |
| OpenRouter → DeepSeek | `openrouter-deepseek` | ✅ 正常 | 正确识别为 DeepSeek-V3 |
| OpenRouter → Grok | `openrouter-grok` | ✅ 正常 | 正确识别为 Grok / xAI |
| OpenRouter → Qwen | `openrouter-qwen` | ✅ 正常 | 正确识别为 Qwen3 / 阿里云 |
| OpenRouter → Llama | `openrouter-llama` | ✅ 正常 | 正确识别为 llama-4-maverick |
| MiniMax (Claude 代理) | `minimax` | ✅ 正常 | DirectAnthropic 直通 |
| OpenAI (直连 API) | `openai` | ✅ 正常 | 需配置 `max_tokens = 16384` |
| Mistral (直连 API) | `mistral` | ✅ 正常 | `mistral-large-latest`,api.mistral.ai |
| Cohere (直连 API) | `cohere` | ✅ 正常 | `command-a-03-2025`,需配置 `max_tokens = 8192` |
| Z.AI / GLM (直连 API) | `glm` | ✅ 正常 | GLM-4.6,api.z.ai |
| Kimi / Moonshot (直连 API) | `kimi` | ✅ 正常 | Kimi-K2,api.moonshot.ai |
| Ollama (本地) | `ollama` | ✅ 正常 | gpt-oss:20b,localhost:11434 |
| OpenAI (Codex CLI OAuth) | `codex-sub` | ✅ 正常 | OAuth 订阅,token 自动刷新 |
| Anthropic (直连 API) | `anthropic` | ❌ Key 问题 | 组织已禁用(非 Claudex bug) |
## 需要测试(按开发者使用频率排序)
高优先级 — 开发者高频使用:
| 提供商 | 类型 | Base URL | 备注 |
| --- | --- | --- | --- |
| Groq | OpenAICompatible | `https://api.groq.com/openai/v1` | 超快推理,有免费额度 |
| Google AI Studio / Gemini | OpenAICompatible | `https://generativelanguage.googleapis.com/v1beta/openai` | Gemini 直连(非 OpenRouter) |
| Azure OpenAI | OpenAICompatible | `https://{resource}.openai.azure.com/...` | 企业部署 |
| Perplexity AI | OpenAICompatible | `https://api.perplexity.ai` | 搜索增强生成 |
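作为上手示例,下面是一个假设的 Groq profile 写法(模型名为占位符,并非实测值,请以 Groq 官方模型列表为准):

```toml
[[profiles]]
name = "groq"
provider_type = "OpenAICompatible"
base_url = "https://api.groq.com/openai/v1"
api_key = "your-groq-api-key"
default_model = "llama-3.3-70b-versatile"  # 占位符,请核对 Groq 模型列表
priority = 70
enabled = true
```

如果该提供商需要更低的 `max_tokens` 上限或自定义请求头,请在测试反馈中一并注明。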
中优先级 — 增长中的开发者社区:
| 提供商 | 类型 | Base URL | 备注 |
| --- | --- | --- | --- |
| Cerebras | OpenAICompatible | `https://api.cerebras.ai/v1` | 超快推理硬件 |
| Together AI | OpenAICompatible | `https://api.together.xyz/v1` | 开源模型托管 |
| Fireworks AI | OpenAICompatible | `https://api.fireworks.ai/inference/v1` | 快速推理 |
| GitHub Models | OpenAICompatible | `https://models.inference.ai.azure.com` | GitHub 账号免费额度 |
低优先级 — 区域或垂直场景:
| 提供商 | 类型 | Base URL | 备注 |
| --- | --- | --- | --- |
| Amazon Bedrock | OpenAICompatible | 需 SDK 适配 | AWS 企业集成 |
| Cloudflare Workers AI | OpenAICompatible | `https://api.cloudflare.com/...` | 边缘推理 |
| Nvidia NIM | OpenAICompatible | `https://integrate.api.nvidia.com/v1` | GPU 优化推理 |
| Yi / 零一万物 | OpenAICompatible | `https://api.lingyiwanwu.com/v1` | Yi-Lightning |
| Baichuan / 百川 | OpenAICompatible | `https://api.baichuan-ai.com/v1` | 中文大模型 |
| 火山引擎 / 豆包 | OpenAICompatible | `https://ark.cn-beijing.volces.com/api/v3` | 字节跳动豆包 |
| SiliconFlow | OpenAICompatible | `https://api.siliconflow.cn/v1` | 国内模型聚合 |
## 已知问题与提示
- OpenAI 直连: GPT-4o 仅支持 `max_tokens = 16384`,需在 profile 配置中设置(v0.1.2+)
- Cohere: `command-a-03-2025` 仅支持 `max_tokens = 8192`,base_url 使用 `https://api.cohere.ai/compatibility/v1`(OpenAI 兼容端点)
- `--no-chrome`: Claudex v0.1.2+ 自动注入 `--no-chrome` 避免 Chrome 集成冲突
- OAuth token 刷新: OpenAI (Codex CLI) token 过期后自动刷新(v0.1.2+)
- Ollama: `api_key` 填任意非空值即可(如 `"ollama"`),Ollama 不校验 key
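上面提到的各家 `max_tokens` 上限,正是 `src/proxy/translation.rs` 中需要特殊处理的典型场景。下面是一个思路示意,其中 `ProviderQuirks`、`quirks_for` 等名称均为假设,并非 Claudex 的真实 API:

```rust
// 按提供商调整请求的思路示意,并非 Claudex 的实际实现。
// 每个提供商可声明一个 max_tokens 上限,代理在转发前做截断,
// 避免请求直接失败。
struct ProviderQuirks {
    max_tokens_cap: Option<u32>,
}

fn quirks_for(profile: &str) -> ProviderQuirks {
    match profile {
        "openai" => ProviderQuirks { max_tokens_cap: Some(16_384) },
        "cohere" => ProviderQuirks { max_tokens_cap: Some(8_192) },
        _ => ProviderQuirks { max_tokens_cap: None },
    }
}

fn clamp_max_tokens(requested: u32, quirks: &ProviderQuirks) -> u32 {
    match quirks.max_tokens_cap {
        Some(cap) => requested.min(cap),
        None => requested,
    }
}

fn main() {
    // Claude Code 请求的 token 数可能超过提供商上限。
    let quirks = quirks_for("cohere");
    println!("{}", clamp_max_tokens(32_000, &quirks)); // 输出 8192
}
```

新提供商的翻译修复大多就是这类改动:截断字段、重命名字段、删除不支持的参数。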
## Profile 配置模板
```toml
[[profiles]]
name = "provider-name"
provider_type = "OpenAICompatible"
base_url = "https://api.example.com/v1"
api_key = "your-api-key"
default_model = "model-name"
# max_tokens = 16384  # 如果 provider 有更低的上限则取消注释
priority = 70
enabled = true

[profiles.models]
haiku = "small-model"
sonnet = "default-model"
opus = "large-model"

[profiles.custom_headers]

[profiles.extra_env]
```
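`[profiles.models]` 表将 Claude Code 的三个模型档位(haiku/sonnet/opus)映射到提供商模型。下面用假设的名称示意这种查找逻辑(并非 Claudex 的实际类型):

```rust
use std::collections::HashMap;

// [profiles.models] 档位映射的示意实现,并非 Claudex 的真实代码。
fn model_map() -> HashMap<&'static str, &'static str> {
    HashMap::from([
        ("haiku", "small-model"),
        ("sonnet", "default-model"),
        ("opus", "large-model"),
    ])
}

// 档位未映射时,回退到 profile 的 default_model。
fn resolve(map: &HashMap<&str, &str>, tier: &str, default_model: &str) -> String {
    map.get(tier)
        .map(|m| m.to_string())
        .unwrap_or_else(|| default_model.to_string())
}

fn main() {
    let map = model_map();
    println!("{}", resolve(&map, "sonnet", "model-name")); // 输出 default-model
    println!("{}", resolve(&map, "custom-tier", "model-name")); // 输出 model-name
}
```

贡献 profile 时,若提供商有不同规格的模型,建议三个档位分别映射;否则三个档位都指向同一个模型也可以。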
## 如何贡献
- 在配置中添加 profile,运行测试命令,反馈结果
- 提交 `config.example.toml` 的 PR
- 在 `src/proxy/translation.rs` 中修复翻译问题
- 测试 `claudex auth login <provider>` 并反馈
详见 CONTRIBUTING.md。