diff --git a/docs/customize/model-providers/more/ofoxai.mdx b/docs/customize/model-providers/more/ofoxai.mdx
new file mode 100644
index 00000000000..dc8a887a6fa
--- /dev/null
+++ b/docs/customize/model-providers/more/ofoxai.mdx
@@ -0,0 +1,207 @@
+---
+title: "How to Configure OfoxAI with Continue"
+sidebarTitle: "OfoxAI"
+---
+
+
+ [OfoxAI](https://ofox.ai/zh) is a unified LLM API gateway that provides access to 100+ models from OpenAI, Anthropic, Google, Meta, DeepSeek, Mistral, Qwen, and others through a single API key. It is fully compatible with the OpenAI, Anthropic, and Gemini wire protocols, so you can switch providers by updating only the base URL and API key; no code changes are required.
+
+
+
+ Get an API key from the [OfoxAI Console](https://app.ofox.ai/auth). Full documentation lives at [docs.ofox.ai](https://docs.ofox.ai).
+
+
+## Base URLs
+
+OfoxAI exposes three protocol-compatible endpoints. Pick the one that matches the model family you want to call:
+
+| Protocol | Base URL | Use For |
+| :-------- | :-------------------------------- | :----------------------------------------- |
+| OpenAI | `https://api.ofox.ai/v1` | GPT, DeepSeek, Qwen, Llama, and most others |
+| Anthropic | `https://api.ofox.ai/anthropic` | Claude family |
+| Gemini | `https://api.ofox.ai/gemini` | Gemini family |
+
+All three endpoints accept the same OfoxAI API key. Requests from mainland China are routed through an optimized Hong Kong path by default.
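+
+As a quick sanity check, you can list the model catalog through the OpenAI-compatible endpoint (this assumes the gateway implements the standard `GET /v1/models` route; verify against [docs.ofox.ai](https://docs.ofox.ai)):
+
+```bash
+# Replace $OFOXAI_API_KEY with your key from the OfoxAI Console
+curl https://api.ofox.ai/v1/models \
+  -H "Authorization: Bearer $OFOXAI_API_KEY"
+```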
+
+## Configuration
+
+Because OfoxAI speaks the OpenAI / Anthropic / Gemini wire protocols natively, you can configure it in Continue using the matching built-in provider and just override `apiBase`.
+
+### OpenAI-compatible models (recommended default)
+
+
+
+ ```yaml title="config.yaml"
+ name: My Config
+ version: 0.0.1
+ schema: v1
+
+ models:
+ - name: GPT-4o (OfoxAI)
+ provider: openai
+ model: gpt-4o
+ apiBase: https://api.ofox.ai/v1
+ apiKey: ${{ secrets.OFOXAI_API_KEY }}
+ roles:
+ - chat
+ - edit
+ - apply
+ ```
+
+
+ ```json title="config.json"
+ {
+ "models": [
+ {
+ "title": "GPT-4o (OfoxAI)",
+ "provider": "openai",
+ "model": "gpt-4o",
+ "apiBase": "https://api.ofox.ai/v1",
+      "apiKey": "YOUR_OFOXAI_API_KEY"
+ }
+ ]
+ }
+ ```
+
+
+
+### Claude via the Anthropic protocol
+
+
+
+ ```yaml title="config.yaml"
+ models:
+ - name: Claude Sonnet (OfoxAI)
+ provider: anthropic
+ model: claude-sonnet-4-20250514
+ apiBase: https://api.ofox.ai/anthropic
+ apiKey: ${{ secrets.OFOXAI_API_KEY }}
+ ```
+
+
+ ```json title="config.json"
+ {
+ "models": [
+ {
+ "title": "Claude Sonnet (OfoxAI)",
+ "provider": "anthropic",
+ "model": "claude-sonnet-4-20250514",
+ "apiBase": "https://api.ofox.ai/anthropic",
+      "apiKey": "YOUR_OFOXAI_API_KEY"
+ }
+ ]
+ }
+ ```
+
+
+
+### Gemini via the Gemini protocol
+
+
+
+ ```yaml title="config.yaml"
+ models:
+ - name: Gemini 2.5 Pro (OfoxAI)
+ provider: gemini
+ model: gemini-2.5-pro
+ apiBase: https://api.ofox.ai/gemini
+ apiKey: ${{ secrets.OFOXAI_API_KEY }}
+ ```
+
+
+ ```json title="config.json"
+ {
+ "models": [
+ {
+ "title": "Gemini 2.5 Pro (OfoxAI)",
+ "provider": "gemini",
+ "model": "gemini-2.5-pro",
+ "apiBase": "https://api.ofox.ai/gemini",
+      "apiKey": "YOUR_OFOXAI_API_KEY"
+ }
+ ]
+ }
+ ```
+
+
+
+## Mixing Models for Different Roles
+
+Because every OfoxAI request uses the same key, you can wire different upstream models to different Continue roles:
+
+```yaml title="config.yaml"
+models:
+ # Chat & Edit — high-quality model
+ - name: Claude Sonnet (OfoxAI)
+ provider: anthropic
+ model: claude-sonnet-4-20250514
+ apiBase: https://api.ofox.ai/anthropic
+ apiKey: ${{ secrets.OFOXAI_API_KEY }}
+ roles:
+ - chat
+ - edit
+ - apply
+
+ # Autocomplete — fast & cheap
+ - name: DeepSeek Coder (OfoxAI)
+ provider: openai
+ model: deepseek-coder
+ apiBase: https://api.ofox.ai/v1
+ apiKey: ${{ secrets.OFOXAI_API_KEY }}
+ roles:
+ - autocomplete
+
+ # Embeddings
+ - name: text-embedding-3-large (OfoxAI)
+ provider: openai
+ model: text-embedding-3-large
+ apiBase: https://api.ofox.ai/v1
+ apiKey: ${{ secrets.OFOXAI_API_KEY }}
+ roles:
+ - embed
+```
+
+## Tool Use / Agent Mode
+
+OfoxAI passes through native function calling for models that support it (GPT-4o, Claude, Gemini, Qwen, etc.). If a model's tool support isn't auto-detected by Continue, set it explicitly:
+
+
+
+ ```yaml title="config.yaml"
+ models:
+ - name: GPT-4o (OfoxAI)
+ provider: openai
+ model: gpt-4o
+ apiBase: https://api.ofox.ai/v1
+ apiKey: ${{ secrets.OFOXAI_API_KEY }}
+ capabilities:
+ - tool_use
+ ```
+
+
+ ```json title="config.json"
+ {
+ "models": [
+ {
+ "title": "GPT-4o (OfoxAI)",
+ "provider": "openai",
+ "model": "gpt-4o",
+ "apiBase": "https://api.ofox.ai/v1",
+      "apiKey": "YOUR_OFOXAI_API_KEY",
+ "capabilities": {
+ "tools": true
+ }
+ }
+ ]
+ }
+ ```
+
+
+
+## Pricing
+
+OfoxAI offers a free tier for evaluation, with pay-per-token billing thereafter. Tokens are charged at the upstream provider's published rate plus a small gateway fee. Detailed pricing and the live model catalog are available at [docs.ofox.ai](https://docs.ofox.ai).
+
+
+ Because Continue talks to OfoxAI through the standard OpenAI / Anthropic / Gemini providers, all of Continue's existing features — streaming, tool calls, prompt caching where supported — work out of the box.
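+
+To verify the endpoint outside Continue, the OpenAI-compatible base URL should also work with the standard `openai` Python SDK. This is a minimal sketch (assuming `openai` >= 1.0; the model name and key are placeholders and it is not tested against the live gateway):
+
+```python
+from openai import OpenAI
+
+# Point the stock OpenAI client at the OfoxAI gateway
+client = OpenAI(
+    base_url="https://api.ofox.ai/v1",
+    api_key="YOUR_OFOXAI_API_KEY",
+)
+
+response = client.chat.completions.create(
+    model="gpt-4o",
+    messages=[{"role": "user", "content": "Say hello"}],
+)
+print(response.choices[0].message.content)
+```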
+
diff --git a/docs/customize/model-providers/overview.mdx b/docs/customize/model-providers/overview.mdx
index 7ba030dcb4d..415d6c71830 100644
--- a/docs/customize/model-providers/overview.mdx
+++ b/docs/customize/model-providers/overview.mdx
@@ -34,6 +34,7 @@ Beyond the top-level providers, Continue supports many other options:
| [Together AI](/customize/model-providers/more/together) | Platform for running a variety of open models |
| [DeepInfra](/customize/model-providers/more/deepinfra) | Hosting for various open source models |
| [OpenRouter](/customize/model-providers/top-level/openrouter) | Gateway to multiple model providers |
+| [OfoxAI](/customize/model-providers/more/ofoxai) | Unified LLM API gateway with OpenAI / Anthropic / Gemini protocol support |
| [ClawRouter](/customize/model-providers/more/clawrouter) | Open-source LLM router with automatic cost-optimized model selection |
| [Tetrate Agent Router Service](/customize/model-providers/top-level/tetrate_agent_router_service) | Gateway with intelligent routing across multiple model providers |
| [Cohere](/customize/model-providers/more/cohere) | Models specialized for semantic search and text generation |
diff --git a/docs/docs.json b/docs/docs.json
index 8556401ffc6..6f8500ec8a6 100644
--- a/docs/docs.json
+++ b/docs/docs.json
@@ -180,6 +180,7 @@
"customize/model-providers/more/moonshot",
"customize/model-providers/more/nous",
"customize/model-providers/more/nvidia",
+ "customize/model-providers/more/ofoxai",
"customize/model-providers/more/tensorix",
"customize/model-providers/more/together",
"customize/model-providers/more/xAI",