18 changes: 18 additions & 0 deletions content/manuals/ai/model-runner/ide-integrations.md
@@ -258,6 +258,24 @@ print(response.text)

You can find more details in [this Docker Blog post](https://www.docker.com/blog/opencode-docker-model-runner-private-ai-coding/).

## Claude Code

[Claude Code](https://claude.com/product/claude-code) is [Anthropic's](https://www.anthropic.com/) command-line tool for agentic coding. It lives in your terminal, understands your codebase, executes routine tasks, explains complex code, and handles Git workflows through natural language commands.
> **Reviewer comment (Contributor):** Since every other tool on this page uses the OpenAI-compatible endpoint, it might be worth a one-liner noting that DMR also exposes an Anthropic-compatible API, which is why this integration works differently. Otherwise readers might wonder why this section doesn't use `/engines/v1` like everything else.
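To make the endpoint difference concrete, here is a small sketch of the two base URLs involved. The `/engines/v1` path is the OpenAI-compatible endpoint the other integrations on this page use; the bare host and port is what Claude Code targets via `ANTHROPIC_BASE_URL`. The port `12434` comes from the examples below; treat the exact paths as illustrative.

```python
# Illustrative sketch of the two API surfaces a DMR host can serve.
DMR_HOST = "http://localhost:12434"

# OpenAI-compatible endpoint, used by the other tools on this page:
openai_base_url = f"{DMR_HOST}/engines/v1"

# Anthropic-compatible endpoint, used by Claude Code via ANTHROPIC_BASE_URL:
anthropic_base_url = DMR_HOST

print(openai_base_url)     # http://localhost:12434/engines/v1
print(anthropic_base_url)  # http://localhost:12434
```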


### Configuration

1. Install Claude Code (see the [installation docs](https://code.claude.com/docs/en/quickstart#step-1-install-claude-code)).
2. Use the `ANTHROPIC_BASE_URL` environment variable to point Claude Code at DMR. On macOS or Linux, for example, to use the `gpt-oss:32k` model:
> **Reviewer comment (Contributor):** It might be worth adding a tip about making the env var persistent so users don't have to set it every time:
>
> > **Tip:** To avoid setting the variable each time, add it to your shell profile (`~/.bashrc`, `~/.zshrc`, or equivalent): `export ANTHROPIC_BASE_URL=http://localhost:12434`

```bash
ANTHROPIC_BASE_URL=http://localhost:12434 claude --model gpt-oss:32k
```

> **Reviewer comment (Contributor):** The `gpt-oss:32k` example is great, but readers won't know what that is since it's a locally repackaged model. Could we add a brief note explaining that `ai/gpt-oss` defaults to only 4,096 tokens of context (not great for coding), and show how to repackage it? Something like:
>
> > **Tip:** The default context size for gpt-oss is 4,096 tokens, which is limiting for coding tasks. You can repackage it with a larger context window:
> >
> > `$ docker model pull gpt-oss`
> > `$ docker model package --from ai/gpt-oss --context-size 32000 gpt-oss:32k`
> >
> > Alternatively, models like `ai/glm-4.7-flash`, `ai/qwen3-coder`, and `ai/devstral-small-2` come with 128K context by default and work without repackaging.
On Windows (PowerShell), you can do it like this:
```powershell
$env:ANTHROPIC_BASE_URL="http://localhost:12434"
claude --model gpt-oss:32k
```
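To sanity-check the setup without Claude Code in the loop, you can hit DMR's Anthropic-compatible API directly. Below is a minimal Python sketch that builds a request body in the Anthropic Messages API shape; the `/v1/messages` path and response handling are assumptions carried over from Anthropic's own API rather than DMR-specific documentation, so treat this as illustrative:

```python
import json
import urllib.request

BASE_URL = "http://localhost:12434"  # same value as ANTHROPIC_BASE_URL above


def build_messages_payload(model: str, prompt: str, max_tokens: int = 256) -> dict:
    """Build a request body in the Anthropic Messages API shape (assumed)."""
    return {
        "model": model,
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    }


def send(payload: dict) -> dict:
    """POST to the (assumed) Anthropic-compatible /v1/messages endpoint.

    Only call this while Docker Model Runner is actually running.
    """
    req = urllib.request.Request(
        f"{BASE_URL}/v1/messages",
        data=json.dumps(payload).encode(),
        headers={"content-type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


payload = build_messages_payload("gpt-oss:32k", "Say hello in one word.")
print(payload["messages"][0]["role"])  # user
```

With DMR running, `send(payload)` should return a JSON response whose `content` field holds the model's reply, per the Anthropic Messages format.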
You can find more details in [this Docker Blog post](https://www.docker.com/blog/run-claude-code-locally-docker-model-runner/).

## Common issues

### "Connection refused" errors