Use native Anthropic API for documentation agents when provider=anthropic #53
pruv wants to merge 1 commit into FSoft-AI4Code:main from
Conversation
Overall this is a focused, correct fix for the Anthropic agent path. I'm flagging two issues inline that I think should be addressed (or at least explicitly accepted) before merging: inconsistent provider normalization across the file, and a regression in the default fallback model when provider=anthropic.
```diff
@@ -106,8 +164,11 @@ def _create_litellm_openai_client(config: Config) -> OpenAI:
 )
```
Provider comparison is inconsistent with the rest of this file. The new branches use a tolerant check, `(config.provider or "").strip().lower() == "anthropic"`, but the existing checks elsewhere in the same module use raw equality:

- `_create_litellm_openai_client`, line 95: `if config.provider == "bedrock":`
- `call_llm`, line 177: `if provider == "azure-openai":`
- `_call_llm_via_litellm`, lines 216/220: `if config.provider == "bedrock":` / `elif config.provider == "anthropic":`
This means a config with provider="Anthropic" (or with stray whitespace) will route the agents through the native Anthropic path here, but call_llm will silently fall through to the OpenAI-compatible default — a confusing split. Please normalize once (either pick one rule and apply it everywhere in this file, or normalize at Config construction time so all comparisons see the canonical value).
```diff
-def create_fallback_model(config: Config) -> CompatibleOpenAIModel:
+def create_fallback_model(config: Config) -> Any:
```
Behavior change worth calling out: with this branch, the fallback model is also routed through the native Anthropic API. The default fallback in `codewiki/src/config.py` is `FALLBACK_MODEL_1 = os.getenv('FALLBACK_MODEL_1', 'glm-4p5')`, a non-Anthropic model. Any user who sets `provider=anthropic` without explicitly overriding the fallback will now hit a hard Anthropic 4xx for `glm-4p5` whenever the main model fails over.
Before this PR the fallback went through the OpenAI-compatible/LiteLLM client and could broker to `glm-4p5`, so this is a regression for that configuration. Options: (a) validate at Config construction that the fallback model is Anthropic-compatible when `provider=anthropic`, (b) pick an Anthropic-native default fallback when `provider=anthropic`, or (c) document the constraint prominently. At minimum I'd flag this in the PR description so users updating to this build know to reset `FALLBACK_MODEL_1`.
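Option (a) could be as small as the sketch below (the helper name is hypothetical, and the `claude-` prefix test is an assumption about how Anthropic model IDs are spelled, not logic taken from the PR):

```python
def check_fallback_compatible(provider: str, fallback_model: str) -> None:
    """Fail fast at Config construction instead of with an Anthropic 4xx
    at failover time."""
    provider = (provider or "").strip().lower()
    # Assumption: Anthropic-native model IDs start with "claude-".
    if provider == "anthropic" and not fallback_model.startswith("claude-"):
        raise ValueError(
            f"provider=anthropic but FALLBACK_MODEL_1={fallback_model!r} is not "
            "an Anthropic model; override FALLBACK_MODEL_1 with a claude-* ID"
        )
```

Non-Anthropic providers are untouched, so existing `glm-4p5` setups keep working.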
Summary
Routes pydantic-ai documentation agents through `AnthropicModel` + `AnthropicProvider` (Anthropic Messages API) when `config.provider` is `anthropic`, instead of always using `CompatibleOpenAIModel` + `OpenAIProvider`. Declares the `pydantic-ai[anthropic]` extra so the Anthropic SDK is installed with the package.
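The routing can be sketched roughly as follows (the function name and the placeholder return value are hypothetical; `AnthropicModel` and `AnthropicProvider` are the real pydantic-ai classes named above, imported lazily here so the default path still runs without the anthropic extra):

```python
def create_agent_model(provider: str, model_name: str, api_key: str):
    """Route documentation agents to the native Anthropic Messages API when
    provider=anthropic; otherwise keep the OpenAI-compatible path."""
    if (provider or "").strip().lower() == "anthropic":
        # Requires the pydantic-ai[anthropic] extra declared by this PR.
        from pydantic_ai.models.anthropic import AnthropicModel
        from pydantic_ai.providers.anthropic import AnthropicProvider
        return AnthropicModel(model_name, provider=AnthropicProvider(api_key=api_key))
    # Placeholder for the existing CompatibleOpenAIModel + OpenAIProvider wiring.
    return ("openai-compatible", model_name)
```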
Motivation
With provider: anthropic and Anthropic model IDs, the previous stack still used the OpenAI-compatible client, which led to 404s and confusion when base_url pointed at Anthropic or when model names were valid for Anthropic but not for a chat.completions endpoint. Agents should use the same native Anthropic path that matches user configuration.
Changes