fix: fallback to config.llm.model when resolveAgentPrimaryModelRef returns undefined (#618)
Force-pushed 743525c to 9193ba6
rwmjhb left a comment
Small fix with the right intent. One assumption in the implementation needs verification before merge.
Must fix
F1 — Fallback may still produce provider=undefined for bare model names (index.ts:1161-1163)
splitProviderModel returns { model: s } (no provider key) when the input contains no / (line 702). If config.llm.model is stored as a bare name — e.g. 'MiniMax-M2.1' without a provider prefix — the fallback yields provider=undefined, model='MiniMax-M2.1'. Whether runEmbeddedPiAgent handles a defined model with an undefined provider correctly is not visible from this repo.
Please confirm: does config.llm.model in dc-channel plugin config always use 'provider/model' format (e.g. 'openclaw/MiniMax-M2.1')? If yes, the fix is correct as written. If it can be a bare name, also read a config.llm.provider field — which would need to be added to PluginConfig.llm — or add a test that exercises the exact production config shape.
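The bare-name behavior F1 describes can be sketched as follows. This is a hypothetical stand-in for the repo's splitProviderModel helper (the real implementation near index.ts:702 may differ in detail); it only illustrates why a bare model name leaves provider undefined:

```typescript
// Hypothetical stand-in for the repo's splitProviderModel helper
// (real implementation near index.ts:702 may differ).
function splitProviderModel(s: string): { provider?: string; model: string } {
  const idx = s.indexOf("/");
  if (idx === -1) return { model: s }; // bare name: no provider key at all
  return { provider: s.slice(0, idx), model: s.slice(idx + 1) };
}

// Qualified ref: both fields present.
const qualified = splitProviderModel("openclaw/MiniMax-M2.1");
// → { provider: "openclaw", model: "MiniMax-M2.1" }

// Bare name: provider stays undefined, which is the case F1 flags.
const bare = splitProviderModel("MiniMax-M2.1");
// → { model: "MiniMax-M2.1" }
```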
Non-blocking
- MR1 — The fallback reads config.llm.model directly, bypassing the repo's OAuth model compatibility checks and normalization rules that the normal resolveAgentPrimaryModelRef path runs through. Worth noting as a known limitation.
F1 confirmed, fix plan (A+C): Your observation is correct. The production config does use a bare model name:

```json
"llm": {
  "model": "MiniMax-M2.5",
  "baseURL": "https://api.minimax.io/v1"
}
```

Problem: with a bare name, splitProviderModel returns no provider key, so provider ends up undefined.

Fix plan (A+C):
```typescript
// New helper
function inferProviderFromBaseURL(baseURL: string | undefined): string | undefined {
  if (!baseURL) return undefined;
  const u = baseURL.toLowerCase();
  if (u.includes("minimax.io")) return "minimax-portal";
  if (u.includes("openai.com")) return "openai";
  if (u.includes("anthropic.com")) return "anthropic";
  return undefined;
}

// Modified fallback in generateReflectionText
const cfg = params.cfg as Record<string, unknown>;
const llmConfig = cfg?.llm as Record<string, unknown> | undefined;
const modelRef =
  (resolveAgentPrimaryModelRef(params.cfg, params.agentId) as string | undefined)
  ?? (llmConfig?.model as string | undefined);
const split = modelRef ? splitProviderModel(modelRef) : { provider: void 0, model: void 0 };
const provider = split.provider ?? inferProviderFromBaseURL(llmConfig?.baseURL as string | undefined);
const model = split.model;
```

Known limitation:

Will add this code in a follow-up commit and add unit tests.

MR1: Understood, this is a known limitation.
- Add inferProviderFromBaseURL() helper using URL API (hostname.endsWith)
- Add short-circuit guard to prevent baseURL from overriding qualified model refs
- Fix void 0 to undefined for consistency

Addresses F1 feedback from PR CortexReach#618 review
F1 fix code pushed. Per the adversarial review suggestions, the code has been updated:

Improvement 1: URL API to avoid subdomain spoofing

```typescript
function inferProviderFromBaseURL(baseURL: string | undefined): string | undefined {
  if (!baseURL) return undefined;
  try {
    const url = new URL(baseURL);
    const hostname = url.hostname.toLowerCase();
    // Use endsWith to avoid subdomain spoofing
    if (hostname.endsWith("minimax.io")) return "minimax-portal";
    if (hostname.endsWith("openai.com")) return "openai";
    if (hostname.endsWith("anthropic.com")) return "anthropic";
    return undefined;
  } catch {
    return undefined;
  }
}
```

Improvement 2: short-circuit guard

```typescript
const provider = split.provider ?? inferProviderFromBaseURL(llmConfig?.baseURL);
```

Improvement 3: void 0 → undefined

```typescript
{ provider: undefined, model: undefined }
```

Change summary:

Commit:
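To illustrate the subdomain-spoofing concern, here is a small self-contained sketch (names are illustrative, not from the repo). One caveat worth noting: a plain hostname.endsWith("minimax.io") still matches a registrable look-alike such as evilminimax.io, so an exact-match-or-dot-suffix comparison is the stricter form:

```typescript
// A look-alike URL crafted to fool substring matching.
const spoofed = "https://minimax.io.evil.example/v1";

// includes() matches anywhere in the string, so the spoofed URL passes.
const naive = spoofed.toLowerCase().includes("minimax.io"); // true

// Parsing with the URL API and comparing the hostname rejects it.
// Exact match or dot-prefixed suffix also rejects "evilminimax.io",
// which a bare endsWith("minimax.io") would still accept.
function isHost(baseURL: string, host: string): boolean {
  try {
    const h = new URL(baseURL).hostname.toLowerCase();
    return h === host || h.endsWith("." + host);
  } catch {
    // Invalid URLs (URL constructor throws) are treated as non-matching.
    return false;
  }
}
```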
…turns undefined

Root cause: When generateReflectionText() is triggered via plugin hooks (dc-channel--...), the plugin-scoped config lacks an 'agents' section. resolveAgentPrimaryModelRef() returns undefined, causing runEmbeddedPiAgent to receive provider=undefined, model=undefined. This falls back to openai/gpt-5.4 (the default), which has no API key, resulting in a 56-second timeout before minimal fallback activation.

Fix: When resolveAgentPrimaryModelRef returns undefined, fall back to config.llm.model (the MiniMax M2.1 configured in the plugin's openclaw.json). This ensures the correct model is used for both agent:main (which gets the full config, so resolveAgentPrimaryModelRef works) and dc-channel--... sessions (which get plugin-scoped config).

Fixes the 56-second delay on /reset for dc-channel Discord sessions.
Force-pushed b5192ec to aaad045
…F1 fix

- Test URL inference with hostname.endsWith() (no subdomain spoofing)
- Test production config: bare model + baseURL = correct provider
- Test fallback: bare name + no baseURL = undefined provider
- Test edge cases: null, empty string, invalid URL
Unit tests added. New test file: est/infer-provider-from-baseurl.test.mjs

Test coverage:

Commit: 32d1a33
The root cause is correct — the plugin-scoped config lacks the agents section.

Must fix

Worth addressing

Build note: Minor: The
Problem
When generateReflectionText() is triggered via plugin hooks (e.g., command:reset for dc-channel--...), the plugin-scoped config passed to resolveAgentPrimaryModelRef() lacks an agents section. This causes the function to return undefined, resulting in runEmbeddedPiAgent receiving provider=undefined, model=undefined.
The Pi runner then falls back to openai/gpt-5.4 (the hard-coded default), which has no API key configured, causing a 56-second timeout before minimal fallback activates. Reflection content is degraded to (fallback) marker.
Sessions like agent:main that receive the full OpenClaw config are unaffected.
Fix
When resolveAgentPrimaryModelRef returns undefined, fall back to config.llm.model (the MiniMax M2.1 model configured in the plugin config). This ensures the correct model is used regardless of config scope.
Before:

```typescript
const modelRef = resolveAgentPrimaryModelRef(params.cfg, params.agentId);
const { provider, model } = modelRef ? splitProviderModel(modelRef) : {};
```
After:

```typescript
const modelRef = resolveAgentPrimaryModelRef(params.cfg, params.agentId)
  ?? (((params.cfg as Record<string, unknown>)?.llm as Record<string, unknown>)?.model as string | undefined);
const { provider, model } = modelRef ? splitProviderModel(modelRef) : { provider: void 0, model: void 0 };
```
Testing