
fix: fallback to config.llm.model when resolveAgentPrimaryModelRef returns undefined#618

Open
jlin53882 wants to merge 3 commits into CortexReach:master from jlin53882:fix/issue-reflection-model-resolution

Conversation

@jlin53882
Contributor

Problem

When generateReflectionText() is triggered via plugin hooks (e.g., command:reset for dc-channel--...), the plugin-scoped config passed to resolveAgentPrimaryModelRef() lacks an agents section. This causes the function to return undefined, resulting in runEmbeddedPiAgent receiving provider=undefined, model=undefined.

The Pi runner then falls back to openai/gpt-5.4 (the hard-coded default), which has no API key configured, causing a 56-second timeout before the minimal fallback activates. Reflection content is degraded to a (fallback) marker.

Sessions like agent:main that receive the full OpenClaw config are unaffected.

Fix

When resolveAgentPrimaryModelRef returns undefined, fall back to config.llm.model (the MiniMax M2.1 model configured in the plugin config). This ensures the correct model is used regardless of config scope.

Before:
const modelRef = resolveAgentPrimaryModelRef(params.cfg, params.agentId);
const { provider, model } = modelRef ? splitProviderModel(modelRef) : {};

After:
const modelRef = resolveAgentPrimaryModelRef(params.cfg, params.agentId)
?? (((params.cfg as Record<string, unknown>)?.llm as Record<string, unknown>)?.model as string | undefined);
const { provider, model } = modelRef ? splitProviderModel(modelRef) : { provider: void 0, model: void 0 };
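For context, a minimal sketch of the split behavior this fallback relies on, assuming splitProviderModel splits on the first `/` and omits the provider key entirely for bare names (reconstructed for illustration; not the repo's actual implementation):

```typescript
// Sketch of the assumed splitProviderModel behavior: split on the first "/",
// omit the provider key for bare model names. Illustrative only.
function splitProviderModel(ref: string): { provider?: string; model?: string } {
  const idx = ref.indexOf("/");
  if (idx === -1) return { model: ref }; // bare name: no provider key
  return { provider: ref.slice(0, idx), model: ref.slice(idx + 1) };
}

const qualified = splitProviderModel("openclaw/MiniMax-M2.1");
// { provider: "openclaw", model: "MiniMax-M2.1" }
const bare = splitProviderModel("MiniMax-M2.1");
// { model: "MiniMax-M2.1" }, provider stays undefined
```

Note that the fallback only helps when config.llm.model carries a provider prefix; a bare name still yields provider=undefined.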

Testing

  1. Trigger /reset on a dc-channel Discord session
  2. Verify reflection file does NOT contain (fallback) marker
  3. Verify reflection contains full content
  4. Confirm elapsed time is normal (~5-10s) instead of ~56s

Collaborator

@rwmjhb rwmjhb left a comment


Small fix with the right intent. One assumption in the implementation needs verification before merge.

Must fix

F1 — Fallback may still produce provider=undefined for bare model names (index.ts:1161-1163)

splitProviderModel returns { model: s } (no provider key) when the input contains no / (line 702). If config.llm.model is stored as a bare name — e.g. 'MiniMax-M2.1' without a provider prefix — the fallback yields provider=undefined, model='MiniMax-M2.1'. Whether runEmbeddedPiAgent handles a defined model with an undefined provider correctly is not visible from this repo.

Please confirm: does config.llm.model in dc-channel plugin config always use 'provider/model' format (e.g. 'openclaw/MiniMax-M2.1')? If yes, the fix is correct as written. If it can be a bare name, also read a config.llm.provider field — which would need to be added to PluginConfig.llm — or add a test that exercises the exact production config shape.
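The distinction being asked about can be sketched with a hypothetical helper (not repo code; names are illustrative):

```typescript
// Hypothetical helper illustrating the two config shapes under discussion:
// a qualified "provider/model" ref vs. a bare model name.
function isQualifiedRef(modelRef: string): boolean {
  return modelRef.includes("/");
}

const a = isQualifiedRef("openclaw/MiniMax-M2.1"); // true: fix works as written
const b = isQualifiedRef("MiniMax-M2.1"); // false: needs a separate provider source
```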


Non-blocking

  • MR1 — The fallback reads config.llm.model directly, bypassing the repo's OAuth model compatibility checks and normalization rules that the normal resolveAgentPrimaryModelRef path runs through. Worth noting as a known limitation.

@jlin53882
Contributor Author

F1 confirmation and fix plan (A+C):

Your observation is correct. The production config does use a bare name:

"llm": {
  "model": "MiniMax-M2.5",
  "baseURL": "https://api.minimax.io/v1"
}

Problem

  1. The MiniMax API only accepts bare names (e.g. MiniMax-M2.5); provider prefixes are rejected (the API refuses minimax-portal/MiniMax-M2.5)
  2. splitProviderModel("MiniMax-M2.5") returns only { model: "MiniMax-M2.5" }, with no provider

Fix plan (A+C)

  1. Add an inferProviderFromBaseURL() helper: when the model is a bare name, infer the provider from the baseURL
  2. Add a short-circuit guard: when the split already yields a provider, use it directly instead of letting the baseURL override it
// new helper
function inferProviderFromBaseURL(baseURL: string | undefined): string | undefined {
  if (!baseURL) return undefined;
  const u = baseURL.toLowerCase();
  if (u.includes("minimax.io")) return "minimax-portal";
  if (u.includes("openai.com")) return "openai";
  if (u.includes("anthropic.com")) return "anthropic";
  return undefined;
}

// updated fallback in generateReflectionText
const cfg = params.cfg as Record<string, unknown>;
const llmConfig = cfg?.llm as Record<string, unknown> | undefined;
const modelRef =
  (resolveAgentPrimaryModelRef(params.cfg, params.agentId) as string | undefined)
  ?? (llmConfig?.model as string | undefined);
const split = modelRef ? splitProviderModel(modelRef) : { provider: void 0, model: void 0 };
const provider = split.provider ?? inferProviderFromBaseURL(llmConfig?.baseURL as string | undefined);
const model = split.model;

Known limitations

  • Only three major providers are supported (minimax, openai, anthropic)
  • If the baseURL matches none of them, provider will be undefined; this is expected behavior

This code and accompanying unit tests will be added in a follow-up commit.


MR1: Understood; this is a known limitation.

jlin53882 added a commit to jlin53882/memory-lancedb-pro that referenced this pull request Apr 15, 2026
- Add inferProviderFromBaseURL() helper using URL API (hostname.endsWith)
- Add short-circuit guard to prevent baseURL from overriding qualified model refs
- Fix void 0 to undefined for consistency

Addresses F1 feedback from PR CortexReach#618 review
@jlin53882
Contributor Author

F1 fix code pushed

Following the adversarial review suggestions, the code has been updated:

Improvement 1: URL API to avoid subdomain spoofing

function inferProviderFromBaseURL(baseURL: string | undefined): string | undefined {
  if (!baseURL) return undefined;

  try {
    const hostname = new URL(baseURL).hostname.toLowerCase();

    // Exact host or dot-suffix match, so look-alike hosts
    // (e.g. fake-minimax.io) cannot spoof a provider
    const matches = (domain: string) =>
      hostname === domain || hostname.endsWith(`.${domain}`);

    if (matches("minimax.io")) return "minimax-portal";
    if (matches("openai.com")) return "openai";
    if (matches("anthropic.com")) return "anthropic";

    return undefined;
  } catch {
    return undefined;
  }
}
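A quick illustration of why matching provider domains against hostnames needs care (hosts are made up for the example):

```typescript
// A raw-string substring check matches look-alike hosts:
const spoofedByIncludes = "https://fake-minimax.io/v1".includes("minimax.io"); // true

// A plain suffix check on the hostname still matches "fake-minimax.io":
const host = new URL("https://fake-minimax.io/v1").hostname;
const spoofedBySuffix = host.endsWith("minimax.io"); // true

// An exact-or-dot-suffix check blocks it while allowing real subdomains:
const safe = host === "minimax.io" || host.endsWith(".minimax.io"); // false
const real = new URL("https://api.minimax.io/v1").hostname;
const allowed = real === "minimax.io" || real.endsWith(".minimax.io"); // true
```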

Improvement 2: short-circuit guard

const provider = split.provider ?? inferProviderFromBaseURL(llmConfig?.baseURL);

Improvement 3: void 0 → undefined

{ provider: undefined, model: undefined }

Summary of changes

  • URL matching: includes() on the raw string → endsWith() on the parsed hostname
  • Invalid URL: try-catch returning undefined
  • Null representation: void 0 → undefined

Commit: b5192ec

…turns undefined

Root cause: When generateReflectionText() is triggered via plugin hooks
(dc-channel--...), the plugin-scoped config lacks an 'agents' section.
resolveAgentPrimaryModelRef() returns undefined, causing
runEmbeddedPiAgent to receive provider=undefined, model=undefined.
This falls back to openai/gpt-5.4 (default), which has no API key,
resulting in a 56-second timeout before minimal fallback activation.

Fix: When resolveAgentPrimaryModelRef returns undefined, fall back to
config.llm.model (the MiniMax M2.1 configured in plugin's openclaw.json).
This ensures correct model is used for both agent:main (which gets full
config and resolveAgentPrimaryModelRef works) and dc-channel--... sessions
(which get plugin-scoped config).

Fixes the 56-second delay on /reset for dc-channel Discord sessions.
- Add inferProviderFromBaseURL() helper using URL API (hostname.endsWith)
- Add short-circuit guard to prevent baseURL from overriding qualified model refs
- Fix void 0 to undefined for consistency

Addresses F1 feedback from PR CortexReach#618 review
@jlin53882 jlin53882 force-pushed the fix/issue-reflection-model-resolution branch from b5192ec to aaad045 Compare April 15, 2026 18:39
…F1 fix

- Test URL inference with hostname.endsWith() (no subdomain spoofing)
- Test production config: bare model + baseURL = correct provider
- Test fallback: bare name + no baseURL = undefined provider
- Test edge cases: null, empty string, invalid URL
@jlin53882
Contributor Author

Unit tests added

New test file: est/infer-provider-from-baseurl.test.mjs

Test coverage

| Test case | Description |
| --- | --- |
| hostname.endsWith() guard | fake-minimax.io should not match |
| Production config | bare model + baseURL → correct provider |
| Fallback | bare name + no baseURL → undefined provider |
| Edge cases | null, empty string, invalid URL |

Commit: 32d1a33

@rwmjhb
Collaborator

rwmjhb commented Apr 16, 2026

The root cause is correct — plugin-scoped config lacks agents, causing resolveAgentPrimaryModelRef to return undefined and fall through to an openai/gpt-5.4 default with no key. Good fix direction, one must-fix needed.

Must fix

  • Bare model name not handled: config.llm.model is passed directly to splitProviderModel, which splits on /. If the config stores a bare model name without a provider prefix (e.g. 'MiniMax-M2.1'), splitProviderModel will return provider=undefined — the same failure mode you're fixing. Add a guard: if the split produces no provider, fall back to a known-safe default (or throw with a clear message) rather than silently passing undefined downstream.
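One way to express the requested guard, using names that mirror this PR (a sketch of the proposal, not the final implementation):

```typescript
// Sketch: fail loudly when a resolved ref carries no provider, instead of
// passing undefined downstream. Names mirror the PR; behavior is a proposal.
function splitOrThrow(
  split: { provider?: string; model?: string },
): { provider: string; model: string } {
  if (!split.model) throw new Error("No model ref resolved");
  if (!split.provider) {
    throw new Error(
      `Model ref "${split.model}" has no provider prefix; ` +
        `expected "provider/model" (e.g. "openclaw/MiniMax-M2.1")`,
    );
  }
  return { provider: split.provider, model: split.model };
}

const ok = splitOrThrow({ provider: "openclaw", model: "MiniMax-M2.1" });
// qualified refs pass through unchanged; bare refs throw with a clear message
```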

Worth addressing

  • No test covers the fallback path (resolveAgentPrimaryModelRef returns undefined → config.llm.model used). Given this is fixing a real production timeout, a targeted unit test would prevent regression.
  • The triple-cast in the fix (as unknown as X) signals a typing gap rather than a real fix — worth modeling the plugin-scoped config shape properly so the compiler validates the path.
  • The fallback bypasses the repo's OAuth model compatibility and normalization rules (used for the normal resolution path). If those rules matter for plugin-scoped reflections, the fallback should go through the same pipeline.
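The missing regression test for the fallback path could look roughly like this; the resolver is stubbed and all names are assumptions based on this PR, not the repo's actual test harness:

```typescript
// Sketch of a regression test for the fallback path: the resolver returns
// undefined, so the model ref should come from config.llm.model instead.
type Cfg = { llm?: { model?: string }; agents?: unknown };

function resolveModelRef(
  cfg: Cfg,
  resolveAgentPrimaryModelRef: (cfg: Cfg) => string | undefined,
): string | undefined {
  return resolveAgentPrimaryModelRef(cfg) ?? cfg.llm?.model;
}

// Plugin-scoped config: no "agents" section, resolver yields undefined.
const pluginCfg: Cfg = { llm: { model: "openclaw/MiniMax-M2.1" } };
const viaFallback = resolveModelRef(pluginCfg, () => undefined);
// falls back to config.llm.model

// Full config: resolver wins, fallback untouched.
const viaResolver = resolveModelRef(pluginCfg, () => "openclaw/other-model");
```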

Build note: test/cli-smoke.mjs:316 fails (undefined !== 1) — stale_base=true suggests pre-existing, but please confirm before merge.

Minor: The {} → { provider: void 0, model: void 0 } change in the destructure fallback isn't mentioned in the PR description; a one-liner explaining the intent (TypeScript strict mode? intentional runtime behavior?) would help.
