
feat(chat): centralize LLM model configs and add provider presets #17

Merged
MinimaxLanbo merged 2 commits into MiniMax-AI:main from juhuaxia:feat/llm-model-config-refactor
Mar 19, 2026

Conversation

@juhuaxia (Contributor) commented Mar 19, 2026

Summary

  • extract provider/model definitions into a single llmModels.ts source of truth and refactor llmClient.ts to consume it
  • add complete provider model presets (OpenAI, Anthropic, DeepSeek, MiniMax, Z.ai, Kimi) and wire them into Chat settings
  • support preset dropdown selection with manual model input fallback, plus update unit tests for new defaults and providers
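As a rough illustration, the centralized registry described above might take a shape like the following. This is a sketch only; the field names, provider keys, and model entries are illustrative assumptions, not the PR's actual llmModels.ts contents:

```typescript
// Illustrative sketch — the real llmModels.ts in this PR may use
// different field names and model lists.
interface ModelPreset {
  id: string;    // model identifier sent with API requests
  label: string; // display name shown in the settings dropdown
}

interface ProviderPreset {
  name: string;          // provider key, e.g. "openai" or "minimax"
  baseUrl: string;       // full base URL, version segment included where the API uses one
  models: ModelPreset[]; // preset choices offered in the dropdown
  defaultModel: string;  // preselected model id
}

const providerPresets: ProviderPreset[] = [
  {
    name: "openai",
    baseUrl: "https://api.openai.com/v1",
    models: [{ id: "gpt-4o", label: "GPT-4o" }],
    defaultModel: "gpt-4o",
  },
];
```

With a structure like this, llmClient.ts and the Chat settings UI can both read from one list instead of each hardcoding providers.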

Detail

  • Added providers such as z.ai, Zhipu, and Kimi.
  • Model selection now defaults to a dropdown menu; if a specific model is not listed in the dropdown, you can click "Edit" next to it to manually enter the model name.
  • Some providers' base URLs include a version segment such as "/v1" while others (e.g., z.ai) do not; the version segment is therefore now part of each provider's specific base URL rather than a fixed string concatenated by the client.
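A minimal sketch of the base-URL handling described above. The helper names hasVersionSuffix and joinUrl do appear in this PR, but these particular implementations are assumptions, and the resolveBaseUrl wrapper is entirely hypothetical:

```typescript
// Assumed implementations — the PR's actual helpers may differ.

// True when the URL ends in a version segment such as /v1 or /v4/.
function hasVersionSuffix(baseUrl: string): boolean {
  return /\/v\d+\/?$/.test(baseUrl);
}

// Join base and path without doubling or dropping slashes.
function joinUrl(baseUrl: string, path: string): string {
  return baseUrl.replace(/\/+$/, "") + "/" + path.replace(/^\/+/, "");
}

// Hypothetical wrapper for providers whose API does use /v1: older
// saved configs may store a bare host (e.g. https://api.openai.com),
// so append /v1 only when no version segment is present already.
function resolveBaseUrl(baseUrl: string): string {
  return hasVersionSuffix(baseUrl)
    ? baseUrl.replace(/\/+$/, "")
    : joinUrl(baseUrl, "v1");
}
```

The key design point is that the client no longer blindly appends a fixed "/v1": each preset carries its own complete base URL, and the version check only exists to tolerate older saved configs.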

Validation

  • cd apps/webuiapps && pnpm test -- --run src/lib/__tests__/llmClient.test.ts (40/40 passed)
  • pnpm -w run lint (passed)
  • cd apps/webuiapps && pnpm build (passed)

@juhuaxia force-pushed the feat/llm-model-config-refactor branch 2 times, most recently from 1868b3b to c0865c2 on March 19, 2026 03:43
@juhuaxia force-pushed the feat/llm-model-config-refactor branch from c0865c2 to fc6c812 on March 19, 2026 05:55
@MinimaxLanbo (Collaborator) commented:

Thanks for this PR! The refactoring is well-structured — extracting llmModels.ts as a single source of truth for provider/model configs is a clean separation of concerns, and the URL joining logic (hasVersionSuffix + joinUrl) is a solid improvement over the previous hardcoded /v1 concatenation. The dropdown + manual edit toggle for model selection is a nice UX touch too.

A few things I noticed:

1. Unrelated test changes bundled in

The chatHistoryStorage.test.ts changes (removing localStorage assertions, switching to session paths) seem unrelated to the LLM model config refactor. Could you either split this into a separate commit with its own description, or mention the rationale in the PR body? If this is fixing pre-existing test failures on main, it would be helpful to note that explicitly.

2. Existing user config migration

All provider baseUrl values now include /v1 (e.g. https://api.openai.com becomes https://api.openai.com/v1). The hasVersionSuffix helper handles both old and new formats correctly for URL construction, which is great. However, if a user switches providers and switches back, their saved baseUrl gets overwritten with the new format. This is probably fine, but worth a note in the PR description.

3. hasVersionSuffix edge case

The regex /\/v\d+\/?$/ won't match URLs with query parameters (e.g. https://api.example.com/v1?key=xxx). Not a problem today, but could be a subtle bug in the future. Consider a comment noting this assumption.
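To make the edge case concrete, here is the failure mode, along with one possible hardening via the URL API. The robust variant is a suggestion, not code from the PR:

```typescript
// The trailing-anchored regex from the PR's helper:
const versionRe = /\/v\d+\/?$/;

// It misses the version segment once a query string follows,
// because the anchor $ requires the segment at the very end:
// versionRe.test("https://api.example.com/v1")          → true
// versionRe.test("https://api.example.com/v1?key=xxx")  → false

// Suggested hardening: test the pathname only, ignoring query/fragment.
function hasVersionSuffixRobust(url: string): boolean {
  try {
    return versionRe.test(new URL(url).pathname);
  } catch {
    return versionRe.test(url); // fall back for non-absolute inputs
  }
}
```

Since presets today never carry query parameters, the simpler regex is fine; the hardened version only matters if user-entered base URLs with query strings are ever allowed.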

4. Unused exports in llmModels.ts

getModelInfo, getModelsByCategory, isPresetModel, and getProviderDisplayName are exported but not referenced anywhere in this PR. If these are intentionally reserved for future use, a brief comment would help. Otherwise, consider removing them to keep the API surface lean.

5. Anthropic default model change: Opus → Sonnet

The default model changed from claude-opus-4-6 to claude-sonnet-4-6. Was this intentional? Sonnet is cheaper but less capable — just want to confirm this is the desired default for new users.

Overall this looks good — nice work! Happy to approve once the above points are addressed or clarified.

@MinimaxLanbo MinimaxLanbo merged commit 0118623 into MiniMax-AI:main Mar 19, 2026
2 of 3 checks passed