📝 Walkthrough

The PR implements a provider-database-backed model refresh system with background synchronization. It adds a listener for database updates that triggers conditional model refreshes for database-backed providers, introduces deduplication of concurrent refresh requests, and includes conditional UI messaging with toast notifications. The Doubao provider's thinking-model detection migrates from a hardcoded list to database lookups. A new utility module identifies which providers are database-backed.
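The deduplication idea mentioned above can be reduced to a small sketch: concurrent refresh requests for the same provider share one in-flight promise. This is a minimal illustration under assumed names (`refreshModels`, `fetchModels`, `inFlight` are hypothetical), not the PR's actual implementation.

```typescript
// Hypothetical sketch of refresh deduplication: callers that arrive while a
// refresh is in flight reuse the same promise instead of starting a new fetch.
const inFlight = new Map<string, Promise<string[]>>();
let fetchCount = 0;

async function fetchModels(providerId: string): Promise<string[]> {
  // Stand-in for a slow network call to the provider DB.
  await new Promise((resolve) => setTimeout(resolve, 10));
  return [`${providerId}-model-a`, `${providerId}-model-b`];
}

function refreshModels(providerId: string): Promise<string[]> {
  const existing = inFlight.get(providerId);
  if (existing) return existing; // dedupe: reuse the in-flight refresh
  fetchCount += 1;
  const promise = fetchModels(providerId).finally(() => {
    // Only clear the slot if it still belongs to this run.
    if (inFlight.get(providerId) === promise) inFlight.delete(providerId);
  });
  inFlight.set(providerId, promise);
  return promise;
}

async function demo(): Promise<{ fetchCount: number; same: boolean }> {
  const [a, b] = await Promise.all([refreshModels('doubao'), refreshModels('doubao')]);
  return { fetchCount, same: a === b };
}
```

Both concurrent callers resolve with the result of the single upstream fetch; note the `finally` guard so a later refresh is not accidentally evicted by an older one.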
Sequence Diagram(s)

```mermaid
sequenceDiagram
  participant EB as Event Bus
  participant Presenter as LLMProviderPresenter
  participant CP as ConfigPresenter
  participant DB as ProviderDB
  participant Provider as OpenAICompatibleProvider
  participant UI as ProviderApiConfig.vue
  Note over EB,UI: Background DB-driven refresh flow
  EB->>Presenter: PROVIDER_DB_EVENTS.UPDATED
  Presenter->>Presenter: Filter enabled DB-backed providers
  Presenter->>Presenter: Enqueue refresh (deduplicate)
  Note over CP,DB: Sync upstream metadata
  Presenter->>CP: refreshProviderDb(true)
  CP->>DB: Sync providers metadata
  DB-->>CP: Status response
  CP-->>Presenter: Return status
  alt refreshProviderDb success
    Presenter->>Provider: refreshModels()
    Provider->>DB: Fetch models
    DB-->>Provider: Model list
    Provider-->>Presenter: Resolved
  else refreshProviderDb error
    Presenter-->>UI: Reject with error
    UI->>UI: Show destructive toast
  end
  Note over Presenter,UI: User-initiated refresh flow
  UI->>Presenter: refreshModels(providerId)
  Presenter->>Presenter: Check if DB-backed
  alt isProviderDbBackedProvider
    Presenter->>CP: refreshProviderDb(true)
    CP-->>Presenter: Status
    Presenter->>Provider: refreshModels()
  else non-DB provider
    Presenter->>Provider: refreshModels()
  end
  Provider-->>Presenter: Resolved
  Presenter-->>UI: Success
  UI->>UI: Show success toast (with/without metadata)
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~55 minutes
🚥 Pre-merge checks: ❌ 1 failed (1 warning) | ✅ 2 passed
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/renderer/settings/components/ProviderApiConfig.vue (1)
326-352: ⚠️ Potential issue | 🟡 Minor: Snapshot the provider classification before awaiting the refresh.
If the parent reuses this component instance for another provider while the request is in flight, lines 335-349 can pick up the new `props.provider.id` state and show the wrong success/failure copy for the refresh that just finished.
💡 Suggested fix

```diff
 const refreshModels = async () => {
   if (isRefreshing.value) return
+  const providerId = props.provider.id
+  const refreshesMetadataFirst = shouldRefreshProviderDbFirst.value
   isRefreshing.value = true
   try {
-    await llmProviderPresenter.refreshModels(props.provider.id)
+    await llmProviderPresenter.refreshModels(providerId)
     toast({
       title: t('settings.provider.toast.refreshModelsSuccessTitle'),
       description: t(
-        shouldRefreshProviderDbFirst.value
+        refreshesMetadataFirst
           ? 'settings.provider.toast.refreshModelsSuccessDescriptionWithMetadata'
           : 'settings.provider.toast.refreshModelsSuccessDescription'
       ),
       duration: 4000
     })
   } catch (error) {
     console.error('Failed to refresh models:', error)
     toast({
       title: t('settings.provider.toast.refreshModelsFailedTitle'),
       description: t(
-        shouldRefreshProviderDbFirst.value
+        refreshesMetadataFirst
           ? 'settings.provider.toast.refreshModelsFailedDescriptionWithMetadata'
           : 'settings.provider.toast.refreshModelsFailedDescription'
       ),
       variant: 'destructive',
       duration: 4000
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/renderer/settings/components/ProviderApiConfig.vue` around lines 326 - 352, Snapshot the provider identity and the classification flag before the await in refreshModels: capture props.provider.id (e.g., providerId) and shouldRefreshProviderDbFirst.value (e.g., snapshotShouldRefresh) at the top of refreshModels, then call llmProviderPresenter.refreshModels(providerId) and use snapshotShouldRefresh for selecting the toast description strings and in the success/failure logic so the messages correspond to the request that completed rather than a possibly changed props.provider; keep isRefreshing handling as-is.
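The stale-closure hazard this comment describes can be shown with a minimal standalone sketch; `currentProviderId` and the two helpers are illustrative stand-ins for the component's reactive props, not the component's real code.

```typescript
// Reading mutable state after an await can observe a value that changed while
// the request was in flight; snapshotting before the await avoids this.
let currentProviderId = 'doubao';

async function refreshWithoutSnapshot(): Promise<string> {
  await Promise.resolve(); // provider may switch while suspended here
  return currentProviderId; // reads the NEW provider id
}

async function refreshWithSnapshot(): Promise<string> {
  const providerId = currentProviderId; // snapshot before awaiting
  await Promise.resolve();
  return providerId; // still reports the provider the refresh started for
}

async function demo(): Promise<[string, string]> {
  currentProviderId = 'doubao';
  const without = refreshWithoutSnapshot();
  const withSnap = refreshWithSnapshot();
  currentProviderId = 'openai'; // simulates the parent swapping providers mid-flight
  return [await without, await withSnap];
}
```

The non-snapshotting variant reports the provider the component switched to, which is exactly how the wrong toast copy gets selected.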
🧹 Nitpick comments (3)
test/main/presenter/llmProviderPresenter/doubaoProvider.test.ts (2)
179-218: Add the negative case for models without the Doubao thinking note.
This only locks the positive branch. If `coreStream` starts sending `thinking` for every Doubao model, this suite would still pass even though unflagged models may reject the request.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/main/presenter/llmProviderPresenter/doubaoProvider.test.ts` around lines 179 - 218, The test only asserts the positive branch where a model has the 'doubao-thinking-parameter' note; add a negative case to ensure models without that note do not send the thinking parameter. Create another test (or extend this one) that mocks mockGetModel to return a model without extra_capabilities.reasoning.notes, call DoubaoProvider.coreStream (as done here), collect events, and assert mockChatCompletionsCreate was called with an objectContaining that does NOT include a thinking property (or that thinking is undefined), referencing DoubaoProvider, coreStream, mockGetModel, and mockChatCompletionsCreate to locate the code to change.
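As a rough illustration of the branch the missing test should pin down, a note-based detection helper might look like this. The `buildThinkingParam` helper and the `{ type: 'enabled' }` payload shape are assumptions for the sketch; only the `doubao-thinking-parameter` note name and the `extra_capabilities.reasoning.notes` path come from the suite's prompts.

```typescript
// Hypothetical helper: only models flagged with the Doubao thinking note get a
// `thinking` request parameter; unflagged models get none (the negative case).
interface ModelMeta {
  extra_capabilities?: { reasoning?: { notes?: string[] } };
}

function buildThinkingParam(model: ModelMeta): { type: string } | undefined {
  const notes = model.extra_capabilities?.reasoning?.notes ?? [];
  return notes.includes('doubao-thinking-parameter') ? { type: 'enabled' } : undefined;
}
```

A negative test would assert that the request built for a model without the note carries no `thinking` property at all.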
1-4: Mirror the `providers/` subdirectory in this test path.
This suite targets `src/main/presenter/llmProviderPresenter/providers/doubaoProvider.ts`, so it should live under `test/main/presenter/llmProviderPresenter/providers/` as well. As per coding guidelines, "Vitest test suites should mirror the source structure under `test/main/**` and `test/renderer/**`".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/main/presenter/llmProviderPresenter/doubaoProvider.test.ts` around lines 1 - 4, The test is placed at test/main/presenter/llmProviderPresenter/ but must mirror the source providers subdirectory; move the test so it lives alongside the implementation's subfolder (i.e., under a providers/ folder corresponding to the implementation) and update the module imports accordingly so they correctly import DoubaoProvider and any types (IConfigPresenter, LLM_PROVIDER, ModelConfig) from the same relative locations as the source file; ensure the test filename remains doubaoProvider.test.ts and update any import paths in that file to reflect the new providers/ location.
test/main/presenter/llmProviderPresenter/backgroundModelSync.test.ts (1)
204-247: Cover the disabled DB-backed provider case too.
All `provider-db:updated` coverage here uses an enabled DB-backed provider. A disabled DB-backed provider should also ignore the event, otherwise catalog updates can still trigger background refresh work for a provider the user turned off.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/main/presenter/llmProviderPresenter/backgroundModelSync.test.ts` around lines 204 - 247, Add a test that mirrors the enabled DB-backed case but uses a DB-backed provider marked disabled so provider-db:updated is ignored: spy on OpenAICompatibleProvider.prototype.refreshModels, instantiate LLMProviderPresenter with createConfigPresenter(createProvider({ id: 'doubao', apiType: 'doubao', /* mark provider disabled e.g. enabled: false or dbEnabled: false */ })), await the microtasks as in the other tests, emitMainEvent('provider-db:updated', {...}), and assert refreshSpy.not.toHaveBeenCalled(); this ensures LLMProviderPresenter ignores provider-db updates for DB-backed providers that the user has disabled.
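The filtering step that test should lock down can be sketched in isolation; the `Provider` shape and field names here (`enable`, `dbBacked`) are hypothetical stand-ins for whatever the presenter actually reads.

```typescript
// Hypothetical filter: only providers that are both user-enabled and
// DB-backed should react to a provider-db:updated event.
interface Provider {
  id: string;
  enable: boolean;
  dbBacked: boolean;
}

function providersToRefresh(providers: Provider[]): string[] {
  return providers.filter((p) => p.enable && p.dbBacked).map((p) => p.id);
}
```

A disabled DB-backed provider falls out of the list, so no background refresh is enqueued for it.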
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 4242595a-da82-4ae3-ad2c-e1c8e2730e33
📒 Files selected for processing (20)
src/main/presenter/configPresenter/providerDbLoader.ts
src/main/presenter/llmProviderPresenter/index.ts
src/main/presenter/llmProviderPresenter/providers/doubaoProvider.ts
src/renderer/settings/components/ProviderApiConfig.vue
src/renderer/src/i18n/da-DK/settings.json
src/renderer/src/i18n/en-US/settings.json
src/renderer/src/i18n/fa-IR/settings.json
src/renderer/src/i18n/fr-FR/settings.json
src/renderer/src/i18n/he-IL/settings.json
src/renderer/src/i18n/ja-JP/settings.json
src/renderer/src/i18n/ko-KR/settings.json
src/renderer/src/i18n/pt-BR/settings.json
src/renderer/src/i18n/ru-RU/settings.json
src/renderer/src/i18n/zh-CN/settings.json
src/renderer/src/i18n/zh-HK/settings.json
src/renderer/src/i18n/zh-TW/settings.json
src/shared/providerDbCatalog.ts
test/main/presenter/llmProviderPresenter/backgroundModelSync.test.ts
test/main/presenter/llmProviderPresenter/doubaoProvider.test.ts
test/renderer/components/ProviderApiConfig.test.ts
```ts
  private readonly modelRefreshPromises: Map<string, Promise<void>> = new Map()
  private readonly configPresenter: IConfigPresenter
```
Don't drop newer refresh requests behind an older in-flight refresh.
Lines 408-410 collapse every concurrent request for the same provider into the first promise and never schedule a trailing rerun. If another provider-db update lands while the first provider.refreshModels() is already reading the previous snapshot, that later refresh is discarded and the local model list can settle on stale data.
♻️ Suggested coalescing pattern
```diff
 export class LLMProviderPresenter implements ILlmProviderPresenter {
   private currentProviderId: string | null = null
   private readonly activeStreams: Map<string, StreamState> = new Map()
   private readonly modelRefreshPromises: Map<string, Promise<void>> = new Map()
+  private readonly pendingModelRefreshes: Set<string> = new Set()
   private readonly configPresenter: IConfigPresenter
   ...
   private enqueueProviderModelRefresh(providerId: string): Promise<void> {
     const existingRefresh = this.modelRefreshPromises.get(providerId)
     if (existingRefresh) {
+      this.pendingModelRefreshes.add(providerId)
       return existingRefresh
     }
-    const refreshPromise = (async () => {
-      const provider = this.getProviderInstance(providerId)
-      await provider.refreshModels()
-    })().finally(() => {
+    const runRefresh = async (): Promise<void> => {
+      do {
+        this.pendingModelRefreshes.delete(providerId)
+        const provider = this.getProviderInstance(providerId)
+        await provider.refreshModels()
+      } while (this.pendingModelRefreshes.has(providerId))
+    }
+
+    const refreshPromise = runRefresh().finally(() => {
+      this.pendingModelRefreshes.delete(providerId)
       if (this.modelRefreshPromises.get(providerId) === refreshPromise) {
         this.modelRefreshPromises.delete(providerId)
       }
     })
     this.modelRefreshPromises.set(providerId, refreshPromise)
     return refreshPromise
   }
```

A regression test where a second refresh request arrives before the first promise resolves would make this behavior much easier to lock down.
Also applies to: 407-423
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/main/presenter/llmProviderPresenter/index.ts` around lines 51 - 52, The
current coalescing uses modelRefreshPromises to collapse concurrent refreshes
into the first in-flight promise and drops any newer requests; change this so
that when a provider.refreshModels() is already in-flight you record that a
trailing refresh was requested (e.g., set a pending flag or increment a counter
keyed by provider id), and when the in-flight promise resolves you check that
pending indicator and, if set, invoke provider.refreshModels() again (updating
modelRefreshPromises to the new promise and clearing the pending flag);
reference the existing modelRefreshPromises map and the calls to
provider.refreshModels() so the map is updated with each actual run and newer
refresh requests are not discarded.
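The coalescing pattern suggested above can be exercised as a self-contained sketch, with a stand-in `doRefresh` in place of the presenter's real `provider.refreshModels()`; the map and set names mirror the diff, everything else is illustrative.

```typescript
// Trailing-rerun coalescing: a request arriving mid-flight marks the provider
// as pending, and the in-flight run loops once more before settling.
const refreshPromises = new Map<string, Promise<void>>();
const pendingRefreshes = new Set<string>();
let runCount = 0;

async function doRefresh(): Promise<void> {
  runCount += 1;
  await new Promise((resolve) => setTimeout(resolve, 5)); // simulated network fetch
}

function enqueueRefresh(providerId: string): Promise<void> {
  const existing = refreshPromises.get(providerId);
  if (existing) {
    pendingRefreshes.add(providerId); // remember that fresher data was requested
    return existing;
  }
  const run = async (): Promise<void> => {
    do {
      pendingRefreshes.delete(providerId);
      await doRefresh();
    } while (pendingRefreshes.has(providerId)); // rerun if a request landed mid-flight
  };
  const promise = run().finally(() => {
    pendingRefreshes.delete(providerId);
    if (refreshPromises.get(providerId) === promise) refreshPromises.delete(providerId);
  });
  refreshPromises.set(providerId, promise);
  return promise;
}
```

Two overlapping requests yield two actual refresh runs, so the second caller's data is never silently dropped, yet both callers await the same promise.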
```ts
const emitMainEvent = async (eventName: string, ...args: unknown[]) => {
  const handlers = eventState.handlers.get(eventName) ?? []
  handlers.forEach((handler) => handler(...args))
  await Promise.resolve()
  await Promise.resolve()
}
```
Await the registered handler promises instead of flushing two microtasks.
This helper currently assumes the background work settles after two Promise.resolve() hops. That makes the suite race-prone as soon as the listener adds another await.
♻️ Suggested change

```diff
 const emitMainEvent = async (eventName: string, ...args: unknown[]) => {
   const handlers = eventState.handlers.get(eventName) ?? []
-  handlers.forEach((handler) => handler(...args))
-  await Promise.resolve()
-  await Promise.resolve()
+  await Promise.all(
+    handlers.map((handler) => Promise.resolve().then(() => handler(...args)))
+  )
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@test/main/presenter/llmProviderPresenter/backgroundModelSync.test.ts` around
lines 172 - 177, The emitMainEvent helper currently fires handlers with
handlers.forEach(...) and then flushes two microtasks; replace that with
collecting handler results and awaiting them so async handlers settle
deterministically: in emitMainEvent (and using
eventState.handlers.get(eventName)), map handlers to their returned values (cast
to Promise<unknown> if needed), call Promise.all on that array and await it
before returning, ensuring both sync and async listeners are properly awaited
instead of relying on Promise.resolve() hops.
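The awaited-handler idea suggested above works as a standalone sketch; the registry here is a hypothetical stand-in for the test's `eventState`, and the slow handler simulates background refresh work that outlives two microtask hops.

```typescript
// Emit that awaits every handler (sync or async) instead of hoping a fixed
// number of Promise.resolve() hops is enough for the work to settle.
type Handler = (...args: unknown[]) => unknown;
const registry = new Map<string, Handler[]>();

function on(eventName: string, handler: Handler): void {
  const list = registry.get(eventName) ?? [];
  list.push(handler);
  registry.set(eventName, list);
}

async function emit(eventName: string, ...args: unknown[]): Promise<void> {
  const handlers = registry.get(eventName) ?? [];
  // Promise.resolve().then(...) wraps sync handlers too, and chaining the
  // returned value means async handlers are genuinely awaited.
  await Promise.all(handlers.map((handler) => Promise.resolve().then(() => handler(...args))));
}

const settled: string[] = [];
on('provider-db:updated', async () => {
  await new Promise((resolve) => setTimeout(resolve, 5)); // longer than two microtask hops
  settled.push('refresh-done');
});
```

After `await emit('provider-db:updated')` returns, the handler's work has fully settled, so assertions in the suite cannot race it.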
Summary by CodeRabbit
Release Notes
New Features
Documentation
Tests