Conversation
Code Review
The pull request introduces significant updates to the desktop application, including a transition to the ACP bridge for AI chat, local-first task management, and enhanced resource monitoring. Key improvements include refined accessibility permission handling, background AI investigation for tasks, and a watchdog mechanism for ffmpeg processes to prevent system degradation. However, there are critical issues regarding main thread blocking during notification registration and potential data desync in the new task synchronization logic that should be addressed to ensure a smooth and reliable user experience.
```swift
register.waitUntilExit()

// Restart usernoted (notification center daemon) to pick up fresh registration
// Runs as current user (no sudo needed), auto-restarts within ~1 second
let killUsernoted = Process()
killUsernoted.executableURL = URL(fileURLWithPath: "/usr/bin/killall")
killUsernoted.arguments = ["usernoted"]
killUsernoted.standardOutput = FileHandle.nullDevice
killUsernoted.standardError = FileHandle.nullDevice
try? killUsernoted.run()
killUsernoted.waitUntilExit()
log("Restarted usernoted to force notification re-discovery")

// Wait for usernoted to restart before retrying
Thread.sleep(forTimeInterval: 1.5)
```
The notification registration logic performs blocking operations on the main thread, including `register.waitUntilExit()` and `Thread.sleep(forTimeInterval: 1.5)`. Since this method is invoked via UI actions (e.g., during onboarding), it will cause the application to become unresponsive for several seconds. This logic should be moved to a background queue, or the delay should be implemented using non-blocking asynchronous methods like `Task.sleep`.
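One way to keep the UI responsive is to bridge the `Process` completion into async/await and suspend with `Task.sleep` instead of blocking the thread. A minimal sketch under the assumption that the surrounding registration code can be made async; `run(_:_:)` and `retryNotificationRegistration()` are illustrative names, not existing helpers in the PR:

```swift
import Foundation

// Wrap Process completion in a continuation so no thread is blocked while
// the child process runs; terminationHandler fires on process exit.
func run(_ tool: String, _ args: [String]) async throws -> Int32 {
    try await withCheckedThrowingContinuation { (cont: CheckedContinuation<Int32, Error>) in
        let p = Process()
        p.executableURL = URL(fileURLWithPath: tool)
        p.arguments = args
        p.standardOutput = FileHandle.nullDevice
        p.standardError = FileHandle.nullDevice
        p.terminationHandler = { cont.resume(returning: $0.terminationStatus) }
        do { try p.run() } catch { cont.resume(throwing: error) }
    }
}

// Usage from a UI action (hypothetical call sites) — nothing here blocks
// the main thread; Task.sleep suspends the task, not the thread:
// Task {
//     _ = try? await run("/usr/bin/killall", ["usernoted"])
//     try? await Task.sleep(nanoseconds: 1_500_000_000)
//     retryNotificationRegistration()
// }
```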
```swift
captureTimer = nil
DispatchQueue.main.asyncAfter(deadline: .now() + 1.5) { [weak self] in
    guard let self = self, self.isMonitoring else { return }
    self.captureTimer?.invalidate()
    self.captureTimer = Timer.scheduledTimer(withTimeInterval: RewindSettings.shared.captureInterval, repeats: true) { [weak self] _ in
```
The `captureTimer?.invalidate()` call at line 906 is ineffective because `captureTimer` was set to nil at line 903. This results in the previous timer continuing to run in the background, which can lead to multiple active timers and unnecessary resource consumption. The timer should be invalidated before the reference is cleared.
Suggested change:

```diff
-captureTimer = nil
-DispatchQueue.main.asyncAfter(deadline: .now() + 1.5) { [weak self] in
-    guard let self = self, self.isMonitoring else { return }
-    self.captureTimer?.invalidate()
-    self.captureTimer = Timer.scheduledTimer(withTimeInterval: RewindSettings.shared.captureInterval, repeats: true) { [weak self] _ in
+captureTimer?.invalidate()
+captureTimer = nil
+DispatchQueue.main.asyncAfter(deadline: .now() + 1.5) { [weak self] in
+    guard let self = self, self.isMonitoring else { return }
+    self.captureTimer = Timer.scheduledTimer(withTimeInterval: RewindSettings.shared.captureInterval, repeats: true) { [weak self] _ in
```
```swift
Task {
    do {
        var metadata: [String: Any]? = nil
        if let tags = tags, !tags.isEmpty {
            metadata = ["tags": tags]
        }

        let created = try await APIClient.shared.createActionItem(
            description: description,
            dueAt: dueAt,
            source: "manual",
            priority: priority,
            category: tags?.first,
            metadata: metadata,
            recurrenceRule: recurrenceRule
        )

        try await ActionItemStorage.shared.markSynced(id: localId, backendId: created.id)

        // Replace local_ entry with real backend-synced task
        if let idx = self.incompleteTasks.firstIndex(where: { $0.id == localTask.id }) {
            self.incompleteTasks[idx] = created
        }
        log("TasksStore: Task synced to backend (local \(localId) → \(created.id))")
    } catch {
        logError("TasksStore: Failed to sync new task to backend (will retry on next launch)", error: error)
    }
}
```
The local-first task creation logic lacks a robust retry mechanism for background synchronization. If the initial sync fails, the task remains only in the local database and may never reach the backend. Additionally, if the app is closed after a successful API call but before `markSynced` updates the local record, the task could be duplicated during the next full sync from the server. Consider implementing a persistent queue or a background worker that identifies and retries unsynced local records.
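Both gaps can be closed by a launch-time worker that retries every local row without a backend id, made idempotent with a client-generated key that the server dedupes on. A minimal self-contained sketch of the pattern — all type and method names here are illustrative, and the backend is simulated in memory:

```swift
// Each local record carries a client-generated key; a backend that treats a
// repeated key as "already created" makes the retry idempotent, so a crash
// between create and markSynced cannot duplicate the task.
struct LocalTask {
    let clientKey: String
    var backendId: String?   // nil until the sync succeeds
}

final class SyncQueue {
    var store: [LocalTask]
    var backend: [String: String] = [:]   // clientKey -> backendId (simulated server)
    init(_ tasks: [LocalTask]) { store = tasks }

    // Simulated idempotent create: a repeated key returns the existing id.
    func createOnBackend(clientKey: String) -> String {
        if let existing = backend[clientKey] { return existing }
        let id = "srv-\(backend.count + 1)"
        backend[clientKey] = id
        return id
    }

    // Launch-time retry pass: resend every row that never got a backend id,
    // then record the id locally (the markSynced step).
    func retryUnsynced() {
        for i in store.indices where store[i].backendId == nil {
            let id = createOnBackend(clientKey: store[i].clientKey)
            store[i].backendId = id
        }
    }
}
```

Because the pass is idempotent, it is safe to run it unconditionally on every launch; rows that already synced are skipped and repeated keys never create duplicates.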
Hey @m13v 👋 Thank you so much for taking the time to contribute to Omi! We truly appreciate you putting in the effort to submit this pull request. After careful review, we've decided not to merge this particular PR. Please don't take this personally — we genuinely try to merge as many contributions as possible, but sometimes we have to make tough calls based on:
Your contribution is still valuable to us, and we'd love to see you contribute again in the future! If you'd like feedback on how to improve this PR or want to discuss alternative approaches, please don't hesitate to reach out. Thank you for being part of the Omi community! 💜
Automated sync from omi-desktop at 474e1e1f.
This PR was created by the sync pipeline.