Conversation
Code Review
This pull request is an automated sync that introduces a wide range of significant improvements. Key changes include unifying the AI chat bridge to a single, more capable implementation (ACP), which supports both Omi accounts and user-provided Claude accounts via a new browser-based OAuth flow. The Tasks feature has been substantially enhanced with drag-and-drop reordering across categories, inline task creation, background AI investigations, and a redesigned UI. The onboarding flow has also been streamlined for a better user experience. Critically, this PR adds several crucial reliability and performance fixes, such as an auto-restart mechanism for extreme memory usage, watchdogs for hung ffmpeg processes, and fixes for memory leaks in the task chat feature. The release and verification scripts have also been made safer and more robust.
```swift
// Extreme threshold - auto-restart to prevent the system from becoming unresponsive.
// Without this, memory can climb to 7GB+, causing SQLite I/O failures and making
// the app impossible to reopen without a full computer restart.
if snapshot.memoryFootprintMB >= memoryAutoRestartThreshold && !autoRestartTriggered && !isDevBuild {
    autoRestartTriggered = true
    logError("ResourceMonitor: EXTREME memory \(snapshot.memoryFootprintMB)MB — auto-restarting to prevent system degradation")

    SentrySDK.capture(message: "App Auto-Restarting Due to Extreme Memory") { scope in
        scope.setLevel(.fatal)
        scope.setTag(value: "auto_restart", key: "resource_alert")
        scope.setContext(value: snapshot.asDictionary(), key: "resources")
    }

    // Give Sentry 3 seconds to flush, then relaunch and terminate
    DispatchQueue.main.asyncAfter(deadline: .now() + 3.0) {
        let task = Process()
        task.launchPath = "/usr/bin/open"
        task.arguments = ["-n", Bundle.main.bundleURL.path]
        try? task.run()
        NSApp.terminate(nil)
    }
    return
}
```
This auto-restart mechanism for extreme memory usage is a critical reliability improvement. It prevents the app from consuming excessive memory that could lead to system-wide unresponsiveness, which is especially important for an always-on background application. By proactively restarting, it ensures a better user experience and avoids a state where the app becomes impossible to use without a full system reboot.
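The threshold comparison itself is straightforward to reason about outside the app. A minimal shell sketch of the same idea, assuming a hypothetical `THRESHOLD_MB` value and using the current shell's own PID as a stand-in for the monitored process (`snapshot.memoryFootprintMB` in the Swift code):

```shell
# Hedged sketch: read a process's resident memory and compare it to a
# restart threshold. THRESHOLD_MB and the target PID ($$) are illustrative.
THRESHOLD_MB=6144
RSS_KB=$(ps -o rss= -p $$)
RSS_MB=$((RSS_KB / 1024))
if [ "$RSS_MB" -ge "$THRESHOLD_MB" ]; then
  echo "status: restart"
else
  echo "status: ok (${RSS_MB}MB)"
fi
```

`ps -o rss=` reports resident set size in KB on both macOS and Linux, which keeps the check portable.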
```swift
// Wait for ffmpeg to finish, but don't block forever.
// Under memory pressure, ffmpeg can hang in a disk I/O syscall and never respond
// to stdin close. A 10-second watchdog force-kills it so the cooperative thread
// isn't blocked indefinitely, which would deadlock the entire actor.
if let process = ffmpegProcess {
    let pid = process.processIdentifier
    let frameCount = frameTimestamps.count

    let watchdog = Task.detached(priority: .background) {
        try? await Task.sleep(nanoseconds: 10_000_000_000)
        if process.isRunning {
            logError("VideoChunkEncoder: ffmpeg hung for 10s — force killing PID \(pid)")
            kill(pid, SIGKILL)
        }
    }

    process.waitUntilExit()
    watchdog.cancel()

    if process.terminationStatus != 0 {
        logError("VideoChunkEncoder: FFmpeg exited with status \(process.terminationStatus)")
    } else {
        log("VideoChunkEncoder: Finalized chunk with \(frameCount) frames")
    }

    ffmpegProcess = nil
```
Adding a watchdog with a SIGKILL fallback for ffmpeg is a critical reliability fix. Under memory pressure, I/O-bound processes can get stuck in uninterruptible sleep, ignoring SIGTERM. This watchdog prevents the actor from deadlocking and stops zombie ffmpeg processes from accumulating, which could otherwise lead to severe memory leaks and system instability.
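The same kill-after-a-budget behavior can be approximated in shell with GNU coreutils `timeout` — a sketch, assuming coreutils is installed (on macOS it is typically `gtimeout` from Homebrew), with `sleep` standing in for a fast and a hung ffmpeg:

```shell
# A fast command finishes well inside its 10s budget; its own exit status
# passes through.
timeout --signal=KILL 10 sleep 1
FAST_STATUS=$?
echo "fast exit: $FAST_STATUS"

# A "hung" command is force-killed when the 1s budget expires; GNU timeout
# reports 137 (128 + SIGKILL), mirroring the watchdog's kill(pid, SIGKILL).
timeout --signal=KILL 1 sleep 30
HUNG_STATUS=$?
echo "hung exit: $HUNG_STATUS"
```

The in-process `Task.detached` watchdog in the Swift code is the right tool here, though, since it can consult `process.isRunning` and be cancelled the moment `waitUntilExit()` returns.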
```swift
/// Release memory held by a task's chat state.
/// Call when a task is permanently deleted to prevent unbounded memory growth.
func purgeState(for taskId: String) {
    taskCancellables[taskId]?.removeAll()
    taskCancellables.removeValue(forKey: taskId)
    taskStates.removeValue(forKey: taskId)
    if activeTaskId == taskId {
        closeChat()
    }
    streamingTaskIds.remove(taskId)
    streamingStatuses.removeValue(forKey: taskId)
    unreadTaskIds.remove(taskId)
    log("TaskChatCoordinator: Purged state for task \(taskId)")
}
```
```shell
# Sign ALL native binaries in acp-bridge node_modules
# Apple notarization requires every binary/dylib/jnilib to be signed
ACP_BRIDGE_BUNDLE="$APP_BUNDLE/Contents/Resources/acp-bridge"
if [ -d "$ACP_BRIDGE_BUNDLE/node_modules" ]; then
  echo "  Signing native binaries in acp-bridge/node_modules..."

  # Remove vendor directories with JARs containing native libs we don't need
  # (JetBrains plugin has .jnilib inside JARs which Apple scans and rejects)
  if [ -d "$ACP_BRIDGE_BUNDLE/node_modules/@anthropic-ai/claude-code/vendor/claude-code-jetbrains-plugin" ]; then
    echo "  Removing unnecessary JetBrains plugin vendor directory..."
    rm -rf "$ACP_BRIDGE_BUNDLE/node_modules/@anthropic-ai/claude-code/vendor/claude-code-jetbrains-plugin"
  fi

  # Sign all Mach-O binaries: .node, .dylib, executables (rg, etc.)
  # Use a single find + file check to catch everything Apple cares about
  echo "  Scanning for Mach-O binaries to sign..."
  SIGNED_COUNT=0
  find "$ACP_BRIDGE_BUNDLE/node_modules" -type f \
    \( -name "*.node" -o -name "*.dylib" -o -name "*.jnilib" -o -name "*.so" -o -name "rg" \) \
    2>/dev/null | while read -r native_bin; do
    if file "$native_bin" 2>/dev/null | grep -q "Mach-O"; then
      echo "    Signing: $(echo "$native_bin" | sed "s|$ACP_BRIDGE_BUNDLE/||")"
      codesign --force --options runtime --timestamp \
        --sign "$SIGN_IDENTITY" \
        "$native_bin"
    fi
  done

  # Catch-all: find any remaining unsigned Mach-O binaries
  find "$ACP_BRIDGE_BUNDLE/node_modules" -type f \
    ! -name "*.js" ! -name "*.json" ! -name "*.ts" ! -name "*.map" \
    ! -name "*.md" ! -name "*.txt" ! -name "*.yml" ! -name "*.yaml" \
    ! -name "*.css" ! -name "*.html" ! -name "*.jar" ! -name "*.d.ts" \
    ! -name "*.node" ! -name "*.dylib" ! -name "*.jnilib" ! -name "*.so" ! -name "rg" \
    2>/dev/null | while read -r candidate; do
    if file "$candidate" 2>/dev/null | grep -q "Mach-O"; then
      echo "    Signing (catch-all): $(echo "$candidate" | sed "s|$ACP_BRIDGE_BUNDLE/||")"
      codesign --force --options runtime --timestamp \
        --sign "$SIGN_IDENTITY" \
        "$candidate"
    fi
  done
fi
```
This comprehensive approach to signing all native binaries within node_modules is a critical fix for the release process. Apple's notarization service is very strict and requires every single executable Mach-O file to be signed. This robust script ensures we don't miss any, preventing notarization failures that would block releases. The removal of unneeded vendor directories is also a smart optimization.
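One shell subtlety worth flagging in scripts like this: any counter incremented inside a `find | while read` pipeline is lost, because the loop body runs in a subshell. A minimal bash demonstration (variable names are illustrative, not from the script):

```shell
# Piping into `while read` runs the loop body in a subshell, so counter
# updates never reach the parent shell.
PIPE_COUNT=0
printf 'a\nb\nc\n' | while read -r _; do
  PIPE_COUNT=$((PIPE_COUNT + 1))
done
echo "after pipe: $PIPE_COUNT"

# Feeding the loop from process substitution (a bash feature) keeps the loop
# in the current shell, so the count survives.
SUB_COUNT=0
while read -r _; do
  SUB_COUNT=$((SUB_COUNT + 1))
done < <(printf 'a\nb\nc\n')
echo "after process substitution: $SUB_COUNT"
```

So if the script ever wants to report how many binaries it signed, the `find` output should be fed via process substitution or a temporary file rather than a pipe.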
```shell
# Kill only the test instance (by matching its path, not the process name)
# Avoid pkill -x "Omi Computer" which kills ALL instances including Dev/Production
TEST_PIDS=$(pgrep -f "$VERIFY_DIR" 2>/dev/null || true)
if [ -n "$TEST_PIDS" ]; then
  echo "$TEST_PIDS" | xargs kill 2>/dev/null || true
fi
```
This is a critical fix. Using pgrep -f to find and kill only the specific test instance of the app prevents the verification script from accidentally terminating a user's running Development or Production instances. This avoids potential data loss and workflow interruptions for anyone running the script while also using the app.
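The path-scoped matching can be demonstrated with a stand-in process; `VERIFY_DIR` here is a hypothetical marker path embedded in the child's command line, playing the role of the test app's install directory:

```shell
# Launch a stand-in "test instance" whose command line embeds a unique path
# marker, the way the real test binary's argv contains its verify directory.
VERIFY_DIR="/tmp/omi-verify-demo-$$"
bash -c ": $VERIFY_DIR; sleep 300" &
sleep 1

# pgrep -f matches against the FULL command line (unlike pgrep/pkill -x,
# which match the process name), so only the marked process is found.
TEST_PIDS=$(pgrep -f "$VERIFY_DIR" 2>/dev/null || true)
if [ -n "$TEST_PIDS" ]; then
  echo "$TEST_PIDS" | xargs kill 2>/dev/null || true
  KILLED=yes
else
  KILLED=no
fi
echo "killed marked instance: $KILLED"
```

A name-based `pkill -x "Omi Computer"` would have matched every copy of the app, which is exactly the hazard the review comment calls out.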
Hey @m13v 👋 Thank you so much for taking the time to contribute to Omi! We truly appreciate you putting in the effort to submit this pull request. After careful review, we've decided not to merge this particular PR. Please don't take this personally — we genuinely try to merge as many contributions as possible, but sometimes we have to make tough calls based on:
Your contribution is still valuable to us, and we'd love to see you contribute again in the future! If you'd like feedback on how to improve this PR or want to discuss alternative approaches, please don't hesitate to reach out. Thank you for being part of the Omi community! 💜
Automated sync from omi-desktop at 474e1e1f.
This PR was created by the sync pipeline.