
Sync desktop/ from omi-desktop (474e1e1f)#4947

Closed
m13v wants to merge 1 commit into main from sync/desktop-20260222-170537

Conversation


@m13v commented Feb 22, 2026

Automated sync from omi-desktop at 474e1e1f.

This PR was created by the sync pipeline.


@gemini-code-assist (Bot) left a comment


Code Review

This pull request is an automated sync that introduces a wide range of significant improvements:

  • AI chat bridge: unified to a single, more capable implementation (ACP) that supports both Omi accounts and user-provided Claude accounts via a new browser-based OAuth flow.
  • Tasks: drag-and-drop reordering across categories, inline task creation, background AI investigations, and a redesigned UI.
  • Onboarding: streamlined for a better user experience.
  • Reliability and performance: an auto-restart mechanism for extreme memory usage, watchdogs for hung ffmpeg processes, and fixes for memory leaks in the task chat feature.
  • Release tooling: the release and verification scripts have been made safer and more robust.

Comment on lines +201 to +223
// Extreme threshold - auto-restart to prevent the system from becoming unresponsive.
// Without this, memory can climb to 7GB+, causing SQLite I/O failures and making
// the app impossible to reopen without a full computer restart.
if snapshot.memoryFootprintMB >= memoryAutoRestartThreshold && !autoRestartTriggered && !isDevBuild {
autoRestartTriggered = true
logError("ResourceMonitor: EXTREME memory \(snapshot.memoryFootprintMB)MB — auto-restarting to prevent system degradation")

SentrySDK.capture(message: "App Auto-Restarting Due to Extreme Memory") { scope in
scope.setLevel(.fatal)
scope.setTag(value: "auto_restart", key: "resource_alert")
scope.setContext(value: snapshot.asDictionary(), key: "resources")
}

// Give Sentry 3 seconds to flush, then relaunch and terminate
DispatchQueue.main.asyncAfter(deadline: .now() + 3.0) {
let task = Process()
task.executableURL = URL(fileURLWithPath: "/usr/bin/open")
task.arguments = ["-n", Bundle.main.bundleURL.path]
try? task.run()
NSApp.terminate(nil)
}
return
}

critical

This auto-restart mechanism for extreme memory usage is a critical reliability improvement. It prevents the app from consuming excessive memory that could lead to system-wide unresponsiveness, which is especially important for an always-on background application. By proactively restarting, it ensures a better user experience and avoids a state where the app becomes impossible to use without a full system reboot.
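The same threshold check can be approximated from the shell for ad-hoc monitoring. A minimal sketch, not from the PR; the 4096 MB threshold and the use of `$$` as the target PID are illustrative (the app's real `memoryAutoRestartThreshold` lives in the Swift code above):

```shell
# Poll a process's resident set size (RSS) and flag extreme usage.
# $$ (this shell) stands in for the app's PID; the threshold is illustrative.
THRESHOLD_MB=4096
RSS_KB=$(ps -o rss= -p "$$" | tr -d ' ')
RSS_MB=$((RSS_KB / 1024))
if [ "$RSS_MB" -ge "$THRESHOLD_MB" ]; then
  echo "extreme memory: ${RSS_MB}MB"
else
  echo "memory ok: ${RSS_MB}MB"
fi
```

`ps -o rss= -p PID` works on both macOS and Linux, which makes this handy for confirming what the in-app monitor reports.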

Comment on lines +306 to +331
// Wait for ffmpeg to finish, but don't block forever.
// Under memory pressure, ffmpeg can hang in a disk I/O syscall and never respond
// to stdin close. A 10-second watchdog force-kills it so the cooperative thread
// isn't blocked indefinitely, which would deadlock the entire actor.
if let process = ffmpegProcess {
let pid = process.processIdentifier
let frameCount = frameTimestamps.count

let watchdog = Task.detached(priority: .background) {
try? await Task.sleep(nanoseconds: 10_000_000_000)
if process.isRunning {
logError("VideoChunkEncoder: ffmpeg hung for 10s — force killing PID \(pid)")
kill(pid, SIGKILL)
}
}

process.waitUntilExit()
watchdog.cancel()

if process.terminationStatus != 0 {
logError("VideoChunkEncoder: FFmpeg exited with status \(process.terminationStatus)")
} else {
log("VideoChunkEncoder: Finalized chunk with \(frameCount) frames")
}

ffmpegProcess = nil

critical

Adding a watchdog with a SIGKILL fallback for ffmpeg is a critical reliability fix. Under memory pressure, I/O-bound processes can get stuck in uninterruptible sleep, ignoring SIGTERM. This watchdog prevents the actor from deadlocking and stops zombie ffmpeg processes from accumulating, which could otherwise lead to severe memory leaks and system instability.
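The hung state is observable from the shell: `ps` reports uninterruptible sleep as state `D` on Linux (`U` on macOS), and a process in that state defers signal handling until its syscall returns. A small sketch, not from the PR, that inspects a child's scheduler state:

```shell
# Inspect a child process's scheduler state with ps.
# 'S' = interruptible sleep; 'D' (Linux) / 'U' (macOS) = uninterruptible.
# SIGTERM can be caught or ignored by the target, but SIGKILL cannot,
# which is why the watchdog escalates to SIGKILL.
sleep 5 &
PID=$!
STATE=$(ps -o stat= -p "$PID" | tr -d ' ' | cut -c1)
echo "pid $PID state: $STATE"
kill "$PID" 2>/dev/null
```

A healthy `sleep` shows `S`; an ffmpeg stuck on disk I/O under memory pressure is the process you would see pinned in `D`/`U`.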

Comment on lines +115 to +128
/// Release memory held by a task's chat state.
/// Call when a task is permanently deleted to prevent unbounded memory growth.
func purgeState(for taskId: String) {
taskCancellables[taskId]?.removeAll()
taskCancellables.removeValue(forKey: taskId)
taskStates.removeValue(forKey: taskId)
if activeTaskId == taskId {
closeChat()
}
streamingTaskIds.remove(taskId)
streamingStatuses.removeValue(forKey: taskId)
unreadTaskIds.remove(taskId)
log("TaskChatCoordinator: Purged state for task \(taskId)")
}

critical

The introduction of purgeState(for:) is a critical fix for memory management. Without it, chat state for deleted tasks would remain in memory indefinitely, leading to a memory leak that grows over time as tasks are created and deleted. This ensures the app's memory footprint remains stable.

Comment thread: desktop/release.sh
Comment on lines +510 to +552
# Sign ALL native binaries in acp-bridge node_modules
# Apple notarization requires every binary/dylib/jnilib to be signed
ACP_BRIDGE_BUNDLE="$APP_BUNDLE/Contents/Resources/acp-bridge"
if [ -d "$ACP_BRIDGE_BUNDLE/node_modules" ]; then
echo " Signing native binaries in acp-bridge/node_modules..."

# Remove vendor directories with JARs containing native libs we don't need
# (JetBrains plugin has .jnilib inside JARs which Apple scans and rejects)
if [ -d "$ACP_BRIDGE_BUNDLE/node_modules/@anthropic-ai/claude-code/vendor/claude-code-jetbrains-plugin" ]; then
echo " Removing unnecessary JetBrains plugin vendor directory..."
rm -rf "$ACP_BRIDGE_BUNDLE/node_modules/@anthropic-ai/claude-code/vendor/claude-code-jetbrains-plugin"
fi

# Sign all Mach-O binaries: .node, .dylib, executables (rg, etc.)
# Use a single find + file check to catch everything Apple cares about
echo " Scanning for Mach-O binaries to sign..."
find "$ACP_BRIDGE_BUNDLE/node_modules" -type f \
\( -name "*.node" -o -name "*.dylib" -o -name "*.jnilib" -o -name "*.so" -o -name "rg" \) \
2>/dev/null | while read -r native_bin; do
if file "$native_bin" 2>/dev/null | grep -q "Mach-O"; then
echo " Signing: $(echo "$native_bin" | sed "s|$ACP_BRIDGE_BUNDLE/||")"
codesign --force --options runtime --timestamp \
--sign "$SIGN_IDENTITY" \
"$native_bin"
fi
done

# Catch-all: find any remaining unsigned Mach-O binaries
find "$ACP_BRIDGE_BUNDLE/node_modules" -type f \
! -name "*.js" ! -name "*.json" ! -name "*.ts" ! -name "*.map" \
! -name "*.md" ! -name "*.txt" ! -name "*.yml" ! -name "*.yaml" \
! -name "*.css" ! -name "*.html" ! -name "*.jar" ! -name "*.d.ts" \
! -name "*.node" ! -name "*.dylib" ! -name "*.jnilib" ! -name "*.so" ! -name "rg" \
2>/dev/null | while read -r candidate; do
if file "$candidate" 2>/dev/null | grep -q "Mach-O"; then
echo " Signing (catch-all): $(echo "$candidate" | sed "s|$ACP_BRIDGE_BUNDLE/||")"
codesign --force --options runtime --timestamp \
--sign "$SIGN_IDENTITY" \
"$candidate"
fi
done
fi

critical

This comprehensive approach to signing all native binaries within node_modules is a critical fix for the release process. Apple's notarization service is very strict and requires every single executable Mach-O file to be signed. This robust script ensures we don't miss any, preventing notarization failures that would block releases. The removal of unneeded vendor directories is also a smart optimization.
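One shell subtlety worth noting around loops like these: `find | while read` runs the loop body in a pipeline subshell, so any counter incremented inside it is silently lost when the loop ends. A sketch of the pitfall's portable fix (redirect from a file instead of piping), not from the PR; the file names are stand-ins:

```shell
# A counter incremented inside `cmd | while read` lives in a pipeline
# subshell and is lost; redirecting from a file keeps the loop in the
# current shell so the count survives.
TMP=$(mktemp)
printf 'a.node\nb.dylib\nc.so\n' > "$TMP"   # stand-in for `find` output
COUNT=0
while read -r f; do
  COUNT=$((COUNT + 1))
done < "$TMP"
rm -f "$TMP"
echo "signed $COUNT binaries"
```

In bash, process substitution (`while read -r f; do …; done < <(find …)`) achieves the same without a temp file.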

Comment thread: desktop/verify-release.sh
Comment on lines +174 to +179
# Kill only the test instance (by matching its path, not the process name)
# Avoid pkill -x "Omi Computer" which kills ALL instances including Dev/Production
TEST_PIDS=$(pgrep -f "$VERIFY_DIR" 2>/dev/null || true)
if [ -n "$TEST_PIDS" ]; then
echo "$TEST_PIDS" | xargs kill 2>/dev/null || true
fi

critical

This is a critical fix. Using pgrep -f to find and kill only the specific test instance of the app prevents the verification script from accidentally terminating a user's running Development or Production instances. This avoids potential data loss and workflow interruptions for anyone running the script while also using the app.
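The difference is easy to demonstrate: `pgrep -x` matches only the process name, while `pgrep -f` matches against the full command line, so a unique argument string (here, the verify directory path) can select one instance among many. A small sketch, not from the PR:

```shell
# -f matches the full command line, so an argument string can pick out
# one instance; -x "sleep" would match every process named "sleep".
sleep 371 &
PID=$!
MATCHES=$(pgrep -f "sleep 371" || true)
echo "started $PID, matched: $MATCHES"
kill "$PID" 2>/dev/null
```

This is exactly why the script matches on `$VERIFY_DIR` rather than the app's name: only the test instance's command line contains that path.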

@m13v m13v closed this Feb 23, 2026
@m13v m13v deleted the sync/desktop-20260222-170537 branch February 23, 2026 03:32
@github-actions

Hey @m13v 👋

Thank you so much for taking the time to contribute to Omi! We truly appreciate you putting in the effort to submit this pull request.

After careful review, we've decided not to merge this particular PR. Please don't take this personally — we genuinely try to merge as many contributions as possible, but sometimes we have to make tough calls based on:

  • Project standards — Ensuring consistency across the codebase
  • User needs — Making sure changes align with what our users need
  • Code best practices — Maintaining code quality and maintainability
  • Project direction — Keeping aligned with our roadmap and vision

Your contribution is still valuable to us, and we'd love to see you contribute again in the future! If you'd like feedback on how to improve this PR or want to discuss alternative approaches, please don't hesitate to reach out.

Thank you for being part of the Omi community! 💜

