@i is an AI Agent application with tool calling, subagent execution, long-term memory, and workspace operations.
- Unified multi-provider access: built-in adapters for OpenAI-compatible / Claude-compatible / Gemini-compatible providers.
- Agent toolchain: built-in file read/write, directory traversal, command execution, web search/fetch, subagent spawn/wait, plan management, skill loading, memory read/write, and scheduled tasks.
- MCP support: connect to local or remote MCP servers, and search/import configs from the MCP Registry.
- Skills system: scan local folders and import `SKILL.md`, enable skills per chat.
- Long-term memory: main process uses `better-sqlite3` + `sqlite-vec` for semantic memory storage and vector retrieval (see the sketch after this list).
- Tasks and scheduling: plan review, step status management, and scheduled prompt delivery to specific chats.
- Subagents: spawn background researcher/coder/reviewer-style subagents with isolated execution context, live status updates, and parent-run confirmation bridging.
- Artifacts / Workspace: each session is bound to an isolated workspace for file browsing and previewing dev services.
- Telegram bot support: receive Telegram messages and attachments through a gateway, map them into the shared chat runtime, and reply back through the same unified agent pipeline.
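As a rough illustration of the long-term-memory path, the sketch below stores and retrieves embeddings with `better-sqlite3` and `sqlite-vec`. The table name, column name, and 384-dim embedding size are illustrative assumptions, not the app's actual schema.

```ts
// Minimal sketch of semantic memory storage and retrieval with better-sqlite3 + sqlite-vec.
// Table name, column name, and embedding size are assumptions for illustration only.
import Database from "better-sqlite3";
import * as sqliteVec from "sqlite-vec";

const db = new Database("memory.db");
sqliteVec.load(db); // load the sqlite-vec extension into this connection

// Vector table for memory embeddings; real metadata would live in a separate relational table.
db.exec("CREATE VIRTUAL TABLE IF NOT EXISTS memories USING vec0(embedding float[384])");

export function storeMemory(id: number, embedding: Float32Array): void {
  db.prepare("INSERT INTO memories (rowid, embedding) VALUES (?, ?)").run(
    id,
    Buffer.from(embedding.buffer), // sqlite-vec accepts raw float32 BLOBs
  );
}

export function searchMemories(query: Float32Array, k = 5): { id: number; distance: number }[] {
  // KNN query: nearest embeddings first, limited to k results.
  return db
    .prepare(
      "SELECT rowid AS id, distance FROM memories WHERE embedding MATCH ? AND k = ? ORDER BY distance",
    )
    .all(Buffer.from(query.buffer), k) as { id: number; distance: number }[];
}
```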
| Path | Description |
| --- | --- |
| `src/main` | Electron main process, IPC, database, tool execution, scheduler |
| `src/preload` | preload bridge |
| `src/renderer/src` | React UI, Zustand store, chat and settings screens |
| `src/shared` | shared constants, prompts, tool definitions, schema |
| `src/data` | built-in provider definitions |
| `resources` | bundled resources |
| `docs` | design and data-flow docs |
```bash
pnpm install
pnpm dev
```

The app follows a clear split:

- `renderer` handles UI, state, and event-driven streaming rendering.
- `preload` exposes a controlled Electron API to the renderer.
- `main` owns the database, model requests, tool execution, subagent runtime, MCP connections, memory retrieval, scheduling, and host adapters such as Telegram.
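To illustrate the split, a minimal preload bridge could look like the sketch below. The `atI` namespace and the channel names are illustrative assumptions, not the app's actual API surface.

```ts
// Preload sketch: expose a narrow, typed API to the renderer.
// The "atI" namespace and channel names are assumptions for illustration.
import { contextBridge, ipcRenderer } from "electron";

contextBridge.exposeInMainWorld("atI", {
  // Renderer -> main: submit a chat message over IPC.
  submitChat: (chatId: string, text: string) =>
    ipcRenderer.invoke("chat:submit", { chatId, text }),

  // Main -> renderer: subscribe to streamed segments, tool calls, and tool results.
  onChatEvent: (listener: (payload: unknown) => void) => {
    const handler = (_event: Electron.IpcRendererEvent, payload: unknown) => listener(payload);
    ipcRenderer.on("chat:event", handler);
    return () => ipcRenderer.removeListener("chat:event", handler);
  },
});
```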
On submission, the renderer triggers MainChatSubmitService via IPC. The main process builds system prompts, skill prompts, message context, and tool definitions, then sends a unified model request. Streaming output is parsed into text segments, tool calls, and tool results, and pushed back to the UI. When needed, the main agent can also spawn background subagents that run with their own runtime context and report status/results back into the same chat flow.
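A hedged sketch of that main-process path follows; the channel names and the stand-in for MainChatSubmitService are assumptions, not taken from the codebase.

```ts
// Main-process sketch: handle chat submission and stream parsed events back to the renderer.
// The "chat:submit"/"chat:event" channels and the service stand-in are illustrative assumptions.
import { ipcMain } from "electron";

type ChatEvent =
  | { kind: "text"; delta: string }
  | { kind: "tool_call"; name: string; args: unknown }
  | { kind: "tool_result"; name: string; result: unknown }
  | { kind: "done" };

// Stand-in for the real MainChatSubmitService: build prompts, context, and tool definitions,
// send the unified model request, and yield parsed streaming events.
async function* runChatSubmission(chatId: string, text: string): AsyncGenerator<ChatEvent> {
  yield { kind: "text", delta: `placeholder reply for ${chatId}: ${text}` };
  yield { kind: "done" };
}

export function registerChatIpc(): void {
  ipcMain.handle("chat:submit", async (event, payload: { chatId: string; text: string }) => {
    for await (const chunk of runChatSubmission(payload.chatId, payload.text)) {
      // Push each text segment, tool call, and tool result to the UI as it arrives.
      event.sender.send("chat:event", chunk);
    }
    return { ok: true };
  });
}
```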
Telegram follows the same main-process path through a host adapter and gateway layer. Incoming Telegram text, commands, and supported attachments are normalized into the shared chat/message model, executed by the same runtime, and formatted back into Telegram replies.
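For example, a gateway normalizer in this spirit might map an incoming Telegram update onto the shared chat/message model roughly as below; the `SharedChatMessage` shape and field names are assumptions, only the Telegram Bot API fields are standard.

```ts
// Sketch of a Telegram gateway normalizer: map an incoming update onto a shared
// chat/message model so the same agent runtime can execute it.
// The SharedChatMessage shape is an illustrative assumption.
interface TelegramUpdateLike {
  message?: {
    chat: { id: number };
    text?: string;
    caption?: string;
    document?: { file_id: string; file_name?: string };
  };
}

interface SharedChatMessage {
  hostChatId: string; // host-specific chat identifier, e.g. "telegram:<id>"
  text: string;       // user text or attachment caption
  attachments: { kind: "document"; fileId: string; name?: string }[];
}

export function normalizeTelegramUpdate(update: TelegramUpdateLike): SharedChatMessage | null {
  const msg = update.message;
  if (!msg) return null; // ignore updates without a message payload

  const attachments: SharedChatMessage["attachments"] = [];
  if (msg.document) {
    attachments.push({ kind: "document", fileId: msg.document.file_id, name: msg.document.file_name });
  }

  return {
    hostChatId: `telegram:${msg.chat.id}`,
    text: msg.text ?? msg.caption ?? "",
    attachments,
  };
}
```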
Screenshots: main chat window, chat sidebar, settings section, task plan bar.
macOS: if Gatekeeper blocks the unsigned app, remove the quarantine attribute:

```bash
sudo xattr -r -d com.apple.quarantine /Applications/at-i.app
```

Linux: refresh the icon and desktop caches after installing:

```bash
sudo gtk-update-icon-cache /usr/share/icons/hicolor
sudo update-icon-caches /usr/share/icons/hicolor
sudo update-desktop-database /usr/share/applications
```

References:

- https://github.com/openai/openai-node
- https://developers.openai.com/api/docs
- https://platform.claude.com/docs/en/home
- https://ai.google.dev/gemini-api/docs
- https://icons.lobehub.com/
This project is licensed under the GNU General Public License v3.0 or later.
- SPDX identifier: `GPL-3.0-or-later`
- See LICENSE for the full text.



