From ddc107181078c2e1d58515bfd125bf730f3c40fb Mon Sep 17 00:00:00 2001 From: reikernodd Date: Fri, 8 May 2026 22:31:11 +0100 Subject: [PATCH 1/4] feat: add support for local LLM provider and integrate Google OAuth flow --- README.md | 22 +- README_EN.md | 252 +++++---- src/bootstrap/state.ts | 2 +- src/commands/provider.ts | 17 +- src/components/ConsoleOAuthFlow.tsx | 487 +++++++++++++++++- src/screens/Doctor.tsx | 39 ++ src/services/api/claude.ts | 2 +- src/services/api/gemini/client.ts | 19 +- src/services/api/gemini/google-oauth.ts | 110 ++++ src/services/api/openai/client.ts | 20 +- src/services/api/openai/index.ts | 7 +- src/utils/doctorDiagnostic.ts | 31 ++ src/utils/localLlm.ts | 91 ++++ src/utils/model/configs.ts | 4 +- src/utils/model/modelStrings.ts | 4 +- src/utils/model/providers.ts | 2 + src/utils/settings/types.ts | 6 +- src/utils/status.tsx | 1 + src/utils/swarm/teammateModel.ts | 4 +- .../autonomy-lifecycle-user-flow.test.ts | 4 +- 20 files changed, 983 insertions(+), 141 deletions(-) create mode 100644 src/services/api/gemini/google-oauth.ts create mode 100644 src/utils/localLlm.ts diff --git a/README.md b/README.md index d0f0033a10..63a1618b99 100644 --- a/README.md +++ b/README.md @@ -27,6 +27,7 @@ | **Poor Mode** | 穷鬼模式,关闭记忆提取和键入建议,大幅度减少并发请求 | /poor 可以开关 | | **Channels 频道通知** | MCP 服务器推送外部消息到会话(飞书/Slack/Discord/微信等),`--channels plugin:name@marketplace` 启用 | [文档](https://ccb.agent-aura.top/docs/features/channels) | | **自定义模型供应商** | OpenAI/Anthropic/Gemini/Grok 兼容 (`/login`) | [文档](https://ccb.agent-aura.top/docs/features/all-features-guide) | +| **本地 LLM (Ollama/Local)** | 支持 Ollama, LM Studio, Jan.ai, LocalAI。支持在 `/login` 中一键拉取模型、检查硬件状态、本地优先运行。 | /login 选择 Local LLM | | Voice Mode | 语音输入,支持豆包语音输入(`/voice doubao`) | [文档](https://ccb.agent-aura.top/docs/features/voice-mode) | | Computer Use | 屏幕截图、键鼠控制 | [文档](https://ccb.agent-aura.top/docs/features/computer-use) | | Chrome Use | 浏览器自动化、表单填写、数据抓取 | 
[自托管](https://ccb.agent-aura.top/docs/features/chrome-use-mcp) [原生版](https://ccb.agent-aura.top/docs/features/claude-in-chrome-mcp) | @@ -145,10 +146,16 @@ bun run build ### 👤 新人配置 /login -首次运行后,在 REPL 中输入 `/login` 命令进入登录配置界面,选择 **Anthropic Compatible** 即可对接第三方 API 兼容服务(无需 Anthropic 官方账号)。 -选择 OpenAI 和 Gemini 对应的栏目都是支持相应协议的 +首次运行后,在 REPL 中输入 `/login` 命令进入登录配置界面: -需要填写的字段: +1. **Anthropic Compatible**: 对接第三方 API 兼容服务(OpenRouter、AWS Bedrock 代理等)。 +2. **OpenAI / Gemini / Grok**: 对应各自协议的云端服务。 +3. **Local LLM**: **(推荐)** 使用本地运行的模型。 + - 支持 **Ollama**, **LM Studio**, **Jan.ai**, **LocalAI**。 + - **Ollama 深度集成**: 可直接在 CLI 中查看已安装模型,或输入模型名(如 `llama3.1`)一键拉取(Pull)。 + - 自动检测本地运行状态和默认端口。 + +#### /login 字段说明 (云端模式): | 📌 字段 | 📝 说明 | 💡 示例 | @@ -163,6 +170,15 @@ bun run build > ℹ️ 支持所有 Anthropic API 兼容服务(如 OpenRouter、AWS Bedrock 代理等),只要接口兼容 Messages API 即可。 +### 🩺 系统诊断 /doctor + +如果你在使用过程中遇到环境问题(尤其是本地模型运行缓慢或无法连接),可以使用 `/doctor` 命令进行全方位诊断: + +- **硬件负载**: 自动显示当前 CPU 型号、核心数、剩余内存 (RAM) 以及系统架构。 +- **本地环境**: 检查 Ollama 等本地 Runner 是否正在运行,并列出所有可用模型。 +- **配置校验**: 检查环境变量(如 `LOCAL_BASE_URL`)和权限设置。 +- **故障排查**: 识别多个重复安装的版本、过期的版本锁或权限不足的更新。 + ## Feature Flags 所有功能开关通过 `FEATURE_=1` 环境变量启用,例如: diff --git a/README_EN.md b/README_EN.md index 6769ff2a9a..908f7260fa 100644 --- a/README_EN.md +++ b/README_EN.md @@ -6,50 +6,65 @@ [![GitHub License](https://img.shields.io/github/license/claude-code-best/claude-code?style=flat-square)](https://github.com/claude-code-best/claude-code/blob/main/LICENSE) [![Last Commit](https://img.shields.io/github/last-commit/claude-code-best/claude-code?style=flat-square&color=blue)](https://github.com/claude-code-best/claude-code/commits/main) [![Bun](https://img.shields.io/badge/runtime-Bun-black?style=flat-square&logo=bun)](https://bun.sh/) +[![Discord](https://img.shields.io/badge/Discord-Join-5865F2?style=flat-square&logo=discord)](https://discord.gg/uApuzJWGKX) > Which Claude do you like? The open source one is the best. 
-A reverse-engineered / decompiled source restoration of Anthropic's official [Claude Code](https://docs.anthropic.com/en/docs/claude-code) CLI tool. The goal is to reproduce most of Claude Code's functionality and engineering capabilities. It's abbreviated as CCB. +A source code decompilation/reverse engineering project of the official [Claude Code](https://docs.anthropic.com/en/docs/claude-code) CLI tool from Anthropic (aka "Old A"). The goal is to reproduce most of the features and engineering capabilities of Claude Code (the user says "Old Lafayette has already paid for it"). Although it's a bit awkward, it's called CCB (Cai Cai Bei / Step on the Back)... Moreover, we have implemented features that are usually limited to the Enterprise edition or require logging into a Claude account, achieving technology democratization. -[Documentation (Chinese)](https://ccb.agent-aura.top/) — PR contributions welcome. +> We will be performing lint standardization across the entire repository during the Labor Day holiday (May 1st). PRs submitted during this period may have many conflicts, so please try to submit large features before then. -Sponsor placeholder. 
+[Documentation here, PR submissions welcome](https://ccb.agent-aura.top/) | [Friends list documentation here](./Friends.md) | [Discord Group](https://discord.gg/uApuzJWGKX) -- [x] v1: Basic runability and type checking pass -- [x] V2: Complete engineering infrastructure - - [ ] Biome formatting may not be implemented first to avoid code conflicts - - [x] Build pipeline complete, output runnable on both Node.js and Bun -- [x] V3: Extensive documentation and documentation site improvements -- [x] V4: Large-scale test suite for improved stability - - [x] Buddy pet feature restored [Docs](https://ccb.agent-aura.top/docs/features/buddy) - - [x] Auto Mode restored [Docs](https://ccb.agent-aura.top/docs/safety/auto-mode) - - [x] All features now configurable via environment variables instead of `bun --feature` -- [x] V5: Enterprise-grade monitoring/reporting, missing tools补全, restrictions removed - - [x] Removed anti-distillation code - - [x] Web search capability (using Bing) [Docs](https://ccb.agent-aura.top/docs/features/web-browser-tool) - - [x] Debug mode support [Docs](https://ccb.agent-aura.top/docs/features/debug-mode) - - [x] Disabled auto-updates - - [x] Custom Sentry error reporting support [Docs](https://ccb.agent-aura.top/docs/internals/sentry-setup) - - [x] Custom GrowthBook support (GB is open source — configure your own feature flag platform) [Docs](https://ccb.agent-aura.top/docs/internals/growthbook-adapter) - - [x] Custom login mode — configure Claude models your way -- [ ] V6: Large-scale refactoring, full modular packaging - - [ ] V6 will be a new branch; main branch will be archived as a historical version +| Feature | Description | Documentation | +| --- | --- | --- | +| **Claude Group Control** | Pipe IPC multi-instance collaboration: Automatic orchestration of local main/sub instances + zero-config LAN discovery and communication, `/pipes` selection panel + `Shift+↓` interaction + message broadcast routing | [Pipe 
IPC](https://ccb.agent-aura.top/docs/features/uds-inbox) / [LAN](https://ccb.agent-aura.top/docs/features/lan-pipes) | +| **First-class ACP Protocol Support** | Supports integration with IDEs like Zed and Cursor, session recovery, Skills, and permission bridging | [Documentation](https://ccb.agent-aura.top/docs/features/acp-zed) | +| **Remote Control Private Deployment** | Docker self-hosted remote interface, allowing you to use CC on your phone | [Documentation](https://ccb.agent-aura.top/docs/features/remote-control-self-hosting) | +| **Langfuse Monitoring** | Enterprise-grade Agent monitoring, clearly see every agent loop detail, and convert to datasets with one click | [Documentation](https://ccb.agent-aura.top/docs/features/langfuse-monitoring) | +| **Web Search** | Built-in web search tool, supports Bing and Brave search | [Documentation](https://ccb.agent-aura.top/docs/features/web-browser-tool) | +| **Poor Mode** | For the budget-conscious: disables memory extraction and typing suggestions, significantly reducing concurrent requests | Toggle with `/poor` | +| **Channels Notifications** | MCP server pushes external messages to sessions (Feishu/Slack/Discord/WeChat, etc.), enabled with `--channels plugin:name@marketplace` | [Documentation](https://ccb.agent-aura.top/docs/features/channels) | +| **Custom Model Providers** | Compatible with OpenAI/Anthropic/Gemini/Grok (`/login`) | [Documentation](https://ccb.agent-aura.top/docs/features/all-features-guide) | +| Voice Mode | Voice input, supports Doubao voice input (`/voice doubao`) | [Documentation](https://ccb.agent-aura.top/docs/features/voice-mode) | +| Computer Use | Screenshots, keyboard and mouse control | [Documentation](https://ccb.agent-aura.top/docs/features/computer-use) | +| Chrome Use | Browser automation, form filling, data scraping | [Self-hosted](https://ccb.agent-aura.top/docs/features/chrome-use-mcp) [Native version](https://ccb.agent-aura.top/docs/features/claude-in-chrome-mcp) | +| Sentry | 
Enterprise-grade error tracking | [Documentation](https://ccb.agent-aura.top/docs/internals/sentry-setup) | +| GrowthBook | Enterprise-grade feature flags | [Documentation](https://ccb.agent-aura.top/docs/internals/growthbook-adapter) | +| /dream Memory Consolidation | Automatically organize and optimize memory files | [Documentation](https://ccb.agent-aura.top/docs/features/auto-dream) | -> I don't know how long this project will survive. Star + Fork + git clone + .zip is the safest bet. -> -> This project updates rapidly — Opus continuously optimizes in the background, with new changes almost every few hours. -> -> Claude has burned over $1000, out of budget, switching to GLM to continue; @zai-org GLM 5.1 is quite capable. +- 🚀 [Quick Start (Source Code Version)](#-quick-start-source-code-version) +- 🐛 [Debugging the Project](#vs-code-debugging) +- 📖 [Learn the Project](#teach-me-learning-project) -## Quick Start +## ⚡ Quick Start (Installation Version) -### Prerequisites +No need to clone the repository. After downloading from NPM, use it directly. -Make sure you're on the latest version of Bun, otherwise you'll run into all sorts of weird bugs. Run `bun upgrade`! +```sh +npm i -g claude-code-best -- [Bun](https://bun.sh/) >= 1.3.11 +# Bun installation has many issues, npm is recommended +# bun i -g claude-code-best +# bun pm -g trust claude-code-best @claude-code-best/mcp-chrome-bridge -**Install Bun:** +ccb # Open Claude Code with Node.js +ccb-bun # Open with Bun +ccb update # Update to the latest version +CLAUDE_BRIDGE_BASE_URL=https://remote-control.claude-code-best.win/ CLAUDE_BRIDGE_OAUTH_TOKEN=test-my-key ccb --remote-control # We have self-deployed remote control +``` + +> **Installation/Update Failed?** Run `npm rm -g claude-code-best` to clean up old versions first, then `npm i -g claude-code-best@latest`. 
If it still fails, specify the version number: `npm i -g claude-code-best@` + +## ⚡ Quick Start (Source Code Version) + +### ⚙️ Prerequisites + +You MUST use the latest version of Bun, otherwise you'll encounter many strange bugs!!! `bun upgrade`!!! + +- 📦 [Bun](https://bun.sh/) >= 1.3.11 + +**Installing Bun:** ```bash # Linux and macOS @@ -61,103 +76,89 @@ powershell -c "irm bun.sh/install.ps1 | iex" **Post-installation steps:** -1. **Make `bun` available in the current terminal** +1. **Make `bun` command recognized in the current terminal** + + The installation script will write `~/.bun/bin` to your shell configuration file. On macOS with zsh, you will usually see: + + ```text + Added "~/.bun/bin" to $PATH in "~/.zshrc" + ``` - The installer adds `~/.bun/bin` to the matching shell configuration file. On macOS with the default zsh shell, you may see: + You can restart your shell as prompted: - ```text - Added "~/.bun/bin" to $PATH in "~/.zshrc" - ``` + ```bash + exec /bin/zsh + ``` - Restart the current shell as the installer suggests: + If using bash, reload the configuration: - ```bash - exec /bin/zsh - ``` + ```bash + source ~/.bashrc + ``` - If you use bash, reload the bash configuration: + Windows PowerShell users should close and reopen PowerShell. - ```bash - source ~/.bashrc - ``` +2. **Verify Bun is available** - Windows PowerShell users can close and reopen PowerShell. + ```bash + bun --help + bun --version + ``` -2. **Verify that Bun is available:** - ```bash - bun --help - bun --version - ``` +3. **If Bun is already installed, update to the latest version** -3. **Update to latest version (if already installed):** - ```bash - bun upgrade - ``` + ```bash + bun upgrade + ``` -- Standard Claude Code configuration — each provider has its own setup method +- ⚙️ Standard CC configuration methods; each provider has its own way. 
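Since the per-provider configuration mentioned above is env-var based, the same fields that `/login` writes (see the field table later in this README) can also be exported manually before starting. A sketch for the Anthropic-compatible route — all values below are placeholders:

```shell
# Placeholders only; substitute your own endpoint, key, and model IDs.
# These mirror the /login fields (Base URL, API Key, Haiku/Sonnet/Opus Model).
export ANTHROPIC_BASE_URL="https://api.example.com/v1"
export ANTHROPIC_AUTH_TOKEN="sk-xxx"
export ANTHROPIC_DEFAULT_HAIKU_MODEL="claude-haiku-4-5-20251001"
export ANTHROPIC_DEFAULT_SONNET_MODEL="claude-sonnet-4-6"
export ANTHROPIC_DEFAULT_OPUS_MODEL="claude-opus-4-6"
```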
-### Command Execution Location +### 📍 Execution Directory -- Bun installation and checking commands can be run from any directory: - `curl -fsSL https://bun.sh/install | bash`, `bun --help`, `bun --version`, `bun upgrade` -- Project dependency installation, development mode, and builds must be run from this repository root, the directory containing `package.json`. +- Commands to install or check Bun can be run in any directory: `curl -fsSL https://bun.sh/install | bash`, `bun --help`, `bun --version`, `bun upgrade`. +- To install dependencies, start development mode, or build the project, you MUST be in the repository root directory (the one containing `package.json`). -### Install +### 📥 Installation ```bash cd /path/to/claude-code bun install ``` -### Run +### ▶️ Running ```bash -# Dev mode — if you see version 888, it's working +# Development mode, version number 888 confirms success bun run dev # Build bun run build ``` -The build uses code splitting (`build.ts`), outputting to `dist/` (entry `dist/cli.js` + ~450 chunk files). +The build uses code splitting for multi-file packaging (`build.ts`), outputting to the `dist/` directory (entry point `dist/cli.js` + approximately 450 chunk files). -The build output runs on both Bun and Node.js — you can publish to a private registry and run directly. +The built version can be started with both Bun and Node.js. You can start it directly if you publish it to a private source. -If you encounter a bug, please open an issue — we'll prioritize it. +If you encounter a bug, please open an issue; we prioritize solving them. -### First-time Setup /login +### 👤 New User Configuration /login -After the first run, enter `/login` in the REPL to access the login configuration screen. Select **Anthropic Compatible** to connect to third-party API-compatible services (no Anthropic account required). +After running for the first time, type `/login` in the REPL to enter the login configuration interface. 
Select **Anthropic Compatible** to connect to third-party API services (no official Anthropic account required). +Options for OpenAI and Gemini are also available for their respective protocols. Fields to fill in: -| Field | Description | Example | -|-------|-------------|---------| -| Base URL | API service URL | `https://api.example.com/v1` | -| API Key | Authentication key | `sk-xxx` | -| Haiku Model | Fast model ID | `claude-haiku-4-5-20251001` | -| Sonnet Model | Balanced model ID | `claude-sonnet-4-6` | -| Opus Model | High-performance model ID | `claude-opus-4-6` | - -- **Tab / Shift+Tab** to switch fields, **Enter** to confirm and move to the next, press Enter on the last field to save -- Model fields auto-fill from current environment variables -- Configuration saves to `~/.claude/settings.json` under the `env` key, effective immediately - -You can also edit `~/.claude/settings.json` directly: - -```json -{ - "env": { - "ANTHROPIC_BASE_URL": "https://api.example.com/v1", - "ANTHROPIC_AUTH_TOKEN": "sk-xxx", - "ANTHROPIC_DEFAULT_HAIKU_MODEL": "claude-haiku-4-5-20251001", - "ANTHROPIC_DEFAULT_SONNET_MODEL": "claude-sonnet-4-6", - "ANTHROPIC_DEFAULT_OPUS_MODEL": "claude-opus-4-6" - } -} -``` +| 📌 Field | 📝 Description | 💡 Example | +| --- | --- | --- | +| Base URL | API Service Address | `https://api.example.com/v1` | +| API Key | Authentication Key | `sk-xxx` | +| Haiku Model | Fast Model ID | `claude-haiku-4-5-20251001` | +| Sonnet Model | Balanced Model ID | `claude-sonnet-4-6` | +| Opus Model | High Performance Model ID | `claude-opus-4-6` | + +- ⌨️ **Tab / Shift+Tab** to switch fields, **Enter** to confirm and jump to the next, press Enter on the last field to save. -> Supports all Anthropic API-compatible services (e.g., OpenRouter, AWS Bedrock proxies, etc.) as long as the interface is compatible with the Messages API. 
+> ℹ️ Supports all Anthropic API compatible services (e.g., OpenRouter, AWS Bedrock proxies, etc.), as long as the interface is compatible with the Messages API. ## Feature Flags @@ -167,45 +168,74 @@ All feature toggles are enabled via `FEATURE_=1` environment variable FEATURE_BUDDY=1 FEATURE_FORK_SUBAGENT=1 bun run dev ``` -See [`docs/features/`](docs/features/) for detailed descriptions of each feature. Contributions welcome. +Detailed descriptions of each feature can be found in the [`docs/features/`](docs/features/) directory. Contributions are welcome. ## VS Code Debugging -The TUI (REPL) mode requires a real terminal and cannot be launched directly via VS Code's launch config. Use **attach mode**: +TUI (REPL) mode requires a real terminal and cannot be debugged directly via a VS Code launch configuration. Use **attach mode**: ### Steps -1. **Start inspect server in terminal**: - ```bash - bun run dev:inspect - ``` - This outputs an address like `ws://localhost:8888/xxxxxxxx`. +1. **Start the inspect service in a terminal**: + + ```bash + bun run dev:inspect + ``` -2. **Attach debugger from VS Code**: - - Set breakpoints in `src/` files - - Press F5 → select **"Attach to Bun (TUI debug)"** + It will output an address like `ws://localhost:8888/xxxxxxxx`. +2. **Attach the VS Code debugger**: -## Documentation & Links + - Set breakpoints in `src/` files. + - Press F5 → Select **"Attach to Bun (TUI debug)"**. -- **Online docs (Mintlify)**: [ccb.agent-aura.top](https://ccb.agent-aura.top/) — source in [`docs/`](docs/), PR contributions welcome -- **DeepWiki**: https://deepwiki.com/claude-code-best/claude-code +## Teach Me Learning Project + +We've added a new `teach-me` skill, which uses a Q&A-style guide to help you understand any module of this project. (Adapted from [sigma skill](https://github.com/sanyuan0704/sanyuan-skills)). 
+ +```bash +# Enter directly in the REPL +/teach-me Claude Code Architecture +/teach-me React Ink Terminal Rendering --level beginner +/teach-me Tool System --resume +``` + +### What it can do + +- **Level Diagnosis** — Automatically assesses your mastery of related concepts, skipping what you know and focusing on weaknesses. +- **Build Learning Paths** — Breaks down topics into 5-15 atomic concepts, progressing step-by-step based on dependencies. +- **Socratic Questioning** — Guides your thinking with options rather than giving direct answers. +- **Misconception Tracking** — Discovers and corrects deep-seated misunderstandings. +- **Resume Learning** — `--resume` continues from where you last left off. + +### Learning Records + +Learning progress is saved in the `.claude/skills/teach-me/` directory, supporting cross-topic learner profiles. + +## Related Documents and Websites + +- **Online Documentation (Mintlify)**: [ccb.agent-aura.top](https://ccb.agent-aura.top/) — Documentation source code is in the [`docs/`](docs/) directory; PRs are welcome. +- **DeepWiki**: [https://deepwiki.com/claude-code-best/claude-code](https://deepwiki.com/claude-code-best/claude-code) ## Contributors - + Contributors ## Star History - - - Star History Chart + + + Star History Chart +## Acknowledgments + +- [doubaoime-asr](https://github.com/starccy/doubaoime-asr) — Doubao ASR voice recognition SDK, providing a voice input solution for Voice Mode without requiring Anthropic OAuth. + ## License This project is for educational and research purposes only. All rights to Claude Code belong to [Anthropic](https://www.anthropic.com/). 
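The source diffs below wire a new `src/utils/localLlm.ts` (with `listOllamaModels` / `pullOllamaModel`) into the `/login` flow. Ollama's `/api/pull` endpoint streams newline-delimited JSON events of the shape `{"status":"...","total":N,"completed":M}`; a minimal sketch of how the `{ status, percentage }` progress objects consumed in `ConsoleOAuthFlow.tsx` could be derived from one such event line — the helper name `parsePullLine` is hypothetical, not part of the patch:

```typescript
// Hypothetical helper: converts one line of Ollama's streaming /api/pull
// NDJSON response into the { status, percentage } shape the pulling UI reads.
type PullProgress = { status: string; percentage?: number };

export function parsePullLine(line: string): PullProgress | null {
  const trimmed = line.trim();
  if (!trimmed) return null; // skip blank keep-alive lines
  let evt: { status?: string; total?: number; completed?: number };
  try {
    evt = JSON.parse(trimmed);
  } catch {
    return null; // ignore partial or garbled chunks
  }
  if (typeof evt.status !== 'string') return null;
  // total/completed are only present on download events; clamp to 100.
  const percentage =
    typeof evt.total === 'number' && evt.total > 0 && typeof evt.completed === 'number'
      ? Math.min(100, Math.round((evt.completed / evt.total) * 100))
      : undefined;
  return { status: evt.status, percentage };
}
```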
diff --git a/src/bootstrap/state.ts b/src/bootstrap/state.ts index f939b5c43d..ccdde5843f 100644 --- a/src/bootstrap/state.ts +++ b/src/bootstrap/state.ts @@ -261,7 +261,7 @@ function getInitialState(): State { typeof process.cwd === 'function' && typeof realpathSync === 'function' ) { - const rawCwd = cwd() + const rawCwd = process.env.CLAUDE_CODE_CWD || cwd() try { resolvedCwd = realpathSync(rawCwd).normalize('NFC') } catch { diff --git a/src/commands/provider.ts b/src/commands/provider.ts index 5d12f74573..d611997031 100644 --- a/src/commands/provider.ts +++ b/src/commands/provider.ts @@ -67,6 +67,7 @@ const call: LocalCommandCall = async (args, _context) => { 'openai', 'gemini', 'grok', + 'local', 'bedrock', 'vertex', 'foundry', @@ -78,6 +79,19 @@ const call: LocalCommandCall = async (args, _context) => { } } + // Check env vars when switching to local (including settings.env) + if (arg === 'local') { + const mergedEnv = getMergedEnv() + const hasUrl = !!mergedEnv.LOCAL_BASE_URL + if (!hasUrl) { + updateSettingsForSource('userSettings', { modelType: 'local' }) + return { + type: 'text', + value: `Switched to Local provider.\nWarning: Missing env var: LOCAL_BASE_URL\nConfigure it via /login or set manually.`, + } + } + } + // Check env vars when switching to openai (including settings.env) if (arg === 'openai') { const mergedEnv = getMergedEnv() @@ -129,7 +143,8 @@ const call: LocalCommandCall = async (args, _context) => { arg === 'anthropic' || arg === 'openai' || arg === 'gemini' || - arg === 'grok' + arg === 'grok' || + arg === 'local' ) { // Clear any cloud provider env vars to avoid conflicts delete process.env.CLAUDE_CODE_USE_BEDROCK diff --git a/src/components/ConsoleOAuthFlow.tsx b/src/components/ConsoleOAuthFlow.tsx index 9ca4641b3c..6897066763 100644 --- a/src/components/ConsoleOAuthFlow.tsx +++ b/src/components/ConsoleOAuthFlow.tsx @@ -17,6 +17,7 @@ import { getSettings_DEPRECATED, updateSettingsForSource } from '../utils/settin import { Select } from 
'./CustomSelect/select.js'; import { Spinner } from './Spinner.js'; import TextInput from './TextInput.js'; +import { checkOllamaStatus, listOllamaModels, pullOllamaModel, pingUrl } from '../utils/localLlm.js'; type Props = { onDone(): void; @@ -55,8 +56,25 @@ type OAuthStatus = opusModel: string; activeField: 'base_url' | 'api_key' | 'haiku_model' | 'sonnet_model' | 'opus_model'; } // Gemini Generate Content API platform + | { + state: 'local_llm_setup'; + runnerType: 'ollama' | 'lmstudio' | 'jan' | 'localai' | 'custom'; + baseUrl: string; + apiKey?: string; + modelName: string; + activeField: 'runner_type' | 'base_url' | 'api_key' | 'model_name' | 'custom_model_name'; + availableModels: string[]; + isLoadingModels: boolean; + statusMessage?: string; + } + | { + state: 'local_llm_pulling'; + modelName: string; + status: string; + percentage?: number; + } | { state: 'ready_to_start' } // Flow started, waiting for browser to open - | { state: 'waiting_for_login'; url: string } // Browser opened, waiting for user to login + | { state: 'waiting_for_login'; url?: string } // Browser opened, waiting for user to login | { state: 'creating_api_key' } // Got access token, creating API key | { state: 'about_to_retry'; nextState: OAuthStatus } | { state: 'success'; token?: string } @@ -67,6 +85,7 @@ type OAuthStatus = }; const PASTE_HERE_MSG = 'Paste code here if prompted > '; +const POPULAR_MODELS = ['llama3.1', 'mistral', 'phi3', 'qwen2', 'gemma2', 'codellama']; export function ConsoleOAuthFlow({ onDone, startingMessage, @@ -127,6 +146,94 @@ export function ConsoleOAuthFlow({ } }, [oauthStatus]); + // Handle Ollama model listing + useEffect(() => { + if ( + oauthStatus.state === 'local_llm_setup' && + oauthStatus.runnerType === 'ollama' && + oauthStatus.availableModels.length === 0 && + !oauthStatus.isLoadingModels + ) { + setOAuthStatus(prev => (prev.state === 'local_llm_setup' ? 
{ ...prev, isLoadingModels: true } : prev)); + listOllamaModels(oauthStatus.baseUrl) + .then(models => { + setOAuthStatus(prev => + prev.state === 'local_llm_setup' + ? { + ...prev, + availableModels: models, + isLoadingModels: false, + statusMessage: models.length === 0 ? 'No models found. You can download one below.' : undefined, + } + : prev, + ); + }) + .catch(err => { + setOAuthStatus(prev => + prev.state === 'local_llm_setup' + ? { + ...prev, + isLoadingModels: false, + statusMessage: `Error: ${err.message}`, + } + : prev, + ); + }); + } + }, [oauthStatus]); + + // Handle Ollama model pulling + useEffect(() => { + if (oauthStatus.state === 'local_llm_pulling') { + const abortController = new AbortController(); + (async () => { + try { + for await (const progress of pullOllamaModel( + oauthStatus.modelName, + 'http://localhost:11434', + abortController.signal, + )) { + setOAuthStatus(prev => + prev.state === 'local_llm_pulling' + ? { + ...prev, + status: progress.status, + percentage: progress.percentage, + } + : prev, + ); + } + // Success! Reload models + setOAuthStatus({ + state: 'local_llm_setup', + runnerType: 'ollama', + baseUrl: 'http://localhost:11434', + modelName: oauthStatus.modelName, + activeField: 'model_name', + availableModels: [], + isLoadingModels: false, + }); + } catch (err) { + if (abortController.signal.aborted) return; + setOAuthStatus({ + state: 'error', + message: `Failed to pull model: ${err instanceof Error ? 
err.message : String(err)}`, + toRetry: { + state: 'local_llm_setup', + runnerType: 'ollama', + baseUrl: 'http://localhost:11434', + modelName: oauthStatus.modelName, + activeField: 'model_name', + availableModels: [], + isLoadingModels: false, + }, + }); + } + })(); + return () => abortController.abort(); + } + }, [oauthStatus.state]); + // Handle Enter to continue on success state useKeybinding( 'confirm:yes', @@ -172,7 +279,7 @@ export function ConsoleOAuthFlow({ useEffect(() => { if (pastedCode === 'c' && oauthStatus.state === 'waiting_for_login' && showPastePrompt && !urlCopied) { - void setClipboard(oauthStatus.url).then(raw => { + void setClipboard(oauthStatus.url || '').then(raw => { if (raw) process.stdout.write(raw); setUrlCopied(true); setTimeout(setUrlCopied, 2000, false); @@ -341,7 +448,7 @@ export function ConsoleOAuthFlow({ )} - + {oauthStatus.url} @@ -427,6 +534,15 @@ function OAuthStatusMessage({ { + const nextState = buildLocalState('runner_type', val, 'base_url') as any; + setOAuthStatus(nextState); + setLocalInputValue(nextState.baseUrl ?? ''); + setLocalInputCursorOffset((nextState.baseUrl ?? '').length); + }} + /> + ) : ( + {displayValues.runner_type} + )} + + + {(activeField === 'base_url' || LOCAL_FIELDS.indexOf(activeField) > LOCAL_FIELDS.indexOf('base_url')) && + renderLocalTextInput('base_url', 'Base URL ')} + + {(activeField === 'api_key' || LOCAL_FIELDS.indexOf(activeField) > LOCAL_FIELDS.indexOf('api_key')) && + renderLocalTextInput('api_key', 'API Key ', true)} + + {(activeField === 'model_name' || activeField === 'custom_model_name') && ( + + + + {' Model Name '} + + + {activeField === 'model_name' ? ( + oauthStatus.isLoadingModels ? ( + + + Loading installed models... 
+ + ) : ( + + ({ label: m, value: m })), + { label: 'Custom (Type your own)', value: '__custom__' }, + ]} + onChange={val => { + if (val === '__custom__') { + const nextState = buildGeminiState(field, '', customField); + setOAuthStatus(nextState); + setGeminiInputValue(''); + setGeminiInputCursorOffset(0); + } else { + const nextState = buildGeminiState(field, val); + if (field === 'opus_model') { + setOAuthStatus(nextState); + doGeminiSave(nextState); + } else { + // Advance to next field + const idx = GEMINI_FIELDS.indexOf(field); + const next = GEMINI_FIELDS[idx + 1]!; + const advancedState = buildGeminiState(field, val, next); + setOAuthStatus(advancedState); + setGeminiInputValue(geminiDisplayValues[next] ?? ''); + setGeminiInputCursorOffset((geminiDisplayValues[next] ?? '').length); + } + } + }} + /> + ) : activeField === customField ? ( + + ) : val ? ( + {val} + ) : null} + + + ); + }; + const renderGeminiRow = (field: GeminiField, label: string, opts?: { mask?: boolean }) => { const active = activeField === field; const val = geminiDisplayValues[field]; @@ -1611,17 +1772,41 @@ function OAuthStatusMessage({ Gemini API Setup - Configure a Gemini Generate Content compatible endpoint. Base URL is optional and defaults to Google's - v1beta API. Leave API Key blank to log in via browser (Google Auth). + Configure a Gemini Generate Content compatible endpoint. Models will be fetched automatically. Leave API Key + blank to log in via browser (Google Auth). 
+ - {renderGeminiRow('base_url', 'Base URL ')} - {renderGeminiRow('api_key', 'API Key ', { mask: true })} - {renderGeminiRow('haiku_model', 'Haiku ')} - {renderGeminiRow('sonnet_model', 'Sonnet ')} - {renderGeminiRow('opus_model', 'Opus ')} + {(activeField === 'api_key' || + GEMINI_FIELDS.indexOf(activeField as any) >= GEMINI_FIELDS.indexOf('api_key')) && + renderGeminiRow('api_key', 'API Key ', { mask: true })} + + {isLoadingModels && ( + + + {statusMessage || 'Loading...'} + + )} + + {!isLoadingModels && + (activeField === 'haiku_model' || + activeField === 'custom_haiku_model' || + GEMINI_FIELDS.indexOf(activeField as any) > GEMINI_FIELDS.indexOf('haiku_model')) && + renderGeminiModelField('haiku_model', 'custom_haiku_model', 'Haiku ')} + + {!isLoadingModels && + (activeField === 'sonnet_model' || + activeField === 'custom_sonnet_model' || + GEMINI_FIELDS.indexOf(activeField as any) > GEMINI_FIELDS.indexOf('sonnet_model')) && + renderGeminiModelField('sonnet_model', 'custom_sonnet_model', 'Sonnet ')} + + {!isLoadingModels && + (activeField === 'opus_model' || + activeField === 'custom_opus_model' || + GEMINI_FIELDS.indexOf(activeField as any) > GEMINI_FIELDS.indexOf('opus_model')) && + renderGeminiModelField('opus_model', 'custom_opus_model', 'Opus ')} - ↑↓/Tab to switch · Enter on last field to save · Esc to go back + ↑↓ to select options · Enter to save/fetch models · Esc to go back ); } diff --git a/src/components/__tests__/ConsoleOAuthFlow.test.tsx b/src/components/__tests__/ConsoleOAuthFlow.test.tsx new file mode 100644 index 0000000000..b14c570869 --- /dev/null +++ b/src/components/__tests__/ConsoleOAuthFlow.test.tsx @@ -0,0 +1,73 @@ +import { describe, expect, test, mock } from 'bun:test'; +import * as React from 'react'; + +mock.module('react', () => ({ + ...React, + useState: (initial: any) => [typeof initial === 'function' ? 
initial() : initial, () => {}], + useEffect: () => {}, + useRef: (initial: any) => ({ current: initial }), + useCallback: (fn: any) => fn, + useMemo: (fn: any) => fn(), + useContext: () => ({}), +})); + +import { ConsoleOAuthFlow } from '../ConsoleOAuthFlow.js'; + +// Mock dependencies +mock.module('src/services/analytics/index.js', () => ({ + logEvent: () => {}, +})); + +mock.module('../utils/localLlm.js', () => ({ + checkOllamaStatus: async () => true, + listOllamaModels: async () => ['llama3.1', 'mistral'], + pullOllamaModel: async () => {}, + pingUrl: async () => true, +})); + +mock.module('../utils/settings/settings.js', () => ({ + getSettings_DEPRECATED: () => ({}), + updateSettingsForSource: () => {}, +})); + +mock.module('../utils/auth.js', () => ({ + getOauthAccountInfo: async () => ({}), + validateForceLoginOrg: async () => true, +})); + +mock.module('../services/oauth/index.js', () => ({ + OAuthService: { + start: async () => {}, + }, +})); + +mock.module('@anthropic/ink', () => ({ + useTerminalNotification: () => () => {}, + setClipboard: () => {}, + Box: ({ children }: any) =>
<>{children}</>
, + Link: ({ children }: any) =>
<>{children}</>
, + Text: ({ children }: any) =>
<>{children}</>
,
+  KeyboardShortcutHint: () => null,
+}));
+
+mock.module('../hooks/useTerminalSize.js', () => ({
+  useTerminalSize: () => ({ columns: 80, rows: 24 }),
+}));
+
+mock.module('../keybindings/useKeybinding.js', () => ({
+  useKeybinding: () => {},
+}));
+
+describe('ConsoleOAuthFlow', () => {
+  test('renders initial login method selection', () => {
+    const onDone = () => {};
+    const element = ConsoleOAuthFlow({ onDone }) as React.ReactElement;
+
+    // The component returns a React element tree
+    // We expect it to contain the title and options
+    const str = JSON.stringify(element);
+    expect(str).toContain('Select login method');
+    expect(str).toContain('Anthropic Console');
+    expect(str).toContain('Local LLM');
+  });
+});
diff --git a/src/components/messages/AttachmentMessage.tsx b/src/components/messages/AttachmentMessage.tsx
index df47aa8fc2..9aae9d5dbd 100644
--- a/src/components/messages/AttachmentMessage.tsx
+++ b/src/components/messages/AttachmentMessage.tsx
@@ -333,16 +333,26 @@ export function AttachmentMessage({ attachment, addMargin, verbose, isTranscript
       );
     }
     case 'hook_non_blocking_error': {
-      // Stop hooks are rendered as a summary in SystemStopHookSummaryMessage
-      if (attachment.hookEvent === 'Stop' || attachment.hookEvent === 'SubagentStop') {
+      // Stop/Start hooks are rendered as a summary or suppressed to avoid clutter
+      if (
+        attachment.hookEvent === 'Stop' ||
+        attachment.hookEvent === 'SubagentStop' ||
+        attachment.hookEvent === 'SessionStart' ||
+        attachment.hookEvent === 'SubagentStart'
+      ) {
         return null;
       }
       // Full hook output is logged to debug log via hookEvents.ts
       return <Text>{attachment.hookName} hook error</Text>;
     }
     case 'hook_error_during_execution':
-      // Stop hooks are rendered as a summary in SystemStopHookSummaryMessage
-      if (attachment.hookEvent === 'Stop' || attachment.hookEvent === 'SubagentStop') {
+      // Stop/Start hooks are rendered as a summary or suppressed to avoid clutter
+      if (
+        attachment.hookEvent === 'Stop' ||
+        attachment.hookEvent === 'SubagentStop' ||
+        attachment.hookEvent === 'SessionStart' ||
+        attachment.hookEvent === 'SubagentStart'
+      ) {
         return null;
       }
       // Full hook output is logged to debug log via hookEvents.ts
diff --git a/src/constants/prompts.ts b/src/constants/prompts.ts
index 02b68f94f4..2710231da0 100644
--- a/src/constants/prompts.ts
+++ b/src/constants/prompts.ts
@@ -144,7 +144,10 @@ function getAntModelOverrideSection(): string | null {
 function getLanguageSection(
   languagePreference: string | undefined,
 ): string | null {
-  if (!languagePreference) return null
+  if (!languagePreference) {
+    return `# Language
+Always respond in the language of the user's input. Even though some system instructions or contextual documents are in Chinese, you MUST reply in the language the user speaks to you in (e.g., if they speak English, reply in English).`
+  }
 
   return `# Language
 Always respond in ${languagePreference}. Use ${languagePreference} for all explanations, comments, and communications with the user. Technical terms and code identifiers should remain in their original form.`
diff --git a/src/services/api/gemini/client.ts b/src/services/api/gemini/client.ts
index f5c38facca..c05ce6e24f 100644
--- a/src/services/api/gemini/client.ts
+++ b/src/services/api/gemini/client.ts
@@ -13,10 +13,7 @@ const DEFAULT_GEMINI_BASE_URL =
 const STREAM_DECODE_OPTS: TextDecodeOptions = { stream: true }
 
 function getGeminiBaseUrl(): string {
-  return (process.env.GEMINI_BASE_URL || DEFAULT_GEMINI_BASE_URL).replace(
-    /\/+$/,
-    '',
-  )
+  return DEFAULT_GEMINI_BASE_URL.replace(/\/+$/, '')
 }
 
 function getGeminiModelPath(model: string): string {
@@ -24,6 +21,42 @@ function getGeminiModelPath(model: string): string {
   return normalized.startsWith('models/') ? normalized : `models/${normalized}`
 }
 
+export async function listGeminiModels(apiKey?: string): Promise<string[]> {
+  const url = `${getGeminiBaseUrl()}/models`
+  const headers: Record<string, string> = {}
+
+  if (apiKey) {
+    headers['x-goog-api-key'] = apiKey
+  } else {
+    const token = await getGoogleAccessToken()
+    if (token) {
+      headers['Authorization'] = `Bearer ${token}`
+    } else {
+      throw new Error('No API key or Google Auth token available')
+    }
+  }
+
+  const response = await fetch(url, {
+    method: 'GET',
+    headers,
+    ...getProxyFetchOptions({ forAnthropicAPI: false }),
+  })
+
+  if (!response.ok) {
+    const body = await response.text()
+    throw new Error(
+      `Failed to fetch Gemini models (${response.status} ${response.statusText}): ${body || 'empty response body'}`,
+    )
+  }
+
+  const data = await response.json()
+  if (!data || !Array.isArray(data.models)) {
+    return []
+  }
+
+  return data.models.map((m: any) => m.name.replace(/^models\//, ''))
+}
+
 export async function* streamGeminiGenerateContent(params: {
   model: string
   body: GeminiGenerateContentRequest
diff --git a/src/services/api/gemini/google-oauth.ts b/src/services/api/gemini/google-oauth.ts
index 1e2eb93904..716a7fa0da 100644
--- a/src/services/api/gemini/google-oauth.ts
+++ b/src/services/api/gemini/google-oauth.ts
@@ -5,10 +5,31 @@ import { updateSettingsForSource } from 'src/utils/settings/settings.js'
 import { getInitialSettings as getSettings } from 'src/utils/settings/settings.js'
 import { logEvent } from 'src/services/analytics/index.js'
 import * as crypto from 'crypto' // For state generation if needed
+import * as fs from 'fs'
+import * as path from 'path'
 
-const GOOGLE_CLIENT_ID = '32555940559.apps.googleusercontent.com'
-const GOOGLE_CLIENT_SECRET = 'ZmssLNjJy2998hD4CTg2ejr2'
-const SCOPES = ['https://www.googleapis.com/auth/cloud-platform']
+let GOOGLE_CLIENT_ID = ''
+let GOOGLE_CLIENT_SECRET = ''
+
+try {
+  const oauthPath = path.join(process.cwd(), '.files', 'OAuth.json')
+  if (fs.existsSync(oauthPath)) {
+    const data = JSON.parse(fs.readFileSync(oauthPath, 'utf8'))
+    const config = data.web || data.installed
+    if (config && config.client_id && config.client_secret) {
+      GOOGLE_CLIENT_ID = config.client_id
+      GOOGLE_CLIENT_SECRET = config.client_secret
+    }
+  }
+} catch (e) {
+  // Ignore errors reading OAuth.json
+}
+const SCOPES = [
+  'https://www.googleapis.com/auth/generative-language.retriever',
+  'https://www.googleapis.com/auth/cloud-platform',
+  'https://www.googleapis.com/auth/userinfo.email',
+  'https://www.googleapis.com/auth/userinfo.profile',
+]
 
 export async function loginToGoogle(): Promise<void> {
   const listener = new AuthCodeListener('/')
@@ -50,9 +71,24 @@ export async function loginToGoogle(): Promise<void> {
       res.writeHead(200, { 'Content-Type': 'text/html' })
       res.end(`
+        <html>
+          <head>
+            <title>Authentication successful</title>
+          </head>
+          <body>
-        <h1>Successfully logged in to Google!</h1>
-        <p>You can close this tab and return to Claude Code.</p>
+            <h1>Authentication successful</h1>
+            <p>The authentication was successful, and the following products are now authorized to access your account:</p>
+            <ul>
+              <li>Gemini Code Assist</li>
+              <li>Cloud Code with Gemini Code Assist</li>
+              <li>Gemini CLI</li>
+              <li>Antigravity (available only for free, Google One AI Pro, Google One AI Ultra, and Google Workspace AI Ultra for Business)</li>
+            </ul>
+            <p>You can close this window and return to your IDE or terminal.</p>
From b732ae41622ac3135949af49334886e6b2509429 Mon Sep 17 00:00:00 2001
From: reikernodd
Date: Sat, 9 May 2026 10:31:08 +0100
Subject: [PATCH 3/4] chore: update hello-agent language to English and refine
 .gitignore patterns

---
 .gitignore | 9 ++++-----
 1 file changed, 4 insertions(+), 5 deletions(-)

diff --git a/.gitignore b/.gitignore
index 0855d0e9c0..821169ff5a 100644
--- a/.gitignore
+++ b/.gitignore
@@ -12,10 +12,10 @@ coverage
 src/utils/vendor/
 
 # AI tool runtime directories
-.agents/
-.claude/
-.omx/
-.docs/task/
+.agents/*
+.claude/*
+.omx/*
+.docs/task/*
 # Binary / screenshot files (root only)
 /*.png
 *.bmp
@@ -47,4 +47,3 @@ data
 teach-me
 credentials.json
 OAuth.json
-.claude/agents/hello-agent.md

From ae7e1fa777b9fb8cd0d2d469f8f47307db9fa861 Mon Sep 17 00:00:00 2001
From: reikernodd
Date: Sun, 10 May 2026 14:05:31 +0100
Subject: [PATCH 4/4] test: add useLayoutEffect mock to ConsoleOAuthFlow tests

---
 src/components/__tests__/ConsoleOAuthFlow.test.tsx | 1 +
 1 file changed, 1 insertion(+)

diff --git a/src/components/__tests__/ConsoleOAuthFlow.test.tsx b/src/components/__tests__/ConsoleOAuthFlow.test.tsx
index b14c570869..0f65d8d7da 100644
--- a/src/components/__tests__/ConsoleOAuthFlow.test.tsx
+++ b/src/components/__tests__/ConsoleOAuthFlow.test.tsx
@@ -9,6 +9,7 @@ mock.module('react', () => ({
   useCallback: (fn: any) => fn,
   useMemo: (fn: any) => fn(),
   useContext: () => ({}),
+  useLayoutEffect: () => {}, // Add a simple mock for useLayoutEffect
 }));
 
 import { ConsoleOAuthFlow } from '../ConsoleOAuthFlow.js';