Merged
44 changes: 11 additions & 33 deletions README.md
@@ -35,7 +35,7 @@ Code without a keyboard. Send a message from your phone, and **txtcode** dispatc

1. **Text your AI** from WhatsApp, Telegram, Discord, Slack, Teams, or Signal
2. **It writes code** using Claude Code, Cursor, Codex, Gemini CLI, or other adapters
-3. **You stay in control** with mode switching, tool calling, and session logs
+3. **You stay in control** with mode switching and session logs

No port forwarding. No VPN. Just message and code.

@@ -66,10 +66,6 @@ Claude Code, Cursor CLI, OpenAI Codex, Gemini CLI, Kiro CLI, OpenCode, and Ollam
</td>
<td width="50%">

-### 9 Built-in Tools
-
-Terminal, process manager, git, file search, HTTP client, environment variables, network diagnostics, cron jobs, and system info all callable by the LLM.

### Session Logging

Per-session logs accessible from the TUI. Follow live, view by index, auto-pruned after 7 days.
@@ -159,7 +155,7 @@ txtcode supports **9 LLM providers** for chat mode. Configure one or more during
| **HuggingFace** | _Discovered at runtime_ | Inference Providers API |
| **OpenRouter** | _Discovered at runtime_ | Unified API for 100+ models |

-All providers support tool calling and the LLM can invoke any built-in tool.
+All providers are used in chat mode for general conversation and coding questions.

---

@@ -179,37 +175,19 @@ Use `/code` mode to route messages directly to a coding adapter with full coding

---

-## 🛠️ Built-in Tools
-
-The primary LLM in chat mode has access to **9 built-in tools** that it can call autonomously:
-
-| Tool | Capabilities |
-| :----------- | :---------------------------------------------------------------------------------------- |
-| **Terminal** | Execute shell commands with timeout and output capture |
-| **Process** | Manage background processes: list, poll, stream logs, kill, send input |
-| **Git** | Full git operations (blocks force-push and credential config for safety) |
-| **Search** | File and content search across the project |
-| **HTTP** | Make HTTP requests (GET, POST, PUT, DELETE, PATCH, HEAD). Blocks cloud metadata endpoints |
-| **Env** | Get, set, list, and delete environment variables. Masks sensitive values |
-| **Network** | Ping, DNS lookup, reachability checks, port scanning |
-| **Cron** | Create, list, and manage cron jobs |
-| **Sysinfo** | CPU, memory, disk, uptime, OS details |

---

## 💬 Chat Commands

Send these commands in any messaging app while connected:

-| Command | Description |
-| :----------- | :----------------------------------------------------------------------------- |
-| `/chat` | Switch to **Chat mode** to send messages to primary LLM with tools _(default)_ |
-| `/code` | Switch to **Code mode** to send messages to coding adapter (full CLI control) |
-| `/switch` | Switch primary LLM provider or coding adapter on the fly |
-| `/cli-model` | Change the model used by the current coding adapter |
-| `/cancel` | Cancel the currently running command |
-| `/status` | Show adapter connection and current configuration |
-| `/help` | Show available commands |
+| Command | Description |
+| :----------- | :---------------------------------------------------------------------------- |
+| `/chat` | Switch to **Chat mode** to send messages to primary LLM _(default)_ |
+| `/code` | Switch to **Code mode** to send messages to coding adapter (full CLI control) |
+| `/switch` | Switch primary LLM provider or coding adapter on the fly |
+| `/cli-model` | Change the model used by the current coding adapter |
+| `/cancel` | Cancel the currently running command |
+| `/status` | Show adapter connection and current configuration |
+| `/help` | Show available commands |

---

45 changes: 9 additions & 36 deletions src/core/router.ts
@@ -16,16 +16,6 @@ import { processWithOpenRouter } from "../providers/openrouter";
import { processWithXAI } from "../providers/xai";
import { logger } from "../shared/logger";
import { IDEAdapter, ModelInfo } from "../shared/types";
-import { CronTool } from "../tools/cron";
-import { EnvTool } from "../tools/env";
-import { GitTool } from "../tools/git";
-import { HttpTool } from "../tools/http";
-import { NetworkTool } from "../tools/network";
-import { ProcessTool } from "../tools/process";
-import { ToolRegistry } from "../tools/registry";
-import { SearchTool } from "../tools/search";
-import { SysinfoTool } from "../tools/sysinfo";
-import { TerminalTool } from "../tools/terminal";
import { ContextManager } from "./context-manager";

export const AVAILABLE_ADAPTERS = [
@@ -43,7 +33,6 @@ export class Router {
private provider: string;
private apiKey: string;
private model: string;
-  private toolRegistry: ToolRegistry;
private contextManager: ContextManager;
private pendingHandoff: string | null = null;
private currentAbortController: AbortController | null = null;
@@ -53,17 +42,6 @@ export class Router {
this.apiKey = process.env.AI_API_KEY || "";
this.model = process.env.AI_MODEL || "";

-    this.toolRegistry = new ToolRegistry();
-    this.toolRegistry.register(new TerminalTool());
-    this.toolRegistry.register(new ProcessTool());
-    this.toolRegistry.register(new GitTool());
-    this.toolRegistry.register(new SearchTool());
-    this.toolRegistry.register(new HttpTool());
-    this.toolRegistry.register(new EnvTool());
-    this.toolRegistry.register(new NetworkTool());
-    this.toolRegistry.register(new CronTool());
-    this.toolRegistry.register(new SysinfoTool());

this.contextManager = new ContextManager();

const ideType = process.env.IDE_TYPE || "";
@@ -151,28 +129,23 @@
private async _routeToProvider(instruction: string): Promise<string> {
switch (this.provider) {
case "anthropic":
-        return await processWithAnthropic(instruction, this.apiKey, this.model, this.toolRegistry);
+        return await processWithAnthropic(instruction, this.apiKey, this.model);
      case "openai":
-        return await processWithOpenAI(instruction, this.apiKey, this.model, this.toolRegistry);
+        return await processWithOpenAI(instruction, this.apiKey, this.model);
      case "gemini":
-        return await processWithGemini(instruction, this.apiKey, this.model, this.toolRegistry);
+        return await processWithGemini(instruction, this.apiKey, this.model);
      case "openrouter":
-        return await processWithOpenRouter(instruction, this.apiKey, this.model, this.toolRegistry);
+        return await processWithOpenRouter(instruction, this.apiKey, this.model);
      case "moonshot":
-        return await processWithMoonshot(instruction, this.apiKey, this.model, this.toolRegistry);
+        return await processWithMoonshot(instruction, this.apiKey, this.model);
      case "minimax":
-        return await processWithMiniMax(instruction, this.apiKey, this.model, this.toolRegistry);
+        return await processWithMiniMax(instruction, this.apiKey, this.model);
      case "huggingface":
-        return await processWithHuggingFace(
-          instruction,
-          this.apiKey,
-          this.model,
-          this.toolRegistry,
-        );
+        return await processWithHuggingFace(instruction, this.apiKey, this.model);
      case "mistral":
-        return await processWithMistral(instruction, this.apiKey, this.model, this.toolRegistry);
+        return await processWithMistral(instruction, this.apiKey, this.model);
      case "xai":
-        return await processWithXAI(instruction, this.apiKey, this.model, this.toolRegistry);
+        return await processWithXAI(instruction, this.apiKey, this.model);
default:
return `[ERROR] Unsupported AI provider: ${this.provider}. Run: txtcode config`;
}
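After this PR, every provider helper shares the same three-argument signature, which makes the switch above a pure dispatch step. A minimal sketch of that shape, with a lookup table standing in for the switch; the `ProviderFn` type and the stub provider implementations below are hypothetical illustrations, not code from the PR:

```typescript
// Hypothetical common signature: what every process* helper is reduced to
// once the toolRegistry parameter is gone.
type ProviderFn = (instruction: string, apiKey: string, model: string) => Promise<string>;

// Stub providers standing in for processWithAnthropic, processWithOpenAI, etc.
const providers: Record<string, ProviderFn> = {
  anthropic: async (instruction, _apiKey, model) => `[${model}] ${instruction}`,
  openai: async (instruction, _apiKey, model) => `[${model}] ${instruction}`,
};

// Unknown providers fall through to the same error string the router returns.
async function routeToProvider(
  provider: string,
  instruction: string,
  apiKey: string,
  model: string,
): Promise<string> {
  const fn = providers[provider];
  if (!fn) return `[ERROR] Unsupported AI provider: ${provider}. Run: txtcode config`;
  return fn(instruction, apiKey, model);
}
```

With a uniform signature, a table lookup like this would make adding a provider a one-line change rather than a new `case`.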
40 changes: 0 additions & 40 deletions src/data/primary_llm_system_prompt.txt

This file was deleted.

97 changes: 19 additions & 78 deletions src/providers/anthropic.ts
@@ -1,97 +1,38 @@
-import fs from "fs";
-import path from "path";
import Anthropic from "@anthropic-ai/sdk";
-import type {
-  ContentBlock,
-  MessageParam,
-  TextBlock,
-  ToolResultBlockParam,
-  ToolUnion,
-  ToolUseBlock,
-} from "@anthropic-ai/sdk/resources/messages/messages";
import { logger } from "../shared/logger";
-import { ToolRegistry } from "../tools/registry";

-const MAX_ITERATIONS = 10;

-function loadSystemPrompt(): string {
-  try {
-    const promptPath = path.join(__dirname, "..", "data", "primary_llm_system_prompt.txt");
-    return fs.readFileSync(promptPath, "utf-8");
-  } catch {
-    return "You are a helpful coding assistant.";
-  }
-}
+const SYSTEM_PROMPT =
+  "You are TxtCode AI — a helpful, knowledgeable coding assistant accessible via messaging. Be concise, use markdown for clarity, and suggest /code mode for deep coding work.";

export async function processWithAnthropic(
instruction: string,
apiKey: string,
model: string,
-  toolRegistry?: ToolRegistry,
): Promise<string> {
const startTime = Date.now();
logger.debug(`[Anthropic] Request → model=${model}, prompt=${instruction.length} chars`);

try {
const anthropic = new Anthropic({ apiKey });

-    const tools = toolRegistry
-      ? (toolRegistry.getDefinitionsForProvider("anthropic") as unknown as ToolUnion[])
-      : undefined;
-
-    const messages: MessageParam[] = [{ role: "user", content: instruction }];

-    for (let i = 0; i < MAX_ITERATIONS; i++) {
-      const iterStart = Date.now();
-      const response = await anthropic.messages.create({
-        model,
-        max_tokens: 4096,
-        system: loadSystemPrompt(),
-        messages,
-        ...(tools ? { tools } : {}),
-      });
-
-      logger.debug(
-        `[Anthropic] Response ← iteration=${i + 1}, stop=${response.stop_reason}, ` +
-          `tokens=${response.usage.input_tokens}in/${response.usage.output_tokens}out, ` +
-          `time=${Date.now() - iterStart}ms`,
-      );
-
-      const textParts = response.content
-        .filter((block: ContentBlock): block is TextBlock => block.type === "text")
-        .map((block: TextBlock) => block.text);
-
-      const toolCalls = response.content.filter(
-        (block: ContentBlock): block is ToolUseBlock => block.type === "tool_use",
-      );
-
-      if (toolCalls.length === 0 || !toolRegistry) {
-        logger.debug(`[Anthropic] Done in ${Date.now() - startTime}ms (${i + 1} iteration(s))`);
-        return textParts.join("\n") || "No response from Claude";
-      }
-
-      logger.debug(`[Anthropic] Tool calls: ${toolCalls.map((t) => t.name).join(", ")}`);
-
-      messages.push({ role: "assistant", content: response.content });
-
-      const toolResults: ToolResultBlockParam[] = [];
-      for (const toolUse of toolCalls) {
-        const result = await toolRegistry.execute(
-          toolUse.name,
-          toolUse.input as Record<string, unknown>,
-        );
-        toolResults.push({
-          type: "tool_result",
-          tool_use_id: toolUse.id,
-          content: result.output,
-        });
-      }
-
-      messages.push({ role: "user", content: toolResults });
-    }
+    const response = await anthropic.messages.create({
+      model,
+      max_tokens: 4096,
+      system: SYSTEM_PROMPT,
+      messages: [{ role: "user", content: instruction }],
+    });

+    const text = response.content
+      .filter((block): block is Anthropic.TextBlock => block.type === "text")
+      .map((block) => block.text)
+      .join("\n");

+    logger.debug(
+      `[Anthropic] Done in ${Date.now() - startTime}ms, ` +
+        `tokens=${response.usage.input_tokens}in/${response.usage.output_tokens}out`,
+    );

-    logger.warn(`[Anthropic] Reached max ${MAX_ITERATIONS} iterations`);
-    return "Reached maximum tool iterations.";
+    return text || "No response from Claude";
} catch (error: unknown) {
logger.error(`[Anthropic] API error after ${Date.now() - startTime}ms`, error);
throw new Error(
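The catch block above logs the elapsed time and rethrows, but the diff truncates the new `Error` message. The wrap-and-rethrow pattern it follows can be sketched as below; the helper name and message format are assumptions, not the PR's actual code:

```typescript
// Hypothetical wrap-and-rethrow helper; the real message text is cut off in the diff.
function wrapProviderError(provider: string, elapsedMs: number, error: unknown): Error {
  const detail = error instanceof Error ? error.message : String(error);
  // Keep the original detail so callers still see the root cause.
  return new Error(`[${provider}] API error after ${elapsedMs}ms: ${detail}`);
}
```

Wrapping preserves the underlying SDK error while adding the provider name and timing context that the router's caller needs.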