RLM (Recursive Language Models — Zhang, Kraska, Khattab, MIT CSAIL 2026) is a divide-and-conquer pattern for processing inputs that exceed context windows. The core insight: treat files as symbols (metadata only in context, never raw content), delegate semantic work to sub-agents, recurse bounded by max depth.
The reference implementation (claude-rlm) uses bash scripts, tmux sessions, and sentinel files. We don't need any of that — gent already has DelegateTool (parallel + chain modes), SubagentRunnerService, AgentActor, and BashTool. The RLM value is the programming model in the prompt, not the scaffolding.
Add an "rlm" agent whose systemPromptAddendum teaches the file-symbol mental model and decomposition strategies. The agent uses existing tools (bash, delegate, read, write, glob, grep) — no new tools needed. The one structural addition: depth propagation through the sub-agent pipeline so recursion is bounded.
File: packages/core/src/agent.ts
- Add `"rlm"` to the `AgentName` literal union
- Add an `RLM_PROMPT` constant with the programming model instructions (file-symbol protocol, decomposition strategy, workspace convention, depth awareness)
- Add `rlm` to the `Agents` object: `allowedTools: ["read", "write", "bash", "glob", "grep", "delegate"]` (any agent with `delegate` can target any registered agent, including itself for recursion)
- Add `rlm` to `AgentModels` with a sensible default (sonnet), but this is overridable — see step 2
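The registration above might look like the following minimal sketch. The surrounding shapes (`AgentDef`, the other union members, the addendum wording) are assumptions for illustration, not the actual agent.ts contents:

```typescript
// Hypothetical shapes mirroring packages/core/src/agent.ts.
type AgentName = "cowork" | "rlm"; // extend the existing literal union with "rlm"

interface AgentDef {
  systemPromptAddendum: string;
  allowedTools: string[];
}

// Abridged sketch of the RLM programming-model addendum.
const RLM_PROMPT = [
  "Treat files as symbols: keep only metadata (path, size, line count) in context.",
  "Decompose oversized inputs into chunks under a workspace directory, then delegate.",
  "You may delegate to the 'rlm' agent recursively; respect the depth status line.",
].join("\n");

const Agents: Record<AgentName, AgentDef> = {
  cowork: { systemPromptAddendum: "", allowedTools: ["read", "write", "bash", "delegate"] },
  rlm: {
    systemPromptAddendum: RLM_PROMPT,
    allowedTools: ["read", "write", "bash", "glob", "grep", "delegate"],
  },
};

// Fallback default only; callers may override per delegation (see step 2).
const AgentModels: Record<AgentName, string> = {
  cowork: "sonnet",
  rlm: "sonnet",
};
```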
File: packages/core/src/agent.ts
Add to SubagentRunner.run params:
- `rlmDepth?: number` — current recursion depth
- `rlmMaxDepth?: number` — max recursion depth
- `rlmModel?: string` — model override for RLM sub-agents

The mechanism is model-agnostic: the caller picks the model. The DelegateTool params will accept an optional model field, and the RLM system prompt will teach the agent it can specify a model when delegating. The `AgentModels["rlm"]` entry serves as a fallback default only.
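A sketch of the extended run params and the resolution order described above. The interface name and surrounding fields are assumptions; only the three `rlm*` fields come from this plan:

```typescript
// Hypothetical param shape for SubagentRunner.run.
interface SubagentRunParams {
  agentName: string;
  prompt: string;
  rlmDepth?: number;    // current recursion depth
  rlmMaxDepth?: number; // max recursion depth
  rlmModel?: string;    // model override for RLM sub-agents
}

// Stand-in for the AgentModels fallback table.
const fallbackModel: Record<string, string> = { rlm: "sonnet" };

// Resolution order per the plan: explicit override first, then the
// per-agent default, then a last-resort default.
function resolveModel(params: SubagentRunParams): string {
  return params.rlmModel ?? fallbackModel[params.agentName] ?? "sonnet";
}
```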
File: packages/core/src/tool.ts
Add to ToolContext:
- `rlmDepth?: number`
- `rlmMaxDepth?: number`
- `rlmModel?: string`
This lets the delegate tool read the current depth/model from context (injected by AgentActor) rather than requiring the agent to manually track it.
File: packages/core/src/defaults.ts
Add `rlmMaxDepth: 3`.
File: packages/runtime/src/agent/agent-loop.ts
- Add to `AgentRunInputFields`: `rlmDepth`, `rlmMaxDepth`, `rlmModel` (all `Schema.UndefinedOr(...)`)
- In `AgentActor.runEffect`: when building `ToolContext` for tool calls (line ~1139), include `rlmDepth`, `rlmMaxDepth`, `rlmModel` from `input`
- If `input.rlmModel` is set, use it instead of `resolveAgentModelId(agent.name)` for the provider call
- In `AgentActor.runEffect`: when building the system prompt for an rlm agent, append a depth status line: `## RLM Depth — Current depth: {N}/{max}. Remaining: {max - N}.` At max depth: `"MAX DEPTH REACHED. Process directly, do not delegate further."`
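The depth status line could be built by a small helper like this sketch; the function name and wiring into the prompt builder are assumptions, but the wording follows the plan:

```typescript
// Builds the depth status line appended to the rlm agent's system prompt.
function rlmDepthStatus(depth: number, maxDepth: number): string {
  const lines = [
    "## RLM Depth",
    `Current depth: ${depth}/${maxDepth}. Remaining: ${maxDepth - depth}.`,
  ];
  if (depth >= maxDepth) {
    // Hard stop: at max depth the agent must process directly.
    lines.push("MAX DEPTH REACHED. Process directly, do not delegate further.");
  }
  return lines.join("\n");
}
```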
File: packages/runtime/src/agent/subagent-runner.ts
In InProcessRunner, pass rlmDepth, rlmMaxDepth, and rlmModel from params through to actor.run(...).
File: packages/core/src/tools/delegate.ts
- Add optional `model` field to `TaskParams` and `TaskItem` schemas — allows the agent to specify which model sub-agents should use
- When calling `runner.run` for an agent named `"rlm"`:
  - Read current depth from `ctx.rlmDepth` (set by AgentActor for rlm sub-agents)
  - Auto-increment: `rlmDepth: (ctx.rlmDepth ?? 0) + 1`
  - Pass `rlmMaxDepth: ctx.rlmMaxDepth ?? DEFAULTS.rlmMaxDepth`
  - Pass `rlmModel: params.model ?? ctx.rlmModel` (explicit param wins, then inherit from parent)
- When a non-rlm agent delegates to rlm for the first time (`ctx.rlmDepth` is undefined), seed at depth 0
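The depth-propagation rules above can be sketched as one helper. Names and shapes are assumptions mirroring this plan; note that the seed-at-0 rule for a first delegation is folded in explicitly, since the plan's `(ctx.rlmDepth ?? 0) + 1` expression alone would seed at 1 rather than 0 (the worked example below shows the first rlm at depth 0):

```typescript
const DEFAULTS = { rlmMaxDepth: 3 };

interface ToolContext {
  rlmDepth?: number;
  rlmMaxDepth?: number;
  rlmModel?: string;
}

interface DelegateParams {
  agent: string;
  model?: string;
}

type RlmRunFields = { rlmDepth?: number; rlmMaxDepth?: number; rlmModel?: string };

// Extra fields the delegate tool passes to runner.run for rlm targets.
function rlmRunFields(ctx: ToolContext, params: DelegateParams): RlmRunFields {
  if (params.agent !== "rlm") return {}; // depth tracking only applies to rlm
  return {
    // First delegation from a non-rlm parent seeds depth 0; otherwise increment.
    rlmDepth: ctx.rlmDepth === undefined ? 0 : ctx.rlmDepth + 1,
    rlmMaxDepth: ctx.rlmMaxDepth ?? DEFAULTS.rlmMaxDepth,
    // Explicit param wins, then inherit from the parent context.
    rlmModel: params.model ?? ctx.rlmModel,
  };
}
```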
No changes needed:
- Provider interface — sub-agents already use it
- Storage / SQLite schema — sub-agents already create sessions/branches
- `ToolRegistry` — rlm uses existing tools
- EventStore — `SubagentSpawned`/`Succeeded`/`Failed` already capture lifecycle
- `buildSystemPrompt` in `system-prompt.ts` — RLM instructions live in `systemPromptAddendum`
```
Cowork: "Analyze this 50k-line codebase"
└─ delegate(agent: "rlm", task: "Analyze /path")
   └─ RLM depth=0/3
      ├─ bash: wc -l, head — inspects metadata
      ├─ bash: mkdir -p .rlm/analyze/{chunks,results}
      ├─ bash: split into chunks/
      ├─ delegate(tasks: [
      │    {agent: "rlm", prompt: "Analyze chunk-01"},
      │    {agent: "rlm", prompt: "Analyze chunk-02"},
      │  ])
      │  ├─ RLM depth=1/3: reads chunk, writes result
      │  └─ RLM depth=1/3: reads chunk, writes result
      └─ bash: cat results/* → final synthesis
```
- Depth bound: `DEFAULTS.rlmMaxDepth = 3`. Hard stop in system prompt + could add enforcement in delegate tool.
- Concurrency bound: existing `MAX_PARALLEL_TASKS = 8`, `MAX_CONCURRENCY = 4` in delegate.ts.
- Cost: model is the caller's choice. Default fallback in `AgentModels` is sonnet. The agent or user can override per-invocation via the `model` field on delegate params.
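If the optional hard enforcement in the delegate tool is wanted, it could be a guard like this sketch (the function name is an assumption; `DEFAULTS` mirrors packages/core/src/defaults.ts):

```typescript
const DEFAULTS = { rlmMaxDepth: 3 };

// Throws when an rlm agent at max depth tries to delegate further.
// The system-prompt hard stop should prevent this; throwing makes the
// bound robust against a model that ignores its instructions.
function assertDepthAllowed(ctx: { rlmDepth?: number; rlmMaxDepth?: number }): void {
  const depth = ctx.rlmDepth ?? 0;
  const max = ctx.rlmMaxDepth ?? DEFAULTS.rlmMaxDepth;
  if (depth >= max) {
    throw new Error(`RLM max depth ${max} reached; process the input directly instead of delegating.`);
  }
}
```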
- `bun run typecheck` — ensure AgentName union, new fields, etc. compile clean
- `bun run lint` — no-any, no floating promises
- `bun run test` — existing tests pass (new fields are optional, backwards compatible)
- Manual smoke test: `bun run --cwd apps/tui dev -H "Use the rlm agent to analyze the packages/ directory"` — verify it spawns sub-agents, creates workspace, produces aggregated output
- Verify depth enforcement: check that at max depth the agent processes directly instead of delegating