<p align="center">
  <a href="https://github.com/SuperagenticAI/rlm-code">
-   <img src="https://raw.githubusercontent.com/SuperagenticAI/rlm-code/main/assets/rlm-code-logo.png" alt="RLM Code logo" width="300">
+   <img src="https://github.com/SuperagenticAI/rlm-code/raw/main/assets/rlm-code-logo.png" alt="RLM Code logo" width="320">
  </a>
</p>

+<p align="center">
+  <a href="https://pypi.org/project/rlm-code/"><img alt="PyPI Version" src="https://img.shields.io/pypi/v/rlm-code"></a>
+  <a href="https://pypi.org/project/rlm-code/"><img alt="PyPI Python Versions" src="https://img.shields.io/pypi/pyversions/rlm-code"></a>
+  <a href="https://pypi.org/project/rlm-code/"><img alt="PyPI Downloads" src="https://img.shields.io/pypi/dm/rlm-code"></a>
+  <a href="https://pypi.org/project/rlm-code/"><img alt="PyPI Wheel" src="https://img.shields.io/pypi/wheel/rlm-code"></a>
+  <a href="LICENSE"><img alt="License" src="https://img.shields.io/pypi/l/rlm-code"></a>
+  <a href="https://github.com/SuperagenticAI/rlm-code/actions/workflows/ci.yml"><img alt="CI" src="https://github.com/SuperagenticAI/rlm-code/actions/workflows/ci.yml/badge.svg"></a>
+  <a href="https://github.com/SuperagenticAI/rlm-code/actions/workflows/pre-commit.yml"><img alt="Pre-commit" src="https://github.com/SuperagenticAI/rlm-code/actions/workflows/pre-commit.yml/badge.svg"></a>
+  <a href="https://github.com/SuperagenticAI/rlm-code/actions/workflows/deploy-docs.yml"><img alt="Docs Deploy" src="https://github.com/SuperagenticAI/rlm-code/actions/workflows/deploy-docs.yml/badge.svg"></a>
+  <a href="https://github.com/SuperagenticAI/rlm-code/actions/workflows/release.yml"><img alt="Release" src="https://github.com/SuperagenticAI/rlm-code/actions/workflows/release.yml/badge.svg"></a>
+  <a href="https://github.com/SuperagenticAI/rlm-code/stargazers"><img alt="GitHub Stars" src="https://img.shields.io/github/stars/SuperagenticAI/rlm-code?style=social"></a>
+  <a href="https://github.com/SuperagenticAI/rlm-code/issues"><img alt="GitHub Issues" src="https://img.shields.io/github/issues/SuperagenticAI/rlm-code"></a>
+  <a href="https://github.com/SuperagenticAI/rlm-code/pulls"><img alt="GitHub Pull Requests" src="https://img.shields.io/github/issues-pr/SuperagenticAI/rlm-code"></a>
+</p>
+
**Run LLM-powered agents in a REPL loop, benchmark them, and compare results.**

-RLM Code implements the [Recursive Language Models](https://arxiv.org/abs/2502.07503) (RLM) approach from the 2025 paper release. Instead of stuffing your entire document into the LLM's context window, RLM stores it as a Python variable and lets the LLM write code to analyze it — chunk by chunk, iteration by iteration. This is dramatically more token-efficient for large inputs.
+RLM Code implements the [Recursive Language Models](https://arxiv.org/abs/2502.07503) (RLM) approach from the 2025 paper. Instead of stuffing your entire document into the LLM's context window, RLM stores it as a Python variable and lets the LLM write code to analyze it, chunk by chunk, iteration by iteration. This is dramatically more token-efficient for large inputs.

RLM Code wraps this algorithm in an interactive terminal UI with built-in benchmarks, trajectory replay, and observability.
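To make the loop concrete, here is a minimal sketch of the RLM pattern in plain Python. It is illustrative only: `ask_llm` stands in for a provider call, and the variable name `doc`, the step budget, and the output truncation are assumptions, not RLM Code's actual internals.

```python
import contextlib
import io


def ask_llm(prompt: str) -> str:
    """Stand-in for a provider call (Anthropic, OpenAI, Gemini, Ollama, ...)."""
    raise NotImplementedError


def rlm_loop(document: str, question: str, max_steps: int = 10) -> str:
    env = {"doc": document}  # the document lives in a variable, not in prompt tokens
    transcript = f"A variable `doc` holds the input (len={len(document)}). Question: {question}"
    for _ in range(max_steps):
        reply = ask_llm(transcript + "\nWrite Python that inspects `doc`, or reply FINAL: <answer>.")
        if reply.startswith("FINAL:"):
            return reply[len("FINAL:"):].strip()
        buf = io.StringIO()
        try:
            with contextlib.redirect_stdout(buf):
                exec(reply, env)  # run the model's code against the stored document
            output = buf.getvalue()[:2000]  # truncate so the context stays small
        except Exception as exc:
            output = f"Error: {exc}"
        transcript += f"\n>>> {reply}\n{output}"
    return "No answer within the step budget."
```

Only the printed output of each snippet re-enters the context, so the per-step token cost stays roughly constant no matter how large the stored document is.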
@@ -41,6 +56,10 @@ pip install rlm-code[tui,llm-all]
```
</details>

+<p align="center">
+  <img src="https://github.com/SuperagenticAI/rlm-code/raw/main/assets/rlm-lab.png" alt="RLM Research Lab view" width="980">
+</p>
+
## Quick Start

### 1. Launch
@@ -78,7 +97,7 @@ or for a free local model via [Ollama](https://ollama.com/):
/connect ollama llama3.2
```

-> You need the matching API key in your environment (`ANTHROPIC_API_KEY`, `OPENAI_API_KEY`, `GEMINI_API_KEY`) or in a `.env` file in your project directory. Ollama needs no key — just a running Ollama server.
+> You need the matching API key in your environment (`ANTHROPIC_API_KEY`, `OPENAI_API_KEY`, `GEMINI_API_KEY`) or in a `.env` file in your project directory. Ollama needs no key, just a running Ollama server.

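For example, a `.env` file in the project root might look like this (the values are placeholders; set only the providers you plan to use):

```
ANTHROPIC_API_KEY=sk-ant-...
OPENAI_API_KEY=sk-...
GEMINI_API_KEY=...
```
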
Or follow the interactive path by running `/connect` with no arguments. Check that it worked:

@@ -126,7 +145,7 @@ After at least two benchmark runs, export a compare report:
/rlm replay <run_id>
```

-Walk through the last run one step at a time — see what code the LLM wrote, what output it got, and what it did next.
+Walk through the last run one step at a time: see what code the LLM wrote, what output it got, and what it did next.

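Conceptually, each replayed step is a record of what the model wrote, what running it produced, and what the model decided next. A hypothetical shape for such a record (not RLM Code's actual trajectory format):

```python
from dataclasses import dataclass


@dataclass
class TrajectoryStep:
    """Hypothetical shape of one replay step; illustrative only."""

    step: int          # position within the run
    code: str          # the Python the LLM wrote at this iteration
    output: str        # what executing that code printed
    next_action: str   # e.g. "continue" or "final_answer"
```
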
### 7. Use RLM Code as a coding agent (local/BYOK/ACP)

@@ -218,7 +237,7 @@ If a run is getting out of hand:
## What You Can Do With It

-- **Analyze large documents**: Feed in a 500-page PDF and ask questions — the LLM reads it in chunks via code
+- **Analyze large documents**: Feed in a 500-page PDF and ask questions; the LLM reads it in chunks via code (see the sketch after this list)
- **Compare models**: Run the same benchmark with different providers and see who scores higher
- **Compare paradigms**: Test Pure RLM vs CodeAct vs Traditional approaches on the same task
- **Debug agent behavior**: Replay any run step-by-step to see exactly what the agent did
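For instance, in a large-document run the agent typically starts with cheap probe code like the following against the pre-loaded variable (a hypothetical snippet in the spirit of the sketch above; `doc` as the variable name is an assumption):

```python
# Locate candidate regions with cheap string scans before reading any
# chunk in detail; only this printed summary re-enters the context.
chunk = 5000
hits = [i for i in range(0, len(doc), chunk) if "liability" in doc[i:i + chunk].lower()]
print(f"doc length: {len(doc):,} chars; candidate chunk offsets: {hits[:10]}")
```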