RLM (Recursive Language Model)
RLM is Gee-Code’s mode for handling complex tasks that benefit from persistent state, multi-step computation, and sandboxed code execution. When enabled, the AI gets a Python sandbox where it can execute code, store variables, and build up results iteratively.
How RLM Works
In standard mode, the AI uses tools (Read, Write, Edit, Bash) to interact with the filesystem. In RLM mode, the AI additionally gets a persistent Python REPL:
```
Standard Mode:  AI -> Tool calls -> File system -> Results -> AI

RLM Mode:       AI -> Tool calls + Python sandbox -> Persistent state -> AI
                |-- rlm_exec:    Execute Python code
                |-- rlm_get_var: Read sandbox variables
                |-- rlm_chunk:   Split large data for processing
                |-- llm_query:   Send chunks to a sub-LLM for analysis
                |-- llm_batch:   Process multiple chunks in parallel
```
The Sandbox
The sandbox provides an isolated Python execution environment:
- Persistent variables — variables survive across multiple exec calls
- Standard library access — json, re, math, collections, itertools, datetime
- File I/O through tools — file writes go through the tool system (not direct disk access)
- Sub-LLM calls — send content to a smaller, faster model for analysis
- Chunking — split large content at meaningful boundaries for parallel processing
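The persistence property above can be illustrated with a minimal sketch. This is not Gee-Code's actual implementation; it only shows the idea that successive `exec` calls share one namespace, so variables defined in one call are visible in the next (the `Sandbox` class and its method names are hypothetical):

```python
# Minimal sketch of a persistent sandbox (illustrative, not the real
# implementation): every exec call runs against the same namespace dict,
# so variables survive across calls.

class Sandbox:
    def __init__(self):
        # One shared namespace = persistent variables.
        self.namespace = {}

    def exec(self, code: str):
        exec(code, self.namespace)

    def get_var(self, name: str):
        # Analogous to rlm_get_var: read a variable back out.
        return self.namespace[name]

sandbox = Sandbox()
sandbox.exec("totals = [1, 2, 3]")          # first call defines a variable
sandbox.exec("grand_total = sum(totals)")   # second call still sees it
print(sandbox.get_var("grand_total"))       # -> 6
```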
When to Use RLM
Section titled “When to Use RLM”RLM is most useful for:
- Data transformation — processing large datasets, CSV manipulation, JSON restructuring
- Complex analysis — tasks requiring multi-step reasoning with intermediate state
- Large file processing — chunking and analyzing files too big for a single context window
- Code generation — building up code programmatically (templates, scaffolding)
- Context gathering — /context gather uses RLM for intelligent file discovery
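The code-generation use case above can be as lightweight as filling templates inside the sandbox. A hedged sketch using only the standard library (the component template and names here are hypothetical, not part of Gee-Code):

```python
# Illustrative: building scaffolding programmatically with string.Template.
from string import Template

component = Template(
    "export function $name() {\n"
    "  return <div>$name</div>;\n"
    "}\n"
)

# Generate several components in one pass; in RLM mode the result would
# live on as a sandbox variable for later exec calls to refine.
generated = {n: component.substitute(name=n) for n in ["Header", "Footer"]}
print(generated["Header"])
```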
Enabling RLM
```
/rlm on    # Enable RLM mode
/rlm off   # Disable
/rlm       # Toggle
```
When enabled, all subsequent messages use the RLM execution path. The status line shows when RLM is active.
Per-Bead RLM
Individual beads can enable RLM without turning it on globally:
```python
Bead(
    title="Process data files",
    agent="code",
    rlm_enabled=True
)
```
RLM Tools
| Tool | Description |
|---|---|
| rlm_exec | Execute Python code in the sandbox |
| rlm_get_var | Read the value of a sandbox variable |
| rlm_chunk | Split content into chunks (by size, lines, pattern, or separator) |
| llm_query | Send a chunk + query to a sub-LLM for analysis |
| llm_batch | Process multiple chunks in parallel |
| rlm_complete | Signal early completion when the answer is found |
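The parallelism behind llm_batch can be sketched conceptually: fan chunks out to a sub-LLM concurrently and collect the answers in input order. This is an assumption about the mechanism, not Gee-Code's code; `query_sub_llm` is a hypothetical stand-in for the real sub-LLM call:

```python
# Conceptual sketch of llm_batch: run sub-LLM queries concurrently,
# preserving the order of the input chunks in the results.
from concurrent.futures import ThreadPoolExecutor

def query_sub_llm(chunk: str, query: str) -> str:
    # Stand-in: a real implementation would call a smaller, faster model.
    return f"{query}: {len(chunk)} chars"

def llm_batch(chunks, query, max_workers=4):
    # executor.map keeps results in input order even though the
    # calls themselves run concurrently.
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(lambda c: query_sub_llm(c, query), chunks))

results = llm_batch(["chunk one", "chunk two"], "summarize")
```

Order preservation matters here: downstream code can zip results back to their source chunks without bookkeeping.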
Chunking Strategies
Large content can be split intelligently:
- strategy="size" — line-boundary aware chunks (default 8000 chars)
- strategy="lines" — fixed line count chunks
- strategy="pattern" — split at code boundaries (class/function definitions)
- strategy="separator" — split at a separator string (paragraphs)
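The "size" strategy above can be sketched as follows: accumulate whole lines until adding the next one would exceed the limit, so chunks break at line boundaries rather than mid-line. This is assumed behavior for illustration, not the exact rlm_chunk implementation:

```python
# Illustrative "size" strategy: line-boundary aware chunking.
def chunk_by_size(text: str, max_chars: int = 8000):
    chunks, current, current_len = [], [], 0
    for line in text.splitlines(keepends=True):
        # Start a new chunk if adding this line would exceed the limit.
        if current and current_len + len(line) > max_chars:
            chunks.append("".join(current))
            current, current_len = [], 0
        current.append(line)
        current_len += len(line)
    if current:
        chunks.append("".join(current))
    return chunks

text = "\n".join(f"line {i}" for i in range(10))
parts = chunk_by_size(text, max_chars=30)
# Every chunk ends on a line boundary, and joining them restores the input.
```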
RLM Display
RLM operations show animated progress with timing:
```
> Local Analysis (12.3s)
|-- Step 1: Thinking (2.1s)
|   |-- Read: app.tsx (0.3s)
|   |-- Glob: **/*.tsx (0.1s)
|-- Step 2: Processing (1.2s)
|   |-- rlm_exec: transform_data() ...
|-- Step 3: Pending
```
RLM vs Standard Mode
| Aspect | Standard Mode | RLM Mode |
|---|---|---|
| State | Stateless between tool calls | Persistent Python sandbox |
| Execution | Tool calls only | Tool calls + Python REPL + sub-LLM |
| Best for | File editing, search, simple tasks | Complex computation, data processing |
| Token usage | Lower per interaction | Higher (multi-step sandbox) |
Next Steps
- Planning & Execution — task decomposition with beads
- Tools Overview — the complete tool list
- Context Management — /context gather uses RLM