RLM (Recursive Language Model)

RLM is Gee-Code’s mode for handling complex tasks that benefit from persistent state, multi-step computation, and sandboxed code execution. When enabled, the AI gets a Python sandbox where it can execute code, store variables, and build up results iteratively.

In standard mode, the AI uses tools (Read, Write, Edit, Bash) to interact with the filesystem. In RLM mode, the AI additionally gets a persistent Python REPL:

Standard Mode:
AI -> Tool calls -> File system -> Results -> AI
RLM Mode:
AI -> Tool calls + Python sandbox -> Persistent state -> AI
|-- rlm_exec: Execute Python code
|-- rlm_get_var: Read sandbox variables
|-- rlm_chunk: Split large data for processing
|-- llm_query: Send chunks to a sub-LLM for analysis
|-- llm_batch: Process multiple chunks in parallel

The sandbox provides an isolated Python execution environment:

  • Persistent variables — variables survive across multiple exec calls
  • Standard library access — json, re, math, collections, itertools, datetime
  • File I/O through tools — file writes go through the tool system (not direct disk access)
  • Sub-LLM calls — send content to a smaller, faster model for analysis
  • Chunking — split large content at meaningful boundaries for parallel processing
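The persistence guarantee can be illustrated with plain Python. Below is a minimal sketch of how an exec-based sandbox keeps variables alive across calls; the `Sandbox` class and its method names are illustrative, not Gee-Code's actual implementation:

```python
# Minimal sketch of a persistent exec sandbox (illustrative only, not
# Gee-Code's real implementation). Variables written by one run() call
# remain visible to later calls because all snippets share one namespace.
class Sandbox:
    def __init__(self):
        self.namespace = {}

    def run(self, code: str):
        exec(code, self.namespace)

    def get_var(self, name: str):
        # Analogous to rlm_get_var: read a variable without re-running code.
        return self.namespace[name]


sandbox = Sandbox()
sandbox.run("totals = [1, 2, 3]")          # first call stores state
sandbox.run("grand_total = sum(totals)")   # second call reuses it
print(sandbox.get_var("grand_total"))      # prints 6
```

This is why intermediate results (parsed data, partial computations) do not have to be re-derived on every step: each exec call builds on the namespace left by the previous ones.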

RLM is most useful for:

  • Data transformation — processing large datasets, CSV manipulation, JSON restructuring
  • Complex analysis — tasks requiring multi-step reasoning with intermediate state
  • Large file processing — chunking and analyzing files too big for a single context window
  • Code generation — building up code programmatically (templates, scaffolding)
  • Context gathering — /context gather uses RLM for intelligent file discovery
Toggle RLM with the /rlm command:

/rlm on   # Enable RLM mode
/rlm off  # Disable RLM mode
/rlm      # Toggle RLM mode

When enabled, all subsequent messages use the RLM execution path. The status line shows when RLM is active.

Individual beads can enable RLM without turning it on globally:

Bead(
    title="Process data files",
    agent="code",
    rlm_enabled=True,
)

Tool          Description
rlm_exec      Execute Python code in the sandbox
rlm_get_var   Read the value of a sandbox variable
rlm_chunk     Split content into chunks (by size, lines, pattern, or separator)
llm_query     Send a chunk + query to a sub-LLM for analysis
llm_batch     Process multiple chunks in parallel
rlm_complete  Signal early completion when the answer is found
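As a rough sketch of how these tools compose, the snippet below mirrors a chunk-then-batch workflow. The Python-level function names follow the tool names above but the signatures are hypothetical, and the sub-LLM is stubbed out with a word counter so the example runs without a model backend:

```python
# Hypothetical chunk + batch workflow in the spirit of rlm_chunk and
# llm_batch. The "sub-LLM" is a stub that just counts words, so this
# sketch runs standalone.
from concurrent.futures import ThreadPoolExecutor

def rlm_chunk(content: str, size: int) -> list[str]:
    # Naive size-based chunking (the real tool is line-boundary aware).
    return [content[i:i + size] for i in range(0, len(content), size)]

def llm_query(chunk: str, query: str) -> str:
    # Stub sub-LLM: a real call would send chunk + query to a small model.
    return f"{len(chunk.split())} words"

def llm_batch(chunks: list[str], query: str) -> list[str]:
    # Process chunks in parallel, preserving input order.
    with ThreadPoolExecutor() as pool:
        return list(pool.map(lambda c: llm_query(c, query), chunks))

report = "alpha beta gamma delta " * 50
answers = llm_batch(rlm_chunk(report, 40), "Summarize this chunk")
print(len(answers))   # one answer per chunk
```

The pattern is map-reduce shaped: split, fan out to the sub-LLM, then the main model aggregates the per-chunk answers.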

Large content can be split intelligently:

  • strategy="size" — line-boundary aware chunks (default 8000 chars)
  • strategy="lines" — fixed line count chunks
  • strategy="pattern" — split at code boundaries (class/function definitions)
  • strategy="separator" — split at a separator string (paragraphs)

RLM operations show animated progress with timing:

> Local Analysis (12.3s)
|-- Step 1: Thinking (2.1s)
| |-- Read: app.tsx (0.3s)
| |-- Glob: **/*.tsx (0.1s)
|-- Step 2: Processing (1.2s)
| |-- rlm_exec: transform_data() ...
|-- Step 3: Pending

Aspect       Standard Mode                        RLM Mode
State        Stateless between tool calls         Persistent Python sandbox
Execution    Tool calls only                      Tool calls + Python REPL + sub-LLM
Best for     File editing, search, simple tasks   Complex computation, data processing
Token usage  Lower per interaction                Higher (multi-step sandbox)