How It Works

Gee-Code operates as an agentic loop — you send a message, the AI processes it, calls tools as needed, and returns results. This loop continues until the task is complete.

You type a message
|
AI receives message + context (files, memory, tools)
|
AI decides: respond directly OR call tools
|
Tool results fed back to AI
|
AI continues until task complete

Each iteration is called a turn. A single user message can trigger multiple turns as the AI reads files, edits code, runs commands, and verifies results.
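The loop above can be sketched in a few lines. This is an illustrative sketch, not Gee-Code's actual internals: the function shape, message format, and tool registry are all assumptions.

```python
# Hypothetical sketch of the agentic loop: one model call per turn,
# tool results fed back until the model responds without tool calls.
def run_turns(message, model, tools, max_turns=50):
    """Feed a user message to the model, executing tool calls until done."""
    history = [{"role": "user", "content": message}]
    for _ in range(max_turns):
        reply = model(history)                 # one AI inference = one turn
        if not reply.get("tool_calls"):
            return reply["content"]            # plain response: task complete
        for call in reply["tool_calls"]:
            result = tools[call["name"]](**call["args"])
            history.append({"role": "tool", "content": result})
    return "(paused: turn limit reached)"
```

A single user message can drive many passes through this loop before the final response comes back.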

Before each AI call, Gee-Code assembles context from multiple sources:

Source              Content
System prompt       Tool definitions, agent instructions, rules
User rules          ~/.gee-code/gee.md — your global preferences
Project rules       .gee/gee.md — project-specific guidance
File context        Files added via /add, /pick, or Ctrl+V
Memory              Relevant facts from the 3-layer memory system
Session history     Previous messages in the conversation
Continuity ledger   Persistent state that survives context compaction

The AI sees all of this combined into a single context window before deciding how to respond.
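Context assembly can be pictured as concatenating those sources in order. The sketch below is hypothetical: the function name, session shape, and separator are invented; only the source list and file paths come from the table above.

```python
# Illustrative context assembly: combine all sources into one window.
from pathlib import Path

def assemble_context(session, rule_paths=None):
    if rule_paths is None:
        # Default locations from the docs: user rules, then project rules.
        rule_paths = [Path.home() / ".gee-code/gee.md", Path(".gee/gee.md")]
    parts = [session["system_prompt"]]        # tool defs, agent instructions
    for rules in rule_paths:
        if rules.exists():
            parts.append(rules.read_text())   # global, then project guidance
    parts += session["files"]                 # files added via /add, /pick, Ctrl+V
    parts += session["memory"]                # relevant recalled facts
    parts += session["history"]               # previous conversation messages
    parts.append(session["ledger"])           # continuity ledger
    return "\n\n".join(parts)                 # one combined context window
```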

Gee-Code supports two execution modes:

Local mode: tools execute directly on your machine. The MCP server runs locally, file operations hit your filesystem, and bash commands run in your shell. This is the most common setup.

Server mode: tools execute on the Gee backend. This is useful for accessing server-side capabilities (web search, email, calendar, Google Drive) or for running in environments without local tool access.

Switch modes:

/exec local # Run tools locally
/exec server # Run tools on Gee backend
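The split between the two modes amounts to a dispatch decision per tool call. This is a minimal sketch under stated assumptions: the dispatcher, its signature, and the stubbed backend call are all invented for illustration.

```python
# Hypothetical mode dispatcher: local calls run on your machine,
# server calls would be forwarded to the Gee backend.
import subprocess

def run_tool(mode, name, args):
    if mode == "local":
        if name == "bash":
            # Bash commands run in your shell on the local machine.
            return subprocess.run(args["cmd"], shell=True,
                                  capture_output=True, text=True).stdout
        raise ValueError(f"unknown local tool: {name}")
    if mode == "server":
        # Server mode would forward the call to the Gee backend
        # (web search, email, calendar, ...); stubbed out here.
        return f"(forwarded {name} to Gee backend)"
    raise ValueError(f"unknown mode: {mode}")
```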

Some tools need interactive input — AskUserQuestion presents options and waits for a selection. But the MCP server that hosts Gee-Code’s tools runs headless. It has no terminal to render a prompt in.

Question IPC solves this with a file-based bridge:

MCP Server                            REPL / Daemon
    |                                       |
    |  writes .question.json                |
    | ────────────────────────────────────► |
    |                                       | detects file, renders prompt
    |                                       | user selects an option
    |                                       | writes .answer.json
    | ◄──────────────────────────────────── |
    |  reads answer, resumes                |

Each Gee-Code session gets its own IPC directory under /tmp/gee-code-question-ipc/. The MCP server and REPL share a session ID via the GEE_IPC_SESSION environment variable so both sides read and write the same directory.

  1. The AI calls AskUserQuestion during a task
  2. The MCP server writes a .question.json file with the question data and a unique request ID
  3. The MCP server polls for a matching .answer.json file (200ms intervals, 5-minute timeout)
  4. The REPL’s question watcher detects the pending question file
  5. The REPL renders an interactive selector in your terminal
  6. You pick an option; the REPL writes .answer.json
  7. The MCP server reads the answer, cleans up both files, and returns the result to the AI
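The server side of this handshake can be sketched as a write-then-poll routine. The file names and the 200 ms / 5-minute numbers come from the steps above; the function name and JSON fields are assumptions.

```python
# Hypothetical MCP-server side of Question IPC: write the question file,
# poll for the matching answer file, clean up both on success.
import json, os, time, uuid

def ask_user_question(ipc_dir, question, timeout=300.0, interval=0.2):
    request_id = uuid.uuid4().hex
    qfile = os.path.join(ipc_dir, f"{request_id}.question.json")
    afile = os.path.join(ipc_dir, f"{request_id}.answer.json")
    with open(qfile, "w") as f:
        json.dump({"id": request_id, "question": question}, f)
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:        # poll at 200 ms intervals
        if os.path.exists(afile):
            with open(afile) as f:
                answer = json.load(f)
            os.remove(qfile)                  # clean up both files
            os.remove(afile)
            return answer["choice"]
        time.sleep(interval)
    os.remove(qfile)
    raise TimeoutError("no answer within the 5-minute window")
```

The REPL plays the other role: it watches the directory for `.question.json` files, renders the selector, and writes the `.answer.json` that unblocks the poll.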

When a Gee runs as a daemon, there is no terminal to render prompts. The daemon’s question watcher intercepts pending questions and routes them through the approval system — which can notify you via SMS, email, or web UI and wait for a response.

Stale questions (older than 5 minutes) are automatically cleaned up. When a session ends, all IPC files for that session are removed.

Every tool has a danger level that determines whether it needs your approval:

Level     Examples                                Behavior
Safe      Read, Glob, Grep, RecallMemory          Runs automatically
Caution   Edit, Write, Bash, Git                  May require approval based on your settings
Danger    Force push, hard reset, file deletion   Always requires explicit approval

When Gee-Code wants to run a caution or danger-level tool, you’ll see an approval prompt:

[1] Run [2] Always allow [3] Skip [4] Other

Choose Always allow to auto-approve that tool for the rest of the session. Choose Skip to reject it and suggest a different approach.
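The gating logic implied by the table and prompt can be sketched as follows. The safe/caution tool names come from the table above; the danger-level identifiers, function name, and settings key are illustrative assumptions.

```python
# Hypothetical danger-level gate. Levels mirror the table; "ForcePush"
# etc. are invented identifiers for the danger examples.
LEVELS = {
    "Read": "safe", "Glob": "safe", "Grep": "safe", "RecallMemory": "safe",
    "Edit": "caution", "Write": "caution", "Bash": "caution", "Git": "caution",
    "ForcePush": "danger", "HardReset": "danger", "DeleteFile": "danger",
}

def needs_approval(tool, always_allowed, settings):
    level = LEVELS.get(tool, "caution")           # unknown tools treated cautiously
    if level == "safe":
        return False                              # runs automatically
    if level == "danger":
        return True                               # always requires explicit approval
    if tool in always_allowed:
        return False                              # "Always allow" for this session
    return settings.get("approve_caution", True)  # caution: depends on your settings
```

Note that "Always allow" only bypasses caution-level prompts here; danger-level tools still stop for approval every time.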

For autonomous Gees, approvals are managed through guardrails instead.

By default, the AI pauses after 50 iterations to prevent runaway loops. This means you stay in control even for complex tasks.

Adjust the limit:

/iterations 100 # Allow more iterations
/iterations 10 # Tighter control
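The cap behaves as a pause, not an abort: work stops at the limit and control returns to you. A tiny sketch, with an invented function name and return shape:

```python
# Illustrative iteration cap: run step() until done or the limit is hit.
def run_with_cap(step, limit=50):
    for i in range(1, limit + 1):
        if step():                 # step() returns True when the task is done
            return ("done", i)
    return ("paused", limit)       # hit the cap: control returns to you
```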

When you type a message, here’s the full pipeline:

  1. Context assembly — files, memory, rules, and history are gathered
  2. Agent routing — your message goes to the current agent (default or specialized)
  3. AI inference — the model processes everything and decides on actions
  4. Tool execution — any tool calls are executed locally or on the server
  5. Result feeding — tool results go back to the AI for the next turn
  6. Completion — the AI responds when the task is done or it needs your input

This loop can run for many turns on a single message — reading files, making edits, running tests, fixing failures, and verifying results.