Context Management
Context management controls what the AI can see. The context window has a fixed token budget (typically 200K tokens for Claude models), and how it’s used determines how well the AI can help you.
Context Window Structure
```
+------------------------------+
| System Prompt & Instructions |  <-- Fixed: agent role, tools, rules
+------------------------------+
| File Context (/add, /pick)   |  <-- User-controlled: pinned files
+------------------------------+
| Memory & Rules               |  <-- Auto: preferences, project guidance
+------------------------------+
| Conversation History         |  <-- Growing: messages + tool results
+------------------------------+
| Continuity Ledger            |  <-- Persistent: goals, decisions, state
+------------------------------+
```
As your conversation grows, history expands until it approaches the token limit. At that point, auto-compaction triggers.
Adding Files to Context
/pick — Interactive file picker
```
/pick
```
Opens a visual file browser. Navigate your project, preview files, and select what to add. This is the most common way to bring files into context.
/add — Pin specific files
```
/add src/auth/login.py      # Add a specific file
/add src/components/        # Add all files in a directory
/add src/components/*.tsx   # Glob patterns work too
```
Added files are included in every AI message. Use this for files the AI needs to reference repeatedly.
Ctrl+V — Paste anything
Paste text, code, error messages, or screenshots directly into the conversation. Gee-Code handles all content types.
/context gather — Smart context gathering
```
/context gather auth system
```
The AI analyzes your task and automatically discovers and adds relevant files. This uses RLM to scan your codebase intelligently.
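To make the discovery step concrete, here is a toy stand-in for the gathering loop. The real feature uses RLM, not keyword matching; the function name, the scoring heuristic, and the `max_files` cap below are all illustrative assumptions.

```python
import os

def gather_context(task: str, root: str, max_files: int = 5):
    """Toy stand-in for /context gather: rank files by how often the
    task's keywords appear in their path or contents, then return the
    top matches. (Gee-Code's actual scan is RLM-based, not keyword-based.)"""
    keywords = {w.lower() for w in task.split()}
    scored = []
    for dirpath, _dirs, names in os.walk(root):
        for name in names:
            path = os.path.join(dirpath, name)
            try:
                text = open(path, errors="ignore").read().lower()
            except OSError:
                continue  # unreadable file: skip it
            score = sum(text.count(k) + (k in path.lower()) for k in keywords)
            if score:
                scored.append((score, path))
    return [p for _score, p in sorted(scored, reverse=True)[:max_files]]
```

A ranked shortlist like this is what then gets pinned into context, exactly as if you had run /add on each result.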
/context — View and manage
```
/context         # Show what's currently in context
/remove <file>   # Remove a file from context
```
Context Budget
The system tracks:
- File count — up to 20 files in context
- Token estimate — roughly 100K tokens of file content
- Deduplication — files tracked by absolute path, no duplicates
When the budget is full, remove existing files before adding new ones.
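The bookkeeping described above can be sketched as follows. The class and method names, and the rough four-characters-per-token estimate, are illustrative assumptions rather than Gee-Code's actual implementation; the 20-file cap, 100K-token estimate, and dedup-by-absolute-path behavior come from the list above.

```python
import os

MAX_FILES = 20            # file count cap from the docs
MAX_FILE_TOKENS = 100_000 # rough token budget for file content

class ContextBudget:
    """Sketch of file-context bookkeeping: dedup by absolute path,
    a cap on file count, and a rough token estimate."""

    def __init__(self):
        self.files = {}  # absolute path -> estimated token count

    def estimate_tokens(self, text: str) -> int:
        # Crude heuristic: ~4 characters per token (assumption)
        return len(text) // 4

    def add(self, path: str, content: str) -> bool:
        key = os.path.abspath(path)
        if key in self.files:
            return True  # deduplication: already pinned
        tokens = self.estimate_tokens(content)
        if len(self.files) >= MAX_FILES:
            return False  # budget full: remove files before adding more
        if sum(self.files.values()) + tokens > MAX_FILE_TOKENS:
            return False
        self.files[key] = tokens
        return True

    def remove(self, path: str) -> None:
        self.files.pop(os.path.abspath(path), None)
```

In this sketch, /add maps to `add()` and /remove maps to `remove()`; a rejected `add()` is the "budget is full" case.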
Auto-Compaction
When conversation history pushes token usage past 80% of the model’s limit, compaction triggers automatically:
- Measure — count tokens across all content
- Summarize — an AI call condenses older messages into a recap
- Replace — older messages swapped for the summary
- Preserve — recent messages (at least 10) kept intact
After compaction, the conversation continues normally. The continuity ledger survives compaction, so critical state is never lost.
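The four steps above can be sketched roughly like this. The 80% threshold and the keep-at-least-10 rule come from the description; the function names, the placeholder token counter, and the `summarize_fn` callback are assumptions for illustration.

```python
COMPACTION_THRESHOLD = 0.80  # trigger past 80% of the model's limit
KEEP_RECENT = 10             # recent messages preserved intact

def count_tokens(messages):
    # Placeholder: a real counter would use the model's tokenizer
    return sum(len(m) // 4 for m in messages)

def maybe_compact(messages, model_limit, summarize_fn):
    """Measure -> Summarize -> Replace -> Preserve."""
    # 1. Measure: count tokens across all content
    if count_tokens(messages) <= COMPACTION_THRESHOLD * model_limit:
        return messages
    # 4. Preserve: the most recent messages are never summarized
    older, recent = messages[:-KEEP_RECENT], messages[-KEEP_RECENT:]
    if not older:
        return messages
    # 2. Summarize: one AI call condenses older messages into a recap
    recap = summarize_fn(older)
    # 3. Replace: swap the older messages for the summary
    return [recap] + recent
```

Note that the continuity ledger lives outside `messages` entirely, which is why it survives this swap untouched.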
Token Limits by Model
| Model | Context Limit |
|---|---|
| Claude Opus/Sonnet/Haiku | 200,000 tokens |
| GPT-4o | 128,000 tokens |
| Gemini 2.5 Flash/Pro | 200,000 tokens |
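Combining the table with the 80% compaction threshold gives the per-model trigger point. The dictionary keys and function name below are illustrative; the limits are from the table above.

```python
# Context limits from the table above (keys are illustrative labels)
CONTEXT_LIMITS = {
    "claude-opus": 200_000,
    "claude-sonnet": 200_000,
    "claude-haiku": 200_000,
    "gpt-4o": 128_000,
    "gemini-2.5-flash": 200_000,
    "gemini-2.5-pro": 200_000,
}

def compaction_trigger(model: str) -> int:
    """Token count at which auto-compaction fires (80% of the limit)."""
    return int(CONTEXT_LIMITS[model] * 0.80)
```

So on a 200K-token Claude model, compaction fires around 160K tokens; on GPT-4o, around 102K.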
Context in Sub-Agents
When Gee-Code delegates to a sub-agent (via /delegate or a bead):
- The sub-agent gets its own system prompt (agent-specific)
- It receives the task description only
- It does not inherit conversation history
This isolation is intentional — sub-agents explore independently and return focused results.
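The isolation rule can be made concrete with a minimal sketch; the type and function names are assumptions, not Gee-Code's internals.

```python
from dataclasses import dataclass

@dataclass
class SubAgentContext:
    """What a delegated sub-agent actually receives: its own system
    prompt plus the task description, and nothing else."""
    system_prompt: str
    task: str

def delegate(task: str, agent_system_prompt: str,
             conversation_history: list) -> SubAgentContext:
    # conversation_history is deliberately unused: sub-agents do not
    # inherit the parent conversation (isolation is intentional)
    return SubAgentContext(system_prompt=agent_system_prompt, task=task)
```

Because the parent history never crosses the boundary, a sub-agent's exploration cannot be biased by, or bloat, the main conversation.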
- Pin key files early — use /add for files you’ll reference throughout the session
- Watch utilization — large file contexts leave less room for conversation
- Let compaction work — don’t worry about long conversations; it preserves what matters
- Use the ledger — write important decisions there so they survive compaction
- Gather for complex tasks — /context gather helps the AI discover what it needs
- Images work too — /image screenshot.png adds visual context
Next Steps
- Sessions & Continuity — how state persists across sessions
- Memory System — the 3-layer persistent memory
- Configuration — customize context behavior