AI coding assistants are powerful but forgetful.
Context Pilot gives them persistent memory, structured context,
and self-correcting workflows — all from your terminal.
$ git clone https://github.com/bigmoostache/context-pilot && cd context-pilot && ./deploy_local.sh
Every AI coding assistant — Cursor, Copilot, Claude Code — starts each session blind. No memory of your architecture. No understanding of your conventions. No awareness of what it broke last time. You spend half your time re-explaining context that should already be there.
Every session starts from zero. Decisions, preferences, architecture patterns — all lost.
Files opened randomly. Stale context piling up. No systematic way to manage what the AI sees.
AI edits your code. Compilation breaks. Tests fail. You only find out minutes later.
A terminal application that wraps around any LLM, giving it structured context management, persistent memory, file editing callbacks, and 58 specialized tools — all orchestrated through a beautiful TUI.
Context Pilot gives your AI a structured view of your entire project. Interactive directory tree with descriptions, smart file opening, git integration, and GitHub issue/PR awareness. Your AI always knows where it is and what changed.
├── ▼ src/          Main source: ~15K lines
│   ├── ▼ app/
│   │   ├── context.rs
│   │   ├── events.rs
│   │   └── mod.rs
│   ├── ▼ modules/  14 modules
│   └── main.rs
├── Cargo.toml
└── README.md
Persistent memory, timestamped logs, conversation history, and scratchpads survive across sessions and even TUI restarts. Your AI remembers architecture decisions, user preferences, project conventions, and past mistakes.
M1  Project uses workspace with 14 crates      high
M2  User prefers explicit error handling       high
M3  Always run clippy before committing        critical
M4  API keys stored in .env, never committed   high
M5  Deploy via deploy_local.sh script          medium
File edit callbacks fire automatically on every change — run cargo check, clippy, tests, formatters, or any custom script. Blocking callbacks halt the AI until the check passes. Your AI self-corrects before you even notice the bug.
CB1  rust-check   *.rs   ✓ Build passed
CB2  structure    *      ✓ Checks passed
CB3  test-suite   *.rs   ⟳ Running...
CB4  typst-watch  *.typ  ✓ Compiled
── AI edits main.rs → CB1 fires → error →
── AI sees error → fixes → CB1 fires → ✓
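The blocking pattern above can be sketched as a small shell function — given the path the AI just edited, run a check and report pass/fail via the exit status. Note this is an illustrative sketch, not Context Pilot's actual callback interface; a real rust-check hook would run cargo check, while the grep here is only a stand-in.

```shell
# Sketch of a blocking callback. A nonzero exit is what halts the
# AI until the problem is fixed; zero lets it continue.
check_file() {
  local file="$1"
  # Stand-in check: a real rust-check callback would run `cargo check`.
  if grep -q "TODO" "$file"; then
    echo "check failed: unresolved TODO in ${file}" >&2
    return 1
  fi
  echo "check passed"
}
```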
File editing, git operations, web search, PDF generation, console management, memory, todos — every tool your AI needs, designed for LLM consumption.
Anthropic, Claude Code, DeepSeek, Grok, Groq. Switch providers without changing your workflow.
Custom system prompts, loadable knowledge skills, and slash commands. Shape your AI's personality and expertise.
Brave Search + Firecrawl integration. Your AI can research, read documentation, and scrape any website.
Embedded Typst compiler. Create reports, invoices, and documents from code. Auto-recompile on edit.
Auto-continuation with guard rails (cost, tokens, time, messages). The AI works through your todo list while you review. Spine notifications keep you informed.
git clone && ./deploy_local.sh
One command. Compiles from source. Installs to /usr/local/bin.
export ANTHROPIC_API_KEY=sk-...
Add your API key. Pick your LLM provider. That's it.
cd your-project && cpilot
Context Pilot reads your project structure and initializes.
Talk to your AI. It manages its own context.
Memory persists. Callbacks catch errors. Context stays clean.
Context Pilot isn't another AI editor. It's the infrastructure layer that makes any AI coding assistant dramatically more effective.
| Feature | Cursor | Aider | Claude Code | Context Pilot |
|---|---|---|---|---|
| Persistent memory across sessions | ✗ | ✗ | ~ | ✓ |
| Structured context management | ~ | ✗ | ✗ | ✓ |
| File edit callbacks (auto-check) | ✗ | ✗ | ✗ | ✓ |
| Multiple LLM providers | ~ | ✓ | ✗ | ✓ |
| Web search & scraping | ✗ | ✗ | ✗ | ✓ |
| PDF / document generation | ✗ | ✗ | ✗ | ✓ |
| Autonomous workflows with guard rails | ✗ | ✗ | ~ | ✓ |
| Open source (AGPL-3.0) | ✗ | ✓ | ✗ | ✓ |
| Terminal-native (no Electron) | ✗ | ✓ | ✓ | ✓ |
~15K lines. Ratatui TUI framework. No Electron. No Node. Just fast, reliable Rust.
14 independent crates in a Cargo workspace. Activate only what you need. Each module owns its tools, panels, and state.
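In Cargo terms, that layout looks roughly like the workspace manifest below. The member names are hypothetical, not Context Pilot's actual crate list:

```toml
# Hypothetical workspace root Cargo.toml: each module lives in its
# own member crate and is compiled only when activated.
[workspace]
resolver = "2"
members = [
    "crates/app",       # TUI shell and event loop
    "crates/memory",    # persistent memory module
    "crates/callbacks", # file edit callback module
    # ...one crate per module, 14 in total
]
```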
Everything the AI sees is a context element — files, panels, memories, tools. All visible, all manageable, all measurable in tokens.
State serialized to .context-pilot/. Memories, logs, todos, conversation history — everything survives restarts.
Anthropic (Claude), Claude Code (API key), DeepSeek, Grok (xAI), and Groq. You bring your own API key. No vendor lock-in.
Currently Linux-native. macOS support is planned. Windows users can run it via WSL2 with zero code changes.
Yes. Context Pilot is open source under AGPL-3.0. You only pay for LLM API usage (your own keys).
Cursor is an IDE with built-in AI. Context Pilot is infrastructure that manages AI context — persistent memory, callbacks, structured tools. It's terminal-native, open source, and works with any LLM provider.
Aider is great for quick edits. Context Pilot adds structured context management (14 modules, 58 tools), persistent memory, file edit callbacks, web search, PDF generation, autonomous workflows, and a full TUI with real-time panels.
Currently requires cloud API keys. Local model support (Ollama, LM Studio) is on the roadmap.
You define callbacks with glob patterns (e.g., *.rs) and a bash script. When the AI edits a matching file, the script fires automatically. In blocking mode, the AI waits for the result and self-corrects if the check fails.
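As a rough sketch of that matching step — the pattern-to-script pairs mirror the examples above, but this is illustrative, not Context Pilot's real configuration syntax:

```shell
# Sketch: map an edited file path to the callback that should fire.
pick_callback() {
  local edited="$1"
  case "$edited" in
    *.rs)  echo "rust-check"  ;;  # Rust sources get the build check
    *.typ) echo "typst-watch" ;;  # Typst docs get recompiled
    *)     echo "none"        ;;  # no callback registered
  esac
}
```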
Open source. Free forever. Install in 2 minutes.
$ git clone https://github.com/bigmoostache/context-pilot && cd context-pilot && ./deploy_local.sh