Open Source · AGPL-3.0 · Pure Rust

Your AI's
copilot.

AI coding assistants are powerful but forgetful.
Context Pilot gives them persistent memory, structured context,
and self-correcting workflows — all from your terminal.

$ git clone https://github.com/bigmoostache/context-pilot && cd context-pilot && ./deploy_local.sh
58 Tools
5 LLM Providers
15K Lines of Rust
14 Modules

AI coding tools are brilliant.
But they have amnesia.

Every AI coding assistant — Cursor, Copilot, Claude Code — starts each session blind. No memory of your architecture. No understanding of your conventions. No awareness of what it broke last time. You spend half your time re-explaining context that should already be there.

🧠

No Memory

Every session starts from zero. Decisions, preferences, architecture patterns — all lost.

📂

Context Chaos

Files opened randomly. Stale context piling up. No systematic way to manage what the AI sees.

💥

Silent Breakage

AI edits your code. Compilation breaks. Tests fail. You only find out minutes later.

Context Pilot doesn't replace your AI.
It makes your AI competent.

A terminal application that wraps around any LLM, giving it structured context management, persistent memory, file editing callbacks, and 58 specialized tools — all orchestrated through a beautiful TUI.

Use Case 01

Lost in a massive codebase?

Context Pilot gives your AI a structured view of your entire project. Interactive directory tree with descriptions, smart file opening, git integration, and GitHub issue/PR awareness. Your AI always knows where it is and what changed.

Directory Tree · File Management · Git Integration · GitHub CLI · Web Search
context-pilot
├── ▼ src/  Main source: ~15K lines
│   ├── ▼ app/
│   │   ├── context.rs
│   │   ├── events.rs
│   │   └── mod.rs
│   ├── ▼ modules/  14 modules
│   └── main.rs
├── Cargo.toml
└── README.md
Use Case 02

AI forgets everything between sessions?

Persistent memory, timestamped logs, conversation history, and scratchpads survive across sessions and even TUI restarts. Your AI remembers architecture decisions, user preferences, project conventions, and past mistakes.

Memory · Logs · Conversation History · Scratchpad · Presets
memories
M1 Project uses workspace with 14 crates  high
M2 User prefers explicit error handling  high
M3 Always run clippy before committing  critical
M4 API keys stored in .env, never committed  high
M5 Deploy via deploy_local.sh script  medium
Use Case 03

Can't trust AI edits?

File edit callbacks automatically trigger on every change — run cargo check, clippy, tests, formatters, or any custom script. Blocking callbacks halt the AI until the check passes. Your AI self-corrects before you even notice the bug.

Callbacks · Auto-Check · Auto-Test · Blocking Mode · Pattern Matching
callbacks
CB1 rust-check    *.rs     ✓ Build passed
CB2 structure     *        ✓ Checks passed
CB3 test-suite    *.rs     ⟳ Running...
CB4 typst-watch   *.typ    ✓ Compiled

── AI edits main.rs → CB1 fires → ✗ error
── AI sees error → fixes → CB1 fires → ✓ passed
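The self-correction loop above can be sketched roughly like this. This is a minimal illustration, not Context Pilot's actual implementation: the `Callback` struct, its field names, and the suffix-only glob matching are all simplifying assumptions.

```rust
use std::process::Command;

/// Hypothetical, simplified callback definition (field names are
/// illustrative, not Context Pilot's real configuration schema).
struct Callback {
    pattern: &'static str, // e.g. "*.rs"
    script: &'static str,  // shell command to run on edit
    blocking: bool,
}

/// Naive glob match: handles only a leading "*" suffix pattern.
fn matches(pattern: &str, path: &str) -> bool {
    match pattern.strip_prefix('*') {
        Some(suffix) => path.ends_with(suffix),
        None => pattern == path,
    }
}

/// Fire every callback whose pattern matches the edited file.
/// A failing blocking callback halts the loop: the AI must fix
/// the error before it may continue.
fn on_file_edited(path: &str, callbacks: &[Callback]) -> bool {
    for cb in callbacks.iter().filter(|cb| matches(cb.pattern, path)) {
        let ok = Command::new("sh")
            .args(["-c", cb.script])
            .status()
            .map(|s| s.success())
            .unwrap_or(false);
        if cb.blocking && !ok {
            return false; // halt until the AI self-corrects
        }
    }
    true
}
```

The key design point is the `blocking` flag: a non-blocking callback merely reports, while a blocking one gates further edits on a passing check.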

58 tools. 14 modules.
One terminal.

🛠

58 Specialized Tools

File editing, git operations, web search, PDF generation, console management, memory, todos — every tool your AI needs, designed for LLM consumption.

Edit · Write · Open · git · gh · brave_search · typst · console · memory
🤖

5 LLM Providers

Anthropic, Claude Code, DeepSeek, Grok, Groq. Switch providers without changing your workflow.

👤

Agents & Skills

Custom system prompts, loadable knowledge skills, and slash commands. Shape your AI's personality and expertise.

🌐

Web Search & Scrape

Brave Search + Firecrawl integration. Your AI can research, read documentation, and scrape any website.

📄

PDF Generation

Embedded Typst compiler. Create reports, invoices, and documents from code. Auto-recompile on edit.

🔄

Autonomous Workflows

Auto-continuation with guard rails (cost, tokens, time, messages). The AI works through your todo list while you review. Spine notifications keep you informed.

spine_configure · todo_create · coucou · console_watch
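A guard-rail check like the one described above might look like the sketch below. The struct and field names are illustrative assumptions, not the actual `spine_configure` options.

```rust
/// Hypothetical guard-rail limits for an autonomous run
/// (field names are illustrative, not the real configuration).
struct GuardRails {
    max_cost_usd: f64,
    max_tokens: u64,
    max_seconds: u64,
    max_messages: u32,
}

/// Running totals accumulated during auto-continuation.
struct RunStats {
    cost_usd: f64,
    tokens: u64,
    seconds: u64,
    messages: u32,
}

/// Auto-continuation keeps going only while every limit still holds.
fn may_continue(limits: &GuardRails, stats: &RunStats) -> bool {
    stats.cost_usd < limits.max_cost_usd
        && stats.tokens < limits.max_tokens
        && stats.seconds < limits.max_seconds
        && stats.messages < limits.max_messages
}
```

Checking all four limits together means any single budget (money, tokens, wall-clock time, or message count) can stop a runaway session.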

Up and running in 2 minutes.

01

Install

git clone && ./deploy_local.sh

One command. Compiles from source. Installs to /usr/local/bin.

02

Configure

export ANTHROPIC_API_KEY=sk-...

Add your API key. Pick your LLM provider. That's it.

03

Navigate to your project

cd your-project && cpilot

Context Pilot reads your project structure and initializes.

04

Code with AI

Talk to your AI. It manages its own context.

Memory persists. Callbacks catch errors. Context stays clean.

Not a replacement. A force multiplier.

Context Pilot isn't another AI editor. It's the infrastructure layer that makes any AI coding assistant dramatically more effective.

Feature comparison (Cursor · Aider · Claude Code · Context Pilot):
Persistent memory across sessions
Structured context management
File edit callbacks (auto-check)
Multiple LLM providers
Web search & scraping
PDF / document generation
Autonomous workflows with guard rails
Open source (AGPL-3.0)
Terminal-native (no Electron)

Built for developers who read the source.

Pure Rust

~15K lines. Ratatui TUI framework. No Electron. No Node. Just fast, reliable Rust.

Modular Workspace

14 independent crates in a Cargo workspace. Activate only what you need. Each module owns its tools, panels, and state.

Context Elements

Everything the AI sees is a context element — files, panels, memories, tools. All visible, all manageable, all measurable in tokens.
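The "everything is a context element" idea can be sketched as a trait. The names below are illustrative assumptions, not the real API, and the characters-per-token heuristic is a common rough estimate rather than Context Pilot's actual tokenizer.

```rust
/// Hypothetical trait: everything the AI sees is a context element
/// (names here are illustrative, not the real API).
trait ContextElement {
    fn label(&self) -> String;
    fn token_count(&self) -> usize;
}

struct OpenFile { path: String, contents: String }
struct MemoryNote { text: String }

impl ContextElement for OpenFile {
    fn label(&self) -> String { format!("file:{}", self.path) }
    // Rough heuristic: ~4 characters per token.
    fn token_count(&self) -> usize { self.contents.len() / 4 }
}

impl ContextElement for MemoryNote {
    fn label(&self) -> String { "memory".to_string() }
    fn token_count(&self) -> usize { self.text.len() / 4 }
}

/// Total context size is just the sum over all elements, which is
/// what makes the token budget visible and manageable.
fn total_tokens(elements: &[Box<dyn ContextElement>]) -> usize {
    elements.iter().map(|e| e.token_count()).sum()
}
```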

Persistence Layer

State serialized to .context-pilot/. Memories, logs, todos, conversation history — everything survives restarts.
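In the simplest form, persistence of this kind can be sketched with the standard library alone. The one-memory-per-line layout and the `memories.txt` filename are assumptions for illustration; Context Pilot's actual serialization format may differ.

```rust
use std::fs;
use std::io;
use std::path::Path;

/// Hypothetical on-disk layout: one memory per line in a plain-text
/// file under .context-pilot/ (the real format may differ).
fn save_memories(dir: &Path, memories: &[&str]) -> io::Result<()> {
    fs::create_dir_all(dir)?;
    fs::write(dir.join("memories.txt"), memories.join("\n"))
}

/// Reloading after a restart recovers exactly what was saved,
/// which is what lets state survive TUI restarts.
fn load_memories(dir: &Path) -> io::Result<Vec<String>> {
    let raw = fs::read_to_string(dir.join("memories.txt"))?;
    Ok(raw.lines().map(String::from).collect())
}
```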

Frequently asked questions.

What LLM providers are supported?

Anthropic (Claude), Claude Code (API key), DeepSeek, Grok (xAI), and Groq. You bring your own API key. No vendor lock-in.

Does it work on macOS / Windows?

Currently Linux-native. macOS support is planned. Windows users can run it via WSL2 with zero code changes.

Is it free?

Yes. Context Pilot is open source under AGPL-3.0. You only pay for LLM API usage (your own keys).

How is this different from Cursor?

Cursor is an IDE with built-in AI. Context Pilot is infrastructure that manages AI context — persistent memory, callbacks, structured tools. It's terminal-native, open source, and works with any LLM provider.

How is this different from Aider?

Aider is great for quick edits. Context Pilot adds structured context management (14 modules, 58 tools), persistent memory, file edit callbacks, web search, PDF generation, autonomous workflows, and a full TUI with real-time panels.

Can I use my own LLM / local models?

Currently requires cloud API keys. Local model support (Ollama, LM Studio) is on the roadmap.

How does the callback system work?

You define callbacks with glob patterns (e.g., *.rs) and a bash script. When the AI edits a matching file, the script fires automatically. In blocking mode, the AI waits for the result and self-corrects if the check fails.

Ready to give your AI a memory?

Open source. Free forever. Install in 2 minutes.

$ git clone https://github.com/bigmoostache/context-pilot && cd context-pilot && ./deploy_local.sh