ArgosBrain gives any MCP-compatible coding agent persistent structural memory of your codebase. Sub-millisecond lookups. No LLM in the retrieval loop. Runs local.
Your agent re-reads sections/*.liquid on every turn. Every session starts from scratch. Every query re-embeds files you've already seen. Every run rebuilds the repo map and throws it away.
The community has a name for this: context rot. Chroma's 2025 study measured it across 18 frontier models — every one degrades as input grows. Anthropic shipped a Memory Tool in September 2025, but it's a file primitive, not a brain.
Meanwhile, you're paying for the same file to be read 40 times a week. Cursor Ultra is $200/mo. Claude Max is $200/mo. Token bills don't lie.
Compiled Rust binary runs locally. Tree-sitter + SCIP parse your codebase into a unified graph. 28 languages. Updates instantly on file save.
Any agent asks structural questions via standard MCP tools — symbol_exists, resolve_member, list_symbols, search. Sub-ms answers. Integrate it into your custom internal tools effortlessly.
$0 per query, forever. No LLM in the retrieval loop. Local-first. Zero data egress. Toggle on/off, see the diff.
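A structural query over MCP is just a JSON-RPC `tools/call` request. Here is a minimal sketch in Python, assuming `symbol_exists` takes a `name` argument — the exact parameter names come from the server's `tools/list` schema, so treat the argument shape as illustrative:

```python
import json

def mcp_tool_call(request_id: int, tool: str, arguments: dict) -> str:
    """Build a JSON-RPC 2.0 `tools/call` request -- the standard MCP
    envelope any client uses to invoke a server-side tool."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

# Hypothetical argument shape -- check the server's tools/list
# response for the real parameter names.
req = mcp_tool_call(1, "symbol_exists", {"name": "sanitizeHtml"})
print(req)
```

The same envelope works for `resolve_member`, `list_symbols`, and `search`; only the `name` and `arguments` fields change.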
Images. PDFs. Audio. Video. Screenshots. Diagrams. When your team shares a file with the agent, ArgosBrain remembers it — linked to the code it's about — so two weeks later the "why was that button disabled in the mockup?" question resolves instantly.
Works with whichever agent you already use. No vision stack. No OCR server. No per-query fees on our side. Just a new MCP tool your agent calls after it reads a file.
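The flow above reduces to one MCP tool call after the agent's LLM has interpreted the file. Everything in this sketch is illustrative: the tool name `remember_artifact` and its argument fields are hypothetical stand-ins, not ArgosBrain's published schema (query `tools/list` for the real one):

```python
import json

# Hypothetical tool name and argument fields -- illustrative only;
# the real schema comes from the server's tools/list response.
request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",  # standard MCP tool-invocation method
    "params": {
        "name": "remember_artifact",
        "arguments": {
            "artifact": "designs/checkout-mockup.png",
            "interpretation": ("3-step Stripe checkout; Place Order "
                               "disabled until terms accepted"),
            "linked_symbols": ["checkoutHandler"],
        },
    },
}
print(json.dumps(request, indent=2))
```

The point of the design: the interpretation is produced by whatever LLM your agent already runs, and the server only has to store it and link it to code.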
Same repo. Same prompt. Same model (Claude Opus 4.7, temperature=0).
Left window: agent alone. Right window: agent + ArgosBrain.
"ArgosBrain cut our Claude API token burn by 60% and halved the 'thinking' wait times on our backend repo. It's the first local memory tool that actually understands implicit Go interfaces."
LLM summarization destroys ASTs. We parse them.
$52M raised in the category — zero products built for code.
100M tokens still hallucinate symbols. And cost real money per call.
Our answers are ground truth, in 0.8 milliseconds.
They ship your code to their servers and bill LLM cost per query.
We run local. $0 per query. Zero data egress.
Locked to one editor. We work in every MCP agent — including the ones above.
One brain, every tool.
We didn't write these reviews. Claude Opus 4.7 did — unprompted — during a live 1,237-turn coding session on a production Next.js SaaS. The agent graded ArgosBrain against Grep and RAG on real jobs it had to do that day. Below are seven assessments in its own words, unedited beyond light trimming. The eighth card (multi-modal) ships in v0.2 — it arrived after the review, so it's ours, labelled as such.
2× RECALL VS. GREP
"The initial audit scoped src/app/api/ and found two SSRF sites. ArgosBrain surfaced four more in src/lib/services/ — the agent had to follow causal edges across directories Grep wasn't pointed at."
— Claude Opus 4.7 · dogfood session · 2026-04-22

PREVENTED A BAD COMMIT
"Argos returned a CLEAR match: uploadVideoToTikTok(videoBuffer: Buffer, …) takes a Buffer, not a URL. The agent was about to patch the call site as if it accepted a URL — that retrieval prevented a silently-broken commit."
— Claude Opus 4.7 · dogfood session · 2026-04-22

SAFE DEAD-CODE CUT
"Before deleting an RLS-bypassing route I thought was dead, I asked Argos for its callers. It returned NO_CONFIDENT_MATCH — exhaustive over the ingested codebase. Not 'I didn't find any'; 'there are none.' Deleted with confidence, no regression."
— Claude Opus 4.7 · dogfood session · 2026-04-22

NO DUPLICATE HANDLERS
"I was about to write a new handler. Argos pulled up the existing one from an older session — same behaviour, already tested. Saved me a duplicate route and the tech debt that comes with it."
— Claude Opus 4.7 · dogfood session · 2026-04-22

STYLE-CONSISTENT PRS
"Before adding a new admin check, Argos surfaced ADMIN_EMAILS as the project's established pattern. The agent used the same convention instead of inventing its own. Tiny detail; compounds over months."
— Claude Opus 4.7 · dogfood session · 2026-04-22

< 50 MS DEFINITIVE NEGATIVES
"'Does sanitizeHtml exist in this project?' — answered 'no' in 40ms with confidence 1.0. Grep on 400 files would have taken a full second and left the question ambiguous. The agent stopped hunting for ghosts."
— Claude Opus 4.7 · dogfood session · 2026-04-22

ACCURATE EFFORT ESTIMATES
"Before committing to a feature, the agent used Argos to map every file a change would touch — six, across three service boundaries. It flagged the effort as disproportionate and deferred the work. A human tech lead would have done the same scope check."
— Claude Opus 4.7 · dogfood session · 2026-04-22

1 CALL = IMAGE + CONTEXT + CODE LINK
"User shared a UI mockup. The LLM interpreted it — 'a 3-step Stripe checkout, Place Order button disabled until terms accepted' — and Argos stored that interpretation linked to checkoutHandler. Two weeks later, the 'why is the button disabled?' question resolved instantly."

One giant table is unreadable. Here's the same information split into seven categories — ArgosBrain first, everyone else ranked against us. Click any competitor for the full page with citations.

| Tool | Cost on the read path |
|---|---|
| ArgosBrain | $0 — no LLM on read path |
| Zep / Graphiti | Free retrieval (graph + semantic) |
| Mem0 | Embedding + vector search |
| MCP memory server | Substring + full body |
| Aider | ~1 000 tokens / request |
| Continue | Prompt tokens (chunks injected) |
| Cursor · Windsurf · Copilot | Prompt tokens every relevant query |
| CLAUDE.md | Full file in system prompt, every turn |
| Cline Memory Bank | Full MD bank at every session start |
| Letta | LLM tool-call on every read |

| Tool | Code indexing |
|---|---|
| ArgosBrain | SCIP + live LSP + tree-sitter, tiered per language |
| Aider | Tree-sitter surface names + PageRank |
| Continue | Tree-sitter text chunks for embedding |
| Copilot | Semantic repo indexing (opaque) |
| Cursor · Windsurf | Undocumented |
| Cline · Mem0 · Zep · Letta · CLAUDE.md · MCP memory | No code indexing — prose / text / JSON |

| Tool | Freshness / invalidation |
|---|---|
| ArgosBrain | File-hash invalidation, automatic |
| Copilot | 28-day auto-expire + citation validation |
| Aider | Recomputed per request (always fresh) |
| Zep / Graphiti | Bi-temporal edges (not code-aware) |
| Continue | On re-index |
| Cursor · Windsurf | Unknown |
| Cline · CLAUDE.md | Manual edit only |
| Mem0 · Letta · MCP memory | None |
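File-hash invalidation of the kind described for ArgosBrain is a simple pattern: cache an analysis keyed by the file's content hash and recompute only when the hash changes. A minimal sketch (not ArgosBrain's actual implementation, which is Rust and incremental per save):

```python
import hashlib
from pathlib import Path

class HashCache:
    """Minimal sketch of file-hash invalidation: a cached analysis is
    reused only while the file's content hash is unchanged."""

    def __init__(self):
        self._entries = {}  # path -> (content_hash, analysis)

    @staticmethod
    def _digest(path: Path) -> str:
        return hashlib.sha256(path.read_bytes()).hexdigest()

    def get(self, path: Path, analyze):
        h = self._digest(path)
        cached = self._entries.get(path)
        if cached and cached[0] == h:
            return cached[1]          # fresh: file unchanged, reuse
        result = analyze(path)        # stale or missing: recompute
        self._entries[path] = (h, result)
        return result
```

Hashing content rather than trusting mtimes is what makes the invalidation automatic: touch a file without changing it and the cache stays warm; change one byte and it recomputes.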

| Tool | Runs fully local |
|---|---|
| ArgosBrain | Yes, default — runs in-process |
| Windsurf · Zed · Cline · Aider · Continue · CLAUDE.md · MCP memory | Yes |
| Mem0 · Letta | OSS self-host yes; Cloud no |
| Zep | CE deprecated Apr 2025 — Graphiti OSS only |
| Cursor · Copilot | Cloud-only |

| Tool | Published benchmarks |
|---|---|
| ArgosBrain | LongMemCode 99.2–100% across 16 corpora, P99 ≤ 0.82 ms |
| Zep | DMR 94.8%, LongMemEval +18.5% / −90% latency |
| Mem0 | LoCoMo 91.6%, LongMemEval 93.4% |
| Letta | Terminal-Bench #1 OSS (Letta Code) |
| All others | None published |

| Tool | MCP server? |
|---|---|
| ArgosBrain | Is an MCP server — runs under every MCP client |
| MCP memory server | Yes (reference implementation) |
| Continue · Cline · Mem0 · Zep · Letta | MCP client only — can consume, not serve |
| CLAUDE.md | Convention, not a protocol |
| Cursor · Windsurf · Copilot · Aider | No — memory locked inside their tool |
"State integrity degrades at 500K to 2M tokens. Roughly one-fifth to one-tenth the scale where retrieval architecture becomes critical." — Mark Hendrickson · Apr 2026
Long-context models don't solve memory. BEAM scores showed RAG degrading from 30.7% at 1M tokens to 24.9% at 10M, and contradiction resolution near zero at every tier. ArgosBrain's verify / dispute / zone transitions are exactly the write-integrity layer those numbers say is missing.
"Every practitioner has felt it. Your GraphRAG system is useless for weeks — hallucinating, missing obvious connections. Then suddenly, it works." — Alexander Shereshevsky · Graph Praxis
Flat vector RAG breaks on codebases because codebases are high-connectivity graphs (call sites, inheritance, imports). ArgosBrain is graph-first by design — petgraph + HNSW + keyword hybrid — which is why every cell in the "code-native" row above is red except ours.
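The hybrid ranking idea — graph proximity plus vector similarity plus keyword overlap — can be sketched in a few lines. The weights and the scoring functions below are made up for illustration; they are not ArgosBrain's actual formula:

```python
import math

def hybrid_score(query_vec, cand_vec, hops, query_terms, cand_terms,
                 w_vec=0.5, w_graph=0.3, w_kw=0.2):
    """Illustrative hybrid ranking: cosine similarity (the HNSW side),
    call-graph proximity (the petgraph side), and keyword overlap.
    Weights are arbitrary for the sketch."""
    dot = sum(a * b for a, b in zip(query_vec, cand_vec))
    norm = (math.sqrt(sum(a * a for a in query_vec))
            * math.sqrt(sum(b * b for b in cand_vec)))
    cos = dot / norm if norm else 0.0
    graph = 1.0 / (1 + hops)   # fewer call-graph hops -> higher score
    kw = len(set(query_terms) & set(cand_terms)) / max(len(query_terms), 1)
    return w_vec * cos + w_graph * graph + w_kw * kw
```

The graph term is what flat vector RAG lacks: a candidate two call-graph hops from the query symbol outranks a textually similar file in an unrelated module.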
"We win at everything" is a lie and engineers smell it instantly. Here's what we don't ship today.
Cursor and Copilot ship memory inside the editor with zero install. ArgosBrain runs as an MCP server you configure.
Mem0 Cloud and Zep Cloud offer multi-user team memory out of the box. ArgosBrain is local-first; team sync is roadmap, not shipped.
Mem0 holds 91.6% on LoCoMo. ArgosBrain targets ≥91.6% on LongMemEval — match, not beat. Our moat is code, not chat.
For pure-English queries like "rate limit fail open" — no symbol names, no identifiers — Grep is still the faster tool. Argos is for structural code questions; we'll point you at Grep when that's the right answer.
Database rows, RLS policies, deploy logs, third-party API responses, runtime errors. Not our job. Use psql, provider CLIs, deploy hooks, browser devtools. We store code memory — not a proxy to production systems.
We don't ship a vision stack. Your agent's LLM interprets the file; we make sure that interpretation is remembered — linked to your codebase. One less binary, one less supply-chain surface, one less thing to audit.
LongMemCode is an open-source benchmark we authored and published under MIT license. Every system below runs the same 150-question hard subset. Results are public, reproducible, and neutral-hosted at longmemcode.com.
Tool-use histograms from two real Claude Code sessions on the same Next.js codebase (~400 files). The only variable: whether the project had a CLAUDE.md telling Claude to reach for ArgosBrain first. The analyzer that generates these digests — scripts/analyze_session.py — is open and runs on your own session file.
Session 1 — no CLAUDE.md pointer:
189 Edit
177 Read
146 Bash
127 Grep ← every code question
9 Glob
9 Write
6 ToolSearch
2 AskUserQuestion
0 mcp__argos__* ← MCP installed, never called
Session 2 — CLAUDE.md points at ArgosBrain:
26 Bash
24 Read
23 Edit
16 TodoWrite
13 mcp__argos__symbol_exists
8 mcp__argos__search
3 ToolSearch
1 mcp__argos__ingest_codebase
0 Grep ← agent trusts structure now
In the second session, search also surfaced four SSRF call sites the first session's Grep had missed — they lived outside the directory the audit was pointed at, reachable only via causal edges in the call graph. Full write-up: docs/proof/session-2026-04-22-grep-vs-argos.md.
$ python3 scripts/analyze_session.py --redact <your-session.jsonl>

The --redact flag auto-scrubs home paths, env-var-style secrets, UUIDs, hex digests, JWTs and high-entropy tokens before printing — safe to paste publicly. The analyzer never reads message bodies or tool-result content; only tool-use metadata.
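Redaction of this kind is regex substitution over each output line. The patterns below are illustrative approximations in the spirit of --redact; the real analyzer's rules may differ:

```python
import re

# Illustrative redaction patterns -- approximations, not the
# analyzer's actual rule set.
PATTERNS = [
    (re.compile(r"/(?:home|Users)/[^\s/]+"), "<HOME>"),
    (re.compile(r"\b[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-"
                r"[0-9a-fA-F]{4}-[0-9a-fA-F]{12}\b"), "<UUID>"),
    (re.compile(r"\b[0-9a-f]{40,64}\b"), "<HEX>"),   # SHA-1..SHA-256 digests
    (re.compile(r"\beyJ[A-Za-z0-9_-]+\.[A-Za-z0-9_-]+"
                r"\.[A-Za-z0-9_-]+\b"), "<JWT>"),    # three base64url parts
    (re.compile(r"\b[A-Z][A-Z0-9_]*(?:KEY|TOKEN|SECRET)=\S+"), "<ENV>"),
]

def redact(line: str) -> str:
    """Apply every pattern in order; later patterns see earlier output."""
    for pattern, repl in PATTERNS:
        line = pattern.sub(repl, line)
    return line
```

Running redaction over tool-use metadata only (never message bodies) keeps the scrub surface small and the digest safe to share.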
Run argosbrain audit in your terminal to inspect the exact JSON payload before it's sent.