Some of these came from developer DMs; some we anticipated. If yours isn't here, open an issue on GitHub — we'll add the answer here.
For the developer who's configured language servers all their life and knows where the bodies are buried.
Language servers run as separate operating-system processes, never inside the Rust engine's address space. Their lifetime is bound to the ingest run — when the run finishes, the subprocess exits. Two independent timeouts protect against stuck RPC calls and runaway workspace imports. LSP ingestion is opt-in per build, not the default path, and the MCP stdio hot path that serves the agent never spawns a language server — it only reads from the already-ingested graph. A crashed language server cannot affect query latency, correctness, or the zero-panic guarantee.
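The per-request timeout can be sketched in stdlib Rust. This is an illustrative pattern, not ArgosBrain's actual code — `call_with_timeout` and the values are made up:

```rust
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

// Run one blocking language-server call on a worker thread and give up
// if no reply arrives within `timeout`. The server subprocess itself is
// killed by the caller when the ingest run ends, so nothing outlives it.
fn call_with_timeout<T: Send + 'static>(
    timeout: Duration,
    work: impl FnOnce() -> T + Send + 'static,
) -> Option<T> {
    let (tx, rx) = mpsc::channel();
    thread::spawn(move || {
        // If the receiver has already timed out, this send fails quietly.
        let _ = tx.send(work());
    });
    rx.recv_timeout(timeout).ok()
}
```

A stuck RPC returns `None` to the ingest loop instead of hanging it, which is the property the two-timeout design relies on.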
tsserver / clangd / pylsp on a 10M-line monorepo? Does ingestion scale?
For languages where a mature SCIP indexer exists (Rust, Python, Go, TypeScript, JavaScript, Java, Scala, PHP, Ruby, C#, Dart), ingestion runs the compiler frontend offline, once, in batch. No long-lived language server. The subprocess dies at the end of ingest, so memory leaks have nowhere to accumulate. LSP live ingestion is used only where no mature SCIP indexer exists yet — Kotlin is the current example. For those, the timeouts above apply.
Loud, early failure with a diagnostic message pointing at the missing binary and suggesting install instructions. No silent degradation, no half-indexed state, no retry loop that burns CPU. Ingest either completes cleanly or surfaces an error the user can act on.
The retrieval path (MCP stdio → graph read → response) is written to the no-unwrap rule: zero unwrap() in library code, every fallible path returns Result. The ingest path is more forgiving (a single malformed file is skipped with a warning), but a panic in ingest still cannot reach the stdio loop — ingest runs on separate tasks and any panic there is caught and logged. The serving path stays up.
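The no-unwrap rule is mechanical: on the serving path, a lookup miss becomes a typed error, never a panic. A minimal sketch, with hypothetical names (`Graph`, `QueryError` are illustrative, not ArgosBrain's real API):

```rust
use std::collections::HashMap;

#[derive(Debug, PartialEq)]
enum QueryError {
    UnknownSymbol(String),
}

struct Graph {
    callers: HashMap<String, Vec<String>>,
}

impl Graph {
    // No .unwrap(): a miss becomes a value the stdio loop can serialize
    // back to the agent, so the process never dies mid-query.
    fn callers_of(&self, symbol: &str) -> Result<&[String], QueryError> {
        self.callers
            .get(symbol)
            .map(Vec::as_slice)
            .ok_or_else(|| QueryError::UnknownSymbol(symbol.to_string()))
    }
}
```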
git checkout or rebase that touches hundreds of files?
Re-ingest cost is O(changed files), not O(repo size). Each file carries a content hash; unchanged files are skipped entirely before any parser touches them. On a cold re-run with zero diffs, a large repo completes in under 5 seconds. On a 300-file diff, expect a few seconds on SCIP-backed languages and sub-second on tree-sitter-backed ones. An optional file watcher can pick up changes within half a second of save.
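The O(changed files) skip reduces to a hash comparison before any parsing. A sketch under stated assumptions: a real implementation persists a hash that is stable across runs (e.g. a cryptographic digest); `DefaultHasher` here is stdlib convenience for illustration only.

```rust
use std::collections::hash_map::DefaultHasher;
use std::collections::HashMap;
use std::hash::{Hash, Hasher};

fn content_hash(bytes: &[u8]) -> u64 {
    let mut h = DefaultHasher::new();
    bytes.hash(&mut h);
    h.finish()
}

/// Compare each file's hash against the previous run and return only
/// the paths that actually need re-ingestion.
fn changed_files<'a>(
    previous: &HashMap<&'a str, u64>,
    current: &[(&'a str, &'a [u8])],
) -> Vec<&'a str> {
    current
        .iter()
        .filter(|(path, bytes)| previous.get(path) != Some(&content_hash(bytes)))
        .map(|(path, _)| *path)
        .collect()
}
```

Unchanged files never reach a parser, which is what makes a zero-diff re-run effectively free.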
Graph + HNSW stay in a bounded memory budget, with cold tiers spilling to disk and an LRU on hot nodes. The largest corpora we benchmark on LongMemCode (hundreds of thousands of symbols) stay in a few hundred MB resident on a laptop. We publish the numbers per corpus in LongMemCode — they're reproducible.
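The hot/cold split can be pictured as a bounded LRU over node ids. A toy sketch only: the real budget is in bytes, not entries, and eviction spills to the on-disk cold tier rather than discarding.

```rust
use std::collections::{HashMap, VecDeque};

struct HotCache {
    cap: usize,
    map: HashMap<u64, String>, // node id -> decoded node (stand-in type)
    order: VecDeque<u64>,      // least-recently-used at the front
}

impl HotCache {
    fn new(cap: usize) -> Self {
        Self { cap, map: HashMap::new(), order: VecDeque::new() }
    }

    fn insert(&mut self, id: u64, node: String) {
        if self.map.len() == self.cap && !self.map.contains_key(&id) {
            if let Some(victim) = self.order.pop_front() {
                self.map.remove(&victim); // spill to the cold tier here
            }
        }
        self.order.retain(|&k| k != id);
        self.order.push_back(id);
        self.map.insert(id, node);
    }

    fn get(&mut self, id: u64) -> Option<&String> {
        if self.map.contains_key(&id) {
            // Touch: move to the most-recently-used end.
            self.order.retain(|&k| k != id);
            self.order.push_back(id);
        }
        self.map.get(&id)
    }
}
```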
Every bundle header carries a format version. The reader refuses incompatible bundles with a clear error rather than silently misparsing. When the format evolves, the sync path downloads a re-baked bundle automatically. You don't get mysterious failures — you get a message telling you to run one command.
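The refuse-don't-misparse check is a few lines at the front of the reader. A hypothetical sketch — the magic bytes, version number, and error names are invented, not the real bundle format:

```rust
const BUNDLE_MAGIC: [u8; 4] = *b"ABDL"; // invented magic
const SUPPORTED_VERSION: u16 = 3;        // invented version

#[derive(Debug, PartialEq)]
enum BundleError {
    NotABundle,
    IncompatibleVersion { found: u16, supported: u16 },
}

fn check_header(bytes: &[u8]) -> Result<(), BundleError> {
    if bytes.len() < 6 || bytes[..4] != BUNDLE_MAGIC {
        return Err(BundleError::NotABundle);
    }
    let version = u16::from_le_bytes([bytes[4], bytes[5]]);
    if version != SUPPORTED_VERSION {
        // The caller turns this into the "run one command to re-sync" message.
        return Err(BundleError::IncompatibleVersion {
            found: version,
            supported: SUPPORTED_VERSION,
        });
    }
    Ok(())
}
```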
Because an MCP server that requires editing JSON by hand is dead on arrival.
Most MCP servers make you hand-edit claude_desktop_config.json. That's a 50% drop-off at the funnel. What have you done about it?
One-line installer: curl -sSL argosbrain.com/install | sh. It detects your OS + architecture, downloads a checksum-verified binary, installs to ~/.local/bin — no sudo, no package manager, no daemon. An argosbrain init command writes the MCP config snippet directly into the client's config file with a backup, so for Claude Code / Cursor / Zed / Continue you don't touch JSON.
Manual install steps live in INSTALL.md.

Yes. The retrieval path has no network dependency — it's in-process Rust reading from local bincode storage. Bundle downloads happen at install / sync time; after that, everything is local. An air-gapped setup just pre-stages the bundles.
No. ArgosBrain is local-first by default — ingestion, storage, and retrieval all happen in-process on your machine. No cloud, no telemetry, no remote index. If a future version adds optional team sync, it will be opt-in and clearly labelled.
Any MCP-compatible client. Confirmed: Claude Code, Cursor, Zed, Continue, Cline (via MCP Agent mode). Aider integrates via its MCP support. Copilot does not support MCP today, so ArgosBrain can't be used inside Copilot — but it happily sits alongside.
Where we put the numbers and let anyone verify them.
Yes, obviously. That's why LongMemCode is MIT-licensed, publicly hosted, fully deterministic (no LLM judge), and ships with adapter stubs for Mem0 and Zep so anyone can run them and publish results. The whole point of publishing it is to let you disprove us. If someone runs Mem0 through it and publishes a score that beats ours, we update the scoreboard and the pitch. That's the deal.
Because agent UX lives at the tail. A memory system with a great P50 and a two-second P99 feels broken every hundred queries. P99 is the honest ceiling for what users actually experience in an interactive IDE loop. We publish P50 too, but we lead with P99 to stay honest.
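A toy number makes the tail concrete: if 1 query in 100 stalls, P50 never notices and P99 is exactly the stall. Using one simple floor-rank percentile convention (there are several; this is illustrative, not our benchmark harness):

```rust
/// Floor-rank percentile over an ascending-sorted sample.
fn percentile(sorted_ms: &[f64], p: f64) -> f64 {
    let idx = ((p / 100.0) * sorted_ms.len() as f64).floor() as usize;
    sorted_ms[idx.min(sorted_ms.len() - 1)]
}
```

With 99 queries at 0.5 ms and one at 2000 ms, P50 reads 0.5 ms while P99 reads 2000 ms: the median says "fast", the tail says what the user actually felt once per hundred queries.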
Not yet. Adapter stubs for Mem0 and Zep are open in the repo for contribution; neither has been run yet. Cursor / Copilot / Windsurf memories aren't scriptable enough to run through a benchmark runner, but Mem0 and Zep are — nothing stops their teams from publishing a score. Their silence is not proof they'd lose; it's just that nobody has done the work yet.
The catch is that LongMemCode specifically measures structural retrieval quality on code — "does this symbol exist", "enumerate callers of X", "list every override of Y". A well-built code-structural memory should be near-perfect on these; they're deterministic questions with deterministic answers. That's the point. If we scored 85% we'd be admitting our engine has bugs. The interesting comparison isn't "100% vs 95%" — it's "100% vs grep-baseline's 6–54%", which tells you what kinds of questions text search literally cannot answer.
The questions we would rather not have to answer — and exactly because of that, the ones you should read first.
Three layers. (1) The benchmark itself: a public, MIT-licensed, deterministic benchmark for code memory didn't exist before LongMemCode. Whoever wants to copy us has to also out-score us publicly, which is a much higher bar than a feature-copy. (2) The ingestion pipeline: running SCIP + live LSP + bespoke tree-sitter drivers with custom semantic hooks in a single engine is not a weekend project. The hook logic per language is where months of work hide. (3) Hot-path cost: the $0/query graph retrieval is an architectural choice that's hard to retrofit onto an LLM-call-per-read product like Letta or a prompt-injection product like Cursor Memories.
If they do, they'll do it for Claude Code specifically. ArgosBrain is MCP-native and agent-portable — the same engine serves Claude Code, Cursor, Aider, Zed, Continue simultaneously. The moat narrows on Claude Code, but the cross-agent, cross-IDE story is still ours. Also: first-party memory from any one vendor tends to be scoped to what that vendor's own agent needs. A cross-vendor memory layer has a different customer.
MCP is the transport, not the engine. If the agent ecosystem consolidates on something else, ArgosBrain's graph + HNSW + SCIP/LSP/tree-sitter pipeline ports behind whatever new stdio / gRPC / HTTP interface wins. The expensive part — the ingestion and the graph — doesn't move.
Because sub-millisecond P99 at laptop RAM means you can't afford a GC pause or an interpreter overhead, and because the MCP process is going to run next to an agent that already eats RAM. Rust gives us predictable latency and a tiny footprint without giving up crash safety. Plus, the tree-sitter and async-lsp ecosystems are already first-class Rust citizens.
Product vs architecture. "ArgosBrain" is what users install and type into a config file. "Neurogenesis" is the internal codename for the graph engine; it shows up in crate names, research references, and architecture docs. Same engine, two names: one faces users, the other faces the codebase.
Open an issue on GitHub — we'll add the answer here.