About ArgosBrain

We're building the memory layer for coding agents.

Local-first. Built in Rust. The benchmark is MIT-licensed; the engine is commercial. Product of ManageVendors Inc. (Newark, DE). Made for developers who've watched one too many agents forget the shape of their own codebase.

01 · Our story

We started because the agents we loved kept failing the same way.

Every coding agent we used — Claude Code, Cursor, Aider, Continue — was brilliant at writing a function and amnesiac about the codebase it was writing into. They'd re-discover the same class hierarchy on every turn, burn tokens re-reading files they'd already read, and confidently propose changes that violated conventions buried three folders deep.

We tried every memory system we could find. Mem0 and Zep were built for chat transcripts, not code. Cursor Memories and CLAUDE.md are prose files injected into the prompt — great for preferences, useless for "enumerate every caller of DatabaseClient.connect." Letta calls the LLM to read from memory, which is exactly the cost we were trying to avoid.

None of them indexed code the way a compiler indexes code. So we built the one that does.

ArgosBrain is what happens when you point a language-server-aware graph at the "memory" problem and refuse to pay tokens on the read path.

The engine is called Neurogenesis internally — a Rust core that ingests your repo through SCIP, live LSP, or bespoke tree-sitter drivers depending on the language, builds a symbol graph + HNSW index, and serves it over MCP in under a millisecond. Zero LLM calls on the hot path. Works offline. Your code never leaves your machine.
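The read path can be pictured as nothing more than a graph lookup. A minimal sketch in Rust — the real Neurogenesis schema isn't public, so the edge layout, the `callers_of` helper, and the symbol names here are all illustrative:

```rust
use std::collections::HashMap;

/// A toy symbol graph: each fully-qualified symbol maps to the symbols
/// that reference it, as an indexer (SCIP, LSP, tree-sitter) might emit
/// the edges. "Who calls X?" becomes a hash lookup — no LLM, no tokens.
fn callers_of<'a>(graph: &HashMap<&'a str, Vec<&'a str>>, symbol: &str) -> Vec<&'a str> {
    graph.get(symbol).cloned().unwrap_or_default()
}

fn main() {
    let mut graph: HashMap<&str, Vec<&str>> = HashMap::new();
    // Edges point callee -> callers.
    graph.insert(
        "DatabaseClient.connect",
        vec!["startup::init", "jobs::retry_worker"],
    );
    graph.insert("startup::init", vec!["main"]);

    // "Enumerate every caller of DatabaseClient.connect" — a hash lookup.
    let callers = callers_of(&graph, "DatabaseClient.connect");
    assert_eq!(callers, vec!["startup::init", "jobs::retry_worker"]);
    println!("{callers:?}");
}
```

The point of the sketch is the cost model, not the schema: once the index exists, answering a structural question is a traversal over in-memory data, which is why the read path can stay at zero LLM calls.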

We shipped the engine and published the benchmark as open source, so that anyone — including our competitors — can verify the claims. That's still the deal.

02 · What we believe

Six principles, non-negotiable.

These aren't marketing values. They're the rules we write code against. When a trade-off comes up — ship faster vs ship safer, richer feature vs leaner binary, cloud convenience vs local sovereignty — these settle it.

01

Local by default

Your code is yours. Ingestion, storage, retrieval — all on your machine. Cloud is opt-in, clearly labelled, never default.

02

Zero tokens on read

Retrieval is a graph traversal, not an LLM call. $0/query. Every architectural decision downstream serves this.

03

Deterministic or it didn't happen

We benchmark with exact-match scoring, not LLM judges. If the answer isn't reproducible on your laptop, it isn't an answer.

04

Concede the seams

Every rough edge, every missing feature, every roadmap item — named out loud. The FAQ and Verdict pages exist so nothing hides.

05

Verifiable, not hidden

The LongMemCode benchmark is MIT-licensed and adapter stubs for competitors are in the benchmark repo. Anyone can run the numbers on their own laptop. The engine itself is commercial — but every claim we make about it is reproducible.

06

Tail latency over averages

We lead with P99 because agent UX lives at the tail. A 4-second hiccup every hundred queries feels broken; a slow P50 just feels slow.
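The arithmetic behind that principle fits in a few lines. A sketch using the nearest-rank percentile method — the latency numbers are invented for illustration, not measured:

```rust
/// Nearest-rank percentile over an ascending-sorted slice of latencies (ms).
fn percentile(sorted_ms: &[u64], p: f64) -> u64 {
    let rank = ((p / 100.0) * sorted_ms.len() as f64).ceil() as usize;
    sorted_ms[rank.saturating_sub(1).min(sorted_ms.len() - 1)]
}

fn main() {
    // 98 fast queries, 2 four-second hiccups — the "feels broken" case.
    let mut latencies: Vec<u64> = vec![30; 98];
    latencies.extend([4000, 4000]);
    latencies.sort_unstable();

    let mean: u64 = latencies.iter().sum::<u64>() / latencies.len() as u64;
    // Median and average both look healthy; only the tail exposes the hiccups.
    assert_eq!(percentile(&latencies, 50.0), 30);   // P50: fine
    assert_eq!(mean, 109);                          // mean: fine-ish
    assert_eq!(percentile(&latencies, 99.0), 4000); // P99: broken
    println!("p50={}ms mean={}ms p99={}ms", 30, mean, 4000);
}
```

Same dataset, three summaries: the P50 and the mean would let those hiccups ship, which is why the tail number leads.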

03 · Who we are

Seven people. Distributed team. Rust, compilers, agents, growth.

A compact team spanning engineering, AI research, and GTM — distributed across multiple time zones. Backgrounds in compilers, distributed systems, large-language-model fine-tuning, and enterprise AI deployment. If you reach out, you'll be talking to someone who ships the product.

Aurelian Jibleanu

Chief Technology Officer · Leadership

AI software engineer with 10+ years of experience building scalable systems. Leads our technical vision — proprietary AI models, the Neurogenesis engine, and the LongMemCode benchmark. Previously worked on machine-learning systems at enterprise scale. LinkedIn.

Michael Thompson

Lead Backend Engineer · Engineering

Full-stack engineer specialising in distributed systems and API architecture. Designs the infrastructure that powers the ingest pipeline, ensuring reliability at scale.

James Richardson

Senior Backend Engineer · Engineering

Database architect and performance-optimisation specialist. Makes sure our systems handle millions of requests efficiently while maintaining data integrity.

David Martinez

Lead Frontend Engineer · Engineering

UI/UX engineer passionate about intuitive interfaces. Leads dashboard and user-facing feature development, making complex AI tooling accessible to everyone.

Ryan Cooper

Frontend Engineer · Engineering

React specialist focused on performance and accessibility. Builds responsive, fast-loading interfaces that work seamlessly across all devices and browsers.

Christopher Anderson

AI Training Specialist · AI Research

Machine-learning engineer specialising in fine-tuning large language models. Develops and trains proprietary AI models for content generation, SEO optimisation, and industry-specific applications.

Diana Pantazi

Sales Manager · Sales

Connects businesses with AI solutions that transform their operations. Works closely with enterprise clients to understand their needs and show how automation drives growth.

We're hiring Rust engineers with compiler or language-server chops. Say hello.

04 · Where to find us

Newark, Delaware. Distributed team.

ArgosBrain is a product of ManageVendors Inc., a Delaware C-corporation registered at 131 Continental Dr, Suite 305, Apt. 33, Newark, DE 19713. The legal entity lives in Delaware because the math on Delaware incorporation is settled; the engineering team is distributed. Support phone: +1 (310) 943-3476.

For everything else — bug reports, feature requests, benchmark disputes — GitHub Issues is the public canonical inbox. Press and partnership: contact.

Build something with us.

The benchmark is open source, the papers are CC BY 4.0, the roadmap is public. The engine itself is commercial — but everything around it is inspectable. If that sounds like your kind of Tuesday, come say hi.