For Engineering Leads · Tech Writers · Solo Engineers · Open Source Maintainers

Code documentation
in minutes, not weeks.
One slash command.

Type this in your AI agent ›
/argos-corporate-delivery
Claude Code · Cursor · Cline · Aider · Continue · Zed — any MCP-compatible agent.

An arc42-compliant, branded PDF documenting your codebase — architecture, APIs, operations, security, code map, project closure — produced from your repo in 5-15 minutes. Every claim carries a file:line citation. Standards: arc42, IEEE 1016, ISO/IEC/IEEE 26515, C4, MADR, OpenAPI 3.x. Single command, deterministic source-of-truth, $0 per retrieval.

01 · The problem

The doc nobody writes — until the auditor, the new hire, or the acquirer asks.

Every engineering team has the same drawer: a half-finished Confluence space, a stale README, a 2-year-old architecture diagram in Miro that mentions services that no longer exist. It works until someone asks "can you explain the system end-to-end?" — and then it costs four engineers two weeks to ship a document that's wrong on the day it's printed.

Tech-writer agencies quote $15-50K for a "software delivery package" after a 30-day onboarding. Internal teams burn ~120 engineering hours per cycle. The output is brittle: the moment the code changes, the doc is wrong, and nobody updates it because nobody owns the round trip.

"Acquirer's M&A diligence team asked for our architecture docs. We had three days. The team spent two weekends rebuilding the C4 diagrams from scratch. They missed the new event bus. The acquirer caught it. Embarrassing."
— CTO, Series B fintech (anonymized, post-acquisition)

02 · How teams document today

Six engineers. Two weeks. Forty Confluence pages nobody trusts.

  1. Spec out a doc structure. Half the team argues arc42 vs C4 vs "let's just use Notion". Two days lost.
  2. Manual code archaeology. Open files, draw boxes in Miro, hand-list the modules, hand-trace the call graph. Per service: 4-8 hours. Per repo: 60-120 hours.
  3. API surface inventory. Grep for routes, eyeball OpenAPI, miss three endpoints. Re-run when the auditor catches them.
  4. "Decision log" section. Empty until someone remembers a Slack thread from October. Half the ADRs are inferred, not real.
  5. Pretty-printing. Three days in InDesign / a Notion template / "let's just use Google Docs". Result: a 40-page PDF that's 60% correct on day 1, 30% correct on day 30, and deleted by day 90.

The math: 120h × $200/h ≈ $24K per internal cycle. External tech-writer agencies: $15-50K per delivery. Both produce paper. The code keeps moving, the doc rots, and the next time someone asks for it, you start over.

03 · How Argos changes it

One slash command. 6 buckets. Branded PDF on the auditor's desk.

$ argosbrain ingest .
✓ ingested 3,412 files, 28,604 symbols, 187,221 call-graph edges  (12.4s)

[in your AI agent — Claude Code / Cursor / Cline / ...]
> /argos-corporate-delivery

Phase 1/4 · Discovery
✓ framework: Next.js + tRPC + Postgres
✓ 14 modules detected (Louvain communities, modularity Q=0.73)
✓ 32 hub functions ranked by call-centrality

Phase 2/4 · Bucket synthesis (6 sections, parallel)
✓ 01-architecture     → C4 Context, Container, Component diagrams
✓ 02-api              → 47 endpoints, OpenAPI 3.x export
✓ 03-operations       → deploy topology, runtime dependencies
✓ 04-security         → auth flow, sink reachability, secrets scan
✓ 05-code-and-maint   → module map, top 30 hubs, naming conventions
✓ 06-project-closure  → ADRs (MADR), known issues, handoff checklist

Phase 3/4 · Render (v0.26.0 — built into argosbrain)
✓ argosbrain delivery render ./out/Acme-Delivery
✓ 47 pages, A4, branded, Inter + JetBrains Mono
✓ Acme-Delivery.pdf written  (1.2 MB)

Phase 4/4 · Confidence pass
- Tier 1 (auto from graph)     38 sections — deterministic
- Tier 2 (LLM prose w/ labels)  6 sections — reviewer-flagged
- Tier 3 (TODO templates)       3 sections — fill in 30 min

Total: 11 minutes. Cost: $0 retrieval + ~$1.50 LLM prose.

The PDF that lands on the auditor's desk:

ACME-DELIVERY.pdf · cover page

  Vendor:   ManageVendors Inc.       Project:  Acme Order System
  Client:   Acme Corp                Commit:   a4c08dd1 (HEAD)
  Date:     2026-05-08               Standard: arc42 + IEEE 1016 + C4

  ── 6 buckets, 47 pages ──────────────────────────────────────
  01  ARCHITECTURE       C4 model, hub functions, module map
  02  API SURFACE        47 endpoints, OpenAPI export, version policy
  03  OPERATIONS         deploy topology, dependencies, runbook
  04  SECURITY           auth flow, sink reachability, CVE map
  05  CODE & MAINTENANCE naming conventions, ownership map, hot spots
  06  PROJECT CLOSURE    ADRs, known issues, handoff checklist
  ─────────────────────────────────────────────────────────────

  Every claim cites file:line. Every diagram regenerable. Every
  section reproducible across re-runs. The PDF you give the
  client is the same one your CI pipeline can verify tomorrow.

Re-run on every release. The PDF stays current as long as the code does. No tech-writer drift, no Confluence rot.
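
Wiring that into CI is two CLI calls plus whatever your release process already does. A minimal sketch in the same console style as above; it assumes argosbrain is on the runner's PATH, and the copy destination is just a convention:

# illustrative release hook
$ argosbrain ingest .                              # content-hash skip: only changed files re-ingest
$ argosbrain delivery render ./out/Acme-Delivery   # re-render the branded PDF at the new HEAD
$ cp ./out/Acme-Delivery/Acme-Delivery.pdf docs/   # version the deliverable next to its Markdown source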

04 · Side-by-side

The math for one delivery cycle.

Metric                  Today (manual / agency)              With /argos-corporate-delivery
─────────────────────────────────────────────────────────────────────────────────────────────
Time per cycle          2 weeks · 6 engineers                5-15 minutes · one engineer
Cost                    $15-50K agency · $24K internal       ~$1.50 LLM prose · $0 retrieval
Standards covered       "we use the Confluence template"     arc42 · IEEE 1016 · 26515 · C4 · MADR · OpenAPI
Source of truth         Hand-drawn in Miro                   Tree-sitter AST + SCIP graph · re-runnable
file:line citations     Rare                                 Every architectural claim
Re-run on next release  Rebuild from scratch                 One CLI call · cached behind content hash
Output format           Word + PDF + screenshots             Branded PDF + Markdown source · A4, custom cover

Numbers based on the Acme reference repo (Next.js + Postgres SaaS, 28k symbols). Same engine drove the Kubernetes 1.32.0 audit (38,771 symbols, 4.2s ingest).

05 · What you get

6 buckets. Standards-aligned. Branded PDF + Markdown source.

  • 01 · Architecture — C4 model (Context / Container / Component / Code), hub functions ranked by call-centrality, module map (Louvain communities), dependency graph, framework detection. arc42-aligned.
  • 02 · API surface — every endpoint inventoried with input / output shapes, OpenAPI 3.x export where REST exists, version policy section, authentication map. IEEE 1016 §5 aligned.
  • 03 · Operations — deploy topology, runtime dependencies, environment variables, runbook skeleton, observability hooks. ISO/IEC/IEEE 26515 §6 aligned.
  • 04 · Security — auth flow walkthrough, sink reachability matrix (auto from find_sinks + check_reachability), secrets scan, CVE map for direct deps. Pairs with /argos-security-reviewer.
  • 05 · Code & maintenance — naming conventions per kind, top 30 hub functions, file ownership map (git-blame integrated), known hot spots, technical-debt candidates.
  • 06 · Project closure — Architecture Decision Records (MADR format), known issues, follow-up backlog, engineer-handoff checklist for the next maintainer.
  • Branded PDF deliverable — A4-paginated, custom cover page with vendor / client / project / commit-SHA placeholders, Inter + JetBrains Mono typography, Mermaid diagrams + matplotlib charts. Renders via argosbrain delivery render (built into the CLI as of v0.26.0).
  • Markdown source — every section editable, every diagram a regenerable spec file, every chart a JSON spec. Check it into your repo's docs/ directory if you want; re-run the renderer on every release.
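
To make "regenerable spec file" concrete: a C4 Context page is just a small Mermaid source you can diff, edit, and re-render. The snippet below is a hypothetical example of such a spec, not output captured from the tool:

%% Hypothetical diagram spec (Mermaid C4). The kind of text file that ships
%% with the Markdown source so diagrams re-render on every release.
C4Context
    title Acme Order System - Context
    Person(user, "Customer", "places orders")
    System(acme, "Acme Order System", "Next.js + tRPC + Postgres")
    System_Ext(pay, "Payment provider", "external PSP")
    Rel(user, acme, "uses")
    Rel(acme, pay, "charges via API")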

06 · FAQ

The questions every engineering lead asks.

What standards does the documentation follow?

arc42 (architecture views), IEEE 1016 (software design descriptions), ISO/IEC/IEEE 26515 (agile documentation), C4 model (Context / Container / Component / Code), MADR (architecture decision records), OpenAPI 3.x where REST APIs exist. Output structure: 6 buckets — architecture, API, operations, security, code-and-maintenance, project closure. The orchestrator picks the relevant subset for your codebase; pure-library repos skip API and operations, microservices get the full sweep.

How does ArgosBrain know my codebase well enough to document it?

It ingests your code structurally. Tree-sitter AST per file, SCIP graph where indexers exist (Rust, Python, Go, TypeScript, Java, Scala, PHP, Ruby, C#, Dart), Louvain communities for module boundaries, PageRank centrality for hub functions. Every claim in the doc carries a file:line citation, so the auditor (or next engineer) can verify the source structurally rather than trust the LLM's narrative.
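
If you want a feel for the two graph techniques named above, here is a toy version in Python with networkx. It illustrates the method, not ArgosBrain's internals; the call graph and function names are made up:

# Louvain communities + PageRank centrality on a toy call graph.
# Illustrative of the technique only; not ArgosBrain's code.
import networkx as nx
from networkx.algorithms.community import louvain_communities

# Directed call graph: an edge (a, b) means "function a calls function b".
G = nx.DiGraph([
    ("api.createOrder", "db.insertOrder"),
    ("api.createOrder", "auth.requireUser"),
    ("api.listOrders",  "db.queryOrders"),
    ("api.listOrders",  "auth.requireUser"),
    ("db.insertOrder",  "db.connect"),
    ("db.queryOrders",  "db.connect"),
])

# Module boundaries: Louvain on the undirected projection.
modules = louvain_communities(G.to_undirected(), seed=42)

# Hub functions: PageRank on the directed graph.
hubs = sorted(nx.pagerank(G).items(), key=lambda kv: -kv[1])

print("modules:", [sorted(m) for m in modules])
print("top hub:", hubs[0])   # db.connect: everything flows into it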

Is the documentation actually accurate, or is it LLM-generated drift?

Three confidence tiers, surfaced inline:

  • Tier 1 — AUTO from the structural graph (architecture diagrams, hubs, modules, dependencies, API surface). Deterministic, byte-reproducible across runs, no LLM in the read path.
  • Tier 2 — LLM prose with confidence labels (operational narratives, "why" explanations). Reviewers see what's machine-fact vs LLM-prose at a glance.
  • Tier 3 — TODO templates (project goals, business context, customer-facing rollout). The team fills these in — usually 30 minutes total.

Can I customize the cover page for my client?

Yes. The render pipeline accepts vendor name, client name, project name, and commit SHA placeholders. Header / footer are templated. Enterprise tier adds logo embedding and per-customer brand sheets.
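
A sketch of what that invocation could look like. The flag names below are hypothetical placeholders, not confirmed options; check the render command's help output for the real interface:

$ argosbrain delivery render ./out/Acme-Delivery \
    --vendor "ManageVendors Inc." \
    --client "Acme Corp" \
    --project "Acme Order System" \
    --commit a4c08dd1          # flag names illustrative only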

Does this replace technical writers?

No — it removes the most painful 80%. The structural sections (every architecture page that needs file:line citations, the API surface, the dependency map, the module boundaries) generate themselves. Your tech writer focuses on the 20% that genuinely benefits from a human voice — the "why" decisions, the user-facing narrative, the rollout plan. The orchestrator's TODO templates are designed to land cleanly into that 20%.

How long does the full package take to generate?

5-15 minutes for a small-to-medium repo (under 250k LOC). Initial ingest dominates the time budget; the orchestration phases are sub-second per section. Re-runs after a code change are typically 1-3 minutes thanks to content-hash skip — only the changed files re-ingest, the rest of the graph is reused.
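
The skip itself is the standard content-hash trick. In miniature, and explicitly not ArgosBrain's actual code (reingest is a stand-in for re-parse and patch the graph):

# The content-hash skip in miniature. General caching technique only.
import hashlib, json, pathlib

def reingest(path: pathlib.Path) -> None:
    print("re-ingest:", path)        # stand-in: re-parse the file, patch the graph

CACHE = pathlib.Path(".ingest-cache.json")
seen = json.loads(CACHE.read_text()) if CACHE.exists() else {}

for path in pathlib.Path("src").rglob("*.ts"):
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    if seen.get(str(path)) == digest:
        continue                     # unchanged since last run: keep the cached slice
    reingest(path)
    seen[str(path)] = digest

CACHE.write_text(json.dumps(seen))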

Which agents/IDEs work?

Any MCP-compatible client. Confirmed: Claude Code, Cursor, Cline, Aider, Continue, Zed. The skill is installed at ~/.claude/skills/argos-corporate-delivery/SKILL.md (Claude Code's global skills layout) — Cursor and the rest pick it up via their own skill registries. Run argosbrain init --install-skills after install to lay them down.
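
In console form, the whole setup is two steps; the commands and the skills path are the ones above, and the listing shows what you should expect to find:

$ argosbrain init --install-skills
$ ls ~/.claude/skills/argos-corporate-delivery/
SKILL.md

[in your AI agent]
> /argos-corporate-delivery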

07 · Next