Memtrace
The persistent memory layer for coding agents. A bi-temporal, episodic, structural knowledge graph — built from AST, not guesswork.
Early Access — Memtrace is under active development. Core indexing and structural search are stable. Temporal features (evolution scoring, timeline replay) are functional but may have rough edges. Report issues here.
Memtrace gives coding agents something they've never had: structural memory. Not vector similarity. Not semantic chunking. A real knowledge graph compiled from your codebase's AST — where every function, class, interface, and API endpoint exists as a node with deterministic, typed relationships.
Index once. Every agent query after that resolves through graph traversal — callers, callees, implementations, imports, blast radius, temporal evolution — in milliseconds, with zero token waste.
npm install -g memtrace # binary + 12 skills + MCP server — one command
memtrace start # launches the graph database and auto-indexes the current project
That's it. Run memtrace start from your project root — it spins up the graph database and kicks off indexing. Claude and Cursor (v2.4+) pick up the skills and MCP tools automatically.
https://github.com/user-attachments/assets/e7d6a1e9-c912-4e65-a421-bd0256dffa5a
Built-in UI at localhost:3030 — explore your graph, trace dependencies, spot dead code, and visualize architecture at a glance.
Why Memtrace Exists
Good code intelligence tools already exist. GitNexus and CodeGrapherContext build AST-based graphs with symbol relationships, and they work well for understanding what's in your codebase right now.
Memtrace is a bi-temporal episodic structural knowledge graph. It builds on that same AST foundation and adds two dimensions:
- Temporal memory — every symbol carries its full version history. Agents can reason about what changed, when it changed, and how the architecture evolved — not just what exists today. Six scoring algorithms (impact, novelty, recency, directional, compound, overview) let agents ask different temporal questions.
- Cross-service API topology — Memtrace maps HTTP call graphs between repositories, detecting which services call which endpoints across your architecture.
On top of that, the structural layer is comprehensive:
- Symbols are nodes — functions, classes, interfaces, types, endpoints
- Relationships are edges — CALLS, IMPLEMENTS, IMPORTS, EXPORTS, CONTAINS
- Community detection — Louvain algorithm identifies architectural modules automatically
- Hybrid search — Tantivy BM25 + vector embeddings + Reciprocal Rank Fusion, all on top of the graph
- Rust-native — compiled binary, no Python/JS runtime overhead, single-digit-millisecond average query latency
The agent doesn't just search your code. It remembers it.
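To make the data model concrete, here is a minimal sketch of a symbol-level graph with typed edges and per-node version history. The class and field names are illustrative assumptions, not Memtrace's internal API:

```python
from collections import defaultdict, deque
from dataclasses import dataclass, field

# Illustrative node: a symbol with typed edges and a version history (hypothetical shape).
@dataclass
class Symbol:
    name: str
    kind: str                      # "function", "class", "interface", "endpoint", ...
    file: str
    versions: list = field(default_factory=list)   # (commit, timestamp) history entries

class CodeGraph:
    def __init__(self):
        self.nodes: dict[str, Symbol] = {}
        self.edges: dict[str, set] = defaultdict(set)   # edge type -> {(src, dst)}

    def add_edge(self, edge_type: str, src: str, dst: str):
        self.edges[edge_type].add((src, dst))

    def callers(self, name: str) -> set:
        return {src for src, dst in self.edges["CALLS"] if dst == name}

    def blast_radius(self, name: str) -> set:
        # Transitive closure over reverse CALLS edges: everything that could break.
        seen, queue = set(), deque([name])
        while queue:
            current = queue.popleft()
            for caller in self.callers(current):
                if caller not in seen:
                    seen.add(caller)
                    queue.append(caller)
        return seen
```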
Benchmarks
All four systems run on the same machine, same mempalace checkout, same 1,000 queries, same evaluator. Ground truth is extracted by Python's stdlib ast module — not from any tool's index — so no system gets a home-field advantage in the dataset itself. Full reproduction scripts, raw per-query results, and methodology notes live in benchmarks/fair/.
Results (1,000 Python symbol-lookup queries on mempalace)
| Tool | Coverage | Acc@1 | Acc@5 | Acc@10 | Avg lat | Tokens |
|---|---|---|---|---|---|---|
| Memtrace (ArcadeDB) | 100.0% | 96.7% | 100.0% | 100.0% | 9.16 ms | 195 |
| ChromaDB (all-MiniLM-L6-v2) | 100.0% | 62.3% | 86.1% | 87.9% | 58.5 ms | 1,937 |
| GitNexus (eval-server) | 99.5% | 27.1% | 89.7% | 89.9% | 191.2 ms | 213 |
| CodeGrapherContext (CLI) | 67.2% | 6.4% | 66.4% | 66.7% | 1627.2 ms | 221 |
- Coverage = the tool returned any result for the query (separates "did you index it?" from "did you rank it well?")
- Acc@K = the correct file appeared in the top K ranked results
- Avg latency = wall-clock per query, including all protocol overhead (MCP JSON-RPC for Memtrace, HTTP for GitNexus, in-process for ChromaDB, subprocess spawn for CGC)
- Tokens = average response size in tokens (chars / 4)
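For reference, coverage and Acc@K can be computed from per-query ranked results roughly as follows. This is an illustrative sketch, not the actual harness in benchmarks/fair/; the function name and data shapes are assumptions:

```python
def score(results_by_query: dict, truth: dict, k: int = 10) -> dict:
    """results_by_query: query -> ranked list of file paths; truth: query -> correct file."""
    n = len(truth)
    covered = sum(1 for q in truth if results_by_query.get(q))   # returned anything at all
    acc_at_1 = sum(1 for q in truth
                   if results_by_query.get(q) and results_by_query[q][0] == truth[q])
    acc_at_k = sum(1 for q in truth if truth[q] in results_by_query.get(q, [])[:k])
    return {"coverage": covered / n, "acc@1": acc_at_1 / n, f"acc@{k}": acc_at_k / n}
```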
What the numbers say, read fairly:
- Exact-symbol lookup is Memtrace's sweet spot: 100% coverage, a rank-1 hit in 96.7% of queries, and the correct file in the top-10 every single time. 9 ms per query, 195 tokens per response.
- ChromaDB shows what semantic embeddings look like for this workload — 88% top-10 but rank-1 is probabilistic, and the response is 10× larger because it returns 800-char chunks rather than symbol metadata.
- GitNexus finds the right file 90% of the time — the old "12.8% accuracy" claim from the Acc@1-only harness understated it massively. GitNexus leads its response with execution flows, pushing standalone definitions down the list, which costs it rank-1 but not top-10.
- CodeGrapherContext's 67.2% coverage means its parser extracted two-thirds of the symbols Python's AST finds. Among symbols it did index, top-10 hit rate is excellent (~99%). Latency is dominated by the CLI re-initialising FalkorDB per call — operational, not algorithmic.
Where each tool shines — this benchmark measures exact-symbol lookup only. Different workloads produce different rankings: ChromaDB wins on natural-language queries, GitNexus on execution-flow traces, Memtrace on exact lookup / typo tolerance / temporal queries / cross-service API topology. See benchmarks/fair/README.md for a per-workload breakdown.
Mem0 and Graphiti are strong conversational memory engines designed for tracking entity knowledge (e.g. User -> Likes -> Apples). They excel at that. For code intelligence specifically, the tradeoff is that they rely on LLM inference to build their graphs — which adds cost and time when processing thousands of source files.
Graphiti processes data through add_episode(), which triggers multiple LLM calls per episode — entity extraction, relationship resolution, deduplication. At ~50 episodes/minute (source), ingesting 1,500 code files takes roughly 30 minutes at one episode per file, and longer if a file maps to multiple episodes.
Mem0 processes data through client.add(), which queues async LLM extraction and conflict resolution per memory item (source). Bulk ingestion with infer=True (default) means every file passes through an LLM pipeline. Throughput is bounded by your LLM provider's rate limits.
Both accumulate $10–50+ in API costs for large codebases because every relationship is inferred rather than parsed.
Memtrace takes a different approach: it indexes 1,500 files in 1.2–1.8 seconds for $0.00 — no LLM calls, no API costs, no rate limits. Native Tree-sitter AST parsers resolve deterministic symbol references (CALLS, IMPLEMENTS, IMPORTS) locally. The tradeoff is that Memtrace is purpose-built for code — it doesn't handle conversational entity memory the way Mem0 and Graphiti do.
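The "parsed, not inferred" point is easy to see with Python's stdlib ast, the same module the benchmark uses for ground truth: a deterministic extractor needs no LLM calls at all. The sketch below is simplified and is not how Memtrace's Tree-sitter pipeline is actually written:

```python
import ast

def extract_symbols_and_calls(source: str, path: str):
    """Deterministically extract definitions and raw CALLS edges from one file — no LLM involved."""
    tree = ast.parse(source, filename=path)
    symbols, calls = [], []
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
            symbols.append((node.name, type(node).__name__, node.lineno))
        elif isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            calls.append(node.func.id)   # unresolved callee name; resolution happens in a later pass
    return symbols, calls
```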
GitNexus and CodeGrapherContext both build AST-based code graphs with structural relationships — solid tools in the same space. Memtrace shares that foundation and extends it with temporal memory, API topology, and a Rust runtime:
| Capability | Memtrace | GitNexus | CodeGrapher |
|---|---|---|---|
| AST-based graph | Yes | Yes | Yes |
| Structural relationships (CALLS, IMPLEMENTS, IMPORTS) | Yes | Yes | Yes |
| Bi-temporal version history per symbol | Yes — 6 scoring modes | Git-diff only | No |
| Cross-service HTTP API topology | Yes | No | No |
| Community detection (Louvain) | Yes | Yes | No |
| Hybrid search (BM25 + vector + RRF) | Yes — Tantivy + embeddings | No | BM25 + optional embeddings |
| Language | Rust (compiled binary) | JavaScript | Python |
| Coverage (1K queries) | 100% | 99.5% | 67.2% |
| Acc@1 (1K queries) | 96.7% | 27.1% | 6.4% |
| Acc@10 (1K queries) | 100% | 89.9% | 66.7% |
| Query latency (1K queries) | 9.16 ms avg (11.4 ms p95) | 191.2 ms avg | 1627.2 ms avg |
| Tokens per query | 195 avg | 213 avg | 221 avg |
| Index time (~250 files / 2.3K nodes / 5.8K edges) | ~4 sec (≈500 ms of real work + ~3 s Docker / Bolt / schema DDL startup on first run) | ~6 sec | ~1 sec (cached) |
All numbers from the fair benchmark on the same machine, same mempalace checkout, same 1,000 queries. Ground truth is extracted by Python's stdlib ast — not from any tool's index — so no system is advantaged in the dataset itself. Metrics are coverage (did the tool index it?), Acc@1 (is the correct file first?), and Acc@10 (is it in the top-10?), which together separate parser coverage from rank quality.
The latency difference is primarily Rust vs. interpreted runtimes, and ArcadeDB's Graph-OLAP engine (native CSR projections, PageRank/betweenness as in-database procedures) vs. HTTP/embedding pipelines. The feature difference is temporal memory and API topology — dimensions Memtrace adds on top of the shared AST-graph foundation.
25+ MCP Tools
Memtrace exposes a full structural toolkit via the Model Context Protocol:
- Search & Discovery
- Relationships
- Impact Analysis
- Code Quality
- Temporal Analysis
- Graph Algorithms
- API Topology
- Indexing & Watch
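All of these are ordinary MCP tools, so any JSON-RPC-capable client can call them directly. The sketch below shows the shape of a tools/call request; the tool name and arguments are placeholders, not Memtrace's actual tool names (use tools/list to discover those):

```python
import json

# A JSON-RPC 2.0 "tools/call" request as an MCP client would send it over stdio.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "search_symbols",                       # hypothetical tool name
        "arguments": {"query": "parse_config", "limit": 5},
    },
}
print(json.dumps(request))
```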
12 Agent Skills
Memtrace ships skills that teach Claude how to use the graph. They fire automatically based on what you ask — no prompt engineering required.
| Skill | Name | You say... |
|---|---|---|
| Search | `memtrace-search` | "find this function", "where is X defined" |
| Relationships | `memtrace-relationships` | "who calls this", "show class hierarchy" |
| Evolution | `memtrace-evolution` | "what changed this week", "how did this evolve" |
| Impact | `memtrace-impact` | "what breaks if I change this", "blast radius" |
| Quality | `memtrace-quality` | "find dead code", "complexity hotspots" |
| Architecture | `memtrace-graph` | "show me the architecture", "find bottlenecks" |
| APIs | `memtrace-api-topology` | "list API endpoints", "service dependencies" |
| Index | `memtrace-index` | "index this project", "parse this codebase" |
Plus 4 workflow skills that chain multiple tools with decision logic:
| Skill | You say... |
|---|---|
| `memtrace-codebase-exploration` | "I'm new to this project", "give me an overview" |
| `memtrace-change-impact-analysis` | "what will break if I refactor this" |
| `memtrace-incident-investigation` | "something broke", "root cause analysis" |
| `memtrace-refactoring-guide` | "help me refactor", "clean up tech debt" |
Temporal Engine
Six scoring algorithms for different temporal questions:
| Mode | Best for |
|---|---|
| `compound` | General-purpose "what changed?" — weighted blend of impact, novelty, recency |
| `impact` | "What broke?" — ranks by blast radius (in_degree^0.7 × (1 + out_degree)^0.3) |
| `novel` | "What's unexpected?" — anomaly detection via surprise scoring |
| `recent` | "What changed near the incident?" — exponential time decay |
| `directional` | "What was added vs removed?" — asymmetric scoring |
| `overview` | Quick module-level summary |
Uses Structural Significance Budgeting to surface the minimum set of changes covering ≥80% of total significance.
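As a rough illustration of how the impact formula and the significance budget fit together (the exact weighting is internal to Memtrace, so treat this as an assumption-laden sketch): score each change, then keep the smallest prefix of ranked changes whose cumulative score reaches the 80% threshold.

```python
def impact_score(in_degree: int, out_degree: int) -> float:
    # Blast-radius ranking from the table above: in_degree^0.7 * (1 + out_degree)^0.3
    return (in_degree ** 0.7) * ((1 + out_degree) ** 0.3)

def significance_budget(changes: list, threshold: float = 0.80) -> list:
    """changes: list of (name, score). Return the smallest set covering >= threshold of total significance."""
    ranked = sorted(changes, key=lambda c: c[1], reverse=True)
    total = sum(score for _, score in ranked) or 1.0
    picked, covered = [], 0.0
    for name, score in ranked:
        picked.append(name)
        covered += score
        if covered / total >= threshold:
            break
    return picked
```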
Compatibility
| Editor / Agent | MCP Tools (25+) | Skills (12) | Install |
|---|---|---|---|
| Claude Code | ✅ | ✅ | npm install -g memtrace — fully automatic |
| Claude Desktop | ✅ | ✅ | Automatic — shared with Claude Code |
| Cursor (v2.4+) | ✅ | ✅ | npm install -g memtrace — fully automatic |
| Windsurf | ✅ | Coming soon | Add MCP server manually |
| VS Code (Copilot) | ✅ | — | Add MCP server manually |
| Cline / Roo Code | ✅ | — | Add MCP server manually |
| Codex CLI | ✅ | Coming soon | Add MCP server manually |
| Any MCP client | ✅ | — | Add MCP server manually |
MCP tools work with any editor or agent that supports the Model Context Protocol. Skills are workflow prompts that teach the agent how to chain tools — Claude Code, Claude Desktop, and Cursor (v2.4+) all load them natively from the same SKILL.md format.
Setup
Claude Code + Claude Desktop
npm install -g memtrace handles everything automatically — binary, 12 skills, MCP server, plugin, and marketplace all register in one command for both Claude Code and Claude Desktop.
For manual setup:
claude plugin marketplace add syncable-dev/memtrace
claude plugin install memtrace-skills@memtrace --scope user
claude mcp add memtrace -- memtrace mcp -e MEMTRACE_ARCADEDB_BOLT_URL=bolt://localhost:7687
Cursor
Cursor v2.4+ supports Agent Skills natively, and npm install -g memtrace handles everything automatically — no separate Cursor plugin is needed because Cursor reads the same SKILL.md format as Claude.
What the installer writes:
- MCP server → ~/.cursor/mcp.json (global — works in every project you open)
- 12 skills + 4 workflows → ~/.cursor/skills/memtrace-*/SKILL.md
For a project-local install (so the skills travel with your repo and teammates get them on clone), run inside the project:
memtrace install --only cursor --local
This writes to .cursor/mcp.json and .cursor/skills/ relative to the project root instead of your home directory.
For a manual install (without the npm package), clone this repo and copy the skills directly:
cp -R plugins/memtrace-skills/skills/* ~/.cursor/skills/
Then register the MCP server manually (see the "Other Editors" JSON below).
Other Editors (Windsurf, VS Code, Cline)
After npm install -g memtrace, add the MCP server to your editor's config:
{
"mcpServers": {
"memtrace": {
"command": "memtrace",
"args": ["mcp"],
"env": { "MEMTRACE_ARCADEDB_BOLT_URL": "bolt://localhost:7687" }
}
}
}
Config file locations by editor
| Editor | Config file |
|---|---|
| Windsurf | ~/.codeium/windsurf/mcp_config.json |
| VS Code (Copilot) | .vscode/mcp.json in your project root |
| Cline | Cline MCP settings in the extension panel |
Uninstall
memtrace uninstall # removes skills, MCP server, plugin, and settings
npm uninstall -g memtrace # removes the binary
Already ran npm uninstall first? The cleanup script is persisted at ~/.memtrace/uninstall.js:
node ~/.memtrace/uninstall.js
Languages
Rust · Go · TypeScript · JavaScript · Python · Java · C · C++ · C# · Swift · Kotlin · Ruby · PHP · Dart · Scala · Perl — and more via Tree-sitter.
Requirements
| Dependency | Purpose |
|---|---|
| ArcadeDB | Graph + document + vector database — auto-managed via memtrace start (pulls arcadedata/arcadedb:latest) |
| Node.js ≥ 18 | npm installation |
| Git | Temporal analysis (commit history) |
Documentation · npm · Issues
Built by Syncable · Proprietary EULA · Free to use