# Anchor MCP

Persistent, local-first memory for LLMs using Voronoi-partitioned vector storage.
## The Problem
LLMs are stateless. Every new conversation is a blank slate — your project context, coding preferences, client details, all gone. Standard "memory" features hallucinate. RAG systems need infrastructure. Nothing gives you stable, partitioned, local-first memory.
## The Solution
Anchor is an MCP server that gives any LLM persistent long-term memory. It runs entirely on your machine — no cloud, no API keys, no data leaving your laptop. Knowledge is partitioned into 16 stable Voronoi cells with frozen centroids, so your client briefs, coding rules, and project specs stay mathematically separated and never contaminate each other.
## Quick Install

```bash
pip install git+https://github.com/ArkyaAI/anchor-mcp.git
```
## Claude Desktop Configuration

Add to your `claude_desktop_config.json`:

- **macOS:** `~/Library/Application Support/Claude/claude_desktop_config.json`
- **Windows:** `%APPDATA%\Claude\claude_desktop_config.json`
```json
{
  "mcpServers": {
    "anchor": {
      "command": "python3",
      "args": ["-m", "anchor"]
    }
  }
}
```
Restart Claude Desktop. Done.
## Usage

Once connected, Claude can store and recall memories naturally.
**Store a preference:**

> "Remember that I always use TypeScript with strict mode, Tailwind for styling, and Vitest for testing."

Claude calls `anchor_store(name="Tech Stack", content="TypeScript strict mode, Tailwind CSS, Vitest", tags="coding,frontend")`.
**Recall context:**

> "What testing framework do we use?"

Claude calls `anchor_recall(query="testing framework")` → returns your stored preference.
**Auto-retrieve (the killer feature):**

`anchor_auto` scans the conversation and silently retrieves relevant memories. You don't ask for it — Claude checks its memory and responds as if it always knew your context.
**Inspect your memory map:**

> "Show me how my memories are distributed."

Claude calls `anchor_inspect()` → shows a visual map of which Voronoi cells hold your knowledge.
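Conceptually, recall is a nearest-neighbor search over your stored memory vectors. Here is a minimal sketch of that idea using brute-force cosine similarity — the memory names and 3-dim toy vectors are invented stand-ins for Anchor's real 384-dim embeddings and FAISS index, and `recall` is a hypothetical helper, not Anchor's API:

```python
import numpy as np

# Toy stand-ins for stored memories; the real server embeds content
# with all-MiniLM-L6-v2 and searches a FAISS index instead.
memories = {
    "Tech Stack": np.array([0.9, 0.1, 0.0]),
    "Client Brief": np.array([0.1, 0.9, 0.2]),
}

def recall(query_vec, top_k=1):
    """Return the top_k memory names ranked by cosine similarity."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    ranked = sorted(memories, key=lambda name: cos(query_vec, memories[name]),
                    reverse=True)
    return ranked[:top_k]
```

A query vector close to the "Tech Stack" embedding would rank that memory first, which is the behavior `anchor_recall(query="testing framework")` exposes through natural language.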
## How It Works
- **Local Embeddings** — `all-MiniLM-L6-v2` generates 384-dim vectors on your CPU. No API calls.
- **Voronoi Partitioning** — 16 frozen centroids divide the vector space into stable regions. Your coding rules cluster in one cell, client details in another. They don't drift.
- **FAISS Search** — Meta's FAISS library handles similarity search with custom ID mapping.
- **Welford Statistics** — Streaming mean/variance per cell detects when knowledge areas are growing dense or shifting.
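The partitioning and statistics ideas above can be sketched in a few lines. This is an illustrative toy, not Anchor's implementation: it uses 4 cells and 2-dim vectors instead of 16 cells and 384 dims, and `assign_cell`/`CellStats` are hypothetical names:

```python
import numpy as np

rng = np.random.default_rng(0)
centroids = rng.normal(size=(4, 2))   # frozen once at init, never moved

def assign_cell(vec):
    """Voronoi assignment: index of the nearest frozen centroid."""
    return int(np.argmin(np.linalg.norm(centroids - vec, axis=1)))

class CellStats:
    """Welford's online algorithm: streaming mean/variance per cell."""
    def __init__(self, dim):
        self.n = 0
        self.mean = np.zeros(dim)
        self.m2 = np.zeros(dim)

    def update(self, vec):
        # Single-pass update: no need to keep old vectors around.
        self.n += 1
        delta = vec - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (vec - self.mean)

    @property
    def variance(self):
        # Sample variance per dimension.
        return self.m2 / (self.n - 1) if self.n > 1 else np.zeros_like(self.m2)
```

Because the centroids are frozen, the same vector always lands in the same cell — that stability is what keeps stored knowledge from drifting between regions as new memories arrive.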
## Available Tools
| Tool | Description | Parameters |
|---|---|---|
| `anchor_store` | Save a named memory with optional tags | `name`, `content`, `tags` |
| `anchor_recall` | Semantic search across all memories | `query`, `top_k` |
| `anchor_auto` | Auto-retrieve relevant context from conversation | `conversation_context` |
| `anchor_list` | List all stored memories | `tag` (optional filter) |
| `anchor_delete` | Remove a memory by ID | `anchor_id` |
| `anchor_inspect` | View Voronoi cell distribution and stats | `cell_id` (optional) |
## Storage

All data lives locally in `~/.anchor/`:

```
~/.anchor/
├── config.json           # Settings
├── anchors/*.json        # Your stored memories
├── index/vectors.faiss   # Vector index
├── index/id_map.json     # ID mapping
└── cells/
    ├── centroids.npy     # Frozen Voronoi centroids
    └── stats.db          # Cell statistics (SQLite)
```

To reset: `rm -rf ~/.anchor`

To backup: copy the `~/.anchor` directory.
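Since everything lives in one directory, backup is a plain copy. A minimal sketch, assuming the default `~/.anchor` location (the `backup_anchor` helper is hypothetical, not part of Anchor):

```python
import shutil
from pathlib import Path

def backup_anchor(dest, src=Path.home() / ".anchor"):
    """Copy the whole store (memories, index, cells) to dest."""
    shutil.copytree(src, dest, dirs_exist_ok=True)
    return Path(dest)
```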
## Requirements

- **Python:** 3.10+
- **Disk:** ~500MB (embedding model + dependencies)
- **RAM:** ~200MB overhead
- **OS:** macOS, Linux, Windows (WSL)
## License

MIT — Built by Arkya AI