# Mason: the context builder for LLMs
Mason gives LLMs a persistent map of your codebase so they stop exploring from scratch every session.
**The problem:** Every time an LLM starts a new conversation about your code, it greps, reads files, and pieces together the architecture, burning tokens on context it already understood yesterday. On a 164-file project, answering "what features does this app have?" takes 8+ file reads across multiple tool calls.
**Mason's fix:** a concept map that persists across sessions. One tool call returns a feature-to-file lookup table, so the LLM knows exactly where to look without exploring.
Measured results (deepeval, Claude Sonnet, 164-file KMP project):
| Question | With Mason | Without Mason | Token saving |
|---|---|---|---|
| List all features | 10,258 tok | 31,346 tok | 67% |
| Trace data flow | 12,010 tok | 15,258 tok | 21% |
| Compare platforms | 10,897 tok | 19,353 tok | 44% |
| Onboarding flow | 10,271 tok | 11,432 tok | 10% |
| Average | | | 36% |
Same answer quality on both paths (0.9/1.0 on all tests). To reproduce, see `bench/`.
## Quick start
```sh
claude mcp add mason --scope user -- npx -p mason-context mason-mcp
```
Restart Claude Code, then ask: "use mason to analyze this project and create a snapshot."
That's it: Mason will analyze your codebase and create a concept map. Next session, it loads the map instead of re-exploring everything.
## How it works
### Concept map
Mason's core feature. It persists a feature-to-file map in `.mason/snapshot.json` that survives across conversations. When the LLM needs to understand your project, it reads this map instead of grepping through your entire codebase:
```json
{
  "features": {
    "home screen": {
      "files": ["HomeScreen.kt", "HomeViewModel.kt", "GetWeatherDataUseCase.kt"]
    }
  },
  "flows": {
    "weather fetch": {
      "chain": ["HomeViewModel.kt", "WeatherRepositoryImpl.kt", "WeatherServiceImpl.kt"]
    }
  }
}
```
The map is generated by the LLM itself: Mason provides the analysis tools, and the LLM interprets your code to decide what the features and flows are. This means the map captures architectural understanding, not just file listings.
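To illustrate why a single lookup replaces a round of exploration, here is a minimal sketch of reading such a map. The snapshot shape mirrors the example above; `filesForFeature` is our illustrative helper, not part of Mason's API:

```typescript
// Minimal sketch: answer "which files implement this feature?" straight from
// the concept map, with no grepping. Shape mirrors the snapshot example above.
type Snapshot = {
  features: Record<string, { files: string[] }>;
  flows: Record<string, { chain: string[] }>;
};

const snapshot: Snapshot = {
  features: {
    "home screen": {
      files: ["HomeScreen.kt", "HomeViewModel.kt", "GetWeatherDataUseCase.kt"],
    },
  },
  flows: {
    "weather fetch": {
      chain: ["HomeViewModel.kt", "WeatherRepositoryImpl.kt", "WeatherServiceImpl.kt"],
    },
  },
};

// Illustrative helper, not a Mason API.
function filesForFeature(snap: Snapshot, feature: string): string[] {
  return snap.features[feature]?.files ?? [];
}

console.log(filesForFeature(snapshot, "home screen"));
```

In practice the same lookup happens inside the LLM's context after one `get_snapshot` call.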
Create one by asking your AI assistant to "create a mason snapshot", or via CLI:
```sh
mason set-llm gemini          # configure a provider (no API key needed)
mason snapshot ~/my-project   # generate concept map
mason snapshot --install-hook # auto-update on every commit
```
### Change impact analysis
Before editing a file, Mason can tell you what else might be affected. It combines three signals that would each require multiple tool calls to gather manually:
- **Co-change history**: files that historically change together in git commits
- **References**: files that import or mention the target by name
- **Related tests**: test files paired to the target by naming convention
```sh
mason impact WeatherRepository.kt -d ~/my-project
```
Also available as the `get_impact` MCP tool; ask your assistant "what would be affected if I changed WeatherRepository?"
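The co-change signal, for instance, reduces to counting co-occurrence in commit history. A minimal sketch of the idea (our own illustration, not Mason's implementation; the per-commit file lists would come from `git log --name-only`):

```typescript
// Count how often each file was committed together with `target`.
// Higher counts suggest files more likely to be affected by a change to it.
function coChangeCounts(
  commits: string[][], // one array of touched files per commit
  target: string
): Map<string, number> {
  const counts = new Map<string, number>();
  for (const files of commits) {
    if (!files.includes(target)) continue;
    for (const f of files) {
      if (f === target) continue;
      counts.set(f, (counts.get(f) ?? 0) + 1);
    }
  }
  return counts;
}

const history = [
  ["WeatherRepository.kt", "WeatherRepositoryImpl.kt"],
  ["WeatherRepository.kt", "WeatherRepositoryImpl.kt", "HomeViewModel.kt"],
  ["HomeScreen.kt"],
];
console.log(coChangeCounts(history, "WeatherRepository.kt"));
```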
### Git history analysis
Mason aggregates hundreds of commits into actionable stats: which files change most often (hot files you should be careful with), which directories haven't been touched in months (potentially stale code), and what commit conventions the team follows. This is the kind of analysis that would take dozens of git log calls to compute manually.
```sh
mason analyze ~/my-project
```
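As a sketch of the hot-file half of that aggregation, assuming input shaped like `git log --name-only --pretty=format:` output (file paths, with blank lines between commits). This is our illustration of the technique, not Mason's code:

```typescript
// Tally how often each file appears in the log output, then return the
// `top` most frequently changed ("hot") files with their change counts.
function hotFiles(log: string, top: number): [string, number][] {
  const counts = new Map<string, number>();
  for (const line of log.split("\n")) {
    const file = line.trim();
    if (!file) continue; // blank lines separate commits
    counts.set(file, (counts.get(file) ?? 0) + 1);
  }
  return [...counts.entries()].sort((a, b) => b[1] - a[1]).slice(0, top);
}

const sampleLog = `HomeViewModel.kt
HomeScreen.kt

HomeViewModel.kt

WeatherServiceImpl.kt`;
console.log(hotFiles(sampleLog, 2));
```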
## MCP tools
Mason exposes 6 tools via the Model Context Protocol. Any MCP-compatible client (Claude Code, Cursor, etc.) can use them:
| Tool | What it does |
|---|---|
| `get_snapshot` | Load the concept map: maps features/flows to files |
| `save_snapshot` | Persist the concept map for future sessions |
| `get_impact` | Change impact: co-change history, references, related tests |
| `analyze_project` | Git history: commit patterns, hot files, stale dirs |
| `full_analysis` | All-in-one first visit: git stats + structure + code samples + test map |
| `get_code_samples` | Smart file previews selected by architectural role |
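Under MCP, a client invokes these through the protocol's standard `tools/call` request. A sketch of what a `get_impact` call might look like on the wire (the `arguments` key name is our guess from the CLI usage; check the tool's declared input schema):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "get_impact",
    "arguments": { "file": "WeatherRepository.kt" }
  }
}
```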
## CLI usage
Mason also works as a standalone CLI for generating CLAUDE.md files and running analysis without an MCP client. Configure an LLM provider once, then use any command:
```sh
mason set-llm claude|gemini|ollama|openai # configure provider
mason generate                            # analyze codebase + LLM -> CLAUDE.md
mason analyze                             # git stats only (no LLM needed)
mason impact File.kt                      # change impact analysis
mason snapshot                            # create/update concept map
```
Most providers work without an API key; `claude`, `gemini`, and `ollama` all use their respective CLIs directly.
## Security
**What the snapshot contains:** feature names, relative file paths, and flow descriptions. No source code, secrets, or business logic.

**What it doesn't touch:** Mason respects `.gitignore` (via `git ls-files`) and has a deny-list that blocks `.env`, `.pem`, `.key`, credentials, and other sensitive files from being sampled. Path traversal protection ensures all file access stays within the project root.

**LLM data flow:** generating a snapshot via CLI sends sampled file contents to your configured LLM provider, the same way any AI coding assistant reads your code. Use `ollama` for fully local generation. The MCP server tools (`get_snapshot`, `get_impact`, etc.) only read local files.
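A minimal sketch of the deny-list plus path-traversal check described above (our own illustration of the technique, not Mason's actual code; the deny-list here is abbreviated):

```typescript
import * as path from "path";

// Abbreviated, illustrative deny-list of sensitive file names/suffixes.
const DENY_SUFFIXES = [".env", ".pem", ".key", "credentials"];

// Allow access only to files that resolve inside the project root
// and whose names don't match a sensitive pattern.
function isAllowed(projectRoot: string, requested: string): boolean {
  const root = path.resolve(projectRoot);
  const resolved = path.resolve(root, requested);
  if (resolved !== root && !resolved.startsWith(root + path.sep)) {
    return false; // path traversal: escapes the project root
  }
  const base = path.basename(resolved);
  return !DENY_SUFFIXES.some((s) => base === s || base.endsWith(s));
}
```

Resolving before comparing is the important step: a naive prefix check on the raw string would let `../` segments slip through.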
## Language support
Mason is completely language-agnostic. It uses file naming patterns and git history rather than language-specific parsing, so it works with any project that has source files and a git repository โ TypeScript, Kotlin, Python, Go, Rust, Swift, Java, C#, Dart, and more.
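The naming-pattern approach can be made concrete with the test-pairing case: strip the extension and check for a conventional test suffix, which works identically across languages. A sketch under that assumption (the suffix list is illustrative, not Mason's exact list):

```typescript
// Pair a test file to its source by naming convention alone: language-agnostic,
// no parsing required. Suffix list is illustrative.
const TEST_SUFFIXES = ["Test", "Spec", "_test", ".test", ".spec"];

// Drop the final extension: "App.test.tsx" -> "App.test".
const stem = (f: string): string => f.replace(/\.[^./]+$/, "");

function isTestFor(testFile: string, sourceFile: string): boolean {
  const src = stem(sourceFile);
  const test = stem(testFile);
  return TEST_SUFFIXES.some((s) => test === src + s);
}
```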
## License
MIT