parsiya

Trailmark MCP Server

Trailmark MCP Server is a standalone MCP wrapper around trailofbits/trailmark.

While I understand Trail of Bits' usage with Claude skills, my use case requires an MCP server that can analyze and serve multiple graphs. The server can scan multiple repositories, and the LLM can request information from each separately.

Mostly created with OpenAI GPT-5.5 via GitHub Copilot in VS Code. Point your LLM to the ai-docs directory for documentation and development support.

Requirements

  • Python 3.12+
  • uv

Project metadata:

  • package name: trailmark-mcp
  • CLI command: trailmark-mcp

Install

Install runtime and development dependencies:

uv sync --group dev

Quick Start

Start server over stdio:

uv run trailmark-mcp serve --transport stdio

Smoke-test direct scan path without an MCP client:

uv run trailmark-mcp scan /path/to/repo

Skip preanalysis during scan when needed:

uv run trailmark-mcp scan /path/to/repo --skip-preanalysis

How The Server Works

Primary lifecycle entrypoint is open_repository(...).

Behavior summary:

  • if no snapshot exists, the server scans source, optionally runs preanalysis, and saves the first snapshot
  • if a snapshot exists and rescan=False, the server reloads the latest snapshot into a live session
  • if rescan=True, the server rebuilds from source and saves a fresh snapshot

This means the common flow is:

  1. call open_repository
  2. use graph tools against returned session
  3. call save_snapshot after meaningful in-memory mutations when you want persistence

Session Model

session_id is MCP wrapper state, not Trailmark core state.

Current semantics:

  • each open_repository(...) call creates a new session id
  • multiple live sessions can coexist
  • tools accept session_id to target a specific graph
  • omitted session_id uses the most recently opened still-open session
  • closing the default session promotes the most recently opened remaining session

Use current_repository(session_id=...) to verify which repository a session points to.

Public MCP Tools

Lifecycle:

  • open_repository
  • current_repository
  • close_repository
  • save_snapshot

Navigation:

  • graph_summary
  • diff_graphs
  • search_nodes
  • callers_of
  • callees_of
  • ancestors_of
  • reachable_from
  • paths_between
  • entrypoint_paths_to
  • attack_surface
  • complexity_hotspots
  • functions_that_raise

Context and mutation:

  • subgraph
  • annotations_of
  • findings
  • nodes_with_annotation
  • run_preanalysis
  • annotate_node
  • clear_annotations
  • augment_findings

Notes:

  • diff_graphs(before_session_id, after_session_id) treats after as the new state
  • search_nodes supports contains, exact, and suffix
  • helper surfaces such as scan_repository and tool_manifest have been removed and are intentionally no longer part of the public runtime
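The three search_nodes match modes can be paraphrased as plain string predicates. This is a behavioral sketch under the assumption that the modes map to standard substring, equality, and endswith checks; the real implementation may differ in details such as case handling.

```python
def matches(name: str, query: str, mode: str = "contains") -> bool:
    """Illustrative semantics of the three search_nodes match modes."""
    if mode == "exact":
        return name == query
    if mode == "suffix":
        return name.endswith(query)
    return query in name  # "contains" (assumed default)
```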

Snapshot Behavior

Snapshots are written under the analyzed repository, not under this server repository:

<target-repo>/.trailmark/snapshots/<timestamp>/

Current snapshot artifacts include:

  • graph.json
  • summary.json
  • entrypoints.json
  • hotspots.json
  • subgraphs.json
  • scan-metadata.json

Snapshots support reload into a live session. Use rescan=True when you explicitly need a fresh rebuild from source.

Repository Layout

Key files:

  • src/trailmark_mcp/cli.py: CLI entrypoint for scan and serve
  • src/trailmark_mcp/mcp_app.py: MCP tool registration
  • src/trailmark_mcp/tool_catalog.py: declarative metadata for exposed tools
  • src/trailmark_mcp/services/registry.py: session tracking
  • src/trailmark_mcp/services/runtime.py: main Trailmark-backed runtime behavior

Development

Run focused test suite:

uv run --group dev pytest tests/test_tool_catalog.py tests/test_registry.py tests/test_stdio_server.py

Current CI runs that same focused suite on Python 3.12.

Extension rule:

  1. add or change runtime behavior
  2. register tool in mcp_app.py
  3. update metadata in tool_catalog.py
  4. update tests
  5. update docs if public behavior changed
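Steps 2 and 3 are easiest to keep honest with a sync check between registered tools and catalog entries, which step 4's tests can assert. A hypothetical sketch; the set-based shape below is an assumption, not the project's real registry or catalog structure:

```python
def find_missing(registered: set[str], catalog: set[str]) -> dict[str, set[str]]:
    """Report tools present on one side but not the other (illustrative)."""
    return {
        "unregistered": catalog - registered,   # in tool_catalog.py, never registered
        "uncatalogued": registered - catalog,   # registered in mcp_app.py, no metadata
    }
```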

Use In VS Code

VS Code can launch this server directly through MCP using a workspace-level mcp.json file.

Typical setup:

  1. open this repository in VS Code
  2. make sure dependencies are installed with uv sync --group dev
  3. keep the server definition in .vscode/mcp.json
  4. let the MCP client start the server over stdio

This repository already includes .vscode/mcp.json for local use.

Example mcp.json:

{
  "servers": {
    "trailmark-mcp": {
      "type": "stdio",
      "command": "uv",
      "args": [
        "run",
        "trailmark-mcp",
        "serve",
        "--transport",
        "stdio"
      ]
    }
  }
}

If you use this server from a larger multi-project workspace, copy the same definition into that workspace root's .vscode/mcp.json and make sure the command runs in an environment where uv and this project are available.
