OneMCP - Generic MCP Aggregator
A universal Model Context Protocol (MCP) aggregator that combines multiple external MCP servers into a unified interface with progressive discovery.
Version 0.2.0 - Production-ready generic aggregator with meta-tool architecture for improved efficiency and extensibility.
Built with the official MCP Go SDK from Anthropic/Google collaboration.
What is OneMCP?
OneMCP is a generic MCP aggregator that:
- Aggregates tools from multiple external MCP servers
- Supports custom internal tools with type-safe registration
- Exposes a unified meta-tool interface to reduce token usage
- Supports progressive tool discovery (search before loading schemas)
- Works with any MCP-compliant server
Why OneMCP?
When working with many MCP servers, exposing hundreds of tools directly to LLMs consumes massive amounts of tokens and context window. As explained in Anthropic's Code Execution with MCP article, the meta-tool pattern solves this by:
- Reducing token overhead: Instead of loading 50+ tool schemas (tens of thousands of tokens), expose just 2 meta-tools
- Progressive discovery: LLMs search for relevant tools only when needed
- Preserving context: More room for actual conversation and code, less for tool definitions
- Scaling gracefully: Add new servers without increasing baseline token usage
OneMCP implements this pattern as a universal aggregator that works with:
- Any LLM that supports MCP (Claude, OpenAI, Gemini, local models via Claude Desktop, etc.)
- Any MCP-compliant server
- Any deployment scenario (local development, production APIs, agent frameworks)
Architecture
OneMCP Aggregator
├── Meta-Tools (2)
│ ├── tool_search - Discover available tools
│ └── tool_execute - Execute a single tool
│
├── Internal Tools (optional)
│ └── Custom Go-based tools with type-safe handlers
│
└── External MCP Servers (configured via .onemcp.json)
├── Playwright (21 tools) - Browser automation
├── Filesystem (N tools) - File operations
└── Your Server (N tools) - Any MCP-compliant server
Benefits
- Token Efficiency: 99% reduction - expose 2 meta-tools instead of hundreds of individual tools
- Progressive Discovery: Search first, load schemas only for needed tools
- Universal: Works with any MCP-compliant server
- Flexible: Support both external servers (config) and internal tools (Go code)
- Type-Safe: Built-in tools leverage Go's type system with automatic schema inference
Performance Optimizations
OneMCP includes several optimizations for token efficiency and speed:
- Configurable Result Limit: Returns 5 tools per search by default (configurable via .onemcp.json)
- LLM-Powered Semantic Search: Claude, Codex, or Copilot intelligently match queries to tools
- Progressive Discovery: Four detail levels (names_only → summary → detailed → full_schema)
- Schema Caching: External tool schemas cached at startup, no repeated fetching
- Lazy Loading: Schemas only sent when explicitly requested via detail_level
Token Usage Examples (default 5 tools):
- names_only search: ~50 tokens total
- summary search: ~200-400 tokens total
- full_schema search: ~2000-5000 tokens total
LLM-Powered Semantic Search
OneMCP uses LLM-powered semantic search to intelligently match your queries to the right tools. Instead of exact keyword matching, it understands intent and context using AI models.
Example: Query "take a picture of the page" → finds browser_screenshot
Choose from 3 LLM providers based on your needs:
1. Claude (Anthropic, Default)
- Best for: Highest quality semantic understanding with Claude models
- Speed: ~3-5 seconds per search
- Quality: Excellent - Claude Haiku/Sonnet/Opus reason about tool descriptions
- Memory: <10MB RAM
- Requirements: Claude CLI (brew install anthropics/claude/claude-code)
- Cost: Uses local Claude CLI
{
"settings": {
"searchProvider": "claude",
"claudeModel": "haiku" // Options: "haiku" (fast, default), "sonnet", "opus"
}
}
2. Codex (OpenAI GPT-5)
- Best for: OpenAI's latest Codex models for tool search
- Speed: ~3-5 seconds per search
- Quality: Excellent - GPT-5 Codex reasoning
- Memory: <10MB RAM
- Requirements: Codex CLI
- Cost: Uses Codex CLI
{
"settings": {
"searchProvider": "codex",
"codexModel": "gpt-5-codex-mini" // Options: "gpt-5-codex-mini" (default), "gpt-5-codex"
}
}
3. Copilot (GitHub Copilot)
- Best for: GitHub Copilot integration for tool discovery
- Speed: ~3-5 seconds per search
- Quality: Excellent - Uses Claude Haiku 4.5 via GitHub Copilot
- Memory: <10MB RAM
- Requirements: GitHub CLI with Copilot (gh copilot)
- Cost: Requires GitHub Copilot subscription
{
"settings": {
"searchProvider": "copilot",
"copilotModel": "claude-haiku-4.5" // Default model
}
}
How it works: For each search, OneMCP sends your query + all tool schemas to the LLM, which ranks tools by semantic relevance. The LLM understands context, synonyms, and intent far better than traditional keyword search.
Performance Comparison:
| Provider | Latency | Memory | Quality | Requirements |
|---|---|---|---|---|
| Claude (haiku) | ~3s | <10MB | ⭐⭐⭐⭐⭐ | Claude CLI |
| Codex (gpt-5-codex-mini) | ~3s | <10MB | ⭐⭐⭐⭐⭐ | Codex CLI |
| Copilot | ~3s | <10MB | ⭐⭐⭐⭐⭐ | GitHub CLI + Copilot |
Recommendation: Use Claude with haiku (default) for best balance of speed and quality.
Technology
OneMCP is built with:
- Official MCP Go SDK v1.1.0 - Anthropic/Google collaboration
- Go 1.25 - Modern, efficient, and type-safe
- JSON-RPC 2.0 - Standard protocol for MCP communication
- Multiple Transports - Stdio (command), Streamable HTTP (with optional SSE streaming), and more
The official SDK provides:
- Type-safe tool registration with automatic schema inference
- Multiple transport options (stdio via CommandTransport, HTTP via SSE, StreamableHTTP, in-memory for testing)
- Built-in client for connecting to external servers
- Full support for MCP protocol features
Supported Transports:
- Command (stdio): Execute local commands and communicate via stdin/stdout using JSON-RPC - most common for local tools
- Streamable HTTP: Connect to remote HTTP-based MCP servers using JSON-RPC over HTTP with optional SSE streaming (MCP spec 2025-03-26+) - ideal for cloud services
- In-Memory: Direct in-process communication - useful for testing
Protocol Details:
- All MCP communication uses JSON-RPC 2.0 for message encoding
- Stdio transport: JSON-RPC messages over stdin/stdout
- Streamable HTTP transport: JSON-RPC via HTTP POST/GET with optional Server-Sent Events (SSE) for streaming responses
- Single endpoint (no dual endpoint complexity)
- Supports both request/response and streaming
- Session management via Mcp-Session-Id header
- Automatic reconnection with Last-Event-ID for resilience
Quick Start
1. Build the aggregator
# Build for macOS (use GOARCH=arm64 for Apple Silicon)
GOOS=darwin GOARCH=amd64 go build -o one-mcp ./cmd/one-mcp
# Build for Linux
GOOS=linux GOARCH=amd64 go build -o one-mcp-linux ./cmd/one-mcp
2. Configure OneMCP
Create .onemcp.json:
{
"settings": {
"searchResultLimit": 5,
"searchProvider": "claude"
},
"mcpServers": {
"playwright": {
"command": "npx",
"args": ["-y", "@playwright/mcp"],
"category": "browser",
"enabled": true
},
"filesystem": {
"command": "npx",
"args": ["-y", "@modelcontextprotocol/server-filesystem", "/tmp"],
"category": "filesystem",
"enabled": true
}
}
}
3. Run the aggregator
# Start OneMCP aggregator (uses .onemcp.json by default)
./one-mcp
# Use custom config file
ONEMCP_CONFIG=/path/to/config.json ./one-mcp
# Or with custom server name/version
MCP_SERVER_NAME=my-aggregator MCP_SERVER_VERSION=0.2.0 ./one-mcp
# Enable debug logging
MCP_LOG_LEVEL=debug ./one-mcp
4. Use with MCP Clients
Add to your MCP client config. For example, Claude Desktop (~/Library/Application Support/Claude/claude_desktop_config.json):
{
"mcpServers": {
"onemcp": {
"command": "/path/to/one-mcp",
"env": {
"MCP_SERVER_NAME": "my-aggregator",
"MCP_LOG_FILE": "/tmp/onemcp.log"
}
}
}
}
Meta-Tools API
1. tool_search
Discover available tools with optional filtering. Uses LLM-powered semantic search to intelligently match your query to the most relevant tools. Returns 5 tools per query by default (configurable via .onemcp.json).
Arguments:
- query (optional) - Search query in natural language (e.g., "take a screenshot", "navigate to webpage", "read files")
- category (optional) - Filter by category (e.g., "browser", "filesystem")
- detail_level (optional) - Level of detail to return:
  - "names_only" - Just tool names and categories (minimal tokens)
  - "summary" - Name, category, and description (default)
  - "detailed" - Includes argument schema
  - "full_schema" - Complete schema with all details
- offset (optional) - Number of results to skip for pagination (default: 0)
Semantic Search: The LLM understands natural language queries, context, and intent. It matches your query to tool descriptions semantically, not just by keywords.
Schema Caching: External tool schemas are cached at startup for fast repeated searches.
Hybrid Approach: Search returns 5 tools inline by default (configurable) plus a schema_file path (/tmp/onemcp-tools-schema.json) containing ALL executable tools with full schemas (external and internal tools only, excluding meta-tools which are already exposed via MCP's tools/list). For comprehensive tool exploration, search the schema file using filesystem tools instead of paginating through search results. This reduces token usage while maintaining access to complete tool information.
Example - Basic search:
{
"tool_name": "tool_search",
"arguments": {
"query": "navigate",
"detail_level": "summary"
}
}
Example - Paginated search:
{
"tool_name": "tool_search",
"arguments": {
"query": "browser",
"category": "browser",
"detail_level": "detailed",
"offset": 5
}
}
Returns:
{
"total_count": 21,
"returned_count": 5,
"offset": 0,
"limit": 5,
"has_more": true,
"schema_file": "/tmp/onemcp-tools-schema.json",
"message": "Showing 5 of 21 tools. For complete tool list with full schemas, search with filesystem tools in: /tmp/onemcp-tools-schema.json",
"tools": [
{
"name": "playwright_browser_navigate",
"category": "browser",
"description": "Navigate to a URL",
"schema": {...}
},
{
"name": "playwright_browser_click",
"category": "browser",
"description": "Click an element",
"schema": {...}
}
]
}
2. tool_execute
Execute a single tool by name.
Arguments:
- tool_name (required) - Name of the tool (e.g., playwright_browser_navigate)
- arguments (required) - Tool-specific arguments
Example:
{
"tool_name": "tool_execute",
"arguments": {
"tool_name": "playwright_browser_navigate",
"arguments": {
"url": "https://example.com"
}
}
}
Configuration
OneMCP uses .onemcp.json for configuration. The configuration file supports JSON with comments (JSONC) format - add // for line comments or /* */ for block comments to document your configuration.
See .onemcp.json.example for a complete example with comments.
Settings
Configure OneMCP behavior:
{
"settings": {
"searchResultLimit": 5,
"searchProvider": "claude"
}
}
Available Settings:
- searchResultLimit (number) - Number of tools to return per search query. Default: 5. Lower values reduce token usage but require more searches for discovery.
- searchProvider (string) - LLM provider for semantic search. Options: "claude" (default), "codex", "copilot". See "LLM-Powered Semantic Search" section above for details.
- claudeModel (string) - Claude model to use when searchProvider is "claude". Options: "haiku" (default), "sonnet", "opus".
- codexModel (string) - Codex model to use when searchProvider is "codex". Options: "gpt-5-codex-mini" (default), "gpt-5-codex".
- copilotModel (string) - Copilot model to use when searchProvider is "copilot". Default: "claude-haiku-4.5".
External Server Configuration
Define external MCP servers in the mcpServers section. OneMCP supports multiple transport types:
1. Command Transport (stdio) - Most common, runs a local command:
{
"mcpServers": {
"playwright": {
"command": "npx", // Command to execute
"args": ["-y", "@playwright/mcp"], // Command arguments
"env": { // Optional: Environment variables
"DEBUG": "1"
},
"category": "browser", // Optional: Category for grouping tools
"enabled": true // Required: Whether to load this server
}
}
}
2. HTTP Transport (Streamable HTTP) - Connect to remote MCP server via HTTP:
{
"mcpServers": {
"remote-server": {
"url": "https://api.example.com/mcp", // HTTP endpoint URL (Streamable HTTP)
"category": "api",
"enabled": true
}
}
}
Note: OneMCP uses Streamable HTTP transport (MCP spec 2025-03-26+) for all HTTP connections. This is the modern standard that replaces the deprecated SSE transport.
Configuration Fields:
- command (string) - Command to execute (for stdio transport)
- args (array) - Command arguments (stdio only)
- url (string) - HTTP endpoint URL (for Streamable HTTP transport)
- env (object) - Environment variables (stdio only)
- category (string) - Category for grouping tools
- enabled (boolean) - Whether to load this server
Note: Provide either command or url, not both.
Environment Variables
- ONEMCP_CONFIG - Configuration file path (default: ".onemcp.json")
- MCP_SERVER_NAME - Server name (default: "one-mcp-aggregator")
- MCP_SERVER_VERSION - Server version (default: "0.2.0")
- MCP_LOG_FILE - Log file path (default: "/tmp/one-mcp.log")
- MCP_LOG_LEVEL - Log level: "debug" or "info" (default: "info")
Tool Naming Convention
External tools are automatically prefixed with their server name:
- browser_navigate from playwright → playwright_browser_navigate
- take_screenshot from chrome → chrome_take_screenshot
This prevents naming conflicts when aggregating multiple servers.
Progressive Discovery Workflow
The recommended workflow for LLMs:
1. Search for tools: Use tool_search with filters to find relevant tools
2. Get detailed schemas: Use detail_level: "full_schema" for tools you plan to use
3. Execute tools: Use tool_execute with validated arguments
Example conversation:
User: "Take a screenshot of example.com"
LLM: Let me search for screenshot tools...
→ tool_search(query="screenshot", detail_level="full_schema")
LLM: Found playwright_browser_navigate and playwright_browser_take_screenshot.
Let me navigate first...
→ tool_execute(tool_name: "playwright_browser_navigate", arguments: {url: "https://example.com"})
LLM: Now taking screenshot...
→ tool_execute(tool_name: "playwright_browser_take_screenshot", arguments: {filename: "example.png"})
Logging
Logs are written to the file specified by MCP_LOG_FILE (default: /tmp/one-mcp.log):
time=2025-11-11T10:00:00.000+00:00 level=INFO msg="Starting OneMCP aggregator server over stdio..." name=one-mcp-aggregator version=0.2.0
time=2025-11-11T10:00:01.000+00:00 level=INFO msg="Loaded external MCP server" name=playwright tools=21 category=browser
time=2025-11-11T10:00:02.000+00:00 level=INFO msg="Registered tool" name=playwright_browser_navigate category=browser
time=2025-11-11T10:00:03.000+00:00 level=INFO msg="Executing tool" name=playwright_browser_navigate
time=2025-11-11T10:00:04.000+00:00 level=INFO msg="Tool execution successful" name=playwright_browser_navigate execution_time_ms=245
Troubleshooting
External server fails to start
- Check that the command path is correct in .onemcp.json
- Verify required environment variables are set
- Check logs in MCP_LOG_FILE for startup errors
- Test the server command manually: command args...
Tool not found
- Use tool_search to verify the tool exists
- Check that tool names include the server prefix (e.g., playwright_browser_navigate)
- Verify the external server is enabled in .onemcp.json
Tool execution fails
- Use tool_search with detail_level: "full_schema" to see required arguments
- Check argument types match the schema
- Review logs for detailed error messages
Development
Project Structure
.
├── cmd/
│ └── one-mcp/
│ └── main.go # Entry point
├── internal/
│ ├── mcp/
│ │ └── server.go # Aggregator server with meta-tools
│ ├── tools/
│ │ ├── types.go # Tool type definitions
│ │ └── registry.go # Tool registry and dispatcher
│ └── mcpclient/
│ └── client.go # External MCP server client
├── .onemcp.json # Configuration (settings + external servers)
├── go.mod
└── README.md
Adding External Servers
Simply add to the mcpServers section in .onemcp.json - no code changes required:
{
"settings": {
"searchResultLimit": 5,
"searchProvider": "claude"
},
"mcpServers": {
"your-server": {
"command": "/path/to/your-mcp-server",
"args": ["--config", "config.json"],
"env": {
"API_KEY": "your-key"
},
"category": "custom",
"enabled": true
}
}
}
OneMCP will automatically:
- Start the external server
- Fetch its tool list
- Prefix tool names with
your-server_ - Make tools discoverable via
tool_search - Route
tool_executecalls to the external server
Adding Internal Tools
Note: Adding internal tools requires modifying the OneMCP source code. You'll need to:
- Clone this repository: git clone https://github.com/radutopala/onemcp.git
- Make your changes (see steps below)
- Rebuild the binary: go build -o one-mcp ./cmd/one-mcp
To add custom internal tools directly to the OneMCP aggregator:
1. Define your tool struct with input/output types
// internal/tools/mytools.go
package tools
type CalculatorInput struct {
A int `json:"a" jsonschema:"First number"`
B int `json:"b" jsonschema:"Second number"`
}
type CalculatorOutput struct {
Result int `json:"result" jsonschema:"Calculation result"`
}
2. Implement the tool handler
func (s *AggregatorServer) handleCalculate(ctx context.Context, req *mcp.CallToolRequest, input CalculatorInput) (*mcp.CallToolResult, any, error) {
result := CalculatorOutput{
Result: input.A + input.B,
}
resultJSON, _ := json.Marshal(result) // marshal of this plain struct cannot fail
return &mcp.CallToolResult{
Content: []mcp.Content{
&mcp.TextContent{Text: string(resultJSON)},
},
}, nil, nil
}
3. Register the tool in the server
// internal/mcp/server.go - in registerMetaTools() or a new registration function
func (s *AggregatorServer) registerCustomTools(server *mcp.Server) error {
mcp.AddTool(server, &mcp.Tool{
Name: "calculate",
Description: "Add two numbers together",
}, s.handleCalculate)
return nil
}
4. Call the registration function
// In NewAggregatorServer(), after registerMetaTools()
if err := aggregator.registerCustomTools(server); err != nil {
return nil, fmt.Errorf("failed to register custom tools: %w", err)
}
Key Points
- Type Safety: The official SDK automatically infers schemas from your Go structs
- Struct Tags: Use jsonschema:"description" to document arguments
- Handler Signature: func(ctx, *CallToolRequest, InputType) (*CallToolResult, any, error)
- Response Format: Always return JSON in TextContent for consistency with meta-tools
- No Schema Required: If you don't provide inputSchema in the Tool struct, it's inferred from your input type
Example: Echo Tool
// Simple echo tool that returns what you send
type EchoInput struct {
Message string `json:"message" jsonschema:"Message to echo back"`
}
func (s *AggregatorServer) handleEcho(ctx context.Context, req *mcp.CallToolRequest, input EchoInput) (*mcp.CallToolResult, any, error) {
return &mcp.CallToolResult{
Content: []mcp.Content{
&mcp.TextContent{Text: input.Message},
},
}, nil, nil
}
// Register in server
mcp.AddTool(server, &mcp.Tool{
Name: "echo",
Description: "Echo back a message",
}, s.handleEcho)
Internal tools are directly exposed via tools/list alongside the 2 meta-tools, making them immediately available without needing tool_search.
When to use internal tools vs external servers:
- Use external servers (recommended): For most use cases - no code changes needed, just configuration
- Use internal tools: Only when you need tight integration with OneMCP's core logic or want Go's type safety for custom business logic
License
MIT License - See LICENSE file for details.
Contributing
Contributions welcome! Please open an issue or PR on GitHub.