# Workspace MCP Server
An MCP (Model Context Protocol) server that gives AI agents read-only access to workspace databases and project management APIs.
Built with FastMCP, this service acts as a universal data gateway — allowing any MCP-compatible agent (Claude Desktop, Copilot, custom orchestrators) to introspect, query, and correlate business data without writing integration code.
## Architecture & Stack Rationale

### Why MCP + FastMCP?
Model Context Protocol is the emerging standard for connecting LLMs to external tools and data. Instead of building one-off function calls or plugin APIs per agent, MCP provides a single, typed interface that any compliant LLM can discover and invoke at runtime.
- FastMCP was chosen over the raw `mcp` SDK because it eliminates boilerplate: tools are plain Python functions with a decorator — no JSON-RPC wire-protocol handling, no manual capability negotiation.
- The SSE (Server-Sent Events) transport allows long-lived connections from agents running in browsers, desktop apps, or cloud functions, and is trivially load-balanced behind Knative or any HTTP ingress.
```
┌──────────────────────┐      MCP/SSE      ┌──────────────────────────────┐
│       AI Agent       │ ◄───────────────► │   Workspace MCP Server       │
│  (Claude, Copilot…)  │                   │   ├─ Database Tools (Mongo)  │
└──────────────────────┘                   │   └─ API Tools (REST)        │
                                           └──────────────────────────────┘
```
### Why MongoDB for Workspace Data?
Workspace applications produce semi-structured, schema-flexible data — project documents, task metadata, user activity logs, and configuration records. MongoDB's document model:
- Handles polymorphic schemas without migrations (different document types emit different fields).
- Provides native aggregation pipelines for reporting and analytics (counts, grouping, windowed stats).
- Supports text indexes for full-text search across notes, descriptions, and logs.
- The `analyze_database` tool gives AI agents runtime introspection — they discover collections, field types, and indexes on their own, without hardcoded schemas.
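As an illustration, an aggregation pipeline of the kind `aggregate_documents` can execute might look like this — the `tasks` collection and its fields are assumptions for the example, not this server's actual schema:

```python
# Count open tasks per assignee, busiest first (illustrative schema).
pipeline = [
    {"$match": {"type": "task", "status": {"$ne": "closed"}}},  # filter stage
    {"$group": {"_id": "$assignee", "open": {"$sum": 1}}},      # count per assignee
    {"$sort": {"open": -1}},                                    # highest counts first
    {"$limit": 10},
]

# With pymongo, this would be executed as:
#   db["tasks"].aggregate(pipeline)
```

An agent would typically call `analyze_database` first to learn the real collection and field names, then construct a pipeline like this.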
### Why a Dedicated Project Management API Adapter?
The project management REST API tracks tickets, tasks, projects, and users. Rather than exposing raw HTTP to the LLM (security risk + prompt overhead), the server wraps the API with:
- Managed authentication — supports both Bearer token and Basic Auth; credentials never leak into the agent conversation.
- Structured error boundaries — non-2xx responses are captured and returned as strings, never as raw stack traces.
- Metadata tools (`get_metadata`, `get_custom_fields`) that let the agent self-discover valid enum values and custom field IDs — no hardcoding, no "I don't know what statuses exist."
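The managed-auth and error-boundary pattern can be sketched as follows; the function names and error-string format are illustrative assumptions, not the server's actual internals:

```python
import base64
from typing import Optional
from urllib import request, error

def auth_headers(token: str, email: Optional[str] = None) -> dict:
    """Use Basic Auth when an email is configured, Bearer otherwise."""
    if email:
        creds = base64.b64encode(f"{email}:{token}".encode()).decode()
        return {"Authorization": f"Basic {creds}"}
    return {"Authorization": f"Bearer {token}"}

def api_get(url: str, token: str, email: Optional[str] = None) -> str:
    """Return the response body, or a short error string on a non-2xx
    status. Credentials and stack traces never reach the agent."""
    req = request.Request(url, headers=auth_headers(token, email))
    try:
        with request.urlopen(req, timeout=10) as resp:
            return resp.read().decode()
    except error.HTTPError as exc:
        return f"API error {exc.code}: {exc.reason}"
```

Because the error path returns a plain string, the LLM sees "API error 403: Forbidden" rather than a traceback containing headers or credentials.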
### Why Knative + Kubernetes for Deployment?
Production workloads demand zero-downtime updates and automatic scaling:
| Requirement | Solution |
|---|---|
| Scale to zero when idle | Knative Serving auto-scales down to 0 replicas |
| Per-environment isolation | Kubernetes Namespace separation |
| Secret rotation | Pre-deployment `kubectl create secret` from CI/CD |
| Canary rollouts | Knative revision traffic splitting |
The `deploy_kn.sh` script handles the full lifecycle: Docker build, image push to a private registry, Kubernetes Secret upsert, and Knative Service apply — all in a single idempotent command.
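The scale-to-zero and traffic-splitting behavior comes from Knative Serving itself; the applied Service corresponds roughly to a manifest like the following (image, namespace, and secret names are placeholders, not the script's actual output):

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: workspace-mcp
  namespace: workspace          # per-environment namespace isolation
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/min-scale: "0"   # allow scale to zero when idle
    spec:
      containers:
        - image: registry.example.com/workspace-mcp:latest
          ports:
            - containerPort: 8080
          envFrom:
            - secretRef:
                name: workspace-mcp-env          # upserted by CI/CD before deploy
```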
## Tools Overview

### Database (MongoDB)

| Tool | Purpose |
|---|---|
| `analyze_database` | Introspect databases, collections, indexes, and inferred schemas |
| `get_sample_documents` | Peek at documents from any collection |
| `get_distinct_values` | Distinct values and frequencies for a field |
| `query_documents` | Filtered queries with projection and sort |
| `aggregate_documents` | Run MongoDB aggregation pipelines |
| `count_documents` | Count documents matching a filter |
| `search_documents` | Full-text search across documents |
### Project Management API (REST)

| Tool | Purpose |
|---|---|
| `check_api_connection` | Verify API connectivity and return the caller's identity |
| `list_projects` | List all accessible projects |
| `search_items` | Query items by structured conditions |
| `get_item_detail` | Full detail — linked items, references, comments |
| `get_metadata` | Discover valid priorities, statuses, categories, and projects |
| `find_user` | Resolve users by name or email |
| `get_custom_fields` | List custom fields with their API IDs |
## Quick Start

```shell
pip install -r requirements.txt
cp .env.example .env
# Edit WORKSPACE_MONGO_URI, API_TOKEN, API_SITE_ID
python server.py
```

The server starts on `http://0.0.0.0:8080` with SSE transport. Connect any MCP client to `http://host:8080/sse`.
## Environment Variables

| Variable | Description |
|---|---|
| `WORKSPACE_MONGO_URI` | MongoDB connection string |
| `API_TOKEN` | Project management API token |
| `API_EMAIL` | Email for Basic Auth (omit to use a Bearer token) |
| `API_BASE_URL` | API instance URL |
| `API_SITE_ID` | Site ID (preferred over auto-resolution) |
| `SERVER_PORT` | HTTP server port (default: 8080) |
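A minimal `.env` for local development might look like this (all values are placeholders):

```
WORKSPACE_MONGO_URI=mongodb://localhost:27017
API_TOKEN=your-api-token
API_BASE_URL=https://pm.example.com
API_SITE_ID=your-site-id
SERVER_PORT=8080
```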
## Docker

```shell
docker build -t workspace-mcp .
docker run -p 8080:8080 --env-file .env workspace-mcp
```
## Deployment

```shell
export AGENT_NAME=workspace-mcp
export DEPLOYMENT_CONTEXT=workspace
bash deploy.sh
```

Requires: Docker, `kubectl` (with Knative Serving installed), access to a container registry, and a `.env.workspace` file with deployment-specific variables (registry, namespace, pull secret, credentials).