# proxmox-mcp
A Docker-first Model Context Protocol (MCP) server for Proxmox VE.
It exposes safe, structured tools for read-only cluster inspection, VM/container lifecycle operations, snapshots, migration, provisioning helpers, and a guarded generic Proxmox API escape hatch.
## Status
Alpha. Built with API-token auth and explicit confirmation gates for mutating operations.
## Run with Docker Compose
Docker Compose is the primary supported way to run this MCP server.
```sh
cp .env.example .env
# edit .env with your Proxmox URL and token
docker compose up -d --build
```
By default the MCP server listens on:

```
http://127.0.0.1:8000/mcp
```
Default MCP transport settings in `.env.example`:

```env
MCP_TRANSPORT=streamable-http
MCP_PORT=8000
MCP_PATH=/mcp
```
Inside Docker, the app binds to `0.0.0.0` in the container, but Compose publishes it only on host loopback by default:

```yaml
ports:
  - "127.0.0.1:8000:8000"
```
Do not publish this unauthenticated MCP endpoint on all interfaces unless you put real network controls in front of it.
Supported MCP transports:

- `streamable-http`: default for Docker and most remote deployments
- `sse`: legacy HTTP/SSE MCP transport
- `stdio`: local subprocess mode
For HTTPS exposure of the MCP endpoint, put this service behind a reverse proxy such as Caddy, Traefik, or nginx and terminate TLS there. Keep the container on plain HTTP internally unless you have a specific reason to do otherwise.
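As an illustration, a minimal Caddy site block for this setup might look like the following. The hostname is a placeholder, not part of this project; Caddy obtains and terminates TLS automatically for names it can validate, and forwards plain HTTP to the container's published loopback port:

```caddyfile
proxmox-mcp.example.internal {
    reverse_proxy 127.0.0.1:8000
}
```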
## Proxmox protocol configuration
Prefer HTTPS for the Proxmox API:
```env
PVE_BASE_URL=https://proxmox.lan:8006
PVE_VERIFY_SSL=false  # only for self-signed homelab certs
```
If you intentionally need plain HTTP for Proxmox, make it explicit:
```env
PVE_BASE_URL=http://proxmox.lan:8006
PVE_ALLOW_INSECURE_HTTP=true
```
Plain HTTP sends credentials over the network. That is usually a bad idea outside a tightly controlled lab network.
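A guard like this can be sketched in Python. The env-var names match `.env.example`, but `check_base_url` is an illustrative function, not the server's actual implementation:

```python
import os
from urllib.parse import urlparse


def check_base_url(base_url: str) -> str:
    """Reject plain-HTTP Proxmox base URLs unless explicitly allowed."""
    scheme = urlparse(base_url).scheme
    if scheme == "https":
        return base_url
    if scheme == "http":
        # Opt-in flag from .env.example; anything but "true" keeps the guard on.
        if os.environ.get("PVE_ALLOW_INSECURE_HTTP", "false").lower() == "true":
            return base_url
        raise ValueError("plain HTTP requires PVE_ALLOW_INSECURE_HTTP=true")
    raise ValueError(f"unsupported scheme: {scheme!r}")
```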
## Authentication
Prefer a Proxmox API token:
```env
PVE_API_TOKEN_ID=user@pam!token-name
PVE_API_TOKEN_SECRET=replace-me
```
Password-ticket auth is also supported, but API tokens are cleaner for MCP:
```env
PVE_USERNAME=user@pam
PVE_PASSWORD=replace-me
```
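For reference, a token ID/secret pair maps onto the standard Proxmox VE `Authorization` header. A minimal sketch, with a hypothetical helper name (the header format itself is the one Proxmox VE documents):

```python
def pve_auth_header(token_id: str, token_secret: str) -> dict:
    """Build the Proxmox VE API-token Authorization header.

    Proxmox expects: Authorization: PVEAPIToken=<user>@<realm>!<name>=<secret>
    """
    return {"Authorization": f"PVEAPIToken={token_id}={token_secret}"}
```

For example, `pve_auth_header("user@pam!token-name", "replace-me")` returns `{"Authorization": "PVEAPIToken=user@pam!token-name=replace-me"}`.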
## Hermes config for Docker HTTP MCP

```yaml
mcp_servers:
  proxmox:
    url: "http://127.0.0.1:8000/mcp"
    timeout: 120
    connect_timeout: 30
```
If deployed on a remote host behind TLS:

```yaml
mcp_servers:
  proxmox:
    url: "https://proxmox-mcp.example.internal/mcp"
    timeout: 120
    connect_timeout: 30
```
## Run without Docker
Local stdio mode remains available for development:
```sh
MCP_TRANSPORT=stdio uvx proxmox-mcp
# or from a checkout:
MCP_TRANSPORT=stdio uv run proxmox-mcp
```
Local HTTP mode without Docker:
```sh
MCP_TRANSPORT=streamable-http MCP_HOST=127.0.0.1 MCP_PORT=8000 MCP_PATH=/mcp uv run proxmox-mcp
```
## Safety model

- `GET` requests are allowed by default.
- `POST`, `PUT`, and `DELETE` require `confirm=true`.
- High-level lifecycle/provisioning tools also require `confirm=true`.
- Secrets are never intentionally returned.
- Generic API paths reject full URLs, query strings, fragments, traversal, encoded traversal, and encoded slash tricks.
- Path segments are validated and encoded before being sent to Proxmox.
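The last two rules can be illustrated with a small Python sketch. The function names are hypothetical, not the server's actual code; the checks mirror the rejections listed above:

```python
from urllib.parse import quote


def encode_segment(segment: str) -> str:
    """Validate and percent-encode one Proxmox API path segment."""
    if segment in ("", ".", ".."):
        raise ValueError(f"invalid path segment: {segment!r}")
    # Reject embedded slashes, query/fragment characters, and
    # pre-encoded slashes (%2F) that would survive percent-encoding.
    if any(c in segment for c in "/?#") or "%2f" in segment.lower():
        raise ValueError(f"unsafe characters in segment: {segment!r}")
    return quote(segment, safe="")


def build_path(*segments: str) -> str:
    """Join validated segments into a relative Proxmox API path."""
    return "/" + "/".join(encode_segment(s) for s in segments)
```

For example, `build_path("nodes", "pve1", "qemu", "100")` yields `/nodes/pve1/qemu/100`, while segments like `..`, `a/b`, or a pre-encoded `a%2Fb` raise an error instead of reaching Proxmox.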
## Tool coverage

### Phase 1: Read-only

- `pve_get_version`
- `pve_get_cluster_status`
- `pve_list_nodes`
- `pve_list_resources`
- `pve_list_vms`
- `pve_get_vm_status`
- `pve_get_vm_config`
- `pve_list_storage`
- `pve_list_backups`
- `pve_get_task_status`
- `pve_get_node_metrics`
### Phase 2: Safe actions

- `pve_start_vm`
- `pve_shutdown_vm`
- `pve_stop_vm`
- `pve_reboot_vm`
- `pve_suspend_vm`
- `pve_resume_vm`
- `pve_create_snapshot`
- `pve_delete_snapshot`
- `pve_rollback_snapshot`
- `pve_migrate_vm`
### Phase 3: Admin/provisioning + escape hatch

- `pve_clone_vm`
- `pve_create_lxc`
- `pve_create_qemu_vm`
- `pve_delete_vm`
- `pve_resize_disk`
- `pve_set_vm_config`
- `pve_api_request`
## Development

```sh
uv sync --extra dev
uv run pytest
uv run ruff check .
```