## What is this?
MCP (Model Context Protocol) is an open standard that lets AI assistants (ChatGPT, Claude, VS Code Copilot, JetBrains AI, Codex, and others) use external tools. This server exposes the entire Zabbix API as MCP tools — allowing any compatible AI assistant to query hosts, check problems, manage templates, acknowledge events, and perform any other Zabbix operation.
The server runs as a standalone HTTP service. AI clients connect to it over the network.
## Features
- Complete API coverage - All 57 Zabbix API groups (219 tools): hosts, problems, triggers, templates, users, dashboards, and more
- Multi-server support - Connect to multiple Zabbix instances (production, staging, ...) with separate tokens
- Single config file - One TOML file, no scattered environment variables
- Read-only mode - Per-server write protection to prevent accidental changes
- Auto-reconnect - Transparent re-authentication on session expiry
- Production-ready - systemd service, logrotate, security hardening
- Generic fallback - `zabbix_raw_api_call` tool for any API method not explicitly defined
## Quick Start

```shell
git clone https://github.com/initMAX/zabbix-mcp-server.git
cd zabbix-mcp-server
sudo ./deploy/install.sh
sudo nano /etc/zabbix-mcp/config.toml   # fill in your Zabbix URL + API token
sudo systemctl start zabbix-mcp-server
sudo systemctl enable zabbix-mcp-server
```

Done. The server is running on `http://127.0.0.1:8080/mcp`.
## Installation

### Requirements

- Linux server with Python 3.10+
- Network access to your Zabbix server(s)
- Zabbix API token (User settings > API tokens)

### Install

```shell
git clone https://github.com/initMAX/zabbix-mcp-server.git
cd zabbix-mcp-server
sudo ./deploy/install.sh
```
The install script will:
- Create a dedicated system user `zabbix-mcp` (no login shell)
- Create a Python virtual environment in `/opt/zabbix-mcp/venv`
- Install the server and all dependencies
- Copy the example config to `/etc/zabbix-mcp/config.toml`
- Install a systemd service unit (`zabbix-mcp-server`)
- Set up logrotate for `/var/log/zabbix-mcp/*.log` (daily, 30 days retention)
### Upgrade

```shell
cd zabbix-mcp-server
git pull
sudo ./deploy/install.sh update
```
The update command will upgrade the package to the latest version, refresh the systemd unit and logrotate config, and restart the service if it is running.
### Configure

Edit the config file with your Zabbix server details:

```shell
sudo nano /etc/zabbix-mcp/config.toml
```

Minimal configuration - just fill in your Zabbix URL and API token:

```toml
[server]
transport = "http"
host = "127.0.0.1"
port = 8080

[zabbix.production]
url = "https://zabbix.example.com"
api_token = "your-api-token"
read_only = true
verify_ssl = true
```

All available options with detailed descriptions are documented in `config.example.toml`.
### Authentication

The HTTP endpoint can be protected with a bearer token. There are two ways to configure it.

Option 1 - token directly in the config:

```toml
[server]
auth_token = "your-secret-token-here"
```

Option 2 - token from an environment variable (recommended for production):

```toml
[server]
auth_token = "${MCP_AUTH_TOKEN}"
```

When `auth_token` is set, all clients must include it in the `Authorization` header:

```
Authorization: Bearer your-secret-token-here
```
When `auth_token` is not set, the server accepts unauthenticated connections. This is safe only when the server is bound to `127.0.0.1` (the default).
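The `${ENV_VAR}` form amounts to a simple substitution; a sketch of the general mechanism (the server's actual expansion rules may differ, e.g. in how unset variables are handled):

```python
import os
import re

def expand_env(value: str) -> str:
    """Replace ${VAR} placeholders with values from the environment.

    Unset variables expand to "" in this sketch; the real server may
    instead reject them -- an assumption, not documented behavior.
    """
    return re.sub(
        r"\$\{([A-Za-z_][A-Za-z0-9_]*)\}",
        lambda m: os.environ.get(m.group(1), ""),
        value,
    )

os.environ["MCP_AUTH_TOKEN"] = "s3cret"
token = expand_env("${MCP_AUTH_TOKEN}")
print(f"Authorization: Bearer {token}")  # Authorization: Bearer s3cret
```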
### Multiple Zabbix servers

You can connect to multiple Zabbix instances. Each tool has a `server` parameter to select which one to use (defaults to the first defined):

```toml
[zabbix.production]
url = "https://zabbix.example.com"
api_token = "prod-token"
read_only = true

[zabbix.staging]
url = "https://zabbix-staging.example.com"
api_token = "staging-token"
read_only = false
```
### Start

```shell
sudo systemctl start zabbix-mcp-server
sudo systemctl enable zabbix-mcp-server
```

Verify the server is running:

```shell
sudo systemctl status zabbix-mcp-server
```
### Logs

```shell
# Live log stream
tail -f /var/log/zabbix-mcp/server.log

# Via journalctl
sudo journalctl -u zabbix-mcp-server -f
```
## Docker

```shell
git clone https://github.com/initMAX/zabbix-mcp-server.git
cd zabbix-mcp-server
cp config.example.toml config.toml
nano config.toml   # fill in your Zabbix details
docker compose up -d
```

The config file is mounted read-only into the container. Logs are stored in a Docker volume.
Upgrade:

```shell
git pull
docker compose up -d --build
```

Logs:

```shell
docker compose logs -f
```
## Manual Installation (pip)

If you prefer to install manually without the deploy script:

```shell
python3 -m venv /opt/zabbix-mcp/venv
/opt/zabbix-mcp/venv/bin/pip install /path/to/zabbix-mcp-server
/opt/zabbix-mcp/venv/bin/zabbix-mcp-server --config /path/to/config.toml
```
## Connecting AI Clients

The server uses the Streamable HTTP transport and listens on `http://127.0.0.1:8080/mcp` by default.
Any MCP-compatible client can connect to this server - ChatGPT, VS Code, Claude, Codex, JetBrains, and others.
The MCP client configuration is the same for all clients:

```json
{
  "mcpServers": {
    "zabbix": {
      "url": "http://your-server:8080/mcp"
    }
  }
}
```
Where to put this config depends on the client:

| Client | Config location |
|---|---|
| ChatGPT (initMAX widget) | MCP server settings in the widget configuration |
| VS Code (Copilot / Continue / Cline) | `.vscode/mcp.json` or extension settings |
| Claude Desktop | `~/Library/Application Support/Claude/claude_desktop_config.json` (macOS) or `%APPDATA%\Claude\claude_desktop_config.json` (Windows) |
| Claude Code | `.mcp.json` in project root or `~/.claude/settings.json` for global |
| OpenAI Codex | MCP server settings in the Codex configuration |
| JetBrains IDEs | MCP server settings in the IDE |
When `auth_token` is configured on the server, clients must include the bearer token in requests:

```
Authorization: Bearer your-secret-token-here
```
## Example Prompts
Once connected, you can ask your AI assistant things like:
| Prompt | What it does |
|---|---|
| "Show me all current problems" | Calls `problem_get` to list active alerts |
| "Which hosts are down?" | Calls `host_get` with status filter |
| "Acknowledge event 12345 with message 'investigating'" | Calls `event_acknowledge` |
| "What triggers fired in the last hour?" | Calls `trigger_get` with time filter and `only_true` |
| "List all hosts in group 'Linux servers'" | Calls `hostgroup_get` then `host_get` with group filter |
| "Show me CPU usage history for host 'web-01'" | Calls `host_get`, `item_get`, then `history_get` |
| "Put host 'db-01' into maintenance for 2 hours" | Calls `maintenance_create` |
| "Export the template 'Template OS Linux'" | Calls `configuration_export` |
| "How many items does host 'app-01' have?" | Calls `item_get` with `countOutput` |
| "Check the health of the MCP server" | Calls `health_check` |
The AI chains multiple tools automatically when needed.
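Behind each of these, the MCP server issues standard Zabbix JSON-RPC calls. A sketch of the request body a prompt like "show me all current problems" ultimately turns into (the field choices here are illustrative, not the server's exact payload):

```python
import json

def jsonrpc_request(method: str, params: dict, req_id: int = 1) -> dict:
    """Build a Zabbix JSON-RPC 2.0 request body. On current Zabbix the
    API token travels in the Authorization: Bearer header, not the body."""
    return {"jsonrpc": "2.0", "method": method, "params": params, "id": req_id}

body = jsonrpc_request("problem.get", {
    "output": "extend",
    "recent": True,           # include recently resolved problems too
    "sortfield": ["eventid"],
    "sortorder": "DESC",
})

print(json.dumps(body, indent=2))
```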
## Available Tools

All tools accept an optional `server` parameter to target a specific Zabbix instance (defaults to the first configured server).
| Category | Tool | Description |
|---|---|---|
| Monitoring | `problem_get` | Get active problems and alerts — the primary tool for checking what is wrong right now |
| | `event_get` / `event_acknowledge` | Retrieve events and acknowledge, close, or comment on them |
| | `history_get` / `trend_get` | Query raw historical metric data or aggregated trends for capacity planning |
| | `dashboard_*` / `map_*` | Create, update, and manage dashboards and network maps |
| Data Collection | `host_*` / `hostgroup_*` | Manage monitored hosts, host groups, and their membership |
| | `item_*` / `trigger_*` / `graph_*` | Manage data collection items, trigger expressions, and graphs |
| | `template_*` / `templategroup_*` | Manage monitoring templates and template groups |
| | `maintenance_*` | Schedule and manage maintenance periods to suppress alerts |
| | `discoveryrule_*` / `*prototype_*` | Low-level discovery rules and item/trigger/graph prototypes |
| | `configuration_export` / `configuration_import` | Export or import full Zabbix configuration (YAML, XML, JSON) |
| Alerts | `action_*` / `mediatype_*` | Configure automated alert actions and notification channels (email, Slack, webhook, ...) |
| | `alert_get` | Query the history of sent notifications and remote commands |
| | `script_execute` | Execute global scripts on hosts (SSH, IPMI, custom commands) |
| Users & Access | `user_*` / `usergroup_*` / `role_*` | Manage user accounts, permission groups, and RBAC roles |
| | `token_*` | Create, list, and manage API tokens for service accounts |
| Administration | `proxy_*` / `proxygroup_*` | Manage Zabbix proxies and proxy groups for distributed monitoring |
| | `auditlog_get` | Query the audit trail of all configuration changes and logins |
| | `settings_get` / `settings_update` | View and modify global Zabbix server settings |
| Generic | `zabbix_raw_api_call` | Call any Zabbix API method directly by name — use for methods not covered above |
| | `health_check` | Verify MCP server status and connectivity to all configured Zabbix servers |
## Common Parameters (get methods)

| Parameter | Description |
|---|---|
| `server` | Target Zabbix server name — defaults to the first configured server when omitted |
| `output` | Fields to return: `extend` returns all fields, or pass comma-separated field names (e.g. `hostid,name,status`) |
| `filter` | Exact match filter as JSON object — e.g. `{"status": 0}` returns only enabled objects |
| `search` | Pattern match filter as JSON object — e.g. `{"name": "web"}` finds all objects containing "web" in the name |
| `limit` | Maximum number of results to return — use to avoid large responses |
| `sortfield` / `sortorder` | Sort results by a field name in `ASC` (ascending) or `DESC` (descending) order |
| `countOutput` | Return the count of matching objects instead of the actual data — useful for statistics |
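These parameters compose freely in a single call. A sketch of a `host_get`-style parameter object combining them (parameter semantics follow the Zabbix API; the tool passes them through):

```python
# Combine the common parameters into one host.get-style request:
# enabled hosts whose name contains "web", first 10, sorted by name.
params = {
    "output": ["hostid", "name", "status"],  # only these fields
    "filter": {"status": 0},                 # exact match: 0 = enabled
    "search": {"name": "web"},               # substring match
    "limit": 10,
    "sortfield": "name",
    "sortorder": "ASC",
}

# countOutput replaces the result rows with a single count,
# so it takes the place of "output".
count_params = {k: v for k, v in params.items() if k != "output"}
count_params["countOutput"] = True

print(len(params))                  # 6
print(count_params["countOutput"])  # True
```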
## Configuration Reference

All available options with detailed descriptions are in `config.example.toml`. Quick overview:

| Section | Parameter | Description |
|---|---|---|
| `[server]` | `transport` | `"http"` (recommended) or `"stdio"` |
| | `host` | HTTP bind address — `127.0.0.1` (localhost only) or `0.0.0.0` (all interfaces) |
| | `port` | HTTP port (default: 8080) |
| | `log_level` | `debug`, `info`, `warning`, or `error` |
| | `log_file` | Path to log file (in addition to stderr) |
| | `auth_token` | Bearer token for HTTP authentication (supports `${ENV_VAR}`) |
| | `rate_limit` | Max Zabbix API calls per minute — protects Zabbix from flooding (default: 60, set to 0 to disable) |
| `[zabbix.<name>]` | `url` | Zabbix frontend URL |
| | `api_token` | API token (supports `${ENV_VAR}`) |
| | `read_only` | Block write operations (default: true) |
| | `verify_ssl` | Verify TLS certificates (default: true) |
## Zabbix Compatibility
| Zabbix Version | Status | Notes |
|---|---|---|
| 7.0 LTS, 7.2, 7.4 | Fully supported | All API methods match this version — complete feature coverage |
| 6.0 LTS, 6.2, 6.4 | Supported | Core methods work, some newer API methods (e.g. proxy groups, MFA) may return errors |
| 5.0 LTS, 5.2, 5.4 | Basic support | Core monitoring and data collection work, newer features unavailable |
The server uses the standard Zabbix JSON-RPC API. Methods not available in your Zabbix version will return an error from the Zabbix server — the MCP server itself does not enforce version checks.
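If you want to guard against this on the client side, a version check is cheap: Zabbix reports its version via `apiinfo.version`. A sketch of gating a 7.0-only feature such as proxy groups (threshold taken from the compatibility table above):

```python
def parse_version(v: str) -> tuple[int, ...]:
    """'7.0.5' -> (7, 0, 5), as returned by Zabbix's apiinfo.version."""
    return tuple(int(part) for part in v.split("."))

def supports_proxy_groups(version: str) -> bool:
    # proxy groups first appeared in Zabbix 7.0
    return parse_version(version) >= (7, 0)

print(supports_proxy_groups("7.0.5"))   # True
print(supports_proxy_groups("6.4.12"))  # False
```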
## Development

```shell
git clone https://github.com/initMAX/zabbix-mcp-server.git
cd zabbix-mcp-server
python3 -m venv .venv
source .venv/bin/activate
pip install -e .
```
Test with MCP Inspector:

```shell
npx @modelcontextprotocol/inspector zabbix-mcp-server --config config.toml
```
## License
AGPL-3.0 - see LICENSE.