# 🦆 MCP Rubber Duck
An MCP (Model Context Protocol) server that acts as a bridge to query multiple OpenAI-compatible LLMs. Just like rubber duck debugging, explain your problems to various AI "ducks" and get different perspectives!
```
     __
   <(o )___
    ( ._> /
     `---'   Quack! Ready to debug!
```
## Features
- 🔌 Universal OpenAI Compatibility: Works with any OpenAI-compatible API endpoint
- 🦆 Multiple Ducks: Configure and query multiple LLM providers simultaneously
- 💬 Conversation Management: Maintain context across multiple messages
- 🏛️ Duck Council: Get responses from all your configured LLMs at once
- 💾 Response Caching: Avoid duplicate API calls with intelligent caching
- 🔄 Automatic Failover: Falls back to other providers if primary fails
- 📊 Health Monitoring: Real-time health checks for all providers
- 🎨 Fun Duck Theme: Rubber duck debugging with personality!
## Supported Providers

Any provider with an OpenAI-compatible API endpoint, including:
- OpenAI (GPT-4, GPT-3.5)
- Google Gemini (Gemini 2.5 Flash, Gemini 2.0 Flash)
- Anthropic (via OpenAI-compatible endpoints)
- Groq (Llama, Mixtral, Gemma)
- Together AI (Llama, Mixtral, and more)
- Perplexity (Online models with web search)
- Anyscale (Open source models)
- Azure OpenAI (Microsoft-hosted OpenAI)
- Ollama (Local models)
- LM Studio (Local models)
- Custom (Any OpenAI-compatible endpoint)
## Quick Start

### For Claude Desktop Users

👉 Complete Claude Desktop setup instructions are in the Claude Desktop Configuration section below.
## Installation

### Prerequisites

- Node.js 20 or higher
- npm or yarn
- At least one API key for a supported provider

### Install from Source

```bash
# Clone the repository
git clone https://github.com/yourusername/mcp-rubber-duck.git
cd mcp-rubber-duck

# Install dependencies
npm install

# Build the project
npm run build

# Run the server
npm start
```
## Configuration

### Method 1: Environment Variables

Create a `.env` file in the project root:

```bash
# OpenAI
OPENAI_API_KEY=sk-...
OPENAI_DEFAULT_MODEL=gpt-4o-mini  # Optional: defaults to gpt-4o-mini

# Google Gemini
GEMINI_API_KEY=...
GEMINI_DEFAULT_MODEL=gemini-2.5-flash  # Optional: defaults to gemini-2.5-flash

# Groq
GROQ_API_KEY=gsk_...
GROQ_DEFAULT_MODEL=llama-3.3-70b-versatile  # Optional: defaults to llama-3.3-70b-versatile

# Ollama (Local)
OLLAMA_BASE_URL=http://localhost:11434/v1  # Optional
OLLAMA_DEFAULT_MODEL=llama3.2  # Optional: defaults to llama3.2

# Together AI
TOGETHER_API_KEY=...

# Custom Provider
CUSTOM_API_KEY=...
CUSTOM_BASE_URL=https://api.example.com/v1
CUSTOM_DEFAULT_MODEL=custom-model  # Optional: defaults to custom-model

# Global Settings
DEFAULT_PROVIDER=openai
DEFAULT_TEMPERATURE=0.7
LOG_LEVEL=info

# Optional: Custom Duck Nicknames (Have fun with these!)
OPENAI_NICKNAME="DUCK-4"           # Optional: defaults to "GPT Duck"
GEMINI_NICKNAME="Duckmini"         # Optional: defaults to "Gemini Duck"
GROQ_NICKNAME="Quackers"           # Optional: defaults to "Groq Duck"
OLLAMA_NICKNAME="Local Quacker"    # Optional: defaults to "Local Duck"
CUSTOM_NICKNAME="My Special Duck"  # Optional: defaults to "Custom Duck"
```
Note: Duck nicknames are completely optional! If you don't set them, you'll get the charming defaults (GPT Duck, Gemini Duck, etc.). If you use a `config.json` file, its nicknames take priority over environment variables.
### Method 2: Configuration File

Create a `config/config.json` file based on the example:

```bash
cp config/config.example.json config/config.json
# Edit config/config.json with your API keys and preferences
```
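For orientation, a minimal `config.json` might look like the sketch below. This is an illustrative guess at the shape — the authoritative schema is `config/config.example.json`, and the field names here are assumptions:

```json
{
  "providers": {
    "openai": {
      "api_key": "sk-...",
      "default_model": "gpt-4o-mini",
      "nickname": "DUCK-4"
    },
    "ollama": {
      "base_url": "http://localhost:11434/v1",
      "default_model": "llama3.2"
    }
  },
  "default_provider": "openai",
  "default_temperature": 0.7
}
```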
## Claude Desktop Configuration

This is the most common setup method for using MCP Rubber Duck with Claude Desktop.

### Step 1: Build the Project

First, ensure the project is built:

```bash
# Clone the repository
git clone https://github.com/yourusername/mcp-rubber-duck.git
cd mcp-rubber-duck

# Install dependencies and build
npm install
npm run build
```
### Step 2: Configure Claude Desktop

Edit your Claude Desktop config file:

- macOS: `~/Library/Application Support/Claude/claude_desktop_config.json`
- Windows: `%APPDATA%\Claude\claude_desktop_config.json`
Add the MCP server configuration:
```json
{
  "mcpServers": {
    "rubber-duck": {
      "command": "node",
      "args": ["/absolute/path/to/mcp-rubber-duck/dist/index.js"],
      "env": {
        "OPENAI_API_KEY": "your-openai-api-key-here",
        "OPENAI_DEFAULT_MODEL": "gpt-4o-mini",
        "GEMINI_API_KEY": "your-gemini-api-key-here",
        "GEMINI_DEFAULT_MODEL": "gemini-2.5-flash",
        "DEFAULT_PROVIDER": "openai",
        "LOG_LEVEL": "info"
      }
    }
  }
}
```
Important: Replace the placeholder API keys with your actual keys:

- `your-openai-api-key-here` → Your OpenAI API key (starts with `sk-`)
- `your-gemini-api-key-here` → Your Gemini API key from Google AI Studio
### Step 3: Restart Claude Desktop

1. Completely quit Claude Desktop (⌘+Q on macOS)
2. Launch Claude Desktop again
3. The MCP server should connect automatically
### Step 4: Test the Integration

Once restarted, test these commands in Claude:

#### Check Duck Health

Use the `list_ducks` tool with `check_health: true`

Should show:

- ✅ GPT Duck (openai) - Healthy
- ✅ Gemini Duck (gemini) - Healthy

#### List Available Models

Use the `list_models` tool

#### Ask a Specific Duck

Use the `ask_duck` tool with `prompt: "What is rubber duck debugging?", provider: "openai"`

#### Compare Multiple Ducks

Use the `compare_ducks` tool with `prompt: "Explain async/await in JavaScript"`

#### Test Specific Models

Use the `ask_duck` tool with `prompt: "Hello", provider: "openai", model: "gpt-4"`
### Troubleshooting Claude Desktop Setup

#### If Tools Don't Appear

- Check API Keys: Ensure your API keys are entered correctly, without typos
- Verify Build: Run `ls -la dist/index.js` to confirm the project built successfully
- Check Logs: Look for errors in Claude Desktop's developer console
- Restart: Fully quit and restart Claude Desktop after config changes
#### Connection Issues

- Config File Path: Double-check you're editing the correct config file path
- JSON Syntax: Validate your JSON syntax (no trailing commas, proper quotes)
- Absolute Paths: Ensure you're using the full absolute path to `dist/index.js`
- File Permissions: Verify Claude Desktop can read the `dist` directory
#### Health Check Failures

If ducks show as unhealthy:
- API Keys: Verify keys are valid and have sufficient credits/quota
- Network: Check internet connection and firewall settings
- Rate Limits: Some providers have strict rate limits for new accounts
## Available Tools

### 🦆 ask_duck

Ask a single question to a specific LLM provider.

```json
{
  "prompt": "What is rubber duck debugging?",
  "provider": "openai",   // Optional, uses default if not specified
  "temperature": 0.7      // Optional
}
```
### 💬 chat_with_duck

Have a conversation with context maintained across messages.

```json
{
  "conversation_id": "debug-session-1",
  "message": "Can you help me debug this code?",
  "provider": "groq"   // Optional, can switch providers mid-conversation
}
```
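Conceptually, conversation context is just an accumulating message list that is resent with each request. A simplified sketch (the real service also handles caching, persistence, and provider switching — those details are assumed here):

```javascript
// Simplified sketch of conversation tracking: each conversation id maps to
// a growing message history that is sent in full on every request.
const conversations = new Map();

function addMessage(conversationId, role, content) {
  if (!conversations.has(conversationId)) {
    conversations.set(conversationId, []);
  }
  const history = conversations.get(conversationId);
  history.push({ role, content });
  return history; // the full history is what a provider would receive
}

addMessage('debug-session-1', 'user', 'Can you help me debug this code?');
addMessage('debug-session-1', 'assistant', 'Of course - show me the code.');
const history = addMessage('debug-session-1', 'user', 'Here it is...');
console.log(history.length); // 3 messages of accumulated context
```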
### 📋 list_ducks

List all configured providers and their health status.

```json
{
  "check_health": true   // Optional, performs fresh health check
}
```
### 📊 list_models

List available models for LLM providers.

```json
{
  "provider": "openai",    // Optional, lists all if not specified
  "fetch_latest": false    // Optional, fetch latest from API vs cached
}
```
### 🔍 compare_ducks

Ask the same question to multiple providers simultaneously.

```json
{
  "prompt": "What's the best programming language?",
  "providers": ["openai", "groq", "ollama"]   // Optional, uses all if not specified
}
```
### 🏛️ duck_council

Get responses from all configured ducks - like a panel discussion!

```json
{
  "prompt": "How should I architect a microservices application?"
}
```
## Usage Examples

### Basic Query

```javascript
// Ask the default duck
await ask_duck({
  prompt: "Explain async/await in JavaScript"
});
```
### Conversation

```javascript
// Start a conversation
await chat_with_duck({
  conversation_id: "learning-session",
  message: "What is TypeScript?"
});

// Continue the conversation
await chat_with_duck({
  conversation_id: "learning-session",
  message: "How does it differ from JavaScript?"
});
```
### Compare Responses

```javascript
// Get different perspectives
await compare_ducks({
  prompt: "What's the best way to handle errors in Node.js?",
  providers: ["openai", "groq", "ollama"]
});
```
### Duck Council

```javascript
// Convene the council for important decisions
await duck_council({
  prompt: "Should I use REST or GraphQL for my API?"
});
```
## Provider-Specific Setup

### Ollama (Local)

```bash
# Install Ollama
curl -fsSL https://ollama.ai/install.sh | sh

# Pull a model
ollama pull llama3.2

# Ollama automatically provides an OpenAI-compatible endpoint at localhost:11434/v1
```
### LM Studio (Local)

1. Download LM Studio from https://lmstudio.ai/
2. Load a model in LM Studio
3. Start the local server (provides an OpenAI-compatible endpoint at `localhost:1234/v1`)
### Google Gemini

1. Get an API key from Google AI Studio
2. Add to environment: `GEMINI_API_KEY=...`
3. Uses Gemini's OpenAI-compatible endpoint (beta)
### Groq

1. Get an API key from https://console.groq.com/keys
2. Add to environment: `GROQ_API_KEY=gsk_...`
### Together AI

1. Get an API key from https://api.together.xyz/
2. Add to environment: `TOGETHER_API_KEY=...`
## Verifying OpenAI Compatibility

To check if a provider is OpenAI-compatible:

1. Look for a `/v1/chat/completions` endpoint in their API docs
2. Check if they support the OpenAI SDK
3. Test with curl:

```bash
curl -X POST "https://api.provider.com/v1/chat/completions" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "model-name",
    "messages": [{"role": "user", "content": "Hello"}]
  }'
```
## Development

### Run in Development Mode

```bash
npm run dev
```

### Run Tests

```bash
npm test
```

### Lint Code

```bash
npm run lint
```

### Type Checking

```bash
npm run typecheck
```
## Docker Support

### Build Docker Image

```bash
docker build -t mcp-rubber-duck .
```

### Run with Docker

```bash
docker run -it \
  -e OPENAI_API_KEY=sk-... \
  -e GROQ_API_KEY=gsk_... \
  mcp-rubber-duck
```
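For longer-running setups, the same container can be managed with Docker Compose. The `docker-compose.yml` below is a hedged sketch mirroring the `docker run` flags above; it is not shipped with the project, and the service name is illustrative:

```yaml
# Illustrative docker-compose.yml (not part of the repository).
services:
  rubber-duck:
    build: .
    environment:
      - OPENAI_API_KEY=${OPENAI_API_KEY}
      - GROQ_API_KEY=${GROQ_API_KEY}
    stdin_open: true   # equivalent to docker run -i
    tty: true          # equivalent to docker run -t
```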
## Architecture

```
mcp-rubber-duck/
├── src/
│   ├── server.ts      # MCP server implementation
│   ├── config/        # Configuration management
│   ├── providers/     # OpenAI client wrapper
│   ├── tools/         # MCP tool implementations
│   ├── services/      # Health, cache, conversations
│   └── utils/         # Logging, ASCII art
├── config/            # Configuration examples
└── tests/             # Test suites
```
## Troubleshooting

### Provider Not Working

- Check that the API key is correctly set
- Verify the endpoint URL is correct
- Run a health check: `list_ducks({ check_health: true })`
- Check logs for detailed error messages
### Connection Issues
- For local providers (Ollama, LM Studio), ensure they're running
- Check firewall settings for local endpoints
- Verify network connectivity to cloud providers
### Rate Limiting

- Enable caching to reduce API calls
- Configure failover to alternate providers
- Adjust the `max_retries` and `timeout` settings
## Contributing

Contributions are welcome! Please:

1. Fork the repository
2. Create a feature branch
3. Add tests for new functionality
4. Submit a pull request
## License

MIT License - see the LICENSE file for details.
## Acknowledgments

- Inspired by the rubber duck debugging method
- Built on the Model Context Protocol (MCP)
- Uses the OpenAI SDK for universal compatibility
## Support

- Report issues: https://github.com/yourusername/mcp-rubber-duck/issues
- Documentation: https://github.com/yourusername/mcp-rubber-duck/wiki
- Discussions: https://github.com/yourusername/mcp-rubber-duck/discussions
🦆 Happy Debugging with your AI Duck Panel! 🦆