🔬 Research Powerpack MCP 🔬
Stop tab-hopping for research. Start getting structured context.
The ultimate research toolkit for your AI coding assistant. It searches the web, mines Reddit, scrapes any URL, and synthesizes everything into perfectly structured context your LLM actually understands.
🧭 Quick Navigation
⚡ Get Started • 🎯 Why Research Powerpack • 🎮 Tools • ⚙️ Configuration • 📚 Examples
research-powerpack-mcp is the research assistant your AI has been missing. Stop asking your LLM to guess about things it doesn't know. This MCP server acts like a senior researcher -- searching the web, mining Reddit discussions, scraping documentation, and synthesizing everything into structured context so your AI can give you answers you can actually trust.
- 🔍 Batch Web Search: 100 keywords in parallel
- 💬 Reddit Mining: real opinions, not marketing
- 🌐 Universal Scraping: JS rendering + geo-targeting
- 🧠 Deep Research: AI synthesis with citations
Here's how it works:
- You: "What's the best database for my use case?"
- AI + Powerpack: Searches Google, mines Reddit threads, scrapes docs, synthesizes findings.
- You: Get an actually informed answer with real community opinions and citations.
- Result: Better decisions, faster. No more juggling 47 browser tabs.
🎯 Why Research Powerpack
Manual research is tedious and error-prone. research-powerpack-mcp replaces that entire workflow with a single integrated pipeline.
| ❌ Without Research Powerpack | ✅ With Research Powerpack |
|---|---|
This isn't just fetching random pages. Research Powerpack builds high-signal, low-noise context with CTR-weighted ranking, smart comment allocation, and intelligent token distribution that prevents massive responses from breaking your LLM's context window.
🚀 Get Started in 60 Seconds
1. Install
npm install research-powerpack-mcp
2. Configure Your MCP Client
| Client | Config File | Docs |
|---|---|---|
| 🖥️ Claude Desktop | claude_desktop_config.json | Setup |
| ⌨️ Claude Code | ~/.claude.json or CLI | Setup |
| 🎯 Cursor | .cursor/mcp.json | Setup |
| 🌊 Windsurf | MCP settings | Setup |
Claude Desktop
Add to your claude_desktop_config.json:
{
"mcpServers": {
"research-powerpack": {
"command": "npx",
"args": ["mcp-researchpowerpack"],
"env": {
"SERPER_API_KEY": "your_key",
"REDDIT_CLIENT_ID": "your_id",
"REDDIT_CLIENT_SECRET": "your_secret",
"SCRAPEDO_API_KEY": "your_key",
"OPENROUTER_API_KEY": "your_key"
}
}
}
}
Or, quick install on macOS (requires jq; writes through a temp file, since piping tee back into the same file can truncate it before jq finishes reading):
CFG="$HOME/Library/Application Support/Claude/claude_desktop_config.json"
jq '.mcpServers["research-powerpack"] = {
  "command": "npx",
  "args": ["research-powerpack-mcp@latest"],
  "disabled": false,
  "env": {
    "OPENROUTER_API_KEY": "xxx",
    "REDDIT_CLIENT_ID": "xxx",
    "REDDIT_CLIENT_SECRET": "xxx",
    "RESEARCH_MODEL": "xxxx",
    "SCRAPEDO_API_KEY": "xxx",
    "SERPER_API_KEY": "xxxx"
  }
}' "$CFG" > "$CFG.tmp" && mv "$CFG.tmp" "$CFG"
Claude Code (CLI)
One command to set everything up:
claude mcp add research-powerpack npx \
--scope user \
--env SERPER_API_KEY=your_key \
--env REDDIT_CLIENT_ID=your_id \
--env REDDIT_CLIENT_SECRET=your_secret \
--env OPENROUTER_API_KEY=your_key \
--env OPENROUTER_BASE_URL=https://openrouter.ai/api/v1 \
--env RESEARCH_MODEL=x-ai/grok-4.1-fast \
-- research-powerpack-mcp
Or manually add to ~/.claude.json:
{
"mcpServers": {
"research-powerpack": {
"command": "npx",
"args": ["mcp-researchpowerpack"],
"env": {
"SERPER_API_KEY": "your_key",
"REDDIT_CLIENT_ID": "your_id",
"REDDIT_CLIENT_SECRET": "your_secret",
"OPENROUTER_API_KEY": "your_key",
"OPENROUTER_BASE_URL": "https://openrouter.ai/api/v1",
"RESEARCH_MODEL": "x-ai/grok-4.1-fast"
}
}
}
}
Cursor/Windsurf
Add to .cursor/mcp.json or equivalent:
{
"mcpServers": {
"research-powerpack": {
"command": "npx",
"args": ["mcp-researchpowerpack"],
"env": {
"SERPER_API_KEY": "your_key"
}
}
}
}
Zero Crash Promise: Missing API keys? No problem. The server always starts. Tools that require missing keys return helpful setup instructions instead of crashing.
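The "always starts" behavior described above can be sketched as a guard that checks for required keys at call time instead of import time, so a missing key yields a setup hint rather than a crash. This is an illustrative sketch only; the function name and key-to-tool mapping are assumptions, not the server's actual internals:

```python
import os

# Hypothetical mapping of tools to the env keys they require
# (taken from the Environment Variables table below).
REQUIRED_KEYS = {
    "web_search": ["SERPER_API_KEY"],
    "search_reddit": ["SERPER_API_KEY"],
    "get_reddit_post": ["REDDIT_CLIENT_ID", "REDDIT_CLIENT_SECRET"],
    "scrape_links": ["SCRAPEDO_API_KEY"],
    "deep_research": ["OPENROUTER_API_KEY"],
}

def call_tool(name: str) -> str:
    """Return setup instructions instead of raising when keys are missing."""
    missing = [k for k in REQUIRED_KEYS.get(name, []) if not os.environ.get(k)]
    if missing:
        return f"{name} is disabled. Set {', '.join(missing)} in your MCP config to enable it."
    return f"{name}: ok"
```

The key design point is that configuration errors surface as tool responses, which the LLM can relay to you verbatim, rather than as a dead server process.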
🎮 Tool Reference
- 🔍 web_search: batch Google search
- 💬 search_reddit: find Reddit discussions
- 📄 get_reddit_post: fetch posts + comments
- 🌐 scrape_links: extract any URL
- 🧠 deep_research: AI synthesis
web_search
Batch web search using Google via Serper API. Search up to 100 keywords in parallel.
| Parameter | Type | Required | Description |
|---|---|---|---|
| keywords | string[] | Yes | Search queries (1-100). Use distinct keywords for maximum coverage. |
Supports Google operators: site:, -exclusion, "exact phrase", filetype:
{
"keywords": [
"best IDE 2025",
"VS Code alternatives",
"Cursor vs Windsurf comparison"
]
}
search_reddit
Search Reddit via Google with automatic site:reddit.com filtering.
| Parameter | Type | Required | Description |
|---|---|---|---|
| queries | string[] | Yes | Search queries (max 10) |
| date_after | string | No | Filter results after date (YYYY-MM-DD) |
Search operators: intitle:keyword, "exact phrase", OR, -exclude
{
"queries": [
"best mechanical keyboard 2025",
"intitle:keyboard recommendation"
],
"date_after": "2024-01-01"
}
get_reddit_post
Fetch Reddit posts with smart comment allocation (1,000 comment budget distributed automatically).
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| urls | string[] | Yes | - | Reddit post URLs (2-50) |
| fetch_comments | boolean | No | true | Whether to fetch comments |
| max_comments | number | No | auto | Override comment allocation |
Smart Allocation:
- 2 posts → ~500 comments/post (deep dive)
- 10 posts → ~100 comments/post
- 50 posts → ~20 comments/post (quick scan)
{
"urls": [
"https://reddit.com/r/programming/comments/abc123/post_title",
"https://reddit.com/r/webdev/comments/def456/another_post"
]
}
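The smart-allocation numbers above are consistent with a fixed 1,000-comment budget split evenly across the requested posts. A minimal sketch of that arithmetic (the function name is illustrative, and the real allocator may weight posts unevenly):

```python
def comments_per_post(num_posts: int, budget: int = 1000) -> int:
    """Evenly split a fixed comment budget across the requested posts."""
    if not 2 <= num_posts <= 50:
        raise ValueError("urls must contain 2-50 Reddit post URLs")
    return budget // num_posts

# 2 posts -> 500 each, 10 posts -> 100 each, 50 posts -> 20 each
```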
scrape_links
Universal URL content extraction with automatic fallback modes.
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| urls | string[] | Yes | - | URLs to scrape (3-50) |
| timeout | number | No | 30 | Timeout per URL (seconds) |
| use_llm | boolean | No | false | Enable AI extraction |
| what_to_extract | string | No | - | Extraction instructions for AI |
Automatic Fallback: Basic → JS rendering → JS + US geo-targeting
{
"urls": ["https://example.com/article1", "https://example.com/article2"],
"use_llm": true,
"what_to_extract": "Extract the main arguments and key statistics"
}
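The fallback chain can be pictured as trying progressively heavier scrape modes until one returns content. This is an illustrative sketch under stated assumptions: the mode names and the injected fetch callback are made up for the example, not the tool's real internals:

```python
from typing import Callable, Optional

# Escalation order: cheapest mode first, geo-targeted JS rendering last.
MODES = ["basic", "js_render", "js_render_us_geo"]

def scrape_with_fallback(url: str,
                         fetch: Callable[[str, str], Optional[str]]) -> Optional[str]:
    """Try each mode in order; return the first non-empty result."""
    for mode in MODES:
        content = fetch(url, mode)
        if content:
            return content
    return None

# Example with a fake fetcher that only succeeds once JS rendering is on:
fake_fetch = lambda url, mode: "rendered html" if mode == "js_render" else None
```

This pattern keeps credit usage low: the 1-credit basic mode handles most pages, and the more expensive modes only run when needed.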
deep_research
AI-powered batch research with web search and citations.
| Parameter | Type | Required | Description |
|---|---|---|---|
| questions | object[] | Yes | Research questions (2-10) |
| questions[].question | string | Yes | The research question |
| questions[].file_attachments | object[] | No | Files to include as context |
Token Allocation: 32,000 tokens distributed across questions:
- 2 questions → 16,000 tokens/question (deep dive)
- 10 questions → 3,200 tokens/question (rapid multi-topic)
{
"questions": [
{ "question": "What are the current best practices for React Server Components in 2025?" },
{ "question": "Compare Bun vs Node.js for production workloads with benchmarks." }
]
}
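As with comment allocation, the token figures above follow from an even split of a fixed 32,000-token budget. A minimal sketch, with an illustrative function name:

```python
def tokens_per_question(num_questions: int, budget: int = 32_000) -> int:
    """Evenly split the deep_research token budget across questions."""
    if not 2 <= num_questions <= 10:
        raise ValueError("questions must contain 2-10 items")
    return budget // num_questions
```

Practically, this means fewer questions per call buys each answer more depth, so batch related topics only when a shallower survey is acceptable.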
⚙️ Environment Variables & Tool Availability
Research Powerpack uses a modular architecture. Tools are automatically enabled based on which API keys you provide:
| ENV Variable | Tools Enabled | Free Tier |
|---|---|---|
| SERPER_API_KEY | web_search, search_reddit | 2,500 queries/mo |
| REDDIT_CLIENT_ID + SECRET | get_reddit_post | Unlimited |
| SCRAPEDO_API_KEY | scrape_links | 1,000 credits/mo |
| OPENROUTER_API_KEY | deep_research + AI in scrape_links | Pay-as-you-go |
| RESEARCH_MODEL | Model for deep_research | Default: perplexity/sonar-deep-research |
| LLM_EXTRACTION_MODEL | Model for AI extraction in scrape_links | Default: openrouter/gpt-oss-120b:nitro |
Configuration Examples
# Search-only mode (just web_search and search_reddit)
SERPER_API_KEY=xxx
# Reddit research mode (search + fetch posts)
SERPER_API_KEY=xxx
REDDIT_CLIENT_ID=xxx
REDDIT_CLIENT_SECRET=xxx
# Full research mode (all 5 tools)
SERPER_API_KEY=xxx
REDDIT_CLIENT_ID=xxx
REDDIT_CLIENT_SECRET=xxx
SCRAPEDO_API_KEY=xxx
OPENROUTER_API_KEY=xxx
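The modular enablement behind these profiles can be sketched as a pure function from configured keys to available tools, following the table above (the function name is illustrative, not part of the server's API):

```python
def enabled_tools(env: dict) -> set:
    """Derive the available tool set from the configured API keys."""
    tools = set()
    if env.get("SERPER_API_KEY"):
        tools |= {"web_search", "search_reddit"}
    if env.get("REDDIT_CLIENT_ID") and env.get("REDDIT_CLIENT_SECRET"):
        tools.add("get_reddit_post")
    if env.get("SCRAPEDO_API_KEY"):
        tools.add("scrape_links")
    if env.get("OPENROUTER_API_KEY"):
        tools.add("deep_research")
    return tools

# Search-only mode enables exactly two tools:
# enabled_tools({"SERPER_API_KEY": "xxx"}) -> {"web_search", "search_reddit"}
```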
Full Power Mode
For the best research experience, configure keys for all four services:
SERPER_API_KEY=your_serper_key # Free: 2,500 queries/month
REDDIT_CLIENT_ID=your_reddit_id # Free: Unlimited
REDDIT_CLIENT_SECRET=your_reddit_secret
SCRAPEDO_API_KEY=your_scrapedo_key # Free: 1,000 credits/month
OPENROUTER_API_KEY=your_openrouter_key # Pay-as-you-go
This unlocks:
- 5 research tools working together
- AI-powered content extraction in scrape_links
- Deep research with web search and citations
- Complete Reddit mining (search → fetch → analyze)
Total setup time: ~10 minutes. Total free tier value: ~$50/month equivalent.
🔑 API Key Setup Guides
🔍 Serper API (Google Search): FREE, 2,500 queries/month
What you get
- Fast Google search results via API
- Enables web_search and search_reddit tools
Setup Steps
- Go to serper.dev
- Click "Get API Key" (top right)
- Sign up with email or Google
- Copy your API key from the dashboard
- Add to your config:
SERPER_API_KEY=your_key_here
Pricing
- Free: 2,500 queries/month
- Paid: $50/month for 50,000 queries
💬 Reddit API: FREE, unlimited
What you get
- Full Reddit API access
- Fetch posts and comments with upvote sorting
- Enables get_reddit_post tool
Setup Steps
- Go to reddit.com/prefs/apps
- Scroll down and click "create another app..."
- Fill in:
  - Name: research-powerpack (or any name)
  - App type: select "script" (important!)
  - Redirect URI: http://localhost:8080
- Click "create app"
- Copy your credentials:
- Client ID: The string under your app name
- Client Secret: The "secret" field
- Add to your config:
REDDIT_CLIENT_ID=your_client_id
REDDIT_CLIENT_SECRET=your_client_secret
🌐 Scrape.do API: FREE, 1,000 credits/month
What you get
- JavaScript rendering support
- Geo-targeting and CAPTCHA handling
- Enables scrape_links tool
Setup Steps
- Go to scrape.do
- Click "Start Free"
- Sign up with email
- Copy your API key from the dashboard
- Add to your config:
SCRAPEDO_API_KEY=your_key_here
Credit Usage
- Basic scrape: 1 credit
- JavaScript rendering: 5 credits
- Geo-targeting: +25 credits
🧠 OpenRouter API: pay-as-you-go
What you get
- Access to 100+ AI models via one API
- Enables deep_research tool
- Enables AI extraction in scrape_links
Setup Steps
- Go to openrouter.ai
- Sign up with Google/GitHub/email
- Go to openrouter.ai/keys
- Click "Create Key"
- Copy the key (starts with sk-or-...)
- Add to your config:
OPENROUTER_API_KEY=sk-or-v1-xxxxx
Recommended Models for Deep Research
# Default (optimized for research)
RESEARCH_MODEL=perplexity/sonar-deep-research
# Fast and capable
RESEARCH_MODEL=x-ai/grok-4.1-fast
# High quality
RESEARCH_MODEL=anthropic/claude-3.5-sonnet
# Budget-friendly
RESEARCH_MODEL=openai/gpt-4o-mini
Recommended Models for AI Extraction (use_llm in scrape_links)
# Default (fast and cost-effective for extraction)
LLM_EXTRACTION_MODEL=openrouter/gpt-oss-120b:nitro
# High quality extraction
LLM_EXTRACTION_MODEL=anthropic/claude-3.5-sonnet
# Budget-friendly
LLM_EXTRACTION_MODEL=openai/gpt-4o-mini
Note: RESEARCH_MODEL and LLM_EXTRACTION_MODEL are independent. You can use a powerful model for deep research and a faster, cheaper model for content extraction, or vice versa.
🔄 Recommended Workflows
Research a Technology Decision
1. web_search → ["React vs Vue 2025", "Next.js vs Nuxt comparison"]
2. search_reddit → ["best frontend framework 2025", "Next.js production experience"]
3. get_reddit_post → [URLs from step 2]
4. scrape_links → [Documentation and blog URLs from step 1]
5. deep_research → [Synthesize findings into specific questions]
Competitive Analysis
1. web_search → ["competitor name review", "competitor vs alternatives"]
2. scrape_links → [Competitor websites, review sites]
3. search_reddit → ["competitor name experience", "switching from competitor"]
4. get_reddit_post → [URLs from step 3]
Debug an Obscure Error
1. web_search → ["exact error message", "error + framework name"]
2. search_reddit → ["error message", "framework + error type"]
3. get_reddit_post → [URLs with solutions]
4. scrape_links → [Stack Overflow answers, GitHub issues]
🛠️ Development
git clone https://github.com/yigitkonur/mcp-researchpowerpack.git
cd mcp-researchpowerpack
npm install
npm run dev
npm run build
npm run typecheck
🔧 Troubleshooting
| Problem | Solution |
|---|---|
| Tool returns "API key not configured" | Add the required ENV variable to your MCP config. The error message tells you exactly which key is missing. |
| Reddit posts returning empty | Check your REDDIT_CLIENT_ID and REDDIT_CLIENT_SECRET. Make sure you created a "script" type app. |
| Scraping fails on JavaScript sites | This is expected for the first attempt. The tool auto-retries with JS rendering. If still failing, the site may be blocking scrapers. |
| Deep research taking too long | Use a faster model like x-ai/grok-4.1-fast instead of perplexity/sonar-deep-research. |
| Token limit errors | Reduce the number of URLs/questions per request. The tool distributes a fixed token budget. |
MIT © Yigit Konur