Spider MCP - Web Search Crawler Service
A web search MCP service based on pure crawler technology, built with Node.js.
Features
- No Official API Required: Built entirely on crawler technology (Puppeteer-based scraping), with no dependency on third-party official APIs
- Intelligent Search: Supports both Bing web and news search
- News Search: Built-in news search with time filtering
- High Performance: Supports batch webpage scraping
- Health Monitoring: Complete health checks and metrics monitoring
- Structured Logging: Uses Winston for structured logs
- Anti-Detection: Supports User-Agent rotation and other anti-bot measures
- Smart URL Cleaning: Automatically strips promotional parameters while preserving essential information
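As an illustration of the URL-cleaning behavior above, here is a minimal sketch using Node's built-in URL API. The parameter list and helper name are assumptions for illustration, not the service's actual implementation:

```javascript
// Minimal sketch of tracking-parameter removal (hypothetical helper;
// the actual cleaning rules live in the service itself).
const TRACKING_PARAMS = ['utm_source', 'utm_medium', 'utm_campaign', 'fbclid', 'gclid'];

function cleanUrl(rawUrl) {
  const url = new URL(rawUrl);
  for (const param of TRACKING_PARAMS) {
    url.searchParams.delete(param); // drop promotional parameters
  }
  return url.toString(); // essential params (e.g. ids) are preserved
}

console.log(cleanUrl('https://example.com/page?id=42&utm_source=news&gclid=abc'));
// → https://example.com/page?id=42
```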
Tech Stack
- Node.js (>= 18.0.0)
- Express.js - Web framework
- Puppeteer - Browser automation
- Cheerio - HTML parsing
- Axios - HTTP client
- Winston - Logging
- @modelcontextprotocol/sdk - MCP protocol support
Quick Start
1. Install dependencies
npm install
or use pnpm
pnpm install
2. Download Puppeteer browser
npx puppeteer browsers install chrome
3. Environment configuration
Copy and configure the environment variables file:
cp .env.example .env
Edit the .env file according to your needs.
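The real variable names are defined in .env.example; purely as an illustrative sketch of the kind of settings such a file typically holds (these keys are assumptions, not the project's actual ones, apart from NODE_ENV):

```ini
# Hypothetical example: consult .env.example for the real keys
PORT=3000
NODE_ENV=development
LOG_LEVEL=info
```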
4. Start the service
Development mode:
npm run dev
Production mode:
npm start
The service will start at http://localhost:3000.
MCP Tools
web_search
Unified search tool supporting both web and news search:
- Web Search: searchType: "web"
- News Search: searchType: "news", with time filtering
- Note: searchType is a required parameter and must be explicitly specified
Usage Examples:
# Web search
Use web_search tool to search "Node.js tutorial" with searchType set to web, return 10 results
# News search
Use web_search tool to search "tech news" with searchType set to news, return 5 results from past 24 hours
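At the protocol level, such a prompt translates into an MCP tools/call request. A hypothetical payload for the web search above (argument names other than searchType, such as query and limit, are assumptions about this server's schema):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "web_search",
    "arguments": {
      "query": "Node.js tutorial",
      "searchType": "web",
      "limit": 10
    }
  }
}
```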
Other Tools
- get_webpage_content: Get webpage content and convert it to a specified format
- get_webpage_source: Get the raw HTML source code of a webpage
- batch_webpage_scrape: Batch scrape multiple webpages
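In the simplest case, the format conversion performed by get_webpage_content amounts to reducing HTML to plain text. A dependency-free sketch (the service itself parses HTML with Cheerio; this helper is hypothetical):

```javascript
// Minimal sketch of HTML-to-text conversion (hypothetical helper;
// the service parses HTML with Cheerio rather than regexes).
function htmlToText(html) {
  return html
    .replace(/<script[\s\S]*?<\/script>/gi, '') // drop script bodies
    .replace(/<style[\s\S]*?<\/style>/gi, '')   // drop style bodies
    .replace(/<[^>]+>/g, ' ')                   // strip remaining tags
    .replace(/\s+/g, ' ')                       // collapse whitespace
    .trim();
}

console.log(htmlToText('<html><body><h1>Hello</h1><p>World</p><script>x()</script></body></html>'));
// → Hello World
```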
MCP Configuration
Chatbox Configuration
Create mcp-config.json file in Chatbox:
{
"mcpServers": {
"spider-mcp": {
"command": "node",
"args": ["src/mcp/server.js"],
"env": {
"NODE_ENV": "production"
},
"description": "Spider MCP - Web search and webpage scraping tools",
"capabilities": {
"tools": {}
}
}
}
}
Other MCP Clients
{
"mcpServers": {
"spider-mcp": {
"command": "node",
"args": ["path/to/spider-mcp/src/mcp/server.js"]
}
}
}
Important Notes
- Anti-bot Measures: This service uses various techniques to avoid detection, but you must still comply with each site's robots.txt and terms of use
- Rate Limiting: Keep request frequency reasonable to avoid putting load on target websites
- Legal Compliance: Make sure your use of this service complies with local laws and website terms of use
- Resource Consumption: Puppeteer launches a Chrome browser; watch memory and CPU usage
- URL Cleaning: Promotional parameters are stripped automatically, which may break links that depend on them
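One of the anti-bot measures noted above, User-Agent rotation, can be sketched as follows. The UA strings and helper are illustrative, not the service's actual code:

```javascript
// Sketch of User-Agent rotation (illustrative UA strings and helper name,
// not the service's actual implementation).
const USER_AGENTS = [
  'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36',
  'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36',
  'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36',
];

function randomUserAgent() {
  return USER_AGENTS[Math.floor(Math.random() * USER_AGENTS.length)];
}

// With Puppeteer, the rotated UA would be applied per page, e.g.:
// await page.setUserAgent(randomUserAgent());
```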
Development
Project Structure
spider-mcp/
├── src/
│   ├── index.js              # Main entry file
│   ├── mcp/
│   │   └── server.js         # MCP server
│   ├── routes/               # Route definitions
│   │   ├── search.js         # Search routes
│   │   └── health.js         # Health check routes
│   ├── services/             # Business logic
│   │   └── searchService.js  # Search service
│   └── utils/                # Utility functions
│       └── logger.js         # Logging utility
├── logs/                     # Log files directory
├── tests/                    # Test files
├── package.json              # Project configuration
├── .env.example              # Environment variables example
├── mcp-config.json           # MCP configuration example
└── README.md                 # Project documentation
License
MIT License
Contributing
Issues and Pull Requests are welcome!