# 🎯 SlopWatch - AI Accountability MCP Server

Stop AI from lying about what it implemented! Track what AI claims versus what it actually does.
## 🚀 What's New in v2.7.0

- ✨ **Ultra-minimal responses** - 90% less verbose output
- 🔄 **Combined tool** - a single call instead of two separate tools
- ⚡ **Seamless workflow** - perfect for AI pair programming
- 🎯 **Cursor MCP compatible** - works seamlessly with Cursor IDE
## 🤔 Why SlopWatch?

Ever had AI say "I've added error handling to your function" when it actually didn't? Or claim it "implemented user authentication" when it just added a comment?

**SlopWatch catches AI lies in real time.**
## ⚡ Quick Start

### 🎯 Option 1: Smithery (easiest - one-click install)

1. Visit smithery.ai/server/@JoodasCode/slopwatch
2. Click "Install to Cursor" or "Install to Claude"
3. Done! ✨

Smithery handles hosting, authentication, and updates automatically.
### 🔧 Option 2: NPM Direct (manual setup)

For Cursor IDE, add to your MCP configuration:

```json
{
  "mcpServers": {
    "slopwatch": {
      "command": "npx",
      "args": ["slopwatch-mcp-server"]
    }
  }
}
```
Manual Cursor setup:

1. Open Cursor Settings (`Cmd+Shift+J` on Mac, `Ctrl+Shift+J` on Windows)
2. Go to Features → Model Context Protocol
3. Click "Add New MCP Server"
4. Configure:
   - Name: `SlopWatch`
   - Type: `stdio`
   - Command: `npx slopwatch-mcp-server`
For Claude Desktop, add to your `claude_desktop_config.json`:

```json
{
  "mcpServers": {
    "slopwatch": {
      "command": "npx",
      "args": ["slopwatch-mcp-server"]
    }
  }
}
```
Global NPM install:

```bash
npm install -g slopwatch-mcp-server
```
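If you installed the package globally, your MCP entry can invoke the binary directly instead of going through `npx`. A minimal sketch, assuming the package exposes a `slopwatch-mcp-server` executable on your PATH:

```json
{
  "mcpServers": {
    "slopwatch": {
      "command": "slopwatch-mcp-server"
    }
  }
}
```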
## 🎮 How to Use

### Method 1: Combined tool (recommended ⭐)

Perfect for when AI implements something and you want to verify it:

```javascript
// AI implements code, then verifies in ONE call:
slopwatch_claim_and_verify({
  claim: "Add input validation to calculateSum function",
  originalFileContents: {
    "utils/math.js": "function calculateSum(a, b) { return a + b; }"
  },
  updatedFileContents: {
    "utils/math.js": "function calculateSum(a, b) {\n  if (typeof a !== 'number' || typeof b !== 'number') {\n    throw new Error('Invalid input');\n  }\n  return a + b;\n}"
  }
});
// Response: "✅ PASSED (87%)"
```
### Method 2: Traditional two-step process

For when you want to register a claim before implementing:

```javascript
// Step 1: Register claim
slopwatch_claim({
  claim: "Add error handling to user login",
  fileContents: {
    "auth.js": "function login(user) { return authenticate(user); }"
  }
});
// Response: "Claim ID: abc123"

// Step 2: Verify after implementation
slopwatch_verify({
  claimId: "abc123",
  updatedFileContents: {
    "auth.js": "function login(user) {\n  try {\n    return authenticate(user);\n  } catch (error) {\n    throw new Error('Login failed');\n  }\n}"
  }
});
// Response: "✅ PASSED (92%)"
```
## 🛠️ Available Tools

| Tool | Description | Example Response |
|---|---|---|
| `slopwatch_claim_and_verify` | ⭐ **Recommended** - claim and verify in one call | ✅ PASSED (87%) |
| `slopwatch_status` | Get your accountability stats | Accuracy: 95% (19/20) |
| `slopwatch_setup_rules` | Generate `.cursorrules` for automatic enforcement | Minimal rules content |
## 🎯 Cursor IDE Integration

SlopWatch is designed specifically for Cursor IDE and AI pair programming.

### Automatic detection

- Detects when AI claims to implement features
- Automatically suggests verification
- Integrates seamlessly with Cursor's Composer

### Smart workflow

1. AI: "I'll add error handling to your function"
2. SlopWatch: automatically tracks the claim
3. AI: implements the code
4. SlopWatch: verifies the implementation matches the claim
5. Result: ✅ PASSED (92%) or ❌ FAILED (23%)

Perfect for:

- **Code reviews** - verify AI actually implemented what it claimed
- **Pair programming** - real-time accountability during development
- **Learning** - understand what AI actually does versus what it says
- **Quality assurance** - catch implementation gaps before they become bugs
## 💡 Real-World Examples

### Example 1: API endpoint enhancement

```javascript
// AI says: "I'll add rate limiting to your API endpoint"
slopwatch_claim_and_verify({
  claim: "Add rate limiting middleware to /api/users endpoint",
  originalFileContents: {
    "routes/users.js": "app.get('/api/users', (req, res) => { ... })"
  },
  updatedFileContents: {
    "routes/users.js": "const rateLimit = require('express-rate-limit');\nconst limiter = rateLimit({ windowMs: 15*60*1000, max: 100 });\napp.get('/api/users', limiter, (req, res) => { ... })"
  }
});
// Result: ✅ PASSED (94%)
```
### Example 2: React component update

```javascript
// AI claims: "Added responsive design with CSS Grid"
slopwatch_claim_and_verify({
  claim: "Make UserCard component responsive using CSS Grid",
  originalFileContents: {
    "components/UserCard.jsx": "const UserCard = () => <div className=\"user-card\">...</div>"
  },
  updatedFileContents: {
    "components/UserCard.jsx": "const UserCard = () => <div className=\"user-card grid-responsive\">...</div>",
    "styles/UserCard.css": ".grid-responsive { display: grid; grid-template-columns: repeat(auto-fit, minmax(300px, 1fr)); gap: 1rem; }"
  }
});
// Result: ✅ PASSED (89%)
```
## 📊 Accountability Stats

Track your AI's honesty over time:

```javascript
slopwatch_status();
// Returns: "Accuracy: 95% (19/20)"
```

- **Accuracy score**: percentage of claims that were actually implemented
- **Claim count**: total number of implementation claims tracked
- **Success rate**: how often AI delivers what it promises
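The accuracy figure is just the ratio of verified claims to total claims. A minimal sketch of that arithmetic; the `claims` shape here is hypothetical, not SlopWatch's internal data model:

```javascript
// Illustrative only: computing a status line like "Accuracy: 95% (19/20)"
// from a log of tracked claims (field names are made up for this sketch).
const claims = [
  { id: "abc123", verified: true },
  { id: "def456", verified: true },
  { id: "ghi789", verified: false },
  { id: "jkl012", verified: true },
];

const passed = claims.filter((c) => c.verified).length;
const accuracy = Math.round((passed / claims.length) * 100);
const statusLine = `Accuracy: ${accuracy}% (${passed}/${claims.length})`;
console.log(statusLine); // → "Accuracy: 75% (3/4)"
```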
## 🔧 Advanced Configuration

### Auto-enforcement with .cursorrules

Generate automatic accountability rules:

```javascript
slopwatch_setup_rules();
```

This creates a `.cursorrules` file that automatically enforces SlopWatch verification for all AI implementations.
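The exact rules text comes from the tool itself, but as a rough illustration only (not the actual generated output), an enforcement rule in `.cursorrules` might read something like:

```text
# Hypothetical illustration - run slopwatch_setup_rules() for the real content
After implementing any code change, call slopwatch_claim_and_verify with the
original and updated file contents before reporting the work as done.
```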
### Custom verification

SlopWatch analyzes:

- **File changes** - did the files actually get modified?
- **Code content** - does the new code match the claim?
- **Implementation patterns** - are the right patterns/libraries used?
- **Keyword matching** - does the code contain relevant keywords?
## 🏆 Why Choose SlopWatch?

### For developers

- **Catch AI lies** before they become bugs
- **Learn faster** by seeing what AI actually does
- **Improve code quality** through automatic verification
- **Save time** with streamlined accountability

### For teams

- **Standardize AI interactions** across team members
- **Track AI reliability** over time
- **Reduce debugging** from AI implementation gaps
- **Build trust** in AI-assisted development

### For Cursor users

- **Native integration** with Cursor's Composer
- **Seamless workflow** - no context switching
- **Real-time feedback** during development
- **Ultra-minimal responses** - no verbose output
## 🎯 Getting Started with Cursor

1. **Install SlopWatch** using one of the methods above
2. **Open Cursor** and start a new chat with Composer
3. **Ask AI to implement something**: "Add input validation to my function"
4. **Watch SlopWatch work**: it automatically tracks and verifies the claim
5. **Get instant feedback**: ✅ PASSED (87%) or ❌ FAILED (23%)
## 🚨 Troubleshooting

Common issues:

- **Tools not showing**: restart Cursor after installation
- **Verification failed**: check whether the files were actually modified
- **NPM errors**: try `npm cache clean --force` and reinstall
Debug mode: enable detailed logging by setting `DEBUG=true` in your environment.
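When the server is launched by an MCP client rather than from a shell, one way to set this is the `env` field of the server entry, which Cursor and Claude Desktop MCP configs support; a sketch:

```json
{
  "mcpServers": {
    "slopwatch": {
      "command": "npx",
      "args": ["slopwatch-mcp-server"],
      "env": { "DEBUG": "true" }
    }
  }
}
```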
## 🗺️ Roadmap
- Visual dashboard for accountability metrics
- Integration with Git for commit verification
- Team analytics for multi-developer projects
- Custom verification rules for specific frameworks
- IDE extensions for other editors
## 🤝 Contributing

We welcome contributions! Please see our Contributing Guide for details.

## 📄 License

MIT License - see LICENSE for details.

## 🆘 Support

- **GitHub Issues**: report bugs or request features
- **Documentation**: full docs and examples
- **Community**: join the discussion

Made with ❤️ for the Cursor community.

**Stop AI from lying about what it implemented. Start using SlopWatch today!**