Selvage: AI-Powered Code Review Automation Tool
한국어 (Korean)
A modern CLI tool that helps AI analyze Git diffs to improve code quality, find bugs, and identify security vulnerabilities.
Selvage: Code reviews with an edge!
No more waiting for reviews! AI instantly analyzes your code changes to suggest quality improvements and catch bugs. Smart context analysis (AST-based) keeps reviews accurate and cost-effective, multi-turn processing handles large codebases, and everything integrates seamlessly with your Git workflow.
Table of Contents
- Key Features
- Quick Start
- Practical Usage Guide
  - MCP Mode Usage
  - CLI Usage
- Smart Context Analysis and Supported AI Models
  - Smart Context Analysis
  - Supported AI Models
- Review Result Storage Format
- Troubleshooting
- Contributing
- License
- Change Log
- Contact and Community
Key Features
- Multiple AI Model Support: Leverage the latest LLM models, including OpenAI GPT-5, Anthropic Claude Sonnet-4, Google Gemini, and more
- Git Workflow Integration: Analyze staged changes, unstaged changes, and changes between specific commits or branches
- Optimized Context Analysis: Tree-sitter based AST analysis automatically extracts the smallest code blocks containing changed lines, along with their dependency statements, providing contextually optimized information for each situation
- Automatic Multi-turn Processing: Automatic prompt splitting when context limits are exceeded, supporting stable large-scale code reviews (Large Context Mode now auto-triggers once total tokens exceed 200k, even without provider errors)
- MCP Mode Support: Register as MCP mode in Cursor, Claude Code, etc., and request code reviews through natural language like "Review current changes"
- Claude Code Plugin: Install via the marketplace with a single command; includes a dedicated /review skill and selvage-reviewer agent for seamless integration
- Agent-Delegated Review (get_review_context): Returns structured review context (diff + Smart Context + system prompt) so host agents (Claude Code, Cursor, Antigravity, etc.) can perform code reviews with their own LLM, with no API key required
- Open Source: Freely use and modify under the Apache-2.0 License
Quick Start
Common Setup
1. Installation
Recommended Method (using uv)
# Install uv (run once)
curl -LsSf https://astral.sh/uv/install.sh | sh
# Install Selvage
uv tool install selvage
Alternative Method (using pipx)
# Install pipx (macOS)
brew install pipx
# Install Selvage
pipx install selvage
Traditional Method (pip)
# Warning: may cause an externally-managed-environment error on some systems
pip install selvage
macOS/Linux users: If you encounter errors with pip install, please use the uv or pipx methods above.
2. API Key Setup
Get an API key from OpenRouter and set it up:
export OPENROUTER_API_KEY="your_openrouter_api_key_here"
MCP Mode Usage (Recommended)
Register as MCP mode in Cursor, Claude Code, etc., to request code reviews through natural language.
Cursor Integration
Register in Cursor's MCP configuration file (path may vary depending on user environment):
Common path: ~/.cursor/mcp.json
// Method 1: Using environment variables (if already set)
{
"mcpServers": {
"selvage": {
"command": "uvx",
"args": ["selvage", "mcp"]
}
}
}
// Method 2: Direct specification
{
"mcpServers": {
"selvage": {
"command": "uvx",
"args": ["selvage", "mcp"],
"env": {
"OPENROUTER_API_KEY": "your_openrouter_api_key_here"
}
}
}
}
Claude Code Integration
Method A: Plugin via Marketplace (Recommended)
Install the Selvage plugin from the marketplace to get the dedicated /review skill and selvage-reviewer agent:
# Step 1: Add Selvage marketplace
/plugin marketplace add selvage-lab/selvage
# Step 2: Install the plugin
/plugin install selvage@selvage-lab-selvage
After installation, use the /review skill directly:
/review # Review unstaged changes
/review staged # Review staged changes
/review branch main # Review against main branch
/review commit abc1234 # Review from specific commit
No API key required! The plugin uses get_review_context to leverage Claude Code's own LLM for code review, so no external API key is needed.
Method B: MCP Server Registration
# Method 1: Using environment variables (if already set)
claude mcp add selvage -- uvx selvage mcp
# Method 2: Direct specification
claude mcp add selvage -e OPENROUTER_API_KEY=your_openrouter_api_key_here -- uvx selvage mcp
Usage
After restarting your IDE, request reviews from your Coding Assistant:
Please review current changes using selvage mcp
Review changes between current branch and main branch using claude-sonnet-4-thinking with selvage mcp
Done! Selvage will analyze the code, review it, and deliver the results through your Coding Assistant.
CLI Mode Usage
For direct terminal usage:
selvage review --model claude-sonnet-4-thinking
More Options: CLI Usage | Practical Usage Guide
Practical Usage Guide
MCP Mode Usage
Basic Usage
# Basic review request
Please review current changes using selvage mcp
# Review staged changes
Review staged work using gpt-5-high with selvage mcp
# Review against specific branch
Review current branch against main branch using selvage mcp
# Review with automatic model selection
Review current branch against main branch using selvage mcp, automatically selecting appropriate model
Agent-Delegated Review (No API Key Required)
The get_review_context tool returns structured review context so host agents can perform code reviews with their own LLM โ no Selvage API key needed.
# Request agent-delegated review context
Get review context for current changes using selvage mcp, then review the code
# Agent-delegated review for staged changes
Get review context for staged changes using selvage mcp
# Agent-delegated review against branch
Get review context comparing current branch to main using selvage mcp
How it works: Selvage extracts the diff + AST-based Smart Context + system prompt and returns it as structured context. The host agent (Claude Code, Cursor, Antigravity, etc.) then performs the review directly with its own LLM, without needing an external API key.
Advanced Workflows
Multi-model Comparison Review
Review staged work using both gpt-5-high and claude-sonnet-4-thinking with selvage mcp, then compare the results
Stepwise Code Improvement Workflow
1. Review current changes using claude-sonnet-4-thinking with selvage mcp
2. Critically evaluate review feedback for validity against current codebase and set priorities
3. Apply improvements sequentially based on established priorities
CI/CD Integration Scenarios
# Code quality verification before PR creation
Review changes against main branch using selvage mcp for code quality verification before PR creation
# Final check before deployment
Perform comprehensive review of staged changes using selvage mcp for final check before deployment
CLI Usage
Use Selvage directly from the terminal. While MCP mode is recommended, the CLI is useful for scripts and CI/CD.
Configuring Selvage
# View all settings
selvage config list
# Set default model
selvage config model <model_name>
# Set default language
selvage config language <language_name>
Code Review
selvage review [OPTIONS]
Key Options
- --repo-path <path>: Git repository path (default: current directory)
- --staged: Review only staged changes
- --target-commit <commit_id>: Review changes from the specified commit to HEAD (e.g., abc1234)
- --target-branch <branch_name>: Review changes between the current branch and the specified branch (e.g., main)
- --model <model_name>: AI model to use (e.g., claude-sonnet-4-thinking)
- --open-ui: Automatically launch the web UI after the review completes
- --no-print: Don't print review results to the terminal (terminal output is enabled by default)
- --skip-cache: Perform a new review without using the cache
Usage Examples
# Review current working directory changes
selvage review
# Final check before commit
selvage review --staged
# Review specific files only
git add specific_files.py && selvage review --staged
# Code review before sending PR
selvage review --target-branch develop
# Quick and economical review for simple changes
selvage review --model gemini-2.5-flash
# Review and then view detailed results in web UI
selvage review --target-branch main --open-ui
Git Workflow Integration
Team Collaboration Scenarios
# Code quality verification before Pull Request creation
selvage review --target-branch main --model claude-sonnet-4-thinking
# Pre-analysis of changes for code reviewers
selvage review --target-branch develop --model claude-sonnet-4-thinking
# Comprehensive review of all changes after specific commit
selvage review --target-commit a1b2c3d --model claude-sonnet-4-thinking
Development Stage Quality Management
# Quick feedback during development (before WIP commit)
selvage review --model gemini-2.5-flash
# Final verification of staged changes (before commit)
selvage review --staged --model claude-sonnet-4-thinking
# Emergency review before hotfix deployment
selvage review --target-branch main --model claude-sonnet-4-thinking
Large-scale Code Review
# Large codebases are automatically handled
selvage review --model claude-sonnet-4  # Usage is the same; multi-turn processing is applied automatically when needed
Selvage automatically handles large code changes that exceed an LLM's context limits. Once usage reaches roughly 200k tokens (measured with tiktoken), Large Context Mode starts automatically, so just wait for the review to complete.
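For intuition about when this mode kicks in, the sketch below counts prompt tokens with tiktoken and compares the total against a 200k threshold. It is a minimal illustration under stated assumptions, not Selvage's internal code: the function name and the choice of the cl100k_base encoding are made up for the example.
# Illustrative only: a tiktoken-based size check in the spirit of the
# 200k-token trigger described above (not Selvage's actual implementation).
import tiktoken

LARGE_CONTEXT_THRESHOLD = 200_000  # rough trigger point described above

def exceeds_large_context_threshold(prompt_text: str) -> bool:
    # cl100k_base is a widely available encoding; Selvage's real choice
    # of encoding and exact threshold may differ.
    encoding = tiktoken.get_encoding("cl100k_base")
    return len(encoding.encode(prompt_text)) > LARGE_CONTEXT_THRESHOLD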
Cost Optimization
# Use economical models for small changes
selvage review --model gemini-2.5-flash
Viewing Results
Review results are printed to the terminal and automatically saved to files at the same time.
For additional review management and re-examination, you can use the web UI:
# Manage all saved review results in web UI
selvage view
# Run UI on different port
selvage view --port 8502
Key UI Features:
- List of all saved review results
- Markdown-formatted display
- Structured JSON result view
Smart Context Analysis and Supported AI Models
Smart Context Analysis
Selvage uses Tree-sitter based AST analysis to precisely extract only the code blocks related to changed lines, ensuring both cost efficiency and review quality simultaneously.
How Smart Context Works
- Precise Extraction: Extracts only the minimal function/class blocks containing changed lines, plus related dependencies (imports, etc.); see the sketch after this list
- Cost Optimization: Dramatically reduces token usage by sending only necessary context instead of entire files
- Quality Assurance: Maintains high review accuracy through AST-based precise code structure understanding
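To make the extraction step concrete, the sketch below finds the smallest function or class block that encloses a changed line. It uses Python's standard ast module purely for illustration: Selvage's actual implementation is Tree-sitter based (and therefore multi-language), and the function name here is invented.
# Conceptual illustration using the stdlib ast module (Selvage itself uses Tree-sitter).
import ast

def smallest_enclosing_block(source: str, changed_line: int):
    """Return the source of the tightest def/class containing changed_line, or None."""
    best = None
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
            if node.lineno <= changed_line <= node.end_lineno:
                # Keep the smallest block that still contains the changed line.
                if best is None or (node.end_lineno - node.lineno) < (best.end_lineno - best.lineno):
                    best = node
    return ast.get_source_segment(source, best) if best else None
In Selvage, the extracted block plus its related import statements, rather than the whole file, forms the review context.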
Smart Context Automatic Application
Selvage analyzes file size and change scope to automatically select the most efficient review method:
- Small changes → fast and accurate analysis with Smart Context
- Small files → complete context understanding with full-file analysis
- Partial edits in large files → focused analysis of related code with Smart Context
- Large changes in big files → comprehensive review with full-file analysis
Automatic Optimization: The optimal analysis method for each situation is applied automatically, without any manual configuration; a rough sketch of the decision follows.
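The selection above boils down to a two-factor decision on file size and change scope. The sketch below is purely illustrative: the function name and the threshold values are invented for the example and do not reflect Selvage's actual heuristics.
# Hypothetical decision helper; cutoffs are placeholders, not Selvage's tuning.
def choose_analysis_method(file_line_count: int, changed_line_count: int) -> str:
    SMALL_FILE_LINES = 300      # placeholder cutoff for a "small" file
    LARGE_CHANGE_RATIO = 0.5    # placeholder cutoff for a "large" change

    if file_line_count <= SMALL_FILE_LINES:
        return "full_file"      # small file: send it whole for complete context
    if changed_line_count / file_line_count >= LARGE_CHANGE_RATIO:
        return "full_file"      # sweeping change in a big file: full-file review
    return "smart_context"      # localized edit in a big file: Smart Context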
Smart Context Supported Languages
- Python, JavaScript, TypeScript, Java, Kotlin
Universal Context Extraction Support
- Major Programming Languages: Go, Ruby, PHP, C#, C/C++, Rust, Swift, Dart, etc.
The universal context extraction method provides excellent code review quality for major programming languages. Smart Context language support is continuously expanding.
Supported AI Models
Manage all of the models below with just one OpenRouter API key!
OpenAI Models (OpenRouter or OpenAI API Key)
- gpt-5.2-codex: Recommended - Most capable agentic coding model with stronger reasoning (400K context)
Anthropic Models (OpenRouter or Anthropic API Key)
- claude-opus-4.6: Recommended - Frontier reasoning model with extended thinking (1M context)
- claude-sonnet-4.5: Hybrid reasoning model with extended thinking for advanced coding (1M context)
Google Models (OpenRouter or Google API Key)
- gemini-3-pro: Recommended - Most advanced reasoning model (1M+ tokens)
- gemini-3-flash: High speed, high value model for agentic workflows (1M+ tokens)
OpenRouter Provided Models (OpenRouter API Key Only)
- minimax-m2.5 (MiniMax): Recommended - State-of-the-art open-source model for coding (SWE-bench 80.2%, 200K context)
- glm-5 (Zhipu AI): Flagship 745B MoE model for complex systems (200K context)
- qwen3-coder (Qwen): Coding-specialized model (262K context)
- kimi-k2.5 (Moonshot AI): Large context processing model (262K context)
- deepseek-r1-0528 (DeepSeek): Reasoning-specialized model (163K context)
- deepseek-v3-0324 (DeepSeek): Advanced conversation model (163K context)
Free tier models available: qwen3-coder-free, kimi-k2.5-free, deepseek-v3-0324-free, deepseek-r1-0528-free
Review Result Storage Format
Review results are saved as structured files simultaneously with terminal output:
- Markdown Format: A clean structure that's easy for humans to read, including a summary, an issue list, and improvement suggestions
- JSON Format: For programmatic processing and integration with other tools (a hypothetical consumer sketch follows below)
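Since the JSON output is meant for programmatic processing, a downstream script might consume it roughly as sketched below. This is a guess at usage, not documentation of the real schema: the file name and the issues/severity/message keys are hypothetical placeholders, so inspect an actual saved result before relying on specific fields.
# Hypothetical consumer of a saved review result; the path and the
# "issues"/"severity"/"message" keys are placeholders, not a documented schema.
import json
from pathlib import Path

result_path = Path("review_result.json")  # placeholder path to a saved result
review = json.loads(result_path.read_text(encoding="utf-8"))

for issue in review.get("issues", []):
    print(f"[{issue.get('severity', 'unknown')}] {issue.get('message', '')}")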
Advanced Settings (For Developers/Contributors)
Development Version Installation
Using uv (recommended)
git clone https://github.com/selvage-lab/selvage.git
cd selvage
# Install all development dependencies automatically
uv sync --dev --extra e2e
# Run
uv run selvage --help
Using pip
git clone https://github.com/selvage-lab/selvage.git
cd selvage
pip install -e .
Development Environment Installation
Using uv (recommended)
# Development dependencies only
uv sync --dev
# E2E test environment included
uv sync --dev --extra e2e
# Run tests
uv run pytest tests/
Using pip
# Install with development dependencies (pytest, build, etc.)
pip install -e .[dev]
# Install with development + E2E test environment (testcontainers, docker, etc.)
pip install -e .[dev,e2e]
Individual Provider API Key Usage
You can also set individual provider API keys instead of OpenRouter:
export OPENAI_API_KEY="your_openai_api_key_here"
export ANTHROPIC_API_KEY="your_anthropic_api_key_here"
export GEMINI_API_KEY="your_gemini_api_key_here"
Development and Debugging Settings
# Set default model to use (for advanced users)
selvage config model claude-sonnet-4-thinking
# Check configuration
selvage config list
# Enable debug mode (for troubleshooting and development)
selvage config debug-mode on
Troubleshooting
Installation Errors
externally-managed-environment Error (macOS/Linux)
# Solution 1: Use uv (recommended)
curl -LsSf https://astral.sh/uv/install.sh | sh
uv tool install selvage
# Solution 2: Use pipx
brew install pipx # macOS
pipx install selvage
# Solution 3: Use virtual environment
python3 -m venv ~/.selvage-env
source ~/.selvage-env/bin/activate
pip install selvage
API Key Errors
# Check environment variable
echo $OPENROUTER_API_KEY
# Permanent setup (Linux/macOS)
echo 'export OPENROUTER_API_KEY="your_key_here"' >> ~/.bashrc
source ~/.bashrc
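# For zsh (the default shell on modern macOS), append the export line to ~/.zshrc and run: source ~/.zshrc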
Model not found Error
# Check available model list
selvage models
# Use correct model name
selvage review --model claude-sonnet-4-thinking
Network Connection Error
# Retry ignoring cache
selvage review --skip-cache
# Check detailed info with debug mode
selvage config debug-mode on
selvage review
Contributing
Selvage is an open-source project and we always welcome your contributions! Bug reports, feature suggestions, documentation improvements, code contributions - any form of contribution is appreciated.
How to Contribute:
- Bug reports or feature suggestions on GitHub Issues
- Code contributions through Pull Requests
- Documentation improvements and translations
Detailed contribution guidelines can be found in CONTRIBUTING.md.
License
Selvage is distributed under the Apache License 2.0. This license permits commercial use, modification, and distribution, with comprehensive patent protection and trademark restrictions included.
Change Log
Check out all version changes and new features of Selvage.
View the Complete Change Log
You can find detailed changes for each version, including new features, bug fixes, and performance improvements.
Contact and Community
- Bug Reports and Feature Requests: GitHub Issues
- Direct Contact: [email protected]
Write better code with Selvage! If this project helped you, please give us a star on GitHub!