# Exercise: Auto Reviewer
**Level:** Advanced | **Time:** 45-60 minutes | **Skills:** Custom commands, hooks, automation, MCP concepts
## Objective
Build an automated code review system using Claude Code's advanced features.
## What You'll Build
A set of tools that automatically:

1. Review code changes before commits
2. Check for common issues
3. Generate review reports
4. Enforce team standards
## Setup
1. Navigate to this exercise directory.
2. Start Claude Code.
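A minimal version of both steps, assuming the exercise directory is named `03-auto-reviewer` (see the project structure below) and that the `claude` CLI is installed and on your PATH:

```bash
# Assumed location; adjust the path to match where this exercise lives in your checkout.
cd 03-auto-reviewer

# Start an interactive Claude Code session in this directory.
claude
```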
## Tasks

### Task 1: Create Review Commands
Create slash commands for different types of reviews.
#### 1.1: Security Review Command

Create `.claude/commands/security.md`:

Ask Claude to write a command that checks for:

- Hardcoded secrets or API keys
- SQL injection vulnerabilities
- XSS risks
- Unsafe `eval()` usage
- Exposed debug endpoints
Hint: "Create a slash command for security code review that checks for common vulnerabilities"
#### 1.2: Performance Review Command

Create `.claude/commands/performance.md`:

Ask Claude to write a command that checks for:

- N+1 query patterns
- Large loop operations
- Unnecessary re-renders (React)
- Missing database indexes
- Synchronous operations that could be async
#### 1.3: Style Review Command

Create `.claude/commands/style.md`:

Ask Claude to write a command that checks:

- Consistent naming conventions
- Function length and complexity
- Missing error handling
- Code duplication
- Documentation gaps
### Task 2: Create Pre-Commit Hook
Create a hook that runs before commits to catch issues.
#### 2.1: Create the Hook Script

Ask Claude to create `scripts/pre-commit-review.sh`:

```bash
# This script should:
# 1. Get the list of staged files
# 2. Run quick checks (linting, formatting)
# 3. Block commit if critical issues found
# 4. Allow commit with warnings for minor issues
```
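A rough standalone sketch of such a script, not the definitive implementation; the `npx eslint` and `npx prettier` calls are assumptions (swap in whatever linter and formatter your project actually uses), and filenames containing spaces are not handled:

```bash
#!/usr/bin/env bash
set -euo pipefail

# Collect staged JavaScript files (added, copied, or modified).
staged=$(git diff --cached --name-only --diff-filter=ACM -- '*.js')

# Nothing relevant staged: allow the commit.
if [ -z "$staged" ]; then
  exit 0
fi

critical=0

# Formatting check: warn, but do not block the commit.
if ! npx prettier --check $staged >/dev/null 2>&1; then
  echo "warning: formatting issues found (run 'npx prettier --write')"
fi

# Lint check: treat lint errors as critical.
if ! npx eslint $staged; then
  echo "error: lint errors in staged files"
  critical=1
fi

# Naive secret scan: block anything that looks like a hardcoded credential.
if echo "$staged" | xargs grep -nEi '(api[_-]?key|secret|password)[[:space:]]*[:=]' 2>/dev/null; then
  echo "error: possible hardcoded secret in staged files"
  critical=1
fi

# Exit 0 allows the commit (warnings only); non-zero blocks it.
exit "$critical"
```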
#### 2.2: Configure the Hook

Add the hook configuration to `.claude/settings.json`, and ask Claude to complete it so the review script runs before commits.
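A hedged example of what the finished configuration might look like, assuming the Claude Code hooks schema with a `PreToolUse` matcher on the Bash tool; verify the exact event names and fields against the current Claude Code hooks documentation:

```json
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Bash",
        "hooks": [
          {
            "type": "command",
            "command": "bash scripts/pre-commit-review.sh"
          }
        ]
      }
    ]
  }
}
```

With this shape, the script runs before each Bash tool call; inside the script you would check whether the pending command is a `git commit` (the hook receives the tool call as JSON on stdin) and exit with an error status to block it (check the docs for the exact exit-code convention).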
### Task 3: Create a Review Report Generator
Create a command that generates a markdown report of all findings.
#### 3.1: Report Command

Create `.claude/commands/full-review.md`:

The command should:

1. Run security, performance, and style reviews
2. Compile findings into a structured report
3. Prioritize by severity (critical, warning, info)
4. Include file locations and line numbers
Hint: "Create a command that runs all review types and generates a markdown report"
### Task 4: Test Your System

#### 4.1: Create Test Files
Ask Claude to create sample files with intentional issues:
```
samples/
├── vulnerable.js   # Security issues
├── slow.js         # Performance issues
└── messy.js        # Style issues
```
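For example, `vulnerable.js` could deliberately contain patterns like the ones below (hypothetical code, intentionally unsafe) so the `/security` command has something concrete to flag; `slow.js` and `messy.js` can seed performance and style problems the same way:

```javascript
// samples/vulnerable.js - intentionally insecure test fixture, never ship this.

// Hardcoded credential (should be flagged by /security).
const API_KEY = "sk-test-1234567890abcdef";

// SQL injection: user input concatenated straight into a query string.
function findUser(db, username) {
  return db.query("SELECT * FROM users WHERE name = '" + username + "'");
}

// Unsafe eval of user-controlled input.
function runUserExpression(expr) {
  return eval(expr);
}

module.exports = { API_KEY, findUser, runUserExpression };
```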
#### 4.2: Run Reviews
Test each command:
```
> /security samples/vulnerable.js
> /performance samples/slow.js
> /style samples/messy.js
> /full-review samples/
```
#### 4.3: Verify Hook
Make a change and try to commit. The hook should catch issues.
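One way to exercise it, assuming the Task 2.2 hook intercepts commit commands issued from the session: stage a deliberately bad change yourself, then ask Claude to commit it.

```bash
# Stage an obvious problem for the hook to catch.
echo 'const PASSWORD = "hunter2";' >> samples/vulnerable.js
git add samples/vulnerable.js
```

Then, in the Claude Code session, ask for a commit (for example, `> commit these changes`) and confirm that the pre-commit review runs, reports the hardcoded secret, and blocks the commit.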
### Task 5: Advanced - MCP Integration (Optional)

If you want to extend further, create an MCP server that:

- Connects to your CI/CD system
- Posts review comments on PRs
- Tracks review metrics over time
## Project Structure
By the end, you should have:
```
03-auto-reviewer/
├── .claude/
│   ├── settings.json
│   └── commands/
│       ├── security.md
│       ├── performance.md
│       ├── style.md
│       └── full-review.md
├── scripts/
│   └── pre-commit-review.sh
├── samples/
│   ├── vulnerable.js
│   ├── slow.js
│   └── messy.js
└── README.md
```
## Success Criteria

- [ ] `/security` command detects security issues
- [ ] `/performance` command detects performance issues
- [ ] `/style` command detects style issues
- [ ] `/full-review` generates a comprehensive report
- [ ] Pre-commit hook blocks commits with critical issues
- [ ] Sample files demonstrate each issue type
## Bonus Challenges

- Add a `/fix` command that auto-fixes simple issues
- Create a scoring system (A-F grade for code quality)
- Add git integration to review only changed files
- Create a dashboard command showing historical trends
## Reflection Questions
- How would this system scale to a large codebase?
- What false positives might occur? How would you handle them?
- How could this integrate with existing CI/CD pipelines?
- What other review types would be valuable?
## Tips
- Start simple and iterate
- Test each component before moving to the next
- Use Claude's exploration to understand your sample files
- Don't try to catch everything - focus on high-value checks