This exercise demonstrates the evolution of code quality improvement tools, from automated libraries to AI agents. Students will learn how to use different approaches to improve code quality and understand when to use each tool.
- Understand code quality metrics and their importance
- Use automated tools (Black, Ruff, MyPy, etc.) to fix code issues
- Compare results before and after applying fixes
- Integrate AI agents for complex code improvements
- Develop a hybrid workflow combining automation and AI
- Run comprehensive code quality analysis
- Document all errors found
- Understand different types of issues
- Apply automated tools to fix simple issues
- Observe changes made by tools
- Calculate improvement percentage
- Compare results before and after
- Analyze what was fixed automatically
- Identify remaining complex issues
- Use AI agents for complex errors
- Apply and validate AI suggestions
- Complete the improvement process
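The "calculate improvement percentage" step above can be done directly from the issue counts; a minimal Python sketch (the function name is mine, not part of the benchmark tool):

```python
def improvement_pct(initial_issues: int, remaining_issues: int) -> float:
    """Share of the initial issues that were resolved, as a percentage."""
    if initial_issues == 0:
        return 0.0  # nothing to improve
    return (initial_issues - remaining_issues) / initial_issues * 100

# Example: 550 initial issues reduced to 150 after the auto-fixer
print(round(improvement_pct(550, 150), 1))  # → 72.7
```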
| Tool | Purpose | Phase |
|---|---|---|
| Ruff | Fast linting and basic fixes | 1, 2 |
| Black | Code formatting | 2 |
| MyPy | Type checking | 1, 4 |
| Bandit | Security analysis | 1 |
| Coverage | Test coverage analysis | 1 |
| AI Agent | Complex logic and architecture | 4 |
| Benchmark Tool | Automated comparison and metrics | All phases |
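To make the division of labor concrete, here is a small hypothetical Python snippet annotated with the kind of finding each tool would report (the rule codes are illustrative):

```python
import os  # Ruff: F401, `os` imported but unused (auto-fixable)

PASSWORD = "hunter2"  # Bandit: B105, possible hardcoded password


def total(items):  # MyPy (strict): missing parameter and return annotations
    result=0  # Black: would insert spaces around `=`
    for item in items:
        result += item
    return result


print(total([1, 2, 3]))  # → 6
```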
- `EXERCISE_SCRIPT.md` - Step-by-step guide with detailed instructions
- `EXERCISE_README.md` - Complete exercise documentation, overview, and objectives
```bash
# Install dependencies
make install

# Verify setup
make help

# Phase 1: Record initial benchmark
make benchmark-initial

# Phase 2: Automated fixes
make fix

# Phase 3: Record post-fix benchmark and compare
make benchmark-post-autofix
make benchmark-compare

# Phase 4: AI integration (manual)
# Use the AI agent to fix remaining complex issues
make benchmark-post-ai
make benchmark-report
```

Expected results:
- Initial errors: ~500-600 issues
- After auto-fixer: ~100-200 issues (60-80% improvement)
- After AI: ~50-100 issues (75-95% total improvement)
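The `benchmark-compare` step essentially diffs two recorded issue counts; a hedged sketch of how that could work (the report file format and key names here are assumptions, not the actual tool's schema):

```python
import json


def compare_benchmarks(before_path: str, after_path: str) -> dict:
    """Compare two benchmark snapshots and report the improvement."""
    with open(before_path) as f:
        before = json.load(f)["total_issues"]
    with open(after_path) as f:
        after = json.load(f)["total_issues"]
    return {
        "before": before,
        "after": after,
        "improvement_pct": round((before - after) / before * 100, 1),
    }
```

Usage would look like `compare_benchmarks("reports/initial.json", "reports/post_autofix.json")`, with the paths depending on where the benchmark tool writes its snapshots.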
- Reduced the initial error count by at least 70%
- Understand what each tool does
- Can explain the difference between the auto-fixer and AI
- Applied at least 3 changes suggested by the AI
- Code still works after all changes
- Know code quality tools
- Can automate repetitive tasks
- Know when to use AI vs automated tools
- Have a workflow to improve code
```bash
make help     # See all available commands
make install  # Install dependencies
make test     # Run tests
make lint     # Check code quality
make format   # Format code
make analyze  # Complete quality analysis
make fix      # Run the auto-fixer
make clean    # Clean temporary files
```

```bash
make benchmark-initial       # Record initial state
make benchmark-post-autofix  # Record after auto-fixer
make benchmark-post-ai       # Record after AI
make benchmark-compare       # Compare all stages
make benchmark-report        # Generate full report
make benchmark-full          # Run complete workflow
```

Automated tools:
- Advantages: fast, consistent, no human error
- Limitations: only simple and style errors
- When to use: routine maintenance, formatting
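For reference, the `fix` target could chain the automated tools in order; a sketch of a possible recipe (the exercise's real Makefile may differ):

```make
# Hypothetical recipe; the actual Makefile in the exercise may differ
fix:
	ruff check --fix .   # apply Ruff's safe auto-fixes
	black .              # reformat with Black
```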
AI agents:
- Advantages: understand context, can fix complex logic
- Limitations: can make mistakes, require review
- When to use: complex refactoring, optimization
Hybrid workflow:
- Auto-fixer for the basics
- AI for complex issues
- Human review for validation
- What percentage of errors were fixed automatically?
- What types of errors were most difficult to fix?
- In what cases do you prefer automated tools vs AI?
- How would you integrate this workflow into your daily work?
- What additional tools would you like to explore?
- What was most surprising about the exercise?
- Which tool did you find most useful?
- How would you change your development process after this?
- SonarQube for enterprise analysis
- CodeClimate for maintainability metrics
- Pre-commit hooks for automation
- GitHub Actions for CI/CD
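As a starting point for the pre-commit idea, a minimal `.pre-commit-config.yaml` might look like this (the `rev` pins are placeholders; use current releases):

```yaml
repos:
  - repo: https://github.com/astral-sh/ruff-pre-commit
    rev: v0.6.9        # placeholder; pin to a current release
    hooks:
      - id: ruff
        args: [--fix]
  - repo: https://github.com/psf/black
    rev: 24.8.0        # placeholder; pin to a current release
    hooks:
      - id: black
```

Install the hooks with `pre-commit install` so they run automatically on every commit.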
After completing this exercise, students will have:
- Practical experience with code quality tools
- Understanding of when to use different approaches
- A complete workflow for improving code quality
- Skills to integrate AI into their development process
- Knowledge of professional development practices
- Worksheet completion
- Class participation in discussions
- Reflection questions
- Final project: Set up quality pipeline for real project
- Presentation of results and learnings
- Peer review of improvements made
This exercise provides a comprehensive introduction to modern code quality practices, combining traditional automated tools with cutting-edge AI assistance.