📝 Instructions: Execute each command in order, one by one. Wait for each command to complete before running the next one.
```bash
make help
make install
ls -la
make benchmark-initial
ls -la *report*.txt issues_to_fix*.txt benchmark_results.json
cat issues_to_fix_*.txt
python tools/benchmark_code_quality.py --report
make fix
git status
make test
make benchmark-post-autofix
make benchmark-compare
make benchmark-report
cat issues_to_fix_*.txt
```

💬 Copy and paste this prompt into your AI agent:
Analyze the files in this project and fix the remaining code quality issues
that automated tools cannot fix. Focus on:
1. Type errors (MyPy issues)
2. Logic improvements
3. Performance optimizations
4. Code structure improvements
Explain each change you make and why it improves the code.
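For instance, a typical change the agent might propose for a MyPy type error looks like the sketch below. This is a hypothetical illustration — `find_user` is not a function from this project — showing how a return type annotation gets corrected so callers are forced to handle the missing-key case:

```python
from typing import Optional


# Before (MyPy error: Incompatible return value type, got "None", expected "str"):
# def find_user(users: dict[str, str], key: str) -> str:
#     return users.get(key)

# After: the return type is widened to Optional[str], so MyPy now
# requires callers to handle the None case explicitly.
def find_user(users: dict[str, str], key: str) -> Optional[str]:
    return users.get(key)


print(find_user({"a": "Alice"}, "a"))  # Alice
print(find_user({"a": "Alice"}, "b"))  # None
```

This is exactly the kind of fix automated formatters cannot make, because it changes the function's contract rather than its formatting.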
```bash
make test
make benchmark-post-ai
make benchmark-compare
make benchmark-report
```

🤔 Answer these questions:
- What percentage of errors were fixed automatically?
  - Answer: ___ (check the benchmark report)
- What types of errors were most difficult to fix?
  - Answer: ___
- Which tool did you find most useful?
  - Answer: ___
- How would you change your development process after this exercise?
  - Answer: ___
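The first question can be read directly off the benchmark report, but the arithmetic behind it is simple. A minimal sketch follows; the counts are placeholders, and the real numbers should come from `benchmark_results.json` (whose exact field names may differ from what any helper assumes):

```python
def percent_fixed(initial_errors: int, remaining_errors: int) -> float:
    """Percentage of the initial errors that were resolved."""
    if initial_errors == 0:
        return 100.0  # nothing to fix counts as fully fixed
    return 100.0 * (initial_errors - remaining_errors) / initial_errors


# Example with placeholder counts: 50 initial errors, 10 remaining.
print(percent_fixed(50, 10))  # 80.0
```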
```bash
# Check if dependencies are installed
pip list | grep -E "(black|ruff|mypy|bandit)"

# Reinstall if needed
make install

# See specific errors
make test-verbose

# Verify the code still works
python main.py
```

- Go back to the previous step and make sure it completed successfully
- Check the output of each command for error messages
- Ask for help if you're stuck
- Reduced the initial error count by at least 70%
- Understand what each tool does
- Can explain the difference between an auto-fixer and an AI agent
- Applied at least 3 changes suggested by AI
- Code still works after all changes
- Know code quality tools
- Can automate repetitive tasks
- Know when to use AI vs automated tools
- Have a workflow to improve code
Congratulations on completing the exercise! 🎉
Remember: Code quality is a continuous process, not a destination.