Problem: GitHub MCP returns snake_case keys but the orchestrator used camelCase
- GitHub MCP returns: `total_files`, `pull_requests`
- Orchestrator was looking for: `totalFiles`, `pullRequests`
- Result: Agents received 0 files and 0 PRs even though the data was fetched

Fix:
- Updated all references in `orchestrator.py` to use snake_case (lines affected: 126-127, 236-238)
- Now correctly shows: 500 files, 1 PR, 1 Issue

Impact: Agents now receive full repository data for analysis
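A defensive lookup that tolerates both spellings would have surfaced this bug as a soft fallback instead of silent zeros. A minimal sketch (the `get_key` helper and the payload shape are illustrative, not the actual orchestrator code):

```python
import re

def get_key(payload: dict, snake_key: str, default=None):
    """Look up a key by its snake_case name, falling back to the
    camelCase spelling so a schema mismatch fails soft instead of
    silently returning the default."""
    if snake_key in payload:
        return payload[snake_key]
    # total_files -> totalFiles
    camel = re.sub(r"_([a-z])", lambda m: m.group(1).upper(), snake_key)
    return payload.get(camel, default)

# GitHub MCP returns snake_case keys:
repo_data = {"total_files": 500, "pull_requests": [{"number": 1}]}

total_files = get_key(repo_data, "total_files", 0)   # -> 500
prs = get_key(repo_data, "pull_requests", [])        # -> [{'number': 1}]
```

Even with the fallback, logging a warning when the camelCase branch is taken is worth adding so the schema drift is visible rather than papered over.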
Problem: CLI showed "N/A" for all problem details
- Orchestrator returns: `result['assessment']['problem']`
- CLI was accessing: `result['problem']`

Fix:
- Updated `cli_runner.py` lines 206-208
- Now extracts from the correct nested structure:

```python
assessment = result.get('assessment', {})
problem = assessment.get('problem', {})
validation = assessment.get('validation', {})
```

Impact: Now displays actual problem details correctly
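The pattern behind the fix generalizes: chaining `dict.get` with empty-dict defaults never raises, so the CLI renders "N/A" only when a field is genuinely absent. A hypothetical helper illustrating the same idea (not the actual `cli_runner.py` code):

```python
def dig(data, *keys, default=None):
    """Walk a nested dict; return `default` as soon as any key is missing."""
    for key in keys:
        if not isinstance(data, dict) or key not in data:
            return default
        data = data[key]
    return data

result = {"assessment": {"problem": {"title": "Refactor the agent loop"}}}

print(dig(result, "assessment", "problem", "title", default="N/A"))
# -> Refactor the agent loop
print(dig(result, "problem", "title", default="N/A"))
# -> N/A  (the old, wrong access path)
```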
Problem: 401 Unauthorized errors when fetching from GitHub
- Missing required headers: `User-Agent`, `X-GitHub-Api-Version`

Fix:
- Added proper headers to all GitHub API requests in `utils/github_mcp.py`
- Now includes:
  - `User-Agent: ActualCode-CLI/1.0`
  - `X-GitHub-Api-Version: 2022-11-28`
  - Proper `Authorization: Bearer {token}`

Impact: GitHub API now works reliably and fetches all data
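Centralizing the required headers in one helper keeps every request consistent. A sketch of the approach (function names and the aiohttp session wiring are illustrative, not the actual `utils/github_mcp.py` code):

```python
import os

GITHUB_API_VERSION = "2022-11-28"

def github_headers(token: str) -> dict:
    """Headers the GitHub REST API expects; a missing User-Agent or
    malformed Authorization header is a common cause of 401/403s."""
    return {
        "Authorization": f"Bearer {token}",
        "User-Agent": "ActualCode-CLI/1.0",
        "X-GitHub-Api-Version": GITHUB_API_VERSION,
        "Accept": "application/vnd.github+json",
    }

async def fetch_repo(session, owner: str, repo: str) -> dict:
    """Fetch repo metadata with an aiohttp ClientSession (assumed caller-provided)."""
    url = f"https://api.github.com/repos/{owner}/{repo}"
    headers = github_headers(os.environ["GITHUB_TOKEN"])
    async with session.get(url, headers=headers) as resp:
        resp.raise_for_status()
        return await resp.json()
```

Building the headers in one place also means the API version bump is a one-line change when GitHub publishes a new one.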
Problem: No way to see detailed agent inputs/outputs for debugging

Fix:
- Added automatic log file generation: `DETAILED_RUN_{timestamp}.txt`
- Includes:
  - Complete repository data fetched from GitHub
  - All 3-loop analysis iterations
  - Generated problem (full details)
  - QA validation results
  - Complete JSON result
- Location: generated in the same directory as `assessment_*.json`
Impact: Full transparency into what each agent receives and produces
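A minimal sketch of such a log writer, assuming the run data is collected into a dict keyed by the sections listed above (the function name and the `run_data` keys are illustrative):

```python
import json
from datetime import datetime
from pathlib import Path

def write_detailed_log(run_data: dict, out_dir: Path = Path(".")) -> Path:
    """Dump everything each agent received and produced into a
    timestamped text file next to the assessment_*.json output."""
    timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
    path = out_dir / f"DETAILED_RUN_{timestamp}.txt"
    with path.open("w") as f:
        for section, payload in run_data.items():
            f.write(f"{'=' * 60}\n{section}\n{'=' * 60}\n")
            # default=str keeps non-serializable values (datetimes etc.) from crashing the dump
            f.write(json.dumps(payload, indent=2, default=str) + "\n\n")
    return path
```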
Problem: `ModuleNotFoundError: No module named 'aiohttp'`

Fix:
- Installed `aiohttp` in the virtual environment
- Created `requirements.txt` for future reference

Impact: GitHub API calls now work
1. GitHub MCP Fetch
✅ 500 files
✅ 1 PR
✅ 1 Issue
✅ 7 commits
✅ README (6869 chars)
2. Pass to Agents
✅ Code Analyzer receives full file list
✅ PR Analyzer receives actual PR data
✅ Issue Analyzer receives real issues
✅ Dependency Analyzer gets tech stack
3. Problem Generation
✅ Based on actual repository (AI-Investigator)
✅ Uses real tech stack (LangChain, Anthropic, Python)
✅ References actual patterns from code
4. QA Validation
✅ Scores properly (71/100 displayed correctly)
✅ Provides specific feedback
✅ Triggers refinement
5. Final Output
✅ Displays all problem details
✅ Shows validation scores
✅ Saves JSON file
✅ Saves detailed log file
- `hackathon_code/utils/github_mcp.py`
  - Added User-Agent headers
  - Added API version headers
  - Better error logging
- `hackathon_code/orchestrator.py`
  - Fixed snake_case data access (`totalFiles` → `total_files`)
  - Fixed snake_case data access (`pullRequests` → `pull_requests`)
- `hackathon_code/cli_runner.py`
  - Fixed result structure access
  - Added comprehensive logging to a TXT file
  - Better error messages
- `hackathon_code/requirements.txt` (created)
  - Listed all dependencies
- Various test files created:
  - `test_github_connection.py` - verifies the API works
  - `test_my_repo.py` - tests with the user's specific repo
  - `verify_setup.sh` - checks all prerequisites
Run the full CLI and check:

```bash
cd /Users/muratcankoylan/ActualCode/hackathon_code
export GITHUB_TOKEN=your_github_token_here
source venv/bin/activate
python cli_runner.py
```

Expected Results:
- ✅ Fetches 500 files from AI-Investigator
- ✅ Agents analyze with real data
- ✅ Problem is about LangChain/AI/Python (not generic To-Do app)
- ✅ QA scores display correctly (not 0/100)
- ✅ Final output shows actual problem details
- ✅ Creates DETAILED_RUN_*.txt with full logs
- Repository fetch: ~10s
- 3-loop analysis: ~180s (3 minutes)
- Problem creation: ~27s
- QA validation + refinement: ~39s
- Total: ~4 minutes
- File tree limited to 500 files (GitHub API constraint)
- PR/Issue data limited to last 20 (configurable)
- Dependency file content truncated to 1000 chars each
- Some repositories may have private files that can't be accessed
All of these are expected GitHub API limitations and are handled gracefully.