Astraeus AI is a modular, production-oriented multi-agent research engine that performs structured research, verification, writing, critique, and quality evaluation using large language models.
Built as a personal AI systems engineering project to explore autonomous agent orchestration, iterative refinement loops, and research quality scoring.
Astraeus AI simulates a structured research workflow using multiple AI agents:
- Research Analyst
- Fact Verification Specialist
- Technical Writer
- AI Output Critic
The system:
- Generates structured research
- Verifies factual consistency
- Produces a formatted technical report
- Critiques and scores the output
- Iteratively refines until quality threshold is met
It exposes:
- 🖥 FastAPI backend (LLM orchestration)
- 🌐 Streamlit dashboard (interactive UI)
- 🔁 Iterative evaluation loop
- 📊 Score extraction and weakness detection
flowchart TD
A([🧑 User Topic]) --> B
B[🔍 Research Agent\nGathers relevant information\nfrom multiple sources]
B --> C
C[✅ Verification Agent\nFact-checks and validates\nresearch accuracy]
C --> D
D[✍️ Writer Agent\nSynthesizes research into\na structured article]
D --> E
E[🎯 Critic Agent\nEvaluates quality, clarity,\nand depth of the output]
E --> F
F{📊 Score Extraction\n+ Weakness Analysis}
F -- Score ≥ Threshold --> G([✅ Final Output])
F -- Score < Threshold --> H[🔄 Iterative Refinement]
H --> D
style A fill:#6366f1,color:#fff,stroke:none
style G fill:#22c55e,color:#fff,stroke:none
style F fill:#f59e0b,color:#fff,stroke:none
style H fill:#ef4444,color:#fff,stroke:none
style B fill:#1e293b,color:#94a3b8,stroke:#334155
style C fill:#1e293b,color:#94a3b8,stroke:#334155
style D fill:#1e293b,color:#94a3b8,stroke:#334155
style E fill:#1e293b,color:#94a3b8,stroke:#334155
- FastAPI REST API
- CrewAI multi-agent orchestration
- Local LLM via Ollama
- Score parsing & weakness detection modules
- Streamlit dashboard
- Glassmorphism UI
- Sectioned report visualization
- Export (Markdown / JSON)
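The Markdown / JSON export step could look like the following sketch, which assumes the report is held as a simple dict of titled sections (the field names `title` and `sections` are assumptions for illustration):

```python
# Hypothetical export helpers: serialize a {title, sections} report dict
# to Markdown or JSON. Field names are assumptions, not the project's schema.
import json

def to_markdown(report: dict) -> str:
    """Render the report as a Markdown document, one H2 per section."""
    lines = [f"# {report['title']}", ""]
    for heading, body in report["sections"].items():
        lines += [f"## {heading}", "", body, ""]
    return "\n".join(lines)

def to_json(report: dict) -> str:
    """Serialize the same report structure as pretty-printed JSON."""
    return json.dumps(report, indent=2)

report = {
    "title": "Quantum Error Correction",
    "sections": {"Overview": "...", "Findings": "..."},
}
md = to_markdown(report)
```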
- Python 3.10+
- CrewAI
- Ollama (Local LLMs)
- FastAPI
- Uvicorn
- Streamlit
- Pydantic
User Topic
↓
Research Agent → Gathers information from multiple sources
↓
Verification Agent → Fact-checks and validates accuracy
↓
Writer Agent → Synthesizes into a structured report
↓
Critic Agent → Evaluates quality, clarity, and depth
↓
Score Extraction + Weakness Analysis
↓
Iterative Refinement → Loops back to Writer if score < threshold
↓
Final Output
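The refinement loop above can be sketched in a few lines of Python. The agent calls are stubbed as plain functions; in Astraeus they would be CrewAI agents backed by a local Ollama model, and the function names and default threshold here are assumptions:

```python
# Minimal sketch of the Writer -> Critic refinement loop.
# critic() and rewrite() stand in for the real CrewAI agents.
from typing import Callable

def refine_until_threshold(
    draft: str,
    critic: Callable[[str], int],        # returns a 1-10 quality score
    rewrite: Callable[[str, int], str],  # Writer agent revising the draft
    threshold: int = 8,
    max_iters: int = 5,
) -> tuple[str, int, int]:
    """Loop the draft through critique and rewriting until it scores
    at or above the threshold, or the iteration budget runs out."""
    score = critic(draft)
    iters = 0
    while score < threshold and iters < max_iters:
        draft = rewrite(draft, score)
        score = critic(draft)
        iters += 1
    return draft, score, iters

# Toy stubs: the critic scores by length, the writer pads the draft.
result, score, iters = refine_until_threshold(
    "seed",
    critic=lambda d: min(10, len(d)),
    rewrite=lambda d, s: d + "!",
)
```

The `max_iters` cap matters in practice: without it, a draft the critic never accepts would loop forever and burn model calls.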
git clone https://github.com/YOUR_USERNAME/astraeus-ai.git
cd astraeus-ai
python -m venv .venv
source .venv/bin/activate   # Linux / Mac
.venv\Scripts\activate      # Windows
pip install -r requirements.txt
Install Ollama and pull a model:
ollama pull llama3.1:8b
From project root: uvicorn app.api:app --reload
API docs: http://127.0.0.1:8000/docs
In a new terminal: streamlit run dashboard/app.py
Access: http://localhost:8501
SCORE: 8
STRENGTHS:
- ...
- ...
WEAKNESSES:
- ...
- ...
IMPROVEMENTS:
- ...
- ...
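Extracting a numeric score and a weakness count from that plain-text verdict could be done with two regexes, as in this sketch (the patterns are assumptions matching the format shown above, not the project's actual parser):

```python
# Hypothetical parser for the critic's SCORE / WEAKNESSES format above.
import re

def parse_critique(text: str) -> tuple[int, int]:
    """Return (score, number_of_weaknesses) from a critic response."""
    m = re.search(r"^SCORE:\s*(\d+)", text, re.MULTILINE)
    score = int(m.group(1)) if m else 0
    # Count bullet lines inside the WEAKNESSES section only.
    weak = re.search(r"WEAKNESSES:\n((?:- .*\n?)*)", text)
    n_weak = len(re.findall(r"^- ", weak.group(1), re.MULTILINE)) if weak else 0
    return score, n_weak

sample = (
    "SCORE: 8\n"
    "STRENGTHS:\n- clear structure\n"
    "WEAKNESSES:\n- shallow sourcing\n- no citations\n"
    "IMPROVEMENTS:\n- add references\n"
)
score, n_weak = parse_critique(sample)
```

Defaulting the score to 0 when the pattern is missing is a deliberately conservative choice: a malformed critique then fails the threshold and triggers another refinement pass rather than being accepted silently.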
- Modular multi-agent architecture
- Iterative quality refinement loop
- Automatic score extraction
- Weakness counting engine
- Research critique system
- Exportable reports
- Production-style backend/frontend separation
- Explore autonomous AI agent collaboration
- Simulate structured research workflows
- Implement measurable output quality evaluation
- Build system-level AI engineering skills
- RAG integration
- Vector database memory
- Multi-model ensemble evaluation
- Advanced metric scoring (precision/recall scoring engine)
- Deployment to cloud infrastructure
- CI/CD integration
👨‍💻 Author
Devdutt S
Personal upskilling project focused on building real-world AI systems architecture.
MIT License