An AI platform that combines multi-source knowledge retrieval with LLM-powered content generation. Acts as an intelligent assistant capable of answering questions, retrieving accurate information from diverse data sources, and generating context-aware content.
Built with FastAPI for APIs, Streamlit for UI, and LangChain for managing advanced LLM workflows, featuring integrated RAG pipelines for document-aware interactions.
Key Technologies: FastAPI • Streamlit • LangChain • OpenAI GPT-3.5 • Ollama Llama2 • Vector Databases
Use Cases: Multi-source chatbots, document-based Q&A, enterprise knowledge assistants, AI-driven content generation, educational and research tools
| Feature | Implementation | Purpose |
|---|---|---|
| LLM-based Chatbot | FastAPI APIs + Streamlit UI | Conversational interface with contextual understanding |
| RAG Pipeline | Vector databases + LangChain | Accurate document-based Q&A with source context |
| Multi-source Agents | Wikipedia + JSON/Text/Web/PDF | Pulls information from multiple data sources |
| Content Generation | LLM APIs (OpenAI, Llama2) | Generates high-quality content and summaries |
| Scalable Architecture | FastAPI backend + modular design | Flexible for enterprise or product integration |
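The RAG row above hinges on retrieval: embed the documents, rank them against the query, and prepend the best match to the prompt as context. A dependency-free toy sketch of that step, where bag-of-words cosine similarity stands in for real embeddings (the project itself uses LangChain with a vector database):

```python
import math
from collections import Counter

def embed(text):
    """Toy 'embedding': a sparse term-frequency vector."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query, docs, k=1):
    """Return the k documents most similar to the query."""
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

docs = [
    "Solar panels convert sunlight into electricity.",
    "Autumn leaves change color due to chlorophyll breakdown.",
]
# Retrieved context gets stuffed into the LLM prompt:
context = retrieve("How do solar panels work?", docs)[0]
prompt = f"Answer using this context:\n{context}\n\nQuestion: How do solar panels work?"
```

A production pipeline swaps `embed` for a real embedding model and `retrieve` for a vector-database similarity search, but the flow is the same.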
Professional web interface for generating essays and poems with AI
```
├── api/
│   ├── app.py             # FastAPI server with essay/poem endpoints
│   └── client.py          # Streamlit web interface
├── agents/
│   └── agents.ipynb       # Multi-source RAG agents with Wikipedia integration
├── rag/
│   ├── simplerag.ipynb    # Basic RAG pipeline with vector database
│   ├── chain.ipynb        # Advanced RAG chain implementations
│   └── *.pdf, *.json      # Sample documents and data sources
├── chatbot/
│   └── app.py             # Experimental chatbot interface
├── requirements.txt       # Python dependencies
└── .env                   # API keys (create this file)
```
Key Components:
- API: Production-ready FastAPI endpoints with web interface
- Agents: Advanced AI agents with multi-source data retrieval
- RAG: Retrieval-Augmented Generation experiments and implementations
- Chatbot: Interactive conversational AI prototypes
Software Requirements:
- Python 3.8+ (`python --version`)
- Git (optional, for cloning)
API Keys Required:
- OpenAI API Key - ~$0.002 per essay, $5 free credit
- LangChain API Key - Free for monitoring
- Ollama - Free local AI model
1. Clone and Install

   ```bash
   git clone <repository-url>
   cd <repository-directory>
   pip install -r requirements.txt
   ```

2. Setup Environment

   Create a `.env` file:

   ```
   LANGCHAIN_API_KEY=your-langchain-key
   OPENAI_API_KEY=your-openai-key
   LANGCHAIN_PROJECT=langchain_project
   LANGCHAIN_TRACING_V2=true
   ```

3. Install Ollama (for poems)

   ```bash
   # Download from https://ollama.ai/
   ollama pull llama2
   ```

4. Run the Application

   ```bash
   # Terminal 1: Start API server
   uvicorn api.app:app --reload

   # Terminal 2: Start web interface
   streamlit run api/client.py
   ```

5. Access the App

   - Web Interface: http://localhost:8501
   - API Documentation: http://localhost:8000/docs
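The `.env` file from the setup above is typically loaded with a helper such as python-dotenv. For illustration, a minimal dependency-free parser with the same `KEY=VALUE` semantics (`parse_env` and `load_env` are sketches, not part of the repo):

```python
import os

def parse_env(lines):
    """Parse KEY=VALUE lines, skipping blanks and '#' comments."""
    pairs = {}
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        pairs[key.strip()] = value.strip()
    return pairs

def load_env(path=".env"):
    """Load variables from a .env file into the process environment."""
    with open(path) as f:
        for key, value in parse_env(f).items():
            os.environ.setdefault(key, value)
```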
Visit http://localhost:8501 and use the intuitive forms to generate content.
Essay Generation:

```bash
curl -X POST "http://localhost:8000/essay/invoke" \
  -H "Content-Type: application/json" \
  -d '{"topic": "Renewable Energy", "word_count": 150}'
```

Poem Generation:

```bash
curl -X POST "http://localhost:8000/poem/invoke" \
  -H "Content-Type: application/json" \
  -d '{"topic": "Autumn Leaves", "audience": "young adult"}'
```

| Issue | Solution |
|---|---|
| API Key Error | Verify `.env` file format, check OpenAI credits, restart the server |
| Ollama Not Found | Run `ollama serve` and `ollama pull llama2` |
| Port in Use | Use different ports: `--port 8001` (API) or `--server.port 8502` (Streamlit) |
| Module Not Found | Run `pip install -r requirements.txt` |
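The curl commands above can also be issued from Python with the standard library; a minimal client sketch, with payload shapes copied from those examples (`build_request` and `generate` are illustrative helpers, not part of the repo):

```python
import json
from urllib import request

API_URL = "http://localhost:8000"  # the uvicorn server from the setup steps

def build_request(endpoint, payload):
    """Build a POST request mirroring the curl examples."""
    return request.Request(
        f"{API_URL}/{endpoint}/invoke",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def generate(endpoint, payload):
    """Send the request and decode the JSON response (API server must be running)."""
    with request.urlopen(build_request(endpoint, payload)) as resp:
        return json.loads(resp.read())

# Example (requires the API server to be running):
# essay = generate("essay", {"topic": "Renewable Energy", "word_count": 150})
```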
Debug Steps:
- Check terminal logs for detailed errors
- Verify you're in the correct directory
- Test API endpoints at http://localhost:8000/docs
Restart Checklist:
- Python installed (`python --version`)
- API keys valid (check OpenAI billing)
- Ollama running (`ollama serve`)
- Dependencies installed (`pip install -r requirements.txt`)
Extension Ideas:
- Additional AI models (Claude, Gemini)
- User authentication and content saving
- Multiple writing styles and formats
- Batch processing capabilities
MIT License - see LICENSE file for details.