A beautiful React-based chat interface for your RAG (Retrieval-Augmented Generation) system powered by ChromaDB, Gemini embeddings, and FastAPI.
- Beautiful Chat UI: Modern, responsive React interface with chat history
- RAG Integration: Uses ChromaDB for vector storage and Gemini for embeddings
- Real-time Chat: Ask questions and get AI-powered answers from your documents
- Document Indexing: Automatically processes .txt files from the my_data/ folder
- Error Handling: Graceful error handling and loading states
- Mobile Responsive: Works great on desktop and mobile devices
- Python 3.9+
- Node.js 18+ (for React development)
- A Gemini API key
# Clone or navigate to the project directory
cd "simple rag"
# Create virtual environment
python3 -m venv .venv
source .venv/bin/activate # On Windows: .venv\Scripts\activate
# Install Python dependencies
pip install -r requirements.txt
# Set up your Gemini API key
echo "GEMINI_API_KEY=your_actual_api_key_here" > .env

Place your .txt files in the my_data/ folder. The system will automatically index them.
Option A: Use the startup script (recommended)
./start.sh

Option B: Manual startup
Terminal 1 (Backend):
source .venv/bin/activate
uvicorn api:app --host 127.0.0.1 --port 8000

Terminal 2 (Frontend):
cd ui
npm install
npm run dev -- --host 127.0.0.1 --port 5173

Visit http://127.0.0.1:5173 in your browser.
simple rag/
├── api.py # FastAPI server with /ask endpoint
├── rag_core.py # Core RAG logic and ChromaDB integration
├── rag.py # CLI interface (original script)
├── requirements.txt # Python dependencies
├── start.sh # Startup script for both servers
├── .env # Environment variables (create this)
├── my_data/ # Your documents go here
│ ├── doc.txt
│ └── mamun.txt
└── ui/ # React frontend
├── src/
│ ├── App.jsx # Main chat component
│ ├── App.css # Beautiful styling
│ └── index.css # Base styles
└── package.json
POST /ask - Ask a question and get an AI-generated answer

{ "question": "Tell me about Samrat", "top_k": 5 }

GET /docs - Interactive API documentation
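As a sketch, the /ask endpoint can also be called from Python using only the standard library. The request body matches the JSON example above; the exact shape of the response depends on api.py, so it is returned here as parsed JSON without further assumptions:

```python
import json
import urllib.request


def build_request_body(question, top_k=5):
    """Request body for POST /ask, matching the JSON example above."""
    return {"question": question, "top_k": top_k}


def ask(question, top_k=5, base_url="http://127.0.0.1:8000"):
    """Send a question to the backend and return the parsed JSON response.

    Requires the FastAPI server to be running on base_url.
    """
    data = json.dumps(build_request_body(question, top_k)).encode("utf-8")
    req = urllib.request.Request(
        f"{base_url}/ask",
        data=data,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

With both servers running, ask("Tell me about Samrat") sends the same request the chat UI does.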
- Chat History: See all your previous questions and answers
- Loading States: Visual feedback while processing
- Error Handling: Clear error messages if something goes wrong
- Responsive Design: Works on all screen sizes
- Clear History: Reset the conversation anytime
source .venv/bin/activate
uvicorn api:app --host 127.0.0.1 --port 8000 --reload

cd ui
npm run dev

Just add .txt files to the my_data/ folder and restart the backend. The system will automatically re-index if the collection is empty.
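Indexing starts by splitting each file into overlapping chunks. The actual chunking lives in rag_core.py; the sketch below is a minimal illustration, and the chunk_size/overlap values are assumptions, not the project's real settings:

```python
def chunk_text(text, chunk_size=500, overlap=50):
    """Split text into overlapping character chunks.

    chunk_size and overlap are illustrative defaults; rag_core.py
    may use different values or token-based splitting.
    """
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks = []
    start = 0
    step = chunk_size - overlap  # advance less than a full chunk to overlap
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += step
    return chunks
```

Overlap keeps sentences that straddle a chunk boundary retrievable from at least one chunk.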
- Document Processing: Text files are chunked and embedded using Gemini's text-embedding-004 model
- Vector Storage: Embeddings are stored in ChromaDB for fast similarity search
- Query Processing: User questions are embedded and matched against stored chunks
- Answer Generation: Relevant context is sent to Gemini 1.5 Flash for answer generation
- Response Display: Answers are displayed in the beautiful chat interface
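The retrieval step above is nearest-neighbor search over embeddings. In this project ChromaDB handles it (and Gemini produces the embeddings), but the core idea can be sketched in plain Python with cosine similarity:

```python
import math


def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)


def top_k_chunks(query_embedding, chunk_embeddings, k=5):
    """Return indices of the k chunks most similar to the query.

    ChromaDB performs the equivalent search (at scale, with indexing)
    in the real pipeline; this is just the idea in miniature.
    """
    ranked = sorted(
        range(len(chunk_embeddings)),
        key=lambda i: cosine_similarity(query_embedding, chunk_embeddings[i]),
        reverse=True,
    )
    return ranked[:k]
```

The text of the top-k chunks is what gets packed into the prompt sent to Gemini 1.5 Flash.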
If you see Node.js version errors: the project pins Vite 4.5.3, which is compatible with Node 18+.
Make sure both servers are running:
- Backend: http://127.0.0.1:8000
- Frontend: http://127.0.0.1:5173
# Backend
pip install -r requirements.txt
# Frontend
cd ui && npm install

This project is open source and available under the MIT License.
Feel free to submit issues and enhancement requests!