An AI-powered medical assistant capable of analyzing medical images and answering health-related queries.
The chatbot is built on Large Language Models (LLMs) combined with image-understanding models, allowing it to identify possible medical conditions from uploaded images.
It provides explanations with reasoning, supports multi-modal inputs (text + images), and can detect potential issues such as skin cancer or chest anomalies.
- 🧠 Medical Q&A – Ask any medical question and get contextually relevant, evidence-based responses.
- 🩻 Image Analysis – Upload medical images (X-rays, skin lesion photos, etc.) for AI-powered analysis.
- 🔍 Condition Detection – Identifies possible symptoms or abnormalities with explanations.
- 💬 Conversational Interface – Natural, human-like interaction for non-technical users.
- 📊 Explainable Results – Includes reasoning and possible causes for predictions.
```
Medical-Chatbot-LLM/
├── app.py             # Main Streamlit application
├── model/             # Trained model files
├── utils/             # Helper functions for image/text processing
├── Demo Images/       # Example input images for testing
├── requirements.txt   # Python dependencies
└── README.md          # Project documentation
```
1. Upload an image (X-ray, skin lesion photo, etc.) or type a medical query.
2. The LLM processes the text and/or image.
3. If an image is provided, the vision module extracts features and flags possible abnormalities.
4. The chatbot generates a diagnostic suggestion together with its reasoning.
5. The final output is displayed in the interactive chat interface.
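The routing logic behind these steps can be sketched in a few lines. Note that `run_llm`, `run_vision`, `ChatTurn`, and `handle_turn` below are hypothetical stand-ins for illustration, not the project's actual API; in the real app the two stub functions would call the GPT / LLaVA / BLIP backends.

```python
from dataclasses import dataclass
from typing import Optional


def run_llm(prompt: str) -> str:
    """Stand-in for the LLM backend (e.g. an OpenAI GPT call)."""
    return f"LLM answer for: {prompt}"


def run_vision(image_bytes: bytes) -> str:
    """Stand-in for the vision module (e.g. LLaVA / BLIP inference)."""
    return "possible abnormality detected"


@dataclass
class ChatTurn:
    text: Optional[str] = None
    image: Optional[bytes] = None


def handle_turn(turn: ChatTurn) -> str:
    """Route one user turn: an image goes through the vision module first,
    and its findings are folded into the LLM prompt for reasoned output."""
    if turn.image is not None:
        findings = run_vision(turn.image)
        question = turn.text or "Describe the findings."
        prompt = f"Image findings: {findings}\nUser question: {question}"
    elif turn.text is not None:
        prompt = turn.text
    else:
        raise ValueError("A turn needs text, an image, or both.")
    return run_llm(prompt)
```

This mirrors the flow above: text-only turns go straight to the LLM, while image turns are enriched with vision findings before the LLM produces its explanation.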
- Python
- Streamlit – Web app interface
- OpenAI GPT / LLaVA / BLIP – LLM + image understanding
- PyTorch – Model training & inference
- Pillow / OpenCV – Image preprocessing
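As an illustration of the Pillow preprocessing step, a minimal sketch is shown below. The 224x224 input size and the [0, 1] scaling are assumptions for the example; the real pipeline should match whatever the vision backbone expects.

```python
from PIL import Image


def preprocess(img: Image.Image, size: int = 224) -> list[float]:
    """Resize an image to the model's input size and scale pixel
    values from [0, 255] to [0.0, 1.0], returned as a flat list
    of RGB floats (size * size * 3 values)."""
    img = img.convert("RGB").resize((size, size))
    return [channel / 255.0 for pixel in img.getdata() for channel in pixel]
```

In practice this list would be reshaped into a PyTorch tensor (and normalized with the backbone's mean/std) before inference; OpenCV could equally handle the resize step.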
```bash
# Clone the repository
git clone https://github.com/Ridit07/Medical-Chatbot-LLM.git
cd Medical-Chatbot-LLM

# Install dependencies
pip install -r requirements.txt

# Run the app
streamlit run app.py
```
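The repository's `requirements.txt` is the authoritative dependency list; given the stack above, it will look roughly like the following (standard PyPI package names, versions omitted, shown only as an illustration):

```
streamlit
torch
pillow
opencv-python
openai
```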

