Medical Chatbot Fine-Tuning with LoRA & MTL-LoRA
Overview
This project fine-tunes the LLaMA 1B model for medical question answering using LoRA and a custom Multi-Task LoRA (MTL-LoRA) framework implemented from scratch in PyTorch.
Phase 1: Fine-Tuning with PubMedQA
We adapt the model to the medical domain with the PubMedQA dataset, using two approaches (a minimal setup sketch follows this list):
- LoRA: parameter-efficient low-rank adaptation for faster, lightweight fine-tuning.
- Traditional fine-tuning: updating only the last layer, as a controlled baseline.
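As a rough sketch of how the two setups can be wired up with Hugging Face `transformers` and `peft` (the model ID, rank, and target modules here are illustrative assumptions, not the project's exact configuration):

```python
# Sketch only: model ID and hyperparameters are assumptions,
# not necessarily this project's exact configuration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_id = "meta-llama/Llama-3.2-1B"  # assumed 1B LLaMA checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Option A: LoRA. The base weights stay frozen; only small low-rank
# adapters injected into the attention projections are trained.
lora_model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)
lora_config = LoraConfig(
    r=8,                                  # rank of the low-rank update
    lora_alpha=16,                        # scaling factor
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # assumed target modules
    task_type="CAUSAL_LM",
)
lora_model = get_peft_model(lora_model, lora_config)
lora_model.print_trainable_parameters()  # only a tiny fraction is trainable

# Option B: traditional baseline. Freeze everything except the output
# head, i.e. train only the last layer.
ft_model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)
for p in ft_model.parameters():
    p.requires_grad = False
for p in ft_model.lm_head.parameters():
    p.requires_grad = True
```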
How We Evaluate
We compare three configurations (the base model, the LoRA-tuned model, and the traditionally fine-tuned model) on the following metrics; a perplexity sketch follows the list:
- Perplexity
- BLEU score
- ROUGE score
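For perplexity, a common recipe is to exponentiate the average per-token cross-entropy loss. A minimal sketch (the `perplexity` helper and its arguments are our own illustration, not this repo's exact evaluation code):

```python
import math
import torch

@torch.no_grad()
def perplexity(model, tokenizer, texts, device="cuda"):
    """Corpus perplexity = exp(total cross-entropy / total predicted tokens)."""
    model.eval().to(device)
    total_loss, total_tokens = 0.0, 0
    for text in texts:
        enc = tokenizer(text, return_tensors="pt").to(device)
        # Passing labels = input_ids makes the model return the causal
        # LM loss, averaged over the (shifted) predicted tokens.
        out = model(**enc, labels=enc["input_ids"])
        n_pred = enc["input_ids"].numel() - 1  # shifted targets
        total_loss += out.loss.item() * n_pred
        total_tokens += n_pred
    return math.exp(total_loss / total_tokens)
```

BLEU and ROUGE are computed on generated answers against reference answers; libraries such as Hugging Face `evaluate` (`evaluate.load("bleu")`, `evaluate.load("rouge")`) cover both.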
Phase 2: Custom Multi-Task LoRA (MTL-LoRA)
We built MTL-LoRA from scratch in PyTorch to enable efficient multi-task learning across medical NLP tasks in a single training pipeline. Following the MTL-LoRA paper (arXiv:2410.09437), this approach offers the benefits below; a sketch of the layer follows the list:
- Multi-task adaptation without retraining per task.
- Improved generalization across diverse medical datasets.
- Reduced computational cost compared to full fine-tuning.
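As a minimal sketch of the core layer, under our reading of arXiv:2410.09437: a shared low-rank down-projection A, a task-specific diagonal transform Λ_t, and several shared up-projections B_i mixed by learned per-task weights. All names, shapes, and initialization choices below are illustrative, not the project's exact code:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MTLLoRALinear(nn.Module):
    """Illustrative MTL-LoRA layer wrapping a frozen nn.Linear.

    For task t: h = Lambda_t * (A x), and the output delta is a
    softmax-weighted mix of shared up-projections B_i applied to h.
    """

    def __init__(self, base: nn.Linear, r: int = 8, n_tasks: int = 3,
                 n_b: int = 2, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():          # pretrained weights stay frozen
            p.requires_grad = False
        self.scaling = alpha / r
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)  # shared
        self.lam = nn.Parameter(torch.ones(n_tasks, r))                 # diag Lambda_t
        self.B = nn.Parameter(torch.zeros(n_b, base.out_features, r))   # shared B_i
        self.mix_logits = nn.Parameter(torch.zeros(n_tasks, n_b))       # per-task weights

    def forward(self, x: torch.Tensor, task_id: int) -> torch.Tensor:
        h = F.linear(x, self.A)                    # shared down-projection, (..., r)
        h = h * self.lam[task_id]                  # task-specific diagonal transform
        w = torch.softmax(self.mix_logits[task_id], dim=-1)
        up = torch.einsum("b,bor->or", w, self.B)  # mixed up-projection, (out, r)
        return self.base(x) + self.scaling * F.linear(h, up)

# Quick smoke test: one layer, batch of 4, task 0.
layer = MTLLoRALinear(nn.Linear(512, 512))
y = layer(torch.randn(4, 512), task_id=0)
print(y.shape)  # torch.Size([4, 512])
```

Because A and the B_i matrices are shared across tasks while only Λ_t and the mixing weights are task-specific, the layer adapts to a new task without retraining a separate adapter per task.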
Join the Future of Medical AI!
Contribute, experiment, and push the boundaries of what's possible in AI-driven healthcare!