LoRA and MTL-LoRA: A New Frontier in Multi-Task Fine-Tuning for LMs

πŸš€ Medical Chatbot Fine-Tuning with LoRA & MTL-LoRA πŸ₯πŸ’‘

πŸ” Overview

This project pushes the boundaries of medical AI by fine-tuning the LLaMA 1B model using LoRA and our custom-built Multi-Task LoRA (MTL-LoRA) framework, designed from scratch in PyTorch.

πŸ“š Phase 1: Fine-Tuning with PubMedQA

We enhance the model’s medical expertise with PubMedQA, leveraging:

⚑ LoRA: Efficient low-rank adaptation for faster, lightweight fine-tuning.

πŸ› οΈ Traditional Fine-Tuning: Updating only the last layer for controlled training.

πŸ“Š How We Evaluate

We compare three configurations (base model, LoRA-tuned model, and traditionally fine-tuned model) using:

🎯 Perplexity

πŸ† BLEU Score

πŸ“ˆ ROUGE Score
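
A sketch of how these three metrics can be computed, assuming the Hugging Face `evaluate` package; the prediction and reference strings below are placeholders, not real outputs:

```python
import math
import torch
import evaluate

bleu = evaluate.load("bleu")
rouge = evaluate.load("rouge")

def perplexity(model, input_ids):
    """Perplexity = exp(mean token-level cross-entropy) on held-out text."""
    with torch.no_grad():
        loss = model(input_ids, labels=input_ids).loss
    return math.exp(loss.item())

predictions = ["yes, the intervention improved outcomes"]  # model generations
references = ["yes, the study found improved outcomes"]    # gold PubMedQA answers

print(bleu.compute(predictions=predictions, references=[[r] for r in references])["bleu"])
print(rouge.compute(predictions=predictions, references=references)["rougeL"])
```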

πŸ€– Phase 2: Custom Multi-Task LoRA (MTL-LoRA)

We built MTL-LoRA from scratch in PyTorch, enabling efficient multi-task learning across several medical NLP tasks in a single training pipeline. Inspired by recent research (arXiv:2410.09437), this approach offers:

πŸš€ Seamless multi-task adaptation without retraining per task.

πŸ”¬ Enhanced generalization across diverse medical datasets.

πŸ’° Reduced computational cost compared to full fine-tuning.
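
A from-scratch sketch of an MTL-LoRA linear layer in the spirit of arXiv:2410.09437: a shared down-projection `A`, a task-specific low-rank transform `Lambda`, and a pool of up-projections `B` blended by per-task softmax weights. Dimensions, initialization, and naming here are illustrative assumptions, not the repo's exact implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MTLLoRALinear(nn.Module):
    def __init__(self, in_features, out_features, num_tasks, r=8, num_B=3, alpha=16):
        super().__init__()
        self.base = nn.Linear(in_features, out_features, bias=False)
        self.base.weight.requires_grad = False                     # frozen pretrained weight
        self.A = nn.Parameter(torch.randn(r, in_features) * 0.01)  # shared down-projection
        self.Lambda = nn.Parameter(
            torch.stack([torch.eye(r) for _ in range(num_tasks)])) # task-specific r x r transforms
        self.B = nn.Parameter(torch.zeros(num_B, out_features, r)) # pool of shared up-projections
        self.w = nn.Parameter(torch.zeros(num_tasks, num_B))       # per-task mixing logits
        self.scale = alpha / r

    def forward(self, x, task_id):
        # x: (batch, seq, in_features); one task per batch.
        h = self.base(x)                             # frozen base output
        z = x @ self.A.T                             # shared low-rank projection
        z = z @ self.Lambda[task_id].T               # task-specific transform in rank-r space
        mix = F.softmax(self.w[task_id], dim=-1)     # weights over the B pool
        up = torch.einsum("n,nor->or", mix, self.B)  # blended up-projection, shape (out, r)
        return h + self.scale * (z @ up.T)

layer = MTLLoRALinear(512, 512, num_tasks=2)
out = layer(torch.randn(4, 16, 512), task_id=0)      # e.g. task 0 = PubMedQA, task 1 = Riddle
```

During training, batches from each dataset are routed through the layer with their own `task_id`, so all tasks share `A` and the `B` pool while keeping a small amount of task-private capacity.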

🌍 Join the Future of Medical AI!

Contribute, experiment, and push the boundaries of what’s possible in AI-driven healthcare! πŸ₯πŸ’™
