[ICCVW 25] LLaVA-MORE: A Comparative Study of LLMs and Visual Backbones for Enhanced Visual Instruction Tuning
Updated Aug 8, 2025 - Python
Domain-specialized Gemma 2 27B + Gemma 4 31B for SEC filings — fine-tuned on TPU v6e-8 with PyTorch/XLA FSDPv2, plus a Vertex AI Vector Search RAG demo (69 tickers × 381 filings). Same LoRA recipe, +3.5% / +5.8% BERTScore F1.
Fine-tuning of the Gemma 2 model for a Google competition on a dataset of Chinese poetry. The goal is to adapt the model to generate Chinese poetry in a classical style by training it on a subset of poems. The fine-tuning process uses LoRA (Low-Rank Adaptation) for parameter-efficient adaptation.
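The LoRA idea behind this kind of efficient fine-tuning can be sketched in plain NumPy: the pretrained weight `W0` stays frozen, and only a low-rank update `B @ A` (scaled by `alpha / r`) is trained. All shapes and values below are illustrative, not taken from the repository.

```python
import numpy as np

rng = np.random.default_rng(0)

d_in, d_out, r, alpha = 8, 4, 2, 16

W0 = rng.normal(size=(d_out, d_in))       # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01     # trainable down-projection
B = np.zeros((d_out, r))                  # trainable up-projection, zero-initialized

def lora_forward(x):
    # Base output plus the low-rank adapter update, scaled by alpha / r
    return W0 @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=d_in)
# Because B starts at zero, the adapter is a no-op before training begins
assert np.allclose(lora_forward(x), W0 @ x)
```

Only `A` and `B` would receive gradients during training, which is why LoRA cuts trainable parameters from `d_out * d_in` down to `r * (d_in + d_out)` per adapted layer.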
Variance-stable routing for 2-bit quantized MoE models. Features dynamic phase correction (Armen Guard), syntactic stabilization layer, and recursive residual quantization for efficient inference.
⚗️ Gemma 2 9B model instruct repository