Video Content Customization Using First Frame (updated Mar 17, 2026, Python)
Source code of EMNLP 2022 Findings paper "SparseAdapter: An Easy Approach for Improving the Parameter-Efficiency of Adapters"
The official implementation of the paper "Rethinking Pruning for Vision-Language Models: Strategies for Effective Sparsity".
Transform your documents into intelligent conversations. This open-source RAG chatbot combines semantic search with fine-tuned language models (LLaMA, Qwen2.5VL-3B) to deliver accurate, context-aware responses from your own knowledge base. Join our community!
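The RAG entry above combines semantic search with a fine-tuned generator. As a minimal sketch of the retrieval half (a toy bag-of-words "embedding" stands in for the real sentence-embedding model; all names here are illustrative, not from the listed repo):

```python
from collections import Counter
import math

def embed(text):
    """Toy 'embedding': bag-of-words term counts. A real RAG stack
    would use a learned sentence-embedding model here."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=1):
    """Return the k documents most similar to the query."""
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

docs = [
    "LoRA adds low-rank adapters to frozen weights.",
    "Beam search keeps the top-k partial decodings at each step.",
]
# Retrieved context is prepended to the user question before generation.
context = retrieve("how do low-rank adapters work", docs)[0]
prompt = f"Answer using this context:\n{context}\n\nQuestion: ..."
```

The generator then answers from `prompt` rather than from its parametric memory alone, which is what makes the responses "context-aware" in the sense the description uses.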
🔧 Modular pipeline for generating high-quality, domain-specific datasets for LLM fine-tuning — from PDFs and web scraping to synthetic Q&A generation, quality filtering, and training-ready formatting.
The retrieval stack matters — but ragweld’s differentiator is the engineering surface area around it: benchmarking, evaluation, diagnosis, and operations. It’s built for senior engineers who need answers like: what changed, why did it change, and what should we try next — without guessing.
MedGemma 1.5 Instruct LoRA fine-tuning + Gradio app for Real-time Interaction on CT-Scan, MRI, X-RAY images
A small web app that generates Naruto-inspired anime ninja images using a fine-tuned model on Replicate. Enter a prompt; the app returns an image via a server-side API route. The UI is dark, cinematic, and ninja-themed.
LoRA fine-tuning of the Flan-T5-Base Hugging Face Transformer to generate Pinterest-specific keywords for a historical personality
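Many of these entries rely on LoRA. The core idea fits in a few lines of NumPy: a frozen weight `W` is augmented with a trainable low-rank update scaled by `alpha / r` (this is a from-scratch sketch of the technique, not code from any listed repo):

```python
import numpy as np

def lora_forward(x, W, A, B, alpha=16):
    """Forward pass of a LoRA-adapted linear layer.

    W : frozen pretrained weight, shape (d_out, d_in)
    A : trainable down-projection, shape (r, d_in)
    B : trainable up-projection, shape (d_out, r), initialised to zeros
        so training starts from the unmodified pretrained layer.
    """
    r = A.shape[0]
    scale = alpha / r
    # Frozen path plus scaled low-rank correction.
    return x @ W.T + scale * (x @ A.T @ B.T)

rng = np.random.default_rng(0)
d_in, d_out, r = 8, 4, 2
W = rng.normal(size=(d_out, d_in))
A = rng.normal(size=(r, d_in)) * 0.01
B = np.zeros((d_out, r))  # standard LoRA init: B = 0

x = rng.normal(size=(3, d_in))
y = lora_forward(x, W, A, B)
```

Only `A` and `B` receive gradients during fine-tuning, which is why LoRA needs a small fraction of the memory of full fine-tuning; with `B = 0` at initialisation the adapted layer is exactly the pretrained one.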
A banking intent router powered by the a2a (Agent-to-Agent) framework. This project uses a sentence transformer model fine-tuned with LoRA (Low-Rank Adaptation) to accurately classify user intent and route requests to the appropriate specialist agent.
Fine-tuning Qwen2-VL on Apple Silicon (MLX) for structured JSON document extraction.
Persian gender-neutralization pipeline: dataset collection/labeling (X/Twitter + ChatGPT), plus PEFT (LoRA + int8) fine-tuning of Llama 3 for rewriting gender-biased Persian text into gender-neutral form.
Kaggle Competition for MAP Charting Student Math Misunderstanding
Identity-preserving image-to-video generation: vision-grounded prompt simplification via Qwen3-VL, Lightning LoRA 4-step inference, and SAM3-masked DINOv3 candidate reranking for fluid 720p video from a single reference image.
Autonomous Multi-Agent AI Framework — control 1000+ apps via natural language using LangGraph + MCP + Zapier
A compact transformer-based spam classifier using DistilBERT and LoRA, built for resource-efficient fine-tuning and effective email filtering.
How close can LoRA get to full fine-tuning (FullFT) in learning speed, performance, and compute tradeoffs, and under what conditions?
High-performance LPR system optimized for Indian license plates, achieving 97% character accuracy. Features a hybrid pipeline using YOLOv11 and Mamba-SSM (State Space Models) with built-in regex correction and Beam Search decoding.
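The LPR entry above mentions beam search decoding. A minimal sketch of that algorithm over per-step character distributions (toy probabilities stand in for the model's real output; none of these names come from the listed repo):

```python
import math

def beam_search(step_probs, beam_width=2):
    """Decode the most likely character sequence from a list of
    per-step probability distributions.

    step_probs: list of dicts mapping character -> probability.
    Returns the best (sequence, cumulative_log_probability) pair.
    """
    beams = [("", 0.0)]  # (prefix, cumulative log-prob)
    for probs in step_probs:
        # Extend every surviving prefix with every candidate character.
        candidates = [
            (prefix + ch, score + math.log(p))
            for prefix, score in beams
            for ch, p in probs.items()
        ]
        # Prune: keep only the beam_width highest-scoring prefixes.
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:beam_width]
    return beams[0]

# Toy per-character distributions for a 3-character plate fragment.
steps = [
    {"K": 0.6, "X": 0.4},
    {"A": 0.55, "4": 0.45},
    {"0": 0.7, "O": 0.3},
]
best, score = beam_search(steps)
```

Unlike greedy decoding, keeping `beam_width` hypotheses alive lets a later high-confidence character rescue a prefix that was locally second-best; the regex correction step the description mentions would then filter the surviving hypotheses against valid plate formats.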
Introduction to OpenLoRA: Revolutionizing Operational Training for Large Language Models