SECourses Musubi Tuner - 1-Click to Install App for LoRA Training and Full Fine Tuning Qwen Image, Qwen Image Edit, Wan 2.1 and Wan 2.2 Models with Musubi Tuner with Ready Presets
APP Download Link : https://www.patreon.com/posts/137551634
- This is an interface app based on the famous Kohya Musubi Tuner, enhanced with extra features, a high level of detail, and an easy-to-use interface.
- Patreon: Exclusive Posts Index | Scripts Update History | Special Generative Scripts List
- GitHub: Stable Diffusion & Generative AI Repository (Please Star, Watch, and Fork!)
- Community: SECourses Discord | Reddit Subreddit
- Connect: LinkedIn
Latest Zip File: SECourses_Musubi_Trainer_v3.zip
- App Screenshots Gallery: View on Reddit
- Current Focus: Full research is underway to prepare the very best presets for Qwen Image LoRA training.
- The goal is to enable Qwen Image LoRA training on GPUs with as low as 6 or 8 GB of VRAM using block swapping and other optimizations.
- Easy Installation: 1-click installers are available for Windows, RunPod, and Massed Compute.
- Includes a 1-click model downloader script for the necessary models (`qwen_2.5_vl_7b_fp16.safetensors`, `qwen_image_bf16.safetensors`, `qwen_train_vae.safetensors`).
- Important: Please use the provided model downloader to avoid issues with incorrect model versions.
- The downloader script also verifies the SHA-256 hash of each model, preventing corrupted downloads.
- The model downloader uses a UGET-like method for ultra-fast and robust downloads, replacing the standard Hugging Face downloader.
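The hash-verification step described above can be sketched as follows; the function names are illustrative, not the downloader's actual code:

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream the file in 1 MB chunks so multi-GB model files don't fill RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return h.hexdigest()

def verify_download(path: str, expected_hash: str) -> bool:
    """Compare the computed digest against the published hash for the model."""
    return sha256_of(path) == expected_hash.lower()
```

A mismatch means the file was corrupted or truncated during download and should be re-fetched.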
- Musubi Tuner automatically handles FP8 and FP8-scaled conversion when loading the BF16 model into RAM, so the BF16 models are the ones to download.
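Conceptually, a per-tensor scaled FP8 cast works like the sketch below. This is a simplified illustration of the general technique, not Musubi Tuner's implementation; real code also rounds the scaled values to the FP8 grid, which is omitted here:

```python
FP8_E4M3_MAX = 448.0  # largest finite magnitude representable in float8 e4m3

def fp8_scaled_cast(weights):
    """Pick a per-tensor scale that maps the largest-magnitude weight onto
    the FP8 e4m3 range, then store the scaled values plus the scale."""
    max_abs = max((abs(w) for w in weights), default=0.0)
    scale = max_abs / FP8_E4M3_MAX if max_abs > 0 else 1.0
    return [w / scale for w in weights], scale

def dequantize(scaled, scale):
    """Recover (approximately) the original values from scaled storage."""
    return [v * scale for v in scaled]
```

Without the scale, any weight whose magnitude exceeds 448 would overflow the e4m3 range; the stored scale lets the values be recovered at load time.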
- Technical Foundation: This app is an interface based on the official Kohya Musubi Tuner, incorporating all its features plus additional enhancements.
- Modern Tech Stack: The installer comes with Torch 2.7, CUDA 12.8, and pre-compiled libraries for xFormers, Triton, Flash Attention, and Sage Attention.
- Broad GPU Support: Supports a wide range of GPUs including RTX 3000, 4000, 5000 series, A40, L40, A100, H100, B200, etc.
- Note: Flash Attention and Sage Attention may not work on RTX 2000 or 1000 series GPUs, but a solution is planned.
- Current Capabilities:
- Fully supports Qwen Image model LoRA training.
- Supports Qwen2.5-VL image captioning.
- Important Note: The original `Musubi Tuner` tab from the fork is not tested or supported. Please use the Qwen Image LoRA and Image Captioning tabs.
- The included `test1.toml` is a basic test file to confirm functionality, not an optimal configuration.
- Qwen Image Edit model LoRA training.
- Qwen Image model full Fine Tuning / DreamBooth.
- Wan 2.2 LoRA training and Fine Tuning / DreamBooth.
- Wan 2.1 LoRA training and Fine Tuning / DreamBooth.
- Python: 3.10.11
- NVIDIA: CUDA 12.8, cuDNN 9.7 or above
- Tools: FFmpeg, C++ tools, MSVC, and Git
- Note: CUDA 12.8 is compatible with all modern GPUs. If you encounter issues, follow this tutorial precisely: https://youtu.be/DrhUHnYfwC0.
- A full tutorial is coming soon.
- Use the `Windows_Install_and_Update.bat` script for installation and updates.
- Follow the same folder logic as Kohya's trainer (e.g., `Parent Folder > 1_ohwx man`). Use the Generate Dataset Configuration button to handle the setup.
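A Kohya-style folder name such as `1_ohwx man` encodes a repeat count and a concept/caption, separated by the first underscore. A minimal sketch of how such a name can be parsed (the actual Generate Dataset Configuration button may do more than this):

```python
import re

def parse_kohya_folder(name: str) -> tuple[int, str]:
    """Split a Kohya-style dataset folder name like '1_ohwx man' into
    (repeat count, concept). Raises ValueError for non-matching names."""
    m = re.match(r"^(\d+)_(.+)$", name)
    if not m:
        raise ValueError(f"not a Kohya-style folder name: {name!r}")
    return int(m.group(1)), m.group(2)
```

Here `1_ohwx man` means each image in that folder is repeated once per epoch, with `ohwx man` as the concept.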
- Sign Up: Register via this link.
- Use coupon code `SECourses` for a discount on all GPUs.
- For more details on GPUs and pricing, read this post.
- Select GPU: Choose an RTX A6000 or better (e.g., L40S, A6000 ADA, A100, H100, RTX 6000 PRO).
- Select Image: Choose SECourses from the "Creator" dropdown menu.
- Follow Instructions: Refer to the `Massed_Compute_Instructions_READ.txt` file in the repository.
- Video Tutorial: How to use Massed Compute (starts at 12:58)
- Sign Up: Register via this link.
- Follow Instructions: Refer to the `Runpod_Instructions_READ.txt` file and use the template provided within it.
- Video Tutorial: How to use RunPod (starts at 22:03)
30+ examples shared here: https://medium.com/@furkangozukara/qwen-image-lora-trainings-stage-1-results-and-pre-made-configs-published-as-low-as-training-with-ba0d41d76a05

