Wan 2.1 LoRA training locally?
Thanks for making the VisoMaster portable version for the RTX 5000 series.
WAN 2.1 LoRA training LOCALLY on Windows:
Would you please consider making a portable version for training Wan 2.1 LoRAs using a video dataset for motion?
I'm not sure what the most user-friendly trainer is, but I've heard of "Musubi-Tuner", which I failed to install on my RTX 5090. A portable version would be amazing to test:
https://github.com/kohya-ss/musubi-tuner
Maybe there are more user-friendly Wan 2.1 LoRA trainers for local training, but I'm not sure.
Thanks ahead for your upcoming portable versions, they rock! 🙂
Hi. Well, I can try to do it closer to the weekend, I guess. But I've seen another repository, I think.
Thanks for the kind reply,
I'm not sure if Musubi-Tuner even has a GUI or some sort of user interface. I was looking for anything user-friendly, like FluxGym, that allows training Wan 2.1 LoRAs locally, but I'm a noob at this.
I'm not 100% sure, but I think there is also a GUI for Musubi-Tuner, and maybe another one, but I remember trying to install it on my RTX 5090 and having no luck, so I gave up...
Anyhow,
if you know ANY other user-friendly way to train Wan 2.1 LoRAs locally, please consider making a portable version; it would be so great to try.
Thanks ahead! 🙂
I took a look at this repository... It's not as scary as it seemed to me at first glance lol. I'll probably make the first test version of the webui (simple enough) this weekend, just to practice on pictures for now.
That will be awesome!
I still have no clue how to actually start training from VIDEO files. People say they're using 1-2 second videos to train a LoRA for WAN 2.1, so I'm curious how to START training with it, because it looks overwhelming with options and properties.
If by any chance you give it a try (even just to START training), please consider dropping some tips & tricks on how you did it?
It will probably be much easier to follow than trying to understand a repository's confusing advanced explanations.
Also, once it works (with your upcoming portable version) I'll be happy to try it myself and share if I learn something.
Thanks ahead, please keep up the good work! 🙂
Hm. Recently I trained some Flux LoRAs... Maybe that will help with training Wan.
I've trained my first Flux LoRA with FluxGym; it was easy (unless you look at the ADVANCED options, which are overwhelming).
But I don't think WAN is that simple... I may be wrong; we've got to give it a chance.
I didn't find a tutorial for LoRA training; people mention properties and stuff like it's obvious... I'm just a noob when it comes to training.
I don't know if training a WAN 2.1 LoRA is the same, maybe it is? It's just that the trainer options look scary...
But it seems like many people are already using it, so maybe there are some "defaults" we can use to run some tests first. From what I read, people are using 3-4 videos of 1-2 seconds each and getting nice LoRAs, but some do much more than that: more videos, epochs, etc. I guess we'll have to try it first.
There is another trainer people are using, but I heard it's HELL to install on Windows: "Diffusion Pipe". Maybe this will help:
https://www.stablediffusiontutorials.com/2025/03/wan-lora-train.html
From my understanding, Musubi-Tuner is supposed to be easier to install, but training... that's a different thing, of course.
Please feel free to share your experience; of course I will do the same so we can help each other. But first I need a Wan 2.1 trainer that works on my RTX 5090. Whatever you decide to make a portable version of will be great, so we can use the same exact TOOL, share our experience, and learn how to train Wan 2.1 LoRAs based on the tests we do 💪
FluxGym is literally for people who want quick and easy. I used kohya-ss.
Right now, by the way, I'm making a portable version of lllyasviel's FramePack with SageAttention and RTX 50xx support, and I'll try to make the Musubi tuner closer to the weekend.
Everything is easy enough to install, except for the BORING flash! I tried to build it on my computer, but I only have 32 GB of RAM, and during the build it grew my swap file to an abnormal 110 GB. Maybe when I buy myself 128 GB of RAM I'll build it... but I don't know when that will be.
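In case it helps anyone else fighting the flash-attn build: its setup is known to respect the MAX_JOBS environment variable, which caps how many CUDA kernels compile in parallel (and therefore how much RAM the build eats). A sketch of what that looks like on Windows; the job count of 4 is a guess for 32 GB machines:

```
:: cap parallel compile jobs so the build doesn't blow up RAM/swap
set MAX_JOBS=4
pip install flash-attn --no-build-isolation
```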
I have considered diffusion-pipe. There is nothing super special for Windows except deepspeed and flash-attn, perhaps. You would have to build them yourself... not recommended for beginners. And there's no GUI (kind of, right?)
Yeah, I gave up on diffusion-pipe, because I never tried it before and it looks confusing.
Also... no GUI sucks. Look how amazing and easy things are with FluxGym, for example; maybe Musubi-Tuner with the Gradio GUI will be more like FluxGym (again, I can't say, since I never got it to work, and I don't really know how to train a Wan 2.1 LoRA yet either).
There is another Musubi-Tuner GUI I found which was NOT Gradio-based, but again... I couldn't make it work. Also... it looks SO CONFUSING I wouldn't even know where to start if it worked 🤔
There is a "guide", but it's not very detailed, not step-by-step, and it's aimed mostly at Hunyuan (which is crap compared to the KING: Wan 2.1).
Here it is:
https://github.com/Kvento/musubi-tuner-wan-gui
Even though it RUNS, it's not working on my side. You probably know what MAGIC to do to make it TRULY portable, with all the environment needs and such, for the RTX 5090, but like I said... look how confusing it is; maybe the GRADIO version is easier to follow?
Please keep me updated, and huge thanks for your patience. I hope we can learn how to train locally and share the experience 🙂
Well! I've done it! For now it's a nightly build, to learn the basic bugs of training (and so on). I'll gradually debug it and release a stable version.
CONGRATS! You rock! ❤️
I'll give it a try just to see if it works on my machine.
Since I have no clue how to train with Musubi-Tuner, or how to train a Wan 2.1 LoRA in general... please share a guide or anything that can help with STARTING, since nobody explains it well, especially not for noobs who have never trained before.
Keep me updated, and thanks ahead for your next portables! 🙂
Well, what can I say? You'll have to fine-tune your training parameters, I guess.
There are sooo many parameters; are there any specific ones I should just leave alone for a first test?
Any that you recommend tweaking?
Did you successfully train anything with it?
Also, what do you recommend for a VERY SHORT training test, just to SEE if it works? Like... 2-3 short videos?
I'm very curious. I also just downloaded your nightly portable; I'll have to see first if it runs on my machine (probably it will).
Of course, once I get the hang of it, I'll share my experience, but at the moment I don't even know what to start with for a SHORT, simple training run, so I won't let my PC run for hours with a wrong dataset or parameters I shouldn't even touch for testing.
Any tips & tricks will help for a start, thanks ahead!
I'm going to try running your portable version first... 🤞
What are you going to train? Movement?
Yes, my main goal is to train movement.
What's a movement, btw?
I mean, it could be any movement of a character or animal or whatever: the way something moves.
For example, let's say you want to train a person sitting on a chair.
From my understanding, we need a dataset of 2-3 second videos that show a man or woman sitting on a chair, from different angles, with different zoom (far away or close to the camera), etc.
If you want to train the movement of a human JUMPING up and down, you need a dataset of 2-3 second videos of jumping up and down (only), so your LoRA will be very focused on the movement you want to train.
Remember, that's only my understanding based on what I read on the web. I'm a total NOOB; I have no idea how to actually train a Wan 2.1 LoRA. That's why YOUR HELP with this portable, and probably both of us testing and sharing our experience, could help us understand how it works.
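From the guides I've skimmed, each clip also needs a small text caption next to it with the same file name. Something like this layout, if I understand it right (the folder and file names here are just made up by me):

```
dataset/
  videos/
    jump_001.mp4   <- 2-3 second clip of the motion
    jump_001.txt   <- short caption describing the clip
    jump_002.mp4
    jump_002.txt
```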
I think it will be somewhat similar to FLUX training
I REALLY hope so... I'm still overwhelmed by the zillion parameters nobody explains. All I want to know is how to test something, but... I can't tell what I should even touch or ignore, and maybe some files are needed, .toml or whatever... I don't know how to even create these files properly for the trainer, if they're needed. Maybe it's automatic? Maybe they already exist? I have no clue.
I kinda added automatic dataset creation in the webui.
Well, the good news: like 100% of the portable versions you've made so far... Musubi-Tuner WORKS!
The not-so-good news: I'm super confused by all the properties, and even by the first tab, the one related to the .toml.
Did you successfully train anything with it already? (I know it may take time to "cook".) But maybe you already did a test; I'm curious about it so I can try it myself... for a start I'll try to find a short dataset for testing.
I'm also wondering how long it will take to train on a 5090... obviously each case is different, but for a start I want to try the simplest possible training, just to see if it works and the LoRA actually does something.
Basically it's still that brainfuck, and I was a bit lazy about downloading all the WAN models and making a dataset with my internet speed (100 Mbps for like 8 years now, s-stability). But OK, I'll check and tweak the bugs I find.
I guess that will make the next portable version larger, but with ALL the needed WAN models it will be SWEET! 🙂
I won't include the WAN models in the portable version, because then it would weigh about 40 gigabytes 😿
That makes sense 🙂
Do I need to download them manually? And in what directory do I place them?
I tried to follow their confusing documentation; they say where to download from Hugging Face... but not what directory to put each model in.
I've made it in the webui so that you can set any path you want.
Oh! That's cool. Do I need to save it, or will it "remember" the paths once I exit?
So in a NEW session I won't need to set the paths again?
Yes, it writes a toml config file. Just save it once and use it every time. Simple and smart.
Thanks!
I'm trying to figure out what to do on that first TAB with the toml file; I guess that's another word for "defaults" that I can re-use.
I'll try to find or edit some simple dataset videos, but damn, it's hard when most guides are either not specific to Musubi-Tuner or are for Hunyuan (which sucks compared to Wan 2.1). I hope I'll understand how to start training something.
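From poking at the first tab, the dataset .toml the GUI writes seems to boil down to a handful of fields. A minimal sketch of what I think mine looks like (the paths and frame counts are just my guesses, not recommendations):

```
[general]
resolution = [960, 544]
caption_extension = ".txt"
batch_size = 1
enable_bucket = false

[[datasets]]
video_directory = "dataset/videos"   # the clips + matching .txt captions
cache_directory = "cache"            # where latents get cached
target_frames = [1, 25, 49]          # frame counts sampled per clip
frame_extraction = "head"            # take frames from the start of each clip
```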
At first glance, it's simple
Yeah, it's less scary than most of the advanced options in FluxGym (hundreds of them, which make no sense to a noob like me).
I guess I'll just have to try something... a shot in the dark. The problem is it may take hours only to get a LoRA result that makes no sense; that's why I'd rather look for some tips first, but they're hard to find, even on Reddit.
All right. I'll try it tomorrow then. Now it's time for sleep
Thank you for the hard work, I appreciate it!
G'night my friend 🙂
I don't think the update batch file works. I've added a "pause" to see what happens, because it was closing too quickly:
I think it's a path thing, but I'm not sure; I also have no idea how to make it work.
The main 'start_nvidia.bat' works fine 🙂
Can you please tell me how to fix it, or upload a fixed update file so I can overwrite mine?
Thanks ahead! 🙂
Hm. It's a strange bug.
Oh, so I can't update to the latest version; maybe it fixed some things.
I think there are a few other bugs too; I'm not sure if they're in the portable or not, because most people don't use it, from what I understand.
Anyhow
I'm trying to train a test Wan 2.1 LoRA with the portable version I downloaded from here.
I'm on the very first TAB; I'm done entering everything it needs to create a .toml file, but when I try to save I get this error:
So I just saved the file manually (with the arrow button) and renamed it to 'dataset_config.toml'.
I think it was supposed to be saved in the dataset directory; I noticed it wasn't, so I'll have to do that manually. No big deal.
I may run into other issues, since I'm clueless about the whole process; I'll keep you updated in case it's something you can tweak in the portable.
UPDATE:
Now I'm on the 2nd TAB, setting up all the correct models and paths.
Unfortunately, I get the EXACT same error when I try to save the SETTINGS (json file). This time there is no way to export a .txt file like in the 1st tab, so I guess it will be hell to set everything up again and again instead of just loading the settings from a .json file.
Again, I have no idea if the error is related to the portable or to the GUI repo, but I'm sharing it with you in case you can solve it.
Lastly, I tried to TRAIN,
pressing the START TRAINING button:
But I get errors, so it's impossible to train; I guess there are too many bugs.
The CMD shows the EXACT same error as I posted above,
and Gradio shows the settings, but at the very end it says it finished with errors.
```
Starting training process via subprocess with settings:
{
  "DATASET_CONFIG": "dataset/dataset_config.toml",
  "VAE_MODEL": "Models/Wan/Wan2.1_VAE.pth",
  "CLIP_MODEL": "Models/Wan/models_clip_open-clip-xlm-roberta-large-vit-huge-14.pth",
  "T5_MODEL": "Models/Wan/models_t5_umt5-xxl-enc-bf16.pth",
  "DIT_MODEL": "Models/Wan/wan2.1_i2v_720p_14B_fp16",
  "LORA_OUTPUT_DIR": "Output_LoRAs/",
  "LORA_NAME": "My_Best_Lora_v1",
  "RESUME_TRAINING": "",
  "MODEL_TYPE": "i2v-14B",
  "LEARNING_RATE": 2e-05,
  "LORA_LR_RATIO": 4,
  "NETWORK_DIM": 32,
  "NETWORK_ALPHA": 4,
  "MAX_TRAIN_EPOCHS": 30,
  "SAVE_EVERY_N_EPOCHS": 10,
  "SEED": 1234,
  "BLOCKS_SWAP": 16,
  "OPTIMIZER_TYPE": "adamw8bit",
  "OPTIMIZER_ARGS": "",
  "FP8": true,
  "SCALED": false,
  "ATTENTION_MECHANISM": "none",
  "IMG_IN_TXT_IN_OFFLOADING": false,
  "gradient_checkpointing": true,
  "persistent_data_loader_workers": true,
  "save_state": true,
  "LR_SCHEDULER": "constant",
  "LR_WARMUP_STEPS": 0,
  "LR_DECAY_STEPS": 0,
  "TIMESTEP_SAMPLING": "shift",
  "DISCRETE_FLOW_SHIFT": 3,
  "WEIGHTING_SCHEME": "none",
  "LOGGING_DIR": "",
  "LOG_WITH": "none",
  "LOG_PREFIX": "",
  "METADATA_TITLE": "",
  "METADATA_AUTHOR": "",
  "METADATA_DESCRIPTION": "",
  "METADATA_LICENSE": "",
  "METADATA_TAGS": "",
  "ENABLE_CACHE": true,
  "fp8_t5": true,
  "INPUT_LORA": "",
  "OUTPUT_DIR": "Output_LoRAs/",
  "CONVERTED_LORA_NAME": "My_Best_Lora_v1_converted",
  "mixed_precision": "bf16",
  "num_cpu_threads_per_process": 2,
  "max_data_loader_n_workers": 2
}
--- Cache enabled. Starting caching steps... ---
--- ERROR: Script for caching_latents not found at None. Skipping. ---
--- Training process finished with errors.
```
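For what it's worth, that "Script for caching_latents not found at None" line makes me think the GUI never resolved the path to the caching script. The upstream repo's README seems to run caching as standalone scripts before training, something like this (just a sketch; I'm assuming the portable keeps the repo's layout and file names):

```
:: cache video latents, then text encoder outputs (paths assumed, not verified)
python\python.exe musubi-tuner\wan_cache_latents.py --dataset_config dataset_config.toml --vae Models/Wan/Wan2.1_VAE.pth
python\python.exe musubi-tuner\wan_cache_text_encoder_outputs.py --dataset_config dataset_config.toml --t5 Models/Wan/models_t5_umt5-xxl-enc-bf16.pth
```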
HMMMMM
Okay, maybe I'll test this trainer today.
Thanks! I'll re-download it and try again, just to see if it even starts training.
I'll update you if I find any issues. 🙂
Well, the night is long enough, so why not.
Oh, the problem in update.bat... it doesn't make sense yet, since the original repo hasn't updated, so...
Btw, wouldn't it be more convenient to move to a Telegram group, for example?
I don't use Telegram, but I do have Discord installed; it will be easier for us to chit-chat :)
If you're already on Discord you can add me, my nickname is: 'virtualwishx'
Mm, Discord is blocked in my country, lol.
Oh! Well, I guess we're stuck in here for now? LOL
Unless there is another way? I mean, we could talk via Gmail Chat (built into a Gmail account).
I'm slowly going through the TABS one by one to see if everything works, up to the point where it starts training.
Not that I understand much about the setup properties, but I read some posts to follow; let's see if the portable works on my side.
For now I'm going step by step slowly, because I'm eating at the same time, so I'm in slow-motion mode 🙂
I'll keep you updated.
I already have notifications on my Gmail 🫡
A private space? I don't have permission.
Updating:
The good news:
It starts training!
The bad news:
After a minute or so it stops with errors, so I'm not sure what I'm doing wrong...

```
--- training process finished with errors or non-zero exit code (1). ---
--- Training sequence finished with errors.
```

I'm not sure if it's related to bugs in the portable or if I did something wrong.
Notice the error at the end of the log, something about specific lines related to a module or path; I have no idea:
```
--- Starting training ---
Script: D:/AI/Musubi Tuner/musubi_portable/musubi-tuner/wan_train_network.py
Working Directory: D:\AI\Musubi Tuner\musubi_portable
Running command: D:\AI\Musubi Tuner\musubi_portable\python\python.exe D:/AI/Musubi Tuner/musubi_portable/musubi-tuner/wan_train_network.py --persistent_data_loader_workers --optimizer_type adamw8bit --output_name My_Best_Lora_v1 --lr_scheduler constant --weighting_scheme none --log_with tensorboard --logging_dir D:/AI/Musubi Tuner/musubi_portable/logs --max_train_epochs 20 --metadata_title --save_every_n_epochs 10 --network_dim 32 --timestep_sampling shift --lr_warmup_steps 0 --learning_rate 2e-05 --metadata_author --max_data_loader_n_workers 2 --vae D:/AI/Musubi Tuner/musubi_portable/Models/Wan/Wan2.1_VAE.pth --save_state --t5 D:/AI/Musubi Tuner/musubi_portable/Models/Wan/models_t5_umt5-xxl-enc-bf16.pth --discrete_flow_shift 3 --dit D:/AI/Musubi Tuner/musubi_portable/Models/Wan/wan2.1_i2v_720p_14B_fp16.safetensors --clip D:/AI/Musubi Tuner/musubi_portable/Models/Wan/models_clip_open-clip-xlm-roberta-large-vit-huge-14.pth --dataset_config D:/AI/Musubi Tuner/musubi_portable/dataset_config.toml --fp8_t5 --fp8_scaled --network_alpha 4 --gradient_checkpointing --output_dir D:/AI/Musubi Tuner/musubi_portable/Output_LoRAs --seed 1234 --lr_decay_steps 0 --fp8_base --mixed_precision fp16 --task i2v-14B --blocks_to_swap 16 --network_args loraplus_lr_ratio=4.0 conv_dim=4 conv_alpha=1 target_modules=Attention --network_module networks.lora
[training STDERR] INFO:wan.modules.model:Detected DiT dtype: torch.float16
[training STDERR] INFO:hv_train_network:Load dataset config from D:\AI\Musubi Tuner\musubi_portable\dataset_config.toml
[training STDERR] INFO:dataset.image_video_dataset:glob videos in D:/AI/Musubi Tuner/musubi_portable/dataset/videos
[training STDERR] INFO:dataset.image_video_dataset:found 8 videos
[training STDERR] INFO:dataset.config_utils:[Dataset 0]
[training STDERR]   is_image_dataset: False
[training STDERR]   resolution: (960, 544)
[training STDERR]   batch_size: 1
[training STDERR]   num_repeats: 1
[training STDERR]   caption_extension: ".txt"
[training STDERR]   enable_bucket: False
[training STDERR]   bucket_no_upscale: False
[training STDERR]   cache_directory: "D:/AI/Musubi Tuner/musubi_portable/cache"
[training STDERR]   debug_dataset: False
[training STDERR]   video_directory: "D:/AI/Musubi Tuner/musubi_portable/dataset/videos"
[training STDERR]   video_jsonl_file: "None"
[training STDERR]   control_directory: "None"
[training STDERR]   target_frames: (1, 25, 49)
[training STDERR]   frame_extraction: head
[training STDERR]   frame_stride: 1
[training STDERR]   frame_sample: 1
[training STDERR]   max_frames: 129
[training STDERR]   source_fps: 16.0
[training STDERR] INFO:dataset.image_video_dataset:bucket: (960, 544, 1), count: 8
[training STDERR] INFO:dataset.image_video_dataset:bucket: (960, 544, 25), count: 8
[training STDERR] INFO:dataset.image_video_dataset:total batches: 16
[training STDERR] INFO:hv_train_network:preparing accelerator
[training STDERR] INFO:hv_train_network:DiT precision: torch.float16, weight precision: None
[training STDERR] INFO:hv_train_network:Loading DiT model from D:/AI/Musubi Tuner/musubi_portable/Models/Wan/wan2.1_i2v_720p_14B_fp16.safetensors
[training STDOUT] Trying to import sageattention
[training STDOUT] Failed to import sageattention
[training STDOUT] accelerator device: cuda
[training STDERR] Traceback (most recent call last):
[training STDERR]   File "D:\AI\Musubi Tuner\musubi_portable\musubi-tuner\wan_train_network.py", line 442, in <module>
[training STDERR]     trainer.train(args)
[training STDERR]   File "D:\AI\Musubi Tuner\musubi_portable\musubi-tuner\hv_train_network.py", line 1464, in train
[training STDERR]     raise ValueError(
[training STDERR] ValueError: either --sdpa, --flash-attn, --flash3, --sage-attn or --xformers must be specified
--- STDOUT reader thread for training finished. ---
--- STDERR reader thread for training finished. ---
--- training process finished with errors or non-zero exit code (1). ---
--- Training sequence finished with errors.
```
Oh, an arguments error. I'll fix it.
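If anyone wants to try before the fix lands: the traceback just says one attention backend flag has to be passed, and --sdpa (PyTorch's built-in attention, no extra install) is the safe default. A sketch of the same command the GUI generates, shortened, with only the missing flag appended; not tested on my side:

```
:: run from the musubi_portable folder; all other flags are the ones the GUI already builds
python\python.exe musubi-tuner\wan_train_network.py ^
  --task i2v-14B ^
  --dit Models/Wan/wan2.1_i2v_720p_14B_fp16.safetensors ^
  --vae Models/Wan/Wan2.1_VAE.pth ^
  --t5 Models/Wan/models_t5_umt5-xxl-enc-bf16.pth ^
  --clip Models/Wan/models_clip_open-clip-xlm-roberta-large-vit-huge-14.pth ^
  --dataset_config dataset_config.toml ^
  --network_module networks.lora --network_dim 32 --network_alpha 4 ^
  --mixed_precision fp16 --gradient_checkpointing --blocks_to_swap 16 ^
  --output_dir Output_LoRAs --output_name My_Best_Lora_v1 ^
  --sdpa
```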
Any luck fixing it?
I'll test it again once you upload the new version 🙂
This version of the Musubi trainer is also good, and it uses scripts, but I can't get it to work with my RTX 5090 :-(
https://github.com/sdbds/musubi-tuner-scripts
Any luck fixing it?
I'll test it again once you upload the new version 🙂
Damn, I thought I had answered you and forgot about it. No, I haven't put in the fixes yet. Closer to the weekend...
PyTorch 2.7.0 was just released, if it helps?
https://github.com/pytorch/pytorch/releases/tag/v2.7.0
https://pytorch.org/get-started/locally/
So, why? I use the 2.8 beta.
Oh... interesting. Until now I used the 2.8 nightly... not that it helped in all cases.
As long as 2.7.0 also supports CUDA 12.8, I think things should (theoretically) work fine?
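For reference, grabbing the stable build looks like this (assuming the cu128 index carries wheels for your Python version; that's the CUDA 12.8 build the 50xx cards need):

```
pip install torch torchvision --index-url https://download.pytorch.org/whl/cu128
```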
BTW, I tried other properties and such in the Musubi-Tuner, but nothing helped and I always got the same error; I guess it's something in the code after all.
Maybe. I use 2.8 only when 2.6 doesn't support cu128.
Based on your other portables I tested, it works fine, so you can keep using 2.8 🙂
Since the current Musubi-Tuner can't train (code errors),
do you think it's possible to make a 1-click portable with one of these other GUIs for Musubi-Tuner?
https://github.com/kohya-ss/musubi-tuner/issues/162
This one looks a bit overwhelming, but at the same time you can train ANY model:
https://github.com/maybleMyers/h1111
But maybe the one above, which focuses on Musubi-Tuner for Wan 2.1 LoRAs, would be easier to set up, to test if it works at all?
> Based on your other portables I tested, it works fine, so you can keep using 2.8 🙂
It's nice.
Dude, I moved DFLive to a new version of the code faster than I fixed Musubi 🙂
Musubi-Tuner seems much cleaner than Diffusion Pipe, which most people are using, but I understand more and more people are starting to use Musubi-Tuner to train their Wan 2.1 LoRAs. I can't even test it yet, but I think it "ALMOST" started training hehe...
For now I keep looking for other training solutions for Wan 2.1; so far there are not many, and most of them don't even have a GUI, which sucks, because the number of scripts to juggle without a GUI sucks for every training run.
But if you find some way to train Wan 2.1 LoRAs locally with the 5090, please share. I appreciate your hard work, thanks ahead! 🙂
Okay
Hello again, I hope you're doing well ❤️
Since Musubi-Tuner doesn't allow training (like I mentioned above, some errors, so it can't train),
any chance you can have a look at this:
https://github.com/alisson-anjos/diffusion-pipe-ui
EDIT: (the updated one)
https://github.com/alisson-anjos/diffusion-pipe-ui/tree/new-ui
It's a Diffusion-Pipe UI, so it's supposed to be a bit easier to follow. It's probably harder to install because of WSL, but it can also train other model types and keeps getting updated when new models come out. Musubi-Tuner is probably much easier to follow, but sadly it's unusable because of the errors.
If you know any other Wan 2.1 LoRA trainer, please share; I would love to try training something locally and share what I learn once I can train something 🙂
Thanks for the suggestion; currently I'm using this fork and it works with the RTX 5000 series.
Wow, a UI for Diffusion Pipe? I'm so interested.
> Thanks for the suggestion; currently I'm using this fork and it works with the RTX 5000 series.
Is it complicated without a UI?
I still couldn't train anything successfully via the portable version because of the errors, but like 99% of the setup worked... until the training itself, of course.
Will you make a portable version of the Musubi-Tuner-Scripts?
Also, would you consider making batch files (.bat) as templates for training, based on your experience with it, if it all works via text?
I can't imagine how complicated it is compared to a UI.
Any tips will surely be helpful, thanks ahead!
> Wow, a UI for Diffusion Pipe? I'm so interested.
Yeah, it sounds like a GLOBAL trainer, because it usually gets updated with the latest models; maybe with the UI it will be easier to install and use.
I'll try to finish the Musubi webui before the end of May.
That would be great, thanks ahead! 🙂
Hi again, I'm just sharing my experience so far (nothing much, but maybe you can help).
So, as you know, I keep looking around the internet for a way to train a Wan 2.1 LoRA locally. From what I've read and tried (and failed at) so far: most trainers for Wan 2.1 LoRAs need the most annoying, almost-impossible-to-install pieces, SageAttention / Triton, in a specific version to fit the RTX 5090, and that may be the reason so many errors appear.
For example, this one is really clean (a native Windows GUI, non-Gradio):
https://github.com/Kvento/musubi-tuner-wan-gui
I had to create a venv and go through the rest of the requirements procedure. I had some issues, and then more issues when I tried to train, related to SageAttention/Triton... that's where I stopped trying. I had already spent too many hours trying to install those before; when I see them, I run away... The portable Gradio one you made looked easy enough to follow;
I got errors, and it also didn't SAVE all the settings (most of them, though), so it was a bit weird, but still a nice-looking UI. The Gradio one is also interesting (I'm not sure if this is the one you made portable, or a NEWER one):
https://github.com/kohya-ss/musubi-tuner
It seems promising, but I didn't understand how to install the NEWEST version.
Installing it was hell (I bet if you make a portable with a built-in environment it will be 1 click and done), but I also couldn't train on it, because I failed to install Sage/Triton many times before. The only EASY way was with ComfyUI, which had a script that installed them for me. I don't remember where I grabbed it from, but RTX 5090 users used to download it and follow a ONE-CLICK guide that installs SageAttention/Triton along with ComfyUI (that's the version I'm actually using). Diffusion Pipe:
it was tempting to try on Windows with all the WSL/Ubuntu stuff etc., but it's more complicated and confusing to install and set up than anything I've tried, so after spending too much time on it I just uninstalled everything related to it, because I got stuck before I could even get anything to work.
So far Musubi-Tuner seems to be the best friend for Windows users, but...
I couldn't successfully START even a small test training with any of the GUI versions, and the script-based one you mentioned I didn't understand how to install at all; it seems super confusing.
So I wonder if you could look around Musubi-Tuner (any of the versions) and see if you can make a ONE-CLICK out of one of them, like you did with the portable. I'm not sure what the error was all about; maybe it was almost ready to train, I can't tell, but it RUNS easily thanks to you, without extra installations!
I wrote this to share my experience from the last few days, so you won't spend time on Diffusion Pipe; unless it's easy for you to install, in which case sure, go for it 🙂
But if you can make ANY of the Musubi-Tuner versions successfully let me TRAIN, it will be awesome!
As always, I'll be happy to try your next portable and give feedback, or share tips & tricks if I ever manage to train locally on my RTX 5090, of course.
Oh, man. I honestly don't want to do Musubi anymore. I'd rather do diffusion-pipe than it.
Oh my! That was hell for me, even just trying Diffusion-Pipe on Windows.
With Musubi I was "almost" able to train something... also, many people who train LoRAs are moving to Musubi-Tuner because they like kohya-ss's work, and I guess it just works for them.
With Diffusion-Pipe (which sounds AWESOME, because you can train almost any MODEL type!) I wasted like 2 days fighting to make it work, with so many guides and tutorials... and even Copilot, but I kept failing. I guess it's not going to work on Windows without wasting days trying. But if you made it work, or know ANY other way I can train my first Wan 2.1 LoRA on Windows, please do share 🙂
Hi @NeuroDonu. Is there any update on the WAN LoRA trainer? It would be awesome, especially with things like VACE not available.
Thanks.