
Model trainings, purely for science

Installers and config files : https://www.patreon.com/posts/112099700

Fine Tunings : https://youtu.be/FvpWy1x5etM

Used config name : 48GB_GPU_28200MB_6.4_second_it_Tier_1.json
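For orientation, here is a minimal sketch of the kind of fields such a Kohya GUI training config contains. The field names follow common kohya_ss conventions, but the values below are placeholders, not the actual Tier 1 settings - get the real `48GB_GPU_28200MB_6.4_second_it_Tier_1.json` from the Patreon post above.

```python
# Hypothetical sketch of a Kohya GUI config - NOT the actual Tier 1 file.
config = {
    "pretrained_model_name_or_path": "flux1-dev.safetensors",  # base FLUX model (assumed)
    "train_data_dir": "./dataset",          # 28 images, each captioned "ohwx man"
    "output_name": "Dwayne_Johnson_FLUX_Fine_Tuning",
    "max_train_epochs": 200,                # trained up to 200 epochs
    "save_every_n_epochs": 10,              # matches the -0000NN checkpoint names
    "train_batch_size": 1,
    "max_resolution": "1024,1024",
}
```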

Trained for up to 200 epochs with exactly the same config

Captions : ohwx man - nothing else

Activation token - trigger word : ohwx man
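Since every image uses the same one-line caption, preparing the captions is trivial. A minimal sketch (the `./dataset` path is an assumption - point it at your own image folder) that writes a matching `.txt` caption file next to each image:

```python
from pathlib import Path

# Write "ohwx man" as the caption for every image in the dataset folder.
dataset_dir = Path("./dataset")  # assumed path
for image in sorted(dataset_dir.iterdir()):
    if image.suffix.lower() in {".jpg", ".jpeg", ".png"}:
        image.with_suffix(".txt").write_text("ohwx man")
```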

Dataset - 1024x1024 - 28 images : https://www.patreon.com/posts/114972274

The dataset post above contains grid testing prompts, the full configs used, and much more

Grid tests for Fine-Tuning / DreamBooth

I think epoch 170 is the best (28 × 170 = 4,760 steps)

Grid test images : Dwayne_Fine_Tune_Realism_Test_Part1.jpg , Dwayne_Fine_Tune_Realism_Test_Part2.jpg

LoRA : https://youtu.be/nySGu12Y05k

Used config name : Rank_1_29500MB_8_85_Second_IT.json

The rest is the same as above
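A minimal inference sketch using Hugging Face diffusers (an assumption - the linked videos use other tools; recent diffusers versions can load Kohya-format FLUX LoRAs). The epoch-170 checkpoint name below is an example, and the trigger word "ohwx man" must appear in the prompt:

```python
import torch
from diffusers import FluxPipeline

# Load the base FLUX model, then the trained LoRA checkpoint.
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights(".", weight_name="Dwayne_Johnson_FLUX_LoRA-000170.safetensors")

# The trigger word "ohwx man" activates the trained subject.
image = pipe("photo of ohwx man wearing a suit", num_inference_steps=28).images[0]
image.save("ohwx_man.png")
```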

Used Kohya GUI : 021c6f5ae3055320a56967284e759620c349aa56

Torch : 2.5.1 , xFormers 0.0.28.post3 : https://www.patreon.com/posts/112099700
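A quick version-check sketch (not part of the original setup) to confirm your environment matches the versions above:

```python
import torch
import xformers

# Training used Torch 2.5.1 and xFormers 0.0.28.post3.
assert torch.__version__.startswith("2.5.1"), torch.__version__
assert xformers.__version__ == "0.0.28.post3", xformers.__version__
print("Torch", torch.__version__, "| xFormers", xformers.__version__)
```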

Model File Name Meanings

Dwayne_Johnson_FLUX_Fine_Tuning-000010.safetensors - 10 epochs FLUX Fine Tuning / DreamBooth training = 28 * 10 = 280 steps - Batch size 1, 1024x1024

Dwayne_Johnson_FLUX_Fine_Tuning-000020.safetensors - 20 epochs FLUX Fine Tuning / DreamBooth training = 28 * 20 = 560 steps - Batch size 1, 1024x1024

Dwayne_Johnson_FLUX_LoRA-000010.safetensors - 10 epochs FLUX LoRA Training = 28 * 10 = 280 steps - Batch size 1, 1024x1024

Dwayne_Johnson_FLUX_LoRA-000020.safetensors - 20 epochs FLUX LoRA Training = 28 * 20 = 560 steps - Batch size 1, 1024x1024
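The pattern generalizes: the six-digit suffix is the epoch count, and with batch size 1 the step count is simply 28 images × epochs. A small sketch that reproduces the numbers above:

```python
# Decode a checkpoint name's epoch suffix and compute its training steps.
NUM_IMAGES, BATCH_SIZE = 28, 1  # as stated above

def steps_for(filename: str) -> int:
    epochs = int(filename.rsplit("-", 1)[1].removesuffix(".safetensors"))
    return NUM_IMAGES * epochs // BATCH_SIZE

for name in (
    "Dwayne_Johnson_FLUX_Fine_Tuning-000010.safetensors",
    "Dwayne_Johnson_FLUX_LoRA-000020.safetensors",
):
    print(name, "->", steps_for(name), "steps")
# -> 280 steps and 560 steps respectively
```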