Update README.md
# Model Training Experiments: Fine-Tuning vs LoRA Comparison

This repository contains experimental results comparing Fine-Tuning/DreamBooth and LoRA training approaches.

## Additional Resources

- [Installers and Config Files](https://www.patreon.com/posts/112099700)
- [Fine Tuning Tutorial](https://youtu.be/FvpWy1x5etM)
- [LoRA Tutorial](https://youtu.be/nySGu12Y05k)
- [Complete Dataset and Testing Prompts](https://www.patreon.com/posts/114972274)

## Environment Setup

- Kohya GUI Version: `021c6f5ae3055320a56967284e759620c349aa56`
- Torch: 2.5.1
- xFormers: 0.0.28.post3
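To reproduce these experiments it helps to confirm the environment matches the pinned versions before training. A minimal sketch (my own helper, not part of the repo; it assumes `torch` and `xformers` are importable in the active environment and tolerates local build suffixes like `+cu124`):

```python
# Check installed package versions against the pins listed above.
import importlib

EXPECTED = {"torch": "2.5.1", "xformers": "0.0.28.post3"}

def check_versions(expected=EXPECTED):
    """Return {package: (expected, installed, ok)} for each pin."""
    report = {}
    for name, want in expected.items():
        try:
            have = importlib.import_module(name).__version__
        except ImportError:
            have = None
        # Installed versions may carry a local suffix such as "+cu124".
        ok = have is not None and have.split("+")[0] == want
        report[name] = (want, have, ok)
    return report

if __name__ == "__main__":
    for name, (want, have, ok) in check_versions().items():
        print(f"{name}: expected {want}, found {have} -> {'OK' if ok else 'MISMATCH'}")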
## Dataset Information

- Resolution: 1024x1024
- Dataset Size: 28 images
- Captions: "ohwx man" (nothing else)
- Activation Token/Trigger Word: "ohwx man"
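Because every image shares the single caption "ohwx man", the per-image caption files can be generated with a short script. A sketch under the assumption that captions live in `.txt` sidecar files next to each image (the common Kohya captioning setup); the function and folder layout are my own, not from the repo:

```python
# Write an identical "ohwx man" caption .txt beside every image in a folder.
from pathlib import Path

CAPTION = "ohwx man"
IMAGE_EXTS = {".jpg", ".jpeg", ".png", ".webp"}

def write_captions(dataset_dir, caption=CAPTION):
    """Create one sidecar .txt per image file; returns how many were written."""
    written = 0
    for img in Path(dataset_dir).iterdir():
        if img.suffix.lower() in IMAGE_EXTS:
            # e.g. 0001.jpg -> 0001.txt containing the single caption
            img.with_suffix(".txt").write_text(caption, encoding="utf-8")
            written += 1
    return written
```

Pointed at the 28-image dataset folder used here, this should report 28 caption files written.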
## Fine-Tuning / DreamBooth Experiment

### Configuration

- Config File: `48GB_GPU_28200MB_6.4_second_it_Tier_1.json`
- Training: Up to 200 epochs with a consistent config
- Optimal Result: Epoch 170 (subjective assessment)

### Results

- [Realism Test Part 1](https://huggingface.co/MonsterMMORPG/Model_Training_Experiments_As_A_Baseline/blob/main/Dwayne_Fine_Tune_Realism_Test_Part1.jpg)
- [Realism Test Part 2](https://huggingface.co/MonsterMMORPG/Model_Training_Experiments_As_A_Baseline/blob/main/Dwayne_Fine_Tune_Realism_Test_Part2.jpg)
## LoRA Experiment

### Configuration

- Config File: `Rank_1_29500MB_8_85_Second_IT.json`
- Training: Up to 200 epochs
- Optimal Result: Epoch 160 (subjective assessment)

### Results

- [LoRA Realism Test Part 1](https://huggingface.co/MonsterMMORPG/Model_Training_Experiments_As_A_Baseline/blob/main/Dwayne_LoRA_Realism_Test_Part1.jpg)
- [LoRA Realism Test Part 2](https://huggingface.co/MonsterMMORPG/Model_Training_Experiments_As_A_Baseline/blob/main/Dwayne_LoRA_Realism_Test_Part2.jpg)
## Comparison Results

- [LoRA 90 vs 160 vs Fine-Tuning 170 Comparison](https://huggingface.co/MonsterMMORPG/Model_Training_Experiments_As_A_Baseline/blob/main/LoRA_90_Epoch_vs_LoRA_160_Epoch_vs_Fine_Tuning_170_Epoch.jpg)

### Key Observations

LoRA demonstrates excellent realism but shows more obvious overfitting when generating stylized images.
## Model Naming Convention

### Fine-Tuning Models

- `Dwayne_Johnson_FLUX_Fine_Tuning-000010.safetensors`
  - 10 epochs
  - 280 steps (28 images × 10 epochs)
  - Batch size: 1
  - Resolution: 1024x1024
- `Dwayne_Johnson_FLUX_Fine_Tuning-000020.safetensors`
  - 20 epochs
  - 560 steps (28 images × 20 epochs)
  - Batch size: 1
  - Resolution: 1024x1024

### LoRA Models

- `Dwayne_Johnson_FLUX_LoRA-000010.safetensors`
  - 10 epochs
  - 280 steps (28 images × 10 epochs)
  - Batch size: 1
  - Resolution: 1024x1024
- `Dwayne_Johnson_FLUX_LoRA-000020.safetensors`
  - 20 epochs
  - 560 steps (28 images × 20 epochs)
  - Batch size: 1
  - Resolution: 1024x1024
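The step counts in the naming convention follow directly from dataset size × epochs (batch size 1, no image repeats), and the six-digit suffix in each checkpoint name is the epoch count. A sketch of both calculations (helper names are my own, not from the repo):

```python
# Reproduce the naming-convention arithmetic: total optimization steps
# = ceil(images / batch_size) * epochs, and the checkpoint filename
# suffix (e.g. -000010) encodes the epoch at which it was saved.
import re

DATASET_SIZE = 28  # images, per the Dataset Information section

def total_steps(epochs, images=DATASET_SIZE, batch_size=1):
    """Optimization steps for one training run."""
    return -(-images // batch_size) * epochs  # ceiling division

def epoch_from_checkpoint(filename):
    """Extract the epoch from names like 'X_Fine_Tuning-000010.safetensors'."""
    m = re.search(r"-(\d{6})\.safetensors$", filename)
    if not m:
        raise ValueError(f"unrecognized checkpoint name: {filename}")
    return int(m.group(1))
```

`total_steps(10)` gives 280 and `total_steps(20)` gives 560, matching the listings above.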