The newest fully public tutorial on how to use it: https://youtu.be/-zOKhoO9a5
## Comparison Results

### LoRA Epochs Comparison

- [LoRA 90 vs 160 vs Fine-Tuning 170 Comparison](https://huggingface.co/MonsterMMORPG/Model_Training_Experiments_As_A_Baseline/blob/main/LoRA_90_Epoch_vs_LoRA_160_Epoch_vs_Fine_Tuning_170_Epoch.jpg)
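For context on what the epoch counts above mean in practice, here is a rough epochs-to-steps calculation. The 28-image dataset size and batch size of 1 are illustrative assumptions, not the actual training setup:

```python
# Hypothetical: total optimizer steps for each compared run,
# assuming a 28-image dataset at batch size 1 (illustrative numbers only).
images, batch = 28, 1
runs = [("LoRA 90", 90), ("LoRA 160", 160), ("Fine-Tuning 170", 170)]
for label, epochs in runs:
    steps = epochs * images // batch  # one step per image per epoch at batch size 1
    print(f"{label}: {steps} steps")
```

With a different dataset size or batch size the step counts scale proportionally, which is why epoch counts alone are only comparable within the same training setup.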
### Precision Testing

Compared different precision formats in LoRA training:

- FP8 vs FP16 vs FP32 LoRA configurations
- [View Precision Comparison Grid](https://huggingface.co/MonsterMMORPG/Model_Training_Experiments_As_A_Baseline/blob/main/LoRA_Precision_FP8_vs_FP16_vs_FP32_Grid.jpg)
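To give a rough sense of the tradeoff behind these precision runs, here is a minimal sketch of the storage footprint at each precision. The 25M LoRA parameter count is an illustrative assumption, not a measured figure:

```python
# Hypothetical sketch: relative file-size footprint of LoRA weights
# at the three precisions compared in the grid above.
params = 25_000_000  # assumed LoRA parameter count, for illustration only
for name, bytes_per_param in [("FP8", 1), ("FP16", 2), ("FP32", 4)]:
    size_mib = params * bytes_per_param / 2**20  # bytes -> MiB
    print(f"{name}: {size_mib:.0f} MiB")
```

Lower precision halves (or quarters) storage and bandwidth per parameter, which is what the quality grids weigh against.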
### Model Variant Analysis

Tested various model variants with LoRA (FP32 version):

- FP8 FLUX DEV Base
- FP8 Scaled
- GGUF 8
- FLUX DEV

[View Model Variants Comparison Grid](https://huggingface.co/MonsterMMORPG/Model_Training_Experiments_As_A_Baseline/blob/main/Model_Variants_Tests_Grid.jpg)
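As a rough guide to how these base-model formats differ in size, here is a small sketch of approximate per-weight storage. The Q8_0 layout (blocks of 32 int8 weights plus one fp16 scale) follows the common GGUF convention, and treating the full FLUX DEV checkpoint as 16-bit is an assumption:

```python
# Approximate storage per weight for the compared base-model formats.
# Q8_0 assumes GGUF-style blocks: 32 int8 weights + one fp16 scale
# = 34 bytes per 32 weights. Other entries are plain dtypes.
fmt_bytes = {
    "FP8 FLUX DEV Base": 1.0,
    "FP8 Scaled": 1.0,
    "GGUF 8 (Q8_0)": 34 / 32,
    "FLUX DEV (16-bit)": 2.0,
}
for name, b in fmt_bytes.items():
    print(f"{name}: {b * 8:.2f} bits/weight")
```

The point of the grid is whether the quantized variants' smaller footprint costs noticeable LoRA output quality versus the full-precision base.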

### Key Observations

- LoRA demonstrates excellent realism but shows more obvious overfitting than Fine-Tuning when generating stylized images.
- Fine-Tuning / DreamBooth is better than LoRA, as expected.