MonsterMMORPG committed
Commit 0b741fd · verified · 1 Parent(s): c57e82e

Update README.md

Files changed (1)
  1. README.md +59 -42
README.md CHANGED
@@ -1,63 +1,80 @@
- Purely for science Model Trainings
-
- Installers and config files : https://www.patreon.com/posts/112099700
-
- Fine Tunings : https://youtu.be/FvpWy1x5etM
-
- Used config name : 48GB_GPU_28200MB_6.4_second_it_Tier_1.json
-
- Trained up to 200 epochs with exactly same config
-
- Captions : ohwx man - nothing else
-
- Activation token - trigger word : ohwx man
-
- Dataset - 1024x1024 - 28 images : https://www.patreon.com/posts/114972274
-
- The dataset post above contains grid testing prompts, full used configs, and many more info
-
- Grid testings for Fine-Tuning / DreamBooth
-
- I think epoch 170 is the best - this is subjective
-
- [Dwayne_Fine_Tune_Realism_Test_Part1.jpg](https://huggingface.co/MonsterMMORPG/Model_Training_Experiments_As_A_Baseline/blob/main/Dwayne_Fine_Tune_Realism_Test_Part1.jpg)
-
- [Dwayne_Fine_Tune_Realism_Test_Part2.jpg](https://huggingface.co/MonsterMMORPG/Model_Training_Experiments_As_A_Baseline/blob/main/Dwayne_Fine_Tune_Realism_Test_Part2.jpg)
-
- LoRA : https://youtu.be/nySGu12Y05k
-
- Used config name : Rank_1_29500MB_8_85_Second_IT.json
-
- Rest are same as above
-
- Grid testings for LoRA
-
- I think epoch 160 is the best - this is subjective
-
- [Dwayne_LoRA_Realism_Test_Part1.jpg](https://huggingface.co/MonsterMMORPG/Model_Training_Experiments_As_A_Baseline/blob/main/Dwayne_LoRA_Realism_Test_Part1.jpg)
-
- [Dwayne_LoRA_Realism_Test_Part2.jpg](https://huggingface.co/MonsterMMORPG/Model_Training_Experiments_As_A_Baseline/blob/main/Dwayne_LoRA_Realism_Test_Part2.jpg)
-
- LoRA 90 Epoch vs LoRA 160 Epoch vs Fine-Tuning / DreamBooth 170 Epoch
-
- [LoRA_90_Epoch_vs_LoRA_160_Epoch_vs_Fine_Tuning_170_Epoch.jpg](https://huggingface.co/MonsterMMORPG/Model_Training_Experiments_As_A_Baseline/blob/main/LoRA_90_Epoch_vs_LoRA_160_Epoch_vs_Fine_Tuning_170_Epoch.jpg)
-
- LoRA does really good job for realism, but when stylized images generated its overfitting way more obvious
-
- Used Kohya GUI : 021c6f5ae3055320a56967284e759620c349aa56
-
- Torch : 2.5.1 , xFormers 0.0.28.post3 : https://www.patreon.com/posts/112099700
-
- ### Model File Name Meanings
-
- Dwayne_Johnson_FLUX_Fine_Tuning-000010.safetensors - 10 epochs FLUX Fine Tuning / DreamBooth training = 28 * 10 = 280 steps - Batch size 1, 1024x1024
-
- Dwayne_Johnson_FLUX_Fine_Tuning-000020.safetensors - 20 epochs FLUX Fine Tuning / DreamBooth training = 28 * 20 = 560 steps - Batch size 1, 1024x1024
-
- Dwayne_Johnson_FLUX_LoRA-000010.safetensors - 10 epochs FLUX LoRA Training = 28 * 10 = 280 steps - Batch size 1, 1024x1024
-
- Dwayne_Johnson_FLUX_LoRA-000010.safetensors - 20 epochs FLUX LoRA Training = 28 * 20 = 560 steps - Batch size 1, 1024x1024
+ # Model Training Experiments: Fine-Tuning vs LoRA Comparison
+
+ This repository contains experimental results comparing Fine-Tuning/DreamBooth and LoRA training approaches.
+
+ ## Additional Resources
+ - [Installers and Config Files](https://www.patreon.com/posts/112099700)
+ - [Fine Tuning Tutorial](https://youtu.be/FvpWy1x5etM)
+ - [LoRA Tutorial](https://youtu.be/nySGu12Y05k)
+ - [Complete Dataset and Testing Prompts](https://www.patreon.com/posts/114972274)
+
+ ## Environment Setup
+
+ - Kohya GUI Version: `021c6f5ae3055320a56967284e759620c349aa56`
+ - Torch: 2.5.1
+ - xFormers: 0.0.28.post3
+
+ ## Dataset Information
+
+ - Resolution: 1024x1024
+ - Dataset Size: 28 images
+ - Captions: "ohwx man" (nothing else)
+ - Activation Token/Trigger Word: "ohwx man"
+
+ ## Fine-Tuning / DreamBooth Experiment
+
+ ### Configuration
+ - Config File: `48GB_GPU_28200MB_6.4_second_it_Tier_1.json`
+ - Training: Up to 200 epochs with consistent config
+ - Optimal Result: Epoch 170 (subjective assessment)
+
+ ### Results
+ - [Realism Test Part 1](https://huggingface.co/MonsterMMORPG/Model_Training_Experiments_As_A_Baseline/blob/main/Dwayne_Fine_Tune_Realism_Test_Part1.jpg)
+ - [Realism Test Part 2](https://huggingface.co/MonsterMMORPG/Model_Training_Experiments_As_A_Baseline/blob/main/Dwayne_Fine_Tune_Realism_Test_Part2.jpg)
+
+ ## LoRA Experiment
+
+ ### Configuration
+ - Config File: `Rank_1_29500MB_8_85_Second_IT.json`
+ - Training: Up to 200 epochs
+ - Optimal Result: Epoch 160 (subjective assessment)
+
+ ### Results
+ - [LoRA Realism Test Part 1](https://huggingface.co/MonsterMMORPG/Model_Training_Experiments_As_A_Baseline/blob/main/Dwayne_LoRA_Realism_Test_Part1.jpg)
+ - [LoRA Realism Test Part 2](https://huggingface.co/MonsterMMORPG/Model_Training_Experiments_As_A_Baseline/blob/main/Dwayne_LoRA_Realism_Test_Part2.jpg)
+
+ ## Comparison Results
+
+ - [LoRA 90 vs 160 vs Fine-Tuning 170 Comparison](https://huggingface.co/MonsterMMORPG/Model_Training_Experiments_As_A_Baseline/blob/main/LoRA_90_Epoch_vs_LoRA_160_Epoch_vs_Fine_Tuning_170_Epoch.jpg)
+
+ ### Key Observations
+ LoRA demonstrates excellent realism but shows more obvious overfitting when generating stylized images.
+
+ ## Model Naming Convention
+
+ ### Fine-Tuning Models
+ - `Dwayne_Johnson_FLUX_Fine_Tuning-000010.safetensors`
+   - 10 epochs
+   - 280 steps (28 images × 10 epochs)
+   - Batch size: 1
+   - Resolution: 1024x1024
+
+ - `Dwayne_Johnson_FLUX_Fine_Tuning-000020.safetensors`
+   - 20 epochs
+   - 560 steps (28 images × 20 epochs)
+   - Batch size: 1
+   - Resolution: 1024x1024
+
+ ### LoRA Models
+ - `Dwayne_Johnson_FLUX_LoRA-000010.safetensors`
+   - 10 epochs
+   - 280 steps (28 images × 10 epochs)
+   - Batch size: 1
+   - Resolution: 1024x1024
+
+ - `Dwayne_Johnson_FLUX_LoRA-000020.safetensors`
+   - 20 epochs
+   - 560 steps (28 images × 20 epochs)
+   - Batch size: 1
+   - Resolution: 1024x1024
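The step counts in the checkpoint names follow directly from dataset size, epoch count, and batch size (steps per epoch = images ÷ batch size, rounded up). A minimal sketch of that arithmetic; the helper name `total_steps` is hypothetical and assumes no gradient accumulation:

```python
import math

def total_steps(num_images: int, epochs: int, batch_size: int = 1) -> int:
    """Total optimizer steps for a run: ceil(images / batch) per epoch, times epochs."""
    steps_per_epoch = math.ceil(num_images / batch_size)
    return steps_per_epoch * epochs

# Matches the naming convention above: 28 images, batch size 1
print(total_steps(28, 10))   # -000010 checkpoint: 280 steps
print(total_steps(28, 20))   # -000020 checkpoint: 560 steps
print(total_steps(28, 170))  # the subjectively best Fine-Tuning epoch
```

With batch size 1 the per-epoch step count simply equals the image count, so each `-0000NN` suffix maps to 28 × NN steps.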