Model Training Experiments: Fine-Tuning vs LoRA Comparison
This repository contains experimental results comparing Fine-Tuning/DreamBooth and LoRA training approaches for FLUX.
Environment Setup
- Kohya GUI Version: 021c6f5ae3055320a56967284e759620c349aa56
- Torch: 2.5.1
- xFormers: 0.0.28.post3
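For reproducibility, it can help to verify the installed versions before training. A minimal sketch, assuming torch and xformers are installed in the active environment:

```python
# Minimal sanity check of the training environment versions.
import torch
import xformers

EXPECTED = {"torch": "2.5.1", "xformers": "0.0.28.post3"}

for name, module in (("torch", torch), ("xformers", xformers)):
    installed = module.__version__
    # torch version strings may carry a build suffix such as "2.5.1+cu124",
    # so compare by prefix rather than exact equality.
    status = "OK" if installed.startswith(EXPECTED[name]) else f"expected {EXPECTED[name]}"
    print(f"{name}: {installed} ({status})")

# CUDA availability matters for Kohya training runs.
print("CUDA available:", torch.cuda.is_available())
```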
Dataset Information
- Resolution: 1024x1024
- Dataset Size: 28 images
- Captions: "ohwx man" only, with no additional caption text (the caption files can be generated in one pass; see the sketch below)
- Activation Token/Trigger Word: "ohwx man"
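Because every image shares the identical caption, writing the per-image .txt files Kohya reads is a one-liner loop. A minimal sketch, assuming the 28 training images live in a hypothetical `dataset/` folder:

```python
# Write an "ohwx man" caption .txt next to every training image.
# Assumes a hypothetical dataset/ folder; Kohya pairs image.ext with image.txt.
from pathlib import Path

CAPTION = "ohwx man"
IMAGE_EXTS = {".png", ".jpg", ".jpeg", ".webp"}

dataset_dir = Path("dataset")  # hypothetical path; adjust to your layout
for image_path in sorted(dataset_dir.iterdir()):
    if image_path.suffix.lower() in IMAGE_EXTS:
        image_path.with_suffix(".txt").write_text(CAPTION, encoding="utf-8")
        print(f"captioned {image_path.name}")
```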
Fine-Tuning / DreamBooth Experiment
Configuration
- Config File: 48GB_GPU_28200MB_6.4_second_it_Tier_1.json
- Training: Up to 200 epochs with a consistent config
- Optimal Result: Epoch 170 (subjective assessment)
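Both runs are driven by plain Kohya JSON config files, so the hyperparameters can be inspected directly. A minimal sketch, assuming the config file sits in the working directory (the key names below are illustrative and vary across Kohya versions):

```python
# Inspect a Kohya training config JSON (assumed to be in the working directory).
import json
from pathlib import Path

config = json.loads(Path("48GB_GPU_28200MB_6.4_second_it_Tier_1.json").read_text())

# Key names differ across Kohya GUI versions; these are illustrative guesses.
for key in ("learning_rate", "train_batch_size", "epoch", "optimizer"):
    print(f"{key} = {config.get(key, '<not present in this config>')}")
```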
Results
LoRA Experiment
Configuration
- Config File: Rank_1_29500MB_8_85_Second_IT.json
- Training: Up to 200 epochs
- Optimal Result: Epoch 160 (subjective assessment)
Results
Comparison Results
Key Observations
LoRA demonstrates excellent realism but shows more obvious overfitting than the fine-tuned model when generating stylized images.
Model Naming Convention
The six-digit suffix in each checkpoint filename encodes the epoch count; total steps equal dataset size × epochs at batch size 1 (a small step calculator is sketched after the lists below).
Fine-Tuning Models
Dwayne_Johnson_FLUX_Fine_Tuning-000010.safetensors (10 epochs)
- 280 steps (28 images × 10 epochs)
- Batch size: 1
- Resolution: 1024x1024
Dwayne_Johnson_FLUX_Fine_Tuning-000020.safetensors (20 epochs)
- 560 steps (28 images × 20 epochs)
- Batch size: 1
- Resolution: 1024x1024
LoRA Models
Dwayne_Johnson_FLUX_LoRA-000010.safetensors (10 epochs)
- 280 steps (28 images × 10 epochs)
- Batch size: 1
- Resolution: 1024x1024
Dwayne_Johnson_FLUX_LoRA-000020.safetensors (20 epochs)
- 560 steps (28 images × 20 epochs)
- Batch size: 1
- Resolution: 1024x1024
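The per-checkpoint numbers above follow directly from the dataset size and batch size. A hypothetical helper (not part of the repository) that recovers the epoch count from the six-digit filename suffix and recomputes the step totals:

```python
# Recover the epoch count from a checkpoint filename suffix (-000010 -> 10)
# and recompute total steps: steps = (images / batch_size) * epochs.
import re

DATASET_SIZE = 28  # images
BATCH_SIZE = 1

def steps_from_checkpoint(filename: str) -> tuple[int, int]:
    match = re.search(r"-(\d{6})\.safetensors$", filename)
    if match is None:
        raise ValueError(f"no epoch suffix in {filename!r}")
    epochs = int(match.group(1))
    steps = (DATASET_SIZE // BATCH_SIZE) * epochs
    return epochs, steps

for name in ("Dwayne_Johnson_FLUX_Fine_Tuning-000010.safetensors",
             "Dwayne_Johnson_FLUX_LoRA-000020.safetensors"):
    epochs, steps = steps_from_checkpoint(name)
    print(f"{name}: {epochs} epochs, {steps} steps")
# -> 10 epochs, 280 steps; 20 epochs, 560 steps
```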