Model card auto-generated by SimpleTuner
README.md CHANGED

```diff
@@ -1,6 +1,6 @@
 ---
 license: other
-base_model: "black-forest-labs/FLUX.1-
+base_model: "black-forest-labs/FLUX.1-dev"
 tags:
 - flux
 - flux-diffusers
@@ -25,7 +25,7 @@ widget:
 
 # lora-training
 
-This is a standard PEFT LoRA derived from [black-forest-labs/FLUX.1-
+This is a standard PEFT LoRA derived from [black-forest-labs/FLUX.1-dev](https://huggingface.co/black-forest-labs/FLUX.1-dev).
 
 
 The main validation prompt used during training was:
@@ -58,7 +58,7 @@ You may reuse the base model text encoder for inference.
 ## Training settings
 
 - Training epochs: 0
-- Training steps:
+- Training steps: 100
 - Learning rate: 0.0008
 - Effective batch size: 1
 - Micro-batch size: 1
@@ -70,7 +70,7 @@ You may reuse the base model text encoder for inference.
 - Precision: bf16
 - Quantised: Yes: int8-quanto
 - Xformers: Not used
-- LoRA Rank:
+- LoRA Rank: 8
 - LoRA Alpha: None
 - LoRA Dropout: 0.1
 - LoRA initialisation style: default
@@ -80,7 +80,7 @@ You may reuse the base model text encoder for inference.
 
 ### right-triangles
 - Repeats: 0
-- Total number of images:
+- Total number of images: 380
 - Total number of aspect buckets: 1
 - Resolution: 512 px
 - Cropped: True
@@ -95,7 +95,7 @@ You may reuse the base model text encoder for inference.
 import torch
 from diffusers import DiffusionPipeline
 
-model_id = 'black-forest-labs/FLUX.1-
+model_id = 'black-forest-labs/FLUX.1-dev'
 adapter_id = 'Mujeeb603/lora-training'
 pipeline = DiffusionPipeline.from_pretrained(model_id)
 pipeline.load_lora_weights(adapter_id)
```
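The updated usage snippet loads the base model and adapter but stops before generating an image. Below is a minimal end-to-end sketch that completes it, assuming a CUDA device, bf16 weights (the card's training precision), and a placeholder prompt; the card's actual validation prompt is not visible in these hunks.

```python
import torch
from diffusers import DiffusionPipeline

model_id = 'black-forest-labs/FLUX.1-dev'
adapter_id = 'Mujeeb603/lora-training'

# bf16 matches the card's training precision; the CUDA device is an assumption.
pipeline = DiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.bfloat16)
pipeline.load_lora_weights(adapter_id)
pipeline.to('cuda')

# Placeholder prompt: the card's validation prompt is not shown in this diff.
image = pipeline(
    prompt='a right triangle on a white background',
    num_inference_steps=28,  # assumption: a common step count for FLUX.1-dev
    guidance_scale=3.5,      # assumption: the usual FLUX.1-dev default
    width=512,               # matches the dataset resolution of 512 px
    height=512,
).images[0]
image.save('output.png')
```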
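For readers reproducing a comparable adapter outside SimpleTuner, the LoRA hyperparameters fixed in this commit (rank 8, alpha None, dropout 0.1, default init) map onto a peft LoraConfig roughly as sketched below. The lora_alpha value and target_modules list are assumptions: the card reports "LoRA Alpha: None" and does not name the targeted layers.

```python
from peft import LoraConfig

# A sketch of the card's LoRA settings as a peft config, not the exact
# SimpleTuner configuration.
lora_config = LoraConfig(
    r=8,                     # LoRA Rank: 8
    lora_alpha=8,            # assumption: card says "LoRA Alpha: None"; alpha == rank is a common fallback
    lora_dropout=0.1,        # LoRA Dropout: 0.1
    init_lora_weights=True,  # "default" initialisation style
    # Assumption: typical attention projections; the card does not list
    # the targeted modules.
    target_modules=["to_q", "to_k", "to_v", "to_out.0"],
)
```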