Trained for 130 epochs and 3000 steps.
Trained with datasets ['text-embeds', 'bartek-mizak-flux'].
Learning rate 0.0001, batch size 2, and 1 gradient accumulation step.
Used the DDPM noise scheduler for training with the epsilon prediction type, rescaled_betas_zero_snr=False, and 'trailing' timestep spacing.
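The scheduler settings above can be sketched in a few lines. This is a minimal illustration of the standard DDPM forward process with an epsilon prediction target and of 'trailing' timestep spacing as implemented in diffusers schedulers; the linear beta schedule here is an assumption for illustration, not the exact betas used in this run.

```python
import numpy as np

T = 1000
betas = np.linspace(1e-4, 0.02, T)    # illustrative linear beta schedule (assumption)
alphas_bar = np.cumprod(1.0 - betas)  # cumulative product of (1 - beta_t)

def add_noise(x0, eps, t):
    """DDPM forward process q(x_t | x_0): the network's training target
    under epsilon prediction is the noise `eps` injected here."""
    return np.sqrt(alphas_bar[t]) * x0 + np.sqrt(1.0 - alphas_bar[t]) * eps

def trailing_timesteps(num_inference_steps, num_train_timesteps=T):
    """'trailing' spacing: step back from the final training timestep
    in equal strides, so the last timestep is always included."""
    step = num_train_timesteps / num_inference_steps
    return np.round(np.arange(num_train_timesteps, 0, -step)).astype(np.int64) - 1

x0 = np.random.randn(4)
eps = np.random.randn(4)
xt = add_noise(x0, eps, t=500)
print(trailing_timesteps(4))  # [999 749 499 249]
```

With 'trailing' spacing the schedule always ends at timestep 999, unlike 'leading' spacing, which starts counting from 0 and can skip the final, highest-noise timestep.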
Base model: black-forest-labs/FLUX.1-dev
VAE: None
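A hedged usage sketch for loading these LoRA weights on top of the base model with diffusers. The repository id "your-username/bartek-mizak-flux-lora" is a placeholder for wherever pytorch_lora_weights.safetensors is hosted, and the prompt is illustrative; running this requires downloading the full FLUX.1-dev checkpoint and a capable GPU.

```python
import torch
from diffusers import FluxPipeline

# Base model from the card above; bfloat16 keeps memory usage manageable.
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)
# Placeholder repo id: point this at the repo containing
# pytorch_lora_weights.safetensors.
pipe.load_lora_weights("your-username/bartek-mizak-flux-lora")
pipe.to("cuda")

image = pipe(
    "an example prompt",  # illustrative prompt, not from the training set
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("out.png")
```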
Weights file: pytorch_lora_weights.safetensors (718117968 bytes, sha256 6dcf8076b7f125f1a469c226b882b1efe07a6f2a0ed4a1fa26d0dc265c2ebfa8).