Model card auto-generated by SimpleTuner
README.md
CHANGED
```diff
@@ -55,16 +55,16 @@ You may reuse the base model text encoder for inference.
 ## Training settings
 
-- Training epochs:
-- Training steps:
+- Training epochs: 1
+- Training steps: 50
 - Learning rate: 0.0001
 - Learning rate schedule: constant
 - Warmup steps: 500
 - Max grad value: 2.0
-- Effective batch size:
+- Effective batch size: 1
 - Micro-batch size: 1
 - Gradient accumulation steps: 1
-- Number of GPUs:
+- Number of GPUs: 1
 - Gradient checkpointing: True
 - Prediction type: flow_matching (extra parameters=['shift=3.0', 'flux_guidance_mode=constant', 'flux_guidance_value=1.0', 'flux_lora_target=controlnet'])
 - Optimizer: adamw_bf16
@@ -83,7 +83,7 @@ You may reuse the base model text encoder for inference.
 ### antelope-data-256
 - Repeats: 0
-- Total number of images:
+- Total number of images: 29
 - Total number of aspect buckets: 1
 - Resolution: 0.065536 megapixels
 - Cropped: True
```