Model card auto-generated by SimpleTuner
README.md
CHANGED

@@ -20,7 +20,7 @@ widget:
     negative_prompt: 'blurry, cropped, ugly'
   output:
     url: ./assets/image_0_0.png
-- text: 'A
+- text: 'A 2D vfx of flame effect in red and yellow, glazing against black background'
   parameters:
     negative_prompt: 'blurry, cropped, ugly'
   output:
@@ -33,7 +33,7 @@ This is a standard PEFT LoRA derived from [black-forest-labs/FLUX.1-dev](https:/
 
 The main validation prompt used during training was:
 ```
-A
+A 2D vfx of flame effect in red and yellow, glazing against black background
 ```
 
@@ -59,8 +59,8 @@ You may reuse the base model text encoder for inference.
 
 ## Training settings
 
-- Training epochs:
+- Training epochs: 0
-- Training steps:
+- Training steps: 1000
 - Learning rate: 0.0001
 - Learning rate schedule: polynomial
 - Warmup steps: 100
@@ -85,9 +85,9 @@ You may reuse the base model text encoder for inference.
 
 ## Datasets
 
-###
+### mapledata_2D
 - Repeats: 5
-- Total number of images: ~
+- Total number of images: ~2208
 - Total number of aspect buckets: 1
 - Resolution: 1.0 megapixels
 - Cropped: False
@@ -108,7 +108,7 @@ adapter_id = 'binarydaddy/simpletuner-lora'
 pipeline = DiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.bfloat16) # loading directly in bf16
 pipeline.load_lora_weights(adapter_id)
 
-prompt = "A
+prompt = "A 2D vfx of flame effect in red and yellow, glazing against black background"
 
 ## Optional: quantise the model to save on vram.