Commit fe3776c by quzo (verified; parent: c2604ee)

Model card auto-generated by SimpleTuner

Files changed (1): README.md added (+216 lines)
---
license: other
base_model: "black-forest-labs/FLUX.1-dev"
tags:
- flux
- flux-diffusers
- text-to-image
- image-to-image
- diffusers
- simpletuner
- safe-for-work
- lora
- template:sd-lora
- standard
pipeline_tag: text-to-image
inference: true
widget:
- text: 'unconditional (blank prompt)'
  parameters:
    negative_prompt: 'blurry, cropped, ugly'
  output:
    url: ./assets/image_0_0.png
- text: 'photo of @w4h'
  parameters:
    negative_prompt: 'blurry, cropped, ugly'
  output:
    url: ./assets/image_1_0.png
---

# iwatch3

This is a standard PEFT LoRA derived from [black-forest-labs/FLUX.1-dev](https://huggingface.co/black-forest-labs/FLUX.1-dev).

The main validation prompt used during training was:

```
photo of @w4h
```

## Validation settings

- CFG: `3.0`
- CFG Rescale: `0.0`
- Steps: `20`
- Sampler: `FlowMatchEulerDiscreteScheduler`
- Seed: `42`
- Resolution: `1024x1024`
- Skip-layer guidance: none

Note: The validation settings are not necessarily the same as the [training settings](#training-settings).

You can find some example images in the following gallery:

<Gallery />

The text encoder **was not** trained, so you may reuse the base model's text encoder for inference.

## Training settings

- Training epochs: 0
- Training steps: 250
- Learning rate: 8e-05
- Learning rate schedule: polynomial
- Warmup steps: 100
- Max grad value: 2.0
- Effective batch size: 2
- Micro-batch size: 2
- Gradient accumulation steps: 1
- Number of GPUs: 1
- Gradient checkpointing: True
- Prediction type: flow_matching (extra parameters: `shift=3`, `flux_guidance_mode=constant`, `flux_guidance_value=1.0`, `flux_lora_target=all`; see the sketch after this list)
- Optimizer: adamw_bf16
- Trainable parameter precision: Pure BF16
- Base model precision: `no_change`
- Caption dropout probability: 0.05%
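
Since `flow_matching` may be unfamiliar, here is a conceptual sketch of the rectified-flow velocity objective this prediction type refers to. It is not SimpleTuner's actual implementation: the `model(xt, t, cond)` signature is a placeholder, and only the `shift=3` timestep warp mirrors the extra parameters above.

```python
import torch
import torch.nn.functional as F

def flow_matching_loss(model, x0, cond, shift=3.0):
    """Conceptual rectified-flow loss; `model` is a placeholder, not Flux's real API."""
    noise = torch.randn_like(x0)
    t = torch.rand(x0.shape[0], device=x0.device)   # timestep in (0, 1)
    t = shift * t / (1.0 + (shift - 1.0) * t)       # shift=3 biases sampling toward noisier steps
    t_ = t.view(-1, *([1] * (x0.dim() - 1)))        # broadcast over latent dims
    xt = (1.0 - t_) * x0 + t_ * noise               # interpolate data -> noise
    v_target = noise - x0                           # velocity the network is trained to predict
    v_pred = model(xt, t, cond)
    return F.mse_loss(v_pred, v_target)
```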

LoRA-specific settings:

- LoRA Rank: 64
- LoRA Alpha: None
- LoRA Dropout: 0.1
- LoRA initialisation style: default
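
As a rough PEFT equivalent of these hyperparameters (a sketch, not SimpleTuner's internals): `lora_alpha` is assumed to equal the rank when reported as `None`, and `target_modules` is an illustrative subset, since `flux_lora_target=all` covers every linear projection in the transformer.

```python
from peft import LoraConfig

# Sketch of an equivalent PEFT config; see the caveats above.
lora_config = LoraConfig(
    r=64,                    # LoRA Rank: 64
    lora_alpha=64,           # reported as None; assumed to fall back to the rank
    lora_dropout=0.1,        # LoRA Dropout: 0.1
    init_lora_weights=True,  # "default" initialisation style
    target_modules=["to_q", "to_k", "to_v", "to_out.0"],  # illustrative subset of "all"
)
```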

## Datasets

Each dataset contains 10 images, uses 10 repeats and a single aspect bucket, and is not used for regularisation data. The ten datasets cover five area budgets, from 0.065536 megapixels (256²) to 2.0736 megapixels (1440²), each in an uncropped variant and a center-cropped square variant:

| Dataset | Resolution (megapixels) | Cropped | Crop style | Crop aspect |
|---|---|---|---|---|
| iwatch-256 | 0.065536 | No | none | none |
| iwatch-crop-256 | 0.065536 | Yes | center | square |
| iwatch-512 | 0.262144 | No | none | none |
| iwatch-crop-512 | 0.262144 | Yes | center | square |
| iwatch-768 | 0.589824 | No | none | none |
| iwatch-crop-768 | 0.589824 | Yes | center | square |
| iwatch-1024 | 1.048576 | No | none | none |
| iwatch-crop-1024 | 1.048576 | Yes | center | square |
| iwatch-1440 | 2.0736 | No | none | none |
| iwatch-crop-1440 | 2.0736 | Yes | center | square |
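
For reference, each dataset above corresponds to one entry in SimpleTuner's `multidatabackend.json` dataloader config. Below is a sketch of what the `iwatch-crop-1024` entry might look like, assuming SimpleTuner's documented dataloader keys; the data directory path is hypothetical:

```json
{
  "id": "iwatch-crop-1024",
  "type": "local",
  "instance_data_dir": "/path/to/iwatch",
  "repeats": 10,
  "resolution": 1.0,
  "resolution_type": "area",
  "crop": true,
  "crop_style": "center",
  "crop_aspect": "square"
}
```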

## Inference

```python
import torch
from diffusers import DiffusionPipeline

model_id = 'black-forest-labs/FLUX.1-dev'
adapter_id = 'quzo/iwatch3'
pipeline = DiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.bfloat16)  # load the base model directly in bf16
pipeline.load_lora_weights(adapter_id)

prompt = "photo of @w4h"

# Optional: quantise the model to save on VRAM.
# Note: the model was not quantised during training, so quantisation is not required at inference time.
# from optimum.quanto import quantize, freeze, qint8
# quantize(pipeline.transformer, weights=qint8)
# freeze(pipeline.transformer)

device = 'cuda' if torch.cuda.is_available() else 'mps' if torch.backends.mps.is_available() else 'cpu'
pipeline.to(device)  # the pipeline is already in its target precision level

model_output = pipeline(
    prompt=prompt,
    num_inference_steps=20,
    generator=torch.Generator(device=device).manual_seed(42),
    width=1024,
    height=1024,
    guidance_scale=3.0,
).images[0]

model_output.save("output.png", format="PNG")
```
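
If the LoRA's effect is too strong or too weak, diffusers also lets you fuse the adapter into the base weights at a chosen scale; the value 0.8 below is an arbitrary example, and `pipeline.unfuse_lora()` reverts the fusion:

```python
# Optional: bake the adapter into the base weights at reduced strength.
pipeline.fuse_lora(lora_scale=0.8)
```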