Commit 85ca4e5 (verified) · 0 Parent(s)
Duplicate from CabalResearch/NoobAI-Flux2VAE-RectifiedFlow-0.3
Co-authored-by: Anzhc <Anzhc@users.noreply.huggingface.co>

Files changed:
- .gitattributes +35 -0
- NoobAI Flux2VAE RF Basic Workflow.json +457 -0
- NoobAI Flux2VAE RF v0.3 - Aesthetic.safetensors +3 -0
- NoobAI Flux2VAE RF v0.3 - Base.safetensors +3 -0
- README.md +259 -0
.gitattributes
ADDED
@@ -0,0 +1,35 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
NoobAI Flux2VAE RF Basic Workflow.json
ADDED
@@ -0,0 +1,457 @@
{
  "id": "a4a1f988-221c-41e4-aaf1-674cec119bc1",
  "revision": 0,
  "last_node_id": 14,
  "last_link_id": 15,
  "nodes": [
    {
      "id": 6,
      "type": "VAEDecodeTiled",
      "pos": [
        900,
        240
      ],
      "size": [
        220,
        160
      ],
      "flags": {},
      "order": 6,
      "mode": 0,
      "inputs": [
        {
          "name": "samples",
          "type": "LATENT",
          "link": 2
        },
        {
          "name": "vae",
          "type": "VAE",
          "link": 15
        }
      ],
      "outputs": [
        {
          "name": "IMAGE",
          "type": "IMAGE",
          "slot_index": 0,
          "links": [
            8
          ]
        }
      ],
      "properties": {
        "cnr_id": "comfy-core",
        "ver": "0.3.26",
        "Node name for S&R": "VAEDecodeTiled"
      },
      "widgets_values": [
        512,
        64,
        64,
        8
      ],
      "shape": 1
    },
    {
      "id": 11,
      "type": "PreviewImage",
      "pos": [
        1130,
        240
      ],
      "size": [
        460,
        700
      ],
      "flags": {},
      "order": 7,
      "mode": 0,
      "inputs": [
        {
          "name": "images",
          "type": "IMAGE",
          "link": 8
        }
      ],
      "outputs": [],
      "title": "Image Preview",
      "properties": {
        "cnr_id": "comfy-core",
        "ver": "0.3.26",
        "Node name for S&R": "Tuned"
      },
      "widgets_values": [],
      "color": "#223",
      "bgcolor": "#335",
      "shape": 1
    },
    {
      "id": 3,
      "type": "KSampler",
      "pos": [
        650,
        240
      ],
      "size": [
        240,
        474
      ],
      "flags": {},
      "order": 5,
      "mode": 0,
      "inputs": [
        {
          "name": "model",
          "type": "MODEL",
          "link": 13
        },
        {
          "name": "positive",
          "type": "CONDITIONING",
          "link": 10
        },
        {
          "name": "negative",
          "type": "CONDITIONING",
          "link": 11
        },
        {
          "name": "latent_image",
          "type": "LATENT",
          "link": 14
        }
      ],
      "outputs": [
        {
          "name": "LATENT",
          "type": "LATENT",
          "slot_index": 0,
          "links": [
            2
          ]
        }
      ],
      "properties": {
        "cnr_id": "comfy-core",
        "ver": "0.3.26",
        "Node name for S&R": "KSampler"
      },
      "widgets_values": [
        236456494,
        "fixed",
        24,
        7,
        "euler",
        "normal",
        1
      ],
      "shape": 1
    },
    {
      "id": 9,
      "type": "CLIPTextEncode",
      "pos": [
        10,
        540
      ],
      "size": [
        410,
        110
      ],
      "flags": {},
      "order": 4,
      "mode": 0,
      "inputs": [
        {
          "name": "clip",
          "type": "CLIP",
          "link": 6
        }
      ],
      "outputs": [
        {
          "name": "CONDITIONING",
          "type": "CONDITIONING",
          "slot_index": 0,
          "links": [
            11
          ]
        }
      ],
      "properties": {
        "cnr_id": "comfy-core",
        "ver": "0.3.26",
        "Node name for S&R": "CLIPTextEncode"
      },
      "widgets_values": [
        "worst quality, normal quality, bad anatomy"
      ],
      "shape": 1
    },
    {
      "id": 10,
      "type": "CLIPTextEncode",
      "pos": [
        10,
        380
      ],
      "size": [
        410,
        120
      ],
      "flags": {},
      "order": 3,
      "mode": 0,
      "inputs": [
        {
          "name": "clip",
          "type": "CLIP",
          "link": 5
        }
      ],
      "outputs": [
        {
          "name": "CONDITIONING",
          "type": "CONDITIONING",
          "slot_index": 0,
          "links": [
            10
          ]
        }
      ],
      "properties": {
        "cnr_id": "comfy-core",
        "ver": "0.3.26",
        "Node name for S&R": "CLIPTextEncode"
      },
      "widgets_values": [
        "masterpiece, best quality, 1girl, upper body"
      ],
      "shape": 1
    },
    {
      "id": 1,
      "type": "CheckpointLoaderSimple",
      "pos": [
        10,
        240
      ],
      "size": [
        410,
        98
      ],
      "flags": {},
      "order": 0,
      "mode": 0,
      "inputs": [],
      "outputs": [
        {
          "name": "MODEL",
          "type": "MODEL",
          "links": [
            12
          ]
        },
        {
          "name": "CLIP",
          "type": "CLIP",
          "links": [
            5,
            6
          ]
        },
        {
          "name": "VAE",
          "type": "VAE",
          "links": [
            15
          ]
        }
      ],
      "properties": {
        "cnr_id": "comfy-core",
        "ver": "0.3.51",
        "Node name for S&R": "CheckpointLoaderSimple"
      },
      "widgets_values": [
        "NoobAI-RF-FLUX2VAE-v0.3-resumed-6e-5-000002.safetensors"
      ],
      "shape": 1
    },
    {
      "id": 2,
      "type": "ModelSamplingSD3",
      "pos": [
        430,
        240
      ],
      "size": [
        210,
        58
      ],
      "flags": {},
      "order": 2,
      "mode": 0,
      "inputs": [
        {
          "name": "model",
          "type": "MODEL",
          "link": 12
        }
      ],
      "outputs": [
        {
          "name": "MODEL",
          "type": "MODEL",
          "links": [
            13
          ]
        }
      ],
      "properties": {
        "cnr_id": "comfy-core",
        "ver": "0.3.51",
        "Node name for S&R": "ModelSamplingSD3"
      },
      "widgets_values": [
        12
      ],
      "shape": 1
    },
    {
      "id": 14,
      "type": "EmptySDXLFlux2LatentImage",
      "pos": [
        430,
        340
      ],
      "size": [
        210,
        106
      ],
      "flags": {},
      "order": 1,
      "mode": 0,
      "inputs": [],
      "outputs": [
        {
          "name": "LATENT",
          "type": "LATENT",
          "links": [
            14
          ]
        }
      ],
      "properties": {
        "Node name for S&R": "EmptySDXLFlux2LatentImage"
      },
      "widgets_values": [
        832,
        1216,
        1
      ],
      "shape": 1
    }
  ],
  "links": [
    [
      2,
      3,
      0,
      6,
      0,
      "LATENT"
    ],
    [
      5,
      1,
      1,
      10,
      0,
      "CLIP"
    ],
    [
      6,
      1,
      1,
      9,
      0,
      "CLIP"
    ],
    [
      8,
      6,
      0,
      11,
      0,
      "IMAGE"
    ],
    [
      10,
      10,
      0,
      3,
      1,
      "CONDITIONING"
    ],
    [
      11,
      9,
      0,
      3,
      2,
      "CONDITIONING"
    ],
    [
      12,
      1,
      0,
      2,
      0,
      "MODEL"
    ],
    [
      13,
      2,
      0,
      3,
      0,
      "MODEL"
    ],
    [
      14,
      14,
      0,
      3,
      3,
      "LATENT"
    ],
    [
      15,
      1,
      2,
      6,
      1,
      "VAE"
    ]
  ],
  "groups": [],
  "config": {},
  "extra": {
    "ds": {
      "scale": 0.9229599817706534,
      "offset": [
        541.4328725112438,
        286.4892025504186
      ]
    },
    "frontendVersion": "1.32.9",
    "VHS_latentpreview": false,
    "VHS_latentpreviewrate": 0,
    "VHS_MetadataImage": true,
    "VHS_KeepIntermediate": true,
    "workflowRendererVersion": "LG"
  },
  "version": 0.4
}
NoobAI Flux2VAE RF v0.3 - Aesthetic.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:b40460acab7c8dc79eb9ef7c3ea41a3886c92c390e39bb744cf542aa33a9829c
size 6939151202
NoobAI Flux2VAE RF v0.3 - Base.safetensors
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:00b3c576917dfc8bf38743179311761533c33097593203a3713cde973ccb02e2
size 6939151234
README.md
ADDED
@@ -0,0 +1,259 @@
---
license: other
license_name: fair-ai-public-license-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
base_model:
- CabalResearch/NoobAI-Flux2VAE-RectifiedFlow
library_name: diffusers
---
## Model Details

A continuation of the [NoobAI Flux2 VAE experiment](https://huggingface.co/CabalResearch/NoobAI-Flux2VAE-RectifiedFlow).

More info on supporting us: [click me](https://huggingface.co/CabalResearch/NoobAI-Flux2VAE-RectifiedFlow-0.3#potential-future)

### Model Description

Resumed for 4 more epochs, and the model has shown a nice improvement. We observe good convergence toward new details that were hard to achieve on the prior architecture. Compositions and stability are strongly improved relative to Epoch 2, as is downstream trainability (e.g. LoRAs).

The current state is usable for normal generations, so we encourage you to try it. We provide an [easy node for ComfyUI](https://github.com/Anzhc/SDXL-Flux2VAE-ComfyUI-Node), as well as a basic workflow. If you are an A1111 user, please use [ReForge](https://github.com/Panchovix/stable-diffusion-webui-reForge), which has native support; instructions are below.




Once again, we are working with limited compute, but we are quite happy with the result so far and hope to continue working on the model.

- **Developed by:** Cabal Research (Bluvoll, Anzhc)
- **Funded by:** Community
- **License:** [fair-ai-public-license-1.0-sd](https://freedevproject.org/faipl-1.0-sd/)
- **Finetuned from model:** [NoobAI Flux2 VAE experiment](https://huggingface.co/CabalResearch/NoobAI-Flux2VAE-RectifiedFlow)

## Bias and Limitations

While we are seeing a new level of detail, it is still too early to call it a day. Complex intersections, extremely small details, and abstract or wide shots still pose a significant challenge to the model and can result in noise-like patterns, but we see steady progress in resolving that noise across epochs.

Most biases of the official dataset will apply (Blue Archive, etc.).

We have yet to reach steady composition and anatomy, but good LoRAs help drastically with this at the current stage.

## Model Output Examples
























P.S. We are pretty bad at generating images; on Epoch 2 we saw quite a few examples of much better generations than what we've shown, and we wonder if that will be the case this time as well.

# Recommendations

### Inference

#### Comfy



(The workflow is available alongside the model in this repo.)
We provide a node, and hope it will eventually be adopted natively in the main repo:
**https://github.com/Anzhc/SDXL-Flux2VAE-ComfyUI-Node**

It seems you don't need to use the node itself; the patch is applied without it.


Apparently this works in [SwarmUI](https://github.com/mcmonkeyprojects/SwarmUI) as-is.

Same as your normal inference, but with the addition of the SD3 sampling node, as this model is flow-based.

Recommended Parameters:
**Sampler**: Euler, Euler A, DPM++ SDE, etc.
**Steps**: 20-28
**CFG**: 6-9
**Shift**: 3-12
**Schedule**: Normal/Simple/SGM Uniform/Quadratic
**Positive Quality Tags**: `masterpiece, best quality`
**Negative Tags**: `worst quality, normal quality, bad anatomy`

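For context on the Shift parameter: the ModelSamplingSD3 node applies the SD3-style timestep shift used by flow models. As a rough illustration (a sketch of the commonly cited formula, not code taken from ComfyUI), shifting remaps uniformly spaced flow timesteps toward the high-noise end:

```python
def shift_timestep(t: float, shift: float) -> float:
    """SD3-style timestep shift: remaps t in [0, 1] toward the high-noise end."""
    return shift * t / (1.0 + (shift - 1.0) * t)

# Uniformly spaced timesteps before and after a shift of 3 (illustrative values):
steps = [i / 4 for i in range(5)]                          # [0.0, 0.25, 0.5, 0.75, 1.0]
shifted = [round(shift_timestep(t, 3.0), 3) for t in steps]  # [0.0, 0.5, 0.75, 0.9, 1.0]
```

A higher shift spends more of the step budget on early, high-noise denoising, which is why a fairly wide 3-12 range can still produce coherent images.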
#### A1111 WebUI

(All screenshots repeat our RF release, as there is no difference in setup.)


Recommended WebUI: [ReForge](https://github.com/Panchovix/stable-diffusion-webui-reForge) - it has native support for flow models, and we've PR'd native support for our Flux2VAE-based SDXL modification.


**How to use in ReForge**:


(Ignore the Sigma max field at the top; it is not used in RF.)

Support for RF in ReForge is implemented through a built-in extension:




Set the parameters as shown, and you're good to go.

Flux2VAE does not currently have a proper high-quality preview method; please use the Approx Cheap option, which shows a simple PCA projection (ReForge).

Recommended Parameters:
**Sampler**: Euler A Comfy RF, Euler A2, Euler, DPM++ SDE Comfy, etc. **ALL VARIANTS MUST BE RF OR COMFY, IF AVAILABLE. In ComfyUI routing is automatic, but not in the WebUI.**
**Steps**: 20-28
**CFG**: 6-9
**Shift**: 3-12
**Schedule**: Normal/Simple/SGM Uniform
**Positive Quality Tags**: `masterpiece, best quality`
**Negative Tags**: `worst quality, normal quality, bad anatomy`

**ADETAILER FIX FOR RF**:
By default, Adetailer discards the Advanced Model Sampling extension, which breaks RF. You need to add AMS to this part of the settings:



Add `advanced_model_sampling_script,advanced_model_sampling_script_backported` there.

If that does not work, go into the Adetailer extension, find args.py, open it, and replace `_builtin_script` like this:



Here is a snippet for easy copying:
```python
_builtin_script = (
    "advanced_model_sampling_script",
    "advanced_model_sampling_script_backported",
    "hypertile_script",
    "soft_inpainting",
)
```

Or use my fork of Adetailer: https://github.com/Anzhc/aadetailer-reforge

## Training

### Model Composition
(Relative to the base it's trained from)

Unet: Same
CLIP L: Same, Frozen
CLIP G: Same, Frozen
VAE: [Flux2 VAE](https://huggingface.co/black-forest-labs/FLUX.2-dev/tree/main/vae)


### Training Details
(Main Stage Training)

**Samples seen** (unbatched steps): ~50 million
**Learning Rate**: 6e-5 (General Training) and 3e-5 (Aesthetic)
**Effective Batch Size**: ~1400 (86x8 batch size, accumulation 2)
**Precision**: Mixed BF16
**Optimizer**: AdamW8bit with Kahan summation
**Weight Decay**: 0.01
**Schedule**: Constant with warmup
**Timestep Sampling Strategy**: Logit-Normal -0.2 1.5 (sometimes referred to as Lognorm), Shift 2.5
**Text Encoders**: Frozen
**Keep Token**: False
**Tag Dropout**: 10%
**Uncond Dropout**: 10%
**Shuffle**: True

**VAE Conv Padding**: False
**VAE Shift**: 0.0760
**VAE Scale**: 0.6043

**Additional Features used**: Protected Tags, Cosine Optimal Transport.

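To make two of the less self-explanatory rows above concrete, here is a minimal sketch of logit-normal timestep sampling with shift, and of the listed VAE shift/scale normalization. This illustrates the standard formulas only; it is not code from the team's sd-scripts fork.

```python
import math
import random

def sample_timestep(mean: float = -0.2, std: float = 1.5, shift: float = 2.5) -> float:
    """Logit-normal timestep sampling: sigmoid of a normal draw, then SD3-style shift."""
    t = 1.0 / (1.0 + math.exp(-(mean + std * random.gauss(0.0, 1.0))))
    return shift * t / (1.0 + (shift - 1.0) * t)

def normalize_latent(z: float, shift: float = 0.0760, scale: float = 0.6043) -> float:
    """Map a raw VAE latent value into the normalized space used for training."""
    return (z - shift) * scale
```

The logit-normal draw concentrates training timesteps around the middle of the schedule, and the shift then biases them toward higher noise levels.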
#### Training Data

6 epochs of the original NoobAI dataset, including images up to October 2024, minus the screencap data (it was not shared).

### LoRA Training

The current state of the model provides adequate trainability, but expect to train for a bit longer than usual, as we are still undertrained.
My current style training settings (Anzhc):

**Learning Rate**: tested up to **7.5e-4**
**Batch Size**: 144 (6 real * 24 accum), using SGA (Stochastic Gradient Accumulation) - without SGA I would probably lower accumulation to 4-8.
**Optimizer**: AdamW8bit with Kahan summation
**Schedule**: ReREX (use REX for simplicity, or cosine annealing)
**Precision**: Full BF16
**Weight Decay**: 0.02
**Timestep Sampling Strategy**: Logit-Normal (either 0.0 1.0 or -0.2 1.5), Shift 2.5-4.5

**Dim/Alpha/Conv/Alpha**: 24/24/24/24 (Lycoris/Locon)

**Text Encoders**: Frozen

**Optimal Transport**: True

**Expected Dataset Size**: 100-200 images (can be as few as 10, but balance with repeats to roughly this target.)
**Epochs**: 50

Concepts seem to train at a speed similar to prior NoobAI models, but we have not tested this explicitly.

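A note on the Optimal Transport flag above: one common form of this technique pairs each latent in a batch with its most similar noise sample before computing the flow target, shortening the transport paths the model must learn. The sketch below is only an illustration of that idea using greedy cosine matching in pure Python; the actual implementation in the sd-scripts fork may differ.

```python
import math

def cosine_pair_noise(latents: list, noise: list) -> list:
    """Greedily reorder noise so each latent is matched with its most cosine-similar noise vector."""
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(y * y for y in b))
        return dot / (na * nb)

    out = []
    free = list(range(len(noise)))  # indices of noise vectors not yet assigned
    for lat in latents:
        j = max(free, key=lambda k: cos(lat, noise[k]))  # best remaining match
        out.append(noise[j])
        free.remove(j)
    return out
```

With orthogonal toy vectors, each latent picks the aligned noise: `cosine_pair_noise([[1, 0], [0, 1]], [[0, 1], [1, 0]])` swaps the two noise vectors.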
### Hardware

The model was trained on a cloud 8xH200 node.

### Software

A custom fork of [SD-Scripts](https://github.com/bluvoll/sd-scripts) (maintained by Bluvoll)

## Acknowledgements

### Special Thanks

**To a special supporter who single-handedly sponsored the whole run and preferred to stay anonymous**

**Additional donators**
- mfcg
- holo
- dyshidrosis
- remix
- edf

---

# Support
If you wish to support our continuous effort of making waifus 0.2% better, you can do it here:

### **https://ko-fi.com/bluvoll** (Blu, donate here to support training)

https://ko-fi.com/anzhc (Anzhc, non-training, just survival)



BTC: `37fLcfxX5ewhJXnb3T9Qzu9jiSLjVtoUJX`
ETH: `0xfdF54655796bf2F5bf75192AeB562F8656c1C39E`

Send a DM to Blu if you want to donate on another network.

# Potential future

**Expected Compute Needed**: We still consider a full run to be in the range of 20+ epochs, but we no longer think that is the bare minimum for a stable model, as progress over just the current 6 epochs has been quite drastic in that regard. 10 epochs is likely a good marker.

**Dataset**: We would love to start processing the booru data with our in-house classification models to fix some of the glaring issues with the default Danbooru dataset, and to do thorough processing of some concepts, but as of now we don't have the budget to rent a dedicated server for persistent storage.

**Future Training**: We have confirmation from our sponsor that we will continue training the model beyond Epoch 6, but it will resume after a short break.