Update README.md
README.md CHANGED
@@ -5,6 +5,13 @@ tags:
 ---
 # AuraFlow v0.2
 
+This is a copy of `fal/AuraFlow-v0.2`, but with the transformer converted to float16 to save disk space,
+allow faster loading, and shrink the download when running somewhere without persistent storage, such as Colab.
+
+You may get black images; if so, use madebyollin's fp16-fixed SDXL VAE. I don't seem to need it when using MPS,
+but I did have some issues on Colab.
+
+
 
 
 
@@ -30,7 +37,7 @@ from diffusers import AuraFlowPipeline
 import torch
 
 pipeline = AuraFlowPipeline.from_pretrained(
-    "
+    "Vargol/auraflow0.2-fp16-diffusers",
     torch_dtype=torch.float16,
     variant="fp16",
 ).to("cuda")
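
If you do hit the black-image issue mentioned in the README text above, a minimal sketch of the fix might look like the following. It assumes `AuraFlowPipeline` accepts a `vae` override like other diffusers pipelines, and that the VAE meant is the `madebyollin/sdxl-vae-fp16-fix` checkpoint; both are assumptions on top of this diff, not part of it.

```python
# A minimal sketch, not part of the commit: load the fp16 copy with
# madebyollin's fp16-fix SDXL VAE swapped in to avoid black images.
import torch
from diffusers import AuraFlowPipeline, AutoencoderKL

vae = AutoencoderKL.from_pretrained(
    "madebyollin/sdxl-vae-fp16-fix",  # fp16-safe SDXL VAE (repo id assumed from the prose)
    torch_dtype=torch.float16,
)

pipeline = AuraFlowPipeline.from_pretrained(
    "Vargol/auraflow0.2-fp16-diffusers",
    vae=vae,  # override the bundled VAE
    torch_dtype=torch.float16,
    variant="fp16",
).to("cuda")
```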
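For completeness, a hedged usage sketch continuing from the snippet above; the prompt and settings are illustrative, not taken from the README.

```python
# Illustrative only: generate and save an image with the pipeline above.
image = pipeline(
    prompt="a photo of a corgi wearing sunglasses",
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("output.png")
```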