Update README.md
---
license: mit
---

# 🍰 Tiny AutoEncoder for Stable Diffusion 3

[TAESD3](https://github.com/madebyollin/taesd) is a very tiny autoencoder that uses the same "latent API" as Stable Diffusion 3's VAE.
TAESD3 is useful for real-time previewing of the SD3 generation process.

This repo contains `.safetensors` versions of the TAESD3 weights.

## Using in 🧨 diffusers
| 13 |
+
|
| 14 |
+
```python
|
| 15 |
+
import torch
|
| 16 |
+
from diffusers import StableDiffusion3Pipeline, AutoencoderTiny
|
| 17 |
+
|
| 18 |
+
pipe = StableDiffusion3Pipeline.from_pretrained(
|
| 19 |
+
"stabilityai/stable-diffusion-3-medium-diffusers", torch_dtype=torch.float16
|
| 20 |
+
)
|
| 21 |
+
pipe.vae = AutoencoderTiny.from_pretrained("madebyollin/taesd3", torch_dtype=torch.float16)
|
| 22 |
+
pipe.vae.config.shift_factor = 0.0
|
| 23 |
+
pipe = pipe.to("cuda")
|
| 24 |
+
|
| 25 |
+
prompt = "slice of delicious New York-style berry cheesecake"
|
| 26 |
+
image = pipe(prompt, num_inference_steps=25).images[0]
|
| 27 |
+
image.save("cheesecake.png")
|
| 28 |
+
```
|
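Because TAESD3 is so small, it is cheap enough to decode a preview image at every denoising step. Below is a minimal sketch of how that could look with the standard diffusers `callback_on_step_end` hook, assuming the `pipe` from the snippet above. The helper `to_decoder_latents` and the callback name are ours (not part of diffusers), and the latent rescaling follows the SD3 convention of `latents / scaling_factor + shift_factor`:

```python
import torch

# Hypothetical helper (not part of diffusers): map the pipeline's internal
# latents into decoder space, following the SD3 convention
#   decoder_latents = latents / scaling_factor + shift_factor
def to_decoder_latents(latents, scaling_factor=1.0, shift_factor=0.0):
    return latents / scaling_factor + shift_factor

# Sketch of a per-step preview callback for StableDiffusion3Pipeline.
# `callback_on_step_end` receives the tensors named in
# `callback_on_step_end_tensor_inputs` (by default, "latents").
def preview_callback(pipe, step, timestep, callback_kwargs):
    latents = callback_kwargs["latents"]
    with torch.no_grad():
        dec = to_decoder_latents(
            latents.to(pipe.vae.dtype),
            pipe.vae.config.scaling_factor,
            pipe.vae.config.shift_factor,
        )
        preview = pipe.vae.decode(dec).sample  # decoded preview images
        # ...save or display `preview` here for a live progress view...
    return callback_kwargs
```

With the pipeline above, passing `callback_on_step_end=preview_callback` to `pipe(...)` would decode a preview each step; with the full SD3 VAE this would be expensive, but with TAESD3 the overhead is small.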

<img width=512 src=https://cdn-uploads.huggingface.co/production/uploads/630447d40547362a22a969a2/vxm-Ek_N9eMVurl5yf5Jz.png />