Instructions to use hakurei/waifu-diffusion with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Diffusers
How to use hakurei/waifu-diffusion with Diffusers:

```
pip install -U diffusers transformers accelerate
```

```python
import torch
from diffusers import DiffusionPipeline

# switch to "mps" for Apple devices
pipe = DiffusionPipeline.from_pretrained(
    "hakurei/waifu-diffusion",
    dtype=torch.bfloat16,
    device_map="cuda",
)

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt).images[0]
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- Draw Things
- DiffusionBee
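The Diffusers snippet above notes that you should switch to `"mps"` on Apple devices. The usual pattern is to probe for an available accelerator and fall back to CPU; a minimal, framework-agnostic sketch of that choice (in practice the checks would be `torch.cuda.is_available()` and `torch.backends.mps.is_available()`):

```python
def pick_device(cuda_available: bool, mps_available: bool) -> str:
    """Return the device string to pass as device_map (or a .to() target)."""
    if cuda_available:
        return "cuda"  # NVIDIA GPU
    if mps_available:
        return "mps"   # Apple Silicon (Metal Performance Shaders)
    return "cpu"       # portable but slow fallback


# e.g. with torch installed:
# device = pick_device(torch.cuda.is_available(), torch.backends.mps.is_available())
```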
Updating example code in README.md

According to this [issue](https://github.com/CompVis/stable-diffusion/issues/402), we should now use the `.images` field instead of the `["sample"]` key.
README.md CHANGED

````diff
@@ -57,7 +57,7 @@ pipe = StableDiffusionPipeline.from_pretrained(
 
 prompt = "1girl, aqua eyes, baseball cap, blonde hair, closed mouth, earrings, green background, hat, hoop earrings, jewelry, looking at viewer, shirt, short hair, simple background, solo, upper body, yellow shirt"
 with autocast("cuda"):
-    image = pipe(prompt, guidance_scale=6)["sample"][0]
+    image = pipe(prompt, guidance_scale=6).images[0]
 
 image.save("test.png")
 ```
````
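The one-line change swaps the dict-style `["sample"]` lookup for the `.images` attribute on the pipeline's return value. Modern Diffusers pipelines return a `BaseOutput`-style object that exposes its fields as attributes while still supporting dict-style lookup by field name. A minimal stub (hypothetical, for illustration only, not the real Diffusers class) shows why both access patterns can coexist:

```python
from dataclasses import dataclass, field


@dataclass
class FakePipelineOutput:
    # Mimics the shape of a Diffusers-style output object:
    # fields are attributes, but dict-style lookup by name also works.
    images: list = field(default_factory=list)

    def __getitem__(self, key):
        return getattr(self, key)


out = FakePipelineOutput(images=["img0", "img1"])
assert out.images[0] == "img0"      # new-style access used in the updated README
assert out["images"][1] == "img1"   # dict-style lookup on the same object
```

The old API returned a plain dict keyed by `"sample"`, so `["sample"][0]` was the only way in; with the output object, `.images[0]` is the idiomatic form.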