Instructions to use stabilityai/stable-diffusion-xl-base-0.9 with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Diffusers
How to use stabilityai/stable-diffusion-xl-base-0.9 with Diffusers:
```
pip install -U diffusers transformers accelerate
```

```python
import torch
from diffusers import DiffusionPipeline

# switch to "mps" for Apple devices
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-0.9",
    dtype=torch.bfloat16,
    device_map="cuda",
)

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt).images[0]
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- Draw Things
- DiffusionBee
Is this the model behind the demo https://clipdrop.co/stable-diffusion ?
#14 · opened by zengxianyu
The results generated on the demo link seem to be much better than those from this model.
Same question: does the demo use SDXL 1.0?
With ComfyUI and my high-res node, though, I am getting even better results; you might check these for reference:
https://civitai.com/user/sikasolutionsworldwide709
The demo and SDXL 0.9 are not the same model. You can generate an image in the demo, then try SDXL 0.9 with the same seed: the results are different.
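Comparing two models "with the same seed" relies on seeded generators being deterministic: two `torch.Generator` objects seeded identically produce identical initial noise, so a pipeline run started from them is reproducible. A minimal sketch of that property (the `generator=` keyword on a Diffusers pipeline call is where such a seeded generator would be passed):

```python
import torch

# Two generators with the same seed yield identical noise tensors,
# so two pipeline runs seeded this way start from the same latents.
g1 = torch.Generator().manual_seed(42)
g2 = torch.Generator().manual_seed(42)

noise_a = torch.randn(4, generator=g1)
noise_b = torch.randn(4, generator=g2)

# In a Diffusers pipeline this would look like:
#   pipe(prompt, generator=torch.Generator("cuda").manual_seed(42))
print("identical:", torch.equal(noise_a, noise_b))
```

If the outputs still differ between the demo and the local model under the same seed, scheduler, and step count, the underlying checkpoints are likely different.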
The results also depend on the generation procedure, i.e. whether you use the SDXL refiner or not. I have tested this.
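The base-plus-refiner procedure mentioned above can be sketched as a two-stage run: the base model denoises the first portion of the steps and hands its latents to the refiner, which finishes the rest. This is a sketch, not a verified recipe for this checkpoint: it assumes the companion `stabilityai/stable-diffusion-xl-refiner-0.9` weights, a CUDA device, and an 80/20 split chosen only for illustration.

```python
import torch
from diffusers import DiffusionPipeline

# Base model handles the first 80% of the denoising steps
# and emits latents instead of a decoded image.
base = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-0.9",
    torch_dtype=torch.float16,
).to("cuda")

# Refiner picks up at the same point and completes the last 20%.
refiner = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-refiner-0.9",  # assumed companion checkpoint
    torch_dtype=torch.float16,
).to("cuda")

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"

latents = base(prompt, denoising_end=0.8, output_type="latent").images
image = refiner(prompt, denoising_start=0.8, image=latents).images[0]
```

Whether the refiner stage runs (and where the hand-off split sits) changes the output noticeably, which is one reason two setups with the same base checkpoint can look different.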