Instructions to use stabilityai/stable-diffusion-xl-base-1.0 with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
  - Diffusers
How to use stabilityai/stable-diffusion-xl-base-1.0 with Diffusers:
```shell
pip install -U diffusers transformers accelerate
```

```python
import torch
from diffusers import DiffusionPipeline

# Switch device_map to "mps" for Apple devices
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    dtype=torch.bfloat16,
    device_map="cuda",
)

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt).images[0]
```
- Inference
- Notebooks
  - Google Colab
  - Kaggle
- Local Apps
  - Draw Things
  - DiffusionBee
hands detail
#88
by DayiTokat - opened
Use `negative_prompt`.
I use it in my code; how can I add negative prompts?
I have recently started using them like this, and they seem to work pretty well.
```python
negative_prompt = "poorly drawn hands"
image = pipe(prompt=prompt, negative_prompt=negative_prompt, height=512, width=512, num_inference_steps=75).images[0]
```
You are amazing. Thanks a lot.
It has worked for me, but not every time... it's almost like the model continues to learn, then relapses and throws out some disturbing results, then goes back to normal-looking body parts.
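The model isn't actually learning between calls; each generation starts from fresh random noise, which is why results vary run to run. One way to get reproducible outputs is to fix the seed with a `torch.Generator` and pass it to the pipeline. A minimal sketch (the seed value 42 here is arbitrary, and the `pipe(...)` call in the comment assumes a pipeline loaded as in the snippet above):

```python
import torch

# Two generators seeded identically produce identical noise tensors,
# which is what makes a fixed-seed diffusion run repeatable.
g1 = torch.Generator("cpu").manual_seed(42)
g2 = torch.Generator("cpu").manual_seed(42)
assert torch.equal(torch.randn(4, generator=g1), torch.randn(4, generator=g2))

# With a loaded pipeline, pass the generator so repeated calls start
# from the same initial latents and produce the same image:
#   image = pipe(prompt=prompt,
#                negative_prompt=negative_prompt,
#                generator=torch.Generator("cpu").manual_seed(42)).images[0]
```

With the generator fixed, a prompt that produces good hands once should keep producing the same result; changing the seed is then a deliberate way to explore variations.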
