Instructions to use stabilityai/stable-cascade with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Diffusers
How to use stabilityai/stable-cascade with Diffusers:
```bash
pip install -U diffusers transformers accelerate
```

```python
import torch
from diffusers import DiffusionPipeline

# Switch device_map to "mps" for Apple devices.
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-cascade",
    dtype=torch.bfloat16,
    device_map="cuda",
)

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt).images[0]
```

A fuller two-stage (prior + decoder) sketch follows the app list below.
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- Draw Things
- DiffusionBee
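
The snippet above uses the generic `DiffusionPipeline` loader, but Stable Cascade is a two-stage model with a separate prior (Stage C) and decoder (Stage B). Below is a minimal sketch of the two-stage flow in the style of the model card, assuming a diffusers version that ships `StableCascadePriorPipeline` and `StableCascadeDecoderPipeline`; the step counts and guidance values are the card's defaults, not tuned settings.

```python
import torch
from diffusers import StableCascadePriorPipeline, StableCascadeDecoderPipeline

device = "cuda"  # or "mps" on Apple silicon (see the discussion below)
prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"

# Stage C: the prior turns the prompt into image embeddings.
prior = StableCascadePriorPipeline.from_pretrained(
    "stabilityai/stable-cascade-prior", torch_dtype=torch.bfloat16
).to(device)
prior_output = prior(
    prompt=prompt,
    height=1024,  # the model targets 1024x1024; smaller sizes tend to degrade
    width=1024,
    guidance_scale=4.0,
    num_inference_steps=20,
)

# Stage B: the decoder turns the embeddings into the final image.
decoder = StableCascadeDecoderPipeline.from_pretrained(
    "stabilityai/stable-cascade", torch_dtype=torch.bfloat16
).to(device)
image = decoder(
    image_embeddings=prior_output.image_embeddings.to(torch.bfloat16),
    prompt=prompt,
    guidance_scale=0.0,
    num_inference_steps=10,
).images[0]
image.save("astronaut.png")
```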
Results are horrible using MPS
As the title suggests, image outputs are really bad when I use the MPS hardware accelerator. Is there a reason why this is the case?
They end up looking really grainy. Is the model here the big-big one? I think I may have been using the small-small one, which resulted in poor outputs.
An additional issue I noticed: if I pass a smaller resolution, the outputs become crops. Is this just a result of the really small latent space, so anything below 1024x1024 won't give good outputs?
Yep, it's the big model, but it uses bfloat16, which needs a fairly recent PyTorch nightly.
bfloat16 is not supported on MPS though, right? Unless that's what you mean by needing a fairly recent PyTorch nightly?
Yep, recent (last 3 or 4 weeks or so) PyTorch nightlies support enough bfloat16 to run SD and Cascade. It doesn't look like the changes will land in PyTorch 2.2.x, so I'm hoping they'll be in the 2.3 releases.
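
For the MPS question above: one crude way to handle this at runtime is to probe whether the installed PyTorch build can actually run a bfloat16 op on MPS, and fall back to float32 otherwise (float16 is what tends to produce the grainy outputs). The probe below is a heuristic sketch, not an official PyTorch API; plug the resulting `device` and `dtype` into the two-stage example earlier on this page.

```python
import torch

def mps_supports_bfloat16() -> bool:
    # Heuristic probe, not an official API: try a tiny bfloat16 op on MPS
    # and see whether it survives. Older PyTorch builds raise here.
    if not torch.backends.mps.is_available():
        return False
    try:
        x = torch.ones(1, dtype=torch.bfloat16, device="mps")
        return (x + x).float().item() == 2.0
    except Exception:
        return False

if torch.backends.mps.is_available():
    device = "mps"
    dtype = torch.bfloat16 if mps_supports_bfloat16() else torch.float32
else:
    device = "cuda"
    dtype = torch.bfloat16

print(f"Using device={device}, dtype={dtype}")
# Plug these into the two-stage sketch above, e.g.:
# prior = StableCascadePriorPipeline.from_pretrained(
#     "stabilityai/stable-cascade-prior", torch_dtype=dtype).to(device)
# Stick to 1024x1024: smaller resolutions tend to come out cropped, as noted above.
```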