Instructions to use UCLA-AGI/SPIN-Diffusion-iter3 with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Diffusers
How to use UCLA-AGI/SPIN-Diffusion-iter3 with Diffusers:

```shell
pip install -U diffusers transformers accelerate
```

```python
import torch
from diffusers import DiffusionPipeline

# switch device_map to "mps" for Apple devices
pipe = DiffusionPipeline.from_pretrained(
    "UCLA-AGI/SPIN-Diffusion-iter3",
    dtype=torch.bfloat16,
    device_map="cuda",
)

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt).images[0]
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- Draw Things
- DiffusionBee
Not working in Automatic1111 and WebUI Forge
Hello,
Is there any way to use the model with Automatic1111 and WebUI Forge?
Or do I need to use the command line only?
Thanks!
I would like to know this too!
Thanks! So does that mean that SPIN-Diffusion iter 3 is for SDXL, or is the above diagram just showing what needs to be done?
(The model card says that SPIN-Diffusion is based on SD 1.5.)
oops I forgot to post a link to my saved model here: https://huggingface.co/otterpupp/spin-diffusion-v3-sd15/
@Zabin it's definitely trained on SD 1.5; there was just an image for SDXL floating around. I ended up putting the ComfyUI workflow JSON file in my repo, though I didn't make a screenshot of it.
@otterpupp Thanks a lot! I tried the model you posted, and it worked just fine, thanks!
@Zabin I have also extracted some LoRAs: https://huggingface.co/otterpupp/spin-diffusion-v3-sd15/tree/main/loras
