Tags: Text-to-Image · Diffusers · English · IP-Adapter · StableDiffusion3Pipeline · image-generation · Stable Diffusion
Instructions to use InstantX/SD3.5-Large-IP-Adapter with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Diffusers
How to use InstantX/SD3.5-Large-IP-Adapter with Diffusers:
```shell
pip install -U diffusers transformers accelerate
```
```python
import torch
from diffusers import DiffusionPipeline

# switch to "mps" for Apple devices
pipe = DiffusionPipeline.from_pretrained(
    "InstantX/SD3.5-Large-IP-Adapter",
    dtype=torch.bfloat16,
    device_map="cuda",
)

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt).images[0]
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- Draw Things
- DiffusionBee
How much GPU memory does SD3.5-Large-IP-Adapter use?
#2
by SOT1k - opened
I get an out-of-memory error with 2x RTX 3090 (48 GB VRAM total).
I saw peak memory utilization of about 31 GB on an A100 40GB using bfloat16.
You can use `pipe.enable_sequential_cpu_offload()` to greatly reduce VRAM requirements: parameters are moved to the GPU only right before they are used and offloaded back to the CPU immediately afterward. Generation takes somewhat longer, but it is still orders of magnitude faster than running on CPU (and infinitely faster than an OOM'd GPU :) ). With 32 GB of system RAM it should work without issues, even on 8 GB GPUs. If you need help getting the inference code running, let me know and I can help you out!
SOT1k changed discussion status to closed