Instructions to use fal/AuraFlow with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Diffusers
How to use fal/AuraFlow with Diffusers:
```shell
pip install -U diffusers transformers accelerate
```

```python
import torch
from diffusers import DiffusionPipeline

# switch to "mps" for Apple devices
pipe = DiffusionPipeline.from_pretrained(
    "fal/AuraFlow",
    dtype=torch.bfloat16,
    device_map="cuda",
)

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt).images[0]
```

- Inference Providers
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- Draw Things
- DiffusionBee
Version without T5 included?
#7
by s0me-0ne - opened
I have no problems running this on my machine, but I think for much of the community it would be a welcome change to have a version of the main model that doesn't include the text encoder. It could then be loaded separately and perhaps even run on the CPU, as was the case with SD3. I didn't see a clear indication of whether you are using T5 or another encoder, but are there any plans to release such a version?