Instructions for using fal/AuraFlow with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Diffusers
How to use fal/AuraFlow with Diffusers:
```shell
pip install -U diffusers transformers accelerate
```

```python
import torch
from diffusers import DiffusionPipeline

# Switch device_map to "mps" for Apple devices
pipe = DiffusionPipeline.from_pretrained(
    "fal/AuraFlow", dtype=torch.bfloat16, device_map="cuda"
)

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt).images[0]
```
- Inference Providers
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- Draw Things
- DiffusionBee
SDXL VAE
#6
by FormerlyChuckSneedenberg - opened
Any reason for using the SDXL VAE instead of a more modern one, like SD3's or a custom one? I think this really limits the model: the amount of detail we can get is capped by the VAE rather than by the training of the model.
Any plans on changing it?
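To make the capacity argument above concrete, here is a back-of-the-envelope sketch (plain Python, no diffusers required) comparing how strongly a 4-channel versus a 16-channel latent compresses a 1024×1024 RGB image, assuming the usual 8× spatial downsampling of SD-family VAEs; the function name is illustrative, not a library API:

```python
def vae_compression(height, width, latent_channels, downsample=8, image_channels=3):
    """Ratio of pixel values to latent values for an SD-style VAE."""
    image_values = height * width * image_channels
    latent_values = (height // downsample) * (width // downsample) * latent_channels
    return image_values / latent_values

# SDXL VAE: 4 latent channels at 1024x1024
sdxl = vae_compression(1024, 1024, 4)
# A 16-channel VAE (e.g. SD3-style, or AuraDiffusion/16ch-vae)
wide = vae_compression(1024, 1024, 16)
print(sdxl, wide)  # 48.0 12.0
```

A 16-channel latent keeps 4× more values per image (12× compression instead of 48×), which is the sense in which the VAE, rather than the diffusion backbone, can bound how much fine detail survives.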
We were already building the 16ch VAE (https://huggingface.co/AuraDiffusion/16ch-vae) in the middle of v0.1 pre-training, and we hope to leverage it for v0.2!
isidentical changed discussion status to closed