Tags: text-to-image, diffusers, safetensors, stable-diffusion, sdxl, ssd-1b, flash, sdxl-flash, sdxl-flash-mini, distilled, lightning, turbo, lcm, hyper, fast, fast-sdxl, sd-community
import torch
from diffusers import DiffusionPipeline

# switch to "mps" for Apple devices
# the dtype/device_map kwargs require a recent diffusers release;
# on older versions use torch_dtype=torch.bfloat16 and .to("cuda") instead
pipe = DiffusionPipeline.from_pretrained("sd-community/sdxl-flash-mini", dtype=torch.bfloat16, device_map="cuda")

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt).images[0]

SDXL Flash Mini in collaboration with Project Fluently
Introducing SDXL Flash (Mini), a new fast model. Existing fast XL models (LCM, Turbo, Lightning, Hyper) are quick, but their quality drops. SDXL Flash is not quite as fast as those, but its output quality is higher; a study of step counts and CFG values is shown below. The Mini variant is smaller and consumes less VRAM and other resources, while losing little quality.
Steps and CFG (Guidance)
Optimal settings
- Steps: 6-9
- CFG Scale: 2.5-3.5
- Sampler: DPM++ SDE
Usage
You can use this model in AUTOMATIC1111, ComfyUI, or Fooocus, as well as with diffusers (see the code example above).
Model tree for sd-community/sdxl-flash-mini
Base model: stabilityai/stable-diffusion-xl-base-1.0
