FLUX.1-schnell Mirror (A.I.M.I)

Mirror of FLUX.1-schnell plus companion GGUF quantizations, re-hosted for stable URLs inside the A.I.M.I desktop product. Contents are unmodified.

FLUX.1-schnell is A.I.M.I's default image generation model: fast, commercial-safe (Apache 2.0), 1024×1024 native, and typically needing only 4 steps for a good image on an RTX 5090.

Files

| File | Upstream | Size | Purpose |
|---|---|---|---|
| `flux1-schnell-Q5_0.gguf` | city96/FLUX.1-schnell-gguf | ~7.7 GB | FLUX.1-schnell base weights (Q5_0 GGUF) |
| `ae.safetensors` | second-state/FLUX.1-schnell-GGUF | ~320 MB | Autoencoder (VAE): decodes latents to pixels |
| `clip_l.safetensors` | comfyanonymous/flux_text_encoders | ~240 MB | CLIP-L text encoder |
| `t5-v1_1-xxl-encoder-Q4_K_M.gguf` | city96/t5-v1_1-xxl-encoder-gguf | ~2.7 GB | T5-XXL text encoder (Q4_K_M, default for the 16 GB tier) |
| `t5-v1_1-xxl-encoder-Q8_0.gguf` | city96/t5-v1_1-xxl-encoder-gguf | ~4.8 GB | T5-XXL text encoder (Q8_0, higher-tier quality upgrade) |

Total: ~16 GB. Download either the Q4_K_M or the Q8_0 T5 encoder depending on hardware tier; both are hosted for flexibility.
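The tier choice above can be sketched as a small helper. This is an illustrative sketch only: the function name and the exact VRAM cutoff are assumptions, not part of the A.I.M.I product; the file names match the table.

```python
def pick_t5_encoder(vram_gb: float) -> str:
    """Pick the T5-XXL encoder quantization for a hardware tier.

    The 16 GB threshold mirrors the "default for 16 GB tier" note above;
    the exact cutoff is an assumption, not an A.I.M.I specification.
    """
    if vram_gb > 16:
        # Higher-tier quality upgrade (~4.8 GB)
        return "t5-v1_1-xxl-encoder-Q8_0.gguf"
    # Default for the 16 GB tier (~2.7 GB)
    return "t5-v1_1-xxl-encoder-Q4_K_M.gguf"

print(pick_t5_encoder(16))  # Q4_K_M on the 16 GB tier
print(pick_t5_encoder(24))  # Q8_0 on a 24 GB card
```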

License

All files are Apache 2.0 and redistributed unchanged; see the upstream repos for license texts. FLUX.1-schnell itself (the base model) is Apache 2.0 from Black Forest Labs.

Attribution

  • FLUX.1-schnell: Black Forest Labs, 2024.
  • GGUF quantizations: city96 (city96/FLUX.1-schnell-gguf, city96/t5-v1_1-xxl-encoder-gguf).
  • ae.safetensors re-host: Second State community mirror of BFL's autoencoder.
  • CLIP-L: Originally OpenAI's CLIP; redistributed with the flux_text_encoders bundle by comfyanonymous.
  • T5-v1.1-XXL: Google Research, 2019-2020; redistributed as GGUF by city96.
GGUF metadata: 12B params, flux architecture; 4-, 5-, and 8-bit quantizations.