Use from the Diffusers library
pip install -U diffusers transformers accelerate gguf
Since this repository contains only GGUF files, the GGUF file is loaded as the transformer and the remaining components come from the base model (the exact file name below is an example; pick any quant from this repo):

import torch
from diffusers import FluxPipeline, FluxTransformer2DModel, GGUFQuantizationConfig

# Load one of the quantized GGUF files as the transformer (file name is an example)
transformer = FluxTransformer2DModel.from_single_file(
    "https://huggingface.co/hum-ma/Flex.1-alpha-GGUF/blob/main/Flex.1-alpha-Q8_0.gguf",
    quantization_config=GGUFQuantizationConfig(compute_dtype=torch.bfloat16),
    torch_dtype=torch.bfloat16,
)
# The base repo supplies the text encoders, VAE and scheduler; switch to "mps" for apple devices
pipe = FluxPipeline.from_pretrained(
    "ostris/Flex.1-alpha", transformer=transformer, torch_dtype=torch.bfloat16
).to("cuda")

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt).images[0]
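
If the model does not fit in VRAM, diffusers' CPU offloading can be used instead of moving the whole pipeline to the GPU; a minimal sketch:

# Optional: stream submodules to the GPU on demand instead of .to("cuda")
pipe.enable_model_cpu_offload()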

This is a direct GGUF conversion of Flex.1-alpha (8B) for use with ComfyUI-GGUF by city96.
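
For ComfyUI, the GGUF file goes in ComfyUI's models/unet folder and is loaded with ComfyUI-GGUF's "Unet Loader (GGUF)" node. A minimal download sketch with huggingface_hub, assuming a standard ComfyUI layout (file name is a placeholder; pick any quant from this repo):

from huggingface_hub import hf_hub_download

# Fetch one quant directly into ComfyUI's unet folder
# (file name and target path are assumptions; adjust to your install)
hf_hub_download(
    repo_id="hum-ma/Flex.1-alpha-GGUF",
    filename="Flex.1-alpha-Q8_0.gguf",
    local_dir="ComfyUI/models/unet",
)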

Model creator: ostris
Original model: Flex.1-alpha
GGUF quantization: based on llama.cpp b3962 patched with ComfyUI-GGUF/tools/lcpp_sd3.patch

Model size: 8B params
Architecture: flux

Available quantizations: 3-bit, 4-bit, 5-bit, 6-bit, 8-bit
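
As a rough rule of thumb, GGUF file size scales with bit width; a back-of-the-envelope sketch for an 8B-parameter model (ignores per-tensor quantization overhead and any layers kept at higher precision):

# approximate size in GB = params * bits / 8 bits-per-byte / 1e9 bytes-per-GB
params = 8e9
for bits in (3, 4, 5, 6, 8):
    print(f"{bits}-bit: ~{params * bits / 8 / 1e9:.0f} GB")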
