Q4_K-quantized GGUF version of FLUX.2-klein-4B, converted with stable-diffusion.cpp:

```shell
sd-cli --mode convert \
  --model "flux-2-klein-4b.safetensors" \
  --output "flux-2-klein-4b-Q4_K.gguf" \
  --tensor-type-rules "^.*(_mlp\.(0|2)|_attn\.(proj|qkv)|\.linear(1|2))\.weight=q4_K"
```
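The `--tensor-type-rules` regex decides which weight tensors get stored as `q4_K`; everything that does not match keeps the converter's default precision. A minimal sketch of how that rule selects tensors, using illustrative FLUX-style tensor names (the actual tensor names inside FLUX.2-klein-4B may differ):

```python
import re

# The rule from the conversion command above: matching .weight tensors -> q4_K.
RULE = re.compile(r"^.*(_mlp\.(0|2)|_attn\.(proj|qkv)|\.linear(1|2))\.weight")

# Hypothetical tensor names for illustration, not read from the checkpoint.
names = [
    "double_blocks.0.img_mlp.0.weight",     # MLP linear        -> q4_K
    "double_blocks.0.img_attn.qkv.weight",  # attention qkv     -> q4_K
    "single_blocks.3.linear1.weight",       # fused linear      -> q4_K
    "double_blocks.0.img_mod.lin.weight",   # modulation layer  -> default
    "single_blocks.3.norm.scale",           # not a .weight     -> default
]

for name in names:
    kind = "q4_K" if RULE.match(name) else "default"
    print(f"{name}: {kind}")
```

Keeping modulation, normalization, and bias tensors at higher precision while quantizing the large linear layers is a common trade-off: most of the file-size savings come from the MLP and attention projections.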
Model: n-Arno/FLUX.2-klein-4B-gguf

Base model: black-forest-labs/FLUX.2-klein-4B