FLUX.2-klein-base-4B GGUF quantized files

The license of the quantized files follows the license of the original model:

  • FLUX.2-klein-base-4B: apache-2.0

These files were converted using https://github.com/leejet/stable-diffusion.cpp

You can use these weights with stable-diffusion.cpp to generate images.
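As a rough sketch, a quantized GGUF diffusion model like this can be loaded with stable-diffusion.cpp's `sd` command-line tool. The file names below are assumptions, and the exact flags (especially for the VAE and text encoder) depend on your stable-diffusion.cpp version, so check `sd --help` and the project README for your build:

```shell
# Hypothetical invocation; model/VAE file names are placeholders.
./sd --diffusion-model flux2-klein-base-4b-q8_0.gguf \
     --vae ae.safetensors \
     -p "a photo of a cat sitting on a windowsill" \
     -W 1024 -H 1024 \
     --steps 20 \
     -o output.png
```

Depending on the model, additional text-encoder weights may need to be supplied separately; the stable-diffusion.cpp documentation covers the required files per architecture.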

Model size: 4B params
Format: GGUF
Available quantizations: 4-bit, 8-bit