Works in ComfyUI but is not fully supported yet; expect VRAM/RAM spikes. Most likely fixed within a day.

Works with standard Klein 9B LoRAs.

`flux2-klein-9b-kv-nvfp4.safetensors` — text attention layers allowed to quantize to NVFP4.

`flux2-klein-9b-kv-nvfp4_txtattnBF16.safetensors` — text attention layers kept at BF16.
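
To confirm which variant you have, you can inspect the stored dtypes directly from the safetensors header without loading the weights. A minimal sketch, assuming the files are local and that the text-attention keys contain the substring `txt_attn` (the usual FLUX key naming, e.g. `double_blocks.N.txt_attn.*` — an assumption here):

```python
# Sketch: count tensor dtypes for text-attention keys in each checkpoint.
# Uses get_slice() so no tensor data is actually loaded into memory.
from collections import Counter
from safetensors import safe_open

def dtype_summary(path: str, key_filter: str = "txt_attn") -> Counter:
    """Count stored dtypes for keys containing `key_filter`."""
    counts = Counter()
    with safe_open(path, framework="pt") as f:
        for key in f.keys():
            if key_filter in key:
                counts[f.get_slice(key).get_dtype()] += 1
    return counts

# Expect quantized (FP4-family / packed) dtypes here:
print(dtype_summary("flux2-klein-9b-kv-nvfp4.safetensors"))
# Expect BF16 for the text-attention weights here:
print(dtype_summary("flux2-klein-9b-kv-nvfp4_txtattnBF16.safetensors"))
```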
