Note: The DF11 FLUX.2 [klein] models appear to be bugged, often producing outputs that differ slightly from the BF16 outputs. I have confirmed that the DF11 weights are 100% identical to the BF16 weights when decompressed, so this is likely an inference bug that I am still struggling to diagnose. In the meantime, I recommend using this model with the default KSampler node instead of the CustomSamplerAdvanced node. Also, if you have the VRAM to run this model in BF16, I recommend using the newly added DFloat11Decompressor node to load it instead. That node fully decompresses the DF11 weights into BF16 and outputs a model 100% identical to one loaded with the Load Diffusion Model node. I apologize for the inconvenience while I investigate this issue.
