Please tell me how to use this model correctly

#1
by yu24 - opened

In ComfyUI 0.12.2, the model weights are still automatically converted to float32.

When I run the model locally, ComfyUI reports `model weight dtype torch.bfloat16, manual cast: None`. I see the same output when running the FP32 model, which suggests that my version of PyTorch is defaulting all models to BF16 at runtime.

The only launch flag I am using is `--disable-api-nodes`, on a system with NVIDIA driver 590.48.01, CUDA 13.1, and Torch 2.9.1+cu130 on Blackwell architecture.

Using KJNodes' "Diffusion Model Loader KJ" node, you can manually select `weight_dtype = fp16`, and the choice should be reflected in the console output.

This model was converted with the z_image_convert_original_to_comfy script found at https://huggingface.co/Comfy-Org/z_image_turbo/blob/main/z_image_convert_original_to_comfy.py
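If you want to confirm which dtype the converted checkpoint actually stores on disk (as opposed to the dtype ComfyUI casts to at runtime), you can parse the safetensors header directly. This is a minimal sketch using only the Python standard library; it assumes the standard safetensors layout (an 8-byte little-endian header length followed by a JSON header mapping tensor names to `dtype`/`shape`/`data_offsets`). The tiny in-memory blob below stands in for a real checkpoint file.

```python
import json
import struct
from collections import Counter

def safetensors_dtypes(data: bytes) -> Counter:
    """Count tensor dtypes recorded in a safetensors header.

    The format begins with an 8-byte little-endian length N,
    followed by N bytes of JSON mapping tensor names to
    {"dtype", "shape", "data_offsets"} entries.
    """
    (header_len,) = struct.unpack("<Q", data[:8])
    header = json.loads(data[8:8 + header_len])
    return Counter(
        entry["dtype"]
        for name, entry in header.items()
        if name != "__metadata__"  # optional metadata block has no dtype
    )

# Hypothetical stand-in for a real file: one BF16 tensor of shape [2]
# (4 bytes of payload), so the example is self-contained.
header = json.dumps({
    "weight": {"dtype": "BF16", "shape": [2], "data_offsets": [0, 4]}
}).encode()
blob = struct.pack("<Q", len(header)) + header + b"\x00" * 4

print(safetensors_dtypes(blob))  # Counter({'BF16': 1})
```

With an actual checkpoint, replace `blob` with the bytes of the `.safetensors` file (only the first 8 bytes plus the header need to be read); if the counter shows `BF16` throughout, the runtime dtype report matches what is stored on disk.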
