Thanks.
I've been testing it on Wan2GP; it's pretty fast on my 5060 Ti and looks good. It being a good NSFW merge also saves me from needing some LoRAs.
comfyui
RuntimeError: mat1 and mat2 shapes cannot be multiplied (1x256 and 128x15360)
Check if the CLIP model is Qwen3 4B and set the type to flux; that is normally the fix.
There is no flux option, only flux2.
File "I:\aidraw\ComfyUI-Nunchaku-ver\python\Lib\site-packages\torch\nn\modules\module.py", line 1775, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "I:\aidraw\ComfyUI-Nunchaku-ver\python\Lib\site-packages\torch\nn\modules\module.py", line 1786, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "I:\aidraw\ComfyUI-Nunchaku-ver\ComfyUI\comfy\ldm\lumina\model.py", line 804, in forward
return comfy.patcher_extension.WrapperExecutor.new_class_executor(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "I:\aidraw\ComfyUI-Nunchaku-ver\ComfyUI\comfy\patcher_extension.py", line 112, in execute
return self.original(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "I:\aidraw\ComfyUI-Nunchaku-ver\ComfyUI\comfy\ldm\lumina\model.py", line 849, in _forward
img = layer(img, mask, freqs_cis, adaln_input, timestep_zero_index=timestep_zero_index, transformer_options=transformer_options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "I:\aidraw\ComfyUI-Nunchaku-ver\python\Lib\site-packages\torch\nn\modules\module.py", line 1775, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "I:\aidraw\ComfyUI-Nunchaku-ver\python\Lib\site-packages\torch\nn\modules\module.py", line 1786, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "I:\aidraw\ComfyUI-Nunchaku-ver\ComfyUI\comfy\ldm\lumina\model.py", line 322, in forward
scale_msa, gate_msa, scale_mlp, gate_mlp = self.adaLN_modulation(adaln_input).chunk(4, dim=1)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "I:\aidraw\ComfyUI-Nunchaku-ver\python\Lib\site-packages\torch\nn\modules\module.py", line 1775, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "I:\aidraw\ComfyUI-Nunchaku-ver\python\Lib\site-packages\torch\nn\modules\module.py", line 1786, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "I:\aidraw\ComfyUI-Nunchaku-ver\python\Lib\site-packages\torch\nn\modules\container.py", line 250, in forward
input = module(input)
^^^^^^^^^^^^^
File "I:\aidraw\ComfyUI-Nunchaku-ver\python\Lib\site-packages\torch\nn\modules\module.py", line 1775, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "I:\aidraw\ComfyUI-Nunchaku-ver\python\Lib\site-packages\torch\nn\modules\module.py", line 1786, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "I:\aidraw\ComfyUI-Nunchaku-ver\ComfyUI\comfy\ops.py", line 392, in forward
return self.forward_comfy_cast_weights(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "I:\aidraw\ComfyUI-Nunchaku-ver\ComfyUI\comfy\ops.py", line 385, in forward_comfy_cast_weights
x = torch.nn.functional.linear(input, weight, bias)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: mat1 and mat2 shapes cannot be multiplied (1x256 and 128x15360)
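For what it's worth, that `(1x256 and 128x15360)` mismatch is the text-encoder output width not matching what the model's adaLN projection expects, i.e. the wrong clip type was selected. A minimal reproduction of the same error class (the 128/256/15360 sizes are taken from the message above; this is illustrative, not the real model):

```python
import torch

# A projection built for 128-dim conditioning input...
proj = torch.nn.Linear(128, 15360)

# ...fed a 256-dim vector, e.g. from a mismatched text encoder.
adaln_input = torch.zeros(1, 256)

try:
    proj(adaln_input)
except RuntimeError as e:
    # mat1 and mat2 shapes cannot be multiplied (1x256 and 128x15360)
    print(e)
```

So the fix is always on the text-encoder/clip-type side, not in the transformer itself.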
With flux2 I get:
File "I:\aidraw\ComfyUI-Nunchaku-ver\ComfyUI\comfy\ops.py", line 509, in forward_comfy_cast_weights
x = torch.nn.functional.rms_norm(input, self.normalized_shape, weight, self.eps)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "I:\aidraw\ComfyUI-Nunchaku-ver\python\Lib\site-packages\torch\nn\functional.py", line 2920, in rms_norm
return torch.rms_norm(input, normalized_shape, weight, eps)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: Given normalized_shape=[2560], expected input with shape [*, 2560], but got input of size[1, 512, 7680]
Try lumina2; that worked for me just now on the newest Comfy update.
sd3 was the other one that works for Z-Image's text encoder.
The version I had from git pull was v0.18.1. I just used
git checkout v0.18.3
and it works fine.
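For anyone else stuck on an older tag, the sequence is roughly this (run inside your ComfyUI-Nunchaku clone; the directory name is an example):

```shell
git fetch --tags      # make sure new release tags are visible locally
git describe --tags   # show which tag/version you are currently on
git checkout v0.18.3  # switch to the tagged release that works
```

Note that checking out a tag puts git in detached-HEAD state, which is fine here since you aren't committing.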
2602SY_ZImageTurbo-nvfp4.safetensors
This one can run now
ZImageTurbo-nvfp4_FP32.safetensors
This model still can't run
RuntimeError: mat1 and mat2 shapes cannot be multiplied (3648x3840 and 1920x11520)
Yeah, that's an issue with how Comfy loads mixed quants. FP32 mixed quants aren't supported in Comfy, and even full NVFP4 quants aren't yet. Comfy took a middle-of-the-road approach that requires those layers to be bf16.
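If you want to see what a checkpoint actually mixes, you can count tensors per dtype. A minimal sketch of the idea (the state dict and layer names here are made up for the demo; a real checkpoint would be read from a .safetensors file):

```python
import torch
from collections import Counter

# Hypothetical mixed-precision state dict: some layers bf16, some fp32.
state = {
    "blocks.0.attn.qkv.weight": torch.zeros(8, 8, dtype=torch.bfloat16),
    "blocks.0.norm.weight": torch.zeros(8, dtype=torch.float32),
    "blocks.1.attn.qkv.weight": torch.zeros(8, 8, dtype=torch.bfloat16),
}

# Count tensors per dtype; in an nvfp4_FP32 file this would surface the
# fp32 layers that Comfy's loader currently rejects.
counts = Counter(str(t.dtype) for t in state.values())
print(dict(counts))  # {'torch.bfloat16': 2, 'torch.float32': 1}
```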

