22nd February 2026: Model loading change - CLIP error
Hello! Thank you for your work!
I've changed the model loader nodes as you showed and moved the embeddings connector from the text_encoders folder to diffusion_models. If I understand correctly, we now need only one Gemma CLIP, and the connector is only for the diffusion model. But now I get this error from the CLIP Loader node. With Dual CLIP plus the embeddings connector it works. So do we now need a copy of the embeddings connector in both the text_encoders and diffusion_models folders at the same time, or did I do something wrong?
My config:
pytorch version: 2.10.0+cu130
Python version: 3.13.11 (tags/v3.13.11:6278944, Dec 5 2025, 16:26:58) [MSC v.1944 64 bit (AMD64)]
ComfyUI version: 0.15.0
ComfyUI frontend version: 1.39.16
!!! Exception during processing !!! Tensors must have same number of dimensions: got 4 and 3
Traceback (most recent call last):
File "F:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 524, in execute
output_data, output_ui, has_subgraph, has_pending_tasks = await get_output_data(prompt_id, unique_id, obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, v3_data=v3_data)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 333, in get_output_data
return_values = await _async_map_node_over_list(prompt_id, unique_id, obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, v3_data=v3_data)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 307, in _async_map_node_over_list
await process_inputs(input_dict, i)
File "F:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 295, in process_inputs
result = f(**inputs)
File "F:\AI\ComfyUI_windows_portable\ComfyUI\comfy_api\internal\__init__.py", line 149, in wrapped_func
return method(locked_class, **inputs)
File "F:\AI\ComfyUI_windows_portable\ComfyUI\comfy_api\latest\_io.py", line 1748, in EXECUTE_NORMALIZED
to_return = cls.execute(*args, **kwargs)
File "F:\AI\ComfyUI_windows_portable\ComfyUI\comfy_extras\nodes_custom_sampler.py", line 963, in execute
samples = guider.sample(noise.generate_noise(latent), latent_image, sampler, sigmas, denoise_mask=noise_mask, callback=callback, disable_pbar=disable_pbar, seed=noise.seed)
File "F:\AI\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 1049, in sample
output = executor.execute(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed, latent_shapes=latent_shapes)
File "F:\AI\ComfyUI_windows_portable\ComfyUI\comfy\patcher_extension.py", line 112, in execute
return self.original(*args, **kwargs)
~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^
File "F:\AI\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 993, in outer_sample
output = self.inner_sample(noise, latent_image, device, sampler, sigmas, denoise_mask, callback, disable_pbar, seed, latent_shapes=latent_shapes)
File "F:\AI\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 968, in inner_sample
self.conds = process_conds(self.inner_model, noise, self.conds, device, latent_image, denoise_mask, seed, latent_shapes=latent_shapes)
~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\AI\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 794, in process_conds
conds[k] = encode_model_conds(model.extra_conds, conds[k], noise, device, k, latent_image=latent_image, denoise_mask=denoise_mask, seed=seed, latent_shapes=latent_shapes)
~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\AI\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 704, in encode_model_conds
out = model_function(**params)
File "F:\AI\ComfyUI_windows_portable\ComfyUI\comfy\model_base.py", line 998, in extra_conds
cross_attn = self.diffusion_model.preprocess_text_embeds(cross_attn.to(device=device, dtype=self.get_dtype_inference()))
File "F:\AI\ComfyUI_windows_portable\ComfyUI\comfy\ldm\lightricks\av_model.py", line 473, in preprocess_text_embeds
out_vid = self.video_embeddings_connector(context)[0]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^
File "F:\AI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1776, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^
File "F:\AI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1787, in _call_impl
return forward_call(*args, **kwargs)
File "F:\AI\ComfyUI_windows_portable\ComfyUI\comfy\ldm\lightricks\embeddings_connector.py", line 280, in forward
hidden_states = torch.cat((hidden_states, learnable_registers[hidden_states.shape[1]:].unsqueeze(0).repeat(hidden_states.shape[0], 1, 1)), dim=1)
RuntimeError: Tensors must have same number of dimensions: got 4 and 3
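For context, here is a minimal standalone sketch of why that last `torch.cat` call raises this exact error (the shapes here are made up for illustration; they are not the model's real dimensions). `torch.cat` requires every input tensor to have the same number of dimensions, and the traceback shows a 4-D embedding tensor being concatenated with the 3-D register tensor the connector builds:

```python
import torch

# Hypothetical shapes: the text embeddings arrive with an extra axis (4-D)
# while the connector's learnable registers expand to only 3-D.
hidden_states = torch.zeros(1, 2, 8, 16)   # 4-D input, one axis too many
registers = torch.zeros(5, 16)             # learnable registers

try:
    # Mirrors the failing line in embeddings_connector.py: slice the
    # registers, add a batch axis, tile, then concatenate along dim=1.
    torch.cat(
        (hidden_states,
         registers[hidden_states.shape[1]:]
             .unsqueeze(0)
             .repeat(hidden_states.shape[0], 1, 1)),  # 3-D after repeat
        dim=1,
    )
except RuntimeError as e:
    print(e)
```

This suggests the embeddings reaching the connector have one extra dimension compared with what the connector expects, i.e. the conditioning was prepared by the wrong loader path.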
The CLIP loader should be the same as before, in other words Gemma + embeddings connector.
(So the embeddings connector is both at the main model AND the Dual CLIP loader. Use the same embeddings file, and it won't use any more memory etc.)
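In folder terms, that reply amounts to something like the layout below. This is only a sketch; the connector filename here is hypothetical, so substitute whatever your actual embeddings-connector file is called:

```shell
# Keep the connector available to BOTH loaders; a copy (or symlink on
# systems that support it) of the same file works, since ComfyUI loads
# the same weights either way and memory use does not increase.
cp "ComfyUI/models/text_encoders/embeddings_connector.safetensors" \
   "ComfyUI/models/diffusion_models/embeddings_connector.safetensors"
```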
Thank you! I thought we now needed only one connector, for the diffusion model. OK.
