KeyError: 'float5_e3m1fn', any chance for help?

#1
by Asaf23 - opened
KeyError                                  Traceback (most recent call last)
Cell In[2], line 7
      4 from sdnq.common import use_torch_compile as triton_is_available
      5 from sdnq.loader import apply_sdnq_options_to_model
----> 7 pipe = diffusers.LTX2Pipeline.from_pretrained("Disty0/LTX-2-SDNQ-4bit-dynamic", torch_dtype=torch.bfloat16)
      9 # Enable INT8 MatMul for AMD, Intel ARC and Nvidia GPUs:
     10 # if triton_is_available and (torch.cuda.is_available() or torch.xpu.is_available()):
     11 #     pipe.transformer = apply_sdnq_options_to_model(pipe.transformer, use_quantized_matmul=True)
     12 #     pipe.text_encoder = apply_sdnq_options_to_model(pipe.text_encoder, use_quantized_matmul=True)
     13     # pipe.transformer = torch.compile(pipe.transformer) # optional for faster speeds
     15 pipe.vae.enable_tiling()

File ~/LatentSpaceJam/.venv/lib/python3.13/site-packages/huggingface_hub/utils/_validators.py:114, in validate_hf_hub_args.<locals>._inner_fn(*args, **kwargs)
    111 if check_use_auth_token:
    112     kwargs = smoothly_deprecate_use_auth_token(fn_name=fn.__name__, has_token=has_token, kwargs=kwargs)
--> 114 return fn(*args, **kwargs)

File ~/LatentSpaceJam/.venv/lib/python3.13/site-packages/diffusers/pipelines/pipeline_utils.py:1021, in DiffusionPipeline.from_pretrained(cls, pretrained_model_name_or_path, **kwargs)
   1014 else:
   1015     # load sub model
   1016     sub_model_dtype = (
   1017         torch_dtype.get(name, torch_dtype.get("default", torch.float32))
   1018         if isinstance(torch_dtype, dict)
   1019         else torch_dtype
...
    229 )
    231 if layer_class_name in conv_types:
    232     is_conv_type = True

KeyError: 'float5_e3m1fn'
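For context, the failure is a dtype-table lookup: the quantized checkpoint records its quantization dtype as a string, and the loader resolves that string through a table of dtypes the installed SDNQ release knows about. An older release's table predates `float5_e3m1fn`, so the lookup raises `KeyError`. A minimal sketch of this mechanism (the table contents and function name are hypothetical, not SDNQ's actual code):

```python
# Hypothetical sketch: the loader maps the dtype string from the
# checkpoint config through a dict of dtypes it supports.
KNOWN_QUANT_DTYPES = {
    "int8": "int8",
    "uint4": "uint4",
    "float8_e4m3fn": "float8_e4m3fn",
    # "float5_e3m1fn" only exists in the table of newer SDNQ releases.
}

def resolve_quant_dtype(name: str) -> str:
    # A plain dict lookup raises KeyError for any dtype name the
    # installed version does not know about -- the error in the traceback.
    return KNOWN_QUANT_DTYPES[name]
```

With an outdated table, `resolve_quant_dtype("float5_e3m1fn")` raises `KeyError: 'float5_e3m1fn'` at load time, which is why updating the package is the fix rather than changing the loading code.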
Owner

Your SDNQ version is outdated. Update SDNQ.

Asaf23 changed discussion status to closed

How did you even run it? What workflow did you use, and how did you place these files? I'm honestly confused.

Diffusers, running outside of ComfyUI.
