code_execution_files / Lightricks_LTX-Video_0.txt
```CODE:
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import load_image, export_to_video
# switch to "mps" for apple devices
pipe = DiffusionPipeline.from_pretrained("Lightricks/LTX-Video", dtype=torch.bfloat16, device_map="cuda")
pipe.to("cuda")
prompt = "A man with short gray hair plays a red electric guitar."
image = load_image(
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/guitar-man.png"
)
output = pipe(image=image, prompt=prompt).frames[0]
export_to_video(output, "output.mp4")
```
ERROR:
Traceback (most recent call last):
File "/tmp/Lightricks_LTX-Video_0QlFD3T.py", line 26, in <module>
pipe = DiffusionPipeline.from_pretrained("Lightricks/LTX-Video", dtype=torch.bfloat16, device_map="cuda")
File "/tmp/.cache/uv/environments-v2/cb88948adb0dfc46/lib/python3.13/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn
return fn(*args, **kwargs)
File "/tmp/.cache/uv/environments-v2/cb88948adb0dfc46/lib/python3.13/site-packages/diffusers/pipelines/pipeline_utils.py", line 1025, in from_pretrained
loaded_sub_model = load_sub_model(
library_name=library_name,
...<21 lines>...
quantization_config=quantization_config,
)
File "/tmp/.cache/uv/environments-v2/cb88948adb0dfc46/lib/python3.13/site-packages/diffusers/pipelines/pipeline_loading_utils.py", line 778, in load_sub_model
raise ValueError(
...<2 lines>...
)
ValueError: The component <class 'transformers.models.t5.tokenization_t5._LazyModule.__getattr__.<locals>.Placeholder'> of <class 'diffusers.pipelines.ltx.pipeline_ltx.LTXPipeline'> cannot be loaded as it does not seem to have any of the loading methods defined in {'ModelMixin': ['save_pretrained', 'from_pretrained'], 'SchedulerMixin': ['save_pretrained', 'from_pretrained'], 'DiffusionPipeline': ['save_pretrained', 'from_pretrained'], 'OnnxRuntimeModel': ['save_pretrained', 'from_pretrained'], 'PreTrainedTokenizer': ['save_pretrained', 'from_pretrained'], 'PreTrainedTokenizerFast': ['save_pretrained', 'from_pretrained'], 'PreTrainedModel': ['save_pretrained', 'from_pretrained'], 'FeatureExtractionMixin': ['save_pretrained', 'from_pretrained'], 'ProcessorMixin': ['save_pretrained', 'from_pretrained'], 'ImageProcessingMixin': ['save_pretrained', 'from_pretrained'], 'ORTModule': ['save_pretrained', 'from_pretrained']}.
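The `Placeholder` class in the traceback comes from transformers' lazy-import machinery: when an optional dependency of the T5 tokenizer (most commonly `sentencepiece`) is not installed, the tokenizer class resolves to a placeholder object, which `DiffusionPipeline.from_pretrained` then rejects because it has no `from_pretrained`/`save_pretrained` methods. A minimal sketch, assuming the missing package is indeed `sentencepiece`, that checks for it before building the pipeline:

```python
# Sketch: detect whether sentencepiece (an optional dependency of the
# T5 tokenizer) is importable before constructing the pipeline.
# If it is missing, `pip install sentencepiece` is the usual fix.
import importlib.util

def has_sentencepiece() -> bool:
    """True if the sentencepiece module can be found by the import system."""
    return importlib.util.find_spec("sentencepiece") is not None

if has_sentencepiece():
    print("sentencepiece available; the pipeline's T5 tokenizer should load")
else:
    print("sentencepiece missing; try: pip install sentencepiece")
```

If the check fails, installing `sentencepiece` (and, in some transformers versions, `protobuf`) into the same environment and re-running the original script is the expected fix; the model weights themselves are not the problem.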