BUG: UnboundLocalError: cannot access local variable 'blocks_class' where it is not associated with a value
#1 opened by jyothish99
I tried to run the example code given in the repo:
import torch
from diffusers import ModularPipeline, ClassifierFreeGuidance
from diffusers.utils import export_to_video, load_image, load_video
mod_pipe = ModularPipeline.from_pretrained("BestWishYsh/Helios-Distilled")
mod_pipe.load_components(torch_dtype=torch.bfloat16)
mod_pipe.to("cuda")
# we need to upload guider to the model repo, so each checkpoint will be able to config their guidance differently
guider = ClassifierFreeGuidance(guidance_scale=1.0)
mod_pipe.update_components(guider=guider)
# --- T2V ---
print("=== T2V ===")
prompt = (
    "A vibrant tropical fish swimming gracefully among colorful coral reefs in a clear, turquoise ocean. "
    "The fish has bright blue and yellow scales with a small, distinctive orange spot on its side, its fins moving "
    "fluidly. The coral reefs are alive with a variety of marine life, including small schools of colorful fish and "
    "sea turtles gliding by. The water is crystal clear, allowing for a view of the sandy ocean floor below. The reef "
    "itself is adorned with a mix of hard and soft corals in shades of red, orange, and green. The photo captures "
    "the fish from a slightly elevated angle, emphasizing its lively movements and the vivid colors of its surroundings. "
    "A close-up shot with dynamic movement."
)
output = mod_pipe(
    prompt=prompt,
    height=384,
    width=640,
    num_frames=240,
    pyramid_num_inference_steps_list=[2, 2, 2],
    is_amplify_first_chunk=True,
    generator=torch.Generator("cuda").manual_seed(42),
    output="videos",
)
export_to_video(output[0], "helios_distilled_modular_t2v_output.mp4", fps=24)
print(f"T2V max memory: {torch.cuda.max_memory_allocated() / 1024**3:.3f} GB")
torch.cuda.empty_cache()
torch.cuda.reset_peak_memory_stats()
But I get the following error:
(backend) unpriv@spark-4626:~/projects/KeyWest3_ImageGeneration/backend$ uv run helios.py
Uninstalled 1 package in 0.35ms
Installed 1 package in 2ms
Modular Diffusers is currently an experimental feature under active development. The API is subject to breaking changes in future releases.
/home/unpriv/projects/KeyWest3_ImageGeneration/backend/.venv/lib/python3.13/site-packages/huggingface_hub/utils/_validators.py:202: UserWarning: The `local_dir_use_symlinks` argument is deprecated and ignored in `hf_hub_download`. Downloading to a local directory does not use symlinks anymore.
  warnings.warn(
Traceback (most recent call last):
  File "/home/unpriv/projects/KeyWest3_ImageGeneration/backend/helios.py", line 5, in <module>
    mod_pipe = ModularPipeline.from_pretrained("BestWishYsh/Helios-Distilled")
  File "/home/unpriv/projects/KeyWest3_ImageGeneration/backend/.venv/lib/python3.13/site-packages/huggingface_hub/utils/_validators.py", line 89, in _inner_fn
    return fn(*args, **kwargs)
  File "/home/unpriv/projects/KeyWest3_ImageGeneration/backend/.venv/lib/python3.13/site-packages/diffusers/modular_pipelines/modular_pipeline.py", line 1863, in from_pretrained
    pipeline = pipeline_class(
        blocks=blocks,
        ...<5 lines>...
        **kwargs,
    )
  File "/home/unpriv/projects/KeyWest3_ImageGeneration/backend/.venv/lib/python3.13/site-packages/diffusers/modular_pipelines/modular_pipeline.py", line 1686, in __init__
    if blocks_class is not None:
       ^^^^^^^^^^^^
UnboundLocalError: cannot access local variable 'blocks_class' where it is not associated with a value
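For context, this class of error happens whenever a local variable is only assigned inside a conditional branch and is then read unconditionally. A minimal, self-contained sketch (the function and names below are hypothetical stand-ins, not the actual diffusers code path):

```python
def load_blocks(has_blocks: bool):
    if has_blocks:
        blocks_class = object  # name is only bound on this branch
    # When has_blocks is False, blocks_class was never assigned in this
    # scope, so merely reading it below raises UnboundLocalError.
    if blocks_class is not None:
        return blocks_class

try:
    load_blocks(False)
except UnboundLocalError as exc:
    print("reproduced:", exc)
```

So the traceback suggests `from_pretrained` hit a code path where `blocks_class` was never assigned before line 1686 checked it, likely a bug in the experimental Modular Pipelines code rather than in the calling script.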
My current libraries:
[project]
name = "backend"
version = "0.1.0"
description = "Add your description here"
readme = "README.md"
requires-python = ">=3.13"
dependencies = [
    "accelerate>=1.12.0",
    "diffusers>=0.37.0",
    "fastapi[standard]>=0.135.1",
    "huggingface-hub>=1.5.0",
    "nvidia-ml-py>=13.590.48",
    "nvidia-ml-py3>=7.352.0",
    "peft>=0.18.1",
    "torch>=2.10.0",
    "transformers>=5.3.0",
]

[[tool.uv.index]]
name = "pytorch-cu130"
url = "https://download.pytorch.org/whl/cu130"
explicit = true

[tool.uv]
required-environments = [
    "sys_platform == 'linux' and platform_machine == 'aarch64'",
]

[tool.uv.sources]
torch = [
    { index = "pytorch-cu130", marker = "sys_platform == 'linux'" },
]
torchvision = [
    { index = "pytorch-cu130", marker = "sys_platform == 'linux'" },
]
torchaudio = [
    { index = "pytorch-cu130", marker = "sys_platform == 'linux'" },
]
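Since the `>=` constraints above can resolve to different versions per environment, the exactly installed versions can also be printed with a minimal standard-library sketch (package names taken from the dependency list above):

```python
from importlib.metadata import PackageNotFoundError, version

# Report the resolved versions of the packages most relevant to this bug.
for pkg in ("diffusers", "torch", "transformers", "huggingface-hub"):
    try:
        print(pkg, version(pkg))
    except PackageNotFoundError:
        print(pkg, "not installed")
```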
If anyone knows how to fix this, could you please help?
@jyothish99 Thanks for your interest! We have updated the diffusers example; you can try the standard pipeline version:
https://github.com/PKU-YuanGroup/Helios?tab=readme-ov-file#-diffusers-pipeline
Thanks for your response.
I got it working as follows:
import torch
from diffusers import HeliosPyramidPipeline, ClassifierFreeGuidance
from diffusers.utils import export_to_video, load_image, load_video
import pickle
import numpy as np
torch.cuda.empty_cache()
torch.cuda.reset_peak_memory_stats()
with open("prompt.txt", "r") as f:
    prompt = f.read().strip()
mod_pipe = HeliosPyramidPipeline.from_pretrained("BestWishYsh/Helios-Distilled", torch_dtype=torch.bfloat16)
# mod_pipe.to("cuda")
mod_pipe.enable_model_cpu_offload()
# we need to upload guider to the model repo, so each checkpoint will be able to config their guidance differently
guider = ClassifierFreeGuidance(guidance_scale=1.0)
# mod_pipe.update_components(guider=guider)
# --- T2V ---
print("=== T2V ===")
output = mod_pipe(
    prompt=prompt,
    height=384,
    width=640,
    num_frames=240,
    pyramid_num_inference_steps_list=[2, 2, 2],
    is_amplify_first_chunk=True,
    guidance_scale=1.0,
    generator=torch.Generator("cuda").manual_seed(42),
)
with open("helios_distilled_modular_t2v_output_up_10.pkl", "wb") as f:
    pickle.dump(output, f)
video = output.frames[0]
export_to_video(video, "helios_distilled_modular_t2v_output_up_10.mp4", fps=24)
torch.cuda.empty_cache()
torch.cuda.reset_peak_memory_stats()
jyothish99 changed discussion status to closed