## Use from the Diffusers library

```shell
pip install -U diffusers transformers accelerate
```
```python
import torch
from diffusers import DiffusionPipeline

# switch device_map to "mps" for Apple devices
pipe = DiffusionPipeline.from_pretrained(
    "Erland/tiny-wan2.2-t2v-a14b-debug",
    dtype=torch.bfloat16,
    device_map="cuda",
)

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
# This is a text-to-video pipeline: the output carries frames, not a single image.
frames = pipe(prompt).frames[0]
```
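The pipeline returns a list of frames rather than one image. A minimal sketch for dumping them to disk, assuming the default `output_type="pil"` so each frame is a PIL image (`save_frames` is a hypothetical helper, not part of Diffusers):

```python
import os

def save_frames(frames, out_dir="frames"):
    """Write each frame as a numbered PNG and return the file paths."""
    os.makedirs(out_dir, exist_ok=True)
    paths = []
    for i, frame in enumerate(frames):
        path = os.path.join(out_dir, f"frame_{i:03d}.png")
        frame.save(path)  # PIL Image.save
        paths.append(path)
    return paths
```

For an actual video file, `diffusers.utils.export_to_video(frames, "out.mp4")` is the usual route instead of per-frame PNGs.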

# Tiny Wan2.2 T2V A14B Debug Pipeline

This is a randomly initialized, tiny Diffusers `WanPipeline` fixture for Wan-AI/Wan2.2-T2V-A14B. It mirrors the Wan2.2 T2V-A14B high-noise/low-noise expert layout as a `WanPipeline` with both `transformer` and `transformer_2` components.

It is intended only for debugging load paths and inference controls. The weights are untrained, so it must not be used to evaluate generation quality.

```python
from diffusers import WanPipeline

pipe = WanPipeline.from_pretrained("Erland/tiny-wan2.2-t2v-a14b-debug")
pipe.set_progress_bar_config(disable=True)

# Tiny resolution, frame count, and step count keep the debug run fast.
frames = pipe(
    prompt="debug prompt",
    height=64,
    width=64,
    num_frames=5,
    num_inference_steps=1,
    guidance_scale=1.0,
    max_sequence_length=8,
).frames[0]
```
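Wan-family video VAEs typically compress the temporal axis by 4x, which is why `num_frames` values of the form 4·k + 1 (such as the 5 used above) map cleanly onto latent frames; that compression factor is an assumption about the model family, not something read from this checkpoint. A quick sanity check:

```python
def latent_frame_count(num_frames: int, temporal_compression: int = 4) -> int:
    """Map a pixel-space frame count to a latent frame count for a causal video VAE."""
    if (num_frames - 1) % temporal_compression != 0:
        raise ValueError("num_frames should be of the form temporal_compression * k + 1")
    # The first frame is encoded alone; each later latent frame covers 4 pixel frames.
    return (num_frames - 1) // temporal_compression + 1

print(latent_frame_count(5))  # the num_frames=5 above -> 2 latent frames
```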