Text-to-Video
Diffusers
Safetensors
English
text-to-video
image-to-video
ComfyUI
video-generation
Instructions to use lightx2v/Wan2.2-Lightning with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Diffusers
How to use lightx2v/Wan2.2-Lightning with Diffusers:
```shell
pip install -U diffusers transformers accelerate
```

```python
import torch
from diffusers import DiffusionPipeline

# switch to "mps" for Apple devices
pipe = DiffusionPipeline.from_pretrained(
    "lightx2v/Wan2.2-Lightning",
    dtype=torch.bfloat16,
    device_map="cuda",
)

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
# Note: this is the generic auto-generated snippet; for a text-to-video
# pipeline the output is typically a list of frames (`.frames`) rather
# than `.images`.
image = pipe(prompt).images[0]
```

- Notebooks
- Google Colab
- Kaggle
Training problem
#31
by yuduan - opened
Hello, I have a question. I followed the Selforcing Plus setup in section 2.2 (High Noise) and ran it with backward simulation. The timestep is set between 900–1000, shift = 5, denoising_step_list = [990, 960, 930, 900], minstep = 900, maxstep = 990, dfake = 5. I’m training on 16 GPUs. The DMD loss decreased from 0.8 to 0.3, while the critic loss went from 0.5 up to as high as 3, and is now oscillating around 1.5 after about 600 steps. Is this normal?
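For reference, the `shift = 5` mentioned above corresponds to the flow-matching timestep shift t' = s*t / (1 + (s-1)*t) used by Wan-style schedulers. The sketch below uses illustrative values, not the exact training schedule, to show how the shift pushes uniformly spaced timesteps toward the high-noise end of a 0-1000 scale:

```python
def shift_t(t: float, shift: float = 5.0) -> float:
    # Standard flow-matching timestep shift: t' = s*t / (1 + (s-1)*t),
    # with t normalized to [0, 1].
    return shift * t / (1.0 + (shift - 1.0) * t)

# Uniformly spaced raw timesteps on a 0-1000 scale (illustrative values)
raw = [0.9, 0.8, 0.7, 0.6]
shifted = [round(shift_t(t) * 1000) for t in raw]
print(shifted)  # [978, 952, 921, 882]
```

With shift = 5, even moderately late timesteps are mapped close to 1000, which is why a high-noise stage can cover a narrow window such as 900-990 while still spanning a wide range of raw steps.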
Are you still trying to fix it?