
Wan-AI/Wan2.1-T2V-1.3B-Diffusers

Tags: Text-to-Video · Diffusers · Safetensors · English · Chinese · WanPipeline · video · video-generation

Instructions to use Wan-AI/Wan2.1-T2V-1.3B-Diffusers with libraries, inference providers, notebooks, and local apps. Follow these links to get started.

  • Libraries
  • Diffusers

    How to use Wan-AI/Wan2.1-T2V-1.3B-Diffusers with Diffusers:

    pip install -U diffusers transformers accelerate

    import torch
    from diffusers import DiffusionPipeline
    from diffusers.utils import export_to_video

    # switch device_map to "mps" for Apple devices
    pipe = DiffusionPipeline.from_pretrained(
        "Wan-AI/Wan2.1-T2V-1.3B-Diffusers",
        torch_dtype=torch.bfloat16,
        device_map="cuda",
    )

    prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
    # This is a text-to-video pipeline, so the output is frames, not images
    frames = pipe(prompt).frames[0]
    export_to_video(frames, "output.mp4", fps=15)
  • Notebooks
  • Google Colab
  • Kaggle
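Before downloading, it can help to sanity-check whether the checkpoint fits on your GPU. A minimal sketch, estimating the bfloat16 weight footprint from the parameter count alone (the 1.3B figure is read off the model name; activations, the VAE, and the text encoder are not included, so treat this as a lower bound):

```python
# Rough lower-bound estimate of GPU memory for the transformer weights alone.
params = 1.3e9          # parameter count, taken from the model name (assumption)
bytes_per_param = 2     # bfloat16 stores each parameter in 2 bytes
weights_gib = params * bytes_per_param / (1024 ** 3)
print(f"~{weights_gib:.1f} GiB for bf16 weights alone")  # → ~2.4 GiB
```

If this estimate plus activation overhead exceeds your GPU memory, Diffusers' `pipe.enable_model_cpu_offload()` trades generation speed for a smaller resident footprint.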
Wan2.1-T2V-1.3B-Diffusers / examples
  • 4 contributors · History: 1 commit (0126088, "update assets" by StevenZhang, about 1 year ago)
  • i2v_input.JPG (251 kB), added in "update assets" about 1 year ago