
vidfom/wan-t2v

Diffusers
Safetensors

Instructions for using vidfom/wan-t2v with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.

  • Libraries
  • Diffusers

    How to use vidfom/wan-t2v with Diffusers. First install the dependencies:

    pip install -U diffusers transformers accelerate

    Then load the pipeline and generate a video:

    import torch
    from diffusers import DiffusionPipeline
    from diffusers.utils import export_to_video

    # Switch device_map to "mps" for Apple devices.
    pipe = DiffusionPipeline.from_pretrained(
        "vidfom/wan-t2v", dtype=torch.bfloat16, device_map="cuda"
    )

    prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
    # wan-t2v is a text-to-video model, so the pipeline returns frames, not images.
    frames = pipe(prompt=prompt).frames[0]
    export_to_video(frames, "output.mp4", fps=15)
  • Notebooks
  • Google Colab
  • Kaggle
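The Diffusers snippet above notes switching the device to "mps" for Apple devices. As a minimal sketch (not part of the model card; the variable names are illustrative), the device and dtype choice can be automated instead of hard-coded:

```python
import torch

# Pick the best available backend: CUDA GPU, Apple "mps", or plain CPU.
if torch.cuda.is_available():
    device = "cuda"
elif torch.backends.mps.is_available():
    device = "mps"
else:
    device = "cpu"

# bfloat16 halves memory on accelerators; fall back to float32 on CPU,
# where bfloat16 support is inconsistent.
dtype = torch.bfloat16 if device != "cpu" else torch.float32
```

The resulting `device` string can then be passed as `device_map` and `dtype` forwarded to `DiffusionPipeline.from_pretrained` in the snippet above.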
wan-t2v / DiffSynth-Studio / examples / HunyuanVideo (9.65 kB)

1 contributor
History: 1 commit
vidfom: Upload folder using huggingface_hub (f13a0f0, verified, about 1 year ago)
  • README.md (1.48 kB) · Upload folder using huggingface_hub · about 1 year ago
  • hunyuanvideo_24G.py (1.52 kB) · Upload folder using huggingface_hub · about 1 year ago
  • hunyuanvideo_6G.py (2.24 kB) · Upload folder using huggingface_hub · about 1 year ago
  • hunyuanvideo_80G.py (1.65 kB) · Upload folder using huggingface_hub · about 1 year ago
  • hunyuanvideo_v2v_6G.py (2.76 kB) · Upload folder using huggingface_hub · about 1 year ago