Use from the Diffusers library
pip install -U diffusers transformers accelerate
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import export_to_video, load_image

# switch to "mps" for Apple devices
pipe = DiffusionPipeline.from_pretrained(
    "Gjm1234/Wan2.2-I2V-A14B-Diffusers",
    dtype=torch.bfloat16,
    device_map="cuda",
)

# WAN 2.2 is image-to-video: condition on an input image and export the frames
image = load_image("input.jpg")  # path to your conditioning image
prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
frames = pipe(image=image, prompt=prompt).frames[0]
export_to_video(frames, "output.mp4", fps=16)


WAN 2.2 – Image to Video (I2V) – Diffusers Conversion

This repository contains a Diffusers-compatible custom pipeline for running WAN 2.2 Image-to-Video models inside a Hugging Face Inference Endpoint.

✔ Works with:

  • DiffusionPipeline.from_pretrained(...)
  • Custom pipeline loading via custom_pipeline="pipeline_wan_i2v"
  • GPU HF Inference Endpoints
  • Automatic model component loading defined in model_index.json

πŸ“ Repository Structure
