import torch
from diffusers import DiffusionPipeline
from diffusers.utils import load_image

# switch device_map to "mps" for Apple devices
pipe = DiffusionPipeline.from_pretrained(
    "Gjm1234/Wan2.2-I2V-A14B-Diffusers",
    custom_pipeline="pipeline_wan_i2v",
    torch_dtype=torch.bfloat16,
    device_map="cuda",
)

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = load_image("input.png")  # placeholder path: supply your own conditioning image
# An I2V pipeline takes a conditioning image and returns video frames, not a
# single image; argument names follow the standard Wan I2V API and may differ
# slightly for this custom pipeline.
frames = pipe(image=image, prompt=prompt).frames[0]
# WAN 2.2 - Image to Video (I2V) - Diffusers Conversion
This repository contains a Diffusers-compatible custom pipeline for running WAN 2.2 Image-to-Video models inside a Hugging Face Inference Endpoint.
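Once deployed as an Inference Endpoint, the model is queried over HTTP with a bearer token. The helper below sketches how such a request could be assembled; the payload keys (`prompt`, `image`) are an assumption, since a custom handler defines its own input schema, so adjust them to match the handler shipped with this repo.

```python
import json


def build_i2v_request(endpoint_url: str, token: str, prompt: str, image_b64: str):
    """Build the URL, headers, and JSON body for querying a deployed endpoint.

    NOTE: the payload schema below is illustrative, not taken from this repo;
    a custom Inference Endpoint handler chooses its own input format.
    """
    headers = {
        "Authorization": f"Bearer {token}",
        "Content-Type": "application/json",
    }
    body = json.dumps({"inputs": {"prompt": prompt, "image": image_b64}})
    return endpoint_url, headers, body
```

The returned triple can be passed to any HTTP client (e.g. `requests.post(url, headers=headers, data=body)`).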
Works with:

- `DiffusionPipeline.from_pretrained(...)`
- Custom pipeline loading via `custom_pipeline="pipeline_wan_i2v"`
- GPU Hugging Face Inference Endpoints
- Automatic model component loading defined in `model_index.json`
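For context, `model_index.json` is the file Diffusers reads to discover which class loads each sub-component of a pipeline. A hypothetical sketch of what such a file might contain for a Wan-style I2V pipeline (the component names and version below are illustrative, not copied from this repo):

```json
{
  "_class_name": "WanImageToVideoPipeline",
  "_diffusers_version": "0.30.0",
  "scheduler": ["diffusers", "UniPCMultistepScheduler"],
  "text_encoder": ["transformers", "UMT5EncoderModel"],
  "tokenizer": ["transformers", "AutoTokenizer"],
  "transformer": ["diffusers", "WanTransformer3DModel"],
  "vae": ["diffusers", "AutoencoderKLWan"]
}
```

Each entry maps a component name to the library and class used to instantiate it from the matching subfolder of the repository.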
## Repository Structure