Diffusers format of the mochi-1-preview model.
It was created with the conversion script: https://github.com/huggingface/diffusers/blob/mochi/scripts/convert_mochi_to_diffusers.py
The model can be loaded directly with `from_pretrained` using the `mochi` branch of diffusers (https://github.com/huggingface/diffusers/tree/mochi), e.g. installed with `pip install git+https://github.com/huggingface/diffusers.git@mochi`:
```python
import torch

from diffusers import MochiPipeline
from diffusers.utils import export_to_video

model_path = "phxdev/mochi-1-preview"
pipe = MochiPipeline.from_pretrained(model_path, torch_dtype=torch.bfloat16)
pipe.to("cuda")

prompt = "Close-up of a chameleon's eye, with its scaly skin changing color. Ultra high resolution 4k."

# Generate 61 frames with a fixed seed for reproducibility.
frames = pipe(
    prompt,
    num_inference_steps=50,
    guidance_scale=4.5,
    num_frames=61,
    generator=torch.Generator(device="cuda").manual_seed(42),
).frames[0]

export_to_video(frames, "mochi.mp4")
```
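If the pipeline does not fit in GPU memory, the generic diffusers memory savers can be used instead of moving the whole pipeline to CUDA. A minimal sketch, assuming `enable_model_cpu_offload` (which requires `accelerate`) and `enable_vae_tiling` are available on the `mochi` branch; the `fps` value is only illustrative:

```python
import torch
from diffusers import MochiPipeline
from diffusers.utils import export_to_video

pipe = MochiPipeline.from_pretrained("phxdev/mochi-1-preview", torch_dtype=torch.bfloat16)

# Keep submodules on the CPU and move them to the GPU only when needed,
# and decode latents in tiles to lower peak VRAM usage (at some speed cost).
pipe.enable_model_cpu_offload()
pipe.enable_vae_tiling()

prompt = "Close-up of a chameleon's eye, with its scaly skin changing color. Ultra high resolution 4k."
frames = pipe(prompt, num_inference_steps=50, guidance_scale=4.5, num_frames=61).frames[0]
export_to_video(frames, "mochi.mp4", fps=30)  # fps here is illustrative
```

Note that with CPU offload enabled, `pipe.to("cuda")` should not be called; the pipeline moves each submodule to the GPU on demand.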
Some generated results:
Many thanks for the discussion in https://github.com/huggingface/diffusers/pull/9769.
License: apache-2.0