Instructions to use Soul25r/ZoomEmMovimento with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Diffusers
How to use Soul25r/ZoomEmMovimento with Diffusers:
```shell
pip install -U diffusers transformers accelerate
```

```python
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import load_image, export_to_video

# Load the Diffusers-format base checkpoint; switch device_map to "mps" for Apple devices
pipe = DiffusionPipeline.from_pretrained(
    "Wan-AI/Wan2.1-I2V-14B-480P-Diffusers",
    torch_dtype=torch.bfloat16,
    device_map="cuda",
)
pipe.load_lora_weights("Soul25r/ZoomEmMovimento")

# "cr34sh" is the LoRA trigger word for the crash-zoom effect
prompt = "A man with short brown hair wearing a white shirt and a dark coat stands in the red neon light of a motel room doorway. He looks back towards the motel room. The camera performs a cr34sh crash zoom in effect, rapidly zooming closer to the man's face. He turns with a shocked expression, as if he heard a noise, and reaches for his pocket."
input_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/guitar-man.png")

output = pipe(image=input_image, prompt=prompt).frames[0]
export_to_video(output, "output.mp4")
```
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- Draw Things
Welcome to the community
The community tab is the place to discuss and collaborate with the Hugging Face community!