Instructions to use MCG-NJU/MotionRAG with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Diffusers
How to use MCG-NJU/MotionRAG with Diffusers:

```shell
pip install -U diffusers transformers accelerate
```

```python
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import load_image, export_to_video

# Load the pipeline in bfloat16 to reduce memory use
pipe = DiffusionPipeline.from_pretrained(
    "MCG-NJU/MotionRAG",
    torch_dtype=torch.bfloat16,
)
pipe.to("cuda")  # switch to "mps" for Apple devices

prompt = "A man with short gray hair plays a red electric guitar."
image = load_image(
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/guitar-man.png"
)

# Generate a video conditioned on the input image and prompt
output = pipe(image=image, prompt=prompt).frames[0]
export_to_video(output, "output.mp4")
```

- Notebooks
- Google Colab
- Kaggle
Pull request #1: Add `pipeline_tag: image-to-video`

Opened by nielsr (HF Staff):

> This PR updates the model card by adding `pipeline_tag: image-to-video` to the metadata. This will improve discoverability on the Hugging Face Hub, allowing users to find the model when filtering by this pipeline tag.

flateon changed the pull request status to merged.
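The `pipeline_tag` added by the PR lives in the YAML front matter at the top of the model card's README. A minimal sketch of what that metadata block could look like, where only `pipeline_tag: image-to-video` comes from the PR above and the `library_name` value is an illustrative assumption:

```yaml
---
# Hypothetical model card front matter; only pipeline_tag is from the merged PR
pipeline_tag: image-to-video
library_name: diffusers   # assumption: matches the Diffusers usage shown above
---
```

With this tag in place, the model appears in Hub search results filtered by the image-to-video task.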