How to use lucataco/mochi-lora-vhs with Diffusers:
pip install -U diffusers transformers accelerate
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import export_to_video
# switch to "mps" for apple devices
pipe = DiffusionPipeline.from_pretrained("genmo/mochi-1-preview", torch_dtype=torch.bfloat16, device_map="cuda")
pipe.load_lora_weights("lucataco/mochi-lora-vhs")
prompt = "a parrot flying in the blue skies, a grainy or noisy video effect in the background"
output = pipe(prompt=prompt).frames[0]
export_to_video(output, "output.mp4")

Mochi-1 Preview LoRA Finetune
This is a LoRA fine-tune of the Mochi-1 preview model. The model was trained using custom training data.
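A LoRA fine-tune leaves the base weights frozen and learns a low-rank correction: the effective weight is W + (alpha / r) · B @ A, where A and B have rank r much smaller than the layer dimensions. The sketch below illustrates this merge with NumPy; the dimensions are illustrative only, not the actual Mochi transformer shapes.

```python
import numpy as np

# Illustrative sizes; real Mochi layers are far larger. alpha/rank is the LoRA scale.
d_out, d_in, rank, alpha = 8, 8, 2, 4.0

rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in))         # frozen base weight
A = rng.standard_normal((rank, d_in)) * 0.01   # trained low-rank factor A
B = rng.standard_normal((d_out, rank)) * 0.01  # trained low-rank factor B

# Merging the LoRA update into the base weight (what load_lora_weights
# effectively applies on top of the base model):
W_merged = W + (alpha / rank) * (B @ A)

x = rng.standard_normal(d_in)
# The merged layer output equals base output plus the scaled low-rank correction.
assert np.allclose(W_merged @ x, W @ x + (alpha / rank) * (B @ (A @ x)))
```

Because the update is only two small matrices per layer, the LoRA checkpoint stays tiny compared to the base model.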
- Prompt
- a parrot flying in the blue skies, a grainy or noisy video effect in the background
Usage
from diffusers import MochiPipeline
from diffusers.utils import export_to_video
import torch
pipe = MochiPipeline.from_pretrained("genmo/mochi-1-preview")
pipe.load_lora_weights("lucataco/mochi-lora-vhs")
pipe.enable_model_cpu_offload()
video = pipe(
    prompt="your prompt here",
    guidance_scale=6.0,
    num_inference_steps=64,
    height=480,
    width=848,
    max_sequence_length=256,
).frames[0]
export_to_video(video, "output.mp4", fps=30)
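export_to_video writes the frame sequence at the frame rate given by fps, so clip length is simply frame count divided by fps. The helpers below (hypothetical, not part of diffusers) show the arithmetic for picking a frame count for a target duration:

```python
def frames_for_duration(seconds: float, fps: int = 30) -> int:
    """Number of frames needed for a clip of the given length at a fixed fps."""
    return round(seconds * fps)

def clip_seconds(num_frames: int, fps: int = 30) -> float:
    """Playback length of a frame sequence exported at a fixed fps."""
    return num_frames / fps

# At the fps=30 used above, 84 frames play back as a 2.8 s clip.
print(frames_for_duration(2.8))  # 84
print(clip_seconds(84))          # 2.8
```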
Training details
Trained on Replicate using: lucataco/mochi-1-lora-trainer