---
base_model: mochi-1-preview
library_name: diffusers
license: apache-2.0
tags:
- text-to-video
- diffusers-training
- diffusers
- lora
- replicate
- mochi-1-preview
---
# Mochi-1 Preview LoRA Finetune

This is a LoRA fine-tune of the [Mochi-1 preview](https://huggingface.co/genmo/mochi-1-preview) text-to-video model, trained on custom training data.

## Usage
```python
import torch

from diffusers import MochiPipeline
from diffusers.utils import export_to_video

# Load the base model in bfloat16 to reduce memory usage, then apply the LoRA.
pipe = MochiPipeline.from_pretrained(
    "genmo/mochi-1-preview", torch_dtype=torch.bfloat16
)
pipe.load_lora_weights("uglysonic3121/animationtest2.0")

# Offload model components to CPU when idle to fit on smaller GPUs.
pipe.enable_model_cpu_offload()

video = pipe(
    prompt="your prompt here",
    guidance_scale=6.0,
    num_inference_steps=64,
    height=480,
    width=848,
    max_sequence_length=256,
).frames[0]

export_to_video(video, "output.mp4", fps=30)
```

## Training details

Trained on Replicate with [lucataco/mochi-1-lora-trainer](https://replicate.com/lucataco/mochi-1-lora-trainer).

## Intended uses & limitations

#### Limitations and bias

[TODO: provide examples of latent issues and potential remediations]