---
base_model: genmo/mochi-1-preview
library_name: diffusers
license: apache-2.0
instance_prompt: There is a *crab* blending into a +rocky ocean floor+ where the crab's
  mottled brown shell, rough texture, and uneven shape closely match the scattered
  rocks and coarse sand, all in muted brown and grey tones. The crab moves slowly
  and subtly, making it difficult to distinguish as its rough brown pattern looks
  just like a piece of rock among the uneven, similarly colored stones and patches
  of sand.
widget:
- text: There is a *crab* blending into a +rocky ocean floor+ where the crab's mottled
    brown shell, rough texture, and uneven shape closely match the scattered rocks
    and coarse sand, all in muted brown and grey tones. The crab moves slowly and
    subtly, making it difficult to distinguish as its rough brown pattern looks just
    like a piece of rock among the uneven, similarly colored stones and patches of
    sand.
  output:
    url: final_video_0.mp4
tags:
- text-to-video
- diffusers-training
- diffusers
- lora
- mochi-1-preview
- mochi-1-preview-diffusers
- template:sd-lora
---

<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->

# Mochi-1 Preview LoRA Finetune

<Gallery />

## Model description

This is a LoRA fine-tune of the Mochi-1 preview model [`genmo/mochi-1-preview`](https://huggingface.co/genmo/mochi-1-preview).

The model was trained using [CogVideoX Factory](https://github.com/a-r-r-o-w/cogvideox-factory), a repository of memory-optimized training scripts for the CogVideoX and Mochi families of models built on [TorchAO](https://github.com/pytorch/ao) and [DeepSpeed](https://github.com/microsoft/DeepSpeed). The scripts were adapted from the [CogVideoX Diffusers trainer](https://github.com/huggingface/diffusers/blob/main/examples/cogvideo/train_cogvideox_lora.py).
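To reproduce a run, the general setup is to clone the repository and install its dependencies; the actual Mochi LoRA training entry point and its arguments are documented in that repository's README, so the commands below are only a sketch:

```shell
# Sketch only — consult the cogvideox-factory README for the actual
# training script names and arguments for Mochi LoRA runs.
git clone https://github.com/a-r-r-o-w/cogvideox-factory
cd cogvideox-factory
pip install -r requirements.txt
```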

## Download model

[Download the LoRA](https://huggingface.co/weathon/mochi-lora/tree/main) from the Files & Versions tab.

## Usage

Requires the [🧨 Diffusers library](https://github.com/huggingface/diffusers) to be installed.
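If Diffusers is not already set up, a typical install (an assumption — any recent release with Mochi support should work) looks like:

```shell
# diffusers plus the companion libraries the Mochi pipeline relies on
# (transformers/sentencepiece for the T5 text encoder, accelerate for offload).
pip install -U diffusers transformers accelerate sentencepiece
```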

```py
from diffusers import MochiPipeline
from diffusers.utils import export_to_video
import torch

pipe = MochiPipeline.from_pretrained("genmo/mochi-1-preview")
pipe.load_lora_weights("weathon/mochi-lora")
pipe.enable_model_cpu_offload()

prompt = (
    "There is a *crab* blending into a +rocky ocean floor+ where the crab's "
    "mottled brown shell, rough texture, and uneven shape closely match the "
    "scattered rocks and coarse sand, all in muted brown and grey tones. The "
    "crab moves slowly and subtly, making it difficult to distinguish as its "
    "rough brown pattern looks just like a piece of rock among the uneven, "
    "similarly colored stones and patches of sand."
)

with torch.autocast("cuda", torch.bfloat16):
    video = pipe(
        prompt=prompt,
        guidance_scale=6.0,
        num_inference_steps=64,
        height=480,
        width=848,
        max_sequence_length=256,
        output_type="np",
    ).frames[0]
export_to_video(video, "output.mp4", fps=30)
```

For more details, including weighting, merging, and fusing LoRAs, check the [documentation](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) on loading LoRAs in diffusers.
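As a quick sketch of those options (the adapter name `mochi-lora` below is an arbitrary label chosen here, not something defined by this repository):

```python
import torch
from diffusers import MochiPipeline

pipe = MochiPipeline.from_pretrained("genmo/mochi-1-preview")
pipe.load_lora_weights("weathon/mochi-lora", adapter_name="mochi-lora")

# Dial the LoRA's influence up or down (1.0 = full strength).
pipe.set_adapters(["mochi-lora"], adapter_weights=[0.8])

# Or bake the LoRA into the base weights for slightly faster inference;
# pipe.unfuse_lora() reverses this.
pipe.fuse_lora(lora_scale=0.8)
```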

## Intended uses & limitations

#### How to use

See the [Usage](#usage) section above for a complete inference example.

#### Limitations and bias

[TODO: provide examples of latent issues and potential remediations]

## Training details

[TODO: describe the data used to train the model]