Comfyui / models / vae / mochi_vae.md
metadata
author: theally
baseModel: Mochi
hashes:
  AutoV1: C5334E79
  AutoV2: 1BE451CEC9
  AutoV3: ECE5DC12F490
  BLAKE3: 36164206DC087B77C3B01C82176EC560E0658DA0FE7F0D27826A74177471AB06
  CRC32: D1349C99
  SHA256: 1BE451CEC94B911980406169286BABC5269E7CF6A94BBBBDF45E8D3F2C961083
metadata:
  format: SafeTensor
  fp: fp8
  size: pruned
modelPage: https://civitai.com/models/922390?modelVersionId=1035187
preview:
  - >-
    https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/2d275e3b-6b77-489d-8ea8-6376cb462ead/width=450/38738187.jpeg
  - >-
    https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/10f28599-4463-46d1-b164-83b658ff50fc/width=450/38738195.jpeg
website: Civitai
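
The hashes listed above can be used to verify a download. A minimal sketch, assuming the file has been saved to the VAE folder (the filename is illustrative); it streams the file so large .safetensors checkpoints are not loaded into memory:

```python
import hashlib
from pathlib import Path

# SHA256 from the metadata block above.
EXPECTED_SHA256 = "1BE451CEC94B911980406169286BABC5269E7CF6A94BBBBDF45E8D3F2C961083"

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Compute the SHA256 of a file in 1 MiB chunks, returned uppercase."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest().upper()

# Example usage (path is an assumption; adjust to where you saved the file):
# ok = sha256_of(Path("ComfyUI/models/vae/mochi_vae.safetensors")) == EXPECTED_SHA256
```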

Trigger Words

No trigger words

About this version

No description about this version

Mochi 1 Preview - Video Model

Read our Quickstart Guide to Mochi on the Civitai Education Hub!

If you don't want to run it locally, you can try it out now on the Civitai Generator! Read the Guide to Video Generation in the Civitai Generator!

Mochi 1 preview, by Genmo (https://www.genmo.ai), is an open, state-of-the-art video generation model with high-fidelity motion and strong prompt adherence in preliminary evaluations.

This model dramatically closes the gap between closed and open video generation systems.

The model is released under a permissive Apache 2.0 license.

To get started with ComfyUI:

  1. Update to the latest version of ComfyUI
  2. Download the Mochi model weights into the models/diffusion_models folder
  3. Make sure a text encoder [1][2] is in your models/clip folder
  4. Download the VAE to ComfyUI/models/vae
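
The steps above assume ComfyUI's standard model folder layout. A small sketch that creates any missing folders and reports which .safetensors files are present (the ComfyUI root path is an assumption; adjust to your install location):

```python
from pathlib import Path

# Assumed install location; change if ComfyUI lives elsewhere.
COMFY_ROOT = Path("ComfyUI")

# Folders used by the setup steps above.
MODEL_DIRS = {
    "diffusion_models": COMFY_ROOT / "models" / "diffusion_models",  # Mochi model weights
    "clip": COMFY_ROOT / "models" / "clip",                          # text encoder
    "vae": COMFY_ROOT / "models" / "vae",                            # this VAE file
}

def ensure_model_dirs() -> dict:
    """Create any missing model folders and list the .safetensors files in each."""
    report = {}
    for name, path in MODEL_DIRS.items():
        path.mkdir(parents=True, exist_ok=True)
        report[name] = sorted(p.name for p in path.glob("*.safetensors"))
    return report

if __name__ == "__main__":
    print(ensure_model_dirs())
```

If a folder shows up empty in the report, the corresponding download from the steps above has not landed in the right place yet.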

Mochi has native ComfyUI support and will run on GPUs with 12GB+ of VRAM.

Github: https://github.com/genmoai/models

HuggingFace: https://huggingface.co/genmo/mochi-1-preview