---
author: cogvideox996
baseModel: CogVideoX
hashes:
  AutoV1: 8A96607E
  AutoV2: A410E48D98
  AutoV3: 37ABB2444528
  BLAKE3: F475F5AD96B6BB2A791A0DDD2C055810E4C12A4343CDD926BC2ED3B9ACE25FCD
  CRC32: 1AB3FB12
  SHA256: A410E48D988C8224CEF392B68DB0654485CFD41F345F4A3A81D3E6B765BB995E
metadata:
  format: SafeTensor
modelPage: https://civitai.com/models/1009676?modelVersionId=1131953
preview:
  - https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/39402559-7cc5-4e33-b8ff-895ad0769b60/width=450/43812071.jpeg
website: Civitai
---

# Trigger Words

No trigger words.

# About this version

* **SHA256:** a410e48d988c8224cef392b68db0654485cfd41f345f4a3a81d3e6b765bb995e
* **Pointer size:** 134 Bytes
* **Size of remote file:** 862 MB

# CogVideoX-VAE

The model is released under the permissive CogVideoX license.

To get started with ComfyUI:

1. Update to the latest version of ComfyUI.
2. Download the CogVideoX model weights into the `models/diffusion_models` folder and load them with the `CogVideoXModelLoader` node.
3. Make sure a text encoder [[1](https://civitai.com/models/497255?modelVersionId=568405)] is in your `models/clip` folder.
4. Download the VAE to `ComfyUI/models/vae` and load it with the `CogVideoXVAELoader` node.

CogVideoX has [kijai ComfyUI support](https://github.com/kijai/ComfyUI-CogVideoXWrapper) and will run on 20 GB+ of VRAM, or as little as 9 GB.

Github:

HuggingFace: