CogVideoX: Text-to-Video Diffusion Models with An Expert Transformer
Paper: arXiv:2408.06072
Read in Chinese | Hugging Face Space | GitHub | arXiv
CogVideoX is an open-source version of the video generation model that powers QingYing. The table below lists the video generation models we currently offer, along with their key specifications.
| Model Name | CogVideoX-2B | CogVideoX-5B (This Repository) |
|---|---|---|
| Model Description | Entry-level model, balancing compatibility. Low cost for running and secondary development. | Larger model with higher video generation quality and better visual effects. |
| Inference Precision | FP16\* (recommended), BF16, FP32, FP8\*, INT8; INT4 not supported | BF16 (recommended), FP16, FP32, FP8\*, INT8; INT4 not supported |
| Single-GPU Inference VRAM Consumption | FP16: 18 GB using SAT / 12.5 GB\* using diffusers<br>INT8: 7.8 GB\* using diffusers | BF16: 26 GB using SAT / 20.7 GB\* using diffusers<br>INT8: 11.4 GB\* using diffusers |
| Multi-GPU Inference VRAM Consumption | FP16: 10 GB\* using diffusers | BF16: 15 GB\* using diffusers |
| Inference Speed (steps = 50, FP/BF16) | Single A100: ~90 seconds<br>Single H100: ~45 seconds | Single A100: ~180 seconds<br>Single H100: ~90 seconds |
| Fine-tuning Precision | FP16 | BF16 |
| Fine-tuning VRAM Consumption (per GPU) | 47 GB (bs=1, LoRA)<br>61 GB (bs=2, LoRA)<br>62 GB (bs=1, SFT) | 63 GB (bs=1, LoRA)<br>80 GB (bs=2, LoRA)<br>75 GB (bs=1, SFT) |
| Prompt Language | English\* | English\* |
| Prompt Length Limit | 226 tokens | 226 tokens |
| Video Length | 6 seconds | 6 seconds |
| Frame Rate | 8 frames per second | 8 frames per second |
| Video Resolution | 720 x 480; other resolutions not supported (including fine-tuning) | 720 x 480; other resolutions not supported (including fine-tuning) |
| Positional Encoding | 3d_sincos_pos_embed | 3d_rope_pos_embed |
Data Explanation

- When testing with the `diffusers` library, the `enable_model_cpu_offload()` option and the `pipe.vae.enable_tiling()` optimization were enabled. This configuration has not been tested for actual VRAM/memory usage on devices other than NVIDIA A100/H100. Generally, it can be adapted to all devices with NVIDIA Ampere architecture and above. If these optimizations are disabled, VRAM usage increases significantly, with peak VRAM roughly 3x the values in the table.
- For multi-GPU inference, the `enable_model_cpu_offload()` optimization must be disabled.
- The 2B model is trained in FP16 precision, while the 5B model is trained in BF16 precision. We recommend running inference in the precision used during training.
- FP8 precision requires an NVIDIA H100 or newer device, as well as source installations of the `torch`, `torchao`, `diffusers`, and `accelerate` Python packages. CUDA 12.4 is recommended.
- Only the `diffusers` version of the model supports quantization.

Note
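To make the multi-GPU note above concrete, here is a minimal, hypothetical sketch. It assumes a `diffusers` version that supports the pipeline-level `device_map="balanced"` option, which places whole pipeline components (text encoder, transformer, VAE) on different GPUs; the prompt is illustrative.

```python
import torch
from diffusers import CogVideoXPipeline

# Hypothetical multi-GPU sketch: "balanced" distributes whole pipeline
# components across the visible GPUs instead of offloading to CPU.
pipe = CogVideoXPipeline.from_pretrained(
    "THUDM/CogVideoX-5b",
    torch_dtype=torch.bfloat16,
    device_map="balanced",
)
pipe.vae.enable_tiling()

# Per the note above, enable_model_cpu_offload() must NOT be called here.
video = pipe(prompt="A panda playing a guitar in a bamboo forest").frames[0]
```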
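The quantization note can likewise be sketched in code. The snippet below is an illustrative, unverified example of INT8 weight-only quantization of the transformer via `torchao`'s `quantize_` API; exact APIs and memory savings may vary across `torchao` and `diffusers` versions.

```python
import torch
from diffusers import CogVideoXPipeline, CogVideoXTransformer3DModel
from torchao.quantization import quantize_, int8_weight_only

# Illustrative sketch: load the transformer alone so it can be quantized
# before assembling the pipeline (INT8 weight-only, per the table above).
transformer = CogVideoXTransformer3DModel.from_pretrained(
    "THUDM/CogVideoX-5b", subfolder="transformer", torch_dtype=torch.bfloat16
)
quantize_(transformer, int8_weight_only())

pipe = CogVideoXPipeline.from_pretrained(
    "THUDM/CogVideoX-5b", transformer=transformer, torch_dtype=torch.bfloat16
)
pipe.enable_model_cpu_offload()
pipe.vae.enable_tiling()
```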
This model can be deployed using the Hugging Face `diffusers` library by following the steps below.
We recommend visiting our GitHub for the relevant prompt optimization and conversion tools to get a better experience.
```shell
# diffusers>=0.30.1
# transformers>=4.44.2
# accelerate>=0.33.0 (suggested: install from source)
# imageio-ffmpeg>=0.5.1
pip install --upgrade transformers accelerate diffusers imageio-ffmpeg
```
```python
import torch
from diffusers import CogVideoXPipeline
from diffusers.utils import export_to_video

prompt = "A panda, dressed in a small, red jacket and a tiny hat, sits on a wooden stool in a serene bamboo forest. The panda's fluffy paws strum a miniature acoustic guitar, producing soft, melodic tunes. Nearby, a few other pandas gather, watching curiously and some clapping in rhythm. Sunlight filters through the tall bamboo, casting a gentle glow on the scene. The panda's face is expressive, showing concentration and joy as it plays. The background includes a small, flowing stream and vibrant green foliage, enhancing the peaceful and magical atmosphere of this unique musical performance."

pipe = CogVideoXPipeline.from_pretrained(
    "THUDM/CogVideoX-5b",
    torch_dtype=torch.bfloat16
)

# Offload submodules to CPU between forward passes and decode the VAE in
# tiles to reduce peak VRAM (see the Data Explanation section above).
pipe.enable_model_cpu_offload()
pipe.vae.enable_tiling()

video = pipe(
    prompt=prompt,
    num_videos_per_prompt=1,
    num_inference_steps=50,
    num_frames=49,  # 49 frames at 8 fps is roughly 6 seconds of video
    guidance_scale=6,
    generator=torch.Generator(device="cuda").manual_seed(42),  # fixed seed for reproducibility
).frames[0]

export_to_video(video, "output.mp4", fps=8)
```
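With `num_frames=49` exported at `fps=8`, the resulting clip runs 49 / 8, or roughly 6 seconds, matching the video length listed in the table above.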
You are welcome to visit our GitHub, where you will find further usage details, including the prompt optimization and conversion tools mentioned above.
This model is released under the CogVideoX LICENSE.
```bibtex
@article{yang2024cogvideox,
  title={CogVideoX: Text-to-Video Diffusion Models with An Expert Transformer},
  author={Yang, Zhuoyi and Teng, Jiayan and Zheng, Wendi and Ding, Ming and Huang, Shiyu and Xu, Jiazheng and Yang, Yuanming and Hong, Wenyi and Zhang, Xiaohan and Feng, Guanyu and others},
  journal={arXiv preprint arXiv:2408.06072},
  year={2024}
}
```