CoMoVi: Co-Generation of 3D Human Motions and Realistic Videos
Chengfeng Zhao1, Jiazhi Shu2, Yubo Zhao1, Tianyu Huang3, Jiahao Lu1, Zekai Gu1, Chengwei Ren1, Zhiyang Dou4, Qing Shuai5, Yuan Liu1

1HKUST 2SCUT 3CUHK 4MIT 5ZJU
Corresponding author
🚀 Getting Started
1. Environment Setup
```bash
conda create python=3.10 --name comovi
conda activate comovi
pip install torch==2.5.0 torchvision==0.20.0 torchaudio==2.5.0 --index-url https://download.pytorch.org/whl/cu121
pip install -r requirements.txt
pip install ninja
pip install flash_attn --no-build-isolation  # ==2.7.3 for CUDA < 12
pip install git+https://github.com/facebookresearch/detectron2
conda install -c fvcore -c iopath -c conda-forge fvcore iopath
pip install "git+https://github.com/facebookresearch/pytorch3d.git@stable"
```
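After installation, a quick way to confirm the heavy dependencies resolved correctly is to probe for them before running anything. This small helper is not part of the repository; `check_environment` is a hypothetical name, and the package list is inferred from the commands above:

```python
import importlib.util

# Packages the setup commands above are expected to provide (inferred, not official).
REQUIRED = ["torch", "torchvision", "torchaudio", "flash_attn", "detectron2", "pytorch3d"]

def check_environment(packages=REQUIRED):
    """Return a dict mapping each package name to whether it is importable.

    Uses find_spec so nothing is actually imported (avoids slow CUDA init).
    """
    return {name: importlib.util.find_spec(name) is not None for name in packages}
```

Running `check_environment()` and looking for `False` entries pinpoints which install step failed without triggering a full import.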
2. Inference
```bash
bash scripts/inference.sh
```
Explanation of arguments:
- `arch`
- `validation_file`: to be deleted
- `exp_name`: to be merged with `ckpt_at`
- `fps`: frame rate of the generated video; default is 16
- `frames`: frame count of the generated video; default is 81
- `height`: H of the generated video; default is 704
- `width`: W of the generated video; default is 1280
- `ckpt_at`: to be merged with `exp_name`
- `motion_type`: to be deleted
- `interaction`: to be deleted
- `interleave`: to be deleted
- `nodebug`: to be deleted
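The defaults above can be sanity-checked before launching a run. The helper below is a sketch, not part of the repository: `clip_summary` is a hypothetical name, and the divisibility-by-16 constraint is an assumption about the backbone's spatial patch factor, not something the authors state:

```python
def clip_summary(frames=81, fps=16, height=704, width=1280, divisor=16):
    """Summarize a generation config using the defaults listed above.

    `divisor` is an assumption: many diffusion video backbones require
    spatial dimensions divisible by a patch/latent factor such as 16.
    """
    if height % divisor or width % divisor:
        raise ValueError(f"height/width must be divisible by {divisor}")
    return {"duration_s": frames / fps, "resolution": f"{width}x{height}"}
```

With the defaults, this reports a clip of 81/16 ≈ 5.06 seconds at 1280x704.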
🔬 Training
1. Data Preparation
Install Blender
```bash
mkdir <dir_for_blender>
cd <dir_for_blender>
wget https://download.blender.org/release/Blender3.6/blender-3.6.0-linux-x64.tar.xz
xz -d blender-3.6.0-linux-x64.tar.xz
tar -xvf blender-3.6.0-linux-x64.tar
export PATH=<dir_for_blender>/blender-3.6.0-linux-x64:$PATH
```
Install CameraHMR
```bash
bash scripts/install_camerahmr.sh
```
Option-1: Download CoMoVi dataset
Coming soon.
Option-2: Prepare customized data step by step
Step-1: Estimate human motion from image frames
```bash
python -m prepare.step1_run_hmr
```
Step-2: Smooth framewise motion estimation
```bash
python -m prepare.step2_smooth
```
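The exact filter used by `prepare.step2_smooth` is not documented here; as an illustration of what framewise smoothing does, a simple centered moving average over one pose parameter might look like the sketch below. `moving_average` is a hypothetical stand-in, and the repository may well use a different filter (e.g. Gaussian or One-Euro):

```python
def moving_average(values, window=5):
    """Smooth a 1-D sequence of per-frame values with a centered moving average.

    Illustrative stand-in only: at sequence boundaries the window is
    truncated rather than padded, so edge frames average fewer neighbors.
    """
    half = window // 2
    out = []
    for i in range(len(values)):
        lo, hi = max(0, i - half), min(len(values), i + half + 1)
        out.append(sum(values[lo:hi]) / (hi - lo))
    return out
```

Applied independently to each SMPL parameter channel, such a filter suppresses the frame-to-frame jitter typical of per-frame HMR estimates.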
Step-3: Render 3D human motion to 2D motion representation
```bash
python -m prepare.step3_render_2d_morep
```
After the three steps above, your examples/ folder should have the following structure:
```
examples/
├── CameraHMR_smpl_results/           # raw HMR results
├── CameraHMR_smpl_results_overlay/   # raw HMR re-projection results for sanity check
├── CameraHMR_smpl_results_smoothed/  # smoothed HMR results
├── motion_2d_videos/                 # rendered 2D motion representation videos
└── rgb_videos/                       # RGB videos
```
Step-4: Normalize data to the native settings of Wan2.2 (resolution, fps, etc.)
```bash
python -m prepare.step4_normalize
```
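One part of such normalization is temporal resampling toward the model's native frame rate and clip length (16 fps, 81 frames per the inference defaults). The sketch below illustrates the idea only; `resample_indices` is a hypothetical name and the real `prepare.step4_normalize` may crop, pad, or interpolate differently:

```python
def resample_indices(n_frames, src_fps, dst_fps=16, max_frames=81):
    """Pick source-frame indices approximating dst_fps, capped at max_frames.

    Nearest-frame resampling: index i of the output maps to source frame
    round(i * src_fps / dst_fps). Sketch only, not the repository's code.
    """
    step = src_fps / dst_fps
    idx = [round(i * step) for i in range(int(n_frames / step))]
    return [i for i in idx if i < n_frames][:max_frames]
```

For example, a 60-frame clip recorded at 32 fps would keep every other frame to approximate 16 fps.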
Step-5: Caption the human motion in the videos
2. Train CoMoVi
```bash
bash scripts/wan2.2/train_puma_multinode_motion_branch_add_smpl.sh
```
Acknowledgments
Thanks to the following works that we refer to and benefit from:
- VideoX-Fun: the video generation model training framework;
- CameraHMR: the excellent SMPL estimation for pseudo labels;
- Champ: the data processing pipeline.
Citation
```bibtex
@article{zhao2026comovi,
  title={CoMoVi: Co-Generation of 3D Human Motions and Realistic Videos},
  author={Zhao, Chengfeng and Shu, Jiazhi and Zhao, Yubo and Huang, Tianyu and Lu, Jiahao and Gu, Zekai and Ren, Chengwei and Dou, Zhiyang and Shuai, Qing and Liu, Yuan},
  journal={arXiv preprint arXiv:2601.10632},
  year={2026}
}
```