
# [NeurIPS 2025] OmniVCus: Feedforward Subject-driven Video Customization with Multimodal Control Conditions

## Dataset Description

This is a test dataset for multimodal-control video generation. It contains 648 manually collected and annotated samples supporting reference-to-video, reference-mask-to-video, reference-depth-to-video, and reference-instruction-to-video customization.

Here is the data overview (a download sketch follows the table):

| Task | #Subjects | #Samples | Path |
| --- | --- | --- | --- |
| Reference-to-video | 1 | 113 | `reference/single_subject` |
| Reference-to-video | 2 | 76 | `reference/double_subject` |
| Reference-to-video | 3 | 74 | `reference/three_subject` |
| Reference-to-video | 4 | 56 | `reference/four_subject` |
| Reference-mask-to-video | 1 | 68 | `mask/single_subject` |
| Reference-depth-to-video | 1 | 108 | `depth/single_subject` |
| Reference-depth-to-video | 3 | 40 | `depth/three_subject` |
| Reference-instruction-to-video | 1 | 113 | `instruct_edit/single_subject` |
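
Individual task splits can be fetched selectively with the `huggingface_hub` client. The sketch below is a minimal example, assuming the dataset repository id `CaiYuanhao/OmniVCus-Test`; swap `allow_patterns` for any `Path` value from the table above.

```python
# Minimal sketch: download one task split of this test set.
# Assumption: the dataset repo id is CaiYuanhao/OmniVCus-Test; adjust
# allow_patterns to any "Path" value from the overview table.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="CaiYuanhao/OmniVCus-Test",
    repo_type="dataset",
    allow_patterns=["reference/single_subject/**"],
)
print(f"Single-subject reference-to-video samples downloaded to: {local_dir}")
```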

## Training Data Link

We also release a training dataset on Hugging Face:

https://huggingface.co/datasets/CaiYuanhao/OmniVCus-Train

## GitHub Code Link

This dataset is intended to be used together with our code. Please refer to the GitHub repository below for more detailed instructions.

https://github.com/caiyuanhao1998/Open-OmniVCus

## Hugging Face Model Link

We also release three models, based on Wan2.1-1.3B, Wan2.1-14B, and Wan2.2-14B, at the following link:

https://huggingface.co/CaiYuanhao/OmniVCus
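
The checkpoints can be pulled the same way as the data. This is a sketch only, assuming the model repository id `CaiYuanhao/OmniVCus` from the link above; the on-disk layout of the three variants is not specified here, so see the GitHub repository for how to load them.

```python
# Minimal sketch: fetch the released OmniVCus checkpoints (the Wan2.1-1.3B,
# Wan2.1-14B, and Wan2.2-14B variants) from the model repo linked above.
# The internal folder layout is not documented here; refer to the GitHub
# repository for loading and inference instructions.
from huggingface_hub import snapshot_download

ckpt_dir = snapshot_download(repo_id="CaiYuanhao/OmniVCus")
print(f"Checkpoints downloaded to: {ckpt_dir}")
```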

## Project Page Link

For more video customization results, please refer to our project page:

https://caiyuanhao1998.github.io/project/OmniVCus/

## arXiv Paper Link

For more technical details, please refer to our NeurIPS 2025 paper:

https://arxiv.org/abs/2506.23361

## Citation

If you find our code, data, and models useful, please consider citing our paper:

```bibtex
@inproceedings{omnivcus,
  title={OmniVCus: Feedforward Subject-driven Video Customization with Multimodal Control Conditions},
  author={Yuanhao Cai and He Zhang and Xi Chen and Jinbo Xing and Kai Zhang and Yiwei Hu and Yuqian Zhou and Zhifei Zhang and Soo Ye Kim and Tianyu Wang and Yulun Zhang and Xiaokang Yang and Zhe Lin and Alan Yuille},
  booktitle={NeurIPS},
  year={2025}
}
```