nielsr (HF Staff) committed
Commit 16677b9 · verified · 1 Parent(s): 94bd8dd

Add image-to-video task category and improve dataset card structure


Hi! I'm Niels, part of the community science team at Hugging Face.

I've opened this PR to improve the dataset card for OmniVCus-Test. Specifically, I've:
- Added `task_categories: image-to-video` to the metadata to improve discoverability.
- Consolidated the links to the project page, paper, and code at the top for easier navigation.
- Organized the related Hugging Face artifacts (training data and models) for better visibility.

This ensures the dataset is correctly categorized and easier for researchers to use.
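
As a quick, illustrative pointer (not part of the card changes themselves): the test files can be pulled with `huggingface_hub`, filtering to one of the subsets listed in the card's data-overview table. The pattern below targets the instruction-editing split; swap it for any other path in the table.

```python
# Illustrative sketch: download one OmniVCus-Test subset with huggingface_hub.
# The pattern below is the instruction-editing split from the data-overview
# table; swap it for any other path listed there.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="CaiYuanhao/OmniVCus-Test",
    repo_type="dataset",
    allow_patterns=["instruct_edit/single_subject/*"],
)
print(f"Files downloaded to: {local_dir}")
```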

Files changed (1)
  1. README.md +17 -42
README.md CHANGED
@@ -1,7 +1,7 @@
---
- license: apache-2.0
language:
- en
+ license: apache-2.0
size_categories:
- n<1K
modalities:
@@ -9,17 +9,23 @@ modalities:
- image
- text
arxiv: 2506.23361
+ task_categories:
+ - image-to-video
+ tags:
+ - video-customization
+ - subject-driven
+ - multi-modal
---

- # [NeurIPS 2025] OmniVCus: Feedforward Subject-driven Video Customization with Multimodal Control Conditions
+ # OmniVCus: Feedforward Subject-driven Video Customization with Multimodal Control Conditions

- ## Dataset Description
+ [**Project Page**](https://caiyuanhao1998.github.io/project/OmniVCus/) | [**Paper**](https://huggingface.co/papers/2506.23361) | [**Code**](https://github.com/caiyuanhao1998/Open-OmniVCus) | [**Models**](https://huggingface.co/CaiYuanhao/OmniVCus)

- This is a testing dataset for multi-modal control video generation. It contains 648 manually collected and annotated data samples to support
- reference-to-video, reference-mask-to-video, reference-depth-to-video, and reference-instruction-to-video customization.
+ ## Dataset Description

- Here is the data overview:
+ OmniVCus-Test is a testing dataset for multi-modal control video generation. It contains 648 manually collected and annotated data samples to support reference-to-video, reference-mask-to-video, reference-depth-to-video, and reference-instruction-to-video customization.

+ ### Data Overview

| Task | #Subject | Number | Path |
|:----:|:--------:|:------:|:----:|
@@ -32,48 +38,17 @@ Here is the data overview:
| Reference-depth-to-video | 3 | 40 | [`depth/three_subject`](https://huggingface.co/datasets/CaiYuanhao/OmniVCus-Test/tree/main/depth/multiple_subject) |
| Reference-instruction-to-video | 1 | 113 | [`instruct_edit/single_subject`](https://huggingface.co/datasets/CaiYuanhao/OmniVCus-Test/tree/main/instruct_edit/single_subject) |

- ## Training Data Link
-
- We also release a training dataset on HuggingFace at
-
- https://huggingface.co/datasets/CaiYuanhao/OmniVCus-Train
-
-
- ## Github Code Link
-
- This dataset is intended to be used together with our code. Please refer to the GitHub repository below for more detailed instructions.
-
- https://github.com/caiyuanhao1998/Open-OmniVCus
-
-
- ## Huggingface Model Link
-
- We also release three models based on Wan2.1-1.3B, Wan2.1-14B, and Wan2.2-14B in the following link:
-
- https://huggingface.co/CaiYuanhao/OmniVCus
-
-
- ## Project Page Link
-
- For more video customization results, please refer to our project page:
-
- https://caiyuanhao1998.github.io/project/OmniVCus/
-
-
- ## Arxiv Paper Link
-
- For more technical details, please refer to our NeurIPS 2025 paper:
-
- https://arxiv.org/abs/2506.23361
-
-
+ ## Related Resources

+ - **Training Data:** [OmniVCus-Train](https://huggingface.co/datasets/CaiYuanhao/OmniVCus-Train)
+ - **Hugging Face Models:** [OmniVCus Models](https://huggingface.co/CaiYuanhao/OmniVCus) (Wan2.1/2.2-based)
+ - **GitHub Repository:** [Open-OmniVCus](https://github.com/caiyuanhao1998/Open-OmniVCus)

## Citation

If you find our code, data, and models useful, please consider citing our paper:

- ```sh
+ ```bibtex
@inproceedings{omnivcus,
title={OmniVCus: Feedforward Subject-driven Video Customization with Multimodal Control Conditions},
author={Yuanhao Cai and He Zhang and Xi Chen and Jinbo Xing and Kai Zhang and Yiwei Hu and Yuqian Zhou and Zhifei Zhang and Soo Ye Kim and Tianyu Wang and Yulun Zhang and Xiaokang Yang and Zhe Lin and Alan Yuille},