赵智轩 committed
Commit 4f55799 · 1 Parent(s): 215cb9a

Update dataset card metadata and add license

Files changed (2)
  1. LICENSE +55 -0
  2. README.md +25 -18
LICENSE ADDED
@@ -0,0 +1,55 @@
+PerceptionComp Research License
+
+Copyright (c) 2026 PerceptionComp authors
+
+1. Scope
+
+This repository contains benchmark annotations, metadata, documentation, and may contain
+or reference benchmark video files. The annotation data and repository materials created
+by the PerceptionComp authors are made available under this license. Video files and
+other third-party materials may remain subject to the terms of their respective original
+sources.
+
+2. Permitted Use
+
+You may use, reproduce, and share the PerceptionComp benchmark materials for
+non-commercial research, evaluation, benchmarking, and academic publication, provided
+that you:
+
+- preserve this license notice
+- provide appropriate attribution to PerceptionComp
+- comply with any applicable terms governing the underlying video sources
+
+3. Restrictions
+
+You may not:
+
+- use the benchmark materials for unlawful purposes
+- use the benchmark materials for surveillance, identity recognition, or sensitive
+  attribute inference
+- represent modified evaluation protocols or altered benchmark settings as official
+  PerceptionComp results
+- redistribute any third-party video content except as permitted by the corresponding
+  source terms and applicable law
+- use the benchmark materials for commercial purposes without prior written permission
+  from the rights holders
+
+4. Third-Party Content
+
+To the extent this repository includes or references videos or other materials obtained
+from third-party sources, those materials are provided only for research and evaluation
+use as allowed by the original source terms. You are responsible for verifying that your
+use complies with those terms and with applicable law.
+
+5. No Warranty
+
+THE BENCHMARK MATERIALS ARE PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO WARRANTIES OF MERCHANTABILITY, FITNESS FOR A
+PARTICULAR PURPOSE, TITLE, AND NON-INFRINGEMENT.
+
+6. Limitation of Liability
+
+IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES, OR
+OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT, OR OTHERWISE, ARISING FROM, OUT
+OF, OR IN CONNECTION WITH THE BENCHMARK MATERIALS OR THE USE OR OTHER DEALINGS IN THE
+BENCHMARK MATERIALS.
README.md CHANGED
@@ -1,9 +1,10 @@
 ---
 pretty_name: PerceptionComp
 license: other
+license_name: PerceptionComp Research License
+license_link: LICENSE
 task_categories:
 - video-question-answering
-- multiple-choice
 language:
 - en
 tags:
@@ -13,12 +14,13 @@ tags:
 - reasoning
 - video-understanding
 - evaluation
+- multiple-choice
 size_categories:
 - 1K<n<10K
 configs:
 - config_name: default
   data_files:
-  - split: train
+  - split: test
     path: questions.json
 ---
 
@@ -32,18 +34,18 @@ PerceptionComp is a benchmark for complex perception-centric video reasoning. It
 
 PerceptionComp contains 1,114 manually annotated five-choice questions associated with 273 videos. The benchmark covers seven categories: outdoor tour, shopping, sport, variety show, home tour, game, and movie.
 
-This Hugging Face dataset repository hosts the benchmark videos. The official annotation file, evaluation code, and model integration examples are maintained in the GitHub repository:
+This Hugging Face dataset repository is intended to host the benchmark videos together with a viewer-friendly annotation file, `questions.json`, for Dataset Preview and Data Studio. The canonical annotation source, evaluation code, and model integration examples are maintained in the official GitHub repository:
 
 - GitHub repository: https://github.com/hrinnnn/PerceptionComp
 
-- **Curated by:** PerceptionComp authors
-- **Language(s) (NLP):** English
-- **License:** Please replace `other` in the metadata above with the final data license before public release if a more specific license applies.
+- Curated by: PerceptionComp authors
+- Language(s): English
+- License: PerceptionComp Research License
 
 ### Dataset Sources
 
-- **Repository:** https://github.com/hrinnnn/PerceptionComp
-<!-- - **Paper:** Add the public paper link here when available. -->
+- Repository: https://github.com/hrinnnn/PerceptionComp
+- Paper: https://arxiv.org/abs/2603.26653
 
 ## Uses
 
@@ -78,10 +80,6 @@ Each benchmark question is associated with:
 - one semantic category
 - one difficulty label
 
-The official annotation file is maintained in the GitHub repository:
-
-- `benchmark/annotations/1-1114.json`
-
 Core fields in each annotation item:
 
 - `key`: question identifier
@@ -95,7 +93,13 @@ Core fields in each annotation item:
 
 ### Data Files
 
-The Hugging Face dataset stores the benchmark videos. The official evaluation code prepares them into the following local layout:
+This upload bundle contains:
+
+- `questions.json`: root-level annotation file used by Hugging Face Dataset Preview and Data Studio
+- `README.md`: Hugging Face dataset card
+- `LICENSE`: custom research-use terms for the benchmark materials
+
+The official evaluation code prepares videos into the following local layout:
 
 ```text
 benchmark/videos/<video_id>.mp4
@@ -112,9 +116,9 @@ python scripts/download_data.py --repo-id hrinnnn/PerceptionComp
 
 ### Data Splits
 
-The current public release uses one official evaluation set:
+The current public release uses one official evaluation split:
 
-- `1-1114.json`: 1,114 multiple-choice questions over 273 videos
+- `test`: 1,114 multiple-choice questions over 273 videos, exposed through `questions.json`
 
 ## Dataset Creation
 
@@ -156,7 +160,7 @@ The annotations were created by the PerceptionComp project team.
 
 The videos may contain people, faces, voices, public scenes, or other naturally occurring visual content. The dataset is intended for research evaluation, not for identity inference or sensitive attribute prediction.
 
-### Recommendations
+## Recommendations
 
 Users should:
 
@@ -167,13 +171,16 @@ Users should:
 
 ## Citation
 
-If you use PerceptionComp, please cite the project paper when it is publicly available.
+If you use PerceptionComp, please cite the project paper:
 
 ```bibtex
 @misc{perceptioncomp2026,
   title={PerceptionComp},
   author={PerceptionComp Authors},
   year={2026},
+  eprint={2603.26653},
+  archivePrefix={arXiv},
+  primaryClass={cs.CV},
   howpublished={Hugging Face dataset and GitHub repository}
 }
 ```
@@ -201,4 +208,4 @@ python evaluate/evaluate.py \
 
 ## Dataset Card Authors
 
-PerceptionComp authors
+PerceptionComp authors