---
pretty_name: PerceptionComp
license: other
license_name: perceptioncomp-research-license
license_link: LICENSE
task_categories:
- visual-question-answering
- multiple-choice
language:
- en
tags:
- video
- benchmark
- multimodal
- reasoning
- video-understanding
- evaluation
- multiple-choice
size_categories:
- 1K<n<10K
configs:
- config_name: default
data_files:
- split: test
path: questions.json
---
# PerceptionComp: A Benchmark for Complex Perception-Centric Video Reasoning
<a href="https://arxiv.org/abs/2603.26653">
<img src="https://img.shields.io/badge/Paper-arXiv-B31B1B?logo=arxiv&logoColor=white" alt="Paper">
</a>
<a href="https://shaoxuanli.github.io/PerceptionComp.github.io/">
<img src="https://img.shields.io/badge/Website-Project%20Page-0A7F5A" alt="Website">
</a>
<a href="https://github.com/hrinnnn/PerceptionComp">
<img src="https://img.shields.io/badge/GitHub-Repository-181717?logo=github&logoColor=white" alt="GitHub">
</a>
PerceptionComp is a benchmark for complex perception-centric video reasoning. It focuses on questions that cannot be solved from a single frame, a short clip, or a shallow caption. Models must revisit visually complex videos, gather evidence across temporally separated segments, and combine multiple perceptual cues before answering.
## Dataset Details
### Dataset Description
PerceptionComp contains 1,114 manually annotated five-choice questions associated with 273 referenced video IDs. The benchmark covers seven categories: outdoor tour, shopping, sport, variety show, home tour, game, and movie.
This Hugging Face dataset repository hosts the benchmark videos together with a viewer-friendly annotation file, `questions.json`, used by Dataset Preview and Data Studio. The canonical annotation source, evaluation code, and model integration examples are maintained in the official GitHub repository:
- GitHub repository: https://github.com/hrinnnn/PerceptionComp
- Curated by: PerceptionComp authors
- Language(s): English
- License: PerceptionComp Research License
### Dataset Sources
- Repository: https://github.com/hrinnnn/PerceptionComp
- Paper: https://arxiv.org/abs/2603.26653
## Uses
### Direct Use
PerceptionComp is intended for:
- benchmarking video-language models on complex perception-centric reasoning
- evaluating long-horizon and multi-evidence video understanding
- comparing proprietary and open-source multimodal models under a unified protocol
Users are expected to download the videos from this Hugging Face dataset and run evaluation with the code in the official GitHub repository.
### Out-of-Scope Use
PerceptionComp is not intended for:
- unrestricted commercial redistribution of hosted videos when original source terms do not allow it
- surveillance, identity inference, or sensitive attribute prediction
- modifying the benchmark protocol and reporting those results as directly comparable official scores
## Evaluation Workflow
The Hugging Face repository hosts the benchmark videos and the viewer-friendly test annotations. The evaluation code lives in the GitHub repository and follows this workflow:
### Step 1. Clone the Repository
```bash
git clone https://github.com/hrinnnn/PerceptionComp.git
cd PerceptionComp
```
### Step 2. Install Dependencies
```bash
pip install -r requirements.txt
```
### Step 3. Download the Benchmark Videos
```bash
python3 scripts/download_data.py --repo-id hrinnnn/PerceptionComp
```
If the Hugging Face dataset requires authentication:
```bash
python3 scripts/download_data.py \
--repo-id hrinnnn/PerceptionComp \
--hf-token YOUR_HF_TOKEN
```
The download helper fetches video files from the Hugging Face `data/` directory, flattens them into `benchmark/videos/`, and validates the required `video_id` set against `benchmark/annotations/1-1114.json`.
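The `video_id` validation step can also be reproduced locally, for example to re-check a partially completed download. Below is a minimal Python sketch, assuming the annotation file is a JSON list of items that each carry a `video_id` (the helper's actual checks may differ):

```python
import json
from pathlib import Path


def missing_videos(annotation_path: str, video_dir: str) -> set:
    """Return the required video IDs that have no matching file in video_dir.

    Assumes the annotation file is a JSON list of dicts with a `video_id`
    key, and that each video is stored as `<video_id>.<ext>` (any extension).
    """
    with open(annotation_path, encoding="utf-8") as f:
        items = json.load(f)
    required = {item["video_id"] for item in items}
    # Path.stem drops the final extension, so "abc.mp4" matches video_id "abc".
    present = {p.stem for p in Path(video_dir).iterdir() if p.is_file()}
    return required - present
```

An empty return value means every referenced video is present; any remaining IDs indicate files that still need to be downloaded.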
### Step 4. Run Evaluation
OpenAI-compatible API example:
```bash
python3 evaluate/evaluate.py \
--model YOUR_MODEL_NAME \
--provider api \
--api-key YOUR_API_KEY \
--base-url YOUR_BASE_URL \
--video-dir benchmark/videos
```
Gemini example:
```bash
python3 evaluate/evaluate.py \
--model YOUR_GEMINI_MODEL_NAME \
--provider gemini \
--api-key YOUR_GEMINI_API_KEY \
--video-dir benchmark/videos
```
### Step 5. Check the Outputs
Evaluation outputs are written to:
```text
evaluate/results/Results-<model>.json
evaluate/results/Results-<model>.csv
```
## Dataset Structure
### Data Instances
Each benchmark question is associated with:
- one `video_id`
- one multiple-choice question
- five answer options
- one correct answer
- one semantic category
- one difficulty label
Core fields in each annotation item:
- `key`: question identifier
- `video_id`: video filename stem without `.mp4`
- `question`: question text
- `answer_choice_0` to `answer_choice_4`: five answer options
- `answer_id`: zero-based index of the correct option
- `answer`: text form of the correct answer
- `category`: semantic category
- `difficulty`: difficulty label
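The field contract above can be checked programmatically before running an evaluation. The `check_item` helper below is an illustrative sketch, not part of the official tooling; it validates only the fields listed on this card:

```python
# Fields listed on this card for each annotation item.
REQUIRED_FIELDS = (
    ["key", "video_id", "question", "answer_id", "answer",
     "category", "difficulty"]
    + [f"answer_choice_{i}" for i in range(5)]
)


def check_item(item: dict) -> None:
    """Raise ValueError if a questions.json item violates the field contract."""
    missing = [f for f in REQUIRED_FIELDS if f not in item]
    if missing:
        raise ValueError(f"missing fields: {missing}")
    idx = item["answer_id"]
    # answer_id is the zero-based index of the correct option.
    if not (isinstance(idx, int) and 0 <= idx <= 4):
        raise ValueError(f"answer_id out of range: {idx!r}")
    # answer is the text form of the correct answer, so it should match
    # the option that answer_id points at.
    if item["answer"] != item[f"answer_choice_{idx}"]:
        raise ValueError("answer text does not match the indexed choice")
```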
### Data Files
This Hugging Face dataset repository contains:
- `questions.json`: root-level annotation file used by Hugging Face Dataset Preview and Data Studio
- `data/<video_id>.<ext>`: benchmark video files downloaded by the official helper script
- `README.md`: Hugging Face dataset card
- `LICENSE`: custom research-use terms for the benchmark materials
The canonical annotation file used by the evaluator remains:
- `benchmark/annotations/1-1114.json` in the GitHub repository
The official evaluation code prepares videos into the following local layout:
```text
benchmark/videos/<video_id>.mp4
```
Use the official download script from the GitHub repository:
```bash
git clone https://github.com/hrinnnn/PerceptionComp.git
cd PerceptionComp
pip install -r requirements.txt
python3 scripts/download_data.py --repo-id hrinnnn/PerceptionComp
```
If your environment provides `python` instead of `python3`, substitute that alias consistently in the commands on this card.
### Data Splits
The current public release uses one official evaluation split:
- `test`: 1,114 multiple-choice questions over 273 referenced video IDs, exposed through `questions.json`
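Once `questions.json` is downloaded, the split statistics can be sanity-checked with a short script. This sketch assumes the file is a JSON list of annotation items with a `video_id` field; for this release the expected values are 1,114 questions and 273 unique video IDs:

```python
import json


def split_stats(path: str):
    """Return (num_questions, num_unique_video_ids) for an annotation file."""
    with open(path, encoding="utf-8") as f:
        items = json.load(f)
    return len(items), len({item["video_id"] for item in items})
```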
## Dataset Creation
### Curation Rationale
PerceptionComp was created to evaluate a failure mode that is not well covered by simpler video benchmarks: questions that require models to combine multiple perceptual constraints over time instead of relying on a single salient frame or a short summary.
### Source Data
The benchmark uses real-world videos paired with manually written multiple-choice questions.
#### Data Collection and Processing
Videos were collected and organized for benchmark evaluation. Annotation authors then wrote perception-centric multiple-choice questions for the selected videos. Each question was designed to require visual evidence from the video rather than simple prior knowledge or caption-level shortcuts.
The release process includes:
- associating each question with a `video_id`
- formatting each sample as a five-choice multiple-choice item
- assigning semantic categories
- assigning difficulty labels
- consolidating the release into one official annotation file
#### Who are the source data producers?
The underlying videos may originate from third-party public sources. The benchmark annotations were created by the PerceptionComp authors and collaborators.
### Annotations
#### Annotation Process
PerceptionComp contains 1,114 manually annotated five-choice questions. Questions were written to test perception-centric reasoning over videos rather than single-frame recognition alone.
#### Who are the annotators?
The annotations were created by the PerceptionComp project team.
#### Personal and Sensitive Information
The videos may contain people, faces, voices, public scenes, or other naturally occurring visual content. The dataset is intended for research evaluation, not for identity inference or sensitive attribute prediction.
## Recommendations
Users should:
- report results with the official evaluation code
- avoid changing prompts, parsing rules, or metrics when claiming benchmark numbers
- verify that their usage complies with the terms of the original video sources
- avoid using the dataset for surveillance, identity recognition, or sensitive attribute inference
## Citation
If you use PerceptionComp, please cite the project paper:
```bibtex
@misc{perceptioncomp2026,
title={PerceptionComp},
author={PerceptionComp Authors},
year={2026},
eprint={2603.26653},
archivePrefix={arXiv},
primaryClass={cs.CV},
howpublished={Hugging Face dataset and GitHub repository}
}
```
## More Information
Official evaluation code and documentation:
- GitHub: https://github.com/hrinnnn/PerceptionComp
Example evaluation workflow:
```bash
git clone https://github.com/hrinnnn/PerceptionComp.git
cd PerceptionComp
pip install -r requirements.txt
python3 scripts/download_data.py --repo-id hrinnnn/PerceptionComp
python3 evaluate/evaluate.py \
--model YOUR_MODEL_NAME \
--provider api \
--api-key YOUR_API_KEY \
--base-url YOUR_BASE_URL \
--video-dir benchmark/videos
```
## Dataset Card Authors
PerceptionComp authors