---
pretty_name: PerceptionComp
license: other
license_name: perceptioncomp-research-license
license_link: LICENSE
task_categories:
- visual-question-answering
- multiple-choice
language:
- en
tags:
- video
- benchmark
- multimodal
- reasoning
- video-understanding
- evaluation
- multiple-choice
size_categories:
- 1K<n<10K
configs:
- config_name: default
  data_files:
  - split: test
    path: questions.json
---

# PerceptionComp: A Benchmark for Complex Perception-Centric Video Reasoning

<a href="https://arxiv.org/abs/2603.26653">
  <img src="https://img.shields.io/badge/Paper-arXiv-B31B1B?logo=arxiv&logoColor=white" alt="Paper">
</a>
<a href="https://shaoxuanli.github.io/PerceptionComp.github.io/">
  <img src="https://img.shields.io/badge/Website-Project%20Page-0A7F5A" alt="Website">
</a>
<a href="https://github.com/hrinnnn/PerceptionComp">
  <img src="https://img.shields.io/badge/GitHub-Repository-181717?logo=github&logoColor=white" alt="GitHub">
</a>

PerceptionComp is a benchmark for complex perception-centric video reasoning. It focuses on questions that cannot be solved from a single frame, a short clip, or a shallow caption. Models must revisit visually complex videos, gather evidence across temporally separated segments, and combine multiple perceptual cues before answering.

## Dataset Details

### Dataset Description

PerceptionComp contains 1,114 manually annotated five-choice questions associated with 273 referenced video IDs. The benchmark covers seven categories: outdoor tour, shopping, sport, variety show, home tour, game, and movie.

This Hugging Face dataset repository hosts the benchmark videos together with a viewer-friendly annotation file, `questions.json`, used by Dataset Preview and Data Studio. The canonical annotation source, evaluation code, and model integration examples are maintained in the official GitHub repository:

- GitHub repository: https://github.com/hrinnnn/PerceptionComp

- Curated by: PerceptionComp authors
- Language(s): English
- License: PerceptionComp Research License

### Dataset Sources

- Repository: https://github.com/hrinnnn/PerceptionComp
- Paper: https://arxiv.org/abs/2603.26653

## Uses

### Direct Use

PerceptionComp is intended for:

- benchmarking video-language models on complex perception-centric reasoning
- evaluating long-horizon and multi-evidence video understanding
- comparing proprietary and open-source multimodal models under a unified protocol

Users are expected to download the videos from this Hugging Face dataset and run evaluation with the official GitHub repository.

### Out-of-Scope Use

PerceptionComp is not intended for:

- unrestricted commercial redistribution of hosted videos when original source terms do not allow it
- surveillance, identity inference, or sensitive attribute prediction
- modifying the benchmark protocol and reporting those results as directly comparable official scores

## Evaluation Workflow

The Hugging Face repository hosts the benchmark videos and the viewer-friendly test annotations. The evaluation code lives in the GitHub repository and follows this workflow:

### Step 1. Clone the Repository

```bash
git clone https://github.com/hrinnnn/PerceptionComp.git
cd PerceptionComp
```

### Step 2. Install Dependencies

```bash
pip install -r requirements.txt
```

### Step 3. Download the Benchmark Videos

```bash
python3 scripts/download_data.py --repo-id hrinnnn/PerceptionComp
```

If the Hugging Face dataset requires authentication:

```bash
python3 scripts/download_data.py \
  --repo-id hrinnnn/PerceptionComp \
  --hf-token YOUR_HF_TOKEN
```

The download helper fetches video files from the Hugging Face `data/` directory, flattens them into `benchmark/videos/`, and validates the required `video_id` set against `benchmark/annotations/1-1114.json`.
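
This check can also be reproduced locally. The sketch below is not part of the official scripts; `missing_videos` is a hypothetical helper that assumes the annotation file is a JSON list of items carrying the `video_id` field described under Dataset Structure:

```python
import json
from pathlib import Path


def missing_videos(annotation_path: str, video_dir: str) -> set:
    """Return required video IDs that have no matching .mp4 file in video_dir."""
    items = json.loads(Path(annotation_path).read_text())
    required = {item["video_id"] for item in items}
    present = {p.stem for p in Path(video_dir).glob("*.mp4")}
    return required - present
```

An empty return value means every `video_id` referenced by the annotations has a corresponding file in `benchmark/videos/`.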

### Step 4. Run Evaluation

OpenAI-compatible API example:

```bash
python3 evaluate/evaluate.py \
  --model YOUR_MODEL_NAME \
  --provider api \
  --api-key YOUR_API_KEY \
  --base-url YOUR_BASE_URL \
  --video-dir benchmark/videos
```

Gemini example:

```bash
python3 evaluate/evaluate.py \
  --model YOUR_GEMINI_MODEL_NAME \
  --provider gemini \
  --api-key YOUR_GEMINI_API_KEY \
  --video-dir benchmark/videos
```

### Step 5. Check the Outputs

Evaluation outputs are written to:

```text
evaluate/results/Results-<model>.json
evaluate/results/Results-<model>.csv
```
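
The exact schema of these files is defined by `evaluate/evaluate.py`. As a hedged sketch only, assuming each JSON record exposes a ground-truth `answer_id` and a parsed `predicted_id` (hypothetical field names; check the actual output before relying on them), overall accuracy could be recomputed as:

```python
def accuracy(records):
    """Fraction of records whose predicted option index matches the ground truth.

    Field names `predicted_id` and `answer_id` are assumptions about the
    results schema, not guaranteed by the official evaluator.
    """
    if not records:
        return 0.0
    correct = sum(r["predicted_id"] == r["answer_id"] for r in records)
    return correct / len(records)
```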

## Dataset Structure

### Data Instances

Each benchmark question is associated with:

- one `video_id`
- one multiple-choice question
- five answer options
- one correct answer
- one semantic category
- one difficulty label

Core fields in each annotation item:

- `key`: question identifier
- `video_id`: video filename stem without `.mp4`
- `question`: question text
- `answer_choice_0` to `answer_choice_4`: five answer options
- `answer_id`: zero-based index of the correct option
- `answer`: text form of the correct answer
- `category`: semantic category
- `difficulty`: difficulty label
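
To illustrate how these fields fit together (the official prompt template lives in the GitHub repository; this is only an illustrative sketch):

```python
def format_question(item):
    """Render one annotation item as a lettered five-choice prompt."""
    lines = [item["question"]]
    for i, letter in enumerate("ABCDE"):
        lines.append(f"{letter}. {item[f'answer_choice_{i}']}")
    return "\n".join(lines)


def correct_letter(item):
    """Map the zero-based answer_id to its option letter."""
    return "ABCDE"[item["answer_id"]]
```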

### Data Files

This Hugging Face dataset repository contains:

- `questions.json`: root-level annotation file used by Hugging Face Dataset Preview and Data Studio
- `data/<video_id>.<ext>`: benchmark video files downloaded by the official helper script
- `README.md`: Hugging Face dataset card
- `LICENSE`: custom research-use terms for the benchmark materials

The canonical annotation file used by the evaluator remains:

- `benchmark/annotations/1-1114.json` in the GitHub repository

The official evaluation code prepares videos into the following local layout:

```text
benchmark/videos/<video_id>.mp4
```

Use the official download script from the GitHub repository:

```bash
git clone https://github.com/hrinnnn/PerceptionComp.git
cd PerceptionComp
pip install -r requirements.txt
python3 scripts/download_data.py --repo-id hrinnnn/PerceptionComp
```

If your environment provides `python` instead of `python3`, substitute that alias consistently in all commands in this card.

### Data Splits

The current public release uses one official evaluation split:

- `test`: 1,114 multiple-choice questions over 273 referenced video IDs, exposed through `questions.json`

## Dataset Creation

### Curation Rationale

PerceptionComp was created to evaluate a failure mode that is not well covered by simpler video benchmarks: questions that require models to combine multiple perceptual constraints over time instead of relying on a single salient frame or a short summary.

### Source Data

The benchmark uses real-world videos paired with manually written multiple-choice questions.

#### Data Collection and Processing

Videos were collected and organized for benchmark evaluation. Annotation authors then wrote perception-centric multiple-choice questions for the selected videos. Each question was designed to require visual evidence from the video rather than simple prior knowledge or caption-level shortcuts.

The release process includes:

- associating each question with a `video_id`
- formatting each sample as a five-choice multiple-choice item
- assigning semantic categories
- assigning difficulty labels
- consolidating the release into one official annotation file
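
These formatting constraints can be checked mechanically. The validator below is a sketch, not part of the official release tooling, and assumes the field names listed under Dataset Structure:

```python
def validate_item(item):
    """Return a list of consistency problems for one item; empty means OK."""
    problems = []
    options = [item.get(f"answer_choice_{i}") for i in range(5)]
    if any(option is None for option in options):
        problems.append("expected five options: answer_choice_0 ... answer_choice_4")
    answer_id = item.get("answer_id")
    if not isinstance(answer_id, int) or not 0 <= answer_id <= 4:
        problems.append("answer_id must be a zero-based index between 0 and 4")
    elif options[answer_id] is not None and item.get("answer") != options[answer_id]:
        problems.append("answer text does not match the option at answer_id")
    return problems
```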

#### Who are the source data producers?

The underlying videos may originate from third-party public sources. The benchmark annotations were created by the PerceptionComp authors and collaborators.

### Annotations

#### Annotation Process

PerceptionComp contains 1,114 manually annotated five-choice questions. Questions were written to test perception-centric reasoning over videos rather than single-frame recognition alone.

#### Who are the annotators?

The annotations were created by the PerceptionComp project team.

#### Personal and Sensitive Information

The videos may contain people, faces, voices, public scenes, or other naturally occurring visual content. The dataset is intended for research evaluation, not for identity inference or sensitive attribute prediction.

## Recommendations

Users should:

- report results with the official evaluation code
- avoid changing prompts, parsing rules, or metrics when claiming benchmark numbers
- verify that their usage complies with the terms of the original video sources
- avoid using the dataset for surveillance, identity recognition, or sensitive attribute inference

## Citation

If you use PerceptionComp, please cite the project paper:

```bibtex
@misc{perceptioncomp2026,
  title={PerceptionComp},
  author={PerceptionComp Authors},
  year={2026},
  eprint={2603.26653},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  howpublished={Hugging Face dataset and GitHub repository}
}
```

## More Information

Official evaluation code and documentation:

- GitHub: https://github.com/hrinnnn/PerceptionComp

Example evaluation workflow:

```bash
git clone https://github.com/hrinnnn/PerceptionComp.git
cd PerceptionComp
pip install -r requirements.txt
python3 scripts/download_data.py --repo-id hrinnnn/PerceptionComp
python3 evaluate/evaluate.py \
  --model YOUR_MODEL_NAME \
  --provider api \
  --api-key YOUR_API_KEY \
  --base-url YOUR_BASE_URL \
  --video-dir benchmark/videos
```

## Dataset Card Authors

PerceptionComp authors
|