---
pretty_name: PerceptionComp
license: other
license_name: perceptioncomp-research-license
license_link: LICENSE
task_categories:
  - visual-question-answering
  - multiple-choice
language:
  - en
tags:
  - video
  - benchmark
  - multimodal
  - reasoning
  - video-understanding
  - evaluation
  - multiple-choice
size_categories:
  - 1K<n<10K
configs:
  - config_name: default
    data_files:
      - split: test
        path: questions.json
---

# PerceptionComp: A Benchmark for Complex Perception-Centric Video Reasoning


PerceptionComp is a benchmark for complex perception-centric video reasoning. It focuses on questions that cannot be solved from a single frame, a short clip, or a shallow caption. Models must revisit visually complex videos, gather evidence across temporally separated segments, and combine multiple perceptual cues before answering.

## Dataset Details

### Dataset Description

PerceptionComp contains 1,114 manually annotated five-choice questions associated with 273 referenced video IDs. The benchmark covers seven categories: outdoor tour, shopping, sport, variety show, home tour, game, and movie.

This Hugging Face dataset repository hosts the benchmark videos together with a viewer-friendly annotation file, `questions.json`, for Dataset Preview and Data Studio. The canonical annotation source, evaluation code, and model-integration examples are maintained in the official GitHub repository: https://github.com/hrinnnn/PerceptionComp

### Dataset Sources

- Repository: https://github.com/hrinnnn/PerceptionComp

## Uses

### Direct Use

PerceptionComp is intended for:

- benchmarking video-language models on complex perception-centric reasoning
- evaluating long-horizon and multi-evidence video understanding
- comparing proprietary and open-source multimodal models under a unified protocol

Users are expected to download the videos from this Hugging Face dataset and run the evaluation with the code in the official GitHub repository.

### Out-of-Scope Use

PerceptionComp is not intended for:

- unrestricted commercial redistribution of hosted videos when original source terms do not allow it
- surveillance, identity inference, or sensitive attribute prediction
- modifying the benchmark protocol and reporting those results as directly comparable official scores

## Evaluation Workflow

The Hugging Face repository hosts the benchmark videos and the viewer-friendly test annotations. The evaluation code lives in the GitHub repository and follows this workflow:

**Step 1. Clone the Repository**

```shell
git clone https://github.com/hrinnnn/PerceptionComp.git
cd PerceptionComp
```

**Step 2. Install Dependencies**

```shell
pip install -r requirements.txt
```

**Step 3. Download the Benchmark Videos**

```shell
python3 scripts/download_data.py --repo-id hrinnnn/PerceptionComp
```

If the Hugging Face dataset requires authentication:

```shell
python3 scripts/download_data.py \
  --repo-id hrinnnn/PerceptionComp \
  --hf-token YOUR_HF_TOKEN
```

The download helper fetches video files from the Hugging Face `data/` directory, flattens them into `benchmark/videos/`, and validates the required `video_id` set against `benchmark/annotations/1-1114.json`.
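The flatten-and-validate step can be sketched in a few lines. This is an illustrative sketch, not the official script: the `flatten_and_validate` helper and its signature are hypothetical, and it assumes the annotation file is a JSON list of items carrying a `video_id` field, matching the schema documented on this card.

```python
import json
import shutil
from pathlib import Path


def flatten_and_validate(src_dir, dst_dir, annotation_path):
    """Copy video files from a nested data/ tree into one flat directory,
    then check that every annotated video_id has a matching file stem."""
    src, dst = Path(src_dir), Path(dst_dir)
    dst.mkdir(parents=True, exist_ok=True)

    # Flatten: copy every file found anywhere under src into dst.
    for f in src.rglob("*"):
        if f.is_file():
            shutil.copy2(f, dst / f.name)

    # Validate: the required video_id set must be covered by the file stems.
    # Assumes a JSON list of items with a "video_id" field (hypothetical schema).
    annotations = json.loads(Path(annotation_path).read_text())
    required = {item["video_id"] for item in annotations}
    present = {f.stem for f in dst.iterdir() if f.is_file()}
    missing = required - present
    if missing:
        raise FileNotFoundError(f"missing videos: {sorted(missing)}")
    return sorted(present)
```

The official script additionally performs the Hugging Face download itself; the sketch only covers the local flatten-and-check half of the job.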

**Step 4. Run Evaluation**

OpenAI-compatible API example:

```shell
python3 evaluate/evaluate.py \
  --model YOUR_MODEL_NAME \
  --provider api \
  --api-key YOUR_API_KEY \
  --base-url YOUR_BASE_URL \
  --video-dir benchmark/videos
```

Gemini example:

```shell
python3 evaluate/evaluate.py \
  --model YOUR_GEMINI_MODEL_NAME \
  --provider gemini \
  --api-key YOUR_GEMINI_API_KEY \
  --video-dir benchmark/videos
```

**Step 5. Check the Outputs**

Evaluation outputs are written to:

```
evaluate/results/Results-<model>.json
evaluate/results/Results-<model>.csv
```
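The exact result schema is defined by the evaluator, so the following is only a hypothetical sketch of post-hoc analysis: it assumes the JSON output is a list of per-question records that carry the ground-truth `answer_id` and a parsed `prediction` index (assumed field names, not confirmed by this card).

```python
import json


def accuracy_from_results(path):
    """Compute overall accuracy from a Results-<model>.json file.

    Assumes a list of records with `answer_id` (ground truth) and
    `prediction` (model's chosen option index) -- hypothetical fields.
    """
    with open(path) as f:
        records = json.load(f)
    correct = sum(r.get("prediction") == r["answer_id"] for r in records)
    return correct / len(records)
```

For official numbers, rely on the metrics the evaluation script itself reports rather than re-deriving them.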

## Dataset Structure

### Data Instances

Each benchmark question is associated with:

- one `video_id`
- one multiple-choice question
- five answer options
- one correct answer
- one semantic category
- one difficulty label

Core fields in each annotation item:

- `key`: question identifier
- `video_id`: video filename stem without `.mp4`
- `question`: question text
- `answer_choice_0` to `answer_choice_4`: five answer options
- `answer_id`: zero-based index of the correct option
- `answer`: text form of the correct answer
- `category`: semantic category
- `difficulty`: difficulty label
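Given these fields, resolving the correct option for one annotation item is a one-liner. A minimal sketch (field names come from this card; the `correct_option` helper itself is illustrative):

```python
def correct_option(item):
    """Return the text of the correct option for one annotation item,
    using the zero-based `answer_id` to index answer_choice_0..4."""
    choice = item[f"answer_choice_{item['answer_id']}"]
    # The `answer` field stores the same text; sanity-check they agree.
    assert choice == item["answer"]
    return choice
```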

### Data Files

This Hugging Face dataset repository contains:

- `questions.json`: root-level annotation file used by Hugging Face Dataset Preview and Data Studio
- `data/<video_id>.<ext>`: benchmark video files downloaded by the official helper script
- `README.md`: Hugging Face dataset card
- `LICENSE`: custom research-use terms for the benchmark materials

The canonical annotation file used by the evaluator remains:

- `benchmark/annotations/1-1114.json` in the GitHub repository

The official evaluation code prepares videos into the following local layout:

```
benchmark/videos/<video_id>.mp4
```

Use the official download script from the GitHub repository:

```shell
git clone https://github.com/hrinnnn/PerceptionComp.git
cd PerceptionComp
pip install -r requirements.txt
python3 scripts/download_data.py --repo-id hrinnnn/PerceptionComp
```

If your environment provides `python` instead of `python3`, use that alias consistently for these commands.

### Data Splits

The current public release uses one official evaluation split:

- `test`: 1,114 multiple-choice questions over 273 referenced video IDs, exposed through `questions.json`
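These split statistics can be sanity-checked locally. The sketch below assumes `questions.json` is a JSON list of items with the `video_id` field documented on this card; `split_stats` is an illustrative helper, not part of the official tooling.

```python
import json


def split_stats(path="questions.json"):
    """Count questions and distinct referenced video IDs in the split."""
    with open(path) as f:
        questions = json.load(f)
    return len(questions), len({q["video_id"] for q in questions})


# For the current release this should report (1114, 273).
```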

## Dataset Creation

### Curation Rationale

PerceptionComp was created to evaluate a failure mode that is not well covered by simpler video benchmarks: questions that require models to combine multiple perceptual constraints over time instead of relying on a single salient frame or a short summary.

### Source Data

The benchmark uses real-world videos paired with manually written multiple-choice questions.

#### Data Collection and Processing

Videos were collected and organized for benchmark evaluation. Annotation authors then wrote perception-centric multiple-choice questions for the selected videos. Each question was designed to require visual evidence from the video rather than simple prior knowledge or caption-level shortcuts.

The release process includes:

- associating each question with a `video_id`
- formatting each sample as a five-choice multiple-choice item
- assigning semantic categories
- assigning difficulty labels
- consolidating the release into one official annotation file

#### Who are the source data producers?

The underlying videos may originate from third-party public sources. The benchmark annotations were created by the PerceptionComp authors and collaborators.

### Annotations

#### Annotation Process

PerceptionComp contains 1,114 manually annotated five-choice questions. Questions were written to test perception-centric reasoning over videos rather than single-frame recognition alone.

#### Who are the annotators?

The annotations were created by the PerceptionComp project team.

### Personal and Sensitive Information

The videos may contain people, faces, voices, public scenes, or other naturally occurring visual content. The dataset is intended for research evaluation, not for identity inference or sensitive attribute prediction.

## Recommendations

Users should:

- report results with the official evaluation code
- avoid changing prompts, parsing rules, or metrics when claiming benchmark numbers
- verify that their usage complies with the terms of the original video sources
- avoid using the dataset for surveillance, identity recognition, or sensitive attribute inference

## Citation

If you use PerceptionComp, please cite the project paper:

```bibtex
@misc{perceptioncomp2026,
  title={PerceptionComp},
  author={PerceptionComp Authors},
  year={2026},
  eprint={2603.26653},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  howpublished={Hugging Face dataset and GitHub repository}
}
```

## More Information

Official evaluation code and documentation: https://github.com/hrinnnn/PerceptionComp

Example evaluation workflow:

```shell
git clone https://github.com/hrinnnn/PerceptionComp.git
cd PerceptionComp
pip install -r requirements.txt
python3 scripts/download_data.py --repo-id hrinnnn/PerceptionComp
python3 evaluate/evaluate.py \
  --model YOUR_MODEL_NAME \
  --provider api \
  --api-key YOUR_API_KEY \
  --base-url YOUR_BASE_URL \
  --video-dir benchmark/videos
```

## Dataset Card Authors

PerceptionComp authors