---
language:
  - en
license: cc-by-nc-sa-4.0
task_categories:
  - visual-question-answering
  - multiple-choice
tags:
  - video-understanding
  - multi-evidence-reasoning
  - long-video
  - temporal-reasoning
  - spatial-reasoning
  - video-qa
size_categories:
  - 10K<n<100K
pretty_name: HERBench
configs:
  - config_name: full
    data_files:
      - split: test
        path: data/herbench_full.parquet
    default: true
  - config_name: lite
    data_files:
      - split: test
        path: data/herbench_lite.parquet
---

# HERBench: A Benchmark for Multi-Evidence Integration in Video Question Answering

*A challenging benchmark for evaluating the multi-evidence integration capabilities of vision-language models*


## 📋 Dataset Summary

HERBench is a challenging benchmark designed to evaluate vision-language models on multi-evidence integration in long videos. Unlike existing benchmarks, where questions can often be answered from single frames, HERBench enforces a High Evidential Requirement (ER): each question requires aggregating k ≥ 3 distinct, temporally separated visual cues.


### Key Statistics

| Metric | Full Version | Lite Version |
|---|---|---|
| 📊 Total Questions | 27,936 (five-way multiple-choice) | 5,960 questions (21.3%) |
| 🎬 Videos | 335 unique videos | 68 unique videos (20.3%) |
| ⏱️ Avg. Video Length | 424 seconds | 421 seconds |
| 💾 Total Size | ~161 GB | ~35 GB |

### Why HERBench?

Current video QA benchmarks often allow models to answer questions from a single frame or limited context, failing to test true multi-evidence reasoning. HERBench addresses this by:

- ✅ **Enforcing multi-evidence integration**: each question requires k ≥ 3 temporally separated frames
- ✅ **Preventing single-frame shortcuts**: questions cannot be answered from isolated frames
- ✅ **Testing compositional reasoning**: questions combine temporal, spatial, and causal reasoning
- ✅ **Evaluating long-video understanding**: videos average just over 7 minutes (424 seconds)

## 🎯 Choose Your Version

HERBench is available in two versions to accommodate different storage and computational constraints:

### Full Version (~161 GB)

- 27,936 questions across 335 videos
- Complete benchmark for comprehensive evaluation
- Recommended for: final paper results, thorough model evaluation, benchmarking

### Lite Version (~35 GB) 🚀

- 5,960 questions across 68 videos (21.3% subset)
- Same task distribution and difficulty as the full version
- Videos sampled to maintain diversity across all 12 tasks
- Recommended for: quick prototyping, limited storage, initial experiments, development

Both versions maintain the same quality standards and high evidential requirements.


## 📊 Leaderboard

Current state-of-the-art results on HERBench (full benchmark):

| Model | Bench Version | # Frames | TR&C | R&T | GC&V | ME&N | Overall Avg. |
|---|---|---|---|---|---|---|---|
| Random Baseline | Full | 16 | 20.0 | 20.0 | 20.0 | 20.0 | 20.0 |
| GPT-4.1 | Full | 16 | 25.4 | 66.0 | 37.1 | 29.0 | 39.4 |
| Gemini-2.5-Flash | Full | 16 | 29.7 | 69.9 | 34.9 | 26.8 | 40.3 |
| Qwen2.5-VL-72B | Full | 16 | 26.9 | 70.9 | 36.6 | 24.4 | 39.7 |
| Gemma-3-27B | Full | 16 | 32.0 | 58.4 | 21.5 | 23.5 | 33.8 |
| LLaMA-4-Scout-17B | Full | 16 | 18.8 | 57.3 | 25.5 | 24.2 | 31.4 |
| InternVL3.5-14B | Full | 16 | 37.7 | 69.3 | 31.1 | 27.8 | 41.5 |
| Ovis-2.5-9B | Full | 16 | 18.9 | 73.5 | 46.8 | 29.2 | 42.1 |
| InternVL3.5-8B | Full | 16 | 33.6 | 70.2 | 29.7 | 30.8 | 41.1 |
| LLaVA-OneVision1.5-8B | Full | 16 | 26.1 | 67.7 | 33.6 | 24.9 | 38.1 |
| Qwen3-VL-8B | Full | 16 | 19.0 | 68.7 | 40.6 | 25.2 | 38.3 |
| MiniCPM-V4.5-8B | Full | 16 | 23.8 | 71.1 | 39.7 | 24.9 | 39.9 |
| Qwen2.5-VL-7B | Full | 16 | 21.8 | 60.6 | 38.7 | 22.6 | 35.9 |
| LLaVA-OneVision-7B | Full | 16 | 27.3 | 59.1 | 30.1 | 26.0 | 35.6 |

*TR&C = Temporal Reasoning & Chronology, R&T = Referring & Tracking, GC&V = Global Consistency & Verification, ME&N = Multi-Entity Aggregation & Numeracy. Overall Avg. is the mean of the four task-family scores.*

**Key Findings:**

- 🔍 **Referring & Tracking is easier**: models perform best on R&T tasks (avg. 66.8%) compared to the other categories
- 🧩 **Multi-evidence integration is challenging**: an average overall accuracy of 38.2% across evaluated models leaves substantial room for improvement
- 🏆 **Top performers**: Ovis-2.5-9B (42.1%) and InternVL3.5-14B (41.5%) lead the benchmark
- ⚖️ **Task variance**: performance varies significantly across task families, with GC&V and ME&N being the most challenging

## 📈 MRFS Analysis

HERBench requires significantly more evidence integration than existing benchmarks, as measured by the Minimum Required Frame-Set (MRFS) metric: the size of the smallest set of frames sufficient to answer a question.

*(Figure: MRFS comparison across video QA benchmarks)*

**Key Insights:**

- HERBench has the highest MRFS (5.49) among video QA benchmarks, requiring integration of ~5.5 frames on average (see the illustrative sketch below)
- ~4× higher than existing benchmarks, with lower text-only accuracy (less language bias)
- Higher evidential requirement: questions cannot be answered from single frames or limited context
- Demonstrates the need for true multi-evidence reasoning in video understanding
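As a concrete illustration of what the metric measures (not the authors' implementation, which is defined in the paper), the sketch below computes MRFS for one question by exhaustively searching for the smallest frame subset accepted by an assumed `answerable()` oracle:

```python
from itertools import combinations

def mrfs(frames, answerable):
    """Return the size of the smallest subset of `frames` that the
    `answerable(subset)` oracle accepts, or None if no subset suffices."""
    for k in range(1, len(frames) + 1):
        for subset in combinations(frames, k):
            if answerable(set(subset)):
                return k
    return None

# Toy example: suppose the question needs evidence from frames 2, 7, and 11.
required = {2, 7, 11}
print(mrfs(list(range(16)), lambda s: required <= s))  # -> 3
```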

## 🎯 Dataset Features

### High Evidential Requirement (ER)

Each question in HERBench is designed to require:

1. **Multiple evidence pieces** (k ≥ 3 frames minimum)
2. **Temporal separation** between evidence frames
3. **Compositional reasoning** across evidence
4. **Integration of visual information** from different moments

### 12 Compositional Task Types

#### Temporal Reasoning & Chronology

| Task Name | Abilities Tested | Example |
|---|---|---|
| [TSO] Temporal Shot Ordering | Understanding event order, high-level scene transitions, chronological reconstruction using content cues | "The following 4 shots take place in the video: [Shot 1-4 descriptions]. Select the option that correctly reflects the order in which these shots occur in the video." |
| [MPDR] Multi-Person Duration Reasoning | Fine-grained time-span contrasts, interval statistics, comparing appearance durations across individuals | "These people were in the video: [Person 1-3 descriptions]. Who stayed in the frame FOV for the longest time?" |
| [ASII] Action Sequence Integrity & Identification | Micro-level task sequencing, action ordering, temporal understanding of fine-grained activities | "What is the correct temporal order of the 5 narrated events? (e.g., 1. slide coffee capsule -> 2. close lid -> 3. turn off processor -> 4. place orange -> 5. put down sponge)" |

#### Referring & Tracking

| Task Name | Abilities Tested | Example |
|---|---|---|
| [AGBI] Appearance-Grounded Behavior Interactions | Social and relational cues, identity maintenance across time, interaction recognition | "In the video there is exactly one individual that fits the following description: [Appearance]. Who is accompanying the person as they walk across the frame?" |
| [AGAR] Appearance-Grounded Attribute Recognition | Moment-specific attribute extraction, target tracking, reading contextual details from specific individuals | "In the video there is exactly one individual that fits the following description: [Appearance]. What color is the jacket worn by the individual who remains seated as the main subject walks past?" |
| [AGLT] Appearance-Grounded Localization Trajectory | Global path-level motion reasoning, trajectory tracking, spatial exit/entry point identification | "In the video there is exactly one individual that fits the following description: [Appearance]. How does the person exit the frame at the end of their path?" |

#### Global Consistency & Verification

| Task Name | Abilities Tested | Example |
|---|---|---|
| [FAM] False Action Memory | Action-level absence detection, exhaustive video-wide verification, distinguishing what did not occur | "Which of the following actions did NOT occur in the video? (A) open drawer (B) open up fridge (C) turn on tap..." |
| [SVA] Scene Verification Arrangement | Shot-level fidelity checking, chronology verification, distinguishing real from fabricated descriptions | "From the correctly described shots, which is the one that appears first in the video? [Multiple shot descriptions provided]" |
| [FOM] False Object Memory | Object-level absence detection, interaction verification, identifying non-interacted objects | "Which object did the camera wearer NOT interact with? (A) Cutting board (B) Sponge (C) Dish soap (D) Garlic presser..." |

#### Multi-Entity Aggregation & Numeracy

| Task Name | Abilities Tested | Example |
|---|---|---|
| [MEGL] Multi-Entities Grounding & Localization | Set membership verification, identity deduplication, exact-match appearance verification | "Which of the following people appeared in the video (the person description must match exactly): [Person 1-3 descriptions] - A) only 1 and 3" |
| [AC] Action Counting | Event accumulation across dispersed moments, counting repeated actions, temporal aggregation | "How many times does the action-object pair 'close tap' occur? A) 3 B) 5 C) 7..." |
| [RLPC] Region-Localized People Counting | Region-conditioned identity aggregation, spatial partitioning, counting with spatial constraints | "How many people entered the frame through the top edge? Select the range that includes the correct count." |

### Video Sources

Videos are sourced from diverse, high-quality datasets:

- **WildTrack** (56 segments): multi-camera pedestrian tracking scenes
- **HD-EPIC** (176 videos): first-person egocentric daily activities
- **PersonPath22** (24 videos): person tracking scenarios
- **Movie Trailers** (81 videos): narrative storytelling content

## 📥 Dataset Structure

```
HERBench/
├── data/
│   ├── herbench_annotations.json        # Full: 27,936 questions
│   ├── herbench_annotations_lite.json   # Lite: 5,960 questions
│   ├── task_metadata.json               # Task descriptions (shared)
│   ├── video_metadata.json              # Video information (shared)
│   └── README_DATA.md                   # Data format documentation
├── videos/
│   ├── videos.tar.part.00               # Lite videos start here
│   ├── videos.tar.part.01               # |
│   ├── videos.tar.part.02               # | Lite: parts 00-03 (~35 GB)
│   ├── videos.tar.part.03               # |
│   ├── videos.tar.part.04               # |
│   ├── ...                              # | Full: all parts 00-XX (~161 GB)
│   ├── videos.tar.part.XX               # |
│   ├── videos.tar.checksums.txt         # SHA256 checksums
│   └── videos_lite_info.txt             # Info about archive structure
└── herbench.py                          # HF Hub loading script (powers the Dataset Viewer)
```

**Archive structure:** Videos are organized so that the Lite videos are in the first archive parts (00-03) and Full-only videos are in the remaining parts. This allows efficient downloading of either version without duplication.

**Dataset Viewer:** The HF Dataset Viewer uses `herbench.py` to load and preview the dataset. The script defines a stable schema that handles the varying metadata structures across task types, ensuring efficient streaming and compatibility with the Arrow/Parquet format.
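Because the annotations are Parquet-backed, you can also preview them without downloading the video archives by streaming; a minimal sketch:

```python
from datasets import load_dataset

# Stream the annotations only; no video archives are fetched.
ds = load_dataset("DanBenAmi/HERBench", "lite", split="test", streaming=True)
print(next(iter(ds))["question"])
```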


### Annotation Format

Each sample contains:

```json
{
  "question_id": "HER_001234",
  "video_id": "cam2_segment_4_180s_240s",
  "video_path": "videos/WildTrack/cam2_segment_4_180s_240s.mp4",
  "question": "What is the main activity happening throughout the video?",
  "choices": [
    "A. People walking across the scene",
    "B. People standing and talking",
    "C. People running in the same direction",
    "D. People sitting on benches",
    "E. People cycling through the area"
  ],
  "answer": "A",
  "answer_index": 0,
  "answer_text": "People walking across the scene",
  "task_type": "activity_recognition",
  "metadata": {
    "source_dataset": "WildTrack",
    "duration": 60.0,
    "resolution": "1920x1080",
    "difficulty": "medium"
  }
}
```

For detailed format documentation, see `data/README_DATA.md`.


## 🚀 Quick Start

### 1. Download the Dataset

**Option A: Using the Hugging Face CLI (recommended)**

```bash
# Install the Hugging Face Hub CLI
pip install huggingface-hub

# Download the FULL version (27,936 questions, ~161 GB)
huggingface-cli download DanBenAmi/HERBench --repo-type dataset --local-dir HERBench

# Download the LITE version only (5,960 questions, ~35 GB of videos).
# Note: pass all patterns to a single --include flag; a repeated flag
# overrides the earlier ones.
huggingface-cli download DanBenAmi/HERBench --repo-type dataset \
    --include "data/herbench_lite.parquet" \
              "data/*metadata.json" \
              "videos/videos.tar.part.00" \
              "videos/videos.tar.part.01" \
              "videos/videos.tar.part.02" \
              "videos/videos.tar.part.03" \
              "videos/videos_lite_info.txt" \
              "videos/videos.tar.checksums.txt" \
    --local-dir HERBench

# Or download only the annotations (no videos, ~6 MB)
huggingface-cli download DanBenAmi/HERBench --repo-type dataset \
    --include "data/*.parquet" "data/*metadata.json" --local-dir HERBench
```
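The same selective download can also be scripted from Python with `huggingface_hub`'s `snapshot_download`; a sketch mirroring the Lite patterns above:

```python
from huggingface_hub import snapshot_download

# Fetch only the Lite annotations and the first four video archive parts.
snapshot_download(
    repo_id="DanBenAmi/HERBench",
    repo_type="dataset",
    allow_patterns=[
        "data/herbench_lite.parquet",
        "data/*metadata.json",
        "videos/videos.tar.part.0[0-3]",
        "videos/videos_lite_info.txt",
        "videos/videos.tar.checksums.txt",
    ],
    local_dir="HERBench",
)
```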

**Option B: Using Python (`datasets` library)**

The dataset is provided in Parquet format for compatibility with Hugging Face Datasets and reliable schema handling.

```python
from datasets import load_dataset

# Load the FULL version (default): 27,936 questions
dataset_full = load_dataset("DanBenAmi/HERBench", "full")
print(f"Total questions: {len(dataset_full['test'])}")

# Access the test split
test_data = dataset_full["test"]

# Inspect a single example
example = test_data[0]
print(f"Question: {example['question']}")
print(f"Choices: {example['choices']}")
print(f"Answer: {example['answer']}")
print(f"Task: {example['task_type']}")
print(f"Video: {example['video_path']}")

# Load the LITE version: 5,960 questions (21.3% subset)
dataset_lite = load_dataset("DanBenAmi/HERBench", "lite")
print(f"Lite questions: {len(dataset_lite['test'])}")
```

**Schema:** Each example contains:

- `question_id`: unique question identifier
- `video_id`: video identifier
- `video_path`: path to the video file
- `question`: question text
- `choices`: list of 5 multiple-choice options
- `answer`: correct answer letter (A/B/C/D/E)
- `answer_index`: zero-indexed answer position (0-4)
- `answer_text`: full text of the correct answer
- `task_type`: task category name
- `source_dataset`: source dataset name
- `duration`: video duration in seconds (float)
- `resolution`: video resolution (width x height)
- `metadata_json`: full metadata as a JSON string

**Note:** The original JSON files are also available in the `data/` folder for users who need the raw format for custom processing.
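Since `metadata_json` is stored as a string, decode it when you need the nested fields; a small sketch (field names follow the annotation example above):

```python
import json

example = test_data[0]  # from the snippet above
meta = json.loads(example["metadata_json"])
print(meta.get("source_dataset"), meta.get("difficulty"))
```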

### 2. Extract Videos

**For the Full version:**

```bash
cd HERBench/videos

# Concatenate all split archives
cat videos.tar.part.* > videos_full.tar

# Extract the videos
tar -xvf videos_full.tar

# Verify checksums (optional)
sha256sum -c videos.tar.checksums.txt

# Clean up the tar file (optional)
rm videos_full.tar
```

**For the Lite version:**

```bash
cd HERBench/videos

# Concatenate only the Lite archives (parts 00-03)
cat videos.tar.part.{00..03} > videos_lite.tar

# Extract the videos
tar -xvf videos_lite.tar

# Clean up the tar file (optional)
rm videos_lite.tar
```

**Note:** The archive is structured so that the Lite videos are in the first parts (00-03), so downloading the full version automatically gives you the Lite videos as well.
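After extraction, you can sanity-check that every video referenced by the annotations is present on disk; a sketch assuming the archives were extracted under `HERBench/` and that `video_path` is relative to the repository root, as in the annotation example above:

```python
from pathlib import Path
from datasets import load_dataset

root = Path("HERBench")  # directory the archives were extracted into
ds = load_dataset("DanBenAmi/HERBench", "lite", split="test")

missing = sorted({ex["video_path"] for ex in ds
                  if not (root / ex["video_path"]).exists()})
print(f"{len(missing)} referenced videos missing")
```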

### 3. Load and Use the Data

```python
from datasets import load_dataset

# Load the dataset (choose a version)
dataset = load_dataset("DanBenAmi/HERBench", name="full")  # or name="lite"

# Access a sample
sample = dataset['test'][0]
print(f"Question: {sample['question']}")
print(f"Choices: {sample['choices']}")
print(f"Answer: {sample['answer']}")
print(f"Video: {sample['video_path']}")
print(f"Task: {sample['task_type']}")

# Filter by task type (substitute a task_type value present in the data)
temporal_questions = [
    q for q in dataset['test']
    if q['task_type'] == 'temporal_reasoning'
]
print(f"Temporal reasoning questions: {len(temporal_questions)}")

# Compare versions
dataset_full = load_dataset("DanBenAmi/HERBench", name="full")
dataset_lite = load_dataset("DanBenAmi/HERBench", name="lite")
print(f"Full: {len(dataset_full['test'])} questions")
print(f"Lite: {len(dataset_lite['test'])} questions")
```
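To query a model, a common pattern is to flatten each sample into a five-way multiple-choice prompt and compare the model's letter against `answer`; a minimal sketch (the prompt wording is ours, not prescribed by the benchmark):

```python
def build_prompt(sample: dict) -> str:
    """Format a HERBench sample as a multiple-choice prompt.

    The `choices` entries already carry their letter prefixes
    (e.g. "A. People walking across the scene")."""
    lines = [sample["question"], *sample["choices"],
             "Answer with a single letter (A-E)."]
    return "\n".join(lines)

def is_correct(sample: dict, model_output: str) -> bool:
    # Take the first A-E letter the model produced.
    letters = [c for c in model_output.upper() if c in "ABCDE"]
    return bool(letters) and letters[0] == sample["answer"]
```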

### 4. Run Evaluation

```bash
# Clone the evaluation code
git clone https://github.com/DanBenAmi/HERBench.git
cd HERBench

# Install dependencies
pip install -r requirements.txt

# Run evaluation on your model
python evaluation/run_evaluation.py \
    model=your_model \
    data_path=./HERBench \
    output_path=./results
```
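If you score predictions yourself, overall and per-task accuracy can be computed directly from the annotations; a sketch (`predictions` is a hypothetical mapping from `question_id` to a predicted letter):

```python
from collections import defaultdict
from datasets import load_dataset

def score(predictions: dict[str, str], config: str = "full") -> None:
    """Print overall and per-task accuracy for letter predictions."""
    ds = load_dataset("DanBenAmi/HERBench", config, split="test")
    hits, totals = defaultdict(int), defaultdict(int)
    for ex in ds:
        task = ex["task_type"]
        totals[task] += 1
        if predictions.get(ex["question_id"]) == ex["answer"]:
            hits[task] += 1
    overall = sum(hits.values()) / sum(totals.values())
    print(f"Overall: {overall:.1%}")
    for task in sorted(totals):
        print(f"{task}: {hits[task] / totals[task]:.1%}")
```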

## 📜 Citation

If you use HERBench in your research, please cite:

```bibtex
@article{herbench2025,
  title={HERBench: A Benchmark for Multi-Evidence Integration in Video Question Answering},
  author={Ben-Ami, Dan and Serussi, Gabriele and Cohen, Kobi and Baskin, Chaim},
  journal={arXiv preprint arXiv:2512.14870},
  year={2025}
}
```

## 📄 License

This dataset is released under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License (CC BY-NC-SA 4.0).

### Terms of Use

**Research use only.** HERBench is released strictly for non-commercial research and educational purposes. The benchmark is constructed from videos originating in existing datasets and platforms, including WildTrack, HD-EPIC, PersonPath22, and publicly available online videos (e.g., YouTube trailers). All rights to the original video content remain with their respective owners and licensors.

HERBench does not claim ownership of any underlying video content. The use of such materials is intended solely for academic evaluation and analysis, in accordance with the terms of the respective source datasets and platforms.

**Removal upon request.** If any content owner or rights holder believes that their material has been included in HERBench in a manner that violates applicable terms or rights, please contact us. Upon notification, we will promptly investigate the request and remove the relevant content as appropriate.


πŸ™ Acknowledgments

We thank the creators of the original video datasets (WildTrack, HD-EPIC, PersonPath22) for making their data publicly available. We also acknowledge the movie studios for releasing promotional trailers.

This work was supported by [Institution/Grant acknowledgments to be added].


## 📧 Contact

For questions, issues, or support, please open an issue on the [GitHub repository](https://github.com/DanBenAmi/HERBench).

## 🔄 Updates

- **v1.0.0 (January 2025):** Initial release with 27,936 questions across 335 videos


*Built with ❤️ for advancing video understanding research.*

*If you find HERBench useful, please ⭐ star our [GitHub repository](https://github.com/DanBenAmi/HERBench)!*