---
language:
- en
license: cc-by-4.0
size_categories:
- 1K<n<10K
task_categories:
- video-text-to-text
tags:
- video-understanding
- temporal-reasoning
- counting
- benchmark
---
# VCBench: A Streaming Counting Benchmark for Spatial-Temporal State Maintenance in Long Videos
VCBench is a streaming counting benchmark that uses counting as a minimal probe for diagnosing spatial-temporal state maintenance in video-language models. By querying models at multiple timepoints during video playback, VCBench tracks how a model's predictions evolve over time rather than checking a single isolated answer.
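The streaming protocol described above can be sketched as follows. This is a hypothetical illustration, not the official harness: the model interface (`up_to_seconds`) and the query times are assumptions.

```python
# Hypothetical sketch of the streaming query protocol: the same counting
# question is posed at several timepoints during playback, and the model's
# answer trajectory (not just its final answer) is recorded.

def streaming_count_trajectory(model, question, query_times):
    """Query `model` at each timestamp; return (timestamp, prediction) pairs."""
    trajectory = []
    for t in query_times:
        # The model only sees the video up to timestamp t (assumed interface).
        pred = model(question, up_to_seconds=t)
        trajectory.append((t, pred))
    return trajectory

# Dummy model for illustration: pretends one new object appears every 10 s.
dummy = lambda q, up_to_seconds: up_to_seconds // 10

traj = streaming_count_trajectory(dummy, "How many objects so far?", [10, 30, 60])
print(traj)  # [(10, 1), (30, 3), (60, 6)]
```

Evaluating the whole trajectory is what distinguishes streaming counting from asking one question at the end of the video.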
## Task Taxonomy
VCBench decomposes state maintenance into eight fine-grained subcategories across two dimensions:
### Object Counting (tracking entities)
| Subcategory | Description |
|---|---|
| O1-Snap | How many objects are visible at this moment? |
| O1-Delta | How many objects appeared in the past N seconds? |
| O2-Unique | How many different individuals have appeared so far? |
| O2-Gain | How many new individuals appeared in the past N seconds? |
### Event Counting (tracking actions)
| Subcategory | Description |
|---|---|
| E1-Action | How many times has an atomic action occurred so far? |
| E1-Transit | How many scene transitions have occurred so far? |
| E2-Episode | How many activity segments have occurred so far? |
| E2-Periodic | How many complete cycles of a periodic action so far? |
## Dataset Summary
- Total Videos: 406 source videos (generating 4,574 clipped segments)
- Total Size: ~80 GB
- Annotations: 1,000 counting questions with 4,576 streaming query points and 10,071 frame-by-frame annotations.
- Sources: YouTube, ARKitScenes, ScanNet, ScanNet++, Ego4D, RoomTour3D, CODa, OmniWorld, and physics simulations.
## Usage

### Download via CLI
You can download the dataset with `huggingface-cli`:

```bash
huggingface-cli download buaaplay/VCBench --repo-type dataset --local-dir data/videos
```
The `chunkedVideos/` directory contains 4,576 video clips (one per query point), each truncated to the query timestamp.
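Annotations are distributed as JSONL, with one record per line. A minimal loader might look like the sketch below; the field names (`video_id`, `query_time`, `question`, `answer`) are assumptions about the schema, not the documented format.

```python
import json
import os
import tempfile

def load_query_points(jsonl_path):
    """Read one JSON object per non-empty line of a JSONL annotation file."""
    with open(jsonl_path) as f:
        return [json.loads(line) for line in f if line.strip()]

# Demo with a made-up record shaped like a streaming query point
# (field names are hypothetical).
sample = {"video_id": "vid_0001", "query_time": 42.0,
          "question": "How many objects are visible?", "answer": 3}
path = os.path.join(tempfile.mkdtemp(), "sample.jsonl")
with open(path, "w") as f:
    f.write(json.dumps(sample) + "\n")

points = load_query_points(path)
print(points[0]["answer"])  # 3
```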
## Evaluation
To compute metrics (GPA, MoC, UDA) with the official evaluation scripts:

```bash
# Compute metrics on provided results
python eval/compute_metrics.py results/vcbench_gemini3flash_unified.jsonl data/vcbench_eval.jsonl
```
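To see the basic shape of such an evaluation, here is a minimal sketch that scores predictions against ground truth by exact-match accuracy per query point. This is not the official GPA/MoC/UDA implementation (use `eval/compute_metrics.py` for that); the `query_id`-keyed pairing is an assumption for illustration.

```python
# Minimal sketch: exact-match count accuracy per query point, assuming
# predictions and ground truth are keyed by a hypothetical query id.

def exact_match_accuracy(preds, golds):
    """preds/golds: dicts mapping query_id -> integer count."""
    if not golds:
        return 0.0
    hits = sum(1 for qid, ans in golds.items() if preds.get(qid) == ans)
    return hits / len(golds)

preds = {"q1": 3, "q2": 5, "q3": 2}
golds = {"q1": 3, "q2": 4, "q3": 2}
print(exact_match_accuracy(preds, golds))  # 2 of 3 query points match
```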
## Citation

```bibtex
@article{vcbench2025,
  title={VCBench: A Streaming Counting Benchmark for Spatial-Temporal State Maintenance in Long Videos},
  author={Liu, Pengyiang and Shi, Zhongyue and Hao, Hongye and Fu, Qi and Bi, Xueting and Zhang, Siwei and Hu, Xiaoyang and Wang, Zitian and Huang, Linjiang and Liu, Si},
  year={2026}
}
```
## License
This dataset and code are released under CC BY 4.0.