---
license: mit
task_categories:
  - video-classification
  - question-answering
language:
  - en
tags:
  - video-understanding
  - temporal-reasoning
  - counting
  - benchmark
size_categories:
  - 1K<n<10K
---

# VCBench: Clipped Videos Dataset

## Dataset Description

This dataset contains 4,574 clipped video segments from VCBench (Video Counting Benchmark), designed for evaluating spatial-temporal state maintenance in video understanding models.

### Dataset Summary

- **Total Videos:** 4,574 clips
- **Total Size:** ~80 GB
- **Video Format:** MP4 (H.264)
- **Categories:** 8 subcategories across object counting and event counting tasks

### Categories

**Object Counting (2,297 clips):**

- **O1-Snap:** Current-state snapshot (252 clips)
- **O1-Delta:** Current-state delta (98 clips)
- **O2-Unique:** Global unique counting (1,869 clips)
- **O2-Gain:** Windowed gain counting (78 clips)

**Event Counting (2,277 clips):**

- **E1-Action:** Instantaneous action (1,281 clips)
- **E1-Transit:** State transition (205 clips)
- **E2-Periodic:** Periodic action (280 clips)
- **E2-Episode:** Episodic segment (511 clips)

## File Naming Convention

### Multi-query clips

Format: `{category}_{question_id}_{query_index}.mp4`

Examples: `e1action_0000_00.mp4`, `e1action_0000_01.mp4`

### Single-query clips

Format: `{category}_{question_id}.mp4`

Examples: `o1delta_0007.mp4`, `o2gain_0000.mp4`
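The convention above can be parsed with a few lines of Python. This is a minimal sketch, not part of the dataset tooling; it assumes a 4-digit question id and a 2-digit query index, as in the examples.

```python
import re

# Multi-query: {category}_{question_id}_{query_index}.mp4
# Single-query: {category}_{question_id}.mp4
CLIP_NAME = re.compile(
    r"^(?P<category>[a-z0-9]+)_(?P<question_id>\d{4})"
    r"(?:_(?P<query_index>\d{2}))?\.mp4$"
)

def parse_clip_name(name: str) -> dict:
    """Return category, question_id, and query_index (None for single-query clips)."""
    m = CLIP_NAME.match(name)
    if m is None:
        raise ValueError(f"unrecognized clip filename: {name}")
    return m.groupdict()
```

For example, `parse_clip_name("e1action_0000_01.mp4")` yields category `e1action`, question id `0000`, query index `01`, while `parse_clip_name("o1delta_0007.mp4")` yields a `None` query index.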

## Video Properties

- **Encoding:** H.264 (using `-c copy` for lossless clipping)
- **Frame rates:** preserved from source (3 fps, 24 fps, 25 fps, 30 fps, 60 fps)
- **Duration accuracy:** within ±0.1 s of annotation timestamps
- **Quality:** original quality maintained (no re-encoding)
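The `-c copy` clipping described above corresponds to an ffmpeg stream-copy invocation. The sketch below builds such a command; the exact flags used to produce this dataset are not documented here, and the paths and timestamps are placeholders.

```python
def ffmpeg_clip_cmd(src: str, start_s: float, end_s: float, dst: str) -> list[str]:
    """Build an ffmpeg command that clips [start_s, end_s] by stream copy.

    `-c copy` remuxes without re-encoding, preserving the source codec,
    frame rate, and quality (no transcoding artifacts). Note that with
    stream copy, cuts can snap to keyframes, which is one reason clip
    boundaries carry a small tolerance such as ±0.1 s.
    """
    return [
        "ffmpeg", "-ss", f"{start_s:.3f}", "-to", f"{end_s:.3f}",
        "-i", src, "-c", "copy", dst,
    ]
```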

## Source Datasets

Videos are clipped from multiple source datasets:

- YouTube walking tours and sports videos
- RoomTour3D (indoor navigation)
- Ego4D (first-person view)
- ScanNet, ScanNetPP, ARKitScenes (3D indoor scenes)
- TOMATO, CODa, OmniWorld (temporal reasoning)
- Simulated physics videos

## Usage

### Loading with Python

```python
from huggingface_hub import hf_hub_download
import cv2

# Download a specific video
video_path = hf_hub_download(
    repo_id="YOUR_USERNAME/VCBench",
    filename="e1action_0000_00.mp4",
    repo_type="dataset",
)

# Load with OpenCV
cap = cv2.VideoCapture(video_path)
```

### Batch Download

```shell
# Install the Hugging Face Hub CLI (ships with the huggingface_hub package)
pip install huggingface_hub

# Download the entire dataset
huggingface-cli download YOUR_USERNAME/VCBench --repo-type dataset --local-dir ./vcbench_videos
```

## Annotations

For complete annotations, including questions, query points, and ground-truth answers, please refer to the original VCBench repository:

- Object counting annotations: `object_count_data/*.json`
- Event counting annotations: `event_counting_data/*.json`

Each annotation file contains:

- `id`: question identifier
- `source_dataset`: original video source
- `video_path`: original video filename
- `question`: counting question
- `query_time` or `query_points`: timestamp(s) for queries
- `count`: ground-truth answer(s)
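Combining the annotation fields with the file naming convention, an annotation entry can be mapped back to its clip file(s). This is a sketch under stated assumptions, not verified against the annotation schema: it assumes `id` is an integer, that multi-query entries store a list under `query_points`, and that single-query entries store a scalar `query_time`.

```python
def clip_filenames(category: str, entry: dict) -> list[str]:
    """Map one annotation entry to the clip file(s) it refers to.

    Multi-query entries (a list of query points) map to one clip per
    query index; single-query entries map to a single clip.
    """
    qid = int(entry["id"])
    points = entry.get("query_points")
    if isinstance(points, list):
        return [f"{category}_{qid:04d}_{i:02d}.mp4" for i in range(len(points))]
    return [f"{category}_{qid:04d}.mp4"]
```

Under these assumptions, an `e1action` entry with id 0 and two query points maps to `e1action_0000_00.mp4` and `e1action_0000_01.mp4`.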

## Quality Validation

All videos have been validated for:

- ✓ Duration accuracy (100% within ±0.1 s)
- ✓ Frame-rate preservation (original fps maintained)
- ✓ No frame drops or speed changes
- ✓ Lossless clipping (no re-encoding artifacts)

## Citation

If you use this dataset, please cite the VCBench paper:

```bibtex
@article{vcbench2026,
  title={VCBench: A Streaming Counting Benchmark for Spatial-Temporal State Maintenance},
  author={[Authors]},
  journal={[Journal/Conference]},
  year={2026}
}
```

## License

MIT License. See the LICENSE file for details.

## Dataset Statistics

| Category    | Clips | Avg Duration | Total Size |
|-------------|------:|--------------|-----------:|
| O1-Snap     | 252   | ~2 min       | ~4.3 GB    |
| O1-Delta    | 98    | ~1 min       | ~1.7 GB    |
| O2-Unique   | 1,869 | ~3 min       | ~32 GB     |
| O2-Gain     | 78    | ~1 min       | ~1.3 GB    |
| E1-Action   | 1,281 | ~4 min       | ~28 GB     |
| E1-Transit  | 205   | ~2 min       | ~3.5 GB    |
| E2-Periodic | 280   | ~3 min       | ~8.7 GB    |
| E2-Episode  | 511   | ~2 min       | ~4.8 GB    |
| **Total**   | **4,574** | –        | **~80 GB** |

## Contact

For questions or issues, please open an issue in the dataset repository.