---
license: mit
---
Dataset Card for FineBench
FineBench is a large-scale, multiple-choice Video Question Answering (VQA) dataset designed specifically to evaluate the fine-grained understanding of human actions in videos. It leverages the dense spatial (bounding boxes) and temporal (timestamps) annotations from the AVA v2.2 dataset, providing ~200k questions focused on nuanced person movements, interactions, and object manipulations within long video contexts.
Dataset Details
Dataset Description
FineBench addresses a key gap in existing VQA benchmarks by focusing on fine-grained human action understanding coupled with dense spatio-temporal grounding. Based on the AVA v2.2 dataset, which annotates atomic visual actions in movie clips, FineBench automatically generates multiple-choice questions (MCQs) using a template-based approach. Each question probes specific aspects of person movement, person interaction, or object manipulation, referencing individuals using spatial descriptors derived from their bounding boxes. The dataset includes ~200k QA pairs across 64 unique source videos (derived from AVA sources, primarily movies), with an average video duration of 900 seconds and high QA density. Its primary goal is to provide a challenging benchmark for evaluating the ability of Vision-Language Models (VLMs) to precisely localize and comprehend subtle human behaviors in complex scenes over time.
- Curated by: N/A
- Language(s) (NLP): English
- License: MIT
Dataset Sources
- Repository: https://huggingface.co/datasets/FINEBENCH/FineBench
- Paper: Coming Soon
- Demo: Coming Soon
Uses
Direct Use
FineBench is primarily intended for evaluating and benchmarking Vision-Language Models (VLMs) on tasks requiring fine-grained understanding of human actions in videos. Specific use cases include:
- Assessing model capabilities in spatio-temporal reasoning regarding human actions.
- Evaluating understanding of nuanced person movement, person interaction, and object manipulation categories.
- Probing model robustness in handling multiple actors and spatial references within complex scenes.
- Analyzing model failure modes related to fine-grained comprehension (as demonstrated in the associated paper).
- Training or fine-tuning VLMs to improve fine-grained action understanding (a secondary use; FineBench is primarily designed as a benchmark).
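Because FineBench is a multiple-choice benchmark, evaluation reduces to comparing a model's predicted option index against the ground-truth answer index. A minimal scoring sketch (the function name is an illustrative choice, not part of the dataset's tooling):

```python
def mcq_accuracy(predictions, references):
    """Fraction of questions where the predicted option index
    matches the ground-truth answer index."""
    assert len(predictions) == len(references), "mismatched lengths"
    correct = sum(p == r for p, r in zip(predictions, references))
    return correct / len(references)

# Example: 3 of 4 predicted indices match the references.
print(mcq_accuracy([1, 0, 2, 3], [1, 0, 0, 3]))  # 0.75
```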
Out-of-Scope Use
FineBench is not suitable for:
- Directly inferring real-world statistics about human behavior (due to the source videos being primarily movies).
- Training models for surveillance or sensitive identity recognition, as it lacks the necessary labels and focuses on atomic actions from fictional content. Misuse related to analyzing depicted sensitive actions, even if fictional, should be avoided.
Dataset Structure
FineBench is structured as a multiple-choice question-answering dataset. Each instance typically corresponds to a question about a specific person within a specific timestamped segment of a video. The key fields likely include:
- `video_id`: Identifier for the source video.
- `timestamp`: Timestamp indicating the relevant moment or segment in the video.
- `bbox`: Bounding box coordinates for the person(s) relevant to the question.
- `question`: The generated multiple-choice question (string).
- `options`: A list of possible answers (strings), including the correct answer and generated distractors.
- `answer`: The index of the correct answer within the `options` list.
- `action_name`: The ground-truth action label(s) the question is based on.
- `action_type`: The high-level category (Person Movement, Person Interaction, Object Manipulation) the question pertains to.
The dataset structure ensures that each question is grounded in specific spatial regions (bounding boxes) and temporal moments (timestamps).
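As a concrete illustration, a single record under this schema might look like the following. All values are invented, and the field names follow the description above; they may differ slightly in the released files:

```python
# Hypothetical FineBench record (illustrative values only; field names
# follow the schema described above and may differ in the released files).
example = {
    "video_id": "movie_0001",
    "timestamp": 902,                  # seconds into the source video
    "bbox": [0.12, 0.30, 0.45, 0.95],  # normalized [x1, y1, x2, y2]
    "question": "What is the leftmost person doing?",
    "options": ["sit", "stand", "walk", "run"],
    "answer": 1,                       # index into `options`
    "action_name": "stand",
    "action_type": "Person Movement",
}

# The correct answer string is recovered by indexing into the options list.
correct = example["options"][example["answer"]]
print(correct)  # stand
```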
Dataset Creation
Curation Rationale
Existing VQA datasets often lack the necessary dense spatial and temporal grounding, or the specific focus on fine-grained human actions required to rigorously evaluate modern VLMs' capabilities in nuanced video understanding. As shown in analyses accompanying this dataset, even state-of-the-art VLMs struggle with precisely localizing actions and distinguishing between subtle variations in human movement and interaction. FineBench was created to directly address this gap, providing a large-scale, challenging benchmark specifically designed to probe these fine-grained understanding abilities.
Source Data
The primary source data for FineBench is the AVA (Atomic Visual Actions) v2.2 dataset (Gu et al., 2018). AVA provides dense annotations of atomic visual actions performed by humans within movie clips, including:
- Action labels (80 atomic actions).
- Bounding boxes localizing the person performing the action.
- Timestamps indicating when the action occurs.
FineBench utilizes these annotations and the corresponding video segments from AVA's source movies.
Data Collection and Processing
The FineBench QA pairs were not manually collected but algorithmically generated based on the AVA v2.2 annotations. The process involved:
- Template-Based Question Generation: A comprehensive set of question templates (~70) was designed, categorized by action type (Person Movement, Object Manipulation, Person Interaction).
- Spatial Referencing: Placeholders in templates (e.g., `{person}`) were instantiated with dynamic spatial descriptors (e.g., "the leftmost person", "the person in the center", "the second person from the left") derived from the AVA bounding-box locations, ensuring unambiguous subject reference.
- Distractor Selection: For each question based on a ground-truth AVA action, plausible incorrect options (distractors) were selected using a two-tiered strategy: semantically similar actions from a predefined mapping were prioritized, with random selection within the same action category as a fallback. Compound questions were generated for simultaneous actions.
- Data Structuring: The generated questions, options, correct answer labels, and relevant metadata (video ID, timestamp, bounding box, action category) were compiled into the final dataset splits, preserving the original AVA annotations.
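The spatial-referencing step above can be sketched as follows. This is an illustrative reconstruction, not the authors' code: it orders persons by the horizontal center of their bounding boxes and maps ranks to descriptors like those quoted above; the released pipeline may use different rules.

```python
def spatial_descriptor(boxes, index):
    """Return a natural-language spatial reference for the person at
    `index`, given all person boxes in the frame as [x1, y1, x2, y2].
    Illustrative sketch only; the actual generation rules may differ."""
    centers = [(b[0] + b[2]) / 2 for b in boxes]       # horizontal centers
    order = sorted(range(len(boxes)), key=lambda i: centers[i])
    rank = order.index(index)                          # 0 = leftmost
    n = len(boxes)
    if n == 1:
        return "the person"
    if rank == 0:
        return "the leftmost person"
    if rank == n - 1:
        return "the rightmost person"
    if n % 2 == 1 and rank == n // 2:
        return "the person in the center"
    ordinals = {1: "second", 2: "third", 3: "fourth"}
    return f"the {ordinals[rank]} person from the left"

# Three people, left to right:
boxes = [[0.1, 0.2, 0.3, 0.9], [0.4, 0.2, 0.6, 0.9], [0.7, 0.2, 0.9, 0.9]]
print(spatial_descriptor(boxes, 0))  # the leftmost person
print(spatial_descriptor(boxes, 1))  # the person in the center
```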
Who are the source data producers?
The original annotations (action labels, bounding boxes, timestamps) were created by human annotators as part of the AVA v2.2 dataset curation process (Gu et al., 2018). Details on the annotators (demographics, compensation) are available in the original AVA publications. The underlying visual data comes from movies, produced by various film studios, directors, actors, etc.
Annotation process
Described in the accompanying paper.
Who are the annotators?
- Base Annotations (Actions, Boxes): Human annotators for AVA v2.2.
- QA Pairs (Questions, Distractors): Algorithmically generated by the creators of FineBench ([N/A]).
Personal and Sensitive Information
The source videos are from commercially distributed movies, not private recordings. Therefore, the risk of exposing PII of individuals in the traditional sense is low. The dataset itself does not contain explicit PII beyond potentially identifiable actors (who are public figures). No anonymization was applied as the source material is public-domain or commercially distributed film content. However, the actions depicted (even if fictional) could potentially be sensitive depending on the context (e.g., depictions of violence, specific interactions).
Bias, Risks, and Limitations
- Bias: FineBench inherits potential biases from its source, AVA v2.2, which is built from movies; depicted demographics, settings, and action frequencies reflect cinematic content rather than everyday behavior.
- Limitations:
  - Focuses exclusively on human actions; does not cover general scene understanding or object-centric VQA beyond human manipulation.
Citation
BibTeX:
Coming Soon