Joslefaure committed · Commit 97b2ae4 · verified · Parent: 7481668

Update README.md

Update dataset card

Files changed (1): README.md (+52, -83)
README.md CHANGED
@@ -2,140 +2,109 @@
  license: mit
  ---

- # Dataset Card for Dataset Name

- <!-- Provide a quick summary of the dataset. -->
-
- This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).

  ## Dataset Details

  ### Dataset Description

- <!-- Provide a longer summary of what this dataset is. -->
-
- - **Curated by:** [More Information Needed]
- - **Funded by [optional]:** [More Information Needed]
- - **Shared by [optional]:** [More Information Needed]
- - **Language(s) (NLP):** [More Information Needed]
- - **License:** [More Information Needed]

  ### Dataset Sources [optional]

- <!-- Provide the basic links for the dataset. -->
-
- - **Repository:** [More Information Needed]
- - **Paper [optional]:** [More Information Needed]
- - **Demo [optional]:** [More Information Needed]

  ## Uses

- <!-- Address questions around how the dataset is intended to be used. -->
-
  ### Direct Use

- <!-- This section describes suitable use cases for the dataset. -->
-
- [More Information Needed]

  ### Out-of-Scope Use

- <!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
-
- [More Information Needed]

  ## Dataset Structure

- <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
-
- [More Information Needed]

  ## Dataset Creation

  ### Curation Rationale

- <!-- Motivation for the creation of this dataset. -->
-
- [More Information Needed]

  ### Source Data

- <!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->

- #### Data Collection and Processing

- <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->

- [More Information Needed]

  #### Who are the source data producers?

- <!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
-
- [More Information Needed]

- ### Annotations [optional]
-
- <!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->

  #### Annotation process

- <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
-
- [More Information Needed]

  #### Who are the annotators?

- <!-- This section describes the people or systems who created the annotations. -->
-
- [More Information Needed]

  #### Personal and Sensitive Information

- <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
-
- [More Information Needed]

  ## Bias, Risks, and Limitations

- <!-- This section is meant to convey both technical and sociotechnical limitations. -->
-
- [More Information Needed]

- ### Recommendations
-
- <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
-
- Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
-
- ## Citation [optional]
-
- <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->

  **BibTeX:**

- [More Information Needed]
-
- **APA:**
-
- [More Information Needed]
-
- ## Glossary [optional]
-
- <!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
-
- [More Information Needed]
-
- ## More Information [optional]
-
- [More Information Needed]
-
- ## Dataset Card Authors [optional]
-
- [More Information Needed]
-
- ## Dataset Card Contact
-
- [More Information Needed]
  license: mit
  ---

+ # Dataset Card for FineBench

+ FineBench is a large-scale, multiple-choice Video Question Answering (VQA) dataset designed specifically to evaluate the fine-grained understanding of human actions in videos. It leverages the dense spatial (bounding boxes) and temporal (timestamps) annotations from the AVA v2.2 dataset, providing ~200k questions focused on nuanced person movements, interactions, and object manipulations within long video contexts.

  ## Dataset Details

  ### Dataset Description

+ FineBench addresses a key gap in existing VQA benchmarks by focusing on **fine-grained human action understanding** coupled with **dense spatio-temporal grounding**. Based on the AVA v2.2 dataset, which annotates atomic visual actions in movie clips, FineBench automatically generates multiple-choice questions (MCQs) using a template-based approach. Each question probes specific aspects of person movement, person interaction, or object manipulation, referencing individuals using spatial descriptors derived from their bounding boxes. The dataset includes ~200k QA pairs across 64 unique source videos (derived from AVA sources, primarily movies), with an average video duration of 900 seconds and high QA density. Its primary goal is to provide a challenging benchmark for evaluating the ability of Vision-Language Models (VLMs) to precisely localize and comprehend subtle human behaviors in complex scenes over time.

+ - **Curated by:** N/A
+ - **Language(s) (NLP):** English
+ - **License:** MIT

  ### Dataset Sources [optional]

+ - **Repository:** https://huggingface.co/datasets/FINEBENCH/FineBench
+ - **Paper:** Coming Soon
+ - **Demo:** Coming Soon

  ## Uses

  ### Direct Use

+ FineBench is primarily intended for **evaluating and benchmarking Vision-Language Models (VLMs)** on tasks requiring fine-grained understanding of human actions in videos. Specific use cases include:
+ * Assessing model capabilities in spatio-temporal reasoning regarding human actions.
+ * Evaluating understanding of nuanced person movement, person interaction, and object manipulation categories.
+ * Probing model robustness in handling multiple actors and spatial references within complex scenes.
+ * Analyzing model failure modes related to fine-grained comprehension (as demonstrated in the associated paper).
+ * [Stretch] Training or fine-tuning VLMs to improve fine-grained action understanding (though primarily designed as a benchmark).
  ### Out-of-Scope Use

+ FineBench is **not suitable** for:
+ * Directly inferring real-world statistics about human behavior (the source videos are primarily movies).
+ * Training models for surveillance or sensitive identity recognition, as it lacks the necessary labels and focuses on atomic actions from fictional content. Misuse related to analyzing depicted sensitive actions, even if fictional, should be avoided.

  ## Dataset Structure

+ FineBench is structured as a multiple-choice question-answering dataset. Each instance corresponds to a question about a specific person within a specific timestamped segment of a video. The key fields are:
+
+ * `video_id`: Identifier for the source video.
+ * `timestamp`: Timestamp indicating the relevant moment or segment in the video.
+ * `bbox`: Bounding box coordinates for the person(s) relevant to the question.
+ * `question`: The generated multiple-choice question (string).
+ * `options`: A list of possible answers (strings), including the correct answer and generated distractors.
+ * `answer`: The index of the correct answer within the `options` list.
+ * `action_name`: The ground-truth action label(s) the question is based on.
+ * `action_type`: The high-level category (Person Movement, Person Interaction, Object Manipulation) the question pertains to.

+ This structure ensures that each question is grounded in specific spatial regions (bounding boxes) and temporal moments (timestamps).
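For illustration, a single instance with these fields might look like the following; every value is invented for the example, and in practice the data would be loaded with the `datasets` library from the repository linked above:

```python
# A hypothetical FineBench instance using the field names described in this
# card. All values are illustrative, not drawn from the actual dataset.
instance = {
    "video_id": "abc123xyz",           # source video identifier
    "timestamp": 902,                  # seconds into the video
    "bbox": [0.12, 0.30, 0.45, 0.95],  # person box, normalized [x1, y1, x2, y2]
    "question": "What is the leftmost person doing?",
    "options": ["walk", "sit", "carry/hold (an object)", "stand"],
    "answer": 3,                       # index into `options`
    "action_name": "stand",            # ground-truth AVA action label
    "action_type": "Person Movement",  # high-level category
}

# The correct answer string is recovered by indexing into `options`,
# and matches the ground-truth action label.
assert instance["options"][instance["answer"]] == instance["action_name"]
```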
  ## Dataset Creation

  ### Curation Rationale

+ Existing VQA datasets often lack the dense spatial and temporal grounding, or the specific focus on fine-grained human actions, required to rigorously evaluate modern VLMs on nuanced video understanding. As shown in analyses accompanying this dataset, even state-of-the-art VLMs struggle to precisely localize actions and to distinguish subtle variations in human movement and interaction. FineBench was created to address this gap, providing a large-scale, challenging benchmark specifically designed to probe these fine-grained understanding abilities.

  ### Source Data

+ The primary source data for FineBench is the **AVA (Atomic Visual Actions) v2.2 dataset** (Gu et al., 2018). AVA provides dense annotations of atomic visual actions performed by humans in movie clips, including:
+ * Action labels (80 atomic actions).
+ * Bounding boxes localizing the person performing the action.
+ * Timestamps indicating when the action occurs.

+ FineBench utilizes these annotations and the corresponding video segments from AVA's source movies.
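For reference, AVA v2.2 distributes these annotations as plain CSV rows of the form `video_id,timestamp,x1,y1,x2,y2,action_id,person_id`, with box coordinates normalized to [0, 1]; a minimal parser (the dict keys are our own naming) might look like:

```python
# Minimal parser for an AVA v2.2-style annotation row. The CSV column order
# follows the AVA release; the field names in the returned dict are our own.
def parse_ava_row(line):
    video_id, ts, x1, y1, x2, y2, action_id, person_id = line.strip().split(",")
    return {
        "video_id": video_id,
        "timestamp": int(ts),                                  # keyframe second
        "bbox": [float(x1), float(y1), float(x2), float(y2)],  # normalized
        "action_id": int(action_id),                           # 1..80 atomic action
        "person_id": int(person_id),                           # person track id
    }

row = parse_ava_row("-5KQ66BBWC4,0902,0.077,0.151,0.283,0.811,80,1")
# e.g. row["timestamp"] == 902 and row["action_id"] == 80
```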
+ #### Data Collection and Processing

+ The FineBench QA pairs were not manually collected but **algorithmically generated** from the AVA v2.2 annotations. The process involved:
+ 1. **Template-Based Question Generation:** A set of ~70 question templates was designed, categorized by action type (Person Movement, Object Manipulation, Person Interaction).
+ 2. **Spatial Referencing:** Placeholders in templates (e.g., `{person}`) were instantiated with dynamic spatial descriptors (e.g., "the leftmost person", "the person in the center", "the second person from the left") derived from AVA bounding-box locations, ensuring unambiguous subject reference.
+ 3. **Distractor Selection:** For each question based on a ground-truth AVA action, plausible incorrect options (distractors) were selected with a two-tiered strategy: semantically similar actions were prioritized using a predefined mapping, falling back to random selection within the same action category when necessary. Compound questions were generated for simultaneous actions.
+ 4. **Data Structuring:** The generated questions, options, correct-answer labels, and relevant metadata (video ID, timestamp, bounding box, action category) were compiled into the final dataset splits, preserving the original AVA annotations.
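The spatial-referencing and distractor-selection steps above can be sketched as follows; the descriptor wording, the similarity map, and the action names are illustrative assumptions, not the exact generation code:

```python
import random

# Step 2 sketch: rank people by horizontal box center to build an
# unambiguous spatial descriptor for person `idx`. Wording is illustrative.
def spatial_descriptor(bboxes, idx):
    order = sorted(range(len(bboxes)),
                   key=lambda i: (bboxes[i][0] + bboxes[i][2]) / 2)
    rank = order.index(idx)
    if rank == 0:
        return "the leftmost person"
    if rank == len(bboxes) - 1:
        return "the rightmost person"
    return f"person {rank + 1} from the left"

# Step 3 sketch: two-tiered distractor selection -- prefer semantically
# similar actions from a predefined map, then fall back to random actions
# drawn from the same category.
def pick_distractors(answer, similar_map, category_actions, k=3, rng=random):
    pool = [a for a in similar_map.get(answer, []) if a != answer]
    if len(pool) < k:
        extras = [a for a in category_actions if a != answer and a not in pool]
        pool += rng.sample(extras, k - len(pool))
    return pool[:k]

# Toy usage: two people, the question targets the one on the left.
boxes = [[0.60, 0.10, 0.90, 0.90], [0.10, 0.10, 0.30, 0.90]]
subject = spatial_descriptor(boxes, 1)  # "the leftmost person"
question = f"What is {subject} doing?"
distractors = pick_distractors(
    "run", {"run": ["jog", "walk"]},
    ["run", "jog", "walk", "sit", "stand", "jump"],
    rng=random.Random(0),
)
```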
  #### Who are the source data producers?

+ The original annotations (action labels, bounding boxes, timestamps) were created by human annotators as part of the AVA v2.2 curation process (Gu et al., 2018). Details on the annotators (demographics, compensation) are available in the original AVA publications. The underlying visual data comes from movies produced by various film studios, directors, and actors.

  #### Annotation process

+ The QA generation process is described in the accompanying paper.

  #### Who are the annotators?

+ * **Base annotations (actions, boxes):** Human annotators for AVA v2.2.
+ * **QA pairs (questions, distractors):** Algorithmically generated by the creators of FineBench ([N/A]).

  #### Personal and Sensitive Information

+ The source videos are commercially distributed movies, not private recordings, so the risk of exposing personally identifiable information in the traditional sense is low. The dataset itself contains no explicit PII beyond potentially identifiable actors, who are public figures. No anonymization was applied, as the source material is public-domain or commercially distributed film content. However, the *actions* depicted, even if fictional, could be sensitive depending on the context (e.g., depictions of violence or specific interactions).

  ## Bias, Risks, and Limitations

+ * **Bias:** FineBench inherits potential biases from its source, AVA v2.2, which is based on movies.
+ * **Limitations:** FineBench focuses exclusively on human actions; it does not cover general scene understanding or object-centric VQA beyond human manipulation.

+ ## Citation

  **BibTeX:**

+ ```bibtex
+ Coming Soon
+ ```