hzli1202 committed dcacda1 (verified · 1 parent: 457249e)

Updated README.md with dataset information

Files changed (1): README.md (+108 −3)
---
license: cc-by-nc-sa-4.0
---

# Dataset Card for MimeQA

<!-- Provide a quick summary of the dataset. -->

MimeQA is a video question-answering benchmark designed to evaluate the nonverbal social reasoning capabilities of AI models. Sourced from ~8 hours of mime performances on YouTube, it comprises 101 videos and 806 open-ended QA pairs that span fine-grained perceptual grounding to high-level social cognition. Unlike existing social video benchmarks that involve human dialogue, MimeQA challenges models to interpret nonverbal, embodied gestures in the absence of speech, props, or narration.

## Dataset Details

### Dataset Description

<!-- Provide a longer summary of what this dataset is. -->

MimeQA evaluates the capacity of AI systems to understand nonverbal communication and social interaction through mime videos. The dataset consists of short video segments (1–10 minutes) paired with open-ended questions, each with a single-sentence reference answer. Questions span a three-tier hierarchy:

- **Grounding the Imagined**: recognizing pretend objects or activities through gestures.
- **Scene-Level**: reasoning over local events, affect, and intentions.
- **Global-Level**: evaluating working memory, social judgment, and theory of mind across entire videos.

The dataset is densely annotated, with ~8 QA pairs per video, and includes optional timestamps for localized segments. MimeQA is particularly challenging for current video-language models: while humans score ~86%, top models reach only 20–30% accuracy.

- **Language(s) (NLP):** English
- **License:** CC-BY-NC-SA-4.0

### Dataset Sources

<!-- Provide the basic links for the dataset. -->

- **Repository:** [https://github.com/MIT-MI/MimeQA](https://github.com/MIT-MI/MimeQA)
- **Paper:** [https://arxiv.org/abs/2502.16671](https://arxiv.org/abs/2502.16671)

## Dataset Structure

<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->

Each row in `metadata.csv` includes:

- **file_name**: file name of the corresponding video
- **timestamp**: (optional) timestamps for the relevant scene
- **question_type**: one of grounding, temporal, affect recognition, intention, working memory, social judgment, theory of mind
- **question**: an open-ended question
- **reference_answer**: the ground-truth answer

The videos are saved as `.mp4` files in the `/videos` folder.
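
A minimal sketch of reading these fields with Python's standard `csv` module; the row below is synthetic and only mirrors the schema listed above, not actual dataset content:

```python
import csv
import io

# Synthetic row mirroring the metadata.csv schema described above;
# the file name, timestamp, and QA text are made up for illustration.
sample_csv = io.StringIO(
    "file_name,timestamp,question_type,question,reference_answer\n"
    "mime_001.mp4,00:15-00:42,grounding,"
    "What object is the performer pretending to pull?,A rope\n"
)

rows = list(csv.DictReader(sample_csv))
for row in rows:
    print(f"[{row['question_type']}] {row['question']} -> {row['reference_answer']}")
```

The same loop works on the real `metadata.csv` by replacing the `StringIO` buffer with an open file handle.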

## Dataset Creation

### Curation Rationale

<!-- Motivation for the creation of this dataset. -->

Existing video QA benchmarks rely heavily on spoken language, which limits evaluation of models’ nonverbal and embodied reasoning capabilities. MimeQA was created to fill this gap by leveraging the art of mime, in which performers use only gestures, movement, and body language to tell stories.

### Source Data

<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->

#### Data Collection and Processing

<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->

- Collected 221 Creative Commons-licensed YouTube videos retrieved with "mime" in the search terms.
- Filtered down to 101 videos (1–10 minutes each), totaling ~8 hours of content.
- Human annotators wrote ~6 scene-level questions, ~4 global-level questions, and additional grounding questions as applicable per video; after verification, this yielded ~8 QA pairs per video.
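
The duration criterion in the second step can be sketched as a simple filter; the file names and durations below are invented for illustration:

```python
# Hypothetical curation filter: keep only videos between 1 and 10 minutes,
# mirroring the duration criterion above. File names and durations (in
# seconds) are invented for illustration.
candidate_durations = {
    "mime_001.mp4": 95,    # ~1.6 min -> keep
    "mime_002.mp4": 30,    # too short -> drop
    "mime_003.mp4": 720,   # 12 min -> drop
}

kept = {name for name, sec in candidate_durations.items() if 60 <= sec <= 600}
print(sorted(kept))  # ['mime_001.mp4']
```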

### Annotations

<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->

#### Annotation process

Each video in MimeQA was annotated by human annotators using a structured hierarchy of social reasoning tasks. Annotators were instructed to write approximately six scene-level questions, four global-level questions, and as many grounding questions as relevant for each video. For scene-level and grounding questions, annotators also provided start and end timestamps to specify the referenced video segment. After annotation, each QA pair was independently verified by a second annotator, who watched the video and answered the same questions without seeing the original answers. Verifiers then compared their responses to the originals and marked inconsistencies or suggested revisions. This pipeline resulted in a final set of **806 high-quality QA pairs** across 101 videos, with an inter-annotator agreement rate of 97.6%.
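
The agreement bookkeeping can be illustrated as the fraction of QA pairs the verifier marked consistent; the counts below are invented (chosen to land near the reported rate), not the dataset's actual per-pair labels:

```python
# Illustrative verification bookkeeping: a second annotator marks each QA
# pair as consistent (True) or inconsistent (False), and agreement is the
# fraction of consistent pairs. Counts are hypothetical.
verified = [True] * 41 + [False]   # 42 hypothetical QA pairs, 1 flagged
agreement = sum(verified) / len(verified)
print(f"{agreement:.1%}")  # 97.6%
```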

## Citation

<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

```bibtex
@article{li2025mimeqa,
  title={MimeQA: Towards Socially-Intelligent Nonverbal Foundation Models},
  author={Li, Hengzhi and Tjandrasuwita, Megan and Fung, Yi R and Solar-Lezama, Armando and Liang, Paul Pu},
  journal={arXiv preprint arXiv:2502.16671},
  year={2025}
}
```

**APA:**

Li, H., Tjandrasuwita, M., Fung, Y. R., Solar-Lezama, A., & Liang, P. P. (2025). MimeQA: Towards Socially-Intelligent Nonverbal Foundation Models. *arXiv preprint arXiv:2502.16671*.

## Dataset Card Authors

- Hengzhi Li (MIT / Imperial College London)
- Megan Tjandrasuwita (MIT)
- Yi R. Fung (MIT)
- Armando Solar-Lezama (MIT)
- Paul Pu Liang (MIT)

## Dataset Card Contact

- Hengzhi Li: hengzhil[at]mit[dot]edu
- Megan Tjandrasuwita: megantj[at]mit[dot]edu