Dataset Card for MimeQA
MimeQA is a video question-answering benchmark designed to evaluate the nonverbal social reasoning capabilities of AI models. Sourced from ~8 hours of mime performances on YouTube, it comprises 101 videos and 806 open-ended QA pairs that span fine-grained perceptual grounding to high-level social cognition. Unlike existing social video benchmarks that involve human dialogue, MimeQA challenges models to interpret nonverbal, embodied gestures in the absence of speech, props, or narration.
Dataset Details
Dataset Description
MimeQA evaluates the capacity of AI systems to understand nonverbal communication and social interactions through mime videos. The dataset consists of short video segments (1–10 minutes), each paired with open-ended questions and single-sentence reference answers. Questions span a three-tier hierarchy:
- Grounding the Imagined: recognizing pretend objects or activities through gestures.
- Scene-Level: reasoning over local events, affect, and intentions.
- Global-Level: evaluating working memory, social judgment, and theory of mind across entire videos.
The dataset is densely annotated, with ~8 QA pairs per video, and includes optional timestamps for localized segments. MimeQA is particularly challenging for current video-language models: while humans score ~86% accuracy, the best models achieve only 20–30%.
- Language(s) (NLP): English
- License: CC-BY-SA
Dataset Sources
- Repository: https://github.com/MIT-MI/MimeQA
- Paper: https://arxiv.org/abs/2502.16671
Dataset Structure
Each row in metadata.csv includes:
- file_name: File name of the corresponding video
- timestamp: (optional) Start and end timestamps of the relevant scene
- question_type: One of grounding, temporal, affect recognition, intention, working memory, social judgment, theory of mind
- question: An open-ended question
- reference_answer: Ground-truth answer
The videos are saved as .mp4 files in the /videos folder.
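A minimal sketch of loading the QA pairs with pandas, using the columns listed above. The exact question-type strings and the tier mapping (derived from the three-tier hierarchy described earlier) are assumptions based on this card, not a verified part of the dataset's API:

```python
import os
import pandas as pd

# Question types mapped onto the three-tier hierarchy described above.
# The exact strings are an assumption based on this card's schema.
TIER_BY_TYPE = {
    "grounding": "grounding the imagined",
    "temporal": "scene-level",
    "affect recognition": "scene-level",
    "intention": "scene-level",
    "working memory": "global-level",
    "social judgment": "global-level",
    "theory of mind": "global-level",
}

def load_qa_pairs(root="."):
    """Read metadata.csv, attaching each row's video path and hierarchy tier."""
    df = pd.read_csv(os.path.join(root, "metadata.csv"))
    # Videos are stored as .mp4 files in the /videos folder.
    df["video_path"] = df["file_name"].map(lambda f: os.path.join(root, "videos", f))
    df["tier"] = df["question_type"].map(TIER_BY_TYPE)
    return df
```

From here, filtering by tier (e.g. `df[df["tier"] == "global-level"]`) gives the subsets needed to score models separately on perceptual grounding versus social cognition.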
Dataset Creation
Curation Rationale
Existing video QA benchmarks rely heavily on spoken language, limiting evaluation of models’ nonverbal and embodied reasoning capabilities. MimeQA was created to fill this gap by leveraging the art of mime—where performers use only gestures, movement, and body language to tell stories.
Source Data
Data Collection and Processing
- Collected 221 Creative Commons-licensed YouTube videos found via search terms containing "mime".
- Filtered down to 101 videos (1–10 minutes each), totaling ~8 hours of content.
- Human annotators wrote ~6 scene-level questions, ~4 global-level questions, and additional grounding questions as applicable per video, yielding ~8 verified QA pairs per video.
Annotations
Annotation process
Each video in MimeQA was annotated by human annotators using a structured hierarchy of social reasoning tasks. Annotators were instructed to write approximately six scene-level questions, four global-level questions, and as many grounding questions as relevant for each video. For scene-level and grounding questions, annotators also provided start and end timestamps to specify the referenced video segment. After annotation, each QA pair was independently verified by a second annotator who watched the video and answered the same questions without seeing the original answers. Verifiers compared their responses to the originals and marked inconsistencies or suggested revisions. This pipeline resulted in a final set of 806 high-quality QA pairs across 101 videos, with an inter-annotator agreement rate of 97.6%.
Citation
BibTeX:
@article{li2025mimeqa,
  title={MimeQA: Towards Socially-Intelligent Nonverbal Foundation Models},
  author={Li, Hengzhi and Tjandrasuwita, Megan and Fung, Yi R and Solar-Lezama, Armando and Liang, Paul Pu},
  journal={arXiv preprint arXiv:2502.16671},
  year={2025}
}
APA:
Li, H., Tjandrasuwita, M., Fung, Y. R., Solar-Lezama, A., & Liang, P. P. (2025). MimeQA: Towards Socially-Intelligent Nonverbal Foundation Models. arXiv preprint arXiv:2502.16671.
Dataset Card Authors
- Hengzhi Li (MIT / Imperial College London)
- Megan Tjandrasuwita (MIT)
- Yi R. Fung (MIT)
- Armando Solar-Lezama (MIT)
- Paul Pu Liang (MIT)
Dataset Card Contact
Hengzhi Li: hengzhil[at]mit[dot]edu
Megan Tjandrasuwita: megantj[at]mit[dot]edu