---
license: cc-by-4.0
---

# MIRA: Multimodal Imagination for Reasoning Assessment

## Dataset Description

**MIRA (Multimodal Imagination for Reasoning Assessment)** evaluates whether MLLMs can *think while drawing*, i.e., generate and use intermediate **visual** representations (sketches, diagrams, trajectories) as part of reasoning. MIRA includes **546** carefully curated problems spanning **20 task types** across four domains:

- **Euclidean Geometry (EG)**
- **Physics-Based Reasoning (PBR)**
- **Abstract Spatial & Logical Puzzles (ASLP)**
- **Causal Transformations (CT)**

Each instance comes with gold **visual chain-of-thought (Visual-CoT) images** and final answers. We provide three evaluation settings: **Direct** (image + question), **Text-CoT**, and **Visual-CoT**.

17
+
18
+ <p align="center">
19
+ <img src="https://huggingface.co/datasets/YiyangAiLab/MIRA/fig1.jpg" width="45%">
20
+ </p>
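To make the three evaluation settings concrete, an evaluation harness might assemble model inputs along the following lines. This is our own illustrative sketch, not the benchmark's official templates: the prompt wording and the `cot_images` field name are assumptions.

```python
def build_inputs(sample, setting="direct"):
    """Sketch of how the three MIRA evaluation settings differ in what the
    model receives. Prompt wording and the `cot_images` field are
    illustrative assumptions, not the benchmark's official interface.
    """
    images = [sample["image_path"]]
    if setting == "direct":
        # Direct: image + question only.
        prompt = sample["question"]
    elif setting == "text-cot":
        # Text-CoT: ask the model to reason step by step in text.
        prompt = sample["question"] + "\nThink step by step, then give the final answer."
    elif setting == "visual-cot":
        # Visual-CoT: gold intermediate sketches accompany the input image.
        images += sample.get("cot_images", [])
        prompt = sample["question"] + "\nUse the intermediate sketches while reasoning."
    else:
        raise ValueError(f"unknown setting: {setting}")
    return images, prompt

sample = {"image_path": "img.png", "question": "Q?", "cot_images": ["step1.png"]}
print(build_inputs(sample, "visual-cot")[0])  # ['img.png', 'step1.png']
```
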
---

## Paper / Code / Project

- **Paper**: https://arxiv.org/abs/2511.02779
- **Project**: https://mira-benchmark.github.io/
- **Code**: https://github.com/aiming-lab/MIRA

---

## Dataset Usage

### Install

You can download the dataset with the `datasets` library as follows (using the **billiards** subset as an example):

```python
from datasets import load_dataset

# Download and load the test split of the "billiards" subset.
dataset = load_dataset("YiyangAiLab/MIRA", "billiards", split="test")
```

### Data Format

The dataset is provided in **JSON Lines (JSONL)** format. Each line is a standalone JSON object with the following fields:

- `uid` (int): Unique identifier for the sample.
- `image_path` (string): Relative or absolute path to the input image file.
- `question` (string): The natural-language prompt associated with the image.
- `answer` (int | string): The gold final answer; a number for numeric answers, a string for textual answers.

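Since each line is a standalone JSON object, records can be parsed one at a time with the standard `json` module. The field values below are invented purely to illustrate the schema:

```python
import json

# One illustrative JSONL record; the values are made up for demonstration.
line = '{"uid": 0, "image_path": "images/example.png", "question": "Where does the ball land?", "answer": 2}'

sample = json.loads(line)
# The parsed object exposes exactly the four documented fields.
assert set(sample) == {"uid", "image_path", "question", "answer"}
print(sample["answer"])  # 2
```
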
### Automatic Evaluation

To automatically evaluate a model on the dataset, please refer to our GitHub repository [here](https://github.com/aiming-lab/MIRA).

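The official evaluation scripts live in the repository above. As a minimal sketch only, exact-match accuracy against the gold `answer` field could be computed as follows; `exact_match_accuracy`, `predictions`, and `samples` are hypothetical names, and the normalization is our own choice:

```python
def exact_match_accuracy(predictions, samples):
    """Fraction of predictions that exactly match the gold answers.

    Both sides are normalized to lowercase stripped strings, so a
    predicted 2 matches a gold "2" and "LEFT" matches "left".
    """
    correct = sum(
        str(pred).strip().lower() == str(s["answer"]).strip().lower()
        for pred, s in zip(predictions, samples)
    )
    return correct / len(samples)

# Toy usage with made-up samples.
samples = [{"answer": 2}, {"answer": "left"}]
print(exact_match_accuracy([2, "right"], samples))  # 0.5
```
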
## Citation

```bibtex
@misc{zhou2025visualizingstepreasoningmira,
  title={When Visualizing is the First Step to Reasoning: MIRA, a Benchmark for Visual Chain-of-Thought},
  author={Yiyang Zhou and Haoqin Tu and Zijun Wang and Zeyu Wang and Niklas Muennighoff and Fan Nie and Yejin Choi and James Zou and Chaorui Deng and Shen Yan and Haoqi Fan and Cihang Xie and Huaxiu Yao and Qinghao Ye},
  year={2025},
  eprint={2511.02779},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2511.02779},
}
```