---
license: cc-by-4.0
---
# When Visualizing is the First Step to Reasoning: MIRA, a Benchmark for Visual Chain-of-Thought

## Dataset Description

**MIRA (Multimodal Imagination for Reasoning Assessment)** evaluates whether MLLMs can *think while drawing*—i.e., generate and use intermediate **visual** representations (sketches, diagrams, trajectories) as part of reasoning.  
MIRA includes **546** carefully curated problems spanning **20 task types** across four domains:

- **Euclidean Geometry (EG)**
- **Physics-Based Reasoning (PBR)**
- **Abstract Spatial & Logical Puzzles (ASLP)**
- **Causal Transformations (CT)**

Each instance comes with gold **visual chain-of-thought (Visual-CoT) images** and final answers. We provide three evaluation settings: **Direct** (image + question), **Text-CoT**, and **Visual-CoT**.

<p align="center">
    <img src="https://huggingface.co/datasets/YiyangAiLab/MIRA/resolve/main/fig1.jpg" width="95%"> <br>
</p>

---

## Paper / Code / Project

- **Paper**: https://arxiv.org/abs/2511.02779  
- **Project**: https://mira-benchmark.github.io/  
- **Code**: https://github.com/aiming-lab/MIRA 

---

## Dataset Usage

### Install

You can download the dataset with the following command (here, the `billiards` subset as an example):

```python
from datasets import load_dataset
dataset = load_dataset("YiyangAiLab/MIRA", "billiards")
```

### Data Format

The dataset is provided in **JSON Lines (jsonl)** format. Each line is a standalone JSON object with the following fields:

- `uid` (int): unique identifier for the sample
- `image_path` (string): relative or absolute path to the input image file
- `question` (string): the natural-language prompt associated with the image
- `answer` (int | string): the gold final answer; a number for numeric answers, a string for textual answers
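As a sketch, a single jsonl line can be parsed with the Python standard library. The field values below are made up purely for illustration; only the field names follow the documented schema:

```python
import json

# A made-up sample line in the documented schema (values are illustrative only).
line = '{"uid": 1, "image_path": "images/example.png", "question": "How many balls are pocketed?", "answer": 2}'

sample = json.loads(line)

# Each line is a standalone JSON object with exactly these fields.
assert set(sample) == {"uid", "image_path", "question", "answer"}
```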

### Automatic Evaluation

To automatically evaluate a model on the dataset, please refer to our GitHub repository [here](https://github.com/aiming-lab/MIRA).
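For intuition, a minimal exact-match scoring loop might look like the following. This is only an illustrative sketch with a hypothetical `exact_match_accuracy` helper, not the official evaluation script, which lives in the repository above:

```python
def exact_match_accuracy(predictions, references):
    """Fraction of predictions matching the gold answer after light normalization.

    Illustrative only; the official MIRA evaluation is in the GitHub repo.
    """
    def norm(x):
        # Compare numbers and strings uniformly, case- and whitespace-insensitively.
        return str(x).strip().lower()

    if not references:
        return 0.0
    correct = sum(norm(p) == norm(r) for p, r in zip(predictions, references))
    return correct / len(references)

# Example: two of the three predictions match the gold answers.
acc = exact_match_accuracy(["2", "triangle", "left"], [2, "Triangle", "right"])
```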

## Citation

```
@misc{zhou2025visualizingstepreasoningmira,
      title={When Visualizing is the First Step to Reasoning: MIRA, a Benchmark for Visual Chain-of-Thought}, 
      author={Yiyang Zhou and Haoqin Tu and Zijun Wang and Zeyu Wang and Niklas Muennighoff and Fan Nie and Yejin Choi and James Zou and Chaorui Deng and Shen Yan and Haoqi Fan and Cihang Xie and Huaxiu Yao and Qinghao Ye},
      year={2025},
      eprint={2511.02779},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2511.02779}, 
}
```