---
license: mit
task_categories:
- visual-question-answering
language:
- en
tags:
- multimodal
pretty_name: EMVista
size_categories:
- 1K<n<10K
configs:
  - config_name: default
    data_files:
      - split: test
        path: data/test.parquet
---

# EMVista Dataset

<p align="center">
  <img src="./assets/pipeline.png" alt="EMVista" style="display: block; margin: auto; max-width: 70%;">
</p>

<p align="center">
<a href="https://huggingface.co/datasets/EMVista/EMVista"><b>HuggingFace</b></a>
</p>

---

## 🔥 Latest News

- **[2026/01]** EMVista v1.0 is officially released.

---
## Overview
**EMVista** is a benchmark for evaluating **instance-level microstructural understanding** in electron microscopy (EM) images across **three core capability dimensions**:

1. **Microstructural Perception**  
   Evaluates the ability to detect, delineate, and separate individual
   microstructural instances in complex EM scenes.
2. **Microstructural Attribute Understanding**  
   Measures the capacity to interpret key microstructural attributes, including
   morphology, density, spatial distribution, layering, and scale variation.
3. **Robustness in Dense Scenes**  
   Assesses model stability and accuracy under extreme instance crowding,
   overlap, and multi-scale complexity.

EMVista contains **expert-annotated EM images** with instance-level labels and
structured attribute descriptions, designed to reflect **realistic challenges**
in materials microstructure analysis.

---
## Dataset Characteristics

- **Task Format**: Visual Question Answering (VQA)
- **Modalities**: Image + Text
- **Languages**: English
- **Annotation**: Expert-verified

---

## Download EMVista Dataset

You can download the EMVista dataset with the Hugging Face `datasets` library
(see the [Datasets quickstart](https://huggingface.co/docs/datasets/quickstart)
for installation instructions):

```python
from datasets import load_dataset

# Downloads EMVista from the Hugging Face Hub (cached locally after the first run).
dataset = load_dataset("InnovatorLab/EMVista")
```
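
The card metadata above declares a single `test` split, so you can also load that split directly. The snippet below is a minimal sketch that prints the schema instead of assuming particular field names:

```python
from datasets import load_dataset

# Load only the test split declared in the card's config.
test_split = load_dataset("InnovatorLab/EMVista", split="test")

print(len(test_split))       # number of VQA examples
print(test_split.features)   # schema: field names and feature types
print(test_split[0].keys())  # fields available on a single record
```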

## Evaluations

We use [lmms-eval](https://github.com/EvolvingLMMs-Lab/lmms-eval) for evaluation. See the [evaluation README](./evaluation/README.md) for details.
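
As a starting point, the sketch below shells out to the lmms-eval CLI from Python. It assumes an EMVista task config has been registered with lmms-eval; the task name `emvista` and the model backend are illustrative placeholders, not fixed by this card, so consult the evaluation README for the actual setup:

```python
import subprocess

# Hypothetical invocation of the lmms-eval CLI. The task name "emvista"
# and the model backend are placeholders; see ./evaluation/README.md for
# the actual task registration and flags used by this benchmark.
subprocess.run(
    [
        "python", "-m", "lmms_eval",
        "--model", "llava",          # illustrative model backend
        "--tasks", "emvista",        # hypothetical task name
        "--batch_size", "1",
        "--log_samples",
        "--output_path", "./logs/",
    ],
    check=True,
)
```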

## License

EMVista is released under the MIT License. See [LICENSE](./LICENSE) for more details.

## Citation

```bibtex
@article{wen2026innovator,
  title={Innovator-VL: A Multimodal Large Language Model for Scientific Discovery},
  author={Wen, Zichen and Yang, Boxue and Chen, Shuang and Zhang, Yaojie and Han, Yuhang and Ke, Junlong and Wang, Cong and others},
  journal={arXiv preprint arXiv:2601.19325},
  year={2026}
}
```