---
license: mit
task_categories:
- visual-question-answering
language:
- en
tags:
- multimodal
pretty_name: EMVista
size_categories:
- 1K<n<10K
configs:
- config_name: default
  data_files:
  - split: test
    path: data/test.parquet
---

# EMVista Dataset

<p align="center">
  <img src="./assets/pipeline.png" alt="EMVista" style="display: block; margin: auto; max-width: 70%;">
</p>

<p align="center">
  <a href="https://huggingface.co/datasets/InnovatorLab/EMVista"><b>HuggingFace</b></a>
</p>

---

## 🔥 Latest News

- **[2026/01]** EMVista v1.0 is officially released.

## Overview

**EMVista** is a benchmark for evaluating **instance-level microstructural understanding** in electron microscopy (EM) images across **three core capability dimensions**:

1. **Microstructural Perception**
   Evaluates the ability to detect, delineate, and separate individual microstructural instances in complex EM scenes.
2. **Microstructural Attribute Understanding**
   Measures the capacity to interpret key microstructural attributes, including morphology, density, spatial distribution, layering, and scale variation.
3. **Robustness in Dense Scenes**
   Assesses model stability and accuracy under extreme instance crowding, overlap, and multi-scale complexity.

EMVista contains **expert-annotated EM images** with instance-level labels and structured attribute descriptions, designed to reflect **realistic challenges** in materials microstructure analysis.

---

## Dataset Characteristics

- **Task Format**: Visual Question Answering (VQA)
- **Modalities**: Image + Text
- **Languages**: English
- **Annotation**: Expert-verified

---

### Download EMVista Dataset

You can download the EMVista dataset using the HuggingFace `datasets` library (make sure you have installed [HuggingFace Datasets](https://huggingface.co/docs/datasets/quickstart)):

```python
from datasets import load_dataset

dataset = load_dataset("InnovatorLab/EMVista")
```
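
As a quick sanity check after downloading, you can inspect the `test` split. The exact column names are not documented in this card, so the sketch below prints the schema rather than assuming one:

```python
# Inspect the test split declared in the dataset config.
test = dataset["test"]

print(test.num_rows)      # number of records in the test split
print(test.column_names)  # the actual field names of each VQA record
print(test[0])            # one full record, image included
```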

## Evaluations

We use [lmms-eval](https://github.com/EvolvingLMMs-Lab/lmms-eval) for evaluation. Please see [here](./evaluation/README.md) for the detailed evaluation files.
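
An lmms-eval run is typically launched from the command line; the sketch below shells out to it from Python. The task name `emvista` and the LLaVA checkpoint are placeholder assumptions for illustration; use the actual task name and model documented in [./evaluation/README.md](./evaluation/README.md):

```python
import subprocess

# Hypothetical invocation: "--tasks emvista" and the model checkpoint are
# placeholders, not confirmed names; substitute values from evaluation/README.md.
subprocess.run(
    [
        "python", "-m", "lmms_eval",
        "--model", "llava",
        "--model_args", "pretrained=liuhaotian/llava-v1.5-7b",
        "--tasks", "emvista",
        "--batch_size", "1",
        "--output_path", "./logs/",
    ],
    check=True,
)
```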

## License

EMVista is released under the MIT License. See [LICENSE](./LICENSE) for more details.

## Citation

```bibtex
@article{wen2026innovator,
  title={Innovator-VL: A Multimodal Large Language Model for Scientific Discovery},
  author={Wen, Zichen and Yang, Boxue and Chen, Shuang and Zhang, Yaojie and Han, Yuhang and Ke, Junlong and Wang, Cong and others},
  journal={arXiv preprint arXiv:2601.19325},
  year={2026}
}
```