---
license: mit
task_categories:
  - visual-question-answering
language:
  - en
tags:
  - multimodal
pretty_name: EMVista
size_categories:
  - 1K<n<10K
configs:
  - config_name: default
    data_files:
      - split: test
        path: data/test.parquet
---

# EMVista Dataset

## 🔥 Latest News

- **[2026/01]** EMVista v1.0 is officially released.

## Overview

EMVista is a benchmark for evaluating instance-level microstructural understanding in electron microscopy (EM) images across three core capability dimensions:

1. **Microstructural Perception**
   Evaluates the ability to detect, delineate, and separate individual microstructural instances in complex EM scenes.
2. **Microstructural Attribute Understanding**
   Measures the capacity to interpret key microstructural attributes, including morphology, density, spatial distribution, layering, and scale variation.
3. **Robustness in Dense Scenes**
   Assesses model stability and accuracy under extreme instance crowding, overlap, and multi-scale complexity.

EMVista contains expert-annotated EM images with instance-level labels and structured attribute descriptions, designed to reflect realistic challenges in materials microstructure analysis.


## Dataset Characteristics

- **Task Format:** Visual Question Answering (VQA)
- **Modalities:** Image + Text
- **Languages:** English
- **Annotation:** Expert-verified

## Download EMVista Dataset

You can download the EMVista dataset with the Hugging Face `datasets` library (install it first via `pip install datasets`):

```python
from datasets import load_dataset

# Downloads and caches the benchmark; the single "default" config
# contains one "test" split (see the `configs` metadata above).
dataset = load_dataset("InnovatorLab/EMVista")
```
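
Once loaded, a quick way to sanity-check the data is to inspect the schema and the first test example. The column names used below (`question`, `answer`, `image`) are illustrative assumptions, not confirmed field names; consult the printed `features` for the actual schema:

```python
from datasets import load_dataset

dataset = load_dataset("InnovatorLab/EMVista")
test = dataset["test"]

# Print the actual schema; the column names below are assumptions.
print(test.features)

sample = test[0]
# Hypothetical columns for illustration -- adjust to the printed schema.
print(sample["question"])   # VQA prompt about the EM micrograph
print(sample["answer"])     # expert-verified reference answer
sample["image"].show()      # display the micrograph (PIL.Image)
```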

## Evaluations

We use [lmms-eval](https://github.com/EvolvingLMMs-Lab/lmms-eval) for evaluation. Please see here for the detailed evaluation files.
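
As a rough sketch, an lmms-eval run could be driven as below. This assumes EMVista has been registered as an lmms-eval task; the task name `emvista` and the model/checkpoint are placeholders for illustration, not confirmed names:

```python
import subprocess

# Minimal sketch of invoking the lmms-eval CLI. "emvista" is a
# hypothetical task name -- check the lmms-eval task list for the
# actual registered name of this benchmark.
subprocess.run(
    [
        "python", "-m", "lmms_eval",
        "--model", "llava",                                 # any model registered in lmms-eval
        "--model_args", "pretrained=liuhaotian/llava-v1.5-7b",
        "--tasks", "emvista",                               # placeholder task name
        "--batch_size", "1",
        "--output_path", "./logs/",
    ],
    check=True,
)
```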

## License

EMVista is released under the MIT License. See LICENSE for more details.

## Citation

```bibtex
@article{wen2026innovator,
  title={Innovator-VL: A Multimodal Large Language Model for Scientific Discovery},
  author={Wen, Zichen and Yang, Boxue and Chen, Shuang and Zhang, Yaojie and Han, Yuhang and Ke, Junlong and Wang, Cong and others},
  journal={arXiv preprint arXiv:2601.19325},
  year={2026}
}
```