---
license: mit
task_categories:
- visual-question-answering
language:
- en
tags:
- multimodal
pretty_name: EMVista
size_categories:
- 1K
---

# EMVista
## 🔥 Latest News

- **[2026/01]** EMVista v1.0 is officially released.

## Overview

**EMVista** is a benchmark for evaluating **instance-level microstructural understanding** in electron microscopy (EM) images across **three core capability dimensions**:

1. **Microstructural Perception**
   Evaluates the ability to detect, delineate, and separate individual microstructural instances in complex EM scenes.

2. **Microstructural Attribute Understanding**
   Measures the capacity to interpret key microstructural attributes, including morphology, density, spatial distribution, layering, and scale variation.

3. **Robustness in Dense Scenes**
   Assesses model stability and accuracy under extreme instance crowding, overlap, and multi-scale complexity.

EMVista contains **expert-annotated EM images** with instance-level labels and structured attribute descriptions, designed to reflect **realistic challenges** in materials microstructure analysis.

---

## Dataset Characteristics

- **Task Format**: Visual Question Answering (VQA)
- **Modalities**: Image + Text
- **Languages**: English
- **Annotation**: Expert-verified

---

### Download EMVista Dataset

You can download the EMVista dataset using the HuggingFace `datasets` library (make sure you have installed [HuggingFace Datasets](https://huggingface.co/docs/datasets/quickstart)):

```python
from datasets import load_dataset

dataset = load_dataset("InnovatorLab/EMVista")
```

A short sanity-check sketch is included at the end of this card.

## Evaluations

We use [lmms-eval](https://github.com/EvolvingLMMs-Lab/lmms-eval) for evaluation. Please see [here](./evaluation/README.md) for the detailed evaluation files.

## License

EMVista is released under the MIT License. See [LICENSE](./LICENSE) for more details.

## Citation

```bibtex
@article{wen2026innovator,
  title={Innovator-VL: A Multimodal Large Language Model for Scientific Discovery},
  author={Wen, Zichen and Yang, Boxue and Chen, Shuang and Zhang, Yaojie and Han, Yuhang and Ke, Junlong and Wang, Cong and others},
  journal={arXiv preprint arXiv:2601.19325},
  year={2026}
}
```
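As a quick sanity check after downloading, the minimal sketch below assumes only that `InnovatorLab/EMVista` loads as a standard `DatasetDict`; it prints the available splits and the field names of one sample, reading the split and field names off the loaded dataset itself rather than assuming a particular schema.

```python
from datasets import load_dataset

# Load EMVista from the Hugging Face Hub (same call as in the download section).
dataset = load_dataset("InnovatorLab/EMVista")

# Show every split together with its features and row count.
print(dataset)

# Peek at the first sample of the first split. No split or field names are
# hardcoded here, since the authoritative schema is the loaded dataset itself.
first_split = next(iter(dataset))
sample = dataset[first_split][0]
print(sorted(sample.keys()))
```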