---
license: apache-2.0
configs:
- config_name: default
  data_files:
  - split: cross_modal
    path: data/cross_modal-*
  - split: unimodal
    path: data/unimodal-*
dataset_info:
  features:
  - name: id
    dtype: string
  - name: image
    dtype: image
  - name: scene
    dtype: string
  - name: question
    dtype: string
  - name: answer
    dtype: string
  - name: task
    dtype: string
  - name: object_count
    dtype: int64
  - name: pointer_attribute
    dtype: string
  - name: target_attribute
    dtype: string
  splits:
  - name: cross_modal
    num_bytes: 1102554749
    num_examples: 5500
  - name: unimodal
    num_bytes: 556691316
    num_examples: 5500
  download_size: 1646533969
  dataset_size: 1659246065
task_categories:
- visual-question-answering
language:
- en
pretty_name: MATE
size_categories:
- 1K<n<10K
tags:
- synthetic
---
# Dataset Card for HiTZ/MATE
This dataset provides a benchmark consisting of 5,500 question-answering examples to assess the cross-modal entity linking capabilities of vision-language models (VLMs). The ability to link entities in different modalities is measured in a question-answering setting, where each scene is represented in both the visual modality (image) and the textual one (a list of objects and their attributes in JSON format).
The dataset is provided in two configurations:

- `cross_modal`: our main configuration, where linking the visual and textual scene representations is necessary to answer the question correctly.
- `unimodal`: our secondary ablation benchmark, where each question can be answered using either the visual or the textual scene representation, but not both.
## Dataset Details

### Dataset Description
The MATE benchmark is designed to isolate the cross-modal entity linking capabilities of VLMs (Alonso et al., 2025). Each example features a scene composed of three to ten 3D geometric objects with various colors, shapes, materials, and sizes, represented in both the visual and textual modalities. The scenes in MATE are based on the synthetically generated scenes of the CLEVR dataset (Johnson et al., 2017), which we extend with additional shapes and uniquely identifiable object names.
MATE includes one question per example, and each question features a pointer attribute and a target attribute. The pointer attribute identifies the queried object, whereas the target attribute is the property that must be retrieved from that object. In the `cross_modal` configuration, the pointer and target attributes belong to different modalities; in the `unimodal` configuration, both belong to the same modality.
When the pointer or target attribute belongs to the visual modality, we use the color or shape of the object. For attributes residing in the textual modality, we use name, rotation, size, and 3D coordinates. Additionally, the dataset features a material attribute, which, although not used as a pointer or target due to its limited value range, still serves as a descriptive property.
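The pointer/target mechanics in the textual modality can be sketched with a small resolution function. The scene below is hypothetical and only illustrative of the attribute names described above, not the dataset's exact serialization:

```python
import json

# Hypothetical serialized scene in the spirit of MATE's textual modality
# (field names and values here are illustrative, not the dataset's exact schema).
scene_json = """
[
  {"name": "Object_0", "size": 0.7, "rotation": 45.0,
   "3d_coords": [1.2, -0.5, 0.35], "material": "rubber"},
  {"name": "Object_2", "size": 0.3, "rotation": 120.0,
   "3d_coords": [-2.0, 1.1, 0.15], "material": "metal"}
]
"""

def resolve(scene, pointer_attr, pointer_value, target_attr):
    """Find the object whose pointer attribute matches, then read its target attribute."""
    for obj in json.loads(scene):
        if obj.get(pointer_attr) == pointer_value:
            return obj.get(target_attr)
    return None

# A data2data-style question: both pointer and target live in the textual scene.
print(resolve(scene_json, "name", "Object_2", "material"))  # -> metal
```

In the cross-modal tasks, one side of this lookup (the pointer match or the target read) must instead be performed on the image, which is exactly the linking step the benchmark measures.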
The two example questions below refer to the same scene, which is represented both as an image and in JSON format:
- In the first question, the pointer attribute is color (red) and the target one is name (Object_0).
- In the second question, these attributes are swapped, as the pointer attribute is name (Object_2) and the target one is color (green).
Note that even though every serialized scene in our dataset contains all of these attributes, the scene included in the prompt never contains the attribute that must be pointed to or retrieved from the image. Therefore, answering a cross-modal question always requires linking the entity across the two representations.
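A minimal sketch of this masking step, assuming (hypothetically) that image-side attributes are stored under keys such as `color` in the full serialization:

```python
import json

def strip_attribute(scene_json, attribute):
    """Remove one attribute from every object in a serialized scene,
    so a model must read that attribute from the image instead."""
    objects = json.loads(scene_json)
    return json.dumps([
        {k: v for k, v in obj.items() if k != attribute}
        for obj in objects
    ])

# Hypothetical two-object scene; only the keys matter for this sketch.
scene = json.dumps([
    {"name": "Object_0", "color": "red", "size": 0.7},
    {"name": "Object_2", "color": "green", "size": 0.3},
])

masked = strip_attribute(scene, "color")
print("color" in masked)  # -> False
```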
Each instance of the dataset has the following information:
- `id` (str): 128-bit hexadecimal representation of the instance's ID.
- `image` (png): visual representation of the scene.
- `scene` (json): textual representation of the scene.
- `question` (str): question to be answered about the scene.
- `answer` (str): answer to the question.
- `task` (str): modalities in which the pointer and target attributes can be found:
  - In the `cross_modal` configuration, the two attributes are in different modalities:
    - `img2data`: the pointer attribute is in the image, and the target attribute in the text.
    - `data2img`: the pointer attribute is in the text, and the target attribute in the image.
  - In the `unimodal` configuration, both attributes are in the same modality:
    - `img2img`: the pointer and target attributes are in the image.
    - `data2data`: the pointer and target attributes are in the text.
- `object_count` (int): number of objects in the scene, between 3 and 10.
- `pointer_attribute` (str): attribute used to identify the queried object.
- `target_attribute` (str): attribute asked about in the question.
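As a usage sketch, the `task` field lets you score each pointer/target direction separately. The records below are fabricated stand-ins for real examples, keeping only the fields relevant here:

```python
# Hypothetical records shaped like the fields documented above
# (ids and attribute values are made up for illustration).
examples = [
    {"id": "0f3a", "task": "img2data", "pointer_attribute": "color", "target_attribute": "name"},
    {"id": "9c1d", "task": "data2img", "pointer_attribute": "name", "target_attribute": "color"},
    {"id": "77be", "task": "data2data", "pointer_attribute": "name", "target_attribute": "size"},
]

# Group examples by task so each direction can be evaluated on its own.
by_task = {}
for ex in examples:
    by_task.setdefault(ex["task"], []).append(ex)

print(sorted(by_task))  # -> ['data2data', 'data2img', 'img2data']
```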
Note: This dataset is not intended for training models.
- Curated by: Iñigo Alonso, Gorka Azkune, Ander Salaberria, Jeremy Barnes, and Oier Lopez de Lacalle.
- Funded by: This work is partially supported by the Ministry of Science and Innovation of the Spanish Government (AWARE project TED2021-131617B-I00, DeepKnowledge project PID2021-127777OB-C21), project funded by MCIN/AEI/10.13039/501100011033 and by FEDER, the Basque Government (IXA excellence research group IT1570-22), the European Union under Horizon Europe (Project LUMINOUS, grant number 101135724), and the UK Engineering and Physical Sciences Research Council (grant EP/W002876/1).
- Shared by: HiTZ Center - Ixa, University of the Basque Country UPV/EHU.
- License: Apache License 2.0.
### Dataset Sources
- Repository (Project & Code): https://github.com/hitz-zentroa/MATE
- Paper: Alonso, I., Salaberria, A., Azkune, G., Barnes, J., & de Lacalle, O. L. (2025). Vision-Language Models Struggle to Align Entities across Modalities. arXiv preprint arXiv:2503.03854.
- CLEVR paper: Johnson, J., Hariharan, B., Van Der Maaten, L., Fei-Fei, L., Lawrence Zitnick, C., & Girshick, R. (2017). Clevr: A diagnostic dataset for compositional language and elementary visual reasoning. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 2901-2910).
## Citation

BibTeX:

```bibtex
@article{alonso2025vision,
  title={Vision-Language Models Struggle to Align Entities across Modalities},
  author={Alonso, I{\~n}igo and Salaberria, Ander and Azkune, Gorka and Barnes, Jeremy and de Lacalle, Oier Lopez},
  journal={arXiv preprint arXiv:2503.03854},
  year={2025}
}
```
## More Information
For more details on the methodology, dataset creation, and experimental results, please refer to the full paper: "Vision-Language Models Struggle to Align Entities across Modalities" (https://arxiv.org/abs/2503.03854) and the project repository.