---
license: apache-2.0
configs:
- config_name: default
  data_files:
  - split: cross_modal
    path: data/cross_modal-*
  - split: unimodal
    path: data/unimodal-*
dataset_info:
  features:
  - name: id
    dtype: string
  - name: image
    dtype: image
  - name: scene
    dtype: string
  - name: question
    dtype: string
  - name: answer
    dtype: string
  - name: task
    dtype: string
  - name: object_count
    dtype: int64
  - name: pointer_attribute
    dtype: string
  - name: target_attribute
    dtype: string
  splits:
  - name: cross_modal
    num_bytes: 1102554749.0
    num_examples: 5500
  - name: unimodal
    num_bytes: 556691316.0
    num_examples: 5500
  download_size: 1646533969
  dataset_size: 1659246065.0
task_categories:
- visual-question-answering
language:
- en
pretty_name: MATE
size_categories:
- 1K<n<10K
tags:
- synthetic
---

# Dataset Card for HiTZ/MATE

This dataset provides a benchmark consisting of 5,500 question-answering examples to assess the cross-modal entity linking capabilities of vision-language models (VLMs). The ability to link entities in different modalities is measured in a question-answering setting, where each scene is represented in both the visual modality (image) and the textual one (a list of objects and their attributes in JSON format).

The dataset is provided in two configurations:
- `cross_modal`: Our main configuration, where the link between the visual and textual scene representations is necessary to answer the question correctly.
- `unimodal`: Our secondary ablation benchmark, where each question is answered by using either the visual or textual scene representation, but not both.
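
Both splits live under the default configuration, so they can be loaded directly with the `datasets` library. A minimal loading sketch (the repository id `HiTZ/MATE` is taken from this card):

```python
from datasets import load_dataset

# Load each split of the default configuration.
cross_modal = load_dataset("HiTZ/MATE", split="cross_modal")
unimodal = load_dataset("HiTZ/MATE", split="unimodal")

print(cross_modal)                 # 5,500 examples per split
print(cross_modal[0]["question"])  # first cross-modal question
```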


## Dataset Details

### Dataset Description

The MATE benchmark is designed to isolate the cross-modal entity linking capabilities of VLMs (Alonso et al., 2025). Each example features a scene composed of three to ten 3D geometric objects with various colors, shapes, materials, and sizes, represented in both the visual and textual modalities. The scenes in MATE are synthetically generated and based on the CLEVR dataset (Johnson et al., 2017), but we extend them with additional shapes and uniquely identifiable object names.


MATE includes one question per example, and each question features a pointer attribute and a target attribute. The pointer attribute identifies the queried object, whereas the target attribute is the attribute that must be retrieved from that object. In the `cross_modal` configuration, the pointer and target attributes belong to different modalities; in the `unimodal` configuration, both attributes belong to the same modality.

When the pointer or target attribute belongs to the visual modality, we use the *color* or *shape* of the object. For attributes residing in the textual modality, we use *name*, *rotation*, *size*, and *3D coordinates*. Additionally, the dataset features a *material* attribute, which, although not used as a pointer or target due to its limited value range, still serves as a descriptive property. 

<img src="media/overview_mate.png" alt="Example scene and questions in MATE." width="60%"/>

The figure above shows two questions that refer to the same scene, which is represented both as an image and in JSON format:
 1) In the first question, the pointer attribute is *color* (red) and the target one is *name* (Object_0).
 2) In the second question, these attributes are swapped, as the pointer attribute is *name* (Object_2) and the target one is *color* (green).

Note that even though every serialized scene in our dataset contains all these attributes, the scene included in the prompt never contains the attribute that is pointed to or retrieved from the image. Therefore, in the `cross_modal` configuration the model can only answer by linking the object across the visual and textual representations.


Each instance of the dataset has the following information:

- *id* (`str`): 128-bit hexadecimal representation of the instance's ID. 
- *image* (`png`): visual representation of the scene.
- *scene* (`json`): textual representation of the scene.
- *question* (`str`): question to be answered about the scene.
- *answer* (`str`): answer to the question.
- *task* (`str`): modalities in which the pointer and target attribute can be found:
  - In the `cross_modal` configuration, both attributes are in different modalities:
    - *img2data*: the pointer attribute is in the image, and the target attribute in the text.
    - *data2img*: the pointer attribute is in the text, and the target attribute in the image.
  - In the `unimodal` configuration, both attributes are in the same modality:
    - *img2img*: the pointer and target attributes are in the image.
    - *data2data*: the pointer and target attributes are in the text.
- *object_count* (`int`): number of objects in the scene, between 3 and 10.
- *pointer_attribute* (`str`): attribute used to detect the queried object.
- *target_attribute* (`str`): attribute asked in the question.
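
As an illustration of the fields above, the sketch below reads one example and parses its textual scene representation; it assumes the `scene` field holds a JSON string, as described above, and relies on the `datasets` library decoding the `image` feature to a `PIL.Image`.

```python
import json

from datasets import load_dataset

example = load_dataset("HiTZ/MATE", split="cross_modal")[0]

# Visual representation: decoded to a PIL.Image by the `image` feature.
print(example["image"].size)

# Textual representation: assumed to be a JSON-serialized scene.
scene = json.loads(example["scene"])

print(example["task"], example["pointer_attribute"], "->", example["target_attribute"])
print(example["question"])
print(example["answer"])
```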

**Note**: This dataset is not intended for training models.
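
Since MATE is an evaluation benchmark, a typical use is to compare a model's prediction against the gold `answer` and report accuracy per `task`. A hedged sketch, where `predict` is a purely hypothetical stand-in for any VLM call and simple exact-match scoring is assumed (the paper may normalize answers differently):

```python
from collections import defaultdict

from datasets import load_dataset


def predict(example) -> str:
    """Hypothetical placeholder: call your VLM on example["image"],
    example["scene"], and example["question"], and return its answer."""
    return ""


dataset = load_dataset("HiTZ/MATE", split="cross_modal")
correct, total = defaultdict(int), defaultdict(int)

for example in dataset:
    prediction = predict(example)
    total[example["task"]] += 1
    # Exact-match scoring on lowercased, stripped strings.
    correct[example["task"]] += int(
        prediction.strip().lower() == example["answer"].strip().lower()
    )

for task, n in total.items():
    print(f"{task}: {correct[task] / n:.2%} accuracy")
```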


- **Curated by:** Iñigo Alonso, Gorka Azkune, Ander Salaberria, Jeremy Barnes, and Oier Lopez de Lacalle.
- **Funded by:** This work is partially supported by the Ministry of Science and Innovation of the Spanish Government (AWARE project TED2021-131617B-I00, DeepKnowledge project PID2021-127777OB-C21), project funded by MCIN/AEI/10.13039/501100011033 and by FEDER, the Basque Government (IXA excellence research group IT1570-22), the European Union under Horizon Europe (Project LUMINOUS, grant number 101135724), and the UK Engineering and Physical Sciences Research Council (grant EP/W002876/1).
- **Shared by:** HiTZ Center - Ixa, University of the Basque Country UPV/EHU.
- **License:** Apache License 2.0.

### Dataset Sources

- **Repository (Project & Code):** [https://github.com/hitz-zentroa/MATE](https://github.com/hitz-zentroa/MATE)
- **[Paper](https://arxiv.org/abs/2503.03854):** Alonso, I., Salaberria, A., Azkune, G., Barnes, J., & de Lacalle, O. L. (2025). Vision-Language Models Struggle to Align Entities across Modalities. arXiv preprint arXiv:2503.03854.  
- **[CLEVR paper](https://openaccess.thecvf.com/content_cvpr_2017/html/Johnson_CLEVR_A_Diagnostic_CVPR_2017_paper.html):** Johnson, J., Hariharan, B., Van Der Maaten, L., Fei-Fei, L., Lawrence Zitnick, C., & Girshick, R. (2017). CLEVR: A diagnostic dataset for compositional language and elementary visual reasoning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 2901-2910).


## Citation

**BibTeX:**

```bibtex
@article{alonso2025vision,
  title={Vision-Language Models Struggle to Align Entities across Modalities},
  author={Alonso, I{\~n}igo and Salaberria, Ander and Azkune, Gorka and Barnes, Jeremy and de Lacalle, Oier Lopez},
  journal={arXiv preprint arXiv:2503.03854},
  year={2025}
}
```

## More Information

For more details on the methodology, dataset creation, and experimental results, please refer to the full paper: "Vision-Language Models Struggle to Align Entities across Modalities" (`https://arxiv.org/abs/2503.03854`) and the project [repository](https://github.com/hitz-zentroa/MATE).