---
license: apache-2.0
configs:
- config_name: default
  data_files:
  - split: cross_modal
    path: data/cross_modal-*
  - split: unimodal
    path: data/unimodal-*
dataset_info:
  features:
  - name: id
    dtype: string
  - name: image
    dtype: image
  - name: scene
    dtype: string
  - name: question
    dtype: string
  - name: answer
    dtype: string
  - name: task
    dtype: string
  - name: object_count
    dtype: int64
  - name: pointer_attribute
    dtype: string
  - name: target_attribute
    dtype: string
  splits:
  - name: cross_modal
    num_bytes: 1102554749.0
    num_examples: 5500
  - name: unimodal
    num_bytes: 556691316.0
    num_examples: 5500
  download_size: 1646533969
  dataset_size: 1659246065.0
task_categories:
- visual-question-answering
language:
- en
pretty_name: MATE
size_categories:
- 1K<n<10K
---

# Dataset Card for MATE

MATE is an evaluation benchmark that tests whether vision-language models can align entities across modalities. As an example, consider two questions that refer to the same scene, which is represented both as an image and in JSON format:

1. In the first question, the pointer attribute is *color* (red) and the target attribute is *name* (`Object_0`).
2. In the second question, the attributes are swapped: the pointer attribute is *name* (`Object_2`) and the target attribute is *color* (green).

Note that even though every serialized scene in our dataset contains all of these attributes, the scene included in the prompt never contains the attribute that is pointed to in, or retrieved from, the image. A model therefore cannot answer a cross-modal question from a single modality; it must match the same object across both.

## Dataset Structure

Each instance of the dataset contains the following fields:

- *id* (`str`): 128-bit hexadecimal identifier of the instance.
- *image* (`png`): visual representation of the scene.
- *scene* (`json`): textual representation of the scene.
- *question* (`str`): question to be answered about the scene.
- *answer* (`str`): answer to the question.
- *task* (`str`): modalities in which the pointer and target attributes can be found:
  - In the `cross_modal` split, the two attributes appear in different modalities:
    - *img2data*: the pointer attribute is in the image and the target attribute is in the text.
    - *data2img*: the pointer attribute is in the text and the target attribute is in the image.
  - In the `unimodal` split, both attributes appear in the same modality:
    - *img2img*: both the pointer and the target attributes are in the image.
    - *data2data*: both the pointer and the target attributes are in the text.
- *object_count* (`int`): number of objects in the scene, between 3 and 10.
- *pointer_attribute* (`str`): attribute used to identify the queried object.
- *target_attribute* (`str`): attribute asked about in the question.

**Note**: This dataset is not intended for training models.
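To make the structure above concrete, here is a minimal sketch of loading the benchmark with the Hugging Face `datasets` library and inspecting one instance. The Hub repository id `HiTZ/MATE` is an assumption inferred from the project pages; check the dataset page for the exact path.

```python
import json

from datasets import load_dataset

# Load the cross-modal split (contains the img2data and data2img tasks).
# NOTE: "HiTZ/MATE" is an assumed Hub repository id; replace it with the
# actual path if it differs.
ds = load_dataset("HiTZ/MATE", split="cross_modal")

example = ds[0]
print(example["task"])               # "img2data" or "data2img"
print(example["object_count"])       # 3 to 10 objects in the scene
print(example["pointer_attribute"])  # attribute used to identify the object
print(example["target_attribute"])   # attribute the question asks about
print(example["question"])
print(example["answer"])

# The textual representation of the scene is a JSON-serialized string.
scene = json.loads(example["scene"])

# The image column is decoded into a PIL image by the `datasets` library.
example["image"].save("scene.png")
```

Since each split mixes its two task types, filtering on the `task` field (e.g. `ds.filter(lambda x: x["task"] == "img2data")`) recovers a single direction.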
## Dataset Details

### Dataset Description

- **Curated by:** Iñigo Alonso, Gorka Azkune, Ander Salaberria, Jeremy Barnes, and Oier Lopez de Lacalle.
- **Funded by:** This work is partially supported by the Ministry of Science and Innovation of the Spanish Government (the AWARE project, TED2021-131617B-I00, and the DeepKnowledge project, PID2021-127777OB-C21, both funded by MCIN/AEI/10.13039/501100011033 and by FEDER), the Basque Government (IXA excellence research group, IT1570-22), the European Union under Horizon Europe (project LUMINOUS, grant number 101135724), and the UK Engineering and Physical Sciences Research Council (grant EP/W002876/1).
- **Shared by:** HiTZ Center - Ixa, University of the Basque Country UPV/EHU.
- **License:** Apache License 2.0.

### Dataset Sources

- **Repository (Project & Code):** [https://github.com/hitz-zentroa/MATE](https://github.com/hitz-zentroa/MATE)
- **[Paper](https://arxiv.org/abs/2503.03854):** Alonso, I., Salaberria, A., Azkune, G., Barnes, J., & Lopez de Lacalle, O. (2025). Vision-language models struggle to align entities across modalities. arXiv preprint arXiv:2503.03854.
- **[CLEVR paper](https://openaccess.thecvf.com/content_cvpr_2017/html/Johnson_CLEVR_A_Diagnostic_CVPR_2017_paper.html):** Johnson, J., Hariharan, B., Van Der Maaten, L., Fei-Fei, L., Lawrence Zitnick, C., & Girshick, R. (2017). CLEVR: A diagnostic dataset for compositional language and elementary visual reasoning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 2901-2910).

## Citation

**BibTeX:**

```bibtex
@article{alonso2025vision,
  title={Vision-Language Models Struggle to Align Entities across Modalities},
  author={Alonso, I{\~n}igo and Salaberria, Ander and Azkune, Gorka and Barnes, Jeremy and de Lacalle, Oier Lopez},
  journal={arXiv preprint arXiv:2503.03854},
  year={2025}
}
```

## More Information

For more details on the methodology, dataset creation, and experimental results, please refer to the full paper, ["Vision-Language Models Struggle to Align Entities across Modalities"](https://arxiv.org/abs/2503.03854), and the project [repository](https://github.com/hitz-zentroa/MATE).