Tasks: Visual Question Answering
Formats: parquet
Languages: English
Size: 10K - 100K
Tags: Synthetic
Update README.md
README.md CHANGED
@@ -74,7 +74,7 @@ The dataset is provided in two configurations:

 ### Dataset Description

-The MATE benchmark is designed to isolate the cross-modal entity linking capabilities of VLMs. Each example features a scene composed of three to ten 3D geometric objects with various colors, shapes, materials, and sizes, represented in both visual and textual modalities. The scenes in MATE are based on the CLEVR dataset.
+The MATE benchmark is designed to isolate the cross-modal entity linking capabilities of VLMs (Alonso et al., 2025). Each example features a scene composed of three to ten 3D geometric objects with various colors, shapes, materials, and sizes, represented in both visual and textual modalities. The scenes in MATE are based on the synthetically generated CLEVR dataset (Johnson et al., 2017), which we extend with additional shapes and uniquely identifiable object names.

 MATE includes one question per example, and each question features a pointer attribute and a target attribute. The pointer attribute identifies the queried object, while the target attribute is the attribute that must be read from that object. In the `cross_modal` configuration, the pointer and target attributes belong to different modalities, while in the `unimodal` configuration both belong to the same modality.
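The pointer/target mechanism above can be illustrated with a small sketch. All field and object names here are hypothetical and are not taken from the dataset's actual schema; this only shows the lookup pattern a question encodes (find the object via the pointer attribute, then read its target attribute).

```python
# Toy illustration of MATE-style pointer/target questions.
# Field names ("name", "shape", ...) and object names are assumptions,
# not the dataset's real schema.

# Textual rendering of a scene; in MATE the same scene also exists as an image.
scene = [
    {"name": "dax", "shape": "cube", "color": "red", "material": "rubber", "size": "small"},
    {"name": "blick", "shape": "sphere", "color": "blue", "material": "metal", "size": "large"},
]

def answer(scene, pointer_attr, pointer_val, target_attr):
    """Locate the object whose pointer attribute matches, then return its target attribute."""
    for obj in scene:
        if obj[pointer_attr] == pointer_val:
            return obj[target_attr]
    return None

# A unimodal-style query resolves both attributes in one modality (here, text):
print(answer(scene, "name", "dax", "color"))  # -> red
```

In the `cross_modal` configuration, the pointer attribute would have to be matched in one modality (e.g., the image) and the target attribute read from the other (e.g., the text), so the model must first link the two representations of the same object.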
|