Update dataset card: Add task categories, language, dataset structure, usage, and contributing info

#2
by nielsr HF Staff - opened
Files changed (1)
  1. README.md +265 -191
README.md CHANGED
@@ -1,192 +1,266 @@
1
- ---
2
- pretty_name: "Do-You-See-Me Dataset"
3
- license: cdla-permissive-2.0
4
- tags:
5
- - visual-perception
6
- - vision-language
7
- - MLLMs
8
- - dataset
9
- papers:
10
- - title: "Do You See Me: A Multidimensional Benchmark for Evaluating Visual Perception in Multimodal LLMs"
11
- url: "https://arxiv.org/pdf/2506.02022"
12
- ---
13
-
14
- # DoYouSeeMe
15
- <div style="display: flex; justify-content: space-between;">
16
- <img src="img/main_fig.png" width="100%" alt="Results on Do You See Me">
17
- </div>
18
-
19
-
20
- ## Overview
21
-
22
- The DoYouSeeMe benchmark is a comprehensive evaluation framework designed to assess visual perception capabilities in Machine Learning Language Models (MLLMs). This fully automated test suite dynamically generates both visual stimuli and perception-focused questions (VPQA) with incremental difficulty levels, enabling a graded evaluation of MLLM performance across multiple perceptual dimensions. Our benchmark consists of both 2D and 3D photorealistic evaluations of MLLMs.
23
-
24
- ## Theoretical Foundation
25
-
26
- The dataset's structure is grounded in established human psychological frameworks that categorize visual perception into core abilities (Chalfant and Scheffelin, 1969). Drawing inspiration from standardized assessments like the Test of Visual Perception Skills (TVPS) (Gardner, 1988) and Motor-Free Visual Perception Test (MVPT) (Colarusso, 2003), DoYouSeeMe adapts these principles to create a systematic evaluation methodology for machine vision systems.
27
-
28
- ## Perceptual Dimensions
29
-
30
- The benchmark focuses on seven key dimensions of visual perception:
31
-
32
- 1. **Shape Discrimination (2D and 3D)**: Evaluates the ability to recognize shapes.
33
-
34
- 2. **Joint Shape-Color Discrimination (2D and 3D)**: Evaluates the ability to jointly recognize shapes and color.
35
-
36
- 3. **Visual Form Constancy (2D and 3D)**: Tests MLLM ability to identify a test shape configuration from similarly placed disctractors.
37
-
38
- 4. **Letter Disambiguation (2D and 3D)**: Tests the recognition of letters.
39
-
40
- 5. **Visual Figure-Ground (2D)**: Evaluates the ability to distinguish the main object from its background under varying conditions.
41
-
42
- 6. **Visual Closure (2D)**: Assesses the ability to complete partially obscured shapes by mentally filling in missing information.
43
-
44
- 7. **Visual Spatial (2D and 3D)**: Examines the ability to perceive positions of objects relative to oneself and to other objects.
45
-
46
-
47
- Note: While human visual perception also includes Visual Memory (the ability to remember sequences of presented images), this dimension is omitted from the benchmark as current MLLMs lack short-term visual memory capabilities beyond textual descriptions.
48
-
49
- ## Technical Implementation
50
-
51
- The entire dataset generation framework is implemented in Python and uses SVG representations to create visual stimuli with precisely controlled parameters. This approach allows for:
52
-
53
- - Dynamic generation of test images with systematic variations
54
- - Controlled difficulty progression across perception dimensions
55
- - Reproducible evaluation conditions
56
- - Fine-grained assessment of model performance
57
-
58
- ### Control Parameters
59
-
60
- <div style="display: flex; justify-content: space-between;">
61
- <img src="img/control_param_syn_dataset.png" width="100%" alt="Results on Do You See Me">
62
- </div>
63
-
64
- The code is open-sourced to facilitate further research and advancement in the field of visual perception for artificial intelligence systems.
65
-
66
- Paper: [DoYouSeeMe Benchmark on arXiv](https://arxiv.org/pdf/2506.02022)
67
-
68
- Code: [DoYouSeeMe Github Repo](https://github.com/microsoft/Do-You-See-Me)
69
-
70
-
71
-
72
-
73
- ## Samples
74
-
75
- ### Visual Spatial
76
-
77
- Tests the ability to perceive and understand spatial relationships between objects. Evaluates orientation discrimination and positional awareness.
78
-
79
- <div style="display: grid; grid-template-columns: repeat(3, 1fr); gap: 15px; align-items: start;">
80
- <img src="2D_DoYouSeeMe/visual_spatial/1.png" style="width: 100%; height: auto;" alt="Visual Spatial Example 1">
81
- <img src="2D_DoYouSeeMe/visual_spatial/50.png" style="width: 100%; height: auto;" alt="Visual Spatial Example 2">
82
- <img src="2D_DoYouSeeMe/visual_spatial/100.png" style="width: 100%; height: auto;" alt="Visual Spatial Example 3">
83
- </div>
84
-
85
- *Sample Question: Starting from the black circle at position (row 1, column 3), how many triangles are there bottom of it in the same row?*
86
-
87
-
88
- ### Visual Figure-Ground
89
-
90
- Examines the ability to distinguish an object from its background. Challenges perception by varying contrast, noise, and complexity.
91
-
92
- <div style="display: flex; justify-content: space-between;">
93
- <img src="2D_DoYouSeeMe/visual_figure_ground/1.png" width="30%" alt="Figure-Ground Example 1">
94
- <img src="2D_DoYouSeeMe/visual_figure_ground/50.png" width="30%" alt="Figure-Ground Example 2">
95
- <img src="2D_DoYouSeeMe/visual_figure_ground/89.png" width="30%" alt="Figure-Ground Example 3">
96
- </div>
97
-
98
- *Sample Question: The figure consists of a Target image, which is embedded in some background noise. Out of the four given options, your task is to pick the option which has the same figure as the target image. Respond as follows: Option <your answer (choose between 1, 2, 3, or 4)>.*
99
-
100
- ### Visual Form Constancy
101
-
102
- Assesses recognition of shapes despite changes in size, orientation, or context. Tests invariance in visual perception.
103
-
104
- <div style="display: flex; justify-content: space-between;">
105
- <img src="2D_DoYouSeeMe/visual_form_constancy/1.png" width="30%" alt="Form Constancy Example 1">
106
- <img src="2D_DoYouSeeMe/visual_form_constancy/50.png" width="30%" alt="Form Constancy Example 2">
107
- <img src="2D_DoYouSeeMe/visual_form_constancy/100.png" width="30%" alt="Form Constancy Example 3">
108
- </div>
109
-
110
- *Sample Question: The figure consists of a Target image. Out of the four given options, your task is to pick the option which has the same figure as the target image. Respond as follows: Option <your answer (choose between 1, 2, 3, or 4)>.*
111
-
112
-
113
- ### Shape Disambiguation
114
-
115
- Challenges the ability to identify ambiguous shapes that can be interpreted in multiple ways. Explores perceptual flexibility.
116
-
117
- <div style="display: flex; justify-content: space-between;">
118
- <img src="2D_DoYouSeeMe/geometric_dataset/1.png" width="30%" alt="Shape Disambiguation Example 1">
119
- <img src="2D_DoYouSeeMe/geometric_dataset/50.png" width="30%" alt="Shape Disambiguation Example 2">
120
- <img src="2D_DoYouSeeMe/geometric_dataset/100.png" width="30%" alt="Shape Disambiguation Example 3">
121
- </div>
122
-
123
- *Sample Question: Count the total number of triangles in the image, including each concentric triangle separately. For example, if there is one triangle with 2 inner concentric rings, that counts as 3 triangles. Respond with only a number.*
124
-
125
-
126
- ### Shape Color Discrimination
127
-
128
- Tests the ability to differentiate shapes based on color properties while controlling for other visual features.
129
-
130
- <div style="display: flex; justify-content: space-between;">
131
- <img src="2D_DoYouSeeMe/color_and_shape_disambiguation/1.png" width="30%" alt="Shape Color Example 1">
132
- <img src="2D_DoYouSeeMe/color_and_shape_disambiguation/50.png" width="30%" alt="Shape Color Example 2">
133
- <img src="2D_DoYouSeeMe/color_and_shape_disambiguation/89.png" width="30%" alt="Shape Color Example 3">
134
- </div>
135
-
136
- *Sample Question: Count the number of star's that are red.*
137
-
138
-
139
-
140
- ### Letter Disambiguation
141
-
142
- Examines recognition of letters under various transformations and distortions. Evaluates robustness of character recognition.
143
-
144
- <div style="display: grid; grid-template-columns: repeat(3, 1fr); gap: 15px; align-items: start;">
145
- <img src="2D_DoYouSeeMe/letter_disambiguation/1.png" style="width: 100%; height: auto;" alt="Letter Disambiguation Example 1">
146
- <img src="2D_DoYouSeeMe/letter_disambiguation/50.png" style="width: 100%; height: auto;" alt="Letter Disambiguation Example 2">
147
- <img src="2D_DoYouSeeMe/letter_disambiguation/100.png" style="width: 100%; height: auto;" alt="Letter Disambiguation Example 3">
148
- </div>
149
-
150
- *Sample Question: The image shows one or more letters formed by a grid of small squares. What letter(s) can you identify in this image? Please respond with only the letter(s) you see.*
151
-
152
-
153
-
154
- ### Visual Closure
155
-
156
- Tests the ability to recognize incomplete figures by mentally filling in missing information. Evaluates gestalt processing.
157
-
158
- <div style="display: flex; justify-content: space-between;">
159
- <img src="2D_DoYouSeeMe/visual_closure/1.png" width="30%" alt="Visual Closure Example 1">
160
- <img src="2D_DoYouSeeMe/visual_closure/50.png" width="30%" alt="Visual Closure Example 2">
161
- <img src="2D_DoYouSeeMe/visual_closure/100.png" width="30%" alt="Visual Closure Example 3">
162
- </div>
163
-
164
- *Sample Question: The figure consists of a target image which is complete, Out of the four given options (which are partially complete), your task is to pick the option which when completed matches the target image. Respond as follows: Option <your answer (choose between 1, 2, 3, or 4)>.*
165
-
166
- ## Citation
167
-
168
- If you use this benchmark or dataset in your research, please cite our work as follows:
169
- ```
170
- @misc{kanade2025multidimensionalbenchmarkevaluating,
171
- title={Do You See Me : A Multidimensional Benchmark for Evaluating Visual Perception in Multimodal LLMs},
172
- author={Aditya Kanade and Tanuja Ganu},
173
- year={2025},
174
- eprint={2506.02022},
175
- archivePrefix={arXiv},
176
- primaryClass={cs.CV},
177
- url={https://arxiv.org/abs/2506.02022},
178
- }
179
- ```
180
-
181
- ## Trademarks
182
-
183
- This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft
184
- trademarks or logos is subject to and must follow
185
- [Microsoft's Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks/usage/general).
186
- Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship.
187
- Any use of third-party trademarks or logos are subject to those third-party's policies.
188
-
189
- ## License 📜
190
-
191
- The **code** in this repository is licensed under the [MIT License](https://opensource.org/licenses/MIT).
192
  The **dataset** is licensed under the [Community Data License Agreement - Permissive - Version 2.0 (CDLA-Permissive-2.0)](https://cdla.dev/permissive-2-0/).
 
1
+ ---
2
+ license: cdla-permissive-2.0
3
+ pretty_name: Do-You-See-Me Dataset
4
+ tags:
5
+ - visual-perception
6
+ - vision-language
7
+ - MLLMs
8
+ - dataset
9
+ language:
10
+ - en
11
+ task_categories:
12
+ - image-text-to-text
13
+ papers:
14
+ - title: 'Do You See Me: A Multidimensional Benchmark for Evaluating Visual Perception
15
+ in Multimodal LLMs'
16
+ url: https://arxiv.org/pdf/2506.02022
17
+ ---
18
+
19
+ # DoYouSeeMe
20
+ <div style="display: flex; justify-content: space-between;">
21
+ <img src="img/main_fig.png" width="100%" alt="Results on Do You See Me">
22
+ </div>
23
+
24
+
25
+ ## Overview
26
+
27
+ The DoYouSeeMe benchmark is a comprehensive evaluation framework designed to assess visual perception capabilities in Multimodal Large Language Models (MLLMs). This fully automated test suite dynamically generates both visual stimuli and perception-focused questions (VPQA) with incremental difficulty levels, enabling a graded evaluation of MLLM performance across multiple perceptual dimensions. The benchmark includes both 2D and 3D photorealistic evaluation settings.
28
+
29
+ ## Theoretical Foundation
30
+
31
+ The dataset's structure is grounded in established human psychological frameworks that categorize visual perception into core abilities (Chalfant and Scheffelin, 1969). Drawing inspiration from standardized assessments like the Test of Visual Perception Skills (TVPS) (Gardner, 1988) and Motor-Free Visual Perception Test (MVPT) (Colarusso, 2003), DoYouSeeMe adapts these principles to create a systematic evaluation methodology for machine vision systems.
32
+
33
+ ## Perceptual Dimensions
34
+
35
+ The benchmark focuses on seven key dimensions of visual perception:
36
+
37
+ 1. **Shape Discrimination (2D and 3D)**: Evaluates the ability to recognize shapes.
38
+
39
+ 2. **Joint Shape-Color Discrimination (2D and 3D)**: Evaluates the ability to jointly recognize shapes and color.
40
+
41
+ 3. **Visual Form Constancy (2D and 3D)**: Tests an MLLM's ability to identify a target shape configuration among similarly placed distractors.
42
+
43
+ 4. **Letter Disambiguation (2D and 3D)**: Tests the recognition of letters.
44
+
45
+ 5. **Visual Figure-Ground (2D)**: Evaluates the ability to distinguish the main object from its background under varying conditions.
46
+
47
+ 6. **Visual Closure (2D)**: Assesses the ability to complete partially obscured shapes by mentally filling in missing information.
48
+
49
+ 7. **Visual Spatial (2D and 3D)**: Examines the ability to perceive positions of objects relative to oneself and to other objects.
50
+
51
+
52
+ Note: While human visual perception also includes Visual Memory (the ability to remember sequences of presented images), this dimension is omitted from the benchmark as current MLLMs lack short-term visual memory capabilities beyond textual descriptions.
53
+
54
+ ## Technical Implementation
55
+
56
+ The entire dataset generation framework is implemented in Python and uses SVG representations to create visual stimuli with precisely controlled parameters. This approach allows for:
57
+
58
+ - Dynamic generation of test images with systematic variations
59
+ - Controlled difficulty progression across perception dimensions
60
+ - Reproducible evaluation conditions
61
+ - Fine-grained assessment of model performance
62
+
63
+ ### Control Parameters
64
+
65
+ <div style="display: flex; justify-content: space-between;">
66
+ <img src="img/control_param_syn_dataset.png" width="100%" alt="Control parameters of the synthetic dataset">
67
+ </div>
68
+
69
+ The code and dataset are open-sourced to facilitate further research and advancement in the field of visual perception for artificial intelligence systems.
70
+
71
+ Paper: [DoYouSeeMe Benchmark on arXiv](https://arxiv.org/pdf/2506.02022)
72
+
73
+ Code: [DoYouSeeMe Github Repo](https://github.com/microsoft/Do-You-See-Me)
74
+
75
+ ## Dataset Overview and Structure
76
+
77
+ This repository contains a synthetic dataset exploring seven distinct dimensions of visual perception and processing. Each dimension examines a specific aspect of how we interpret visual information. The benchmark dataset is released as a zip file named *dataset.zip* in the main folder.
78
+
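The archive can be unpacked with Python's standard `zipfile` module. A minimal sketch (exercised here on a stand-in archive so the snippet is self-contained; the real archive is the *dataset.zip* described above):

```python
import zipfile
from pathlib import Path

def extract_dataset(archive: str, dest: str) -> list[str]:
    """Extract the benchmark archive and return the extracted member names."""
    with zipfile.ZipFile(archive) as zf:
        zf.extractall(dest)
        return zf.namelist()

# Stand-in archive mimicking the layout described in this card.
demo = Path("demo_dataset.zip")
with zipfile.ZipFile(demo, "w") as zf:
    zf.writestr("2D_DoYouSeeMe/visual_spatial/1.png", b"placeholder")

members = extract_dataset(str(demo), "unpacked")
print(members)  # ['2D_DoYouSeeMe/visual_spatial/1.png']
```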
79
+ ### Dataset Structure
80
+
81
+ The repository is organized into two directories:
82
+ - 2D_DoYouSeeMe
83
+ - 3D_DoYouSeeMe
84
+
85
+ Each directory contains a separate sub-dataset per dimension:
86
+
87
+ **2D**
88
+ - 2D_DoYouSeeMe/visual_spatial
89
+ - 2D_DoYouSeeMe/visual_figure_ground
90
+ - 2D_DoYouSeeMe/visual_form_constancy
91
+ - 2D_DoYouSeeMe/shape_disambiguation
92
+ - 2D_DoYouSeeMe/shape_color_discrimination
93
+ - 2D_DoYouSeeMe/letter_disambiguation
94
+ - 2D_DoYouSeeMe/visual_closure
95
+
96
+ **3D**
97
+ - 3D_DoYouSeeMe/visual_spatial
98
+ - 3D_DoYouSeeMe/visual_form_constancy
99
+ - 3D_DoYouSeeMe/shape_disambiguation
100
+ - 3D_DoYouSeeMe/shape_color_discrimination
101
+ - 3D_DoYouSeeMe/letter_disambiguation
102
+
103
+ ### Data Format
104
+
105
+ Each dimension directory contains:
106
+ - Images (`<xx>.png`): images with controlled variations
107
+ - `dataset_info.csv`: metadata file containing control parameters and ground-truth answers for each image
108
+
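The metadata can be consumed with the standard `csv` module. The sketch below assumes the column names stated later in this card (`filename`, `question`, `answer`, `sweep`) and runs on an inline stand-in so it is self-contained:

```python
import csv
import io

# Stand-in for one dimension's dataset_info.csv (schema as described in this card).
SAMPLE = """filename,question,answer,sweep
1.png,How many triangles are in the image?,3,grid_size=4
2.png,Count the stars that are red.,2,num_colors=3
"""

def load_annotations(fp) -> list[dict]:
    """Parse a dataset_info.csv file object into a list of question records."""
    return list(csv.DictReader(fp))

records = load_annotations(io.StringIO(SAMPLE))
print(len(records))          # 2
print(records[0]["answer"])  # 3
```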
109
+ ## Dataset Generation and Usage
110
+
111
+ To generate data, run the Python file corresponding to the visual-perception dimension you are interested in. The general command structure is:
112
+
113
+ ```bash
114
+ python scripts/<dimensionality>/<dimension-name>.py
115
+ ```
116
+ * Replace `<dimensionality>` with either `2D` or `3D`.
117
+ * Replace `<dimension-name>` with the actual name of the visual-perception dimension (e.g., `visual_spatial`, `shape_disambiguation`).
118
+
119
+ **Example:** To generate data for the 2D `visual_spatial` dimension, you would execute:
120
+
121
+ ```bash
122
+ python scripts/2D/visual_spatial.py
123
+ ```
124
+ Each Python file defines, near its end, the sweeps over the control parameters listed in **Table 1**; widen these sweeps to generate more data. For `visual_spatial`, `shape_disambiguation`, and `shape_color_discrimination`, a *dataset_dump.csv* is first written to the corresponding directory, capturing the details of every generated image. A *dataset_creator.py* script (included in each of these three directories) then turns that dump into the actual dataset (*dataset_info.csv*), formulating multiple perception questions per image (edit *dataset_creator.py* to change the number of questions per image). Every visual-perception dimension ends up with a *dataset_info.csv* containing `filename`, `question`, `answer`, and `sweep` columns.
125
+
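The dump-then-formulate step can be pictured as follows. This is a simplified sketch, not the actual *dataset_creator.py*: the dump-record fields and the question template are hypothetical stand-ins for the real control parameters:

```python
# Hypothetical dump record: per-image shape counts as a generator might log them.
dump_row = {"filename": "1.png", "triangle": 3, "square": 2, "circle": 1}

def formulate_questions(row: dict, questions_per_image: int = 2) -> list[dict]:
    """Turn one dump record into several (question, answer) pairs for dataset_info.csv."""
    shapes = [k for k in row if k != "filename"]
    out = []
    for shape in shapes[:questions_per_image]:
        out.append({
            "filename": row["filename"],
            "question": f"Count the total number of {shape}s in the image.",
            "answer": row[shape],
        })
    return out

qa = formulate_questions(dump_row)
for item in qa:
    print(item["question"], "->", item["answer"])
```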
126
+ ## Results
127
+
128
+ <div style="display: flex; justify-content: space-between;">
129
+ <img src="img/results_syn_dataset.png" width="100%" alt="Results on Do You See Me">
130
+ </div>
131
+
132
+
133
+ ## Samples
134
+
135
+ ### Visual Spatial
136
+
137
+ Tests the ability to perceive and understand spatial relationships between objects. Evaluates orientation discrimination and positional awareness.
138
+
139
+ <div style="display: grid; grid-template-columns: repeat(3, 1fr); gap: 15px; align-items: start;">
140
+ <img src="2D_DoYouSeeMe/visual_spatial/1.png" style="width: 100%; height: auto;" alt="Visual Spatial Example 1">
141
+ <img src="2D_DoYouSeeMe/visual_spatial/50.png" style="width: 100%; height: auto;" alt="Visual Spatial Example 2">
142
+ <img src="2D_DoYouSeeMe/visual_spatial/100.png" style="width: 100%; height: auto;" alt="Visual Spatial Example 3">
143
+ </div>
144
+
145
+ *Sample Question: Starting from the black circle at position (row 1, column 3), how many triangles are there bottom of it in the same row?*
146
+
147
+
148
+ ### Visual Figure-Ground
149
+
150
+ Examines the ability to distinguish an object from its background. Challenges perception by varying contrast, noise, and complexity.
151
+
152
+ <div style="display: flex; justify-content: space-between;">
153
+ <img src="2D_DoYouSeeMe/visual_figure_ground/1.png" width="30%" alt="Figure-Ground Example 1">
154
+ <img src="2D_DoYouSeeMe/visual_figure_ground/50.png" width="30%" alt="Figure-Ground Example 2">
155
+ <img src="2D_DoYouSeeMe/visual_figure_ground/89.png" width="30%" alt="Figure-Ground Example 3">
156
+ </div>
157
+
158
+ *Sample Question: The figure consists of a Target image, which is embedded in some background noise. Out of the four given options, your task is to pick the option which has the same figure as the target image. Respond as follows: Option <your answer (choose between 1, 2, 3, or 4)>.*
159
+
160
+ ### Visual Form Constancy
161
+
162
+ Assesses recognition of shapes despite changes in size, orientation, or context. Tests invariance in visual perception.
163
+
164
+ <div style="display: flex; justify-content: space-between;">
165
+ <img src="2D_DoYouSeeMe/visual_form_constancy/1.png" width="30%" alt="Form Constancy Example 1">
166
+ <img src="2D_DoYouSeeMe/visual_form_constancy/50.png" width="30%" alt="Form Constancy Example 2">
167
+ <img src="2D_DoYouSeeMe/visual_form_constancy/100.png" width="30%" alt="Form Constancy Example 3">
168
+ </div>
169
+
170
+ *Sample Question: The figure consists of a Target image. Out of the four given options, your task is to pick the option which has the same figure as the target image. Respond as follows: Option <your answer (choose between 1, 2, 3, or 4)>.*
171
+
172
+
173
+ ### Shape Disambiguation
174
+
175
+ Challenges the ability to identify ambiguous shapes that can be interpreted in multiple ways. Explores perceptual flexibility.
176
+
177
+ <div style="display: flex; justify-content: space-between;">
178
+ <img src="2D_DoYouSeeMe/geometric_dataset/1.png" width="30%" alt="Shape Disambiguation Example 1">
179
+ <img src="2D_DoYouSeeMe/geometric_dataset/50.png" width="30%" alt="Shape Disambiguation Example 2">
180
+ <img src="2D_DoYouSeeMe/geometric_dataset/100.png" width="30%" alt="Shape Disambiguation Example 3">
181
+ </div>
182
+
183
+ *Sample Question: Count the total number of triangles in the image, including each concentric triangle separately. For example, if there is one triangle with 2 inner concentric rings, that counts as 3 triangles. Respond with only a number.*
184
+
185
+
186
+ ### Shape Color Discrimination
187
+
188
+ Tests the ability to differentiate shapes based on color properties while controlling for other visual features.
189
+
190
+ <div style="display: flex; justify-content: space-between;">
191
+ <img src="2D_DoYouSeeMe/color_and_shape_disambiguation/1.png" width="30%" alt="Shape Color Example 1">
192
+ <img src="2D_DoYouSeeMe/color_and_shape_disambiguation/50.png" width="30%" alt="Shape Color Example 2">
193
+ <img src="2D_DoYouSeeMe/color_and_shape_disambiguation/89.png" width="30%" alt="Shape Color Example 3">
194
+ </div>
195
+
196
+ *Sample Question: Count the number of stars that are red.*
197
+
198
+
199
+
200
+ ### Letter Disambiguation
201
+
202
+ Examines recognition of letters under various transformations and distortions. Evaluates robustness of character recognition.
203
+
204
+ <div style="display: grid; grid-template-columns: repeat(3, 1fr); gap: 15px; align-items: start;">
205
+ <img src="2D_DoYouSeeMe/letter_disambiguation/1.png" style="width: 100%; height: auto;" alt="Letter Disambiguation Example 1">
206
+ <img src="2D_DoYouSeeMe/letter_disambiguation/50.png" style="width: 100%; height: auto;" alt="Letter Disambiguation Example 2">
207
+ <img src="2D_DoYouSeeMe/letter_disambiguation/100.png" style="width: 100%; height: auto;" alt="Letter Disambiguation Example 3">
208
+ </div>
209
+
210
+ *Sample Question: The image shows one or more letters formed by a grid of small squares. What letter(s) can you identify in this image? Please respond with only the letter(s) you see.*
211
+
212
+
213
+
214
+ ### Visual Closure
215
+
216
+ Tests the ability to recognize incomplete figures by mentally filling in missing information. Evaluates gestalt processing.
217
+
218
+ <div style="display: flex; justify-content: space-between;">
219
+ <img src="2D_DoYouSeeMe/visual_closure/1.png" width="30%" alt="Visual Closure Example 1">
220
+ <img src="2D_DoYouSeeMe/visual_closure/50.png" width="30%" alt="Visual Closure Example 2">
221
+ <img src="2D_DoYouSeeMe/visual_closure/100.png" width="30%" alt="Visual Closure Example 3">
222
+ </div>
223
+
224
+ *Sample Question: The figure consists of a target image which is complete. Out of the four given options (which are partially complete), your task is to pick the option which when completed matches the target image. Respond as follows: Option <your answer (choose between 1, 2, 3, or 4)>.*
225
+
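Since the multiple-choice questions ask models to answer in the fixed form `Option <1-4>`, scoring can be a small regex match. A minimal sketch (tolerating surrounding text and the optional angle brackets is our assumption, not part of the benchmark):

```python
import re
from typing import Optional

def parse_option(response: str) -> Optional[int]:
    """Extract the chosen option (1-4) from a model response like 'Option 3'."""
    m = re.search(r"Option\s*<?\s*([1-4])\s*>?", response, flags=re.IGNORECASE)
    return int(m.group(1)) if m else None

print(parse_option("Option 3"))                             # 3
print(parse_option("I believe the answer is Option <2>."))  # 2
print(parse_option("no idea"))                              # None
```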
226
+ ## Citation
227
+
228
+ If you use this benchmark or dataset in your research, please cite our work as follows:
229
+ ```
230
+ @misc{kanade2025multidimensionalbenchmarkevaluating,
231
+ title={Do You See Me : A Multidimensional Benchmark for Evaluating Visual Perception in Multimodal LLMs},
232
+ author={Aditya Kanade and Tanuja Ganu},
233
+ year={2025},
234
+ eprint={2506.02022},
235
+ archivePrefix={arXiv},
236
+ primaryClass={cs.CV},
237
+ url={https://arxiv.org/abs/2506.02022},
238
+ }
239
+ ```
240
+
241
+ ## Contributing
242
+
243
+ This project welcomes contributions and suggestions. Most contributions require you to agree to a
244
+ Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us
245
+ the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.
246
+
247
+ When you submit a pull request, a CLA bot will automatically determine whether you need to provide
248
+ a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions
249
+ provided by the bot. You will only need to do this once across all repos using our CLA.
250
+
251
+ This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).
252
+ For more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or
253
+ contact [opencode@microsoft.com](mailto:opencode@microsoft.com) with any additional questions or comments.
254
+
255
+ ## Trademarks
256
+
257
+ This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft
258
+ trademarks or logos is subject to and must follow
259
+ [Microsoft's Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks/usage/general).
260
+ Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship.
261
+ Any use of third-party trademarks or logos is subject to those third parties' policies.
262
+
263
+ ## License 📜
264
+
265
+ The **code** in this repository is licensed under the [MIT License](https://opensource.org/licenses/MIT).
266
  The **dataset** is licensed under the [Community Data License Agreement - Permissive - Version 2.0 (CDLA-Permissive-2.0)](https://cdla.dev/permissive-2-0/).