---
license: cdla-permissive-2.0
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
dataset_info:
  features:
  - name: question
    dtype: string
  - name: answer
    dtype: string
  - name: sweep
    dtype: string
  - name: dataset_id
    dtype: string
  - name: image
    dtype: image
  splits:
  - name: train
    num_bytes: 457757897.31
    num_examples: 2613
  download_size: 435283865
  dataset_size: 457757897.31
---
# DoYouSeeMe
<div style="display: flex; justify-content: space-between;">
<img src="img/main_fig.png" width="100%" alt="Results on Do You See Me">
</div>


## Overview

The DoYouSeeMe benchmark is a comprehensive evaluation framework designed to assess visual perception capabilities in Multimodal Large Language Models (MLLMs). This fully automated test suite dynamically generates both visual stimuli and perception-focused questions (VPQA) with incremental difficulty levels, enabling a graded evaluation of MLLM performance across multiple perceptual dimensions. Our benchmark consists of both 2D and 3D photorealistic evaluations of MLLMs.

## Theoretical Foundation

The dataset's structure is grounded in established human psychological frameworks that categorize visual perception into core abilities (Chalfant and Scheffelin, 1969). Drawing inspiration from standardized assessments like the Test of Visual Perception Skills (TVPS) (Gardner, 1988) and the Motor-Free Visual Perception Test (MVPT) (Colarusso, 2003), DoYouSeeMe adapts these principles to create a systematic evaluation methodology for machine vision systems.

## Perceptual Dimensions

The benchmark focuses on seven key dimensions of visual perception:

1. **Shape Discrimination (2D and 3D)**: Evaluates the ability to recognize shapes.

2. **Joint Shape-Color Discrimination (2D and 3D)**: Evaluates the ability to jointly recognize shapes and colors.

3. **Visual Form Constancy (2D and 3D)**: Tests an MLLM's ability to identify a target shape configuration among similarly placed distractors.

4. **Letter Disambiguation (2D and 3D)**: Tests the recognition of letters.

5. **Visual Figure-Ground (2D)**: Evaluates the ability to distinguish the main object from its background under varying conditions.

6. **Visual Closure (2D)**: Assesses the ability to complete partially obscured shapes by mentally filling in missing information.

7. **Visual Spatial (2D and 3D)**: Examines the ability to perceive positions of objects relative to oneself and to other objects.


Note: While human visual perception also includes Visual Memory (the ability to remember sequences of presented images), this dimension is omitted from the benchmark because current MLLMs lack short-term visual memory capabilities beyond textual descriptions.

## Technical Implementation

The entire dataset generation framework is implemented in Python and uses SVG representations to create visual stimuli with precisely controlled parameters. This approach allows for:

- Dynamic generation of test images with systematic variations
- Controlled difficulty progression across perception dimensions
- Reproducible evaluation conditions
- Fine-grained assessment of model performance
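
As a flavor of what parameterized SVG generation looks like, here is a minimal sketch. It is purely illustrative: the repository's actual generators live under `scripts/`, and the function name and parameters below are hypothetical, not taken from the codebase.

```python
# Hypothetical sketch of parameterized SVG stimulus generation.
# The real generators live under scripts/<2D|3D>/ in the repository;
# make_grid_svg and its parameters are illustrative only.

def make_grid_svg(rows: int, cols: int, cell: int = 40, radius: int = 12) -> str:
    """Render a rows x cols grid of black circles as an SVG string."""
    width, height = cols * cell, rows * cell
    parts = [f'<svg xmlns="http://www.w3.org/2000/svg" width="{width}" height="{height}">']
    for r in range(rows):
        for c in range(cols):
            # Center each circle in its grid cell.
            cx = c * cell + cell // 2
            cy = r * cell + cell // 2
            parts.append(f'<circle cx="{cx}" cy="{cy}" r="{radius}" fill="black"/>')
    parts.append("</svg>")
    return "\n".join(parts)

svg = make_grid_svg(3, 4)
```

Because every coordinate is derived from a small set of parameters (`rows`, `cols`, `cell`, `radius`), sweeping those parameters yields systematic, reproducible variations of the stimulus.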

### Control Parameters

<div style="display: flex; justify-content: space-between;">
<img src="img/control_param_syn_dataset.png" width="100%" alt="Control parameters of the synthetic dataset">
</div>

The code and dataset are open-sourced to facilitate further research and advancement in the field of visual perception for artificial intelligence systems.


This repository contains a synthetic dataset exploring seven distinct dimensions of visual perception and processing. Each dimension examines a specific aspect of how we interpret visual information.

## Dataset Structure

The repository is organized into two directories:
- 2D_DoYouSeeMe
- 3D_DoYouSeeMe

Each directory contains a separate dataset for each dimension:

**2D**
- 2D_DoYouSeeMe/visual_spatial
- 2D_DoYouSeeMe/visual_figure_ground
- 2D_DoYouSeeMe/visual_form_constancy
- 2D_DoYouSeeMe/shape_disambiguation
- 2D_DoYouSeeMe/shape_color_discrimination
- 2D_DoYouSeeMe/letter_disambiguation
- 2D_DoYouSeeMe/visual_closure

**3D**
- 3D_DoYouSeeMe/visual_spatial
- 3D_DoYouSeeMe/visual_form_constancy
- 3D_DoYouSeeMe/shape_disambiguation
- 3D_DoYouSeeMe/shape_color_discrimination
- 3D_DoYouSeeMe/letter_disambiguation

To generate data, run the Python file corresponding to the visual-perception dimension you are interested in. The general command structure is:

```bash
python scripts/<dimensionality>/<dimension-name>.py
```
* Replace `<dimensionality>` with either `2D` or `3D`.
* Replace `<dimension-name>` with the actual name of the visual-perception dimension (e.g., `visual_spatial`, `shape_disambiguation`).

**Example:** To generate data for the 2D `visual_spatial` dimension, you would execute:

```bash
python scripts/2D/visual_spatial.py
```
Each Python file defines, toward the end, sweeps for each control parameter listed in **Table 1**; these sweeps can be modified to generate more data. For 1) visual_spatial, 2) shape_disambiguation, and 3) shape_color_discrimination, a *dataset_dump.csv* is created in the corresponding directory. This dump file captures all the details of each generated image; a *dataset_creator.py* script (included in all three directories) then generates the actual dataset (dataset_info.csv), formulating multiple perception questions per image (refer to dataset_creator.py to change the number of questions per image). Each visual-perception dimension has a dataset_info.csv containing filename, question, answer, and sweep columns.
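
A minimal sketch of consuming a per-dimension metadata file is shown below. The column names (`filename`, `question`, `answer`, `sweep`) follow the description above; the sample rows and the grouping-by-sweep step are illustrative assumptions, not actual dataset contents.

```python
# Sketch: reading a dataset_info.csv and grouping questions by sweep.
# Columns per the README: filename, question, answer, sweep.
# The sample data below is invented for illustration.
import csv
import io

sample = """filename,question,answer,sweep
1.png,How many triangles are there?,3,num_shapes=3
2.png,How many triangles are there?,5,num_shapes=5
"""

rows = list(csv.DictReader(io.StringIO(sample)))

# Group question/answer pairs by the sweep setting that produced the image.
by_sweep: dict[str, list[dict]] = {}
for row in rows:
    by_sweep.setdefault(row["sweep"], []).append(row)
```

In practice you would replace `io.StringIO(sample)` with `open(".../dataset_info.csv")` for the dimension directory of interest.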

We created a dataset of around 2.6k images and used it to benchmark multiple open- and closed-source MLLMs; their performance is presented in the **Results** section. This benchmark dataset is released as a zip file named *dataset.zip* in the main folder.
97
+
98
+ ## Data Format
99
+
100
+ Each dimension directory contains:
101
+ - Images(`<xx>.png`): Images with controlled variations
102
+ - dataset_info.csv: Metadata file containing control parameters and ground truth answers for each image
103
+
104
+ ## Results
105
+
106
+ <div style="display: flex; justify-content: space-between;">
107
+ <img src="img/results_syn_dataset.png" width="100%" alt="Results on Do You See Me">
108
+ </div>


## Samples

### Visual Spatial

Tests the ability to perceive and understand spatial relationships between objects. Evaluates orientation discrimination and positional awareness.

<div style="display: flex; justify-content: space-between;">
<img src="2D_DoYouSeeMe/visual_spatial/1.png" width="30%" alt="Visual Spatial Example 1">
<img src="2D_DoYouSeeMe/visual_spatial/50.png" width="30%" alt="Visual Spatial Example 2">
<img src="2D_DoYouSeeMe/visual_spatial/100.png" width="30%" alt="Visual Spatial Example 3">
</div>

*Sample Question: Starting from the black circle at position (row 1, column 3), how many triangles are there bottom of it in the same row?*
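
Because the stimuli are generated symbolically, the ground truth for questions like the one above can be computed directly from the underlying grid. The sketch below is a hypothetical illustration of that idea (the grid contents and the helper function are invented, not the repository's code).

```python
# Hypothetical sketch: computing a visual-spatial ground truth from a
# symbolic grid representation. The grid below is invented for illustration.

grid = [
    ["square",   "triangle", "circle"],    # row 1 (reference circle at column 3)
    ["triangle", "square",   "triangle"],  # row 2
    ["circle",   "triangle", "triangle"],  # row 3
]

def count_below(grid: list[list[str]], row: int, col: int, shape: str) -> int:
    """Count cells containing `shape` strictly below (row, col); 1-indexed."""
    return sum(1 for r in range(row, len(grid)) if grid[r][col - 1] == shape)

# Triangles below the cell at row 1, column 3.
answer = count_below(grid, 1, 3, "triangle")
```

Since the answer is derived from the same structure that renders the image, the question/answer pairs stay consistent by construction.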


### Visual Figure-Ground

Examines the ability to distinguish an object from its background. Challenges perception by varying contrast, noise, and complexity.

<div style="display: flex; justify-content: space-between;">
<img src="2D_DoYouSeeMe/visual_figure_ground/1.png" width="30%" alt="Figure-Ground Example 1">
<img src="2D_DoYouSeeMe/visual_figure_ground/50.png" width="30%" alt="Figure-Ground Example 2">
<img src="2D_DoYouSeeMe/visual_figure_ground/89.png" width="30%" alt="Figure-Ground Example 3">
</div>

*Sample Question: The figure consists of a Target image, which is embedded in some background noise. Out of the four given options, your task is to pick the option which has the same figure as the target image. Respond as follows: Option <your answer (choose between 1, 2, 3, or 4)>.*
137
+
138
+ ### Visual Form Constancy
139
+
140
+ Assesses recognition of shapes despite changes in size, orientation, or context. Tests invariance in visual perception.
141
+
142
+ <div style="display: flex; justify-content: space-between;">
143
+ <img src="2D_DoYouSeeMe/visual_form_constancy/1.png" width="30%" alt="Form Constancy Example 1">
144
+ <img src="2D_DoYouSeeMe/visual_form_constancy/50.png" width="30%" alt="Form Constancy Example 2">
145
+ <img src="2D_DoYouSeeMe/visual_form_constancy/100.png" width="30%" alt="Form Constancy Example 3">
146
+ </div>
147
+
148
+ *Sample Question: The figure consists of a Target image. Out of the four given options, your task is to pick the option which has the same figure as the target image. Respond as follows: Option <your answer (choose between 1, 2, 3, or 4)>.*
149
+
150
+
151
+ ### Shape Disambiguation
152
+
153
+ Challenges the ability to identify ambiguous shapes that can be interpreted in multiple ways. Explores perceptual flexibility.
154
+
155
+ <div style="display: flex; justify-content: space-between;">
156
+ <img src="2D_DoYouSeeMe/geometric_dataset/1.png" width="30%" alt="Shape Disambiguation Example 1">
157
+ <img src="2D_DoYouSeeMe/geometric_dataset/50.png" width="30%" alt="Shape Disambiguation Example 2">
158
+ <img src="2D_DoYouSeeMe/geometric_dataset/100.png" width="30%" alt="Shape Disambiguation Example 3">
159
+ </div>
160
+
161
+ *Sample Question: Count the total number of triangles in the image, including each concentric triangle separately. For example, if there is one triangle with 2 inner concentric rings, that counts as 3 triangles. Respond with only a number.*
162
+
163
+
164
+ ### Shape Color Discrimination
165
+
166
+ Tests the ability to differentiate shapes based on color properties while controlling for other visual features.
167
+
168
+ <div style="display: flex; justify-content: space-between;">
169
+ <img src="2D_DoYouSeeMe/color_and_shape_disambiguation/1.png" width="30%" alt="Shape Color Example 1">
170
+ <img src="2D_DoYouSeeMe/color_and_shape_disambiguation/50.png" width="30%" alt="Shape Color Example 2">
171
+ <img src="2D_DoYouSeeMe/color_and_shape_disambiguation/89.png" width="30%" alt="Shape Color Example 3">
172
+ </div>
173
+
174
+ *Sample Question: Count the number of star's that are red.*


### Letter Disambiguation

Examines recognition of letters under various transformations and distortions. Evaluates robustness of character recognition.

<div style="display: flex; justify-content: space-between;">
<img src="2D_DoYouSeeMe/letter_disambiguation/1.png" width="30%" alt="Letter Disambiguation Example 1">
<img src="2D_DoYouSeeMe/letter_disambiguation/50.png" width="30%" alt="Letter Disambiguation Example 2">
<img src="2D_DoYouSeeMe/letter_disambiguation/100.png" width="30%" alt="Letter Disambiguation Example 3">
</div>

*Sample Question: The image shows one or more letters formed by a grid of small squares. What letter(s) can you identify in this image? Please respond with only the letter(s) you see.*
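
A letter "formed by a grid of small squares" can be represented symbolically as a tiny bitmap, which makes the stimulus easy to generate and the ground truth unambiguous. The toy sketch below illustrates the idea; the bitmap layout and helper are hypothetical, not the repository's code.

```python
# Toy sketch: a letter represented as a grid of filled squares, in the spirit
# of the letter-disambiguation stimuli. The layout here is hypothetical.

L_BITMAP = [
    "#..",
    "#..",
    "###",
]

def filled_cells(bitmap: list[str]) -> list[tuple[int, int]]:
    """Return (row, col) coordinates of filled squares in the bitmap."""
    return [(r, c)
            for r, row in enumerate(bitmap)
            for c, ch in enumerate(row)
            if ch == "#"]

cells = filled_cells(L_BITMAP)
```

Each filled cell would then be rendered as a small square in the SVG, and transformations (rotation, distortion, noise) can be applied on top of the same symbolic representation.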



### Visual Closure

Tests the ability to recognize incomplete figures by mentally filling in missing information. Evaluates gestalt processing.

<div style="display: flex; justify-content: space-between;">
<img src="2D_DoYouSeeMe/visual_closure/1.png" width="30%" alt="Visual Closure Example 1">
<img src="2D_DoYouSeeMe/visual_closure/50.png" width="30%" alt="Visual Closure Example 2">
<img src="2D_DoYouSeeMe/visual_closure/100.png" width="30%" alt="Visual Closure Example 3">
</div>

*Sample Question: The figure consists of a target image which is complete. Out of the four given options (which are partially complete), your task is to pick the option which, when completed, matches the target image. Respond as follows: Option <your answer (choose between 1, 2, 3, or 4)>.*

## Citation

If you use this benchmark or dataset in your research, please cite our work as follows:
```
@misc{kanade2025multidimensionalbenchmarkevaluating,
      title={Do You See Me : A Multidimensional Benchmark for Evaluating Visual Perception in Multimodal LLMs},
      author={Aditya Kanade and Tanuja Ganu},
      year={2025},
      eprint={2506.02022},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2506.02022},
}
```

## Contributing

This project welcomes contributions and suggestions. Most contributions require you to agree to a
Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us
the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.

When you submit a pull request, a CLA bot will automatically determine whether you need to provide
a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions
provided by the bot. You will only need to do this once across all repos using our CLA.

This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).
For more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or
contact [opencode@microsoft.com](mailto:opencode@microsoft.com) with any additional questions or comments.

## Trademarks

This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft
trademarks or logos is subject to and must follow
[Microsoft's Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks/usage/general).
Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship.
Any use of third-party trademarks or logos is subject to those third parties' policies.

## License 📜

The **code** in this repository is licensed under the [MIT License](https://opensource.org/licenses/MIT).
The **dataset** is licensed under the [Community Data License Agreement - Permissive - Version 2.0 (CDLA-Permissive-2.0)](https://cdla.dev/permissive-2-0/).