Update README.md
path: visual_prompting_pairs/visual_prompting_val.parquet
---
A key question for understanding multimodal performance is a model's ability to demonstrate both basic and detailed understanding of images. These capabilities are needed for models to be used in real-world tasks, such as serving as an assistant in the physical world. While there are many datasets for object detection and recognition, few test spatial reasoning and other more targeted tasks such as visual prompting. The datasets that do exist are static and publicly available, so there is concern that current AI models may have been trained on them, which makes evaluation with them unreliable. We therefore created a dataset that is procedurally generated and synthetic, and that tests spatial reasoning, visual prompting, object recognition, and object detection. The datasets are challenging for most AI models, and because they are procedurally generated, the benchmark can be regenerated ad infinitum to create new test sets, combating the effects of models being trained on this data and results reflecting memorization.

This dataset has 4 sub-tasks: Object Recognition, Visual Prompting, Spatial Reasoning, and Object Detection.

For each sub-task, the images consist of objects pasted onto random background images. The objects are drawn from the COCO object list and gathered from internet data. Each object is masked using the DeepLabV3 segmentation model and then pasted onto a random background from the Places365 dataset. The objects are pasted in one of four locations (top, left, bottom, or right), with small amounts of random rotation, positional jitter, and scale.
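The pasting step described above can be sketched roughly as follows, using Pillow. This is a minimal illustration, not the dataset's actual generation code: the anchor coordinates, jitter range, and scale/rotation bounds are placeholder assumptions.

```python
import random
from PIL import Image

def paste_object(background: Image.Image, obj: Image.Image,
                 mask: Image.Image, location: str) -> Image.Image:
    """Paste a masked object onto a background at one of four anchor
    locations, with small random rotation, positional jitter, and scale.
    The numeric ranges below are illustrative guesses, not the exact
    parameters used to build the dataset."""
    bg = background.copy()
    scale = random.uniform(0.8, 1.2)
    angle = random.uniform(-15, 15)
    new_size = (max(1, int(obj.width * scale)), max(1, int(obj.height * scale)))
    obj = obj.resize(new_size)
    mask = mask.resize(new_size)
    obj = obj.rotate(angle, expand=True)
    mask = mask.rotate(angle, expand=True)  # rotated corners fill with 0 (transparent)
    # Anchor points for the four paste locations, plus positional jitter.
    anchors = {
        "top":    (bg.width // 2, bg.height // 4),
        "bottom": (bg.width // 2, 3 * bg.height // 4),
        "left":   (bg.width // 4, bg.height // 2),
        "right":  (3 * bg.width // 4, bg.height // 2),
    }
    cx, cy = anchors[location]
    cx += random.randint(-10, 10)
    cy += random.randint(-10, 10)
    bg.paste(obj, (cx - obj.width // 2, cy - obj.height // 2), mask)
    return bg
```

In the real pipeline the object crop and mask would come from DeepLabV3 segmentation of a COCO-class image and the background from Places365; here any RGB image and "L"-mode mask will do.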
There are 2 conditions, "single" and "pairs", for images with one and two objects. Each test set uses 20 sets of object classes (either 20 single objects or 20 pairs of objects), with four potential locations and four background classes, and we sample 4 instances of object and background. This results in 1280 images per condition and sub-task.
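The arithmetic behind the 1280 figure can be checked directly. The factorization below assumes the 4 sampled object/background instances count once per (class set, location, background) cell, which is consistent with the stated total:

```python
# Per-condition test-set size: 20 class sets x 4 locations x 4 background
# classes x 4 sampled object/background instances (assumed factorization).
n_class_sets = 20    # single objects or pairs
n_locations = 4      # top, left, bottom, right
n_backgrounds = 4    # background classes
n_instances = 4      # sampled object/background instances
images_per_condition = n_class_sets * n_locations * n_backgrounds * n_instances
print(images_per_condition)  # 1280
```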
__Object Detection__
Answer type: Open-ended