Update README.md
README.md
CHANGED
@@ -96,6 +96,27 @@ configs:
path: visual_prompting_pairs/visual_prompting_val.parquet
---

+Motivation: A key question for understanding multimodal performance is whether a model has a basic vs. a detailed understanding of images. These capabilities are needed for models to be used in real-world tasks, such as an assistant in the physical world. While there are many datasets for object detection and recognition, few test spatial reasoning and other more targeted tasks such as visual prompting. The datasets that do exist are static and publicly available, so there is concern that current AI models could have been trained on them, which makes evaluation with them unreliable. We therefore created a dataset that is procedurally generated and synthetic, and tests spatial reasoning, visual prompting, as well as object recognition and detection [91]. The datasets are challenging for most AI models, and because the benchmark is procedurally generated it can be regenerated ad infinitum to create new test sets, combating the effects of models being trained on this data and results being due to memorization.
+Benchmark Description: This dataset has 4 sub-tasks: Object Recognition, Visual Prompting, Spatial Reasoning, and Object Detection. For each sub-task, each image consists of objects pasted onto a random background image. The objects are from the COCO [62] object list and are gathered from internet data. Each object is masked using the DeepLabV3 segmentation model [22] and then pasted onto a random background from the Places365 dataset [132]. The objects are pasted in one of four locations (top, left, bottom, or right) with small amounts of random rotation, positional jitter, and scale.
+There are 2 conditions, “single” and “pairs”, for images with one or two objects, respectively. Each test set uses 20 sets of object classes (either 20 single objects or 20 pairs of objects), with four potential locations and four background classes, and we sample 4 instances of object and background. This results in 20 × 4 × 4 × 4 = 1280 images per condition and sub-task.
+
What are the experimental design setup dimensions
(e.g. settings, prompt templates, dataset subsets) for this benchmark?
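For concreteness, here is a minimal sketch of the compositing step the added text describes: a pre-masked object is pasted onto a background at one of four coarse locations with small random rotation, positional jitter, and scale. The function name, anchor geometry, and numeric ranges are illustrative assumptions, not the authors' pipeline; the mask is assumed to come from a DeepLabV3-style segmentation model.

```python
# Illustrative sketch only: paste a pre-masked object onto a background at one
# of four coarse locations, with small random rotation, jitter, and scale.
# Names and numeric ranges are assumptions, not the authors' code.
import random
from PIL import Image

def paste_object(obj: Image.Image, mask: Image.Image,
                 background: Image.Image, location: str) -> Image.Image:
    scale = random.uniform(0.9, 1.1)      # small random scale (assumed range)
    angle = random.uniform(-10.0, 10.0)   # small random rotation (assumed range)
    size = (int(obj.width * scale), int(obj.height * scale))
    obj, mask = obj.resize(size), mask.resize(size)
    obj, mask = obj.rotate(angle, expand=True), mask.rotate(angle, expand=True)

    # Map the four coarse locations to center points on the background.
    anchors = {
        "top":    (background.width // 2,     background.height // 4),
        "bottom": (background.width // 2, 3 * background.height // 4),
        "left":   (background.width // 4,     background.height // 2),
        "right":  (3 * background.width // 4, background.height // 2),
    }
    cx, cy = anchors[location]
    jx, jy = random.randint(-10, 10), random.randint(-10, 10)  # positional jitter
    out = background.copy()
    out.paste(obj, (cx - obj.width // 2 + jx, cy - obj.height // 2 + jy), mask)
    return out
```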
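The 1280-image count follows directly from the crossed design: 20 object sets × 4 locations × 4 background classes × 4 sampled instances. A small sketch of that grid, with placeholder labels rather than the actual class lists:

```python
# Enumerate the full design grid; 20 * 4 * 4 * 4 = 1280 cells per condition
# and sub-task. Labels are placeholders, not the actual class lists.
from itertools import product

object_sets = [f"object_set_{i}" for i in range(20)]  # 20 singles or 20 pairs
locations = ["top", "left", "bottom", "right"]
background_classes = [f"background_{i}" for i in range(4)]
instances = range(4)                                  # 4 sampled instances per cell

grid = list(product(object_sets, locations, background_classes, instances))
assert len(grid) == 1280
```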
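Since the YAML front matter registers parquet files such as visual_prompting_pairs/visual_prompting_val.parquet, the data should be loadable with the datasets library. The repository id, config name, and split name below are placeholders to verify against the dataset card, not confirmed values:

```python
# Hypothetical usage sketch; substitute the actual Hub repository id and check
# the config/split names declared in the YAML front matter.
from datasets import load_dataset

ds = load_dataset("<repo_id>", "visual_prompting_pairs", split="validation")
print(ds[0])
```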