Update annotations for Iraa/paper_85.txt
annotations/Iraa/paper_85.txt.json
CHANGED
@@ -6,5 +6,13 @@
     "label": "Format",
     "user": "Iraa",
     "text": "(Jia et al., 2019;"
+  },
+  {
+    "file": "paper_85.txt",
+    "start": 636,
+    "end": 1291,
+    "label": "Lacks synthesis",
+    "user": "Iraa",
+    "text": "Language-based adversarial examples can be collected to study the robustness of vision-language models as well. Shekhar et al. (2017) introduces FOIL-COCO dataset to evaluate the visionlanguage model's decision when associating images with both correct and \"foil\" captions. Hendricks and Nematzadeh (2021) show that vision-language Transformers are worse at verb understanding than nouns. New versions of the VQA dataset (Antol et al., 2015) are proposed to study robustness of VQA models (Shah et al., 2019;Li et al., 2021). Our work is different in that we use pre-trained LMs to introduce perturbations and evaluate robustness of video-language models."
   }
 ]
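For readers who want to work with these annotation records, the minimal sketch below shows one way to load the file and recover each annotated span. It assumes the "start" and "end" fields are character offsets into the referenced paper text and that the paper lives at a path like papers/paper_85.txt; both the path and the loading code are illustrative guesses, not code from this repository.

import json

# Illustrative only: the field names ("file", "start", "end", "label",
# "user", "text") come from the diff above; the paths and this loading
# logic are assumptions, not part of the commit.
with open("annotations/Iraa/paper_85.txt.json", encoding="utf-8") as f:
    annotations = json.load(f)

with open("papers/paper_85.txt", encoding="utf-8") as f:  # assumed location of the paper text
    paper = f.read()

for ann in annotations:
    # Treat "start"/"end" as character offsets into the paper text (assumption).
    span = paper[ann["start"]:ann["end"]]
    print(f'{ann["label"]} ({ann["user"]}): {span[:60]}...')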