Tasks: Image-Text-to-Text
Modalities: Image
Formats: imagefolder
Languages: English
Size: 1K - 10K
ArXiv:
License:
Update README.md
README.md CHANGED

@@ -14,7 +14,7 @@ pretty_name: PopVQA
 PopVQA is a dataset designed to study the performance gap in vision-language models (VLMs) when answering factual questions about entities presented in **images** versus **text**.
 
 Paper: https://huggingface.co/papers/2412.14133
-Code: https://github.com/
+Code: https://github.com/ido-co/vlm-modality-gap