# CVQA for VLMEvalKit

- [Original dataset](https://huggingface.co/datasets/afaji/cvqa), ported to [VLMEvalKit](https://github.com/open-compass/VLMEvalKit)

> From the original authors: CVQA is a culturally diverse multilingual VQA benchmark consisting of over 10,000 questions from 39 country-language pairs. The questions in CVQA are written in both the native languages and English, and are categorized into 10 diverse categories.

An example record:

```
{'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=2048x1536 at 0x7C3E0EBEEE00>,
 'ID': '5919991144272485961_0',
 'Subset': "('Japanese', 'Japan')",
 'Question': '写真に写っているキャラクターの名前は? ',
 'Translated Question': 'What is the name of the object in the picture? ',
 'Options': ['コスモ星丸', 'ミャクミャク', ' フリービー ', 'ハイバオ'],
 'Translated Options': ['Cosmo Hoshimaru', 'MYAKU-MYAKU', 'Freebie ', 'Haibao'],
 'Label': -1,
 'Category': 'Objects / materials / clothing',
 'Image Type': 'Self',
 'Image Source': 'Self-open',
 'License': 'CC BY-SA'
}
```
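
For reference, a record like the one above can be pulled with the Hugging Face `datasets` library (a minimal sketch; it assumes the dataset's single public split is named `test`):

```python
from datasets import load_dataset

# Load CVQA and inspect one record.
dataset = load_dataset("afaji/cvqa", split="test")
print(dataset[0])
```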

- To support VLMEvalKit, two TSV files were created, one for each version of CVQA:
  1. The localised **(LOC)** version: questions and answer options are in each subset's native language, for evaluating multilingual models.
  2. The English **(ENG)** version: questions and answer options are in translated English, although the questions still concern non-English-speaking cultures.

- TSV columns for both the **LOC** and **ENG** files, following the [VLMEvalKit TSV format](https://github.com/timothycdc/VLMEvalKit/blob/main/docs/en/Development.md); a conversion sketch follows the list:
  - index (int, based on dataset order; the original CVQA IDs are strings, so they are not reused)
  - image (base64-encoded)
  - question
  - A (first option)
  - B (second option)
  - C (third option)
  - D (fourth option)
  - l2-category (the CVQA `Subset`)
  - split (always `test`)
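
A minimal sketch of how such a TSV could be built from the Hugging Face dataset (assuming the `datasets` and `pandas` libraries; the output filename and the `to_base64` helper are illustrative, not part of the port):

```python
import base64
import io

import pandas as pd
from datasets import load_dataset

def to_base64(pil_image):
    """Encode a PIL image as a base64 JPEG string for the TSV image column."""
    buffer = io.BytesIO()
    pil_image.convert("RGB").save(buffer, format="JPEG")
    return base64.b64encode(buffer.getvalue()).decode("utf-8")

rows = []
for idx, record in enumerate(load_dataset("afaji/cvqa", split="test")):
    options = record["Translated Options"]  # use record["Options"] for the LOC version
    rows.append({
        "index": idx,                                # dataset-order index; CVQA IDs are strings
        "image": to_base64(record["image"]),
        "question": record["Translated Question"],   # record["Question"] for the LOC version
        "A": options[0],
        "B": options[1],
        "C": options[2],
        "D": options[3],
        "l2-category": record["Subset"],
        "split": "test",
    })

pd.DataFrame(rows).to_csv("CVQA_ENG.tsv", sep="\t", index=False)
```
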
## Info
- Proposed method of evaluation:
  - Prompt the model to answer only with the correct option letter (one of `[A,B,C,D]`)
  - Use regex or string search to extract the chosen letter (see the sketch below)
  - Alternatively, use an LLM as a judge to identify the chosen letter, although this is overkill for a four-way multiple-choice format
- The original CVQA dataset numbers the options `[0,1,2,3]`; this has been changed to `[A,B,C,D]` to follow the VLMEvalKit standard, which shouldn't have much effect on performance.
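
A minimal sketch of the letter-extraction step (the function name is illustrative; it assumes the model was prompted to reply with a single letter):

```python
import re

def extract_choice(response: str):
    """Return the first standalone A/B/C/D letter in a response, or None.

    Handles replies such as 'B', 'B.', or 'Answer: B'.
    """
    match = re.search(r"\b([ABCD])\b", response.strip())
    return match.group(1) if match else None
```

If no standalone letter is found, one could fall back to matching the option texts themselves, or to the LLM-as-a-judge approach described above.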