# CVQA for VLMEvalKit
- [Original dataset](https://huggingface.co/datasets/afaji/cvqa), ported to [VLMEvalKit](https://github.com/open-compass/VLMEvalKit)
- From the original authors:
> CVQA is a culturally diverse multilingual VQA benchmark consisting of over 10,000 questions from 39 country-language pairs. The questions in CVQA are written in both the native languages and English, and are categorized into 10 diverse categories.
- An example record from the original dataset:
```python
{'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=2048x1536 at 0x7C3E0EBEEE00>,
'ID': '5919991144272485961_0',
'Subset': "('Japanese', 'Japan')",
'Question': '写真に写っているキャラクターの名前は? ',
'Translated Question': 'What is the name of the object in the picture? ',
'Options': ['コスモ星丸', 'ミャクミャク', ' フリービー ', 'ハイバオ'],
'Translated Options': ['Cosmo Hoshimaru','MYAKU-MYAKU','Freebie ','Haibao'],
'Label': -1,
'Category': 'Objects / materials / clothing',
'Image Type': 'Self',
'Image Source': 'Self-open',
'License': 'CC BY-SA'
}
```
- To support VLMEvalKit, two TSV files were created, one for each version of CVQA:
  1. The localised **(LOC)** version: questions and answer options in each subset's native language, for evaluating multilingual LLMs.
  2. The English **(ENG)** version: questions and options translated into English, although the question topics still concern non-English cultures, for evaluating LLMs trained primarily on English.
- TSV columns for the **LOC** and **ENG** files, following the [VLMEvalKit development guide](https://github.com/timothycdc/VLMEvalKit/blob/main/docs/en/Development.md):
  - index (int, based on dataset order; the original CVQA IDs are not used since they are of type str)
- image (base64)
- question
- A option
- B option
- C option
- D option
- l2-category (`Subset`)
  - split (always set to `test`)
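The conversion from a CVQA record to a TSV row can be sketched as below. This is a hypothetical helper, not the script actually used to build the files: the field names follow the sample record shown earlier, the image is passed in as raw bytes and base64-encoded (VLMEvalKit stores images as base64 strings), and `use_english` switches between the LOC and ENG fields.

```python
import base64

def record_to_tsv_row(index, record, image_bytes, use_english=False):
    """Build one TSV row (as a dict) from a CVQA record.

    Hypothetical sketch: field names follow the sample record above.
    `use_english=True` selects the translated (ENG) fields; otherwise
    the native-language (LOC) fields are used.
    """
    question_key = 'Translated Question' if use_english else 'Question'
    options_key = 'Translated Options' if use_english else 'Options'
    options = [opt.strip() for opt in record[options_key]]
    return {
        'index': index,  # integer index, not the original str ID
        'image': base64.b64encode(image_bytes).decode('ascii'),
        'question': record[question_key].strip(),
        'A': options[0],
        'B': options[1],
        'C': options[2],
        'D': options[3],
        'l2-category': record['Subset'],
        'split': 'test',
    }
```

Writing the rows out with `csv.DictWriter(..., delimiter='\t')` then produces the TSV file.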
## Info
- Proposed method of evaluation:
- Prompt the model to answer only with the correct option letter (one of `[A,B,C,D]`)
- Use regex or string search to locate the correct letter
  - Alternatively, use an LLM-as-a-judge to identify the chosen answer letter, though this is usually overkill
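The regex/string-search step above can be sketched as follows. This is an illustrative extractor, not VLMEvalKit's built-in one: it first looks for a standalone option letter, then falls back to the first `A`–`D` character anywhere in the response.

```python
import re

def extract_choice(response):
    """Extract the answer letter (A-D) from a model response.

    Hypothetical sketch: prefer a standalone letter (e.g. "B" or
    "The answer is B."), fall back to the first A-D character seen,
    and return None if no option letter is found.
    """
    match = re.search(r'\b([A-D])\b', response)
    if match:
        return match.group(1)
    match = re.search(r'[A-D]', response)
    return match.group(0) if match else None
```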
- The original CVQA dataset numbers the options as `[0,1,2,3]`; this has been changed to `[A,B,C,D]` to follow the VLMEvalKit standard, which should have little effect on performance.
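The renumbering is a direct index-to-letter mapping, sketched below. Note that the public CVQA test split ships with `Label: -1` (the answer is hidden, as in the sample record above), which this hypothetical helper maps to `None`.

```python
def label_to_letter(label):
    """Map CVQA's numeric option index (0-3) to a VLMEvalKit letter.

    Hypothetical sketch: CVQA's public split uses Label = -1 for
    hidden answers, which is returned as None.
    """
    if not 0 <= label <= 3:
        return None
    return 'ABCD'[label]
```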