dmarsili committed · Commit 8bf6050 · verified · 1 Parent(s): 34c053d

Update README.md

README.md CHANGED (+30 -2)
````diff
@@ -19,13 +19,41 @@ dataset_info:
     dtype: string
   splits:
   - name: test
-    num_bytes: 1018562605.0
+    num_bytes: 1018562605
     num_examples: 10000
   download_size: 3713743593
-  dataset_size: 1018562605.0
+  dataset_size: 1018562605
 configs:
 - config_name: default
   data_files:
   - split: test
     path: data/test-*
+license: mit
+task_categories:
+- visual-question-answering
+tags:
+- finegrained-vqa
+- vqa
+- visual-reasoning
+pretty_name: FGVQA
+size_categories:
+- 10K<n<100K
 ---
+
+# FGVQA
+This repository contains the FGVQA benchmark suite introduced in the paper [Same or Not? Enhancing Visual Perception in Vision-Language Models](). FGVQA contains 12,000 challenging (image, question, answer) tuples emphasizing fine-grained image understanding.
+
+The benchmark suite is composed of six sub-benchmarks:
+1) [TWIN-eval](https://glab-caltech.github.io/twin/)
+2) [ILIAS](https://vrg.fel.cvut.cz/ilias/)
+3) [Google Landmarks v2](https://github.com/cvdfoundation/google-landmark)
+4) [MET](https://cmp.felk.cvut.cz/met/)
+5) [CUB](https://www.vision.caltech.edu/datasets/cub_200_2011/)
+6) [Inquire](https://inquire-benchmark.github.io/)
+
+## Citation
+If you use the FGVQA benchmark suite in your research, please use the following BibTeX entry.
+```
+
+
+```
````
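
Since the README declares the answers as plain strings (`dtype: string`), a natural baseline metric for the benchmark's (image, question, answer) tuples is normalized exact-match accuracy. The sketch below is a minimal, hypothetical scorer — the helper names `normalize` and `exact_match_accuracy` are illustrative and not part of any official FGVQA tooling.

```python
# Minimal exact-match scorer for string VQA answers — a sketch under the
# assumption of free-form string answers; NOT FGVQA's official evaluation code.

def normalize(ans: str) -> str:
    """Lowercase, strip surrounding whitespace and trailing punctuation."""
    return ans.strip().lower().rstrip(".!?")

def exact_match_accuracy(predictions, references) -> float:
    """Fraction of predictions matching their reference after normalization."""
    if not references:
        return 0.0
    hits = sum(normalize(p) == normalize(r)
               for p, r in zip(predictions, references))
    return hits / len(references)

preds = ["A Painted Bunting.", "yes", "the Eiffel Tower"]
golds = ["a painted bunting", "Yes", "Notre-Dame"]
print(exact_match_accuracy(preds, golds))  # 2 of 3 match -> 0.666...
```

Normalization choices (case, punctuation) materially affect exact-match scores, so any comparison across models should fix one normalization scheme up front.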