Improve dataset card: Add task categories, tags, paper link, and GitHub link

#1
opened by nielsr (HF Staff)
Files changed (1)
  1. README.md +112 -103
README.md CHANGED
@@ -1,103 +1,112 @@
- ---
- license: cc-by-nc-3.0
- ---
-
- # What is the Visual Cognition Gap between Humans and Multimodal LLMs?
+ ---
+ license: cc-by-nc-3.0
+ task_categories:
+ - image-text-to-text
+ tags:
+ - visual-question-answering
+ - multimodal
+ - matrix-reasoning
+ - visual-cognition
+ - benchmark
+ - reasoning
+ ---
+
+ # [What is the Visual Cognition Gap between Humans and Multimodal LLMs?](https://huggingface.co/papers/2406.10424)
+
+ Code: https://github.com/IrohXu/Cognition-MLLM
+
+ ## Description:
+
+ VCog-Bench is a publicly available zero-shot abstract visual reasoning (AVR) benchmark designed to evaluate Multimodal Large Language Models (MLLMs). It combines two well-known AVR datasets from the AI community with the newly proposed MaRs-VQA dataset. The findings in VCog-Bench show that current state-of-the-art MLLMs and Vision-Language Models (VLMs), such as GPT-4o, LLaVA-1.6, and InternVL, demonstrate a basic understanding of AVR tasks, but they still struggle with complex matrix reasoning. This highlights the need for further exploration and development in this area. By providing a robust benchmark, we aim to encourage further innovation and progress in zero-shot abstract visual reasoning.
+
+ ## Benchmark Dataset Structure:
+
+ ```
+ ----vcog-bench
+ |----cvr
+ | |----case_name1
+ | | |----answer
+ | | | |----image
+ | | | | |----x.png
+ | | |----choice
+ | | | |----image
+ | | | | |----sub_image_0.png
+ | | | | |----sub_image_1.png
+ | | | | |----sub_image_2.png
+ | | | | |----sub_image_3.png
+ | |----case_name2
+ | |----case_name3
+ | |----case_name4
+ | |----......
+ |----raven
+ | |----case_name1
+ | | |----answer
+ | | | |----image
+ | | | | |----x.jpeg
+ | | |----choice
+ | | | |----image
+ | | | | |----0.jpeg
+ | | | | |----1.jpeg
+ | | | | |----2.jpeg
+ | | | | |----3.jpeg
+ | | | | |----4.jpeg
+ | | | | |----5.jpeg
+ | | | | |----6.jpeg
+ | | | | |----7.jpeg
+ | | | |----text
+ | | | | |----annotation.json
+ | | |----question
+ | | | |----image
+ | | | | |----question.jpeg
+ | |----case_name2
+ | |----case_name3
+ | |----case_name4
+ | |----......
+ |----marsvqa
+ | |----case_name1
+ | | |----answer
+ | | | |----image
+ | | | | |----xxx.jpeg
+ | | |----choice
+ | | | |----image
+ | | | | |----xxx.jpeg
+ | | | | |----xxx.jpeg
+ | | | | |----xxx.jpeg
+ | | | | |----xxx.jpeg
+ | | | |----text
+ | | | | |----annotation.json
+ | | |----choiceX
+ | | | |----image
+ | | | | |----xxx.jpeg
+ | | | | |----xxx.jpeg
+ | | | | |----xxx.jpeg
+ | | | | |----xxx.jpeg
+ | | |----question
+ | | | |----image
+ | | | | |----xxx.jpeg
+ | |----case_name2
+ | |----case_name3
+ | |----case_name4
+ | |----......
+ ```
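+
+ To make the layout concrete, here is a minimal loading sketch, assuming the benchmark has been downloaded into a local `vcog-bench/` directory mirroring the tree above; the helper below is illustrative and not part of the official toolkit:
+
+ ```python
+ from pathlib import Path
+
+ def load_case(case_dir: Path) -> dict:
+     """Collect the image and annotation files for one VCog-Bench case."""
+     def images(sub: str) -> list[Path]:
+         d = case_dir / sub / "image"
+         return sorted(d.glob("*")) if d.is_dir() else []
+
+     annotation = case_dir / "choice" / "text" / "annotation.json"
+     return {
+         "question": images("question"),    # empty for CVR cases, which have no question image
+         "choices": images("choice"),
+         "choices_x": images("choiceX"),    # present only for MaRs-VQA cases
+         "answer": images("answer"),
+         "annotation": annotation if annotation.is_file() else None,
+     }
+
+ root = Path("vcog-bench")
+ for split in ["cvr", "raven", "marsvqa"]:
+     cases = [load_case(p) for p in sorted((root / split).iterdir()) if p.is_dir()]
+     print(split, len(cases), "cases")
+ ```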
+
+ ## Dataset Details
+
+ - Content Types: VQA pairs with multiple input images
+ - Volume: 560 VQA pairs (RAVEN), 480 VQA pairs (MaRs-VQA), 309 VQA pairs (CVR)
+ - Source of Data: RAVEN dataset, MaRs-IB, CVR dataset
+ - Data Collection Method: See the paper.
+
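+ The full benchmark can also be fetched programmatically with `huggingface_hub` (a minimal sketch; the repo id below is a placeholder for this dataset's actual id on the Hub):
+
+ ```python
+ from huggingface_hub import snapshot_download
+
+ # "<user>/<dataset-name>" is a placeholder for this dataset's Hub repo id.
+ local_dir = snapshot_download(repo_id="<user>/<dataset-name>", repo_type="dataset")
+ print("VCog-Bench downloaded to", local_dir)
+ ```
+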
+ ## Reference
+
+ ```
+ @misc{cao2024visualcognitiongaphumans,
+   title={What is the Visual Cognition Gap between Humans and Multimodal LLMs?},
+   author={Xu Cao and Bolin Lai and Wenqian Ye and Yunsheng Ma and Joerg Heintz and Jintai Chen and Jianguo Cao and James M. Rehg},
+   year={2024},
+   eprint={2406.10424},
+   archivePrefix={arXiv},
+   primaryClass={cs.CV},
+   url={https://arxiv.org/abs/2406.10424},
+ }
+ ```