---
dataset_info:
  features:
  - name: question_id
    dtype: string
  - name: question
    dtype: string
  - name: answer
    dtype: string
  - name: query_image_0
    dtype: image
  - name: query_image_1
    dtype: image
  - name: query_image_2
    dtype: image
  - name: query_image_3
    dtype: image
  - name: query_image_4
    dtype: image
  - name: query_image_5
    dtype: image
  - name: query_image_6
    dtype: image
  - name: query_image_7
    dtype: image
  - name: answer_image_a
    dtype: image
  - name: answer_image_b
    dtype: image
  - name: answer_image_c
    dtype: image
  - name: answer_image_d
    dtype: image
  - name: answer_image_e
    dtype: image
  - name: answer_image_f
    dtype: image
  splits:
  - name: test
    num_bytes: 12321743.0
    num_examples: 50
  download_size: 10640175
  dataset_size: 12321743.0
configs:
- config_name: default
  data_files:
  - split: test
    path: data/test-*
---
# Dataset Card for "IQ50"

<p align="center" width="100%">
<img src="https://i.postimg.cc/g0QRgMVv/WX20240228-113337-2x.png"  width="100%" height="80%">
</p>

# Large-scale Multi-modality Models Evaluation Suite

> Accelerating the development of large-scale multi-modality models (LMMs) with `lmms-eval`

🏠 [Homepage](https://lmms-lab.github.io/) | 📚 [Documentation](docs/README.md) | 🤗 [Hugging Face Datasets](https://huggingface.co/lmms-lab)

# This Dataset

This is a formatted version of [IQ50](https://github.com/microsoft/unilm/issues/1265). It is used in our `lmms-eval` pipeline to allow for one-click evaluations of large multi-modality models.

```bibtex
@article{huang2023language,
  title={Language is not all you need: Aligning perception with language models},
  author={Huang, Shaohan and Dong, Li and Wang, Wenhui and Hao, Yaru and Singhal, Saksham and Ma, Shuming and Lv, Tengchao and Cui, Lei and Mohammed, Owais Khan and Liu, Qiang and others},
  journal={arXiv preprint arXiv:2302.14045},
  volume={1},
  number={2},
  pages={3},
  year={2023}
}
```

[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)