---
license: mit
language:
- en
pretty_name: common-o
dataset_info:
  features:
  - name: image_1
    dtype: image
  - name: image_2
    dtype: image
  - name: question
    dtype: string
  - name: answer
    dtype: string
  - name: objects_1
    dtype: string
  - name: objects_2
    dtype: string
  - name: num_objects_image_1
    dtype: int64
  - name: num_objects_image_2
    dtype: int64
  - name: question_template
    dtype: string
  - name: answer_type
    dtype: string
  - name: choices
    dtype: string
  - name: num_choices
    dtype: int64
  - name: num_ground_truth_objects
    dtype: int64
  - name: real_or_synthetic
    dtype: string
  - name: ground_truth_objects
    dtype: string
  splits:
  - name: main
    num_bytes: 5408696753
    num_examples: 10426
  - name: challenge
    num_bytes: 594218345
    num_examples: 12600
  download_size: 1102814055
  dataset_size: 6002915098
configs:
- config_name: default
  data_files:
  - split: main
    path: data/main-*
  - split: challenge
    path: data/challenge-*
---

# Common-O

> measuring multimodal reasoning across scenes


Common-O, inspired by cognitive tests for humans, probes multimodal LLMs' ability to reason across scenes by asking "what’s in common?"

![fair conference content copy.001](https://cdn-uploads.huggingface.co/production/uploads/64c17345e82e55936cf971bc/5av7avUrsBjFuMrWuOiCW.jpeg)

Common-O is composed of household objects:


![fair conference content copy.003](https://cdn-uploads.huggingface.co/production/uploads/64c17345e82e55936cf971bc/hEvVz2uFR6z-jv1em25eY.jpeg)


We provide two subsets: Common-O (3-8 objects) and Common-O Complex (8-16 objects); one way to inspect the object counts per split is sketched below.
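
The per-image object counts are stored in the `num_objects_image_1` and `num_objects_image_2` fields, so the subset ranges can be checked directly. A minimal sketch, assuming the `main` split corresponds to Common-O and the `challenge` split to Common-O Complex:

```python
import datasets

# Assumption: "main" holds Common-O and "challenge" holds Common-O Complex.
ds = datasets.load_dataset("facebook/Common-O")

for split in ("main", "challenge"):
    counts = ds[split]["num_objects_image_1"]
    print(f"{split}: {min(counts)}-{max(counts)} objects per image")
```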

## Multimodal LLMs excel at single-image perception but struggle with multi-scene reasoning


![single_vs_multi_image(1)](https://cdn-uploads.huggingface.co/production/uploads/64c17345e82e55936cf971bc/1cB9iXHrSgyvfXgK6gmGu.png)


## Evaluating a Multimodal LLM on Common-O

```python
import datasets

# Load the benchmark; the config exposes "main" and "challenge" splits.
common_o = datasets.load_dataset("facebook/Common-O")["main"]
# common_o_complex = datasets.load_dataset("facebook/Common-O")["challenge"]

# Get a sample.
x = common_o[3]

# `model` stands in for your multimodal LLM's inference call.
output: str = model(x["image_1"], x["image_2"], x["question"])

# The answer field is a string; split on commas to match check_answer's List[str] input.
check_answer(output, x["answer"].split(","))
```
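
Besides the two images and the question, each example carries metadata fields defined in the dataset card above (answer type, multiple-choice options, per-image object counts, and whether the scene is real or synthetic); a quick way to inspect them:

```python
import datasets

common_o = datasets.load_dataset("facebook/Common-O")["main"]
x = common_o[3]

# Field names taken from the dataset card's feature list.
for key in ("question", "answer", "answer_type", "choices", "num_choices",
            "num_objects_image_1", "num_objects_image_2", "real_or_synthetic"):
    print(f"{key}: {x[key]}")
```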

To check the answer, we use an exact-match criterion:

```python
import re
from typing import List


def check_answer(generation: str, ground_truth: List[str]) -> bool:
    # Expect the final line of the generation to look like "Answer: cup, plate".
    preds = generation.split("\n")[-1]
    preds = re.sub("Answer:", "", preds)
    preds = [p.strip() for p in preds.split(",")]

    # Exact match, order-insensitive.
    ground_truth = [g.strip() for g in ground_truth]
    return sorted(preds) == sorted(ground_truth)
```
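
As a usage example, with a generation whose final line follows the `Answer:` format (the model output here is made up for illustration), the check reduces to an order-insensitive comparison of the two object lists:

```python
generation = "The objects shared by both scenes are listed below.\nAnswer: cup, plate"
print(check_answer(generation, ["plate", "cup"]))  # True
```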