---
license: mit
pretty_name: MMB Counterfactual Dataset
task_categories:
- visual-question-answering
- multiple-choice
language:
- en
tags:
- vision
- language
- multimodal
- counterfactual
- question-answering
- synthetic
size_categories:
- 1K<n<10K
dataset_info:
  features:
  - name: original_image
    dtype: image
  - name: counterfactual1_image
    dtype: image
  - name: counterfactual2_image
    dtype: image
  - name: counterfactual1_type
    dtype: string
  - name: counterfactual2_type
    dtype: string
  - name: counterfactual1_description
    dtype: string
  - name: counterfactual2_description
    dtype: string
  - name: original_question
    dtype: string
  - name: counterfactual1_question
    dtype: string
  - name: counterfactual2_question
    dtype: string
  - name: original_question_difficulty
    dtype: string
  - name: counterfactual1_question_difficulty
    dtype: string
  - name: counterfactual2_question_difficulty
    dtype: string
  - name: original_image_answer_to_original_question
    dtype: string
  - name: original_image_answer_to_cf1_question
    dtype: string
  - name: original_image_answer_to_cf2_question
    dtype: string
  - name: cf1_image_answer_to_original_question
    dtype: string
  - name: cf1_image_answer_to_cf1_question
    dtype: string
  - name: cf1_image_answer_to_cf2_question
    dtype: string
  - name: cf2_image_answer_to_original_question
    dtype: string
  - name: cf2_image_answer_to_cf1_question
    dtype: string
  - name: cf2_image_answer_to_cf2_question
    dtype: string
  splits:
  - name: train
    num_bytes: 29666931
    num_examples: 100
  download_size: 29653393
  dataset_size: 29666931
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---

# MMB Counterfactual Dataset

A counterfactual VQA dataset constructed with the CLEVR Blender assets, procedurally generating both negative and normal counterfactual VQA images and questions for the Multimodal Benchmark paper.

## Dataset Structure

This repository contains counterfactual visual question answering data with:

- **Original images** and **counterfactual variants** (modifications to test reasoning)
- **Questions** for each image variant
- **Answer matrices** showing how each image answers each question (9 values per scene: 3 images × 3 questions)
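
The nine answer fields per scene follow a regular naming pattern (`<image>_answer_to_<question>`). A minimal sketch that generates the field names as they appear in the dataset schema:

```python
# Build the 3x3 answer-matrix field names used by each record
# (rows: image variants, columns: question variants).
images = ["original_image", "cf1_image", "cf2_image"]
questions = ["original_question", "cf1_question", "cf2_question"]

answer_fields = [f"{img}_answer_to_{q}" for img in images for q in questions]

print(answer_fields[0])  # original_image_answer_to_original_question
print(len(answer_fields))  # 9
```

Each generated name matches a string column in the dataset, so the full matrix for record `i` can be read as `{f: ds[i][f] for f in answer_fields}`.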


### Loading from Python

Load the dataset with:

```python
from datasets import load_dataset

ds = load_dataset("scholo/MMB_dataset", split="train")
print(ds[0])
```

No `trust_remote_code=True` is needed, since the data is stored in standard Parquet format.
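
A common use of the answer matrix is checking that a counterfactual edit actually flips the answer to a question. A minimal sketch over a single record; the dict below is a hand-made stand-in for `ds[i]` with illustrative values, not real data:

```python
# Hypothetical record standing in for one dataset row (ds[i]);
# the answer values here are illustrative only.
record = {
    "original_image_answer_to_original_question": "2",
    "original_image_answer_to_cf1_question": "1",
    "cf1_image_answer_to_original_question": "3",
    "cf1_image_answer_to_cf1_question": "2",
}

def answers_differ(rec, img_a, img_b, question):
    """True if two image variants answer the same question differently."""
    return (rec[f"{img_a}_answer_to_{question}"]
            != rec[f"{img_b}_answer_to_{question}"])

# The counterfactual edit changes the answer to the original question here:
print(answers_differ(record, "original_image", "cf1_image", "original_question"))  # True
```

The same helper works on real records, since every answer column is a plain string field.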

## Directory Structure

```
MMB-Dataset/
├── README.md                   # This file
├── .gitattributes              # Git LFS configuration for images
├── data/                       # Dataset files (Parquet format)
│   └── train.parquet           # Main dataset file
├── Dataset/                    # Current dataset run
│   ├── images/                 # All PNG images (referenced by Parquet)
│   ├── scenes/                 # JSON scene descriptions (reference)
│   ├── image_mapping_with_questions.csv  # Original CSV (source)
│   ├── checkpoint.json         # Run metadata
│   └── run_metadata.json       # Run metadata
```

## License

MIT