---
language:
- id
- sw
- ta
- tr
- zh
- en
license: cc-by-4.0
size_categories:
- 1K<n<10K
task_categories:
- visual-question-answering
pretty_name: MaRVL
dataset_info:
features:
- name: id
dtype: string
- name: hypothesis
dtype: string
- name: hypo_en
dtype: string
- name: language
dtype: string
- name: label
dtype: bool
- name: chapter
dtype: string
- name: concept
dtype: string
- name: annotator_info
struct:
- name: age
dtype: int64
- name: annotator_id
dtype: string
- name: country_of_birth
dtype: string
- name: country_of_residence
dtype: string
- name: gender
dtype: string
- name: left_img_id
dtype: string
- name: right_img_id
dtype: string
- name: left_img
struct:
- name: bytes
dtype: binary
- name: path
dtype: 'null'
- name: right_img
struct:
- name: bytes
dtype: binary
- name: path
dtype: 'null'
- name: resized_left_img
struct:
- name: bytes
dtype: binary
- name: path
dtype: 'null'
- name: resized_right_img
struct:
- name: bytes
dtype: binary
- name: path
dtype: 'null'
- name: vertically_stacked_img
struct:
- name: bytes
dtype: binary
- name: path
dtype: 'null'
- name: horizontally_stacked_img
struct:
- name: bytes
dtype: binary
- name: path
dtype: 'null'
splits:
- name: id
num_bytes: 2079196646
num_examples: 1128
- name: sw
num_bytes: 899838181
num_examples: 1108
- name: ta
num_bytes: 801784098
num_examples: 1242
- name: tr
num_bytes: 1373652829
num_examples: 1180
- name: zh
num_bytes: 1193602152
num_examples: 1012
download_size: 6234764237
dataset_size: 6348073906
configs:
- config_name: default
data_files:
- split: id
path: data/id-*
- split: sw
path: data/sw-*
- split: ta
path: data/ta-*
- split: tr
path: data/tr-*
- split: zh
path: data/zh-*
---
# MaRVL
### This is a copy from the original repo: https://github.com/marvl-challenge/marvl-code
If you use this dataset, please cite the original authors:
```bibtex
@inproceedings{liu-etal-2021-visually,
title = "Visually Grounded Reasoning across Languages and Cultures",
author = "Liu, Fangyu and
Bugliarello, Emanuele and
Ponti, Edoardo Maria and
Reddy, Siva and
Collier, Nigel and
Elliott, Desmond",
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-main.818",
pages = "10467--10485",
}
```
### Additional data
In addition to the data available in the original repo, this dataset contains the following columns:
* `hypo_en` --> English translation of the `hypothesis`, created using Bing Translate
* `left_img` --> PIL Image
* `right_img` --> PIL Image
* `resized_left_img` --> resized PIL Image
* `resized_right_img` --> resized PIL Image
* `vertically_stacked_img` --> PIL Image containing the resized left and right images stacked vertically with a black gutter of `10px`
* `horizontally_stacked_img` --> PIL Image containing the resized left and right images stacked horizontally with a black gutter of `10px`
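
The stacked columns can be reproduced with plain PIL; a minimal sketch of the idea (the helper name `stack_images` is hypothetical and not part of the dataset tooling):

```python
from PIL import Image


def stack_images(
    left: Image.Image,
    right: Image.Image,
    gutter: int = 10,
    vertical: bool = True,
) -> Image.Image:
    """Stack two images with a black gutter of `gutter` pixels between them."""
    if vertical:
        size = (max(left.width, right.width), left.height + gutter + right.height)
        canvas = Image.new("RGB", size, color="black")  # black background = black gutter
        canvas.paste(left, (0, 0))
        canvas.paste(right, (0, left.height + gutter))
    else:
        size = (left.width + gutter + right.width, max(left.height, right.height))
        canvas = Image.new("RGB", size, color="black")
        canvas.paste(left, (0, 0))
        canvas.paste(right, (left.width + gutter, 0))
    return canvas
```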
The images were resized using [`img2dataset`](https://github.com/rom1504/img2dataset/blob/main/img2dataset/resizer.py):

<details>
<summary>Show code snippet</summary>

```python
Resizer(
    image_size=640,
    resize_mode=ResizeMode.keep_ratio,
    resize_only_if_bigger=True,
)
```

</details>
### How to read the images
Due to a [bug](https://github.com/huggingface/datasets/issues/4796), the images cannot be stored as `PIL.Image.Image`s directly but have to be stored as `datasets.Image`s. Hence, this additional step is required to load them:
```python
from datasets import Image, load_dataset

ds = load_dataset("floschne/marvl", split="sw")
# `batched=True` is required because the lambda iterates over lists of samples;
# the result must be assigned back, since `map` returns a new dataset.
ds = ds.map(
    lambda sample: {
        "left_img_t": [Image().decode_example(img) for img in sample["left_img"]],
        "right_img_t": [Image().decode_example(img) for img in sample["right_img"]],
        "resized_left_img_t": [
            Image().decode_example(img) for img in sample["resized_left_img"]
        ],
        "resized_right_img_t": [
            Image().decode_example(img) for img in sample["resized_right_img"]
        ],
        "vertically_stacked_img_t": [
            Image().decode_example(img) for img in sample["vertically_stacked_img"]
        ],
        "horizontally_stacked_img_t": [
            Image().decode_example(img) for img in sample["horizontally_stacked_img"]
        ],
    },
    remove_columns=[
        "left_img",
        "right_img",
        "resized_left_img",
        "resized_right_img",
        "vertically_stacked_img",
        "horizontally_stacked_img",
    ],
    batched=True,
).rename_columns(
    {
        "left_img_t": "left_img",
        "right_img_t": "right_img",
        "resized_left_img_t": "resized_left_img",
        "resized_right_img_t": "resized_right_img",
        "vertically_stacked_img_t": "vertically_stacked_img",
        "horizontally_stacked_img_t": "horizontally_stacked_img",
    }
)
```