---
license: cc-by-4.0
task_categories:
- visual-question-answering
language:
- id
- sw
- ta
- tr
- zh
- en
pretty_name: MaRVL
size_categories:
- 1K<n<10K
---

# MaRVL

### This is a copy of the original repo: https://github.com/marvl-challenge/marvl-code

If you use this dataset, please cite the original authors:

```bibtex
@inproceedings{liu-etal-2021-visually,
    title = "Visually Grounded Reasoning across Languages and Cultures",
    author = "Liu, Fangyu and
      Bugliarello, Emanuele and
      Ponti, Edoardo Maria and
      Reddy, Siva and
      Collier, Nigel and
      Elliott, Desmond",
    booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
    month = nov,
    year = "2021",
    address = "Online and Punta Cana, Dominican Republic",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.emnlp-main.818",
    pages = "10467--10485",
}
```

### Additional data

In addition to the data available in the original repo, this dataset contains the following columns:

* `left_img` --> PIL Image
* `right_img` --> PIL Image
* `resized_left_img` --> resized PIL Image
* `resized_right_img` --> resized PIL Image
* `vertically_stacked_img` --> PIL Image containing the resized left and right images stacked vertically with a `10px` black gutter
* `horizontally_stacked_img` --> PIL Image containing the resized left and right images stacked horizontally with a `10px` black gutter

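For reference, a stacked image of this kind can be reproduced from the resized pair with plain PIL. A minimal sketch of the vertical case, assuming a fixed `10px` black gutter (the function name and padding logic are illustrative, not taken from the original pipeline):

```python
from PIL import Image


def stack_vertically(left: Image.Image, right: Image.Image, gutter: int = 10) -> Image.Image:
    """Stack two images top-to-bottom with a black gutter between them."""
    width = max(left.width, right.width)
    height = left.height + gutter + right.height
    # New canvas is black, so the gutter (and any width padding) stays black
    canvas = Image.new("RGB", (width, height), color="black")
    canvas.paste(left, (0, 0))
    canvas.paste(right, (0, left.height + gutter))
    return canvas


# Example with two dummy images of different sizes
left = Image.new("RGB", (640, 480), color="red")
right = Image.new("RGB", (320, 240), color="blue")
stacked = stack_vertically(left, right)
print(stacked.size)  # (640, 730)
```

The horizontal case is symmetric: sum the widths plus the gutter and take the maximum height.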
The images were resized using [`img2dataset`](https://github.com/rom1504/img2dataset/blob/main/img2dataset/resizer.py):

<details>
<summary>Show code snippet</summary>

```python
from img2dataset.resizer import Resizer, ResizeMode

resizer = Resizer(
    image_size=640,
    resize_mode=ResizeMode.keep_ratio,
    resize_only_if_bigger=True,
)
```

</details>

### How to read the images

Due to a [bug](https://github.com/huggingface/datasets/issues/4796), the images cannot be stored as `PIL.Image.Image`s directly but need to be converted to `datasets.Image`s. Hence, this additional step is required to load them:

```python
from datasets import Image, load_dataset

ds = load_dataset("floschne/marvl", split="sw")
ds = ds.map(
    lambda sample: {
        "left_img_t": [Image().decode_example(img) for img in sample["left_img"]],
        "right_img_t": [Image().decode_example(img) for img in sample["right_img"]],
        "resized_left_img_t": [
            Image().decode_example(img) for img in sample["resized_left_img"]
        ],
        "resized_right_img_t": [
            Image().decode_example(img) for img in sample["resized_right_img"]
        ],
        "vertically_stacked_img_t": [
            Image().decode_example(img) for img in sample["vertically_stacked_img"]
        ],
        "horizontally_stacked_img_t": [
            Image().decode_example(img) for img in sample["horizontally_stacked_img"]
        ],
    },
    remove_columns=[
        "left_img",
        "right_img",
        "resized_left_img",
        "resized_right_img",
        "vertically_stacked_img",
        "horizontally_stacked_img",
    ],
    batched=True,
).rename_columns(
    {
        "left_img_t": "left_img",
        "right_img_t": "right_img",
        "resized_left_img_t": "resized_left_img",
        "resized_right_img_t": "resized_right_img",
        "vertically_stacked_img_t": "vertically_stacked_img",
        "horizontally_stacked_img_t": "horizontally_stacked_img",
    }
)
```