---
dataset_info:
  features:
  - name: question_text
    dtype: string
  - name: background_description
    sequence: string
  - name: answer_text
    dtype: string
  - name: options
    sequence: string
  - name: need_image
    dtype: string
  - name: language
    dtype: string
  - name: level
    dtype: string
  - name: subject
    dtype: string
  - name: subject_category
    dtype: string
  - name: year
    dtype: string
  - name: image_ids
    sequence: string
  - name: images
    list:
    - name: bytes
      dtype: binary
    - name: path
      dtype: 'null'
  splits:
  - name: italian
    num_bytes: 56350406
    num_examples: 407
  - name: javanese
    num_bytes: 181707
    num_examples: 5
  - name: afrikaans
    num_bytes: 28552878
    num_examples: 163
  - name: thai
    num_bytes: 112113903
    num_examples: 401
  - name: chinese
    num_bytes: 43661702
    num_examples: 453
  - name: swahili
    num_bytes: 96790
    num_examples: 4
  - name: portuguese
    num_bytes: 44423012
    num_examples: 452
  - name: vietnamese
    num_bytes: 7009517
    num_examples: 116
  - name: english
    num_bytes: 78893609
    num_examples: 795
  download_size: 248223963
  dataset_size: 371283524
configs:
- config_name: default
  data_files:
  - split: italian
    path: data/italian-*
  - split: javanese
    path: data/javanese-*
  - split: afrikaans
    path: data/afrikaans-*
  - split: thai
    path: data/thai-*
  - split: chinese
    path: data/chinese-*
  - split: swahili
    path: data/swahili-*
  - split: portuguese
    path: data/portuguese-*
  - split: vietnamese
    path: data/vietnamese-*
  - split: english
    path: data/english-*
task_categories:
- visual-question-answering
language:
- it
- th
- en
- jv
- sw
- vi
- zh
- pt
- af
pretty_name: Multi-Modal M3Exam
size_categories:
- 1K<n<10K
---
# Multi-Modal M3Exam
Note that this is a copy of https://github.com/DAMO-NLP-SG/M3Exam that includes ONLY the multi-modal questions!
It was created to work around issues in the original repo and to ease access. It also includes the image features and uses a single uniform schema across all language splits.
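For example, the available language splits can be listed without downloading the data (a minimal sketch using the `datasets` library):

```python
from datasets import get_dataset_split_names

# one split per language, as declared in the card metadata above
print(get_dataset_split_names("floschne/multimodal-m3exam"))
# e.g. ['italian', 'javanese', 'afrikaans', 'thai', 'chinese', 'swahili',
#       'portuguese', 'vietnamese', 'english']
```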
If you use this dataset, please cite the original authors:
```bibtex
@article{zhang2023m3exam,
  title={M3Exam: A Multilingual, Multimodal, Multilevel Benchmark for Examining Large Language Models},
  author={Wenxuan Zhang and Sharifah Mahani Aljunied and Chang Gao and Yew Ken Chia and Lidong Bing},
  year={2023},
  eprint={2306.05179},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```
## How to load the image features
Due to a bug, the images cannot be stored as `PIL.Image.Image` objects directly but had to be encoded as `datasets.Image` examples. Hence, to load them, this additional decoding step is required:
```python
from datasets import Image, load_dataset

ds = load_dataset("floschne/multimodal-m3exam", split="english")

# `map` returns a new dataset, so the result must be reassigned
ds = ds.map(
    lambda sample: {
        "images_t": [Image().decode_example(img) for img in sample["images"]]
    },
    remove_columns=["images"],
).rename_column("images_t", "images")
```
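After this step, each sample holds regular PIL images alongside the question fields. A quick sanity check (a sketch; the field names follow the feature schema above):

```python
sample = ds[0]
print(sample["question_text"])   # the question, with "(image)[...]" markers
print(sample["options"])         # the answer options
print(sample["answer_text"])     # the correct answer
print(sample["images"][0].size)  # the decoded images behave like PIL images
```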
## Code used to generate this dataset
The script below assumes that the directory `m3exam/multimodal-question/` exists and is an exact copy of the one in the original GitHub repository.

```python
import re
from copy import deepcopy
from functools import partial
from pathlib import Path

import pandas as pd
from datasets import Dataset, DatasetDict, Image, Sequence, Value
from PIL import Image as PILImage
from tqdm.auto import tqdm

tqdm.pandas()


def get_img_ids(row, img_base_p):
    # image references look like "(image)[image-1.png]" and can appear in the
    # question, the options, and the background description
    p = r"\(image\)\[image-.*\..*\]"
    imgs = re.findall(p, row["question_text"])
    for option in row["options"]:
        imgs.extend(re.findall(p, option))
    for bgdesc in row["background_description"]:
        imgs.extend(re.findall(p, bgdesc))
    img_ids = [img.split("[")[1].split("]")[0] for img in imgs]
    # remove the last character if it is a period (e.g. image-1.png. -> image-1.png)
    img_ids = [img_id[:-1] if img_id[-1] == "." else img_id for img_id in img_ids]
    # remove characters after the last digit (e.g. image-13c.png -> image-13.png)
    img_ids = [re.sub(r"\D*\.", ".", img_id) for img_id in img_ids]
    # remove characters between dots (e.g. image-13.c.png -> image-13.png)
    img_ids = [re.sub(r"\.\D*\.", ".", img_id) for img_id in img_ids]
    for img_id in img_ids:
        if not (img_base_p / img_id).exists():
            # print(f"MISSING IMAGE: {img_id=}, {imgs=}, {row.name=}")
            return None
    return img_ids


def load_images(img_ids, img_base_p):
    if img_ids is None:
        return None
    img = Image()
    return [
        img.encode_example(deepcopy(PILImage.open(img_base_p / img_id).convert("RGB")))
        for img_id in img_ids
    ]


if __name__ == "__main__":
    dsd = DatasetDict()
    img_base_p = "m3exam/multimodal-question/images-"
    for p in (
        pbar := tqdm(
            list(Path("m3exam/multimodal-question").glob("*-questions-image.json"))
        )
    ):
        lang = p.stem.split("-")[0]
        pbar.set_description(lang)

        df = pd.read_json(p)
        df["image_ids"] = df.apply(
            partial(get_img_ids, img_base_p=Path(img_base_p + lang)), axis=1
        )
        df["images"] = df["image_ids"].progress_apply(
            partial(load_images, img_base_p=Path(img_base_p + lang))
        )
        # drop samples whose referenced images are missing on disk
        df = df[~df.image_ids.isna()]
        df["year"] = df["year"].astype(str).str.strip()
        df["answer_text"] = df["answer_text"].astype(str).str.strip()
        df["question_text"] = df["question_text"].astype(str).str.strip()

        ds = Dataset.from_pandas(df.reset_index(drop=True))
        # for Javanese there are no background descriptions, so the column is
        # inferred as dtype null; cast it to a sequence of strings instead
        features = ds.features.copy()
        features["background_description"] = Sequence(
            feature=Value(dtype="string", id=None), length=-1, id=None
        )
        ds = ds.cast(features)
        dsd[lang] = ds

    dsd.push_to_hub(
        "floschne/multimodal-m3exam", token=<OMITTED>
    )
```
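As a quick illustration of the ID normalization in `get_img_ids`, the trailing-period strip and the two `re.sub` passes behave as follows (hypothetical malformed IDs, mirroring the examples in the comments above):

```python
import re

raw_ids = ["image-1.png.", "image-13c.png", "image-13.c.png"]

# strip a trailing period (image-1.png. -> image-1.png)
ids = [i[:-1] if i.endswith(".") else i for i in raw_ids]
# drop non-digit characters directly before a dot (image-13c.png -> image-13.png)
ids = [re.sub(r"\D*\.", ".", i) for i in ids]
# collapse ".<non-digits>." runs left over from the previous pass
ids = [re.sub(r"\.\D*\.", ".", i) for i in ids]

print(ids)  # ['image-1.png', 'image-13.png', 'image-13.png']
```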