---
license: apache-2.0
---



## Introduction
MRAMG-Bench is a comprehensive multimodal benchmark built from six carefully curated English datasets. It comprises 4,346 documents, 14,190 images, and 4,800 QA pairs sourced from three domains: Web Data, Academic Papers, and Lifestyle Data. We believe it provides a robust evaluation framework that advances research in Multimodal Retrieval-Augmented Multimodal Generation (MRAMG).

## **Data Structure**

The dataset consists of three major components: **Documents, Multimodal QA pairs, and Images**. Each component is structured across six different sub-datasets, ensuring a diverse and comprehensive collection of multimodal content.

---

### **1. Document Collection**
The dataset includes **six JSONL files**, each corresponding to a different data source:

| File Name          | Description | Num |
|--------------------|-------------|-------------|
| `doc_wit.jsonl`   |MRAMG-Wit documents | 639 |
| `doc_wiki.jsonl`  | MRAMG-Wiki documents | 538 |
| `doc_web.jsonl`   | MRAMG-Web documents | 1500 |
| `doc_arxiv.jsonl` | MRAMG-Arxiv documents | 101 |
| `doc_recipe.jsonl`| MRAMG-Recipe documents | 1528 |
| `doc_manual.jsonl`| MRAMG-Manual documents | 40 |



#### **Field Definitions**
- **`id` (int)**: Unique identifier for the document.  
- **`content` (str)**: The main textual content of the document. If an image is referenced, `<PIC>` is used as a placeholder indicating its position in the text.  
- **`images_list` (list[int])**: A list of **image IDs** associated with the document.  
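
To make the placeholder convention concrete, the sketch below parses a single document record and interleaves its text with image IDs. The record's values are invented for illustration, and it assumes the i-th `<PIC>` placeholder corresponds to the i-th entry of `images_list`; verify that against the actual files.

```python
import json

# Illustrative record following the schema above; the values are invented,
# not taken from the actual dataset files.
line = '{"id": 1, "content": "A cat. <PIC> It sleeps. <PIC>", "images_list": [10, 11]}'
doc = json.loads(line)

# Assumption: the i-th <PIC> placeholder refers to the i-th image ID.
segments = doc["content"].split("<PIC>")

# Interleave text segments with their image IDs.
interleaved = []
for text, img_id in zip(segments, doc["images_list"]):
    if text.strip():
        interleaved.append(("text", text.strip()))
    interleaved.append(("image", img_id))
if segments[-1].strip():
    interleaved.append(("text", segments[-1].strip()))
```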

---


### **2. Multimodal QA Pairs**
The **MQA component** consists of **six JSONL files**, each corresponding to a different dataset:

| File Name          | Description | Num | 
|--------------------|-------------|-------------|
| `wit_mqa.jsonl`     |MRAMG-Wit multimodal QA pairs  | 600 |
| `wiki_mqa.jsonl`    | MRAMG-Wiki multimodal QA pairs | 500 |
| `web_mqa.jsonl`     | MRAMG-Web multimodal QA pairs | 750 |
| `arxiv_mqa.jsonl`   | MRAMG-Arxiv multimodal QA pairs | 200 |
| `recipe_mqa.jsonl`  | MRAMG-Recipe multimodal QA pairs | 2360 |
| `manual_mqa.jsonl`  | MRAMG-Manual multimodal QA pairs | 390 |


Each entry contains **a question ID, a question, provenance documents, a ground truth answer, and a list of image IDs associated with the answer**.


#### **Field Definitions**
- **`id` (str)**: Unique identifier for the question.  
- **`question` (str)**: The question text.  
- **`provenance` (list[int])**: A list of **document IDs** that serve as supporting evidence for the answer.  
- **`ground_truth` (str)**: The correct answer, which may contain `<PIC>` placeholders indicating relevant images.  
- **`images_list` (list[int])**: A list of **image IDs** directly associated with the answer.  
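
A minimal loader sketch for these records follows. The sample record's values are invented, and the consistency check between `<PIC>` placeholders and `images_list` is an assumption about the data rather than something this card guarantees.

```python
import json
from typing import Iterable

def load_mqa(lines: Iterable[str]) -> list[dict]:
    """Parse MQA records from JSONL lines and sanity-check their fields."""
    records = []
    for line in lines:
        rec = json.loads(line)
        assert isinstance(rec["id"], str) and isinstance(rec["question"], str)
        assert all(isinstance(d, int) for d in rec["provenance"])
        # Assumed invariant: one <PIC> placeholder per associated image ID.
        assert rec["ground_truth"].count("<PIC>") == len(rec["images_list"])
        records.append(rec)
    return records

# Illustrative record (values invented, not from the real files):
sample = ('{"id": "wit_0001", "question": "What bird is shown?", '
          '"provenance": [12], "ground_truth": "A kingfisher. <PIC>", '
          '"images_list": [345]}')
mqa = load_mqa([sample])
```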

---

### **3. Image Metadata**
The dataset contains **a collection of images** stored under the directory:

```
IMAGE/images/
```

Additionally, metadata about these images is provided in **six JSON files**, corresponding to each dataset:

| File Name          | Description | Num | 
|--------------------|-------------|-------------|
| `wit_imgs_collection.json`      | Image metadata from MRAMG-Wit  | 639 |
| `wiki_imgs_collection.json`     | Image metadata from MRAMG-Wiki | 538 |
| `web_imgs_collection.json`      | Image metadata from MRAMG-Web | 1500 |
| `arxiv_imgs_collection.json`    | Image metadata from MRAMG-Arxiv | 337 |
| `recipe_imgs_collection.json`   | Image metadata from MRAMG-Recipe | 8569 |
| `manual_imgs_collection.json`   | Image metadata from MRAMG-Manual | 2607 |



#### **Field Definitions**
- **`id` (int)**: Unique identifier for the image.  
- **`image_url` (str)**: The URL where the image is originally sourced from.  
- **`image_path` (str)**: The filename of the image as stored in the dataset.  
- **`image_caption` (str)**: A textual description or caption of the image.  
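
To connect the image IDs in a QA pair's `images_list` back to files on disk, the metadata can be indexed by `id`. The entry below is invented for illustration, and joining against `IMAGE/images/` assumes `image_path` is a bare filename as described above.

```python
import json
import os

# Illustrative metadata entry (field names per the definitions above;
# the values themselves are invented).
imgs = json.loads(
    '[{"id": 345, "image_url": "https://example.org/bird.jpg", '
    '"image_path": "345.jpg", "image_caption": "A kingfisher on a branch."}]'
)

# Index images by ID for O(1) lookup from a QA pair's images_list.
img_index = {img["id"]: img for img in imgs}

def resolve(image_id: int, root: str = os.path.join("IMAGE", "images")) -> str:
    """Return the on-disk path for an image ID."""
    return os.path.join(root, img_index[image_id]["image_path"])
```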




## Contact
If you have any questions or suggestions, please contact yuqinhan@stu.pku.edu.cn.

## Citation Information
If you use this benchmark in your research, please cite the benchmark as follows:

```
@article{yu2025mramg,
  title={MRAMG-Bench: A Beyond-Text Benchmark for Multimodal Retrieval-Augmented Multimodal Generation},
  author={Yu, Qinhan and Xiao, Zhiyou and Li, Binghui and Wang, Zhengren and Chen, Chong and Zhang, Wentao},
  journal={arXiv preprint arXiv:2502.04176},
  year={2025}
}
```