## Introduction
MRAMG-Bench is a comprehensive multimodal benchmark with six carefully curated English datasets. The benchmark comprises 4,346 documents, 14,190 images, and 4,800 QA pairs, sourced from three domains—Web Data, Academic Papers, and Lifestyle Data. We believe it provides a robust evaluation framework that advances research in Multimodal Retrieval-Augmented Multimodal Generation (MRAMG).
## **Data Structure**
- **`image_caption` (str)**: A textual description or caption of the image.
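As a minimal sketch of how the per-image metadata might be consumed, the snippet below parses a sample entry and reads its `image_caption` field. Only `image_caption` is documented above; the `image_id` field, the list-of-objects layout, and the round-trip via `json` are assumptions about the metadata schema, not the actual format of the six JSON files.

```python
import json

# Hypothetical metadata entries: `image_caption` follows the field
# documented above; `image_id` and the overall layout are assumptions.
sample_metadata = [
    {"image_id": 1, "image_caption": "A photo of the Eiffel Tower."},
    {"image_id": 2, "image_caption": "Diagram of a transformer block."},
]

# Serialize and parse, as one would when reading a metadata JSON file.
parsed = json.loads(json.dumps(sample_metadata))

# Collect the caption of every image in the file.
captions = [entry["image_caption"] for entry in parsed]
```

In practice you would replace the round-trip with `json.load` on one of the six metadata files and iterate over its entries the same way.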
## Upcoming Features
We are excited to announce that MRAMG-Bench will soon introduce a bilingual version supporting both Chinese and English. The Chinese version will further support exploration and research in MRAMG.
## Contact
If you have any questions or suggestions, please contact yuqinhan@stu.pku.edu.cn.