|
|
--- |
|
|
configs: |
|
|
- config_name: image2text_info
  data_files: image2text_info.csv
- config_name: image2text_option
  data_files: image2text_option.csv
- config_name: text2image_info
  data_files: text2image_info.csv
- config_name: text2image_option
  data_files: text2image_option.csv
|
|
|
|
|
license: cc-by-nc-sa-4.0 |
|
|
language: |
|
|
- en |
|
|
size_categories: |
|
|
- 1K<n<10K |
|
|
tags: |
|
|
- benchmark |
|
|
- mllm |
|
|
- scientific |
|
|
- cover |
|
|
- live |
|
|
task_categories: |
|
|
- image-text-to-text |
|
|
--- |
|
|
|
|
|
# MAC: A Live Benchmark for Multimodal Large Language Models in Scientific Understanding |
|
|
|
|
|
[arXiv:2508.15802](https://arxiv.org/abs/2508.15802)
|
|
[GitHub: MAC_Bench](https://github.com/mhjiang0408/MAC_Bench)
|
|
[License: CC BY-NC-SA 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/)
|
|
|
|
|
## Dataset Description
|
|
|
|
|
MAC is a comprehensive live benchmark designed to evaluate multimodal large language models (MLLMs) on scientific understanding tasks. The dataset focuses on scientific journal cover understanding, providing a challenging testbed for assessing the visual-textual comprehension capabilities of MLLMs in academic domains.
|
|
|
|
|
### Tasks
|
|
|
|
|
**1. Image-to-Text Understanding** |
|
|
- **Input**: Scientific journal cover image |
|
|
- **Task**: Select the most accurate textual description from four options
|
|
- **Question Format**: "Which of the following options best describe the cover image?" |
|
|
|
|
|
**2. Text-to-Image Understanding** |
|
|
- **Input**: Journal cover story text description |
|
|
- **Task**: Select the corresponding image from four visual options
|
|
- **Question Format**: "Which of the following options best describe the cover story?" |
|
|
|
|
|
### Dataset Statistics
|
|
|
|
|
| Attribute | Value | |
|
|
|-----------|-------| |
|
|
| **Source Journals** | Nature, Science, Cell, ACS journals | |
|
|
| **Task Types** | 2 (Image2Text, Text2Image) | |
|
|
| **Options per Question** | 4 (A, B, C, D) | |
|
|
| **Languages** | English | |
|
|
| **Image Format** | High-resolution PNG journal covers | |
|
|
|
|
|
|
|
|
### Quick Start
|
|
|
|
|
#### Loading the Dataset |
|
|
|
|
|
```python
from datasets import load_dataset

# The card defines four configurations (see the YAML above);
# pass the one you want explicitly.
dataset = load_dataset("mhjiang0408/MAC_Bench", "image2text_info")
```
|
|
|
|
|
#### Data Fields |
|
|
|
|
|
**Image-to-Text Task Fields** (`image2text_info.csv`): |
|
|
|
|
|
```python |
|
|
{ |
|
|
'journal': str, # Source journal name (e.g., "NATURE BIOTECHNOLOGY") |
|
|
'id': str, # Unique identifier (e.g., "42_7") |
|
|
'question': str, # Task question |
|
|
'cover_image': str, # Path to cover image |
|
|
'answer': str, # Correct answer ('A', 'B', 'C', 'D') |
|
|
'option_A': str, # Option A text |
|
|
'option_A_path': str, # Path to option A story file |
|
|
'option_A_embedding_name': str, # Embedding method name |
|
|
'option_A_embedding_id': str, # Embedding identifier |
|
|
# Similar fields for options B, C, D |
|
|
'split': str # Dataset split ('train', 'val', 'test') |
|
|
} |
|
|
``` |
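As a minimal sketch of how these fields fit together (the record values below are hypothetical, not taken from the dataset), one row can be flattened into a four-option prompt and a prediction checked against the `answer` field:

```python
def build_prompt(row: dict) -> str:
    """Flatten one image2text record into a four-option prompt."""
    lines = [row["question"]]
    for letter in ("A", "B", "C", "D"):
        lines.append(f"{letter}. {row[f'option_{letter}']}")
    return "\n".join(lines)

def is_correct(row: dict, prediction: str) -> bool:
    """Compare a model's single-letter answer to the gold label."""
    return prediction.strip().upper() == row["answer"]

# Hypothetical record using the fields documented above.
example = {
    "question": "Which of the following options best describe the cover image?",
    "option_A": "A rendering of a protein complex.",
    "option_B": "A satellite image of a coral reef.",
    "option_C": "An illustration of neural circuits.",
    "option_D": "A micrograph of crystal growth.",
    "answer": "C",
}

print(build_prompt(example))
```

The image itself would be loaded from `cover_image` and passed to the model alongside this prompt.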
|
|
|
|
|
### Evaluation Framework
|
|
|
|
|
Use the official MAC_Bench evaluation toolkit: |
|
|
|
|
|
```bash |
|
|
# Clone repository |
|
|
git clone https://github.com/mhjiang0408/MAC_Bench.git |
|
|
cd MAC_Bench |
|
|
./setup.sh |
|
|
``` |
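The toolkit above handles evaluation end to end. If you only need to score a set of predictions yourself, a minimal per-journal accuracy aggregation (all names and sample values here are hypothetical) could look like:

```python
from collections import defaultdict

def accuracy_by_journal(records):
    """records: iterable of (journal, gold_letter, predicted_letter) triples."""
    hits, totals = defaultdict(int), defaultdict(int)
    for journal, gold, pred in records:
        totals[journal] += 1
        hits[journal] += int(pred == gold)
    return {journal: hits[journal] / totals[journal] for journal in totals}

# Hypothetical predictions for illustration.
preds = [
    ("NATURE BIOTECHNOLOGY", "A", "A"),
    ("NATURE BIOTECHNOLOGY", "B", "C"),
    ("SCIENCE", "D", "D"),
]
print(accuracy_by_journal(preds))
```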
|
|
|
|
|
|
|
|
### Use Cases
|
|
|
|
|
- **MLLM Evaluation**: Systematic benchmarking of multimodal large language models |
|
|
- **Scientific Vision-Language Research**: Cross-modal understanding in academic domains |
|
|
- **Educational AI**: Development of AI systems for scientific content comprehension |
|
|
- **Academic Publishing Tools**: Automated analysis of journal covers and content |
|
|
|
|
|
|
|
|
### Citation
|
|
|
|
|
If you use the MAC dataset in your research, please cite our paper: |
|
|
|
|
|
```bibtex |
|
|
@misc{jiang2025maclivebenchmarkmultimodal, |
|
|
title={MAC: A Live Benchmark for Multimodal Large Language Models in Scientific Understanding}, |
|
|
author={Mohan Jiang and Jin Gao and Jiahao Zhan and Dequan Wang}, |
|
|
year={2025}, |
|
|
eprint={2508.15802}, |
|
|
archivePrefix={arXiv}, |
|
|
primaryClass={cs.CL}, |
|
|
url={https://arxiv.org/abs/2508.15802}, |
|
|
} |
|
|
``` |
|
|
|
|
|
### License
|
|
|
|
|
This dataset is released under the CC BY-NC-SA 4.0 License. See [LICENSE](https://github.com/mhjiang0408/MAC_Bench/blob/main/LICENSE) for details. |
|
|
|
|
|
### Contributing
|
|
|
|
|
We welcome contributions to improve the dataset and benchmark: |
|
|
|
|
|
1. Report issues via [GitHub Issues](https://github.com/mhjiang0408/MAC_Bench/issues) |
|
|
2. Submit pull requests for improvements |
|
|
3. Join discussions in our [GitHub Discussions](https://github.com/mhjiang0408/MAC_Bench/discussions) |