---
pretty_name: MOAT
configs:
- config_name: default
  data_files:
  - split: test
    path: MOAT.parquet
task_categories:
- image-text-to-text
---

<h1>MOAT: Evaluating LMMs for Capability Integration and Instruction Grounding</h1>

<div align="center">
Zhoutong Ye, Mingze Sun, Huan-ang Gao, Chun Yu, Yuanchun Shi
</div>

<div align="center">
  <a href="https://arxiv.org/abs/2503.09348" target="_blank">
    <img alt="arXiv" src="https://img.shields.io/badge/arXiv-MOAT-red?logo=arxiv" height="20" />
  </a>
  <a href="https://cambrian-yzt.github.io/MOAT/" target="_blank">
    <img alt="Website" src="https://img.shields.io/badge/🌎_Website-MOAT-blue.svg" height="20" />
  </a>
  <a href="https://huggingface.co/datasets/waltsun/MOAT" target="_blank">
    <img alt="HF Dataset: MOAT" src="https://img.shields.io/badge/%F0%9F%A4%97%20_HuggingFace-MOAT-yellow" height="20" />
  </a>
  <a href="https://github.com/Cambrian-yzt/MOAT" target="_blank">
    <img alt="GitHub: MOAT" src="https://img.shields.io/badge/GitHub-MOAT-yellow?logo=github" height="20" />
  </a>
</div>

## Overview

**MOAT** (**M**ultimodal model **O**f **A**ll **T**rades) is a challenging benchmark for large multimodal models (LMMs). It consists of vision-language (VL) tasks that require an LMM to integrate several VL capabilities and engage in human-like generalist visual problem solving. Moreover, many tasks in **MOAT** focus on LMMs' ability to ground complex text and visual instructions, which is crucial for deploying LMMs in the wild. Building on the VL capability taxonomies proposed in previous benchmark papers, we define 10 fundamental VL capabilities in **MOAT**.

Please check out our [GitHub repo](https://github.com/Cambrian-yzt/MOAT) for further information.

## Usage

Please check out our [GitHub repo](https://github.com/Cambrian-yzt/MOAT) for detailed usage instructions.

**Run Your Own Evaluation**

You can access our dataset with the following code:

```python
from datasets import load_dataset

# Load the test split of MOAT from the Hugging Face Hub
dataset = load_dataset("waltsun/MOAT", split='test')
```

As some questions are formatted as interleaved text and image(s), we recommend referring to the `./inference/eval_API.py` file in our [GitHub repo](https://github.com/Cambrian-yzt/MOAT) for the correct way to query the LMM.
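
For illustration only, the sketch below shows one way an interleaved question could be converted into an OpenAI-style message content list. The `<image_i>` placeholder pattern, the helper names, and the data-URL encoding are assumptions made for this example, not the dataset's actual convention; `./inference/eval_API.py` remains the authoritative reference.

```python
# Illustrative sketch only: the "<image_i>" placeholder convention below is an
# assumption for this example; see ./inference/eval_API.py for the real logic.
import base64
import io
import re


def to_data_url(pil_image):
    """Encode a PIL image as a base64 PNG data URL (hypothetical helper)."""
    buf = io.BytesIO()
    pil_image.save(buf, format="PNG")
    return "data:image/png;base64," + base64.b64encode(buf.getvalue()).decode()


def build_interleaved_content(question, images, placeholder=r"<image_(\d+)>"):
    """Split the question on the assumed image placeholders and interleave the
    text segments with the corresponding PIL images from `images`."""
    content, cursor = [], 0
    for match in re.finditer(placeholder, question):
        if match.start() > cursor:
            content.append({"type": "text", "text": question[cursor:match.start()]})
        content.append({
            "type": "image_url",
            "image_url": {"url": to_data_url(images[int(match.group(1))])},
        })
        cursor = match.end()
    if cursor < len(question):
        content.append({"type": "text", "text": question[cursor:]})
    return content
```

The resulting list can then be wrapped as `[{"role": "user", "content": ...}]` when calling a chat-completions-style API.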

**Column Description**

- `index`: The index of the question in the dataset.
- `question`: The question text.
- `choices`: A list of answer choices. Can be empty.
- `images`: The list of PIL images accompanying the question.
- `outside_knowledge_text`: Supplementary information essential for answering the question. Optional.
- `outside_knowledge_images`: The list of PIL images essential for answering the question. Can be empty.
- `answer`: The correct answer.
- `capability`: A list of strings naming the VL capabilities required to answer the question.
- `human_cot`: The human-annotated chain-of-thought (CoT) reasoning for the question.
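
As a quick sanity check, the minimal sketch below reads these fields from one sample; the comments simply restate the column descriptions above.

```python
from datasets import load_dataset

dataset = load_dataset("waltsun/MOAT", split='test')
sample = dataset[0]

print(sample["question"])                       # question text
print(sample["choices"])                        # answer choices (may be empty)
print(len(sample["images"]))                    # number of PIL images for the question
print(sample["outside_knowledge_text"])         # optional supporting information
print(len(sample["outside_knowledge_images"]))  # supporting images (may be empty)
print(sample["answer"])                         # ground-truth answer
print(sample["capability"])                     # list of required VL capabilities
```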

## Future Work

Going forward, we intend to further increase the diversity of the tasks in **MOAT**, involving more capability combinations and encompassing more domains and scenarios. Stay tuned!