---
dataset_info:
  features:
  - name: pid
    dtype: int64
  - name: question
    dtype: string
  - name: decoded_image
    dtype: image
  - name: image
    dtype: string
  - name: answer
    dtype: string
  - name: task
    dtype: string
  - name: category
    dtype: string
  - name: complexity
    dtype: int64
  splits:
  - name: GRAB
    num_bytes: 466596459.9
    num_examples: 2170
  download_size: 406793109
  dataset_size: 466596459.9
configs:
- config_name: default
  data_files:
  - split: GRAB
    path: data/GRAB-*
license: mit
---

# GRAB: A Challenging GRaph Analysis Benchmark for Large Multimodal Models

## Dataset Description

- **Homepage:** [https://grab-benchmark.github.io](https://grab-benchmark.github.io)
- **Paper:** [GRAB: A Challenging GRaph Analysis Benchmark for Large Multimodal Models](https://arxiv.org/abs/2408.11817)
- **Repository:** [GRAB](https://github.com/jonathan-roberts1/GRAB)
- **Leaderboard:** [https://grab-benchmark.github.io](https://grab-benchmark.github.io)

GRAB consists of 3 splits: GRAB, GRAB-real and GRAB-lite. This is the dataset for **GRAB**.

<p align="center">
    <a href="https://huggingface.co/datasets/jonathan-roberts1/GRAB"><img height="100" alt="🤗 GRAB" src="https://img.shields.io/badge/%F0%9F%A4%97%20GRAB-DDEBF7?style=for-the-badge"></a>
    <a href="https://huggingface.co/datasets/jonathan-roberts1/GRAB-real"><img height="100" alt="🤗 GRAB-real" src="https://img.shields.io/badge/%F0%9F%A4%97%20GRAB--real-E8F5E9?style=for-the-badge"></a>
    <a href="https://huggingface.co/datasets/jonathan-roberts1/GRAB-lite"><img height="100" alt="🤗 GRAB-lite" src="https://img.shields.io/badge/%F0%9F%A4%97%20GRAB--lite-FFF3E0?style=for-the-badge"></a>
</p>

### Dataset Summary
Large multimodal models (LMMs) have exhibited proficiencies across many visual tasks. Although numerous benchmarks exist to evaluate model performance, they increasingly have insufficient headroom and are **unfit to evaluate the next generation of frontier LMMs**.

To overcome this, we present **GRAB**, a challenging benchmark focused on the tasks **human analysts** might typically perform when interpreting figures. Such tasks include estimating the mean, intercepts, or correlations of functions and data series, and performing transforms.

We evaluate a suite of **20 LMMs** on GRAB, finding it to be a challenging benchmark, with the current best model scoring just **21.0%**.

### Example usage
```python
from datasets import load_dataset

# load dataset
grab_dataset = load_dataset("jonathan-roberts1/GRAB", split='GRAB')
"""
Dataset({
    features: ['pid', 'question', 'decoded_image', 'image', 'answer', 'task', 'category', 'complexity'],
    num_rows: 2170
})
"""
# query individual questions
grab_dataset[40] # e.g., the 41st element
"""
{'pid': 40, 'question': 'What is the value of the y-intercept of the function? Give your answer as an integer.',
'decoded_image': <PIL.PngImagePlugin.PngImageFile image mode=RGBA size=5836x4842 at 0x12288EA60>,
'image': 'images/40.png', 'answer': '1', 'task': 'properties', 'category': 'Intercepts and Gradients',
'complexity': 0}
"""
question_40 = grab_dataset[40]['question'] # question
answer_40 = grab_dataset[40]['answer'] # ground truth answer
pil_image_40 = grab_dataset[40]['decoded_image'] # decoded PIL image
```
Note: the `image` feature stores file paths relative to this repository, pointing into the `images` directory (packaged as [images.zip](https://huggingface.co/datasets/jonathan-roberts1/GRAB/tree/main/images.zip)); the `decoded_image` feature holds the already-decoded PIL image.

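If you would rather work from the image files on disk, a minimal sketch is below; it assumes you have downloaded `images.zip` from this repository and extracted it so that an `images/` directory sits in your working directory, matching the stored paths.

```python
from datasets import load_dataset
from PIL import Image

grab_dataset = load_dataset("jonathan-roberts1/GRAB", split='GRAB')

# 'image' holds a repository-relative path such as 'images/40.png';
# this assumes images.zip has been extracted into the working directory
image_path = grab_dataset[40]['image']
pil_image = Image.open(image_path)
print(pil_image.size)
```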

Please visit our [GitHub repository](https://github.com/jonathan-roberts1/GRAB) for example inference code.
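For orientation, the sketch below shows the rough shape of an evaluation loop; `query_model` is a hypothetical placeholder for whichever LMM you evaluate, and the exact-match scoring is a simplification of proper answer parsing (see the repository for the official evaluation code).

```python
from datasets import load_dataset

grab_dataset = load_dataset("jonathan-roberts1/GRAB", split='GRAB')

correct = 0
for example in grab_dataset:
    # query_model is a hypothetical stand-in for your LMM API of choice
    prediction = query_model(example['question'], example['decoded_image'])
    # naive exact-match scoring against the ground-truth answer string
    correct += prediction.strip() == example['answer']

print(f"Accuracy: {correct / len(grab_dataset):.1%}")
```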

### Dataset Curators

This dataset was curated by Jonathan Roberts, Kai Han, and Samuel Albanie.

### Citation Information
```bibtex
@inproceedings{roberts2025grab,
  title={GRAB: A challenging graph analysis benchmark for large multimodal models},
  author={Roberts, Jonathan and Han, Kai and Albanie, Samuel},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
  pages={1644--1654},
  year={2025}
}
```