---
tags:
- Multimodal
dataset_info:
features:
- name: id
dtype: string
- name: foldername
dtype: string
- name: image1
dtype: image
- name: image2
dtype: image
- name: image3
dtype: image
- name: image4
dtype: image
- name: relation
dtype: string
- name: domain
dtype: string
- name: type
dtype: string
- name: culture
dtype: string
- name: language
dtype: string
- name: explanation
dtype: string
- name: hop_count
dtype: int64
- name: reasoning
dtype: string
- name: perception
dtype: string
- name: conception
dtype: string
- name: img_id1
dtype: string
- name: filename1
dtype: string
- name: description1
dtype: string
- name: image_path1
dtype: string
- name: img_id2
dtype: string
- name: filename2
dtype: string
- name: description2
dtype: string
- name: image_path2
dtype: string
- name: img_id3
dtype: string
- name: filename3
dtype: string
- name: description3
dtype: string
- name: image_path3
dtype: string
- name: img_id4
dtype: string
- name: filename4
dtype: string
- name: description4
dtype: string
- name: image_path4
dtype: string
configs:
- config_name: default
data_files:
- split: ria
path: data/ria-*
- split: ica
path: data/ica-*
license: cc-by-4.0
---
# MM-OPERA: Multi-Modal OPen-Ended Reasoning-guided Association Benchmark
## Overview
MM-OPERA is a benchmark designed to evaluate the open-ended association reasoning capabilities of Large Vision-Language Models (LVLMs). With 11,497 instances, it challenges models to identify and express meaningful connections across distant concepts in an open-ended format, mirroring human-like reasoning. The dataset spans diverse cultural, linguistic, and thematic contexts, making it a robust tool for advancing multimodal AI research.
<div style="text-align: center;">
<img src="mm-opera-bench-statistics.jpg" width="80%">
</div>
<div style="text-align: center;">
<img src="mm-opera-bench-overview.jpg" width="80%">
</div>
**Key Highlights**:
- **Tasks**: Remote-Item Association (RIA) and In-Context Association (ICA)
- **Dataset Size**: 11,497 instances (8,021 in RIA; 869 × 4 = 3,476 in ICA)
- **Context Coverage**: Multilingual, multicultural, and rich thematic contexts
- **Hierarchical Ability Taxonomy**: 13 associative ability dimensions (conception/perception) and 3 relationship types
- **Structured Clarity**: Association reasoning paths for clear and structured reasoning
- **Evaluation**: Open-ended responses assessed via tailored LLM-as-a-Judge with cascading scoring rubric and process-reward reasoning scoring
- **Applications**: Enhances LVLMs for real-world tasks like knowledge synthesis and relational inference
MM-OPERA is ideal for researchers and developers aiming to push the boundaries of multi-modal association reasoning.
## Why Open-Ended Association Reasoning?
**Association** is the backbone of human cognition, enabling us to connect disparate ideas, synthesize knowledge, and drive processes like memory, perception, creative thinking, and rule discovery. While recent benchmarks explore association via closed-ended tasks with fixed options, they often fall short of capturing the dynamic reasoning needed for real-world AI.
**Open-ended association reasoning** is the key to unlocking LVLMs' true potential. Here's why:
- **No Bias from Fixed Options**: Closed-ended tasks can subtly guide models, masking their independent reasoning abilities.
- **Complex, Multi-Step Challenges**: Open-ended formats allow for intricate, long-form reasoning, pushing models to tackle relational inference head-on.
These insights inspired MM-OPERA, a benchmark designed to rigorously evaluate and enhance LVLMs' associative reasoning through open-ended tasks. Ready to explore the future of multimodal reasoning?
## Features
**Novel Tasks Aligned with Human Psychometric Principles**:
- **RIA**: Links distant concepts through structured reasoning.
- **ICA**: Evaluates pattern recognition in in-context learning scenarios.
**Broad Coverage**: 13 associative ability dimensions and 3 relationship types, spanning 15 languages, multicultural contexts, and 22 topic domains.
**Rich Metrics**: Evaluates responses on Score Rate, Reasoning Score, Reasonableness, Distinctiveness, Knowledgeability, and more for nuanced insights.
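As a toy illustration of how a cascading rubric can compose with a Score Rate, consider the sketch below. The 0-3 scale, the gate-then-bonus structure, and the function names are illustrative assumptions, not the benchmark's actual rubric (which is applied by an LLM judge):

```python
# Illustrative sketch only: MM-OPERA's real rubric is applied by an LLM judge.
# Here, a gating criterion (reasonableness) must pass before any bonus
# criteria (distinctiveness, knowledgeability) are even considered.

def cascade_score(reasonable: bool, distinctive: bool, knowledgeable: bool) -> int:
    """Return a 0-3 score; unreasonable answers short-circuit to 0."""
    if not reasonable:  # gate: fail here and nothing else is credited
        return 0
    score = 1  # base credit for a reasonable association
    if distinctive:      # bonus: the connection is non-generic
        score += 1
    if knowledgeable:    # bonus: the connection is grounded in knowledge
        score += 1
    return score

def score_rate(scores, max_score=3):
    """Normalize a batch of per-instance scores into a Score Rate in [0, 1]."""
    return sum(scores) / (max_score * len(scores))
```

The cascade means a response cannot earn style points for a connection that is not reasonable in the first place, which is the bias-avoidance property an open-ended rubric needs.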
**Open-ended Evaluation**: Free-form responses with a cascading scoring rubric, avoiding bias from predefined options.
**Process-Reward Reasoning Evaluation**: Assesses each step of the association reasoning path toward the final connection, offering insight into the reasoning process that outcome-based metrics cannot capture.
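A minimal sketch of the process-reward idea, assuming each hop of an association path has already been judged on a 0-1 scale (in MM-OPERA the judging itself is done by an LLM judge; the function below only aggregates, and its name is an assumption):

```python
# Hypothetical aggregation for process-reward scoring: every hop in an
# association reasoning path gets its own judgment, so a path that is
# partially correct earns partial credit instead of an all-or-nothing 0.

def reasoning_score(step_judgments):
    """Mean of per-step judgments in [0, 1] along one reasoning path."""
    if not step_judgments:
        return 0.0
    return sum(step_judgments) / len(step_judgments)

# A 3-hop path whose middle hop is judged wrong still earns partial credit.
partial = reasoning_score([1.0, 0.0, 1.0])
```

An outcome-only metric collapses the whole path into one pass/fail number; the per-step view keeps the signal from the two correct hops.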
## Usage Example
```python
from datasets import load_dataset

# Log in first (e.g. `huggingface-cli login`) if you need access to this dataset
ds = load_dataset("titic/MM-OPERA")

# Example of an RIA instance
ria_example = ds["ria"][0]
print(ria_example)

# Example of an ICA instance
ica_example = ds["ica"][0]
print(ica_example)
```
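Since every instance carries fields such as `language` and `domain` (declared in the schema above), subsets are easy to carve out. The snippet below runs offline on stand-in records with invented values; on the loaded dataset you would call `ds["ria"].filter(lambda ex: ex["language"] == "en")` instead:

```python
# Stand-in records mimicking a few RIA fields; the values are invented
# for illustration and are not taken from the actual dataset.
records = [
    {"id": "r1", "language": "en", "domain": "art",   "hop_count": 2},
    {"id": "r2", "language": "zh", "domain": "music", "hop_count": 3},
    {"id": "r3", "language": "en", "domain": "music", "hop_count": 1},
]

def subset(rows, **criteria):
    """Keep only the rows matching every field=value criterion."""
    return [r for r in rows if all(r.get(k) == v for k, v in criteria.items())]

english_music = subset(records, language="en", domain="music")
```

The same per-field filtering lets you slice evaluation results by culture, hop count, or topic domain for finer-grained analysis.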
Explore MM-OPERA to unlock the next level of multimodal association reasoning!
## Citation
If you use this dataset in your work, please cite it as follows:
```bibtex
@misc{huang2025mmopera,
author = {Zimeng Huang and Jinxin Ke and Xiaoxuan Fan and Yufeng Yang and Yang Liu and Zhonghan Liu and Zedi Wang and Junteng Dai and Haoyi Jiang and Yuyu Zhou and Keze Wang and Ziliang Chen},
title = {MM-OPERA},
month = {oct},
year = {2025},
publisher = {Zenodo},
version = {1.0.0},
doi = {10.5281/zenodo.17300924},
url = {https://doi.org/10.5281/zenodo.17300924}
}
```