titic committed 25b9bef (verified) · Parent: 05f5005

Update README.md

Files changed (1): README.md (+61 −10)
@@ -17,9 +17,6 @@ dataset_info:
   split: ica
   - name: relation
     dtype: string
-  - name: visibility
-    dtype: string
-  - split: ria
   - name: domain
     dtype: string
   - name: type
@@ -88,21 +85,75 @@ configs:
   path: data/ica-*

 ---
-# MM-OPERA-Bench
-
-## Introduction
-**MM-OPERA-Bench** contains 11,493 samples.
-
 <div style="text-align: center;">
   <img src="mm-opera-bench-statistics.jpg" width="80%">
 </div>
-
-## Example
-
 <div style="text-align: center;">
   <img src="mm-opera-bench-overview.jpg" width="80%">
 </div>
-
-## Mini-Leaderboard
# MM-OPERA: Multi-Modal OPen-Ended Reasoning-guided Association Benchmark 🧠🌐

## Overview 📖

MM-OPERA is a benchmark designed to evaluate the open-ended association reasoning capabilities of Large Vision-Language Models (LVLMs). With 11,497 instances, it challenges models to identify and express meaningful connections between distant concepts in an open-ended format, mirroring human-like reasoning. The dataset spans diverse cultural, linguistic, and thematic contexts, making it a robust tool for advancing multimodal AI research. 🌍✨

<div style="text-align: center;">
  <img src="mm-opera-bench-statistics.jpg" width="80%">
</div>

<div style="text-align: center;">
  <img src="mm-opera-bench-overview.jpg" width="80%">
</div>

**Key Highlights**:

- **Tasks**: Remote-Item Association (RIA) and In-Context Association (ICA)
- **Dataset Size**: 11,497 instances (8,021 RIA and 3,476 ICA)
- **Context Coverage**: Multilingual, multicultural, and rich thematic contexts
- **Hierarchical Ability Taxonomy**: 13 associative ability dimensions (conception/perception) and 3 relationship types
- **Structured Clarity**: Association reasoning paths for clear, structured reasoning
- **Evaluation**: Open-ended responses assessed via a tailored LLM-as-a-Judge with a cascading scoring rubric and process-reward reasoning scoring
- **Applications**: Enhances LVLMs for real-world tasks like knowledge synthesis and relational inference

MM-OPERA is ideal for researchers and developers aiming to push the boundaries of multimodal association reasoning. 🚀
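As a quick sanity check on the highlighted numbers, the two task splits account exactly for the stated total (a trivial sketch, with the counts taken from this card):

```python
# Split sizes quoted in the highlights above; the split names "ria" and
# "ica" match the splits declared in the dataset config.
SPLIT_SIZES = {"ria": 8021, "ica": 3476}

total = sum(SPLIT_SIZES.values())
print(total)  # 11497, the stated overall instance count
```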

## Why Open-Ended Association Reasoning? 🧠💡

**Association** is the backbone of human cognition, enabling us to connect disparate ideas, synthesize knowledge, and drive processes like memory, perception, creative thinking, and rule discovery. While recent benchmarks explore association via closed-ended tasks with fixed options, they often fall short in capturing the dynamic reasoning needed for real-world AI. 😕

**Open-ended association reasoning** is the key to unlocking LVLMs' true potential. Here's why:

- 🚫 **No Bias from Fixed Options**: Closed-ended tasks can subtly guide models, masking their independent reasoning abilities.
- 🌟 **Complex, Multi-Step Challenges**: Open-ended formats allow for intricate, long-form reasoning, pushing models to tackle relational inference head-on.

These insights inspired MM-OPERA, a benchmark designed to rigorously evaluate and enhance LVLMs' associative reasoning through open-ended tasks. Ready to explore the future of multimodal reasoning? 🚀

## Features 🔍

🧩 **Novel Tasks Aligned with Human Psychometric Principles**:
- **RIA**: Links distant concepts through structured reasoning.
- **ICA**: Evaluates pattern recognition in in-context learning scenarios.

🌐 **Broad Coverage**: 13 associative ability dimensions and 3 relationship types, spanning 15 languages, diverse cultural contexts, and 22 topic domains.

📊 **Rich Metrics**: Evaluates responses on Score Rate, Reasoning Score, Reasonableness, Distinctiveness, Knowledgeability, and more for nuanced insights.

✅ **Open-Ended Evaluation**: Free-form responses scored with a cascading rubric, avoiding bias from predefined options.

📈 **Process-Reward Reasoning Evaluation**: Assesses each association reasoning step toward the final connection, offering insights into the reasoning process that outcome-based metrics cannot capture.
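The benchmark's actual rubric lives in its LLM-as-a-Judge prompts; purely to illustrate the two scoring ideas named above — a cascading rubric that gates later criteria on earlier ones, and a process reward averaged over reasoning steps — a minimal sketch might look like this (function names and stage choices are hypothetical, not MM-OPERA's API):

```python
def cascading_score(checks):
    """Cascading-rubric sketch: each stage gates the next, so a failed
    early criterion stops credit for later ones.
    checks: non-empty ordered booleans (e.g. relevance, validity, depth)."""
    passed_stages = 0
    for passed in checks:
        if not passed:
            break
        passed_stages += 1
    return passed_stages / len(checks)

def process_reward(step_scores):
    """Process-reward sketch: average per-step judgments over the
    association reasoning path, instead of scoring only the outcome."""
    return sum(step_scores) / len(step_scores)

print(cascading_score([True, True, False]))  # 2 of 3 gated stages passed
print(process_reward([1.0, 0.5, 1.0]))       # mean reward over 3 steps
```

The gating makes the rubric order-sensitive: an irrelevant answer scores zero even if later criteria would trivially hold, which is the point of a cascade over a flat checklist.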

## Usage Example 💻

```python
import os

# Set the cache path before importing `datasets`, so the setting is
# picked up when the library reads its configuration.
os.environ['HF_DATASETS_CACHE'] = "/Your/Cache/Path"

from datasets import load_dataset

# Login using e.g. `huggingface-cli login` to access this dataset
ds = load_dataset("titic/MM-OPERA")

# Example of an RIA instance
ria_example = ds['ria'][0]
print(ria_example)

# Example of an ICA instance
ica_example = ds['ica'][0]
print(ica_example)
```
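Each instance carries the string fields declared in the dataset config (`relation`, `domain`, `type`). One simple way to profile a split by domain is a counter over those fields — shown here on placeholder records so it runs without downloading the dataset (the field values below are invented for illustration; real values come from `ds['ria']` / `ds['ica']`):

```python
from collections import Counter

# Placeholder records mimicking the string fields declared in the
# dataset config; substitute a loaded split in practice.
records = [
    {"relation": "analogy", "domain": "culture", "type": "conception"},
    {"relation": "analogy", "domain": "nature", "type": "perception"},
    {"relation": "contrast", "domain": "culture", "type": "conception"},
]

by_domain = Counter(r["domain"] for r in records)
print(by_domain)  # Counter({'culture': 2, 'nature': 1})
```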

Explore MM-OPERA to unlock the next level of multimodal association reasoning! 🌟