nielsr (HF Staff) committed
Commit e64a36a · verified · 1 parent: 672520e

Improve dataset card: add metadata, links, description, and sample usage


Hi! I'm Niels from the Hugging Face community science team. This pull request improves the dataset card for MIRAGE by:
- Updating metadata with the correct task category (`image-text-to-text`) and license (`cc-by-sa-4.0`).
- Adding links to the research paper, project page, and GitHub repository.
- Providing a summary of the benchmark's components (MMST and MMMT).
- Including a sample usage section with the `datasets` library as found in the official repository.
- Adding the BibTeX citation.

Files changed (1)
  1. README.md +80 -45
README.md CHANGED
@@ -1,18 +1,19 @@
  ---
- metadata:
- license: apache-2.0
- language:
- - en
  dataset_info:
  - config_name: MMST_Standard
- description: |
- MIRAGE-MMST Standard Configuration: standard benchmark (train + test).
- citation: |
- @misc{mirage2025,
- title={MIRAGE: A Benchmark for Multimodal Information-Seeking and Reasoning in Agricultural Expert-Guided Conversations},
- author={},
- year={2025},
- }
  features:
  - name: id
  dtype: string
@@ -49,14 +50,12 @@ dataset_info:
  - name: test
  num_examples: 8188
  - config_name: MMST_Contextual
- description: |
- MIRAGE-MMST Contextual Configuration: contextual benchmark (test only).
- citation: |
- @misc{mirage2025,
- title={MIRAGE: A Benchmark for Multimodal Information-Seeking and Reasoning in Agricultural Expert-Guided Conversations},
- author={},
- year={2025},
- }
  features:
  - name: id
  dtype: string
@@ -99,15 +98,13 @@ dataset_info:
  - name: test
  num_examples: 3934
  - config_name: MMMT_Direct
- description: >
- MIRAGE-MMMT Direct Configuration: direct-response dialog benchmark with
- three splits (train, dev, test).
- citation: |
- @misc{mirage2025,
- title={MIRAGE: A Benchmark for Multimodal Information-Seeking and Reasoning in Agricultural Expert-Guided Conversations},
- author={},
- year={2025},
- }
  features:
  - name: id
  dtype: string
@@ -133,15 +130,13 @@ dataset_info:
  - name: test
  num_examples: 861
  - config_name: MMMT_Decomp
- description: >
- MIRAGE-MMMT Decomp Configuration: decomposed-dialog benchmark, with
  known/missing goals.
- citation: |
- @misc{mirage2025,
- title={MIRAGE: A Benchmark for Multimodal Information-Seeking and Reasoning in Agricultural Expert-Guided Conversations},
- author={},
- year={2025},
- }
  features:
  - name: id
  dtype: string
@@ -199,11 +194,6 @@ configs:
  path: MMMT_Decomp/dev/*.arrow
  - split: test
  path: MMMT_Decomp/test/*.arrow
- license: cc
- task_categories:
- - visual-question-answering
- language:
- - en
  modalities:
  - Image
  - Text
@@ -211,6 +201,51 @@ tags:
  - biology
  - agriculture
  - Long-Form Question Answering
- size_categories:
- - 10K<n<100K
- ---

  ---
+ language:
+ - en
+ license: cc-by-sa-4.0
+ size_categories:
+ - 10K<n<100K
+ task_categories:
+ - image-text-to-text
  dataset_info:
  - config_name: MMST_Standard
+ description: 'MIRAGE-MMST Standard Configuration: standard benchmark (train + test).
+
+ '
+ citation: "@misc{mirage2025,\n title={MIRAGE: A Benchmark for Multimodal Information-Seeking\
+ \ and Reasoning in Agricultural Expert-Guided Conversations},\n author={},\n\
+ \ year={2025},\n}\n"
  features:
  - name: id
  dtype: string
 
  - name: test
  num_examples: 8188
  - config_name: MMST_Contextual
+ description: 'MIRAGE-MMST Contextual Configuration: contextual benchmark (test only).
+
+ '
+ citation: "@misc{mirage2025,\n title={MIRAGE: A Benchmark for Multimodal Information-Seeking\
+ \ and Reasoning in Agricultural Expert-Guided Conversations},\n author={},\n\
+ \ year={2025},\n}\n"
  features:
  - name: id
  dtype: string
 
  - name: test
  num_examples: 3934
  - config_name: MMMT_Direct
+ description: 'MIRAGE-MMMT Direct Configuration: direct-response dialog benchmark
+ with three splits (train, dev, test).
+
+ '
+ citation: "@misc{mirage2025,\n title={MIRAGE: A Benchmark for Multimodal Information-Seeking\
+ \ and Reasoning in Agricultural Expert-Guided Conversations},\n author={},\n\
+ \ year={2025},\n}\n"
  features:
  - name: id
  dtype: string
 
  - name: test
  num_examples: 861
  - config_name: MMMT_Decomp
+ description: 'MIRAGE-MMMT Decomp Configuration: decomposed-dialog benchmark, with
  known/missing goals.
+
+ '
+ citation: "@misc{mirage2025,\n title={MIRAGE: A Benchmark for Multimodal Information-Seeking\
+ \ and Reasoning in Agricultural Expert-Guided Conversations},\n author={},\n\
+ \ year={2025},\n}\n"
  features:
  - name: id
  dtype: string
 
  path: MMMT_Decomp/dev/*.arrow
  - split: test
  path: MMMT_Decomp/test/*.arrow
  modalities:
  - Image
  - Text
  - biology
  - agriculture
  - Long-Form Question Answering
+ ---
+
+ # MIRAGE Benchmark
+
+ [**Project Page**](https://mirage-benchmark.github.io/) | [**Paper**](https://huggingface.co/papers/2506.20100) | [**GitHub**](https://github.com/MIRAGE-Benchmark/MIRAGE-Benchmark)
+
+ MIRAGE is a benchmark for multimodal expert-level reasoning and decision-making in consultative interaction settings, specifically designed for the agriculture domain. It captures the complexity of expert consultations by combining natural user queries, expert-authored responses, and image-based context.
+
+ The benchmark spans diverse crop health, pest diagnosis, and crop management scenarios, including more than 7,000 unique biological entities.
+
+ ## Overview
+
+ The benchmark consists of two main components:
+ - **MMST (Multi-Modal Single-Turn)**: Single-turn multimodal reasoning tasks.
+ - **MMMT (Multi-Modal Multi-Turn)**: Multi-turn conversational tasks with visual context.
+
+ ## Sample Usage
+
+ You can load the various configurations of the dataset using the `datasets` library:
+
+ ```python
+ from datasets import load_dataset
+
+ # Load MMST datasets
+ ds_standard = load_dataset("MIRAGE-Benchmark/MIRAGE", "MMST_Standard")
+ ds_contextual = load_dataset("MIRAGE-Benchmark/MIRAGE", "MMST_Contextual")
+
+ # Load MMMT datasets
+ ds_mmmt_direct = load_dataset("MIRAGE-Benchmark/MIRAGE", "MMMT_Direct")
+ ds_mmmt_decomp = load_dataset("MIRAGE-Benchmark/MIRAGE", "MMMT_Decomp")
+ ```
+ ```
235
+
236
+ ## Citation
237
+
238
+ If you use our benchmark in your research, please cite our paper:
239
+
240
+ ```bibtex
241
+ @article{dongre2025mirage,
242
+ title={MIRAGE: A Benchmark for Multimodal Information-Seeking and Reasoning in Agricultural Expert-Guided Conversations},
243
+ author={Dongre, Vardhan and Gui, Chi and Garg, Shubham and Nayyeri, Hooshang and Tur, Gokhan and Hakkani-T{\"{u}}r, Dilek and Adve, Vikram S},
244
+ journal={arXiv preprint arXiv:2506.20100},
245
+ year={2025}
246
+ }
247
+ ```
248
+
249
+ ## License
250
+
251
+ This project is licensed under the [Creative Commons Attribution-ShareAlike 4.0 International License (CC-BY-SA 4.0)](https://creativecommons.org/licenses/by-sa/4.0/).