Cie1 committed · verified · Commit 1d00bd8 · Parent: 1f3ef57

Update MMSearch-Plus dataset with encrypted data files

README.md ADDED
@@ -0,0 +1,161 @@
---
task_categories:
- question-answering
- visual-question-answering
language:
- en
tags:
- Multimodal Search
- Multimodal Long Context
size_categories:
- n<1K
configs:
- config_name: default
  data_files:
  - split: train
    path: "*.arrow"
dataset_info:
  features:
  - name: question
    dtype: string
  - name: answer
    sequence: string
  - name: num_images
    dtype: int64
  - name: arxiv_id
    dtype: string
  - name: video_url
    dtype: string
  - name: category
    dtype: string
  - name: difficulty
    dtype: string
  - name: subtask
    dtype: string
  - name: img_1
    dtype: image
  - name: img_2
    dtype: image
  - name: img_3
    dtype: image
  - name: img_4
    dtype: image
  - name: img_5
    dtype: image
  splits:
  - name: train
    num_examples: 311
---

# MMSearch-Plus✨: Benchmarking Provenance-Aware Search for Multimodal Browsing Agents

Official repository for the paper "[MMSearch-Plus: Benchmarking Provenance-Aware Search for Multimodal Browsing Agents](https://arxiv.org/abs/2508.21475)".

🌟 For more details, please refer to the project page, which includes dataset exploration and visualization tools: [https://mmsearch-plus.github.io/](https://mmsearch-plus.github.io/).

[[🌐 Webpage](https://mmsearch-plus.github.io/)] [[📖 Paper](https://arxiv.org/pdf/2508.21475)] [[🤗 Hugging Face Dataset](https://huggingface.co/datasets/Cie1/MMSearch-Plus)] [[🏆 Leaderboard](https://mmsearch-plus.github.io/#leaderboard)]

## 💥 News

- **[2025.09.26]** 🔥 We updated the [arXiv paper](https://arxiv.org/abs/2508.21475) and released all MMSearch-Plus data samples in the [Hugging Face dataset](https://huggingface.co/datasets/Cie1/MMSearch-Plus).
- **[2025.08.29]** 🚀 We released the [arXiv paper](https://arxiv.org/abs/2508.21475).

## 📌 ToDo

- Agentic rollout framework code
- Evaluation script
- Set-of-Mark annotations

## Usage

**⚠️ Important: This dataset is encrypted to prevent data contamination, but decryption is handled transparently by the dataset loader.**

### Simple Usage (Recommended)

Load the dataset with automatic decryption using your canary string:

```python
import os

from datasets import load_dataset

# Set the canary string (hint: it's the name of this repo)
os.environ['MMSEARCH_CANARY'] = 'your_canary_string'

# Load the dataset with transparent decryption
dataset = load_dataset("Cie1/MMSearch-Plus", trust_remote_code=True)

# Explore the dataset - everything is already decrypted
print(f"Dataset size: {len(dataset['train'])}")
print(f"Features: {list(dataset['train'].features.keys())}")

# Access a sample
sample = dataset['train'][0]
print(f"Question: {sample['question']}")
print(f"Answer: {sample['answer']}")
print(f"Category: {sample['category']}")
print(f"Number of images: {sample['num_images']}")

# Access images (PIL Image objects)
sample['img_1'].show()  # Display the first image
```
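
For reference, the "encryption" here is a lightweight XOR stream cipher: the canary string is hashed with SHA-256 and the digest is repeated out to the length of the base64-decoded payload. Below is a minimal standalone sketch of the round trip, mirroring `derive_key` and `decrypt_text` from `mmsearch_plus.py` (the `encrypt_text` helper and the sample strings are illustrative, not part of the loader):

```python
import base64
import hashlib


def derive_key(password: str, length: int) -> bytes:
    """Repeat the SHA-256 digest of the password out to the requested length."""
    digest = hashlib.sha256(password.encode()).digest()
    return digest * (length // len(digest)) + digest[: length % len(digest)]


def xor_with_key(data: bytes, password: str) -> bytes:
    """XOR data against the derived keystream (the operation is its own inverse)."""
    key = derive_key(password, len(data))
    return bytes(a ^ b for a, b in zip(data, key))


def encrypt_text(plaintext: str, password: str) -> str:
    # Illustrative counterpart to the loader's decrypt_text
    return base64.b64encode(xor_with_key(plaintext.encode(), password)).decode()


def decrypt_text(ciphertext_b64: str, password: str) -> str:
    return xor_with_key(base64.b64decode(ciphertext_b64), password).decode()


canary = "example-canary"  # stand-in for the real canary string
token = encrypt_text("What venue hosted the event?", canary)
print(decrypt_text(token, canary))  # → What venue hosted the event?
```

Because XOR with the same keystream undoes itself, the loader only needs the canary to recover the original text and image bytes.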

## 👀 About MMSearch-Plus

MMSearch-Plus is a challenging benchmark designed to test multimodal browsing agents' ability to perform genuine visual reasoning. Unlike existing benchmarks, where many tasks can be solved with text-only approaches, MMSearch-Plus requires models to extract and use fine-grained visual cues through iterative image-text retrieval.

### Key Features

🔍 **Genuine Multimodal Reasoning**: 311 carefully curated tasks that cannot be solved without visual understanding

🎯 **Fine-grained Visual Analysis**: Questions require extracting spatial cues and temporal traces from images to find out-of-image facts such as events, dates, and venues

🛠️ **Agent Framework**: A model-agnostic web agent with standard browsing tools (text search, image search, zoom-in)

📍 **Set-of-Mark (SoM) Module**: Enables provenance-aware cropping and targeted searches with human-verified bounding-box annotations

### Dataset Structure

Each sample contains:
- Question text and images
- Ground-truth answers and alternative valid responses
- Metadata, including the arXiv ID (if the event is a paper), the video URL (if the event is a video), and the area and subfield
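
Since the image columns are fixed at `img_1` through `img_5` while `num_images` varies per sample, it can be convenient to gather just the populated slots into a list. A small sketch; the `collect_images` helper and the mock sample are illustrative, not part of the loader:

```python
def collect_images(sample: dict) -> list:
    """Collect the populated img_1..img_5 slots, guided by num_images."""
    return [sample[f"img_{i}"] for i in range(1, sample["num_images"] + 1)]


# Mock sample standing in for a decrypted dataset row
mock = {"num_images": 2, "img_1": "ImageA", "img_2": "ImageB",
        "img_3": None, "img_4": None, "img_5": None}
print(collect_images(mock))  # → ['ImageA', 'ImageB']
```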

### Performance Results

Evaluation of closed- and open-source MLLMs shows:
- The best accuracy, **36.0%**, is achieved by o3 with a full rollout, indicating significant room for improvement
- SoM integration provides consistent gains of up to **+3.9 points**
- Models struggle with multi-step visual reasoning and cross-modal information integration

<p align="center">
  <img src="https://raw.githubusercontent.com/mmsearch-plus/mmsearch-plus.github.io/main/static/images/teaser.png" width="75%"> <br>
  An overview of the three paradigms for multimodal browsing tasks that demand fine-grained visual reasoning.
</p>

<p align="center">
  <img src="https://raw.githubusercontent.com/mmsearch-plus/mmsearch-plus.github.io/main/static/images/real-teaser.jpg" width="60%"> <br>
  An overview of an example trajectory for a task in <b>MMSearch-Plus</b>.
</p>

## 🏆 Leaderboard

### Contributing to the Leaderboard

🚨 The [Leaderboard](https://mmsearch-plus.github.io/#leaderboard) is continuously updated, and we welcome contributions of your excellent LMMs!

## 🔖 Citation

If you find **MMSearch-Plus** useful for your research and applications, please cite it using this BibTeX:

```latex
@article{tao2025mmsearch,
  title={MMSearch-Plus: A Simple Yet Challenging Benchmark for Multimodal Browsing Agents},
  author={Tao, Xijia and Teng, Yihua and Su, Xinxing and Fu, Xinyu and Wu, Jihao and Tao, Chaofan and Liu, Ziru and Bai, Haoli and Liu, Rui and Kong, Lingpeng},
  journal={arXiv preprint arXiv:2508.21475},
  year={2025}
}
```
data-00000-of-00003.arrow ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d303f1ca1a9cd9470d401a538f55e1d4d70f9ca07aed8b2ab8d23f635e59831a
size 419738728
data-00001-of-00003.arrow ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f070af1555340c0202102f292ef2f7920fccce0434b65c6fd384de407db9b9da
size 466499832
data-00002-of-00003.arrow ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7d59c12bdf310ad6399d4f94fc9cda5cabcd3a059e528accb8653a253e466adc
size 345386360
dataset_info.json ADDED
@@ -0,0 +1,63 @@
{
  "citation": "",
  "description": "",
  "features": {
    "question": {
      "dtype": "string",
      "_type": "Value"
    },
    "answer": {
      "feature": {
        "dtype": "string",
        "_type": "Value"
      },
      "_type": "Sequence"
    },
    "num_images": {
      "dtype": "int64",
      "_type": "Value"
    },
    "arxiv_id": {
      "dtype": "string",
      "_type": "Value"
    },
    "video_url": {
      "dtype": "string",
      "_type": "Value"
    },
    "category": {
      "dtype": "string",
      "_type": "Value"
    },
    "difficulty": {
      "dtype": "string",
      "_type": "Value"
    },
    "subtask": {
      "dtype": "string",
      "_type": "Value"
    },
    "img_1": {
      "dtype": "string",
      "_type": "Value"
    },
    "img_5": {
      "dtype": "string",
      "_type": "Value"
    },
    "img_4": {
      "dtype": "string",
      "_type": "Value"
    },
    "img_2": {
      "dtype": "string",
      "_type": "Value"
    },
    "img_3": {
      "dtype": "string",
      "_type": "Value"
    }
  },
  "homepage": "",
  "license": ""
}
mmsearch_plus.py ADDED
@@ -0,0 +1,166 @@
"""MMSearch-Plus dataset with transparent decryption."""

import base64
import hashlib
import io
import os
from typing import Optional

import datasets
from PIL import Image

_CITATION = """\
@article{tao2025mmsearch,
  title={MMSearch-Plus: A Simple Yet Challenging Benchmark for Multimodal Browsing Agents},
  author={Tao, Xijia and Teng, Yihua and Su, Xinxing and Fu, Xinyu and Wu, Jihao and Tao, Chaofan and Liu, Ziru and Bai, Haoli and Liu, Rui and Kong, Lingpeng},
  journal={arXiv preprint arXiv:2508.21475},
  year={2025}
}
"""

_DESCRIPTION = """\
MMSearch-Plus is a challenging benchmark designed to test multimodal browsing agents' ability to perform genuine visual reasoning.
Unlike existing benchmarks, where many tasks can be solved with text-only approaches, MMSearch-Plus requires models to extract
and use fine-grained visual cues through iterative image-text retrieval.
"""

_HOMEPAGE = "https://mmsearch-plus.github.io/"

_LICENSE = "CC BY-NC 4.0"

_URLS = {
    "train": [
        "data-00000-of-00003.arrow",
        "data-00001-of-00003.arrow",
        "data-00002-of-00003.arrow",
    ]
}


def derive_key(password: str, length: int) -> bytes:
    """Derive an XOR keystream of the requested length by repeating the SHA-256 digest of the password."""
    key = hashlib.sha256(password.encode()).digest()
    return key * (length // len(key)) + key[: length % len(key)]


def decrypt_image(ciphertext_b64: str, password: str) -> Optional[Image.Image]:
    """Decrypt base64-encoded encrypted image bytes back to a PIL Image."""
    if not ciphertext_b64:
        return None

    try:
        encrypted = base64.b64decode(ciphertext_b64)
        key = derive_key(password, len(encrypted))
        decrypted = bytes(a ^ b for a, b in zip(encrypted, key))

        # Convert the decrypted bytes back to a PIL Image
        return Image.open(io.BytesIO(decrypted))
    except Exception:
        return None


def decrypt_text(ciphertext_b64: str, password: str) -> str:
    """Decrypt base64-encoded ciphertext using an XOR cipher with a derived key."""
    if not ciphertext_b64:
        return ciphertext_b64

    try:
        encrypted = base64.b64decode(ciphertext_b64)
        key = derive_key(password, len(encrypted))
        decrypted = bytes(a ^ b for a, b in zip(encrypted, key))
        return decrypted.decode("utf-8")
    except Exception:
        return ciphertext_b64


class MmsearchPlus(datasets.GeneratorBasedBuilder):
    """MMSearch-Plus dataset with transparent decryption."""

    VERSION = datasets.Version("1.0.0")

    def _info(self):
        features = datasets.Features({
            "question": datasets.Value("string"),
            "answer": datasets.Sequence(datasets.Value("string")),
            "num_images": datasets.Value("int64"),
            "arxiv_id": datasets.Value("string"),
            "video_url": datasets.Value("string"),
            "category": datasets.Value("string"),
            "difficulty": datasets.Value("string"),
            "subtask": datasets.Value("string"),
            "img_1": datasets.Image(),
            "img_2": datasets.Image(),
            "img_3": datasets.Image(),
            "img_4": datasets.Image(),
            "img_5": datasets.Image(),
        })

        return datasets.DatasetInfo(
            description=_DESCRIPTION,
            features=features,
            homepage=_HOMEPAGE,
            license=_LICENSE,
            citation=_CITATION,
        )

    def _split_generators(self, dl_manager):
        # Get the canary from the environment variable,
        # or from the builder instance if it was set there
        canary = os.environ.get("MMSEARCH_CANARY")
        if hasattr(self, "canary"):
            canary = self.canary

        if not canary:
            raise ValueError(
                "Canary string is required for decryption. Either set the MMSEARCH_CANARY "
                "environment variable or pass it via the dataset loading kwargs. "
                "Example: load_dataset('path/to/dataset', trust_remote_code=True) after setting "
                "os.environ['MMSEARCH_CANARY'] = 'your_canary_string'"
            )

        # Download the Arrow shards
        downloaded_files = dl_manager.download(_URLS["train"])

        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,
                gen_kwargs={
                    "filepaths": downloaded_files,
                    "canary": canary,
                },
            ),
        ]

    def _generate_examples(self, filepaths, canary):
        """Generate examples with transparent decryption."""
        example_key = 0  # running unique key across all shards

        for filepath in filepaths:
            # Load the Arrow shard
            arrow_dataset = datasets.Dataset.from_file(filepath)

            for idx in range(len(arrow_dataset)):
                example = arrow_dataset[idx]

                # Decrypt text fields
                if example.get("question"):
                    example["question"] = decrypt_text(example["question"], canary)

                if example.get("answer"):
                    example["answer"] = [
                        decrypt_text(answer, canary) if answer else answer
                        for answer in example["answer"]
                    ]

                # Decrypt image fields
                for field in ("img_1", "img_2", "img_3", "img_4", "img_5"):
                    if example.get(field):
                        example[field] = decrypt_image(example[field], canary)

                yield example_key, example
                example_key += 1
state.json ADDED
@@ -0,0 +1,19 @@
{
  "_data_files": [
    {
      "filename": "data-00000-of-00003.arrow"
    },
    {
      "filename": "data-00001-of-00003.arrow"
    },
    {
      "filename": "data-00002-of-00003.arrow"
    }
  ],
  "_fingerprint": "f322dd0e15bea130",
  "_format_columns": null,
  "_format_kwargs": {},
  "_format_type": null,
  "_output_all_columns": false,
  "_split": null
}