---
license: mit
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
dataset_info:
  features:
  - name: sample_id
    dtype: string
  - name: query
    dtype: string
  - name: query_image
    dtype: image
  - name: ground_truth
    dtype: string
  - name: difficulty
    dtype: string
  - name: category
    dtype: string
  splits:
  - name: train
    num_bytes: 2215159257.0
    num_examples: 305
  download_size: 2215158720
  dataset_size: 2215159257.0
---
## Dataset Description

**HR-MMSearch** is a benchmark designed to evaluate the **Agentic Reasoning** and **Search** capabilities of Multimodal Large Language Models in complex visual tasks.

This dataset was introduced by **SenseTime Research** in the paper *SenseNova-MARS: Empowering Multimodal Agentic Reasoning and Search via Reinforcement Learning*.

### Key Features:
* **High-Resolution Images:** Contains high-resolution image inputs, requiring the model to possess fine-grained visual perception capabilities.
* **Knowledge-Intensive:** Questions often cannot be answered solely by looking at the image; they require the model to combine visual information with external knowledge.
* **Search-Driven:** Designed to assess the model's ability to use tools (such as search engines and image cropping tools) to acquire information and perform reasoning.
* **Multi-Domain Coverage:** Covers various vertical domains including Sports, Entertainment & Culture, Science & Technology, Business & Finance, Games, Academic Research, Geography & Travel, and Others.

## Data Fields

Each sample is a record with the following fields:

* `sample_id` (string): A unique identifier for the sample.
* `query` (string): The user's query text.
* `query_image` (image): The image corresponding to the query (decoded as a PIL image when loaded with the `datasets` library; the JSON example below shows its original file path).
* `ground_truth` (string): The ground truth answer to the question.
* `difficulty` (string): The difficulty level of the question (e.g., `hard`, `easy`).
* `category` (string): The domain category of the question (e.g., `sports`, `technology`).
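The `difficulty` and `category` fields make it easy to slice the benchmark into subsets. The snippet below is a minimal sketch of such filtering over in-memory records that mirror the schema above; the records themselves are illustrative, not actual samples:

```python
# Illustrative records mirroring the HR-MMSearch schema (not real samples).
records = [
    {"sample_id": "s0", "difficulty": "hard", "category": "sports"},
    {"sample_id": "s1", "difficulty": "easy", "category": "games"},
    {"sample_id": "s2", "difficulty": "hard", "category": "games"},
]

def filter_samples(rows, difficulty=None, category=None):
    """Keep rows matching the given difficulty and/or category (None = no filter)."""
    return [
        r for r in rows
        if (difficulty is None or r["difficulty"] == difficulty)
        and (category is None or r["category"] == category)
    ]

hard_games = filter_samples(records, difficulty="hard", category="games")
print([r["sample_id"] for r in hard_games])  # ['s2']
```

With the real dataset loaded, `dataset.filter(lambda r: r["difficulty"] == "hard")` achieves the same thing through the `datasets` API.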

## Data Example

Here is an example of a data entry from `HR-MMSearch`:

```json
{
  "sample_id": "sample_0000",
  "query": "How many seats will this team's home stadium have in 2025?",
  "query_image": "images/sports/train_data_251015_H21.png",
  "ground_truth": "66210",
  "difficulty": "hard",
  "category": "sports"
}
```

## Data Usage

You can load this dataset with the 🤗 `datasets` library:

```python
from datasets import load_dataset
# Load the dataset
dataset = load_dataset("sensenova/HR-MMSearch")
# View the first sample
print(dataset['train'][0])
```
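Since `ground_truth` is a plain string, model outputs can be scored with a simple exact-match metric. The helper below is a hypothetical sketch; the normalization (whitespace trimming, case folding) is an assumption, not the benchmark's official evaluation protocol:

```python
def exact_match(prediction: str, ground_truth: str) -> bool:
    """Case-insensitive exact match after trimming whitespace (assumed normalization)."""
    return prediction.strip().lower() == ground_truth.strip().lower()

def accuracy(predictions, references):
    """Fraction of predictions that exactly match their references."""
    hits = sum(exact_match(p, r) for p, r in zip(predictions, references))
    return hits / len(references)

print(accuracy(["66210", "Old Trafford"], ["66210", "Anfield"]))  # 0.5
```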

## 📝 Citation

```bibtex
@article{SenseNova-MARS,
  title={SenseNova-MARS: Empowering Multimodal Agentic Reasoning and Search via Reinforcement Learning},
  author={Yong Xien Chng and Tao Hu and Wenwen Tong and Xueheng Li and Jiandong Chen and Haojia Yu and Jiefan Lu and Hewei Guo and Hanming Deng and Chengjun Xie and Gao Huang and Dahua Lin and Lewei Lu},
  journal={arXiv preprint arXiv:2512.24330},
  year={2025}
}
```