---
license: mit
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
dataset_info:
  features:
  - name: sample_id
    dtype: string
  - name: query
    dtype: string
  - name: query_image
    dtype: image
  - name: ground_truth
    dtype: string
  - name: difficulty
    dtype: string
  - name: category
    dtype: string
  splits:
  - name: train
    num_bytes: 2215159257.0
    num_examples: 305
  download_size: 2215158720
  dataset_size: 2215159257.0
---
## Dataset Description

**HR-MMSearch** is a benchmark designed to evaluate the **Agentic Reasoning** and **Search** capabilities of Multimodal Large Language Models (MLLMs) on complex visual tasks.

This dataset was introduced by **SenseTime Research** in the paper *SenseNova-MARS: Empowering Multimodal Agentic Reasoning and Search via Reinforcement Learning*.

### Key Features

* **High-Resolution Images:** High-resolution image inputs require fine-grained visual perception from the model.
* **Knowledge-Intensive:** Questions typically cannot be answered from the image alone; the model must combine visual information with external knowledge.
* **Search-Driven:** Assesses the model's ability to use tools, such as search engines and image cropping, to acquire information and reason over it (see the illustrative sketch after this list).
* **Multi-Domain Coverage:** Spans vertical domains including Sports, Entertainment & Culture, Science & Technology, Business & Finance, Games, Academic Research, Geography & Travel, and Others.
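
The cropping tool mentioned above is not shipped with this dataset. As a loose illustration only, here is a minimal sketch of what such a tool call might look like; the function name, signature, and coordinates are hypothetical assumptions, not the SenseNova-MARS tool API:

```python
from PIL import Image

def crop_tool(image: Image.Image, box: tuple[int, int, int, int]) -> Image.Image:
    """Hypothetical cropping tool: lets an agent zoom into a region of a
    high-resolution query image to read fine-grained detail.
    `box` is (left, upper, right, lower) in pixel coordinates."""
    return image.crop(box)

# Illustrative call: inspect a region of interest before answering
# region = crop_tool(query_image, (1024, 768, 1536, 1280))
```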
## Data Fields

Each sample contains the following fields (a short access example follows the list):

* `sample_id` (string): A unique identifier for the sample.
* `query` (string): The user's query text.
* `query_image` (image): The high-resolution image associated with the query; the `datasets` library decodes it to a `PIL.Image.Image`.
* `ground_truth` (string): The ground-truth answer to the question.
* `difficulty` (string): The difficulty level of the question (e.g., `hard`, `easy`).
* `category` (string): The domain category of the question (e.g., `sports`, `technology`).
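
A minimal sketch of inspecting one sample, assuming only the fields documented above (the output filename is a placeholder):

```python
from datasets import load_dataset

# Load the train split and take the first sample
dataset = load_dataset("sensenova/HR-MMSearch", split="train")
sample = dataset[0]

print(sample["sample_id"], sample["difficulty"], sample["category"])
print(sample["query"], "->", sample["ground_truth"])

# query_image is decoded to a PIL.Image.Image; save it to view it
sample["query_image"].save("query_image.png")  # placeholder filename
```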
## Data Example

Here is an example of a data entry from `HR-MMSearch`:

```json
{
  "sample_id": "sample_0000",
  "query": "How many seats will this team's home stadium have in 2025?",
  "query_image": "images/sports/train_data_251015_H21.png",
  "ground_truth": "66210",
  "difficulty": "hard",
  "category": "sports"
}
```

Note that `query_image` appears here as its original file path; when the dataset is loaded with the `datasets` library, the field is decoded into an image object.
## Data Usage

You can load this dataset with the 🤗 `datasets` library:

```python
from datasets import load_dataset

# Load the dataset
dataset = load_dataset("sensenova/HR-MMSearch")

# View the first sample
print(dataset["train"][0])
```
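
For evaluation slicing, standard `datasets` operations work on the documented metadata fields; for example (`hard` and `sports` are values listed in the field descriptions above):

```python
from collections import Counter

train = dataset["train"]

# Distribution over the documented metadata fields
print(Counter(train["difficulty"]))
print(Counter(train["category"]))

# Keep only the hard sports questions
hard_sports = train.filter(
    lambda x: x["difficulty"] == "hard" and x["category"] == "sports"
)
print(len(hard_sports))
```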
## 📝 Citation

```bibtex
@article{SenseNova-MARS,
  title={SenseNova-MARS: Empowering Multimodal Agentic Reasoning and Search via Reinforcement Learning},
  author={Yong Xien Chng and Tao Hu and Wenwen Tong and Xueheng Li and Jiandong Chen and Haojia Yu and Jiefan Lu and Hewei Guo and Hanming Deng and Chengjun Xie and Gao Huang and Dahua Lin and Lewei Lu},
  journal={arXiv preprint arXiv:2512.24330},
  year={2025}
}
```