Add paper link, code link and task category

#2 opened by nielsr (HF Staff)

Files changed (1): README.md (+13 -4)
@@ -1,5 +1,9 @@
 ---
 license: mit
+task_categories:
+- image-text-to-text
+language:
+- en
 configs:
 - config_name: default
   data_files:
@@ -26,17 +30,22 @@ dataset_info:
   download_size: 2215158720
   dataset_size: 2215159257.0
 ---
+
+# HR-MMSearch
+
+[**Paper**](https://huggingface.co/papers/2512.24330) | [**Code**](https://github.com/OpenSenseNova/SenseNova-MARS)
+
 ## Dataset Description
 
 **HR-MMSearch** is a benchmark designed to evaluate the **Agentic Reasoning** and **Search** capabilities of Multimodal Large Language Models in complex visual tasks.
 
-This dataset was introduced by **SenseTime Research** in the paper *SenseNova-MARS: Empowering Multimodal Agentic Reasoning and Search via Reinforcement Learning*.
+This dataset was introduced by **SenseTime Research** in the paper [SenseNova-MARS: Empowering Multimodal Agentic Reasoning and Search via Reinforcement Learning](https://huggingface.co/papers/2512.24330).
 
 ### Key Features:
 * **High-Resolution Images:** Contains high-resolution image inputs, requiring the model to possess fine-grained visual perception capabilities.
 * **Knowledge-Intensive:** Questions often cannot be answered solely by looking at the image; they require the model to combine visual information with external knowledge.
 * **Search-Driven:** Designed to assess the model's ability to use tools (such as search engines and image cropping tools) to acquire information and perform reasoning.
-* **Multi-Domain Coverage:** Covers various vertical domains including Sports, Entertainment \& Culture, Science \& Technology, Business \& Finance, Games, Academic Research, Geography \& Travel, and Others.
+* **Multi-Domain Coverage:** Covers various vertical domains including Sports, Entertainment & Culture, Science & Technology, Business & Finance, Games, Academic Research, Geography & Travel, and Others.
 
 ## Data Fields
 
@@ -44,7 +53,7 @@ The dataset typically follows a JSON structure. Below are the descriptions of th
 
 * `sample_id` (string): A unique identifier for the sample.
 * `query` (string): The user's query text.
-* `query_image` (string): The file path to the image corresponding to the query.
+* `query_image` (image): The image corresponding to the query.
 * `ground_truth` (string): The ground truth answer to the question.
 * `difficulty` (string): The difficulty level of the question (e.g., `hard`, `easy`).
 * `category` (string): The domain category of the question (e.g., `sports`, `technology`).
@@ -85,4 +94,4 @@ print(dataset['train'][0])
   journal={arXiv preprint arXiv:2512.24330},
   year={2025}
 }
-```
+```
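For context, the `Data Fields` section touched by this diff can be sketched as a plain Python dict. This is a hypothetical sample, not real data: the field names come from the README, but every value below is invented for illustration, and on the Hub `query_image` is an image feature rather than the placeholder string used here.

```python
# Hypothetical HR-MMSearch sample illustrating the documented schema.
# All values are made up; real samples come from the Hub dataset.
sample = {
    "sample_id": "sample_0001",                       # unique identifier (string)
    "query": "What stadium is shown in this photo?",  # user's query text (string)
    "query_image": "<image feature>",                 # image on the Hub; string stand-in here
    "ground_truth": "Example Answer",                 # reference answer (string)
    "difficulty": "hard",                             # e.g. "hard" or "easy"
    "category": "sports",                             # domain category, e.g. "sports"
}

# A downstream evaluation loop would typically key on exactly these fields:
expected_fields = {"sample_id", "query", "query_image",
                   "ground_truth", "difficulty", "category"}
assert set(sample) == expected_fields
```

A consumer script can validate each loaded record against `expected_fields` before scoring, so schema drift (e.g. the `query_image` type change in this PR) fails fast instead of surfacing mid-evaluation.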