Improve model card with pipeline tag, library name, and Github link

#1
by nielsr HF Staff - opened
Files changed (1)
  1. README.md +24 -8
README.md CHANGED
@@ -1,14 +1,26 @@
  ---
- license: apache-2.0
- language:
- - en
- - zh
  base_model:
  - Qwen/Qwen2.5-VL-3B-Instruct
  datasets:
  - justairr/VQA-Verify
  ---

  This is the official implementation from the paper *SATORI-R1: Incentivizing Multimodal Reasoning with Spatial Grounding and Verifiable Rewards*. [Arxiv Here.](https://arxiv.org/abs/2505.19094)

  SATORI is a vision-language model fine-tuned from Qwen2.5-VL to perform structured visual reasoning for Visual Question Answering (VQA). It generates:
@@ -38,9 +50,12 @@ messages = [
  {
  "role": "system",
  "content": (
- "Given an image and a question, follow these steps:\n"
- "1. Generate a brief image caption describing the overall scene inside <caption>...</caption>.\n"
- "2. Determine the most relevant image regions, output their coordinates inside <bbox>...</bbox>.\n"
  "3. Provide the final answer inside <answer>...</answer>."
  ),
  },
@@ -54,7 +69,8 @@ messages = [
  {
  "type": "text",
  "text": (
- "What's the girl playing with?\n"
  "First, provide an image caption inside <caption>...</caption>, "
  "then bounding boxes inside <bbox>...</bbox>, and finally <answer>...</answer>."
  ),
  ---
  base_model:
  - Qwen/Qwen2.5-VL-3B-Instruct
  datasets:
  - justairr/VQA-Verify
+ language:
+ - en
+ - zh
+ license: apache-2.0
+ pipeline_tag: image-text-to-text
+ library_name: transformers
  ---

+ # SATORI-R1: Incentivizing Multimodal Reasoning with Spatial Grounding and Verifiable Rewards
+
+ The model was presented in the paper [SATORI-R1: Incentivizing Multimodal Reasoning with Spatial Grounding and Verifiable Rewards](https://huggingface.co/papers/2505.19094).
+
+ # Paper abstract
+
+ DeepSeek-R1 has demonstrated powerful reasoning capabilities in the text domain through stable reinforcement learning (RL). Recently, in the multimodal domain, works have begun to directly apply RL to generate R1-like free-form reasoning for Visual Question Answering (VQA) tasks. However, multimodal tasks share an intrinsically different nature from textual tasks, which heavily rely on the understanding of the input image to solve the problem. Therefore, such free-form reasoning faces two critical limitations in the VQA task: (1) Extended reasoning chains diffuse visual focus away from task-critical regions, degrading answer accuracy. (2) Unverifiable intermediate steps amplify policy-gradient variance and computational overhead. To address these issues, in this paper, we introduce SATORI ($\textbf{S}patially$ $\textbf{A}nchored$ $\textbf{T}ask$ $\textbf{O}ptimization$ with $\textbf{R}e\textbf{I}nforcement$ Learning), which decomposes VQA into three verifiable stages, including global image captioning, region localization, and answer prediction, each supplying explicit reward signals. Furthermore, we also introduce VQA-Verify, a 12k dataset annotated with answer-aligned captions and bounding-boxes to facilitate training. Experiments demonstrate consistent performance improvements across seven VQA benchmarks, achieving up to $15.7\%$ improvement in accuracy compared to the R1-like baseline. Our analysis of the attention map confirms enhanced focus on critical regions, which brings improvements in accuracy. Our code is available at [this Github URL](https://github.com/AILab-TJU/SATORI-R1).
+
+ # Content
+
  This is the official implementation from the paper *SATORI-R1: Incentivizing Multimodal Reasoning with Spatial Grounding and Verifiable Rewards*. [Arxiv Here.](https://arxiv.org/abs/2505.19094)

  SATORI is a vision-language model fine-tuned from Qwen2.5-VL to perform structured visual reasoning for Visual Question Answering (VQA). It generates:
 
  {
  "role": "system",
  "content": (
+ "Given an image and a question, follow these steps:\n"
+ "1. Generate a brief image caption describing the overall scene inside <caption>...</caption>.\n"
+ "2. Determine the most relevant image regions, output their coordinates inside <bbox>...</bbox>.\n"
  "3. Provide the final answer inside <answer>...</answer>."
  ),
  },
 
  {
  "type": "text",
  "text": (
+ "What's the girl playing with?\n"
  "First, provide an image caption inside <caption>...</caption>, "
  "then bounding boxes inside <bbox>...</bbox>, and finally <answer>...</answer>."
  ),
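
For context, the prompt fragments in the hunks above assemble into a standard `transformers` chat-template `messages` list. Below is a minimal sketch of that assembled structure; the image path is an assumed placeholder, and the commented-out processor calls are illustrative only (the README's full surrounding code is not shown in this diff).

```python
# Sketch of the full `messages` list assembled from the README fragments in
# this diff. "example.jpg" is an assumed placeholder, not from the model card.
messages = [
    {
        "role": "system",
        "content": (
            "Given an image and a question, follow these steps:\n"
            "1. Generate a brief image caption describing the overall scene inside <caption>...</caption>.\n"
            "2. Determine the most relevant image regions, output their coordinates inside <bbox>...</bbox>.\n"
            "3. Provide the final answer inside <answer>...</answer>."
        ),
    },
    {
        "role": "user",
        "content": [
            {"type": "image", "image": "example.jpg"},  # assumed placeholder path
            {
                "type": "text",
                "text": (
                    "What's the girl playing with?\n"
                    "First, provide an image caption inside <caption>...</caption>, "
                    "then bounding boxes inside <bbox>...</bbox>, and finally <answer>...</answer>."
                ),
            },
        ],
    },
]

# With the card's new `library_name: transformers`, this list would typically
# be rendered through the processor's chat template, e.g. (not executed here):
#   processor = AutoProcessor.from_pretrained("Qwen/Qwen2.5-VL-3B-Instruct")
#   prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
```

This matches the three-stage output contract the system prompt enforces: caption, bounding boxes, then answer, each in its own verifiable tag.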