---
license: apache-2.0
pipeline_tag: image-text-to-text
base_model:
- Qwen/Qwen2.5-VL-72B-Instruct
language:
- multilingual
---

# SafeWork-R1

[📂 GitHub](https://github.com/AI45Lab/SafeWork-R1) · [📜 Technical Report](https://arxiv.org/abs/2507.18576) · [💬 Online Chat](https://safework-r1.ai45.shlab.org.cn/)

<div align="center">
<img alt="image" src="https://cdn-uploads.huggingface.co/production/uploads/666fe1a5b07525f0bde69c27/9VqjAkK1Lshl3TVpMFV9-.png">
</div>

## Overview

We introduce SafeWork-R1, a cutting-edge multimodal reasoning model demonstrating the coevolution of safety and general intelligence under the guiding principle of the AI-45° Law.

SafeWork-R1 is built upon the SafeLadder framework, which integrates large-scale, progressive, safety-oriented reinforcement learning post-training supported by multi-principled verifiers. Unlike conventional RLHF, which simply learns human preferences, SafeLadder enables SafeWork-R1 to develop intrinsic safety reasoning and self-reflection abilities, leading to emergent safety “aha” moments.

<div align="center">

![ai45](https://cdn-uploads.huggingface.co/production/uploads/666fe1a5b07525f0bde69c27/9UP0ze3exhEHJXanUTyXk.png)

</div>
## Model Zoo

<table>
<tr>
<th>Model Variant</th>
<th>Parameters</th>
<th>Base Model</th>
<th>Link</th>
</tr>
<tr>
<td>SafeWork-R1</td>
<td>72B</td>
<td>Qwen2.5-VL-72B</td>
<td><a href="https://huggingface.co/AI45Research/SafeWork-R1">🤗 link</a></td>
</tr>
<tr>
<td>SafeWork-R1-InternVL3-78B</td>
<td>78B</td>
<td>InternVL3-78B</td>
<td><a href="https://huggingface.co/AI45Research/SafeWork-R1-InternVL3-78B">🤗 link</a></td>
</tr>
<tr>
<td>SafeWork-R1-DeepSeek-70B</td>
<td>70B</td>
<td>DeepSeek-R1-Distill-Llama-70B</td>
<td><a href="https://huggingface.co/AI45Research/SafeWork-R1-DeepSeek-70B">🤗 link</a></td>
</tr>
<tr>
<td>SafeWork-R1-Qwen2.5VL-7B</td>
<td>7B</td>
<td>Qwen2.5-VL-7B</td>
<td><a href="https://huggingface.co/AI45Research/SafeWork-R1-Qwen2.5VL-7B">🤗 link</a></td>
</tr>
</table>

## Performance

Superscript arrows denote the change relative to the base model, Qwen2.5-VL-72B.

### Safety Benchmarks

| Model | MM-SafetyBench | MSSBench | XSTest-Safe | SIUO | Avg. |
|----------------|----------------|-----------|--------------|-------|-------|
| Gemini 2.5 Pro | 79.3 | 70.5 | **100.0** | 76.7 | 81.6 |
| Claude Opus 4 | 82.1 | 59.6 | 96.8 | 62.8 | 75.3 |
| GPT-4.1 | 78.2 | 69.1 | 96.4 | **92.9** | 84.1 |
| GPT-4o | 70.2 | 58.8 | 94.0 | 51.8 | 68.7 |
| Qwen2.5-VL-72B | 70.4 | 53.8 | 91.2 | 38.2 | 63.4 |
| **SafeWork-R1**| **92.0**<sup>↑21.6</sup> | **74.8**<sup>↑21.0</sup> | **99.2**<sup>↑8.0</sup> | **90.5**<sup>↑52.3</sup> | **89.2**<sup>↑25.8</sup> |

### Value Benchmarks

| Model | FLAMES | M³oralBench (Judge) | M³oralBench (Classification) | M³oralBench (Response) | Avg. |
|----------------|---------|---------------------|------------------------------|------------------------|-------|
| Gemini 2.5 Pro | 16.8 | 70.0 | 66.2 | **86.8** | 44.7 |
| Claude Opus 4 | 38.1 | 70.7 | **74.7** | 72.5 | 52.2 |
| GPT-4.1 | 33.3 | **74.4** | 62.7 | 61.7 | 53.0 |
| GPT-4o | 36.6 | 72.4 | 65.9 | 79.7 | 55.5 |
| Qwen2.5-VL-72B | 39.1 | 58.4 | 48.1 | 75.7 | 49.9 |
| **SafeWork-R1**| **65.3**<sup>↑26.2</sup> | **68.1**<sup>↑9.7</sup> | **54.6**<sup>↑6.5</sup> | 70.9<sup>↓4.8</sup> | **64.9**<sup>↑15.0</sup> |

### General Benchmarks

| Model | MMMU | MathVista | Olympiad | GPQA Diamond | GAOKAO-MM | Avg. |
|----------------|------|------------|-----------|---------------|------------|-------|
| Gemini 2.5 Pro | **82.0** | **83.0** | **81.8** | **86.9** | **87.2** | **84.2** |
| Claude Opus 4 | 73.0 | 73.0 | 68.5 | 74.7 | 73.7 | 72.6 |
| GPT-4.1 | 72.4 | 72.0 | 49.0 | 69.2 | 60.2 | 64.6 |
| GPT-4o | 70.6 | 61.6 | 33.7 | 46.9 | 33.8 | 49.3 |
| Qwen2.5-VL-72B | 67.2 | 74.8 | 40.4 | 50.5 | 73.1 | 61.2 |
| **SafeWork-R1**| **70.9**<sup>↑3.7</sup> | **76.1**<sup>↑1.3</sup> | **59.9**<sup>↑19.5</sup> | **59.6**<sup>↑9.1</sup> | **78.2**<sup>↑5.1</sup> | **68.9**<sup>↑7.7</sup> |

## Quick Start

```python
from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration
from qwen_vl_utils import process_vision_info

model_name = "AI45Research/SafeWork-R1"
processor = AutoProcessor.from_pretrained(model_name, trust_remote_code=True)
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    model_name, device_map="auto", torch_dtype="auto", trust_remote_code=True
)

messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "image",
                "image": "file:///path/to/image",
            },
            {"type": "text", "text": "Prompt containing harmful content."},
        ],
    }
]

# Preparation for inference: build the chat prompt and collect vision inputs
text = processor.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text],
    images=image_inputs,
    videos=video_inputs,
    padding=True,
    return_tensors="pt",
)
inputs = inputs.to(model.device)

# Inference: generate, then decode only the newly generated tokens
generated_ids = model.generate(**inputs, max_new_tokens=8192)
generated_ids_trimmed = [
    out_ids[len(in_ids):] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
    generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)
```
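The trimming step above slices each prompt's tokens off the front of the corresponding generated sequence before decoding, so only the model's new output is printed. A minimal sketch of that slicing with hypothetical token-id lists (no model required):

```python
# Hypothetical prompt token ids and the full generated sequences (prompt + new tokens)
input_ids = [[101, 7, 8], [101, 9]]
generated_ids = [[101, 7, 8, 42, 43], [101, 9, 77]]

# Keep only the tokens produced after each prompt, mirroring the trimming above
trimmed = [out[len(inp):] for inp, out in zip(input_ids, generated_ids)]
print(trimmed)  # → [[42, 43], [77]]
```

This works because `generate` returns the prompt tokens followed by the new tokens, so dropping the first `len(input_ids)` entries of each row leaves exactly the generated continuation.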

## License

This project is released under the Apache 2.0 license.

## Citation

If you find this work useful, please consider citing it:

```bibtex
@misc{lab2025safework,
  title={SafeWork-R1: Coevolving Safety and Intelligence under the AI-45 Law},
  author={Lab, Shanghai AI and Bao, Yicheng and Chen, Guanxu and Chen, Mingkang and Chen, Yunhao and Chen, Chiyu and Chen, Lingjie and Chen, Sirui and Chen, Xinquan and Cheng, Jie and others},
  journal={arXiv preprint arXiv:2507.18576},
  year={2025}
}
```