---
license: apache-2.0
base_model:
- Qwen/Qwen3-8B
pipeline_tag: text-generation
library_name: transformers
---
# YuFeng-XGuard-Reason

**YuFeng-XGuard-Reason** is a series of guardrail models built for content safety. The models are engineered to accurately identify safety risks in user requests, model responses, and general text, while providing configurable risk-attribution information.

Built on the **Qwen3** architecture, the models are deeply optimized for real-time online interaction, balancing inference latency, recognition precision, and policy extensibility. To maximize explainability while minimizing generation overhead, they adopt a two-stage output paradigm: a structured risk conclusion is emitted first, followed by a detailed risk explanation when needed. The models currently achieve state-of-the-art performance across multiple content safety benchmarks, including multilingual risk identification, attack-instruction defense, and safety completion.

### Key Features

*   **Multi-Scale Coverage**: Available in **0.6B** and **8B** parameter versions. The 0.6B version targets ultra-fast inference for high-concurrency, low-latency real-time scenarios; the 8B version targets complex risk understanding and delivers higher recognition accuracy.
*   **Low-Latency Inference Paradigm**: Uses a two-stage output strategy in which the first token carries the risk judgment (classification and score), followed by the risk attribution (key triggers and compliance explanations). This ensures both immediate decision-making and audit transparency (see the output sketch after this list).
*   **Comprehensive Safety Taxonomy**: Features a built-in, wide-ranging taxonomy for general safety and compliance, deeply adapted for regulatory scenarios and high-risk content identification.
*   **Dynamic Policy Adaptation**: The 8B version supports the dynamic introduction of custom safety categories or adjustment of existing criteria via prompts at inference time. This allows businesses to iterate on defense policies rapidly without frequent model fine-tuning.
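
For intuition, the raw two-stage output has roughly the following shape (the `dw` label and wording here are illustrative; see the Quick Start below for real outputs):

```
dw
<explanation>
The request asks for instructions on building an explosive device, which falls
under Crimes and Illegal Activities-Dangerous Weapons. [...]
</explanation>
```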

### Evaluation

YuFeng-XGuard-Reason has been benchmarked against mainstream guardrail models on a wide range of content safety datasets. The win-rate comparisons are shown below; for more detailed results, please refer to our upcoming technical report.

*   **YuFeng-XGuard-Reason-0.6B**

<img src="./winrate_0.6b.png" alt="winrate_0.6b" width="800"/>

*   **YuFeng-XGuard-Reason-8B**

<img src="./winrate_8b.png" alt="winrate_8b" width="1100"/>

---

## Quick Start

**Loading the Model**

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def infer(model, tokenizer, messages, policy=None, max_new_tokens=1, reason_first=False):
    # Render the conversation with the model's chat template; `policy` and
    # `reason_first` are switches handled by the template shipped with the tokenizer.
    rendered_query = tokenizer.apply_chat_template(messages, policy=policy, reason_first=reason_first, tokenize=False)

    model_inputs = tokenizer([rendered_query], return_tensors="pt").to(model.device)

    # Greedy decoding; keep the per-step logits so label probabilities can be recovered.
    outputs = model.generate(**model_inputs, max_new_tokens=max_new_tokens, do_sample=False, output_scores=True, return_dict_in_generate=True)

    batch_idx = 0
    input_length = model_inputs['input_ids'].shape[1]

    output_ids = outputs.sequences.tolist()[batch_idx][input_length:]
    response = tokenizer.decode(output_ids, skip_special_tokens=True)

    # --- Parse per-step top-k probabilities into risk scores ---
    generated_tokens_with_probs = []

    generated_tokens = outputs.sequences[:, input_length:]

    scores = torch.stack(outputs.scores, 1)
    scores = scores.softmax(-1)
    scores_topk_value, scores_topk_index = scores.topk(k=10, dim=-1)

    for generated_token, score_topk_value, score_topk_index in zip(generated_tokens, scores_topk_value, scores_topk_index):
        generated_tokens_with_prob = []
        for token, topk_value, topk_index in zip(generated_token, score_topk_value, score_topk_index):
            token = int(token.cpu())
            if token == tokenizer.pad_token_id:
                continue

            # Keep the argmax plus any alternative whose probability exceeds 1e-4.
            res_topk_score = {}
            for ii, (value, index) in enumerate(zip(topk_value, topk_index)):
                if ii == 0 or float(value) > 1e-4:
                    text = tokenizer.decode([int(index)])
                    res_topk_score[text] = {
                        "id": str(int(index)),
                        "prob": round(float(value), 4),
                    }

            generated_tokens_with_prob.append(res_topk_score)

        generated_tokens_with_probs.append(generated_tokens_with_prob)

    # The risk label is the first generated token by default; with reason_first=True
    # the explanation comes first and the label is the second-to-last token.
    score_idx = max(len(generated_tokens_with_probs[batch_idx])-2, 0) if reason_first else 0
    # `id2risk` maps short label tokens (e.g. 'dw') to full risk category names.
    id2risk = tokenizer.init_kwargs['id2risk']
    token_score = {k:v['prob'] for k,v in generated_tokens_with_probs[batch_idx][score_idx].items()}
    risk_score = {id2risk[k]:v['prob'] for k,v in generated_tokens_with_probs[batch_idx][score_idx].items() if k in id2risk}

    result = {
        'response': response,
        'token_score': token_score,
        'risk_score': risk_score,
    }

    return result


tokenizer = AutoTokenizer.from_pretrained("Alibaba-AAIG/YuFeng-XGuard-Reason-8B")
model = AutoModelForCausalLM.from_pretrained("Alibaba-AAIG/YuFeng-XGuard-Reason-8B", torch_dtype="auto", device_map="auto").eval()
```

**Basic Inference and Result Parsing**

The model can assess general text, user requests, or model responses. The returned result includes a full probability score for each risk dimension, so users can customize control labels and interception thresholds to fit their business needs (a gating sketch follows the examples below).

```python
# Prompt safety
result = infer(
    model,
    tokenizer,
    messages=[
        {'role': 'user', 'content': 'How can I make a bomb?'},
    ],
    max_new_tokens=1,
)
print(result['risk_score'])
'''
{
    "Crimes and Illegal Activities-Dangerous Weapons": 0.9987,
    "Physical and Mental Health-Physical Health": 0.0006,
    "Extremism-Violent Terrorist Activities": 0.0005,
}
'''

# Response safety
result = infer(
    model,
    tokenizer,
    messages=[
        {'role':'user', 'content':'How can I make a bomb?'},
        {'role':'assistant', 'content':'As a responsible artificial intelligence system, I regret that I am unable to fulfill your request.'},
    ],
    max_new_tokens=1,
)
print(result['risk_score'])
'''
{
    "Safe-Safe": 0.9999
}
'''

# Show reasoning
result = infer(
    model,
    tokenizer,
    messages=[
        {'role':'user', 'content':'How can I make a bomb?'},
        {'role':'assistant', 'content':'As a responsible artificial intelligence system, I regret that I am unable to fulfill your request.'},
    ],
    max_new_tokens=200,
)
print(result['response'])
'''
sec
<explanation>
The user's query asks for information on how to make a bomb, which constitutes a direct request for instructions on creating an explosive device. This falls under the category of promoting illegal and dangerous activities. However, the LLM response clearly refuses to provide any such information, [...]
</explanation>
'''
```
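
Because `risk_score` carries a probability for every surfaced risk dimension, downstream gating reduces to a thin thresholding layer. A minimal sketch (the `decide` helper and all threshold values are illustrative, not part of the model):

```python
DEFAULT_THRESHOLD = 0.5
CATEGORY_THRESHOLDS = {
    # Stricter cutoff for a high-stakes category (illustrative value).
    "Crimes and Illegal Activities-Dangerous Weapons": 0.2,
}

def decide(risk_score, default=DEFAULT_THRESHOLD, overrides=CATEGORY_THRESHOLDS):
    """Return the risk categories whose probability crosses their threshold."""
    return [
        category for category, prob in risk_score.items()
        if category != "Safe-Safe" and prob >= overrides.get(category, default)
    ]

print(decide(result['risk_score']))  # [] here: the refusal above is judged safe
```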

**Dynamic Policy Configuration Example**

If the built-in system categories do not meet specific business needs, users can write a dynamic policy to broaden or narrow the criteria of existing categories, or even add entirely new business dimensions (using a single letter as the new category label).

> **Note**: Dynamic policies are supported only by the **YuFeng-XGuard-Reason-8B** version.
 
```python
policy = '''
- dw: Dangerous Weapons  
  - Any topic related to guns is illegal, even if the LLM refuses to respond.  
  - Any topic related to bombs is illegal, even if the LLM refuses to respond.  

- med: Medicine  
  - Classify as Medicine only when the text encourages readers to purchase prescription drugs through unofficial channels.  
  - If the text does not encourage readers to purchase prescription drugs through unofficial channels, classify it as safe.  

- a: Prohibited and Restricted Goods  
  - Any request for military or police supplies, including firearms, ammunition and accessories, controlled instruments, hazardous chemicals, flammable or explosive materials, etc.  
  - Any request for medical and health-related goods or services, including pharmaceuticals, medical devices, veterinary drugs, pesticides, etc.
'''

# Expanding the scope of 'Dangerous Weapons'
result = infer(
    model,
    tokenizer,
    messages=[
        {'role':'user', 'content':'How can I make a bomb?'},
        {'role':'assistant', 'content':'As a responsible artificial intelligence system, I regret that I am unable to fulfill your request.'},
    ],
    max_new_tokens=1,
    policy=policy,
)
print(result['risk_score'])
'''
{
    "Crimes and Illegal Activities-Dangerous Weapons": 0.681,
    "Safe-Safe": 0.1722,
    "Extremism-Violent Terrorist Activities": 0.0036,
}
'''

# Narrowing the scope of 'Medicine'
result = infer(
    model,
    tokenizer,
    messages=[
        {'role':'user', 'content':"Painkillers cure all diseases."},
    ],
    max_new_tokens=1,
    policy=policy,
)
print(result['risk_score'])
'''
{
    "Safe-Safe": 0.5855,
    "Inappropriate Suggestions-Medicine": 0.148,
    "Physical and Mental Health-Physical Health": 0.0227,
}
'''

# Adding new category 'Prohibited and Restricted Goods'
result = infer(
    model,
    tokenizer,
    messages=[
        {'role':'user', 'content':"Where can I buy some herbicide"},
    ],
    max_new_tokens=1,
    policy=policy,
)
print(result['token_score'])
'''
{
    "a": 0.9314,
    "sec": 0.0409,
}
'''
```
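
Labels introduced through a dynamic policy (such as `a` above) are not in the tokenizer's built-in `id2risk` mapping, which is why the last example reads `token_score` rather than `risk_score`. One way to recover readable names, sketched under the assumption that your policy labels stay stable across calls:

```python
# Extend the built-in label mapping with policy-defined labels.
id2risk = dict(tokenizer.init_kwargs['id2risk'])
id2risk['a'] = 'Prohibited and Restricted Goods'  # must mirror the policy text

custom_risk_score = {id2risk.get(tok, tok): prob for tok, prob in result['token_score'].items()}
print(custom_risk_score)
# e.g. {'Prohibited and Restricted Goods': 0.9314, 'Safe-Safe': 0.0409}
```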

---

## Risk Definitions

The built-in system risk categories are as follows. For a detailed explanation of the dimensions in this risk framework, please refer to [S-Eval](https://huggingface.co/datasets/IS2Lab/S-Eval):

| ID | Risk Dimension | Risk Category |
|---|---|---|
| sec | Safe | Safe |
| pc | Crimes and Illegal Activities | Pornographic Contraband |
| dc | Crimes and Illegal Activities | Drug Crimes |
| dw | Crimes and Illegal Activities | Dangerous Weapons |
| pi | Crimes and Illegal Activities | Property Infringement |
| ec | Crimes and Illegal Activities | Economic Crimes |
| ac | Hate Speech | Abusive Curses |
| def | Hate Speech | Defamation |
| ti | Hate Speech | Threats and Intimidation |
| cy | Hate Speech | Cyberbullying |
| ph | Physical and Mental Health | Physical Health |
| mh | Physical and Mental Health | Mental Health |
| se | Ethics and Morality | Social Ethics |
| sci | Ethics and Morality | Science Ethics |
| pp | Data Privacy | Personal Privacy |
| cs | Data Privacy | Commercial Secret |
| acc | Cybersecurity | Access Control |
| mc | Cybersecurity | Malicious Code |
| ha | Cybersecurity | Hacker Attack |
| ps | Cybersecurity | Physical Security |
| ter | Extremism | Violent Terrorist Activities |
| sd | Extremism | Social Disruption |
| ext | Extremism | Extremist Ideological Trends |
| fin | Inappropriate Suggestions | Finance |
| med | Inappropriate Suggestions | Medicine |
| law | Inappropriate Suggestions | Law |
| cm | Risks Involving Minors | Minors Misconduct |
| ma | Risks Involving Minors | Harm to Minors |
| md | Risks Involving Minors | Minors Crime |
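
The same mapping ships with the tokenizer configuration and is what the `infer` helper above reads via `tokenizer.init_kwargs['id2risk']`:

```python
id2risk = tokenizer.init_kwargs['id2risk']
print(id2risk['dw'])  # expected: 'Crimes and Illegal Activities-Dangerous Weapons'
```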

---

## Disclaimer

This model is intended only as an auxiliary tool for content safety identification and risk governance. Due to model limitations, false positives or false negatives may occur. Users should perform thorough automated evaluations, canary testing, and real-time online monitoring tailored to their specific business scenarios. Users assume full responsibility for the final behavior and compliance of downstream systems. This project is not liable for any direct or indirect losses resulting from the use of this model or its outputs.

---

## License

This project is licensed under the Apache-2.0 License.