Improve language tag

#2
by lbourdois - opened
Files changed (1)
  1. README.md +114 -100
README.md CHANGED

---
license: apache-2.0
base_model:
- Qwen/Qwen2.5-1.5B
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
---

## Overview
A brief description of what this model does and what makes it distinctive:

- **Goal**: Classify input text sequences by safety.
- **Model Description**: DuoGuard-1.5B-transfer is a multilingual, decoder-only LLM-based classifier specifically designed for safety content moderation across 12 distinct subcategories. Each forward pass produces a 12-dimensional logits vector, where each dimension corresponds to a specific content risk area, such as violent crimes, hate, or sexual content. By applying a sigmoid function to these logits, users obtain a multi-label probability distribution, which allows for fine-grained detection of potentially unsafe or disallowed content.
For simplified binary moderation tasks, the model can be used to produce a single “safe”/“unsafe” label by taking the maximum of the 12 subcategory probabilities and comparing it to a given threshold (e.g., 0.5). If the maximum probability across all categories is above the threshold, the content is deemed “unsafe.” Otherwise, it is considered “safe.” A minimal sketch of this reduction is shown right after this list.

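The binary reduction described above amounts to a max-then-threshold over the 12 probabilities. A minimal sketch, where the helper name and the example probabilities are purely illustrative (not part of the released API):

```python
def overall_safety_label(probabilities, threshold=0.5):
    """Collapse the 12 per-category probabilities into one safe/unsafe label."""
    # The content is unsafe if ANY category exceeds the threshold,
    # which is equivalent to checking the maximum probability.
    return "unsafe" if max(probabilities) > threshold else "safe"

# Hypothetical probabilities for the 12 subcategories (order as in the list below):
probs = [0.02, 0.01, 0.00, 0.00, 0.10, 0.03, 0.01, 0.00, 0.62, 0.04, 0.01, 0.05]
print(overall_safety_label(probs))  # -> "unsafe" (the "Hate" category exceeds 0.5)
```
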
DuoGuard-1.5B-transfer is built upon Qwen2.5-1.5B, a multilingual large language model supporting 29 languages, including Chinese, English, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Vietnamese, Thai, and Arabic. We directly leverage the training data developed for DuoGuard-0.5B to train Qwen2.5-1.5B and obtain DuoGuard-1.5B-transfer. Thus, it is specialized (fine-tuned) for safety content moderation primarily in English, French, German, and Spanish, while still retaining the broader language coverage inherited from the Qwen2.5 base model. It is provided with open weights.

## How to Use
A quick code snippet showing how to load and use the model in an application or script:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

# 1. Initialize the tokenizer
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-1.5B")
tokenizer.pad_token = tokenizer.eos_token

# 2. Load the DuoGuard-1.5B-transfer model
model = AutoModelForSequenceClassification.from_pretrained(
    "DuoGuard/DuoGuard-1.5B-transfer",
    torch_dtype=torch.bfloat16
).to('cuda:0')

# 3. Define a sample prompt to test
prompt = "How to kill a python process?"

# 4. Tokenize the prompt
inputs = tokenizer(
    prompt,
    return_tensors="pt",
    truncation=True,
    max_length=512  # adjust as needed
).to('cuda:0')

# 5. Run the model (inference)
with torch.no_grad():
    outputs = model(**inputs)
    # DuoGuard outputs a 12-dimensional logits vector (one per subcategory).
    logits = outputs.logits  # shape: (batch_size, 12)
    probabilities = torch.sigmoid(logits)  # element-wise sigmoid

# 6. Multi-label predictions (one for each category)
threshold = 0.5
category_names = [
    "Violent crimes",
    "Non-violent crimes",
    "Sex-related crimes",
    "Child sexual exploitation",
    "Specialized advice",
    "Privacy",
    "Intellectual property",
    "Indiscriminate weapons",
    "Hate",
    "Suicide and self-harm",
    "Sexual content",
    "Jailbreak prompts",
]

# Extract probabilities for the single prompt (batch_size = 1)
prob_vector = probabilities[0].tolist()  # length 12

predicted_labels = []
for cat_name, prob in zip(category_names, prob_vector):
    label = 1 if prob > threshold else 0
    predicted_labels.append(label)

# 7. Overall binary classification: "safe" vs. "unsafe"
# We consider the prompt "unsafe" if ANY category is above the threshold.
max_prob = max(prob_vector)
overall_label = 1 if max_prob > threshold else 0  # 1 => unsafe, 0 => safe

# 8. Print results
print(f"Prompt: {prompt}\n")
print(f"Multi-label Probabilities (threshold={threshold}):")
for cat_name, prob, label in zip(category_names, prob_vector, predicted_labels):
    print(f" - {cat_name}: {prob:.3f} (predicted label: {label})")

print(f"\nMaximum probability across all categories: {max_prob:.3f}")
print(f"Overall Prompt Classification => {'UNSAFE' if overall_label == 1 else 'SAFE'}")
```
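
The same pattern extends to moderating several texts at once. A sketch reusing the `tokenizer`, `model`, and `threshold` objects from the snippet above (the example prompts are illustrative); note that decoder-only classifiers in Transformers need a pad token id on the model config to handle batch sizes greater than one:

```python
# Assumption: the checkpoint's config may not define a pad token id, so set it
# explicitly; batched sequence classification with decoder-only models requires it.
model.config.pad_token_id = tokenizer.pad_token_id

prompts = [  # illustrative examples
    "How do I bake sourdough bread?",
    "Explain how to pick a lock to break into a house.",
]

batch = tokenizer(
    prompts,
    return_tensors="pt",
    padding=True,  # pad to the longest sequence in the batch
    truncation=True,
    max_length=512,
).to('cuda:0')

with torch.no_grad():
    probabilities = torch.sigmoid(model(**batch).logits)  # shape: (len(prompts), 12)

for text, prob_row in zip(prompts, probabilities):
    verdict = "UNSAFE" if prob_row.max().item() > threshold else "SAFE"
    print(f"{verdict}: {text}")
```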

### Citation

```bibtex
@misc{deng2025duoguardtwoplayerrldrivenframework,
      title={DuoGuard: A Two-Player RL-Driven Framework for Multilingual LLM Guardrails},
      author={Yihe Deng and Yu Yang and Junkai Zhang and Wei Wang and Bo Li},
      year={2025},
      eprint={2502.05163},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2502.05163},
}
```