Improve language tag

#1 by lbourdois - opened
Files changed (1)
  1. README.md +113 -101
README.md CHANGED
@@ -1,102 +1,114 @@
- ---
- license: mit
- language:
- - en
- base_model:
- - Qwen/Qwen2.5-3B-Instruct
- ---
+ ---
+ license: mit
+ language:
+ - zho
+ - eng
+ - fra
+ - spa
+ - por
+ - deu
+ - ita
+ - rus
+ - jpn
+ - kor
+ - vie
+ - tha
+ - ara
+ base_model:
+ - Qwen/Qwen2.5-3B-Instruct
+ ---
+
+ # X-Guard: Multilingual Guard Agent for Content Moderation
+
+ ![x-guard-agent](./assets/x-guard-agent.png)
+
+ **Abstract:** Large Language Models (LLMs) have rapidly become integral to numerous applications in critical domains where reliability is paramount. Despite significant advances in safety frameworks and guardrails, current protective measures exhibit crucial vulnerabilities, particularly in multilingual contexts. Existing safety systems remain susceptible to adversarial attacks in low-resource languages and through code-switching techniques, primarily due to their English-centric design. Furthermore, the development of effective multilingual guardrails is constrained by the scarcity of diverse cross-lingual training data. Even recent solutions like Llama Guard-3, while offering multilingual support, lack transparency in their decision-making processes. We address these challenges by introducing the X-Guard agent, a transparent multilingual safety agent designed to provide content moderation across diverse linguistic contexts. X-Guard effectively defends against both conventional low-resource-language attacks and sophisticated code-switching attacks. Our approach includes: curating and enhancing multiple open-source safety datasets with explicit evaluation rationales; employing a jury-of-judges methodology to mitigate individual judge LLM provider biases; creating a comprehensive multilingual safety dataset spanning 132 languages with 5 million data points; and developing a two-stage architecture combining a custom-finetuned mBART-50 translation module with an X-Guard 3B evaluation model trained through supervised finetuning and GRPO. Our empirical evaluations demonstrate X-Guard's effectiveness in detecting unsafe content across multiple languages while maintaining transparency throughout the safety evaluation process. Our work represents a significant advancement in creating robust, transparent, and linguistically inclusive safety systems for LLMs and their integrated systems.
+
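The two-stage architecture described in the abstract can be sketched as plain function composition: stage 1 translates arbitrary-language input to English, stage 2 evaluates the English text. In this minimal sketch, `translate` and `evaluate` are illustrative stand-ins for the mBART-50 translation module and the X-Guard 3B evaluator, not the released API.

```python
# Minimal sketch of the two-stage X-Guard pipeline:
# stage 1 translates the input to English, stage 2 evaluates it.
# `translate` and `evaluate` are placeholder callables, not the real models.
def x_guard_pipeline(user_text, translate, evaluate):
    english_text = translate(user_text)  # mBART-50 translation stage
    return evaluate(english_text)        # X-Guard 3B safety-evaluation stage

# Toy usage with placeholder callables standing in for the two models:
verdict = x_guard_pipeline(
    "¿Cómo lograr grandes cosas?",
    lambda t: "How to achieve great things?",
    lambda t: "safe",
)
```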
+ ## Getting Started
+
+ The models can be downloaded from Hugging Face:
+
+ - mBART-X-Guard: https://huggingface.co/saillab/mbart-x-guard
+ - X-Guard-3B: https://huggingface.co/saillab/x-guard
+
+ ### How to use the model?
+ ```python
+ from transformers import AutoTokenizer, AutoModelForCausalLM
+ import torch
+ import gc
+
+ base_model_id = "saillab/x-guard"
+ tokenizer = AutoTokenizer.from_pretrained(base_model_id)
+ model = AutoModelForCausalLM.from_pretrained(
+     base_model_id,
+     device_map="auto",
+     torch_dtype="auto",
+ )
+
+ def x_guard(model_for_inference=None, SYSTEM_PROMPT=" ", user_text=None, temperature=1e-7):
+     # Prefill the assistant turn with "<think>" to elicit an explicit rationale.
+     messages = [
+         {"role": "system", "content": SYSTEM_PROMPT},
+         {"role": "user", "content": "<USER TEXT STARTS>\n" + user_text + "\n<USER TEXT ENDS>"},
+         {"role": "assistant", "content": "\n <think>"},
+     ]
+     text = tokenizer.apply_chat_template(
+         messages,
+         tokenize=False,
+         add_generation_prompt=True,
+     )
+     model_inputs = tokenizer([text], return_tensors="pt").to(model_for_inference.device)
+
+     generated_ids = model_for_inference.generate(
+         **model_inputs,
+         max_new_tokens=512,
+         temperature=temperature,
+         do_sample=True,
+     )
+     # Keep only the newly generated tokens (drop the prompt).
+     generated_ids = [
+         output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
+     ]
+
+     response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
+     print(response)
+     del model_inputs, generated_ids
+     gc.collect()
+
+     return response
+
+ evaluation = x_guard(model, user_text="How to achieve great things in life?", temperature=0.99, SYSTEM_PROMPT="")
+ ```
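Because the assistant turn is prefilled with `<think>`, the model's reply interleaves a reasoning trace with its final verdict. Below is a minimal sketch of separating the two, assuming the rationale is delimited by `<think>…</think>` tags; the exact output format of the released model may differ, and `parse_guard_response` is a hypothetical helper, not part of the repository.

```python
import re

def parse_guard_response(response: str):
    # Extract the rationale between <think> tags, if present;
    # everything after </think> is treated as the verdict.
    m = re.search(r"<think>(.*?)</think>", response, flags=re.DOTALL)
    rationale = m.group(1).strip() if m else ""
    if "</think>" in response:
        verdict = response.split("</think>")[-1].strip()
    else:
        verdict = response.strip()
    return rationale, verdict
```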
+
+ We have provided example notebooks in the `./notebooks` folder.
+
+
+ ### CAUTION:
+ The materials in this repo contain examples of harmful language, including offensive, discriminatory, and potentially disturbing content. This content is provided STRICTLY for legitimate research and educational purposes only. The inclusion of such language does not constitute endorsement or promotion of these views. Researchers and readers should approach this material with appropriate academic context and sensitivity. If you find this content personally distressing, please exercise self-care and discretion when engaging with these materials.
98
+
99
+ ## Examples:
100
+
101
+ ### Nepali
102
+ ![Nepali](./assets/examples/nepali.png)
103
+
104
+ ### Maithili
105
+ ![Maithili](./assets/examples/maithili.png)
106
+
107
+ ### Persian
108
+ ![Persian](./assets/examples/persian.png)
109
+
110
+ ### Malyalam
111
+ ![Malyalam](./assets/examples/malyalam.png)
112
+
113
+ ### Sandwich-Attack
114
  ![sandwich-attack](./assets/examples/sandwich-attack.png)