Improve language tag

#1
by lbourdois - opened
Files changed (1)
  1. README.md +48 -36
README.md CHANGED
@@ -1,37 +1,49 @@
- ---
- license: cc-by-4.0
- language:
- - en
- base_model: Qwen/Qwen2.5-7B-Instruct
- ---
-
- # Safe-o1 Model Card 🤖✨
-
- ## Model Overview 📝
- `Safe-o1` is an innovative language model that introduces a **self-monitoring thinking process** to detect and filter unsafe content, achieving more robust safety performance 🚀.
-
- ---
-
- ## Features and Highlights 🌟
- - **Safety First** 🔒: Through a self-monitoring mechanism, it detects potentially unsafe content in the thinking process in real time, ensuring outputs consistently align with ethical and safety standards.
- - **Enhanced Robustness** 💡: Compared to traditional models, `Safe-o1` performs more stably in complex scenarios, reducing unexpected "derailments."
- - **User-Friendly** 😊: Designed to provide users with a trustworthy conversational partner, suitable for various application scenarios, striking a balance between helpfulness and harmlessness.
- ![](https://github.com/D4YON3/images/blob/main/figs_2025-04-03%20214712.png?raw=true)
-
- ---
-
- ## Usage 🚀
- You can load `Safe-o1` using the Hugging Face `transformers` library:
-
- ```python
- from transformers import AutoModelForCausalLM, AutoTokenizer
-
- tokenizer = AutoTokenizer.from_pretrained("PKU-Alignment/Safe-o1")
- model = AutoModelForCausalLM.from_pretrained("PKU-Alignment/Safe-o1")
-
- input_text = "Hello, World!"
- inputs = tokenizer(input_text, return_tensors="pt")
- outputs = model.generate(**inputs)
- print(tokenizer.decode(outputs[0], skip_special_tokens=True))
-
+ ---
+ license: cc-by-4.0
+ language:
+ - zho
+ - eng
+ - fra
+ - spa
+ - por
+ - deu
+ - ita
+ - rus
+ - jpn
+ - kor
+ - vie
+ - tha
+ - ara
+ base_model: Qwen/Qwen2.5-7B-Instruct
+ ---
+
+ # Safe-o1 Model Card 🤖✨
+
+ ## Model Overview 📝
+ `Safe-o1` is an innovative language model that introduces a **self-monitoring thinking process** to detect and filter unsafe content, achieving more robust safety performance 🚀.
+
+ ---
+
+ ## Features and Highlights 🌟
+ - **Safety First** 🔒: Through a self-monitoring mechanism, it detects potentially unsafe content in the thinking process in real time, ensuring outputs consistently align with ethical and safety standards.
+ - **Enhanced Robustness** 💡: Compared to traditional models, `Safe-o1` performs more stably in complex scenarios, reducing unexpected "derailments."
+ - **User-Friendly** 😊: Designed to provide users with a trustworthy conversational partner, suitable for various application scenarios, striking a balance between helpfulness and harmlessness.
+ ![](https://github.com/D4YON3/images/blob/main/figs_2025-04-03%20214712.png?raw=true)
+
+ ---
+
+ ## Usage 🚀
+ You can load `Safe-o1` using the Hugging Face `transformers` library:
+
+ ```python
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ tokenizer = AutoTokenizer.from_pretrained("PKU-Alignment/Safe-o1")
+ model = AutoModelForCausalLM.from_pretrained("PKU-Alignment/Safe-o1")
+
+ input_text = "Hello, World!"
+ inputs = tokenizer(input_text, return_tensors="pt")
+ outputs = model.generate(**inputs)
+ print(tokenizer.decode(outputs[0], skip_special_tokens=True))
+
  ```
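
The README's snippet encodes raw text directly, but the base model (`Qwen/Qwen2.5-7B-Instruct`) is a chat model, so instruct-tuned derivatives generally expect input formatted with the tokenizer's chat template. Below is a minimal sketch of that fuller usage pattern. It is hedged throughout: the `<think>…</think>` tag format for the self-monitoring trace and the `RUN_SAFE_O1_DEMO` environment-variable guard are assumptions for illustration, not documented behavior of `Safe-o1`.

```python
import os
import re

def strip_thinking(text: str) -> str:
    """Remove a self-monitoring reasoning trace from a model reply.

    Safe-o1 is described as emitting a "thinking process" before its final
    answer. ASSUMPTION: here that trace is wrapped in <think>...</think>
    tags; check the model's actual output format before relying on this.
    """
    return re.sub(r"<think>.*?</think>", "", text, flags=re.DOTALL).strip()

# Guarded so the multi-GB model download only runs when explicitly requested.
if __name__ == "__main__" and os.environ.get("RUN_SAFE_O1_DEMO") == "1":
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("PKU-Alignment/Safe-o1")
    model = AutoModelForCausalLM.from_pretrained(
        "PKU-Alignment/Safe-o1",
        torch_dtype=torch.bfloat16,  # half precision to fit a 7B model
        device_map="auto",
    )

    # Qwen2.5-Instruct derivatives expect chat-formatted input, so apply
    # the tokenizer's chat template rather than encoding raw text.
    messages = [{"role": "user", "content": "Hello! How are you?"}]
    input_ids = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)

    outputs = model.generate(input_ids, max_new_tokens=256)
    # Decode only the newly generated tokens, not the echoed prompt.
    reply = tokenizer.decode(
        outputs[0][input_ids.shape[-1]:], skip_special_tokens=True
    )
    print(strip_thinking(reply))
```

Without `max_new_tokens`, `generate` falls back to a short default length, which is a common reason minimal snippets like the original appear to truncate their answers.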