Improve language tag

#1
by lbourdois - opened
Files changed (1)
  1. README.md +101 -89
README.md CHANGED
@@ -1,90 +1,102 @@
- ---
- license: apache-2.0
- language:
- - en
- pipeline_tag: text-generation
- tags:
- - role-play
- - fine-tuned
- - qwen2.5
- base_model:
- - Qwen/Qwen2.5-14B-Instruct
- library_name: transformers
- ---
-
- ![Oxy 1 Small](https://cdn-uploads.huggingface.co/production/uploads/64fb80c8bb362cbf2ff96c7e/tTIVIblPUbTYnlvHQQjXB.png)
-
- ## Introduction
-
- **Oxy 1 Small** is a fine-tuned version of the [Qwen/Qwen2.5-14B-Instruct](https://huggingface.co/Qwen/Qwen2.5-14B-Instruct) language model, specialized for **role-play** scenarios. Despite its small size, it delivers impressive performance in generating engaging dialogues and interactive storytelling.
-
- Developed by **Oxygen (oxyapi)**, with contributions from **TornadoSoftwares**, Oxy 1 Small aims to provide an accessible and efficient language model for creative and immersive role-play experiences.
-
- ## Model Details
-
- - **Model Name**: Oxy 1 Small
- - **Model ID**: [oxyapi/oxy-1-small](https://huggingface.co/oxyapi/oxy-1-small)
- - **Base Model**: [Qwen/Qwen2.5-14B-Instruct](https://huggingface.co/Qwen/Qwen2-1.5B)
- - **Model Type**: Chat Completions
- - **Prompt Format**: ChatML
- - **License**: Apache-2.0
- - **Language**: English
- - **Tokenizer**: [Qwen/Qwen2.5-14B-Instruct](https://huggingface.co/Qwen/Qwen2.5-14B-Instruct)
- - **Max Input Tokens**: 32,768
- - **Max Output Tokens**: 8,192
-
- ### Features
-
- - **Fine-tuned for Role-Play**: Specially trained to generate dynamic and contextually rich role-play dialogues.
- - **Efficient**: Compact model size allows for faster inference and reduced computational resources.
- - **Parameter Support**:
- - `temperature`
- - `top_p`
- - `top_k`
- - `frequency_penalty`
- - `presence_penalty`
- - `max_tokens`
-
- ### Metadata
-
- - **Owned by**: Oxygen (oxyapi)
- - **Contributors**: TornadoSoftwares
- - **Description**: A Qwen/Qwen2.5-14B-Instruct fine-tune for role-play trained on custom datasets
-
- ## Usage
-
- To utilize Oxy 1 Small for text generation in role-play scenarios, you can load the model using the Hugging Face Transformers library:
-
- ```python
- from transformers import AutoModelForCausalLM, AutoTokenizer
-
- tokenizer = AutoTokenizer.from_pretrained("oxyapi/oxy-1-small")
- model = AutoModelForCausalLM.from_pretrained("oxyapi/oxy-1-small")
-
- prompt = "You are a wise old wizard in a mystical land. A traveler approaches you seeking advice."
- inputs = tokenizer(prompt, return_tensors="pt")
- outputs = model.generate(**inputs, max_length=500)
- response = tokenizer.decode(outputs[0], skip_special_tokens=True)
- print(response)
- ```
-
- ## Performance
-
- Performance benchmarks for Oxy 1 Small are not available at this time. Future updates may include detailed evaluations on relevant datasets.
-
- ## License
-
- This model is licensed under the [Apache 2.0 License](https://www.apache.org/licenses/LICENSE-2.0).
-
- ## Citation
-
- If you find Oxy 1 Small useful in your research or applications, please cite it as:
-
- ```
- @misc{oxy1small2024,
- title={Oxy 1 Small: A Fine-Tuned Qwen2.5-14B-Instruct Model for Role-Play},
- author={Oxygen (oxyapi)},
- year={2024},
- howpublished={\url{https://huggingface.co/oxyapi/oxy-1-small}},
- }
- ```
+ ---
+ license: apache-2.0
+ language:
+ - zho
+ - eng
+ - fra
+ - spa
+ - por
+ - deu
+ - ita
+ - rus
+ - jpn
+ - kor
+ - vie
+ - tha
+ - ara
+ pipeline_tag: text-generation
+ tags:
+ - role-play
+ - fine-tuned
+ - qwen2.5
+ base_model:
+ - Qwen/Qwen2.5-14B-Instruct
+ library_name: transformers
+ ---
+
+ ![Oxy 1 Small](https://cdn-uploads.huggingface.co/production/uploads/64fb80c8bb362cbf2ff96c7e/tTIVIblPUbTYnlvHQQjXB.png)
+
+ ## Introduction
+
+ **Oxy 1 Small** is a fine-tuned version of the [Qwen/Qwen2.5-14B-Instruct](https://huggingface.co/Qwen/Qwen2.5-14B-Instruct) language model, specialized for **role-play** scenarios. Despite its small size, it delivers impressive performance in generating engaging dialogues and interactive storytelling.
+
+ Developed by **Oxygen (oxyapi)**, with contributions from **TornadoSoftwares**, Oxy 1 Small aims to provide an accessible and efficient language model for creative and immersive role-play experiences.
+
+ ## Model Details
+
+ - **Model Name**: Oxy 1 Small
+ - **Model ID**: [oxyapi/oxy-1-small](https://huggingface.co/oxyapi/oxy-1-small)
+ - **Base Model**: [Qwen/Qwen2.5-14B-Instruct](https://huggingface.co/Qwen/Qwen2.5-14B-Instruct)
+ - **Model Type**: Chat Completions
+ - **Prompt Format**: ChatML
+ - **License**: Apache-2.0
+ - **Language**: English
+ - **Tokenizer**: [Qwen/Qwen2.5-14B-Instruct](https://huggingface.co/Qwen/Qwen2.5-14B-Instruct)
+ - **Max Input Tokens**: 32,768
+ - **Max Output Tokens**: 8,192
+
+ ### Features
+
+ - **Fine-tuned for Role-Play**: Specially trained to generate dynamic and contextually rich role-play dialogues.
+ - **Efficient**: Compact model size allows for faster inference and reduced computational resources.
+ - **Parameter Support**:
+ - `temperature`
+ - `top_p`
+ - `top_k`
+ - `frequency_penalty`
+ - `presence_penalty`
+ - `max_tokens`
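The parameters above use OpenAI-style API names. When running the model locally with the Transformers library, most of them map directly onto `model.generate` keyword arguments, though `frequency_penalty` and `presence_penalty` have no exact counterpart there (`repetition_penalty` is the closest analogue). A minimal, hypothetical mapping helper to illustrate the correspondence:

```python
# Hypothetical helper: translate OpenAI-style sampling parameters into
# keyword arguments for transformers' model.generate(). Note that
# frequency_penalty / presence_penalty have no direct transformers
# equivalent; repetition_penalty is the closest analogue.
def to_generate_kwargs(params: dict) -> dict:
    mapping = {
        "temperature": "temperature",
        "top_p": "top_p",
        "top_k": "top_k",
        "max_tokens": "max_new_tokens",
    }
    kwargs = {mapping[k]: v for k, v in params.items() if k in mapping}
    # Sampling parameters are ignored by generate() unless do_sample=True.
    if any(kwargs.get(k) for k in ("temperature", "top_p", "top_k")):
        kwargs["do_sample"] = True
    return kwargs

print(to_generate_kwargs({"temperature": 0.8, "top_p": 0.95, "max_tokens": 256}))
```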
+
+ ### Metadata
+
+ - **Owned by**: Oxygen (oxyapi)
+ - **Contributors**: TornadoSoftwares
+ - **Description**: A Qwen/Qwen2.5-14B-Instruct fine-tune for role-play trained on custom datasets
+
+ ## Usage
+
+ To utilize Oxy 1 Small for text generation in role-play scenarios, you can load the model using the Hugging Face Transformers library:
+
+ ```python
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ tokenizer = AutoTokenizer.from_pretrained("oxyapi/oxy-1-small")
+ model = AutoModelForCausalLM.from_pretrained("oxyapi/oxy-1-small")
+
+ prompt = "You are a wise old wizard in a mystical land. A traveler approaches you seeking advice."
+ inputs = tokenizer(prompt, return_tensors="pt")
+ outputs = model.generate(**inputs, max_length=500)
+ response = tokenizer.decode(outputs[0], skip_special_tokens=True)
+ print(response)
+ ```
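Because the model card lists ChatML as the prompt format, role-play prompts generally work best when serialized with system/user turn markers rather than passed as a raw string; in practice `tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)` produces this serialization for you. A sketch of what it looks like, assuming the standard `<|im_start|>`/`<|im_end|>` markers used by Qwen models:

```python
# Sketch of ChatML serialization, assuming the standard <|im_start|>/<|im_end|>
# markers used by Qwen models. With a real tokenizer you would instead call
# tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True).
def to_chatml(messages: list, add_generation_prompt: bool = True) -> str:
    parts = [f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n" for m in messages]
    if add_generation_prompt:
        parts.append("<|im_start|>assistant\n")  # cue the model to answer next
    return "".join(parts)

messages = [
    {"role": "system", "content": "You are a wise old wizard in a mystical land."},
    {"role": "user", "content": "A traveler approaches you seeking advice."},
]
print(to_chatml(messages))
```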
+
+ ## Performance
+
+ Performance benchmarks for Oxy 1 Small are not available at this time. Future updates may include detailed evaluations on relevant datasets.
+
+ ## License
+
+ This model is licensed under the [Apache 2.0 License](https://www.apache.org/licenses/LICENSE-2.0).
+
+ ## Citation
+
+ If you find Oxy 1 Small useful in your research or applications, please cite it as:
+
+ ```
+ @misc{oxy1small2024,
+ title={Oxy 1 Small: A Fine-Tuned Qwen2.5-14B-Instruct Model for Role-Play},
+ author={Oxygen (oxyapi)},
+ year={2024},
+ howpublished={\url{https://huggingface.co/oxyapi/oxy-1-small}},
+ }
+ ```