XiaoY1 committed (verified)
Commit 1dc9ab5 · Parent(s): f36efac

Update README.md

Files changed (1): README.md +47 -1
README.md CHANGED
@@ -1,3 +1,49 @@
 ---
-license: apache-2.0
+language:
+- zh
 ---
+
+# COIG-Kun PrimaryChatModel
+
+## Model Details
+- **Name:** COIG-Kun PrimaryChatModel
+- **Release Date:** 2024.04.08
+- **Github URL:** [COIG-Kun](https://github.com/Zheng0428/COIG-Kun)
+- **Developers:** Tianyu Zheng*, Shuyue Guo*, Xingwei Qu, Xinrun Du, Wenhu Chen, Jie Fu, Wenhao Huang, Ge Zhang
+
+## Model Description
+The PrimaryChatModel is used in the Kun project to transform raw data into a standard response format. Following a reading-comprehension paradigm, it reads the raw data and answers the questions generated by the Label model. The model has been fine-tuned specifically for this task, making it one of the core components of the Kun pipeline.
+
+## Intended Use
+- **Primary Use:** The PrimaryChatModel is designed to transform raw data into a standard response format based on generated instructions.
+- **Target Users:** Researchers and developers in NLP and ML, particularly those working on language model training and data augmentation.
+
+## Training Data
+The PrimaryChatModel is trained on ten thousand high-quality seed instructions. These instructions were meticulously curated to ensure an effective training process and high-quality outputs for use as instructional data.
+
+## Training Process
+- **Base Model:** Yi-34B
+- **Epochs:** 2
+- **Learning Rate:** 1e-5
+- **Fine-Tuning Method:** The model was fine-tuned on the high-quality seed instructions, with the instructions used as inputs and their responses used as outputs.
+
+## Evaluation
+The PrimaryChatModel was evaluated on its ability to transform raw data into a standard response format, focusing on the relevance, clarity, and usability of the resulting instructions for language model training.
+
+## Ethical Considerations
+- Users should be aware of potential biases in the training data, which may be reflected in the model's outputs.
+- The model should not be used to generate harmful or misleading content.
+
+## Citing the Model
+To cite the PrimaryChatModel in academic work, please use the following reference:
+
+```bibtex
+@misc{COIG-Kun,
+  title={Kun: Answer Polishment Saves Your Time for Using Instruction Backtranslation on Self-Alignment},
+  author={Zheng, Tianyu and Guo, Shuyue and Qu, Xingwei and Du, Xinrun and Chen, Wenhu and Fu, Jie and Huang, Wenhao and Zhang, Ge},
+  year={2023},
+  publisher={GitHub},
+  journal={GitHub repository},
+  howpublished={https://github.com/Zheng0428/COIG-Kun}
+}
+```
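The fine-tuning method in the model card pairs each seed instruction (as input) with its response (as output). A minimal sketch of how such supervised fine-tuning records might be assembled — the record field names and the example seed pairs are assumptions for illustration, not the actual COIG-Kun training format:

```python
# Hypothetical sketch: turn seed (instruction, response) pairs into
# supervised fine-tuning records, following the input/output pairing
# described in the model card. Field names ("input"/"output") are an
# assumption, not the released COIG-Kun data schema.

def build_sft_records(seed_pairs):
    """Map each (instruction, response) pair to an input/output record."""
    records = []
    for instruction, response in seed_pairs:
        records.append({
            "input": instruction,   # the instruction is the model input
            "output": response,     # its response is the training target
        })
    return records

# Illustrative seed pairs (invented examples, not real training data).
seeds = [
    ("Summarize the following paragraph.", "The paragraph argues that ..."),
    ("Translate this sentence into English.", "The weather is nice today."),
]
records = build_sft_records(seeds)
print(len(records))          # 2
print(records[0]["input"])   # Summarize the following paragraph.
```

Records in this shape can then be fed to any standard instruction-tuning loop; the card's hyperparameters (2 epochs, learning rate 1e-5, Yi-34B base) would be set in that training configuration.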