Update README.md
README.md CHANGED
@@ -25,7 +25,7 @@ pipeline_tag: text-generation
 
 Based on SylvanL/ChatTCM-7B-Pretrain, on the llamafactory framework,
 
-using SylvanL/Traditional-Chinese-Medicine-Dataset-SFT, performed
+using SylvanL/Traditional-Chinese-Medicine-Dataset-SFT, performed 2 epochs of full-parameter supervised fine-tuning (full Supervised Fine-tuning).
 
 Without noticeable instruction loss or catastrophic forgetting, this equips the model with the following capabilities:
 

@@ -82,7 +82,7 @@ llamafactory-cli train \
 --dataset SFT_medicalKnowledge_source1_548404,SFT_medicalKnowledge_source2_99334,SFT_medicalKnowledge_source3_556540,SFT_nlpDiseaseDiagnosed_61486,SFT_nlpSyndromeDiagnosed_48665,SFT_structGeneral_310860,SFT_structPrescription_92896,_SFT_traditionalTrans_1959542.json,{BAAI/COIG},{m-a-p/COIG-CQIA} \
 --cutoff_len 1024 \
 --learning_rate 5e-05 \
---num_train_epochs
+--num_train_epochs 2.0 \
 --max_samples 1000000 \
 --per_device_train_batch_size 28 \
 --gradient_accumulation_steps 4 \
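For reference, the flags visible in the second hunk can be reassembled into the tail of the training invocation. This is a minimal sketch that only builds and prints the command string rather than running it: the earlier lines of the `llamafactory-cli train` call (model path, template, output directory, and so on) are not shown in the diff and are deliberately left out here.

```shell
# Sketch: tail of the llamafactory-cli train command as corrected by the
# diff above. Flags preceding --dataset are not visible in the hunk and
# are omitted, so we only echo the command instead of executing it.
CMD="llamafactory-cli train \
  --dataset SFT_medicalKnowledge_source1_548404,SFT_medicalKnowledge_source2_99334,SFT_medicalKnowledge_source3_556540,SFT_nlpDiseaseDiagnosed_61486,SFT_nlpSyndromeDiagnosed_48665,SFT_structGeneral_310860,SFT_structPrescription_92896,_SFT_traditionalTrans_1959542.json,{BAAI/COIG},{m-a-p/COIG-CQIA} \
  --cutoff_len 1024 \
  --learning_rate 5e-05 \
  --num_train_epochs 2.0 \
  --max_samples 1000000 \
  --per_device_train_batch_size 28 \
  --gradient_accumulation_steps 4"
echo "$CMD"
```

Note that the fix supplies the value `2.0` for `--num_train_epochs` and restores the trailing `\` line continuation, matching the "2 epochs" statement in the first hunk.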