minpeter committed
Commit 2e9f534 · verified · 1 parent: 9d14824

Update README.md

Files changed (1): README.md (+3 −3)
README.md CHANGED
@@ -1,6 +1,6 @@
 ---
 library_name: transformers
-base_model: minpeter/pretrained-tiny-ko
+base_model: minpeter/tiny-ko-base
 tags:
 - axolotl
 - generated_from_trainer
@@ -26,7 +26,7 @@ should probably proofread and complete it, then remove this comment. -->
 
 axolotl version: `0.10.0.dev0`
 ```yaml
-base_model: minpeter/pretrained-tiny-ko
+base_model: minpeter/tiny-ko-base
 
 hub_model_id: minpeter/tiny-ko-sft
 output_dir: ./outputs/tiny-ko-sft
@@ -155,7 +155,7 @@ weight_decay: 0.0
 
 # tiny-ko-sft
 
-This model is a fine-tuned version of [minpeter/pretrained-tiny-ko](https://huggingface.co/minpeter/pretrained-tiny-ko) on the lemon-mint/Korean-FineTome-100k, the lemon-mint/smol-koreantalk, the heegyu/open-korean-instructions-v20231020, the FreedomIntelligence/evol-instruct-korean, the FreedomIntelligence/alpaca-gpt4-korean, the FreedomIntelligence/sharegpt-korean, the coastral/korean-writing-style-instruct and the devngho/korean-instruction-mix datasets.
+This model is a fine-tuned version of [minpeter/tiny-ko-base](https://huggingface.co/minpeter/tiny-ko-base) on the lemon-mint/Korean-FineTome-100k, the lemon-mint/smol-koreantalk, the heegyu/open-korean-instructions-v20231020, the FreedomIntelligence/evol-instruct-korean, the FreedomIntelligence/alpaca-gpt4-korean, the FreedomIntelligence/sharegpt-korean, the coastral/korean-writing-style-instruct and the devngho/korean-instruction-mix datasets.
 It achieves the following results on the evaluation set:
 - Loss: 1.4059
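The "+3 -3" stat shown for README.md is the count of added and removed lines in the unified diff. A minimal sketch of how such a count can be derived; `diff_stat` is a hypothetical helper, not part of the commit or any Hugging Face tooling:

```python
def diff_stat(diff_text):
    """Count added/removed lines in a unified diff, as in a "+3 -3" stat."""
    added = removed = 0
    for line in diff_text.splitlines():
        # Skip the file headers ("--- a/...", "+++ b/..."); they are not
        # content changes even though they start with "-" / "+".
        if line.startswith("+++") or line.startswith("---"):
            continue
        if line.startswith("+"):
            added += 1
        elif line.startswith("-"):
            removed += 1
    return added, removed

# Excerpt of the first hunk of this commit's patch.
patch = """\
--- a/README.md
+++ b/README.md
@@ -1,6 +1,6 @@
 ---
 library_name: transformers
-base_model: minpeter/pretrained-tiny-ko
+base_model: minpeter/tiny-ko-base
 tags:
"""

print(diff_stat(patch))  # (1, 1) for this single hunk
```

Context lines carry a leading space, so they are never miscounted; across all three hunks of this commit the totals come to the +3 −3 shown above.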