---
library_name: transformers
base_model: minpeter/tiny-ko-124m-base
tags:
- axolotl
- generated_from_trainer
datasets:
- lemon-mint/smol-koreantalk
model-index:
- name: tiny-ko-124m-sft
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>

axolotl version: `0.11.0.dev0`
```yaml
base_model: minpeter/tiny-ko-124m-base

hub_model_id: minpeter/tiny-ko-124m-sft
output_dir: ./outputs/tiny-ko-124m-sft
wandb_project: "axolotl"
wandb_entity: "kasfiekfs-e"

model_type: LlamaForCausalLM
tokenizer_type: AutoTokenizer

strict: false

chat_template: chatml
datasets:
  - path: lemon-mint/smol-koreantalk
    type: chat_template
    split: train
    field_messages: messages
    message_property_mappings:
      role: role
      content: content

dataset_prepared_path: last_run_prepared
val_set_size: 0.001
save_safetensors: true
sequence_len: 2048
sample_packing: false
pad_to_sequence_len: false
use_pose: true
pose_max_context_len: 65536

overrides_of_model_config:
  rope_theta: 10000.0
  max_position_embeddings: 65536

gradient_accumulation_steps: 8
micro_batch_size: 32
num_epochs: 1
optimizer: adamw_bnb_8bit
lr_scheduler: cosine
learning_rate: 3e-4

train_on_inputs: false
group_by_length: false
bf16: true
fp16:
tf32: true

gradient_checkpointing: false
gradient_checkpointing_kwargs:
  use_reentrant: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
sdp_attention:
s2_attention:

save_steps: 200
warmup_steps: 20
eval_steps: 200
debug:
deepspeed:
weight_decay: 0.0
fsdp:
fsdp_config:

```

</details><br>

# tiny-ko-124m-sft

This model is a fine-tuned version of [minpeter/tiny-ko-124m-base](https://huggingface.co/minpeter/tiny-ko-124m-base) on the lemon-mint/smol-koreantalk dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8151
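
A minimal inference sketch: the repo id comes from `hub_model_id` above, and prompts go through the tokenizer's chat template since training used `chat_template: chatml`. The prompt and generation settings here are illustrative, not part of the training setup.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "minpeter/tiny-ko-124m-sft"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# The model was fine-tuned with the chatml template, so format prompts
# through the chat template rather than as raw strings.
messages = [{"role": "user", "content": "안녕하세요, 자기소개를 해주세요."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
output_ids = model.generate(input_ids, max_new_tokens=128)  # settings are illustrative
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```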

## Model description

More information needed
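
Per the config above, the model is a small Llama-style causal LM, and training used PoSE (`use_pose: true`, `pose_max_context_len: 65536`) to extend the usable context beyond the 2048-token training sequences, with the model config overridden to `rope_theta: 10000.0` and `max_position_embeddings: 65536`. A sketch for checking that those overrides made it into the uploaded checkpoint (assuming the overridden config was saved with the model):

```python
from transformers import AutoConfig

# The values below should match `overrides_of_model_config` in the axolotl config.
config = AutoConfig.from_pretrained("minpeter/tiny-ko-124m-sft")
print(config.max_position_embeddings)  # expected: 65536
print(config.rope_theta)               # expected: 10000.0
```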

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
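
The SFT data is [lemon-mint/smol-koreantalk](https://huggingface.co/datasets/lemon-mint/smol-koreantalk), consumed through axolotl's `chat_template` dataset type with a `messages` field of `role`/`content` pairs (see the config above). A sketch for inspecting a sample, assuming the dataset schema matches that mapping:

```python
from datasets import load_dataset

# Train split, as referenced in the axolotl config.
ds = load_dataset("lemon-mint/smol-koreantalk", split="train")

# Each example is expected to carry a `messages` list of {role, content} dicts,
# per the `field_messages` / `message_property_mappings` settings above.
print(ds[0]["messages"])
```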

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 8
- total_train_batch_size: 512
- total_eval_batch_size: 64
- optimizer: 8-bit AdamW (`adamw_bnb_8bit` via bitsandbytes) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 20
- training_steps: 887
132
+ ### Training results
133
+
134
+ | Training Loss | Epoch | Step | Validation Loss |
135
+ |:-------------:|:------:|:----:|:---------------:|
136
+ | No log | 0 | 0 | 2.8035 |
137
+ | 2.0195 | 0.2256 | 200 | 1.9871 |
138
+ | 1.8857 | 0.4513 | 400 | 1.8815 |
139
+ | 1.8013 | 0.6769 | 600 | 1.8270 |
140
+ | 1.8489 | 0.9026 | 800 | 1.8151 |
141
+
142
+
143
+ ### Framework versions
144
+
145
+ - Transformers 4.52.4
146
+ - Pytorch 2.6.0+cu124
147
+ - Datasets 3.6.0
148
+ - Tokenizers 0.21.1