---
library_name: transformers
base_model: minpeter/tiny-ko-124m-base
tags:
- axolotl
- generated_from_trainer
datasets:
- lemon-mint/Korean-FineTome-100k
- lemon-mint/smol-koreantalk
- heegyu/open-korean-instructions-v20231020
- trillionlabs/multisystem-curated
- allenai/tulu-3-sft-personas-instruction-following
- coastral/korean-writing-style-instruct
- devngho/korean-instruction-mix
- youjunhyeok/Magpie-Pro-300K-Filtered-ko
- youjunhyeok/smoltalk-ko-translate
model-index:
- name: tiny-ko-124m-sft
  results: []
license: apache-2.0
---

[Built with Axolotl](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>

axolotl version: `0.11.0.dev0`

```yaml
base_model: minpeter/tiny-ko-124m-base
hub_model_id: minpeter/tiny-ko-124m-sft
output_dir: ./outputs/tiny-ko-124m-sft

wandb_project: "axolotl"
wandb_entity: "kasfiekfs-e"

model_type: LlamaForCausalLM
tokenizer_type: AutoTokenizer

strict: false

chat_template: chatml
datasets:
  - path: lemon-mint/Korean-FineTome-100k
    type: chat_template
    split: train
    field_messages: messages
    message_property_mappings:
      role: role
      content: content
  - path: lemon-mint/smol-koreantalk
    type: chat_template
    split: train
    field_messages: messages
    message_property_mappings:
      role: role
      content: content
  - path: heegyu/open-korean-instructions-v20231020
    type: chat_template
    split: train
    field_messages: conversations
    message_property_mappings:
      role: from
      content: value
    roles:
      user: ["human", "user"]
      assistant: ["gpt", "assistant", "bot"]
      system: ["system", "input"]
  - path: trillionlabs/multisystem-curated
    type: chat_template
    split: train
    field_messages: messages
    message_property_mappings:
      role: role
      content: content
  - path: allenai/tulu-3-sft-personas-instruction-following
    type: chat_template
    split: train
    field_messages: messages
    message_property_mappings:
      role: role
      content: content
  - path: coastral/korean-writing-style-instruct
    type: chat_template
    split: train
    field_messages: conversations
    message_property_mappings:
      role: from
      content: value
  - path: devngho/korean-instruction-mix
    type: chat_template
    split: train
    field_messages: messages
    message_property_mappings:
      role: from
      content: value
  - path: youjunhyeok/Magpie-Pro-300K-Filtered-ko
    type: chat_template
    split: train
    field_messages: conversations
    message_property_mappings:
      role: from
      content: value
  - path: youjunhyeok/smoltalk-ko-translate
    type: chat_template
    split: train
    name: merge_filtered
    field_messages: conversations
    message_property_mappings:
      role: role
      content: content

dataset_prepared_path: last_run_prepared
val_set_size: 0.001
save_safetensors: true

sequence_len: 2048
sample_packing: false
pad_to_sequence_len: false

use_pose: true
pose_max_context_len: 65536
overrides_of_model_config:
  rope_theta: 10000.0
  max_position_embeddings: 65536

gradient_accumulation_steps: 8
micro_batch_size: 32
num_epochs: 1
optimizer: adamw_bnb_8bit
lr_scheduler: cosine
learning_rate: 3e-4

train_on_inputs: false
group_by_length: false
bf16: true
fp16:
tf32: true

gradient_checkpointing: false
gradient_checkpointing_kwargs:
  use_reentrant: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
sdp_attention:
s2_attention:

save_steps: 200
warmup_steps: 20
eval_steps: 200

debug:
deepspeed:
weight_decay: 0.0
fsdp:
fsdp_config:
```

</details>
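Note the `overrides_of_model_config` block in the config above: it patches the base checkpoint's RoPE settings before PoSE long-context training. As a hedged illustration (not part of the original card), the equivalent override when loading the base model directly with the standard `transformers` API would look roughly like this:

```python
# Illustrative sketch only: mirrors the `overrides_of_model_config` block
# above by patching the model config before loading the weights.
from transformers import AutoConfig, AutoModelForCausalLM

config = AutoConfig.from_pretrained("minpeter/tiny-ko-124m-base")
config.rope_theta = 10000.0             # same value as the YAML override
config.max_position_embeddings = 65536  # PoSE-extended context length
model = AutoModelForCausalLM.from_pretrained(
    "minpeter/tiny-ko-124m-base", config=config
)
```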

# tiny-ko-124m-sft

This model is a fine-tuned version of [minpeter/tiny-ko-124m-base](https://huggingface.co/minpeter/tiny-ko-124m-base) on the following datasets:

- lemon-mint/Korean-FineTome-100k
- lemon-mint/smol-koreantalk
- heegyu/open-korean-instructions-v20231020
- trillionlabs/multisystem-curated
- allenai/tulu-3-sft-personas-instruction-following
- coastral/korean-writing-style-instruct
- devngho/korean-instruction-mix
- youjunhyeok/Magpie-Pro-300K-Filtered-ko
- youjunhyeok/smoltalk-ko-translate

It achieves the following results on the evaluation set:
- Loss: 1.7098

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 8
- total_train_batch_size: 512
- total_eval_batch_size: 64
- optimizer: 8-bit AdamW (bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 20
- training_steps: 5042

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log        | 0      | 0    | 2.7016          |
| 2.1419        | 0.0397 | 200  | 2.1320          |
| 2.0675        | 0.0793 | 400  | 2.0446          |
| 2.0252        | 0.1190 | 600  | 1.9864          |
| 1.9304        | 0.1587 | 800  | 1.9468          |
| 1.9536        | 0.1983 | 1000 | 1.9145          |
| 1.8692        | 0.2380 | 1200 | 1.8879          |
| 1.8556        | 0.2777 | 1400 | 1.8645          |
| 1.8421        | 0.3174 | 1600 | 1.8433          |
| 1.9118        | 0.3570 | 1800 | 1.8256          |
| 1.7791        | 0.3967 | 2000 | 1.8090          |
| 1.8162        | 0.4364 | 2200 | 1.7934          |
| 1.796         | 0.4760 | 2400 | 1.7795          |
| 1.749         | 0.5157 | 2600 | 1.7661          |
| 1.7536        | 0.5554 | 2800 | 1.7540          |
| 1.7672        | 0.5950 | 3000 | 1.7432          |
| 1.7523        | 0.6347 | 3200 | 1.7336          |
| 1.7074        | 0.6744 | 3400 | 1.7259          |
| 1.7218        | 0.7141 | 3600 | 1.7202          |
| 1.6928        | 0.7537 | 3800 | 1.7158          |
| 1.7184        | 0.7934 | 4000 | 1.7127          |
| 1.761         | 0.8331 | 4200 | 1.7109          |
| 1.7481        | 0.8727 | 4400 | 1.7101          |
| 1.7245        | 0.9124 | 4600 | 1.7098          |
| 1.7076        | 0.9521 | 4800 | 1.7097          |
| 1.7403        | 0.9917 | 5000 | 1.7098          |

### Framework versions

- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
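## Usage

A minimal inference sketch, assuming the standard `transformers` generation API and the chatml chat template configured during SFT; the prompt and generation settings are illustrative, not from the original card:

```python
# Minimal inference sketch (illustrative; not part of the original card).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "minpeter/tiny-ko-124m-sft"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# "Hello! Please introduce yourself."
messages = [{"role": "user", "content": "안녕하세요! 자기소개를 해주세요."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
output_ids = model.generate(input_ids, max_new_tokens=128)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```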