---
library_name: transformers
base_model: minpeter/pretrained-tiny-ko
tags:
  - axolotl
  - generated_from_trainer
datasets:
  - lemon-mint/Korean-FineTome-100k
  - lemon-mint/smol-koreantalk
model-index:
  - name: ko-tiny-exp
    results: []
---

Built with [Axolotl](https://github.com/axolotl-ai-cloud/axolotl).

The axolotl config used for this run (axolotl version `0.10.0.dev0`):

```yaml
base_model: minpeter/pretrained-tiny-ko

chat_template: chatml
datasets:
  - path: lemon-mint/Korean-FineTome-100k
    type: chat_template
    split: train[:20%]
    field_messages: messages
    message_property_mappings:
      role: role
      content: content
  - path: lemon-mint/smol-koreantalk
    type: chat_template
    split: train[:20%]
    field_messages: messages
    message_property_mappings:
      role: role
      content: content
dataset_prepared_path: last_run_prepared
val_set_size: 0.05

hub_model_id: minpeter/ko-tiny-exp
output_dir: ./ouputs/ko-tiny-exp
wandb_project: "axolotl"
wandb_entity: "kasfiekfs-e"

save_steps: 200
warmup_steps: 100
eval_steps: 200

sequence_len: 1024
sample_packing: true
pad_to_sequence_len: true

gradient_accumulation_steps: 4
micro_batch_size: 32

optimizer: paged_adamw_8bit
lr_scheduler: cosine
learning_rate: 2e-5

bf16: auto
tf32: false

added_tokens_overrides:
  128001: "<|im_end|>"
  128002: "<|im_start|>"

special_tokens:
  bos_token: <|begin_of_text|>
  eos_token: <|im_end|>
  pad_token: <|im_end|>

gradient_checkpointing: true
gradient_checkpointing_kwargs:
  use_reentrant: false
resume_from_checkpoint:
logging_steps: 1
flash_attention: true

num_epochs: 2
weight_decay: 0.0
```
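
The `chat_template: chatml` setting means each training sample is rendered in ChatML form using the `<|im_start|>`/`<|im_end|>` tokens registered under `special_tokens` and `added_tokens_overrides`. A minimal sketch of that layout (illustrative only; the tokenizer's own `apply_chat_template` is the authoritative renderer):

```python
# Minimal sketch of the ChatML layout implied by chat_template: chatml
# and the <|im_start|>/<|im_end|> special tokens above. Illustrative only.
def to_chatml(messages, add_generation_prompt=True):
    parts = [f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n" for m in messages]
    if add_generation_prompt:
        parts.append("<|im_start|>assistant\n")  # cue the model to respond
    return "".join(parts)

print(to_chatml([{"role": "user", "content": "안녕하세요!"}]))
```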
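
The `split: train[:20%]` entries use the Hugging Face `datasets` split-slicing syntax, so only the first 20% of each dataset's train split is used. A quick sketch of loading the same slices directly (illustrative, not part of the training code):

```python
# Load the same 20% slices the config trains on.
from datasets import load_dataset

finetome = load_dataset("lemon-mint/Korean-FineTome-100k", split="train[:20%]")
koreantalk = load_dataset("lemon-mint/smol-koreantalk", split="train[:20%]")

# Per the field_messages / message_property_mappings settings above, each
# example is expected to carry a `messages` list of {role, content} dicts.
print(len(finetome), len(koreantalk))
```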

# ko-tiny-exp

This model is a fine-tuned version of [minpeter/pretrained-tiny-ko](https://huggingface.co/minpeter/pretrained-tiny-ko) on the [lemon-mint/Korean-FineTome-100k](https://huggingface.co/datasets/lemon-mint/Korean-FineTome-100k) and [lemon-mint/smol-koreantalk](https://huggingface.co/datasets/lemon-mint/smol-koreantalk) datasets. It achieves the following results on the evaluation set:

- Loss: 3.6038
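
A minimal inference sketch, assuming the checkpoint pushed to `minpeter/ko-tiny-exp` (the `hub_model_id` above) ships with the ChatML chat template configured for this run:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "minpeter/ko-tiny-exp"  # hub_model_id from the config above
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# "Where is the capital of Korea?" -- any chat-style prompt works here.
messages = [{"role": "user", "content": "한국의 수도는 어디인가요?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
output_ids = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```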

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 4
- total_train_batch_size: 512 (see the breakdown after this list)
- total_eval_batch_size: 128
- optimizer: PAGED_ADAMW_8BIT with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 102
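
The effective batch sizes follow directly from the per-device settings; a quick sanity check using only values from the list above:

```python
# Effective batch sizes implied by the hyperparameters above.
micro_batch_size = 32               # per-device train (and eval) batch
gradient_accumulation_steps = 4
num_devices = 4

total_train_batch_size = micro_batch_size * gradient_accumulation_steps * num_devices
total_eval_batch_size = micro_batch_size * num_devices  # eval skips accumulation

assert total_train_batch_size == 512
assert total_eval_batch_size == 128
```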

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 3.5674        | 0.0193 | 1    | 3.6038          |

### Framework versions

- Transformers 4.51.3
- PyTorch 2.6.0+cu124
- Datasets 3.5.1
- Tokenizers 0.21.1