---
library_name: transformers
tags:
  - generated_from_trainer
datasets:
  - AlexHung29629/train_0415_input_output
model-index:
  - name: placeholder_sft/
    results: []
---

Built with Axolotl

See axolotl config

axolotl version: 0.8.1

```yaml
base_model: ./placeholder_embed/merged/

plugins:
  - axolotl.integrations.liger.LigerPlugin
liger_rope: true
liger_rms_norm: true
liger_glu_activation: true
liger_fused_linear_cross_entropy: true

datasets:
  - path: AlexHung29629/train_0415_input_output
    type: input_output
dataset_prepared_path: ./sft_dataprep/
val_set_size: 0
output_dir: ./placeholder_sft/
shuffle_merged_datasets: false

#eval_steps: 10
#eval_strategy:

sequence_len: 32768
sample_packing: true
eval_sample_packing: false
pad_to_sequence_len: true

wandb_project: Reasoning_TP1_2025
wandb_entity:
wandb_watch:
wandb_name: Mistral-24B-SFT-Reasoning-250414_sft
wandb_log_model:

gradient_accumulation_steps: 2
micro_batch_size: 1
num_epochs: 5
optimizer: adamw_torch_fused
lr_scheduler: cosine
learning_rate: 1e-5
max_grad_norm: 1.0

adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-8

bf16: true
tf32: false

gradient_checkpointing: true
gradient_checkpointing_kwargs:
  use_reentrant: true
logging_steps: 1
flash_attention: true
xformers_attention: false
sdp_attention: false

warmup_ratio: 0.05
saves_per_epoch: 1
save_total_limit: 5
weight_decay: 0.1
deepspeed: /mnt/shared/twsc/alex/reasoning/zero3_bf16.json
special_tokens:
  pad_token: "<pad>"
#added_tokens_overrides:  # Dict[int, str]
#  20: "<think>"
#  21: "</think>"

seed: 42
```
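
The `deepspeed` entry points to a ZeRO-3 bf16 config on a local path whose contents are not included here. As a rough sketch only, a typical ZeRO-3 bf16 file of the kind Axolotl ships (the real `zero3_bf16.json` used for this run may differ) can be written out like this:

```python
import json

# Hypothetical reconstruction of a ZeRO-3 bf16 DeepSpeed config; the actual
# zero3_bf16.json referenced above is not published, so treat this as a sketch.
zero3_bf16 = {
    "bf16": {"enabled": "auto"},
    "zero_optimization": {
        "stage": 3,
        "overlap_comm": True,
        "contiguous_gradients": True,
        "reduce_bucket_size": "auto",
        "stage3_prefetch_bucket_size": "auto",
        "stage3_param_persistence_threshold": "auto",
        "stage3_gather_16bit_weights_on_model_save": True,
    },
    "gradient_accumulation_steps": "auto",
    "gradient_clipping": "auto",
    "train_batch_size": "auto",
    "train_micro_batch_size_per_gpu": "auto",
}

with open("zero3_bf16.json", "w") as f:
    json.dump(zero3_bf16, f, indent=2)
```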

# placeholder_sft/

This model was fine-tuned from the merged checkpoint at `./placeholder_embed/merged/` (the `base_model` in the config above) on the AlexHung29629/train_0415_input_output dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed
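
No usage guidance is given by the author. As a minimal, unverified sketch, the final checkpoint written to `./placeholder_sft/` (the `output_dir` above) should load through the standard `transformers` API; the prompt format is assumed to match whatever template is baked into the training data:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Path assumed from output_dir in the Axolotl config above; swap in the Hub
# repo id if the checkpoint was uploaded instead of used locally.
checkpoint = "./placeholder_sft/"

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(
    checkpoint,
    torch_dtype=torch.bfloat16,  # the run was trained in bf16
    device_map="auto",
)

# The expected prompt format depends on the template baked into the
# input_output training data; plain text is used here only as a placeholder.
prompt = "Hello!"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```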

## Training and evaluation data

More information needed
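
The run consumes `AlexHung29629/train_0415_input_output` through Axolotl's template-free `input_output` format, in which each record is a list of text segments and a boolean `label` marks whether a segment contributes to the loss. A purely illustrative sketch of one such record (the real dataset contents and prompt template are not reproduced here):

```python
import json

# Illustrative record in Axolotl's "input_output" (template-free) format:
# segments with "label": false are masked out of the loss; segments with
# "label": true are trained on.
record = {
    "segments": [
        {"label": False, "text": "<s>[INST] Example question goes here [/INST]"},
        {"label": True, "text": " Example reasoning and answer go here</s>"},
    ]
}

with open("train.jsonl", "a") as f:
    f.write(json.dumps(record) + "\n")
```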

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 64
- gradient_accumulation_steps: 2
- total_train_batch_size: 128 (see the quick check below)
- total_eval_batch_size: 64
- optimizer: adamw_torch_fused with betas=(0.9, 0.95), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 246
- num_epochs: 5.0
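
As a quick sanity check, the aggregate values above follow directly from the per-device settings in the config:

```python
# Quick check of the aggregate numbers against the per-device settings from
# the Axolotl config (nothing here is measured training output).
micro_batch_size = 1
gradient_accumulation_steps = 2
num_devices = 64
sequence_len = 32768

total_train_batch_size = micro_batch_size * gradient_accumulation_steps * num_devices
print(total_train_batch_size)                 # 128

# With sample packing to 32768 tokens, one optimizer step covers at most:
print(total_train_batch_size * sequence_len)  # 4_194_304 tokens

# warmup_steps = warmup_ratio * total optimizer steps, so 246 warmup steps at
# a 0.05 ratio implies roughly 246 / 0.05 ≈ 4920 optimizer steps over 5 epochs.
print(round(246 / 0.05))                      # 4920
```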

### Training results

### Framework versions

- Transformers 4.51.0
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1