---
library_name: transformers
base_model: AlexHung29629/Mistral-Small-3.1-24B-Instruct-2503-text
tags:
  - axolotl
  - generated_from_trainer
model-index:
  - name: model_step1
    results: []
---

Built with Axolotl

See axolotl config

axolotl version: `0.8.0`

```yaml
base_model: AlexHung29629/Mistral-Small-3.1-24B-Instruct-2503-text

plugins:
  - axolotl.integrations.liger.LigerPlugin
liger_rope: true
liger_rms_norm: true
liger_glu_activation: true
liger_fused_linear_cross_entropy: true

unfrozen_parameters:
  - lm_head.weight
  - model.embed_tokens.weight

datasets:
  - path: AlexHung29629/train_0415_input_output
    type: input_output
  - path: AlexHung29629/glaive-function-calling-v2-mistral
    type:
      system_prompt: ""
      field_system: system
      field_instruction: input
      field_output: output
      format: "{instruction}"
      no_input_format: "{instruction}"

dataset_prepared_path: ./sft_dataprep/
val_set_size: 0
output_dir: ./placeholder_embed/
shuffle_merged_datasets: false

sequence_len: 32768
sample_packing: true
eval_sample_packing: false
pad_to_sequence_len: true

hub_model_id: AlexHung29629/model_step1

wandb_project: TP1_2025_05
wandb_entity:
wandb_watch:
wandb_name: Mistral-24B-SFT-250522_embed
wandb_log_model: checkpoint
use_tensorboard: true

gradient_accumulation_steps: 1
micro_batch_size: 1
num_epochs: 1
max_steps: 1000
optimizer: adamw_torch
lr_scheduler: cosine
learning_rate: 2e-5
max_grad_norm: 1.0

bf16: true
tf32: false

#gradient_checkpointing: false
#gradient_checkpointing_kwargs:
#  use_reentrant: false
logging_steps: 1
flash_attention: true
xformers_attention: false
sdp_attention: false

warmup_ratio: 0.01
saves_per_epoch:
save_steps: 1000
weight_decay: 0
#deepspeed: deepspeed_configs/zero3_bf16_cpuoffload_all.json

fsdp:
  - full_shard
  - auto_wrap
fsdp_config:
  fsdp_limit_all_gathers: true
  fsdp_sync_module_states: true
  fsdp_offload_params: true
  fsdp_use_orig_params: true
  fsdp_cpu_ram_efficient_loading: true
  fsdp_activation_checkpointing: true
  fsdp_transformer_layer_cls_to_wrap: MistralDecoderLayer
  fsdp_state_dict_type: FULL_STATE_DICT
  fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP


special_tokens:
  pad_token: "<pad>"
added_tokens_overrides:  # Dict[int, str]
  20: "<think>"
  21: "</think>"

seed: 42
```
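
For readers who want to see what the `unfrozen_parameters`, `special_tokens`, and `added_tokens_overrides` sections above amount to outside of Axolotl, here is a minimal sketch in plain transformers. It is illustrative only and not Axolotl's own loading code; the variable names are made up for this example.

```python
# Minimal sketch (not Axolotl internals): mirror the `unfrozen_parameters`
# section by freezing everything except the token embeddings and the LM head.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

BASE = "AlexHung29629/Mistral-Small-3.1-24B-Instruct-2503-text"
UNFROZEN = {"lm_head.weight", "model.embed_tokens.weight"}

model = AutoModelForCausalLM.from_pretrained(BASE, torch_dtype=torch.bfloat16)

for name, param in model.named_parameters():
    param.requires_grad = name in UNFROZEN

# Should list only the embedding and LM-head weights.
print([n for n, p in model.named_parameters() if p.requires_grad])

# `added_tokens_overrides` maps token ids 20 and 21 to "<think>" and "</think>";
# on the finished model the tokenizer should reflect this, e.g.:
# AutoTokenizer.from_pretrained("AlexHung29629/model_step1").convert_ids_to_tokens([20, 21])
```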

model_step1

This model is a fine-tuned version of AlexHung29629/Mistral-Small-3.1-24B-Instruct-2503-text, trained on the AlexHung29629/train_0415_input_output and AlexHung29629/glaive-function-calling-v2-mistral datasets listed in the Axolotl config above.

Model description

Per the Axolotl config above, only `model.embed_tokens.weight` and `lm_head.weight` are unfrozen in this step, so this checkpoint mainly updates the token embeddings and LM head, including the `<think>` and `</think>` tokens mapped onto ids 20 and 21 via `added_tokens_overrides`.

Intended uses & limitations

More information needed
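
No usage guidance is published for this checkpoint. As a hedged example only, it can presumably be loaded like any transformers causal LM; the prompt and generation settings below are placeholders, not recommendations.

```python
# Hedged example: load the published checkpoint as a standard causal LM.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "AlexHung29629/model_step1"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo, torch_dtype=torch.bfloat16, device_map="auto"
)

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
# skip_special_tokens=False so any <think>/</think> tokens remain visible.
print(tokenizer.decode(output[0], skip_special_tokens=False))
```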

Training and evaluation data

Training used the two datasets from the Axolotl config above; `val_set_size` is 0, so no evaluation split was held out.
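
Both datasets are hosted on the Hugging Face Hub and, assuming they are publicly accessible, can be inspected with the datasets library (the `split="train"` choice below is an assumption):

```python
# Inspect the two training datasets (assuming public access on the Hub).
from datasets import load_dataset

io_ds = load_dataset("AlexHung29629/train_0415_input_output", split="train")
fc_ds = load_dataset("AlexHung29629/glaive-function-calling-v2-mistral", split="train")
print(io_ds.column_names, fc_ds.column_names)
```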

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 2e-05
  • train_batch_size: 1
  • eval_batch_size: 1
  • seed: 42
  • distributed_type: multi-GPU
  • num_devices: 8
  • total_train_batch_size: 8
  • total_eval_batch_size: 8
  • optimizer: AdamW (adamw_torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_steps: 10
  • training_steps: 1000
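
These values correspond roughly to the following transformers TrainingArguments. This is an approximate reconstruction for reference, not the exact object Axolotl builds internally.

```python
# Approximate reconstruction of the hyperparameters above (reference only;
# Axolotl constructs its own training arguments).
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="./placeholder_embed/",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=1,
    learning_rate=2e-5,
    lr_scheduler_type="cosine",
    warmup_steps=10,
    max_steps=1000,
    optim="adamw_torch",
    weight_decay=0.0,
    max_grad_norm=1.0,
    bf16=True,
    logging_steps=1,
    save_steps=1000,
    seed=42,
)
```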

Framework versions

  • Transformers 4.51.0
  • Pytorch 2.7.0+cu128
  • Datasets 3.5.0
  • Tokenizers 0.21.1
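
To reproduce the training environment, pin the packages to the versions above; a quick sanity check could look like this:

```python
# Verify that the local environment matches the versions listed above.
import datasets, tokenizers, torch, transformers

print("transformers", transformers.__version__)  # 4.51.0
print("torch", torch.__version__)                # 2.7.0+cu128
print("datasets", datasets.__version__)          # 3.5.0
print("tokenizers", tokenizers.__version__)      # 0.21.1
```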