---
library_name: peft
license: other
base_model: THUDM/GLM-4-32B-Base-0414
tags:
  - llama-factory
  - lora
  - generated_from_trainer
model-index:
  - name: train_2025-05-09-12-42-39
    results: []
---

# train_2025-05-09-12-42-39

This model is a LoRA fine-tuned version of [THUDM/GLM-4-32B-Base-0414](https://huggingface.co/THUDM/GLM-4-32B-Base-0414) on the `combined_train250428` dataset. It achieves the following results on the evaluation set:

- Loss: 1.3562
- Num input tokens seen: 7631160

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 32
- optimizer: adamw_torch (betas=(0.9, 0.999), epsilon=1e-08, no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 25
- num_epochs: 3.0
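The hyperparameters above can be sanity-checked with a small sketch: the effective batch size is the per-device batch size times the gradient accumulation steps, and the `cosine` scheduler (the Transformers default: linear warmup, then one half-cosine decay to zero) can be reproduced directly. The total step count of ~1055 is an assumption extrapolated from the results table below (step 1000 at epoch 2.84), not a value stated on this card.

```python
import math

# Hyperparameters taken from this card.
LEARNING_RATE = 5e-5
TRAIN_BATCH_SIZE = 2
GRAD_ACCUM_STEPS = 16
WARMUP_STEPS = 25
TOTAL_STEPS = 1055  # assumption: ~3 epochs at ~352 optimizer steps/epoch

# Effective batch size on a single device: 2 * 16 = 32,
# matching total_train_batch_size above.
effective_batch = TRAIN_BATCH_SIZE * GRAD_ACCUM_STEPS


def lr_at(step: int,
          base_lr: float = LEARNING_RATE,
          warmup: int = WARMUP_STEPS,
          total: int = TOTAL_STEPS) -> float:
    """Linear warmup followed by a half-cosine decay to zero."""
    if step < warmup:
        return base_lr * step / warmup
    progress = (step - warmup) / (total - warmup)
    return base_lr * 0.5 * (1.0 + math.cos(math.pi * progress))
```

At step 25 the learning rate peaks at 5e-05, then decays smoothly toward zero by the end of epoch 3.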

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Input Tokens Seen |
|:-------------:|:------:|:----:|:---------------:|:-----------------:|
| 1.5879        | 0.2846 | 100  | 1.5276          | 717216            |
| 1.3262        | 0.5693 | 200  | 1.4325          | 1461152           |
| 1.3266        | 0.8539 | 300  | 1.3622          | 2180304           |
| 0.8971        | 1.1366 | 400  | 1.3433          | 2898704           |
| 0.9203        | 1.4213 | 500  | 1.3057          | 3633296           |
| 0.8591        | 1.7059 | 600  | 1.2764          | 4368752           |
| 0.8213        | 1.9906 | 700  | 1.2493          | 5075424           |
| 0.471         | 2.2733 | 800  | 1.3562          | 5787128           |
| 0.4611        | 2.5579 | 900  | 1.3563          | 6517896           |
| 0.461         | 2.8426 | 1000 | 1.3560          | 7242520           |

### Framework versions

- PEFT 0.15.1
- Transformers 4.51.3
- PyTorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
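Since this repository contains LoRA adapter weights rather than a full model, inference requires loading the base model first and attaching the adapter with PEFT. The sketch below assumes the adapter lives at `TOTORONG/GLM4_32B_LoRA` (inferred from the repository name, not stated on this card) and that bfloat16 with automatic device placement is acceptable for your hardware.

```python
BASE_MODEL = "THUDM/GLM-4-32B-Base-0414"
ADAPTER = "TOTORONG/GLM4_32B_LoRA"  # assumed adapter repo id


def load_model(base: str = BASE_MODEL, adapter: str = ADAPTER):
    """Load the base model and attach this repo's LoRA adapter on top."""
    # Heavy imports are kept inside the function so the module stays cheap
    # to import; loading a 32B model itself needs substantial GPU memory.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel

    tokenizer = AutoTokenizer.from_pretrained(base)
    model = AutoModelForCausalLM.from_pretrained(
        base,
        torch_dtype=torch.bfloat16,
        device_map="auto",
    )
    # PeftModel wraps the frozen base model with the trained LoRA weights.
    model = PeftModel.from_pretrained(model, adapter)
    return tokenizer, model
```

For deployment, `model.merge_and_unload()` can fold the LoRA weights into the base model so it serves like an ordinary checkpoint.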