C2_12k_random_sample

This model is a fine-tuned version of Qwen/Qwen2.5-7B-Instruct on the C2_12k_random_sample dataset. It achieves the following results on the evaluation set:

  • Loss: 0.2589
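
A minimal inference sketch, assuming the repository id shown in this card's model tree (lemonhat/Qwen2.5-7B-Instruct-C2_12k_random_sample) and the standard Qwen2.5 chat template; adjust the prompt and generation settings to your use case:

```python
# Minimal inference sketch (assumptions: repo id as listed on this card,
# standard Qwen2.5 chat template, a GPU with enough memory).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "lemonhat/Qwen2.5-7B-Instruct-C2_12k_random_sample"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # checkpoint is stored in BF16 per the card metadata
    device_map="auto",
)

messages = [{"role": "user", "content": "Summarize what a cosine LR schedule does."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=256)
# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```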

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a reproduction sketch follows the list):

  • learning_rate: 5e-06
  • train_batch_size: 1
  • eval_batch_size: 1
  • seed: 42
  • distributed_type: multi-GPU
  • num_devices: 4
  • total_train_batch_size: 4
  • total_eval_batch_size: 4
  • optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: cosine
  • num_epochs: 1
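
A hedged reproduction of this configuration with the Hugging Face Trainer (assumed, since the card follows the Trainer's auto-generated format; output_dir, bf16, and the eval/logging cadence are assumptions inferred from the card, not stated in it):

```python
# Reproduction sketch of the hyperparameters above. Launch with e.g.
# `torchrun --nproc_per_node=4 train.py` to match the 4-device multi-GPU
# setup (1 sample/device * 4 devices = total batch size 4).
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="Qwen2.5-7B-Instruct-C2_12k_random_sample",  # assumed name
    learning_rate=5e-6,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    num_train_epochs=1,
    bf16=True,               # assumption: matches the BF16 checkpoint dtype
    eval_strategy="steps",
    eval_steps=100,          # assumption: matches the 100-step cadence in the results table
    logging_steps=100,
)
```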

Training results

Training Loss   Epoch    Step   Validation Loss
0.3904          0.0418   100    0.3497
0.3128          0.0836   200    0.3307
0.2616          0.1255   300    0.3212
0.2342          0.1673   400    0.3136
0.2543          0.2091   500    0.3083
0.3405          0.2509   600    0.3062
0.2475          0.2928   700    0.3003
0.3254          0.3346   800    0.2890
0.2794          0.3764   900    0.2863
0.2511          0.4182   1000   0.2890
0.2998          0.4601   1100   0.2855
0.2563          0.5019   1200   0.2773
0.2902          0.5437   1300   0.2755
0.2236          0.5855   1400   0.2724
0.2059          0.6274   1500   0.2706
0.2070          0.6692   1600   0.2668
0.2610          0.7110   1700   0.2655
0.2599          0.7528   1800   0.2637
0.2684          0.7946   1900   0.2624
0.3109          0.8365   2000   0.2608
0.2679          0.8783   2100   0.2598
0.2271          0.9201   2200   0.2586
0.2401          0.9619   2300   0.2591
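
For a quick visual check of convergence, a short sketch that re-plots the validation-loss column above (values copied from the table; matplotlib is assumed to be installed):

```python
# Plot the validation-loss curve from the results table on this card.
import matplotlib.pyplot as plt

steps = list(range(100, 2400, 100))  # 100..2300, matching the table rows
val_loss = [0.3497, 0.3307, 0.3212, 0.3136, 0.3083, 0.3062, 0.3003, 0.2890,
            0.2863, 0.2890, 0.2855, 0.2773, 0.2755, 0.2724, 0.2706, 0.2668,
            0.2655, 0.2637, 0.2624, 0.2608, 0.2598, 0.2586, 0.2591]

plt.plot(steps, val_loss, marker="o")
plt.xlabel("Step")
plt.ylabel("Validation loss")
plt.title("C2_12k_random_sample fine-tune: validation loss")
plt.show()
```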

Framework versions

  • Transformers 4.46.1
  • PyTorch 2.6.0+cu124
  • Datasets 3.1.0
  • Tokenizers 0.20.3

Safetensors

  • Model size: 8B params
  • Tensor type: BF16
Model tree for lemonhat/Qwen2.5-7B-Instruct-C2_12k_random_sample

  • Base model: Qwen/Qwen2.5-7B
  • This model: a fine-tune of the base model