---
library_name: transformers
license: other
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: LLM_Rec_Qwen2.5_7B_full_sft
  results: []
---
# LLM_Rec_Qwen2.5_7B_full_sft

This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the llm_rec dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0229
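
The snippet below shows one way to load this checkpoint for chat-style inference with `transformers`. It is a minimal sketch: the repository id is a placeholder for wherever the checkpoint is hosted, and the example prompt merely guesses at the recommendation-style inputs implied by the llm_rec dataset name.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "LLM_Rec_Qwen2.5_7B_full_sft"  # placeholder: use the actual Hub repo id or local path

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# Qwen2.5-Instruct checkpoints ship a chat template, which a full SFT
# presumably preserves; apply it rather than formatting prompts by hand.
messages = [{"role": "user", "content": "I enjoyed 'The Martian' and 'Project Hail Mary'. What should I read next?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
# Decode only the newly generated tokens, not the echoed prompt.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```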

## Model description

A full-parameter supervised fine-tune (SFT) of Qwen/Qwen2.5-7B-Instruct, produced with LLaMA-Factory (per the `llama-factory` and `full` tags) on the llm_rec dataset. The architecture is unchanged from the base model; only the weights were updated, so the base model's tokenizer and chat template should carry over.

## Intended uses & limitations

The dataset name (llm_rec) suggests this checkpoint is specialized for recommendation-oriented instruction following. Only the validation loss on the llm_rec evaluation split is reported: behavior on general chat, factuality, and safety has not been re-evaluated after fine-tuning, so the base model's documented limitations should be assumed to apply, possibly amplified in domains far from the fine-tuning data.

## Training and evaluation data

Training and evaluation both used the llm_rec dataset; the loss above is computed on its evaluation split. The dataset's size, source, and composition are not documented in this card.

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (see the `TrainingArguments` sketch after the list):
- learning_rate: 5e-06
- train_batch_size: 1
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- total_eval_batch_size: 32
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 3.0
- mixed_precision_training: Native AMP
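
The hyperparameters above map onto Hugging Face `TrainingArguments` roughly as in the sketch below. This is a hedged reconstruction, not the original launch configuration: the run was driven through LLaMA-Factory, the output path is a placeholder, and "Native AMP" is rendered here as `fp16=True` although the run may equally have used `bf16=True`.

```python
from transformers import TrainingArguments

# Approximate reconstruction of the reported hyperparameters.
# The output path and the fp16-vs-bf16 choice are assumptions.
args = TrainingArguments(
    output_dir="LLM_Rec_Qwen2.5_7B_full_sft",  # placeholder output path
    per_device_train_batch_size=1,   # train_batch_size
    per_device_eval_batch_size=4,    # eval_batch_size
    gradient_accumulation_steps=2,   # 1 per device x 8 GPUs x 2 steps = 16 total
    learning_rate=5e-6,
    num_train_epochs=3.0,
    lr_scheduler_type="cosine",
    warmup_ratio=0.05,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    seed=42,
    fp16=True,  # "Native AMP"; bf16=True is equally plausible on recent GPUs
)
```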

### Training results

| | Training Loss | Epoch | Step | Validation Loss | |
| |:-------------:|:------:|:----:|:---------------:| |
| | 0.0968 | 0.0746 | 5 | 0.0690 | |
| | 0.0346 | 0.1493 | 10 | 0.0366 | |
| | 0.0256 | 0.2239 | 15 | 0.0321 | |
| | 0.0328 | 0.2985 | 20 | 0.0302 | |
| | 0.0287 | 0.3731 | 25 | 0.0280 | |
| | 0.0221 | 0.4478 | 30 | 0.0275 | |
| | 0.0347 | 0.5224 | 35 | 0.0265 | |
| | 0.0212 | 0.5970 | 40 | 0.0255 | |
| | 0.0245 | 0.6716 | 45 | 0.0254 | |
| | 0.0209 | 0.7463 | 50 | 0.0244 | |
| | 0.0242 | 0.8209 | 55 | 0.0239 | |
| | 0.0226 | 0.8955 | 60 | 0.0236 | |
| | 0.0208 | 0.9701 | 65 | 0.0232 | |
| | 0.0135 | 1.0448 | 70 | 0.0229 | |
| | 0.0141 | 1.1194 | 75 | 0.0234 | |
| | 0.0165 | 1.1940 | 80 | 0.0235 | |
| | 0.0173 | 1.2687 | 85 | 0.0233 | |
| | 0.0123 | 1.3433 | 90 | 0.0231 | |
| | 0.0145 | 1.4179 | 95 | 0.0232 | |
| | 0.0154 | 1.4925 | 100 | 0.0226 | |
| | 0.0147 | 1.5672 | 105 | 0.0224 | |
| | 0.0132 | 1.6418 | 110 | 0.0228 | |
| | 0.0155 | 1.7164 | 115 | 0.0227 | |
| | 0.0149 | 1.7910 | 120 | 0.0221 | |
| | 0.0169 | 1.8657 | 125 | 0.0219 | |
| | 0.0136 | 1.9403 | 130 | 0.0218 | |
| | 0.0139 | 2.0149 | 135 | 0.0219 | |
| | 0.0101 | 2.0896 | 140 | 0.0220 | |
| | 0.0087 | 2.1642 | 145 | 0.0222 | |
| | 0.0089 | 2.2388 | 150 | 0.0223 | |
| | 0.0112 | 2.3134 | 155 | 0.0225 | |
| | 0.0083 | 2.3881 | 160 | 0.0227 | |
| | 0.008 | 2.4627 | 165 | 0.0227 | |
| | 0.0109 | 2.5373 | 170 | 0.0228 | |
| | 0.0103 | 2.6119 | 175 | 0.0228 | |
| | 0.008 | 2.6866 | 180 | 0.0229 | |
| | 0.0103 | 2.7612 | 185 | 0.0229 | |
| | 0.008 | 2.8358 | 190 | 0.0229 | |
| | 0.0109 | 2.9104 | 195 | 0.0229 | |
| | 0.009 | 2.9851 | 200 | 0.0229 | |

### Framework versions

- Transformers 4.46.1
- PyTorch 2.5.0
- Datasets 3.1.0
- Tokenizers 0.20.3
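
Since exact versions can matter when reloading a full fine-tune, a quick programmatic check against the pins above may save debugging time. This is a convenience sketch, not part of the original card:

```python
# Compare the local environment against the versions this card reports.
# Mismatches are often harmless, but they are the first thing to rule out
# if loading or generation misbehaves.
import datasets
import tokenizers
import torch
import transformers

expected = {
    "transformers": "4.46.1",
    "torch": "2.5.0",
    "datasets": "3.1.0",
    "tokenizers": "0.20.3",
}
modules = {
    "transformers": transformers,
    "torch": torch,
    "datasets": datasets,
    "tokenizers": tokenizers,
}
for name, want in expected.items():
    have = modules[name].__version__
    note = "OK" if have.startswith(want) else f"card was built with {want}"
    print(f"{name}: {have} ({note})")
```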