idt5-base-mcqg-RACE-id

This model is a fine-tuned version of muchad/idt5-base on an unspecified dataset. It achieves the following results on the evaluation set:

  • Loss: 1.1419
  • Rouge1: 0.3828
  • Rouge2: 0.2464
  • RougeL: 0.3635
  • RougeLsum: 0.3687
  • Bleu: 0.2053
  • Rouge All: {'rouge1': 0.38280297280337106, 'rouge2': 0.2463717712635165, 'rougeL': 0.36348416422814667, 'rougeLsum': 0.3686870784292893}
  • Bleu All: {'bleu': 0.20529625464163276, 'precisions': [0.544206419519389, 0.4080415448721935, 0.3448035357165733, 0.29241227961620014], 'brevity_penalty': 0.5307282267142313, 'length_ratio': 0.6121804805088057, 'translation_length': 181727, 'reference_length': 296852}
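
The BLEU score is held down mainly by the brevity penalty: the generated text is only about 61% as long as the references, so the score is scaled by roughly 0.53. The reported penalty can be reproduced from the lengths in the Bleu All dictionary using the standard BLEU definition (brevity_penalty = exp(1 − reference_length / translation_length) when the candidate is shorter); the snippet below is just that arithmetic check.

```python
import math

# Lengths taken from the "Bleu All" dictionary above.
translation_length = 181727
reference_length = 296852

# Standard BLEU brevity penalty: exp(1 - r/c) when the candidate is shorter than the reference.
brevity_penalty = math.exp(1 - reference_length / translation_length)

print(translation_length / reference_length)  # length_ratio ≈ 0.6122
print(brevity_penalty)                        # ≈ 0.5307, matching the reported value
```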

Model description

More information needed

Intended uses & limitations

More information needed
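
No usage documentation is provided yet. As a rough illustration only, the model can presumably be loaded as a standard T5-style seq2seq model through transformers; the repo id below is taken from the model tree at the end of this card, while the passage-in, question-out prompt format and generation settings are assumptions, since the actual input format used during fine-tuning is not documented.

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Hypothetical usage sketch; the expected input format for question generation is an assumption.
model_id = "hawalurahman/idt5-base-mcqg-RACE-id"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

passage = "Candi Borobudur adalah candi Buddha terbesar di dunia yang terletak di Magelang."
inputs = tokenizer(passage, return_tensors="pt", truncation=True)

# Beam search is a common choice for question generation; the parameters are illustrative.
outputs = model.generate(**inputs, max_new_tokens=64, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```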

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

  • learning_rate: 0.0001
  • train_batch_size: 4
  • eval_batch_size: 4
  • seed: 42
  • optimizer: adamw_torch_fused with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: linear
  • num_epochs: 5
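
As a minimal sketch, these settings could be expressed with Seq2SeqTrainingArguments roughly as below; argument names follow the Transformers 4.56 API, and anything not listed above (output directory, evaluation strategy, predict_with_generate) is a placeholder assumption rather than a documented value.

```python
from transformers import Seq2SeqTrainingArguments

# Sketch only: reproduces the hyperparameters listed above.
# output_dir, eval_strategy, and predict_with_generate are assumptions.
training_args = Seq2SeqTrainingArguments(
    output_dir="idt5-base-mcqg-RACE-id",
    learning_rate=1e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    optim="adamw_torch_fused",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=5,
    eval_strategy="epoch",
    predict_with_generate=True,
)
```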

Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | RougeL | RougeLsum | Bleu | Rouge All | Bleu All |
|:-------------:|:-----:|:------:|:---------------:|:------:|:------:|:------:|:---------:|:------:|:---------:|:--------:|
| 1.5379 | 1.0 | 36284 | 1.2265 | 0.3371 | 0.2094 | 0.3205 | 0.3253 | 0.1983 | {'rouge1': 0.33711648528419197, 'rouge2': 0.20935180236052028, 'rougeL': 0.3205203116477598, 'rougeLsum': 0.32529545761604484} | {'bleu': 0.19831282890007307, 'precisions': [0.5217314036276394, 0.3946142969363708, 0.33979358871868, 0.29601867266779885], 'brevity_penalty': 0.5227719650702349, 'length_ratio': 0.6065716249174673, 'translation_length': 180062, 'reference_length': 296852} |
| 1.3644 | 2.0 | 72568 | 1.1698 | 0.3630 | 0.2295 | 0.3447 | 0.3498 | 0.2039 | {'rouge1': 0.3629660167163082, 'rouge2': 0.2295266351540305, 'rougeL': 0.3446608461843371, 'rougeLsum': 0.34976374071944816} | {'bleu': 0.20392626154873386, 'precisions': [0.5382410077317733, 0.4070086261940831, 0.3470977823014178, 0.2976925955182306], 'brevity_penalty': 0.52574250563914, 'length_ratio': 0.6086635764623449, 'translation_length': 180683, 'reference_length': 296852} |
| 1.2742 | 3.0 | 108852 | 1.1465 | 0.3728 | 0.2366 | 0.3537 | 0.3587 | 0.2035 | {'rouge1': 0.3728293856857868, 'rouge2': 0.2365878815089809, 'rougeL': 0.35374960442490494, 'rougeLsum': 0.3587096713960251} | {'bleu': 0.20345989395248457, 'precisions': [0.5398174482439315, 0.40446504788430504, 0.34243731944791184, 0.2920075779511404], 'brevity_penalty': 0.5293013871683313, 'length_ratio': 0.6111732445797906, 'translation_length': 181428, 'reference_length': 296852} |
| 1.1868 | 4.0 | 145136 | 1.1392 | 0.3788 | 0.2418 | 0.3597 | 0.3647 | 0.2045 | {'rouge1': 0.37877643387013765, 'rouge2': 0.2417733244099297, 'rougeL': 0.35970895267024183, 'rougeLsum': 0.3646545306105341} | {'bleu': 0.20448179885293769, 'precisions': [0.5452928078502008, 0.4092357483817759, 0.34584093592358056, 0.2932385759829968], 'brevity_penalty': 0.5272049129237822, 'length_ratio': 0.6096943931656179, 'translation_length': 180989, 'reference_length': 296852} |
| 1.1429 | 5.0 | 181420 | 1.1419 | 0.3828 | 0.2464 | 0.3635 | 0.3687 | 0.2053 | {'rouge1': 0.38280297280337106, 'rouge2': 0.2463717712635165, 'rougeL': 0.36348416422814667, 'rougeLsum': 0.3686870784292893} | {'bleu': 0.20529625464163276, 'precisions': [0.544206419519389, 0.4080415448721935, 0.3448035357165733, 0.29241227961620014], 'brevity_penalty': 0.5307282267142313, 'length_ratio': 0.6121804805088057, 'translation_length': 181727, 'reference_length': 296852} |
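
The per-epoch ROUGE and BLEU dictionaries above follow the output format of the evaluate library. The exact compute_metrics function used for this run is not documented; the sketch below only illustrates how such metrics are typically computed from decoded predictions and references, with toy inputs.

```python
import evaluate

rouge = evaluate.load("rouge")
bleu = evaluate.load("bleu")

# Illustrative inputs; during training these would be decoded predictions and label texts.
predictions = ["apa nama candi buddha terbesar di dunia?"]
references = ["apa nama candi buddha terbesar di dunia yang ada di magelang?"]

rouge_scores = rouge.compute(predictions=predictions, references=references)
bleu_scores = bleu.compute(predictions=predictions, references=[[r] for r in references])

print(rouge_scores)  # keys: rouge1, rouge2, rougeL, rougeLsum (as in "Rouge All")
print(bleu_scores)   # keys: bleu, precisions, brevity_penalty, length_ratio, ... (as in "Bleu All")
```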

Framework versions

  • Transformers 4.56.2
  • PyTorch 2.8.0+cu129
  • Datasets 4.2.0
  • Tokenizers 0.22.1

Model size

  • 0.2B parameters (Safetensors, F32 tensors)

Model tree for hawalurahman/idt5-base-mcqg-RACE-id

  • Base model: muchad/idt5-base