ArabicNewSplits6_WithDuplicationsForScore5_FineTuningAraBERT_run3_AugV5_k5_task5_organization

This model is a fine-tuned version of aubmindlab/bert-base-arabertv02 on an unspecified dataset. It achieves the following results on the evaluation set:

  • Loss: 1.3097
  • Qwk (quadratic weighted kappa): 0.5708
  • Mse (mean squared error): 1.3097
  • Rmse (root mean squared error): 1.1444
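
As a reference point, these three metrics can be reproduced with scikit-learn, where quadratic weighted kappa is Cohen's kappa with weights="quadratic". A minimal sketch; the score arrays below are hypothetical placeholders, not outputs of this model:

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score, mean_squared_error

# Placeholder arrays: integer gold scores and predictions on the same
# discrete scale (illustrative values only).
y_true = np.array([0, 1, 2, 3, 4, 2, 1])
y_pred = np.array([0, 2, 2, 3, 3, 2, 1])

qwk = cohen_kappa_score(y_true, y_pred, weights="quadratic")  # Qwk
mse = mean_squared_error(y_true, y_pred)                      # Mse
rmse = np.sqrt(mse)                                           # Rmse
print(f"QWK={qwk:.4f}  MSE={mse:.4f}  RMSE={rmse:.4f}")
```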

Model description

More information needed

Intended uses & limitations

More information needed
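
In the absence of documented usage, here is a minimal inference sketch. It assumes the checkpoint loads as a standard sequence-classification model (the head Trainer saves); since Mse/Rmse are reported, a single-logit regression head is likely, but check the model config before relying on either interpretation:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = (
    "MayBashendy/ArabicNewSplits6_WithDuplicationsForScore5_"
    "FineTuningAraBERT_run3_AugV5_k5_task5_organization"
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

text = "..."  # placeholder: an Arabic text to be scored for organization
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

# Regression head (num_labels=1): the score is the raw logit.
# Classification head: use logits.argmax(-1).item() instead.
print(logits.squeeze().item())
```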

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 2e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 10
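
These settings map directly onto transformers.TrainingArguments. The sketch below restates them in code; it is not the original training script, the output path is a placeholder, and the Adam betas/epsilon are the library defaults written out to match the list above:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./results",        # placeholder path
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,                # Adam betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,             # epsilon=1e-08
    lr_scheduler_type="linear",
    num_train_epochs=10,
)
```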

Training results

| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:------:|:----:|:---------------:|:------:|:------:|:------:|
| No log | 0.0952 | 2 | 2.2127 | 0.0505 | 2.2127 | 1.4875 |
| No log | 0.1905 | 4 | 1.5235 | 0.1352 | 1.5235 | 1.2343 |
| No log | 0.2857 | 6 | 1.5569 | 0.2115 | 1.5569 | 1.2477 |
| No log | 0.3810 | 8 | 1.6518 | 0.1952 | 1.6518 | 1.2852 |
| No log | 0.4762 | 10 | 1.6372 | 0.3114 | 1.6372 | 1.2795 |
| No log | 0.5714 | 12 | 1.5877 | 0.3147 | 1.5877 | 1.2600 |
| No log | 0.6667 | 14 | 1.5221 | 0.3678 | 1.5221 | 1.2337 |
| No log | 0.7619 | 16 | 1.5477 | 0.4165 | 1.5477 | 1.2441 |
| No log | 0.8571 | 18 | 1.4099 | 0.4027 | 1.4099 | 1.1874 |
| No log | 0.9524 | 20 | 1.3755 | 0.3988 | 1.3755 | 1.1728 |
| No log | 1.0476 | 22 | 1.4916 | 0.4315 | 1.4916 | 1.2213 |
| No log | 1.1429 | 24 | 1.5353 | 0.4262 | 1.5353 | 1.2391 |
| No log | 1.2381 | 26 | 1.5557 | 0.4225 | 1.5557 | 1.2473 |
| No log | 1.3333 | 28 | 1.4451 | 0.4281 | 1.4451 | 1.2021 |
| No log | 1.4286 | 30 | 1.3278 | 0.4405 | 1.3278 | 1.1523 |
| No log | 1.5238 | 32 | 1.3095 | 0.4308 | 1.3095 | 1.1444 |
| No log | 1.6190 | 34 | 1.2677 | 0.5006 | 1.2677 | 1.1259 |
| No log | 1.7143 | 36 | 1.4050 | 0.4463 | 1.4050 | 1.1853 |
| No log | 1.8095 | 38 | 1.3605 | 0.4727 | 1.3605 | 1.1664 |
| No log | 1.9048 | 40 | 1.1406 | 0.5189 | 1.1406 | 1.0680 |
| No log | 2.0 | 42 | 1.0212 | 0.4917 | 1.0212 | 1.0105 |
| No log | 2.0952 | 44 | 0.9574 | 0.5412 | 0.9574 | 0.9785 |
| No log | 2.1905 | 46 | 0.9566 | 0.5159 | 0.9566 | 0.9781 |
| No log | 2.2857 | 48 | 1.0169 | 0.5172 | 1.0169 | 1.0084 |
| No log | 2.3810 | 50 | 1.1335 | 0.5537 | 1.1335 | 1.0646 |
| No log | 2.4762 | 52 | 1.2471 | 0.5292 | 1.2471 | 1.1167 |
| No log | 2.5714 | 54 | 1.1569 | 0.5397 | 1.1569 | 1.0756 |
| No log | 2.6667 | 56 | 1.1601 | 0.5650 | 1.1601 | 1.0771 |
| No log | 2.7619 | 58 | 1.1899 | 0.5462 | 1.1899 | 1.0908 |
| No log | 2.8571 | 60 | 1.1537 | 0.5821 | 1.1537 | 1.0741 |
| No log | 2.9524 | 62 | 1.0254 | 0.5955 | 1.0254 | 1.0126 |
| No log | 3.0476 | 64 | 1.1521 | 0.5868 | 1.1521 | 1.0734 |
| No log | 3.1429 | 66 | 1.5385 | 0.5666 | 1.5385 | 1.2404 |
| No log | 3.2381 | 68 | 1.5761 | 0.5169 | 1.5761 | 1.2554 |
| No log | 3.3333 | 70 | 1.3900 | 0.5249 | 1.3900 | 1.1790 |
| No log | 3.4286 | 72 | 1.3083 | 0.5657 | 1.3083 | 1.1438 |
| No log | 3.5238 | 74 | 1.2615 | 0.5726 | 1.2615 | 1.1232 |
| No log | 3.6190 | 76 | 1.2670 | 0.5838 | 1.2670 | 1.1256 |
| No log | 3.7143 | 78 | 1.4406 | 0.5584 | 1.4406 | 1.2002 |
| No log | 3.8095 | 80 | 1.4132 | 0.5542 | 1.4132 | 1.1888 |
| No log | 3.9048 | 82 | 1.1833 | 0.5573 | 1.1833 | 1.0878 |
| No log | 4.0 | 84 | 1.1200 | 0.5704 | 1.1200 | 1.0583 |
| No log | 4.0952 | 86 | 1.3352 | 0.5368 | 1.3352 | 1.1555 |
| No log | 4.1905 | 88 | 1.5819 | 0.5206 | 1.5819 | 1.2577 |
| No log | 4.2857 | 90 | 1.4818 | 0.5266 | 1.4818 | 1.2173 |
| No log | 4.3810 | 92 | 1.3903 | 0.5339 | 1.3903 | 1.1791 |
| No log | 4.4762 | 94 | 1.0930 | 0.5905 | 1.0930 | 1.0455 |
| No log | 4.5714 | 96 | 0.8030 | 0.6417 | 0.8030 | 0.8961 |
| No log | 4.6667 | 98 | 0.7705 | 0.6524 | 0.7705 | 0.8778 |
| No log | 4.7619 | 100 | 0.9603 | 0.6417 | 0.9603 | 0.9799 |
| No log | 4.8571 | 102 | 1.1692 | 0.5668 | 1.1692 | 1.0813 |
| No log | 4.9524 | 104 | 1.3011 | 0.5423 | 1.3011 | 1.1406 |
| No log | 5.0476 | 106 | 1.6031 | 0.5345 | 1.6031 | 1.2661 |
| No log | 5.1429 | 108 | 1.7746 | 0.5219 | 1.7746 | 1.3322 |
| No log | 5.2381 | 110 | 1.7425 | 0.5441 | 1.7425 | 1.3200 |
| No log | 5.3333 | 112 | 1.6005 | 0.5568 | 1.6005 | 1.2651 |
| No log | 5.4286 | 114 | 1.4668 | 0.5596 | 1.4668 | 1.2111 |
| No log | 5.5238 | 116 | 1.4628 | 0.5612 | 1.4628 | 1.2094 |
| No log | 5.6190 | 118 | 1.6692 | 0.5449 | 1.6692 | 1.2920 |
| No log | 5.7143 | 120 | 1.8710 | 0.5256 | 1.8710 | 1.3678 |
| No log | 5.8095 | 122 | 1.6790 | 0.5531 | 1.6790 | 1.2958 |
| No log | 5.9048 | 124 | 1.2319 | 0.5688 | 1.2319 | 1.1099 |
| No log | 6.0 | 126 | 0.8256 | 0.6510 | 0.8256 | 0.9086 |
| No log | 6.0952 | 128 | 0.7282 | 0.6860 | 0.7282 | 0.8533 |
| No log | 6.1905 | 130 | 0.7903 | 0.6496 | 0.7903 | 0.8890 |
| No log | 6.2857 | 132 | 1.0661 | 0.6160 | 1.0661 | 1.0325 |
| No log | 6.3810 | 134 | 1.5196 | 0.5501 | 1.5196 | 1.2327 |
| No log | 6.4762 | 136 | 1.7253 | 0.5230 | 1.7253 | 1.3135 |
| No log | 6.5714 | 138 | 1.6472 | 0.5400 | 1.6472 | 1.2834 |
| No log | 6.6667 | 140 | 1.4409 | 0.5606 | 1.4409 | 1.2004 |
| No log | 6.7619 | 142 | 1.3506 | 0.5630 | 1.3506 | 1.1622 |
| No log | 6.8571 | 144 | 1.2583 | 0.6054 | 1.2583 | 1.1217 |
| No log | 6.9524 | 146 | 1.2260 | 0.6189 | 1.2260 | 1.1072 |
| No log | 7.0476 | 148 | 1.1213 | 0.6346 | 1.1213 | 1.0589 |
| No log | 7.1429 | 150 | 1.0366 | 0.6516 | 1.0366 | 1.0182 |
| No log | 7.2381 | 152 | 1.0625 | 0.6387 | 1.0625 | 1.0308 |
| No log | 7.3333 | 154 | 1.2089 | 0.6270 | 1.2089 | 1.0995 |
| No log | 7.4286 | 156 | 1.2884 | 0.6237 | 1.2884 | 1.1351 |
| No log | 7.5238 | 158 | 1.3527 | 0.6029 | 1.3527 | 1.1631 |
| No log | 7.6190 | 160 | 1.3479 | 0.6029 | 1.3479 | 1.1610 |
| No log | 7.7143 | 162 | 1.2253 | 0.6361 | 1.2253 | 1.1069 |
| No log | 7.8095 | 164 | 1.1668 | 0.6230 | 1.1668 | 1.0802 |
| No log | 7.9048 | 166 | 1.2054 | 0.6305 | 1.2054 | 1.0979 |
| No log | 8.0 | 168 | 1.3180 | 0.5986 | 1.3180 | 1.1480 |
| No log | 8.0952 | 170 | 1.3975 | 0.5918 | 1.3975 | 1.1822 |
| No log | 8.1905 | 172 | 1.3612 | 0.5986 | 1.3612 | 1.1667 |
| No log | 8.2857 | 174 | 1.2362 | 0.5984 | 1.2362 | 1.1119 |
| No log | 8.3810 | 176 | 1.1251 | 0.6282 | 1.1251 | 1.0607 |
| No log | 8.4762 | 178 | 1.0428 | 0.6559 | 1.0428 | 1.0212 |
| No log | 8.5714 | 180 | 1.0634 | 0.6374 | 1.0634 | 1.0312 |
| No log | 8.6667 | 182 | 1.1539 | 0.6311 | 1.1539 | 1.0742 |
| No log | 8.7619 | 184 | 1.2778 | 0.5956 | 1.2778 | 1.1304 |
| No log | 8.8571 | 186 | 1.3875 | 0.5883 | 1.3875 | 1.1779 |
| No log | 8.9524 | 188 | 1.4499 | 0.5804 | 1.4499 | 1.2041 |
| No log | 9.0476 | 190 | 1.4359 | 0.5796 | 1.4359 | 1.1983 |
| No log | 9.1429 | 192 | 1.4043 | 0.5824 | 1.4043 | 1.1850 |
| No log | 9.2381 | 194 | 1.3707 | 0.5700 | 1.3707 | 1.1708 |
| No log | 9.3333 | 196 | 1.3398 | 0.5700 | 1.3398 | 1.1575 |
| No log | 9.4286 | 198 | 1.3092 | 0.5708 | 1.3092 | 1.1442 |
| No log | 9.5238 | 200 | 1.2941 | 0.5921 | 1.2941 | 1.1376 |
| No log | 9.6190 | 202 | 1.2865 | 0.5921 | 1.2865 | 1.1342 |
| No log | 9.7143 | 204 | 1.2945 | 0.5782 | 1.2945 | 1.1377 |
| No log | 9.8095 | 206 | 1.3076 | 0.5708 | 1.3076 | 1.1435 |
| No log | 9.9048 | 208 | 1.3110 | 0.5708 | 1.3110 | 1.1450 |
| No log | 10.0 | 210 | 1.3097 | 0.5708 | 1.3097 | 1.1444 |
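
Per-epoch columns like Qwk/Mse/Rmse above are typically produced by a compute_metrics callback passed to Trainer. A sketch using the same scikit-learn metrics as earlier, under the unverified assumption of a single-logit regression head:

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score, mean_squared_error

def compute_metrics(eval_pred):
    predictions, labels = eval_pred        # arrays from Trainer evaluation
    preds = predictions.squeeze(-1)        # one regression logit per example
    mse = mean_squared_error(labels, preds)
    # Kappa needs discrete scores, so round predictions to the label scale.
    qwk = cohen_kappa_score(
        labels.astype(int), np.rint(preds).astype(int), weights="quadratic"
    )
    return {"qwk": qwk, "mse": mse, "rmse": float(np.sqrt(mse))}
```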

Framework versions

  • Transformers 4.44.2
  • Pytorch 2.4.0+cu118
  • Datasets 2.21.0
  • Tokenizers 0.19.1