ArabicNewSplits4_WithDuplicationsForScore5_FineTuningAraBERT_run2_AugV5_k6_task5_organization

This model is a fine-tuned version of aubmindlab/bert-base-arabertv02 on an unspecified dataset (the auto-generated card lists it as "None"). It achieves the following results on the evaluation set (a sketch of how these metrics can be computed follows the list):

  • Loss: 0.9982
  • Qwk: 0.6423
  • Mse: 0.9982
  • Rmse: 0.9991
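
Here, Qwk is quadratically weighted Cohen's kappa and Rmse is the square root of Mse. The exact metric code used during training is not published; the following is a minimal sketch, assuming scikit-learn is available and that predictions are rounded to integer score labels (an assumption, required for Cohen's kappa):

```python
# Sketch: computing QWK, MSE, and RMSE as reported above.
# Assumptions: integer score labels and scikit-learn; the actual
# compute_metrics used in training is not published.
import numpy as np
from sklearn.metrics import cohen_kappa_score, mean_squared_error

def compute_metrics(y_true, y_pred):
    qwk = cohen_kappa_score(y_true, y_pred, weights="quadratic")
    mse = mean_squared_error(y_true, y_pred)
    return {"qwk": qwk, "mse": mse, "rmse": float(np.sqrt(mse))}

# Hypothetical scores on a 0-4 scale:
print(compute_metrics([0, 1, 2, 3, 4], [0, 1, 2, 2, 3]))
```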

Model description

More information needed

Intended uses & limitations

More information needed
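
Until the authors add details, the checkpoint can be loaded like any Transformers sequence-classification model. The sketch below is an assumption: a single-score head is inferred from the MSE/RMSE-based evaluation, not confirmed by the card.

```python
# Sketch: loading the checkpoint to score Arabic text.
# Assumption: the model exposes a sequence-classification head;
# inspect model.config before relying on the outputs.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = (
    "MayBashendy/ArabicNewSplits4_WithDuplicationsForScore5_"
    "FineTuningAraBERT_run2_AugV5_k6_task5_organization"
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("نص عربي للتقييم", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits)
```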

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (an equivalent TrainingArguments sketch follows the list):

  • learning_rate: 2e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 10
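
These settings map onto Hugging Face TrainingArguments roughly as follows. This is a sketch: output_dir is an assumption, and only the listed values come from the card; everything else keeps Trainer defaults.

```python
# Sketch: TrainingArguments mirroring the hyperparameters above.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./results",      # assumed, not stated in the card
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    num_train_epochs=10,
    lr_scheduler_type="linear",
    adam_beta1=0.9,              # Adam betas/epsilon as listed above
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```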

Training results

The training loss column reads "No log" because the run's 220 optimization steps never reached the Trainer's default logging interval, so no training loss was recorded.

| Training Loss | Epoch  | Step | Validation Loss | Qwk    | Mse    | Rmse   |
|:-------------:|:------:|:----:|:---------------:|:------:|:------:|:------:|
| No log        | 0.0909 | 2    | 2.4571          | 0.0431 | 2.4571 | 1.5675 |
| No log        | 0.1818 | 4    | 1.6075          | 0.1987 | 1.6075 | 1.2679 |
| No log        | 0.2727 | 6    | 1.5243          | 0.0794 | 1.5243 | 1.2346 |
| No log        | 0.3636 | 8    | 1.5281          | 0.1333 | 1.5281 | 1.2361 |
| No log        | 0.4545 | 10   | 1.3898          | 0.1057 | 1.3898 | 1.1789 |
| No log        | 0.5455 | 12   | 1.4736          | 0.3803 | 1.4736 | 1.2139 |
| No log        | 0.6364 | 14   | 1.6078          | 0.3617 | 1.6078 | 1.2680 |
| No log        | 0.7273 | 16   | 1.5295          | 0.2779 | 1.5295 | 1.2367 |
| No log        | 0.8182 | 18   | 1.4352          | 0.1285 | 1.4352 | 1.1980 |
| No log        | 0.9091 | 20   | 1.3844          | 0.1251 | 1.3844 | 1.1766 |
| No log        | 1.0    | 22   | 1.4678          | 0.3186 | 1.4678 | 1.2115 |
| No log        | 1.0909 | 24   | 1.6105          | 0.3969 | 1.6105 | 1.2691 |
| No log        | 1.1818 | 26   | 1.4941          | 0.3700 | 1.4941 | 1.2223 |
| No log        | 1.2727 | 28   | 1.4358          | 0.3925 | 1.4358 | 1.1983 |
| No log        | 1.3636 | 30   | 1.3735          | 0.3925 | 1.3735 | 1.1720 |
| No log        | 1.4545 | 32   | 1.2695          | 0.3255 | 1.2695 | 1.1267 |
| No log        | 1.5455 | 34   | 1.2511          | 0.3460 | 1.2511 | 1.1185 |
| No log        | 1.6364 | 36   | 1.1903          | 0.3709 | 1.1903 | 1.0910 |
| No log        | 1.7273 | 38   | 1.1979          | 0.3968 | 1.1979 | 1.0945 |
| No log        | 1.8182 | 40   | 1.2000          | 0.4174 | 1.2000 | 1.0955 |
| No log        | 1.9091 | 42   | 1.3419          | 0.4389 | 1.3419 | 1.1584 |
| No log        | 2.0    | 44   | 1.4519          | 0.4430 | 1.4519 | 1.2050 |
| No log        | 2.0909 | 46   | 1.3764          | 0.4564 | 1.3764 | 1.1732 |
| No log        | 2.1818 | 48   | 1.2308          | 0.4529 | 1.2308 | 1.1094 |
| No log        | 2.2727 | 50   | 1.1207          | 0.4556 | 1.1207 | 1.0586 |
| No log        | 2.3636 | 52   | 1.0572          | 0.4802 | 1.0572 | 1.0282 |
| No log        | 2.4545 | 54   | 1.0814          | 0.4832 | 1.0814 | 1.0399 |
| No log        | 2.5455 | 56   | 1.1447          | 0.4900 | 1.1447 | 1.0699 |
| No log        | 2.6364 | 58   | 1.1987          | 0.4841 | 1.1987 | 1.0948 |
| No log        | 2.7273 | 60   | 1.3127          | 0.4785 | 1.3127 | 1.1457 |
| No log        | 2.8182 | 62   | 1.3550          | 0.4655 | 1.3550 | 1.1640 |
| No log        | 2.9091 | 64   | 1.5381          | 0.4626 | 1.5381 | 1.2402 |
| No log        | 3.0    | 66   | 1.6078          | 0.4660 | 1.6078 | 1.2680 |
| No log        | 3.0909 | 68   | 1.5683          | 0.4642 | 1.5683 | 1.2523 |
| No log        | 3.1818 | 70   | 1.5598          | 0.4569 | 1.5598 | 1.2489 |
| No log        | 3.2727 | 72   | 1.5264          | 0.4792 | 1.5264 | 1.2355 |
| No log        | 3.3636 | 74   | 1.4247          | 0.4965 | 1.4247 | 1.1936 |
| No log        | 3.4545 | 76   | 1.4767          | 0.4872 | 1.4767 | 1.2152 |
| No log        | 3.5455 | 78   | 1.4526          | 0.4841 | 1.4526 | 1.2052 |
| No log        | 3.6364 | 80   | 1.4722          | 0.4841 | 1.4722 | 1.2133 |
| No log        | 3.7273 | 82   | 1.5581          | 0.4942 | 1.5581 | 1.2482 |
| No log        | 3.8182 | 84   | 1.5138          | 0.5096 | 1.5138 | 1.2304 |
| No log        | 3.9091 | 86   | 1.3878          | 0.5117 | 1.3878 | 1.1781 |
| No log        | 4.0    | 88   | 1.2469          | 0.5312 | 1.2469 | 1.1166 |
| No log        | 4.0909 | 90   | 1.2315          | 0.5402 | 1.2315 | 1.1097 |
| No log        | 4.1818 | 92   | 1.3219          | 0.5297 | 1.3219 | 1.1498 |
| No log        | 4.2727 | 94   | 1.3127          | 0.5390 | 1.3127 | 1.1457 |
| No log        | 4.3636 | 96   | 1.2446          | 0.5595 | 1.2446 | 1.1156 |
| No log        | 4.4545 | 98   | 1.1616          | 0.5526 | 1.1616 | 1.0778 |
| No log        | 4.5455 | 100  | 1.0895          | 0.5791 | 1.0895 | 1.0438 |
| No log        | 4.6364 | 102  | 1.1468          | 0.5845 | 1.1468 | 1.0709 |
| No log        | 4.7273 | 104  | 1.3648          | 0.5495 | 1.3648 | 1.1683 |
| No log        | 4.8182 | 106  | 1.4355          | 0.5617 | 1.4355 | 1.1981 |
| No log        | 4.9091 | 108  | 1.3550          | 0.5524 | 1.3550 | 1.1640 |
| No log        | 5.0    | 110  | 1.1431          | 0.5856 | 1.1431 | 1.0691 |
| No log        | 5.0909 | 112  | 1.0188          | 0.5935 | 1.0188 | 1.0094 |
| No log        | 5.1818 | 114  | 0.9836          | 0.5982 | 0.9836 | 0.9918 |
| No log        | 5.2727 | 116  | 1.1156          | 0.5656 | 1.1156 | 1.0562 |
| No log        | 5.3636 | 118  | 1.2847          | 0.5458 | 1.2847 | 1.1335 |
| No log        | 5.4545 | 120  | 1.2343          | 0.5528 | 1.2343 | 1.1110 |
| No log        | 5.5455 | 122  | 1.1021          | 0.5779 | 1.1021 | 1.0498 |
| No log        | 5.6364 | 124  | 0.9394          | 0.6342 | 0.9394 | 0.9692 |
| No log        | 5.7273 | 126  | 0.9255          | 0.6532 | 0.9255 | 0.9620 |
| No log        | 5.8182 | 128  | 1.0522          | 0.6109 | 1.0522 | 1.0258 |
| No log        | 5.9091 | 130  | 1.1885          | 0.5947 | 1.1885 | 1.0902 |
| No log        | 6.0    | 132  | 1.2561          | 0.5866 | 1.2561 | 1.1208 |
| No log        | 6.0909 | 134  | 1.1813          | 0.6022 | 1.1813 | 1.0869 |
| No log        | 6.1818 | 136  | 1.0507          | 0.6024 | 1.0507 | 1.0250 |
| No log        | 6.2727 | 138  | 0.9105          | 0.6696 | 0.9105 | 0.9542 |
| No log        | 6.3636 | 140  | 0.8282          | 0.6864 | 0.8282 | 0.9100 |
| No log        | 6.4545 | 142  | 0.8512          | 0.6732 | 0.8512 | 0.9226 |
| No log        | 6.5455 | 144  | 0.9644          | 0.6153 | 0.9644 | 0.9821 |
| No log        | 6.6364 | 146  | 1.1002          | 0.5959 | 1.1002 | 1.0489 |
| No log        | 6.7273 | 148  | 1.1975          | 0.5677 | 1.1975 | 1.0943 |
| No log        | 6.8182 | 150  | 1.1683          | 0.5787 | 1.1683 | 1.0809 |
| No log        | 6.9091 | 152  | 1.0314          | 0.6065 | 1.0314 | 1.0156 |
| No log        | 7.0    | 154  | 0.8967          | 0.6681 | 0.8967 | 0.9469 |
| No log        | 7.0909 | 156  | 0.8251          | 0.6938 | 0.8251 | 0.9084 |
| No log        | 7.1818 | 158  | 0.8390          | 0.6903 | 0.8390 | 0.9160 |
| No log        | 7.2727 | 160  | 0.9329          | 0.6573 | 0.9329 | 0.9659 |
| No log        | 7.3636 | 162  | 1.0999          | 0.5868 | 1.0999 | 1.0488 |
| No log        | 7.4545 | 164  | 1.1954          | 0.5718 | 1.1954 | 1.0934 |
| No log        | 7.5455 | 166  | 1.1717          | 0.5598 | 1.1717 | 1.0825 |
| No log        | 7.6364 | 168  | 1.1496          | 0.5585 | 1.1496 | 1.0722 |
| No log        | 7.7273 | 170  | 1.0852          | 0.5845 | 1.0852 | 1.0417 |
| No log        | 7.8182 | 172  | 1.0450          | 0.6159 | 1.0450 | 1.0222 |
| No log        | 7.9091 | 174  | 1.0310          | 0.5973 | 1.0310 | 1.0154 |
| No log        | 8.0    | 176  | 1.0380          | 0.6141 | 1.0380 | 1.0188 |
| No log        | 8.0909 | 178  | 1.0623          | 0.6042 | 1.0623 | 1.0307 |
| No log        | 8.1818 | 180  | 1.0369          | 0.6264 | 1.0369 | 1.0183 |
| No log        | 8.2727 | 182  | 1.0114          | 0.6250 | 1.0114 | 1.0057 |
| No log        | 8.3636 | 184  | 0.9989          | 0.6250 | 0.9989 | 0.9994 |
| No log        | 8.4545 | 186  | 0.9732          | 0.6318 | 0.9732 | 0.9865 |
| No log        | 8.5455 | 188  | 0.9457          | 0.6452 | 0.9457 | 0.9725 |
| No log        | 8.6364 | 190  | 0.9551          | 0.6397 | 0.9551 | 0.9773 |
| No log        | 8.7273 | 192  | 0.9909          | 0.6485 | 0.9909 | 0.9955 |
| No log        | 8.8182 | 194  | 1.0315          | 0.6296 | 1.0315 | 1.0156 |
| No log        | 8.9091 | 196  | 1.0599          | 0.6224 | 1.0599 | 1.0295 |
| No log        | 9.0    | 198  | 1.1012          | 0.6089 | 1.1012 | 1.0494 |
| No log        | 9.0909 | 200  | 1.1135          | 0.6089 | 1.1135 | 1.0552 |
| No log        | 9.1818 | 202  | 1.0957          | 0.6128 | 1.0957 | 1.0468 |
| No log        | 9.2727 | 204  | 1.0828          | 0.6128 | 1.0828 | 1.0406 |
| No log        | 9.3636 | 206  | 1.0654          | 0.6224 | 1.0654 | 1.0322 |
| No log        | 9.4545 | 208  | 1.0471          | 0.6237 | 1.0471 | 1.0233 |
| No log        | 9.5455 | 210  | 1.0317          | 0.6237 | 1.0317 | 1.0157 |
| No log        | 9.6364 | 212  | 1.0199          | 0.6368 | 1.0199 | 1.0099 |
| No log        | 9.7273 | 214  | 1.0089          | 0.6423 | 1.0089 | 1.0045 |
| No log        | 9.8182 | 216  | 1.0022          | 0.6423 | 1.0022 | 1.0011 |
| No log        | 9.9091 | 218  | 0.9990          | 0.6423 | 0.9990 | 0.9995 |
| No log        | 10.0   | 220  | 0.9982          | 0.6423 | 0.9982 | 0.9991 |

Framework versions

  • Transformers 4.44.2
  • Pytorch 2.4.0+cu118
  • Datasets 2.21.0
  • Tokenizers 0.19.1