ArabicNewSplits6_WithDuplicationsForScore5_FineTuningAraBERT_run1_AugV5_k2_task5_organization

This model is a fine-tuned version of aubmindlab/bert-base-arabertv02 (the dataset is not specified in the auto-generated card). It achieves the following results on the evaluation set:

  • Loss: 0.5599
  • Qwk: 0.7576
  • Mse: 0.5599
  • Rmse: 0.7483
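Qwk is the quadratic weighted kappa between predicted and gold scores, Mse matches the evaluation loss, and Rmse is its square root (√0.5599 ≈ 0.7483). A minimal pure-Python sketch of the three metrics, using hypothetical label arrays rather than the actual evaluation data:

```python
import math

def qwk(y_true, y_pred, n_classes):
    """Quadratic weighted kappa between two integer label sequences."""
    O = [[0] * n_classes for _ in range(n_classes)]  # observed confusion matrix
    for t, p in zip(y_true, y_pred):
        O[t][p] += 1
    hist_t = [sum(row) for row in O]                 # gold label histogram
    hist_p = [sum(O[i][j] for i in range(n_classes)) for j in range(n_classes)]
    total = len(y_true)
    num = den = 0.0
    for i in range(n_classes):
        for j in range(n_classes):
            w = (i - j) ** 2 / (n_classes - 1) ** 2  # quadratic disagreement weight
            e = hist_t[i] * hist_p[j] / total        # expected count under independence
            num += w * O[i][j]
            den += w * e
    return 1.0 - num / den

def mse(y_true, y_pred):
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

# Hypothetical scores on a 0-4 scale (illustration only)
gold = [0, 1, 2, 3, 4, 2, 1, 3]
pred = [0, 1, 2, 2, 4, 3, 1, 3]
print(round(qwk(gold, pred, 5), 4),
      round(mse(gold, pred), 4),
      round(math.sqrt(mse(gold, pred)), 4))  # 0.9167 0.25 0.5
```

Perfect agreement gives Qwk = 1, chance-level agreement gives 0, and the quadratic weights penalize large score disagreements more than adjacent ones.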

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 2e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 10
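With lr_scheduler_type linear and no warmup steps listed, the learning rate decays linearly from 2e-05 toward 0 over training. A small sketch of that schedule, in the spirit of transformers' get_linear_schedule_with_warmup; the total of 110 optimizer steps is taken from the results table:

```python
def linear_lr(step, base_lr=2e-05, total_steps=110, warmup_steps=0):
    """Linear warmup (here none) followed by linear decay to zero."""
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    return base_lr * max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))

print(linear_lr(0), linear_lr(55), linear_lr(110))  # 2e-05 1e-05 0.0
```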

Training results

| Training Loss | Epoch  | Step | Validation Loss | Qwk     | Mse    | Rmse   |
|:-------------:|:------:|:----:|:---------------:|:-------:|:------:|:------:|
| No log        | 0.1818 | 2    | 2.1694          | -0.0366 | 2.1694 | 1.4729 |
| No log        | 0.3636 | 4    | 1.4346          | 0.1959  | 1.4346 | 1.1978 |
| No log        | 0.5455 | 6    | 1.2657          | 0.2497  | 1.2657 | 1.1250 |
| No log        | 0.7273 | 8    | 1.2423          | 0.4151  | 1.2423 | 1.1146 |
| No log        | 0.9091 | 10   | 1.5205          | 0.3585  | 1.5205 | 1.2331 |
| No log        | 1.0909 | 12   | 1.2622          | 0.4184  | 1.2622 | 1.1235 |
| No log        | 1.2727 | 14   | 1.0897          | 0.3905  | 1.0897 | 1.0439 |
| No log        | 1.4545 | 16   | 1.1344          | 0.4487  | 1.1344 | 1.0651 |
| No log        | 1.6364 | 18   | 0.9575          | 0.4233  | 0.9575 | 0.9785 |
| No log        | 1.8182 | 20   | 1.0376          | 0.5218  | 1.0376 | 1.0187 |
| No log        | 2.0    | 22   | 1.2512          | 0.4901  | 1.2512 | 1.1186 |
| No log        | 2.1818 | 24   | 1.0978          | 0.5275  | 1.0978 | 1.0478 |
| No log        | 2.3636 | 26   | 0.8167          | 0.5936  | 0.8167 | 0.9037 |
| No log        | 2.5455 | 28   | 0.9145          | 0.4704  | 0.9145 | 0.9563 |
| No log        | 2.7273 | 30   | 1.0989          | 0.4569  | 1.0989 | 1.0483 |
| No log        | 2.9091 | 32   | 0.9705          | 0.4817  | 0.9705 | 0.9851 |
| No log        | 3.0909 | 34   | 0.7527          | 0.6103  | 0.7527 | 0.8676 |
| No log        | 3.2727 | 36   | 0.8026          | 0.6287  | 0.8026 | 0.8959 |
| No log        | 3.4545 | 38   | 0.9104          | 0.6462  | 0.9104 | 0.9541 |
| No log        | 3.6364 | 40   | 0.8651          | 0.6471  | 0.8651 | 0.9301 |
| No log        | 3.8182 | 42   | 0.7628          | 0.6837  | 0.7628 | 0.8734 |
| No log        | 4.0    | 44   | 0.6630          | 0.6766  | 0.6630 | 0.8142 |
| No log        | 4.1818 | 46   | 0.6394          | 0.7006  | 0.6394 | 0.7996 |
| No log        | 4.3636 | 48   | 0.6506          | 0.6719  | 0.6506 | 0.8066 |
| No log        | 4.5455 | 50   | 0.6237          | 0.6827  | 0.6237 | 0.7898 |
| No log        | 4.7273 | 52   | 0.5885          | 0.7497  | 0.5885 | 0.7672 |
| No log        | 4.9091 | 54   | 0.5796          | 0.7607  | 0.5796 | 0.7613 |
| No log        | 5.0909 | 56   | 0.6241          | 0.7228  | 0.6241 | 0.7900 |
| No log        | 5.2727 | 58   | 0.6761          | 0.7586  | 0.6761 | 0.8222 |
| No log        | 5.4545 | 60   | 0.6633          | 0.7606  | 0.6633 | 0.8144 |
| No log        | 5.6364 | 62   | 0.6068          | 0.7483  | 0.6068 | 0.7790 |
| No log        | 5.8182 | 64   | 0.5693          | 0.7344  | 0.5693 | 0.7545 |
| No log        | 6.0    | 66   | 0.6615          | 0.6681  | 0.6615 | 0.8133 |
| No log        | 6.1818 | 68   | 0.6916          | 0.6510  | 0.6916 | 0.8316 |
| No log        | 6.3636 | 70   | 0.6160          | 0.7311  | 0.6160 | 0.7849 |
| No log        | 6.5455 | 72   | 0.5592          | 0.7629  | 0.5592 | 0.7478 |
| No log        | 6.7273 | 74   | 0.5857          | 0.7493  | 0.5857 | 0.7653 |
| No log        | 6.9091 | 76   | 0.6051          | 0.7458  | 0.6051 | 0.7779 |
| No log        | 7.0909 | 78   | 0.5858          | 0.7760  | 0.5858 | 0.7654 |
| No log        | 7.2727 | 80   | 0.5591          | 0.7746  | 0.5591 | 0.7477 |
| No log        | 7.4545 | 82   | 0.5613          | 0.7449  | 0.5613 | 0.7492 |
| No log        | 7.6364 | 84   | 0.5693          | 0.7436  | 0.5693 | 0.7546 |
| No log        | 7.8182 | 86   | 0.5705          | 0.7466  | 0.5705 | 0.7553 |
| No log        | 8.0    | 88   | 0.5652          | 0.7438  | 0.5652 | 0.7518 |
| No log        | 8.1818 | 90   | 0.5581          | 0.7518  | 0.5581 | 0.7470 |
| No log        | 8.3636 | 92   | 0.5549          | 0.7553  | 0.5549 | 0.7449 |
| No log        | 8.5455 | 94   | 0.5592          | 0.7619  | 0.5592 | 0.7478 |
| No log        | 8.7273 | 96   | 0.5735          | 0.7585  | 0.5735 | 0.7573 |
| No log        | 8.9091 | 98   | 0.5814          | 0.7622  | 0.5814 | 0.7625 |
| No log        | 9.0909 | 100  | 0.5798          | 0.7622  | 0.5798 | 0.7615 |
| No log        | 9.2727 | 102  | 0.5714          | 0.7633  | 0.5714 | 0.7559 |
| No log        | 9.4545 | 104  | 0.5667          | 0.7633  | 0.5667 | 0.7528 |
| No log        | 9.6364 | 106  | 0.5621          | 0.7576  | 0.5621 | 0.7498 |
| No log        | 9.8182 | 108  | 0.5605          | 0.7576  | 0.5605 | 0.7487 |
| No log        | 10.0   | 110  | 0.5599          | 0.7576  | 0.5599 | 0.7483 |
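The step column implies 11 optimizer steps per epoch (110 steps over 10 epochs), which with train_batch_size 8 bounds the training set at roughly 81 to 88 examples. A back-of-the-envelope check (derived from the numbers above, not stated in the card):

```python
total_steps, num_epochs, batch_size = 110, 10, 8
steps_per_epoch = total_steps // num_epochs  # 11 batches per epoch
# A dataset of N examples yields ceil(N / batch_size) batches per epoch,
# so N lies in ((steps_per_epoch - 1) * batch_size, steps_per_epoch * batch_size].
n_min = (steps_per_epoch - 1) * batch_size + 1
n_max = steps_per_epoch * batch_size
print(steps_per_epoch, n_min, n_max)  # 11 81 88
```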

Framework versions

  • Transformers 4.44.2
  • Pytorch 2.4.0+cu118
  • Datasets 2.21.0
  • Tokenizers 0.19.1