ArabicNewSplits6_WithDuplicationsForScore5_FineTuningAraBERT_run1_AugV5_k2_task1_organization

This model is a fine-tuned version of aubmindlab/bert-base-arabertv02. The training dataset is not documented. It achieves the following results on the evaluation set:

  • Loss: 0.6117
  • Qwk: 0.7456
  • Mse: 0.6117
  • Rmse: 0.7821
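
Qwk is the quadratically weighted Cohen's kappa, a standard agreement metric for ordinal scoring tasks; note that the reported Loss equals the Mse, which suggests a regression head trained with MSE loss. As a minimal sketch of how these metrics can be computed (the label arrays below are illustrative placeholders, not this model's actual evaluation outputs):

```python
# Sketch of computing Qwk / Mse / Rmse with scikit-learn.
# y_true / y_pred are hypothetical examples, not this model's data.
import numpy as np
from sklearn.metrics import cohen_kappa_score, mean_squared_error

y_true = np.array([0, 1, 2, 2, 3, 1])  # gold ordinal scores (placeholder)
y_pred = np.array([0, 1, 1, 2, 3, 2])  # model predictions (placeholder)

qwk = cohen_kappa_score(y_true, y_pred, weights="quadratic")
mse = mean_squared_error(y_true, y_pred)
rmse = np.sqrt(mse)

print(f"Qwk: {qwk:.4f}  Mse: {mse:.4f}  Rmse: {rmse:.4f}")
```

If the head is indeed a regressor, its continuous outputs would typically be rounded or clipped to the ordinal score scale before computing Qwk.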

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 2e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 10
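
A minimal sketch of how these hyperparameters map onto the Hugging Face Trainer API. The dataset, label placeholders, and the single-output regression head (num_labels=1) are assumptions, since the training data and task head are not documented in this card; the Adam betas and epsilon above are the Trainer's AdamW defaults.

```python
# Sketch of a Trainer setup matching the hyperparameters above.
# The texts/labels are placeholders: the real training set is undocumented.
from datasets import Dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

BASE = "aubmindlab/bert-base-arabertv02"
tokenizer = AutoTokenizer.from_pretrained(BASE)
# Assumption: single regression output, consistent with Loss == Mse.
model = AutoModelForSequenceClassification.from_pretrained(BASE, num_labels=1)

texts = ["نص تجريبي أول", "نص تجريبي ثان"]   # placeholder Arabic inputs
enc = tokenizer(texts, truncation=True, padding=True)
train_ds = Dataset.from_dict({**enc, "labels": [3.0, 4.0]})  # float labels -> MSE loss
eval_ds = train_ds  # placeholder

args = TrainingArguments(
    output_dir="out",
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=10,
    eval_strategy="steps",
    eval_steps=2,  # matches the evaluation every 2 steps in the results table
    # optimizer: Trainer's default AdamW already uses betas=(0.9, 0.999), eps=1e-8
)

trainer = Trainer(model=model, args=args,
                  train_dataset=train_ds, eval_dataset=eval_ds)
trainer.train()
```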

Training results

| Training Loss | Epoch  | Step | Validation Loss | Qwk     | Mse    | Rmse   |
|:-------------:|:------:|:----:|:---------------:|:-------:|:------:|:------:|
| No log        | 0.1333 | 2    | 5.4527          | -0.0139 | 5.4527 | 2.3351 |
| No log        | 0.2667 | 4    | 3.2715          | 0.0501  | 3.2715 | 1.8087 |
| No log        | 0.4    | 6    | 1.8490          | 0.1298  | 1.8490 | 1.3598 |
| No log        | 0.5333 | 8    | 1.2821          | 0.2568  | 1.2821 | 1.1323 |
| No log        | 0.6667 | 10   | 1.2051          | 0.2012  | 1.2051 | 1.0978 |
| No log        | 0.8    | 12   | 1.1555          | 0.1975  | 1.1555 | 1.0749 |
| No log        | 0.9333 | 14   | 0.9413          | 0.3173  | 0.9413 | 0.9702 |
| No log        | 1.0667 | 16   | 1.5950          | 0.1925  | 1.5950 | 1.2629 |
| No log        | 1.2    | 18   | 2.2138          | 0.1657  | 2.2138 | 1.4879 |
| No log        | 1.3333 | 20   | 1.6966          | 0.2259  | 1.6966 | 1.3025 |
| No log        | 1.4667 | 22   | 1.0577          | 0.3418  | 1.0577 | 1.0284 |
| No log        | 1.6    | 24   | 0.9208          | 0.3673  | 0.9208 | 0.9596 |
| No log        | 1.7333 | 26   | 1.4168          | 0.1032  | 1.4168 | 1.1903 |
| No log        | 1.8667 | 28   | 1.7921          | -0.2887 | 1.7921 | 1.3387 |
| No log        | 2.0    | 30   | 1.8481          | -0.1721 | 1.8481 | 1.3595 |
| No log        | 2.1333 | 32   | 1.5824          | -0.2097 | 1.5824 | 1.2579 |
| No log        | 2.2667 | 34   | 1.3191          | 0.1688  | 1.3191 | 1.1485 |
| No log        | 2.4    | 36   | 1.1859          | 0.2394  | 1.1859 | 1.0890 |
| No log        | 2.5333 | 38   | 1.1241          | 0.2184  | 1.1241 | 1.0602 |
| No log        | 2.6667 | 40   | 1.0754          | 0.2768  | 1.0754 | 1.0370 |
| No log        | 2.8    | 42   | 1.0082          | 0.3197  | 1.0082 | 1.0041 |
| No log        | 2.9333 | 44   | 0.9232          | 0.4152  | 0.9232 | 0.9608 |
| No log        | 3.0667 | 46   | 0.8529          | 0.5306  | 0.8529 | 0.9235 |
| No log        | 3.2    | 48   | 0.7879          | 0.5387  | 0.7879 | 0.8876 |
| No log        | 3.3333 | 50   | 0.8242          | 0.5068  | 0.8242 | 0.9079 |
| No log        | 3.4667 | 52   | 1.1057          | 0.4080  | 1.1057 | 1.0515 |
| No log        | 3.6    | 54   | 1.2133          | 0.4081  | 1.2133 | 1.1015 |
| No log        | 3.7333 | 56   | 1.1090          | 0.4306  | 1.1090 | 1.0531 |
| No log        | 3.8667 | 58   | 0.7512          | 0.6279  | 0.7512 | 0.8667 |
| No log        | 4.0    | 60   | 0.6585          | 0.6620  | 0.6585 | 0.8115 |
| No log        | 4.1333 | 62   | 0.6849          | 0.6268  | 0.6849 | 0.8276 |
| No log        | 4.2667 | 64   | 0.6080          | 0.6957  | 0.6080 | 0.7797 |
| No log        | 4.4    | 66   | 0.6475          | 0.6885  | 0.6475 | 0.8047 |
| No log        | 4.5333 | 68   | 0.8333          | 0.6393  | 0.8333 | 0.9129 |
| No log        | 4.6667 | 70   | 0.8627          | 0.6441  | 0.8627 | 0.9288 |
| No log        | 4.8    | 72   | 0.8206          | 0.6385  | 0.8206 | 0.9059 |
| No log        | 4.9333 | 74   | 0.7169          | 0.6793  | 0.7169 | 0.8467 |
| No log        | 5.0667 | 76   | 0.6083          | 0.7066  | 0.6083 | 0.7799 |
| No log        | 5.2    | 78   | 0.5742          | 0.6991  | 0.5742 | 0.7577 |
| No log        | 5.3333 | 80   | 0.5829          | 0.7007  | 0.5829 | 0.7635 |
| No log        | 5.4667 | 82   | 0.6245          | 0.7205  | 0.6245 | 0.7903 |
| No log        | 5.6    | 84   | 0.6558          | 0.6985  | 0.6558 | 0.8098 |
| No log        | 5.7333 | 86   | 0.6811          | 0.6818  | 0.6811 | 0.8253 |
| No log        | 5.8667 | 88   | 0.7301          | 0.6774  | 0.7301 | 0.8544 |
| No log        | 6.0    | 90   | 0.8089          | 0.6589  | 0.8089 | 0.8994 |
| No log        | 6.1333 | 92   | 0.9372          | 0.6263  | 0.9372 | 0.9681 |
| No log        | 6.2667 | 94   | 0.9184          | 0.6386  | 0.9184 | 0.9583 |
| No log        | 6.4    | 96   | 0.7856          | 0.6752  | 0.7856 | 0.8864 |
| No log        | 6.5333 | 98   | 0.7259          | 0.6855  | 0.7259 | 0.8520 |
| No log        | 6.6667 | 100  | 0.6704          | 0.7280  | 0.6704 | 0.8188 |
| No log        | 6.8    | 102  | 0.6351          | 0.7285  | 0.6351 | 0.7969 |
| No log        | 6.9333 | 104  | 0.6206          | 0.7373  | 0.6206 | 0.7878 |
| No log        | 7.0667 | 106  | 0.6260          | 0.7458  | 0.6260 | 0.7912 |
| No log        | 7.2    | 108  | 0.6522          | 0.7135  | 0.6522 | 0.8076 |
| No log        | 7.3333 | 110  | 0.6539          | 0.7135  | 0.6539 | 0.8086 |
| No log        | 7.4667 | 112  | 0.6824          | 0.7101  | 0.6824 | 0.8261 |
| No log        | 7.6    | 114  | 0.7159          | 0.6963  | 0.7159 | 0.8461 |
| No log        | 7.7333 | 116  | 0.7115          | 0.6963  | 0.7115 | 0.8435 |
| No log        | 7.8667 | 118  | 0.6866          | 0.6918  | 0.6866 | 0.8286 |
| No log        | 8.0    | 120  | 0.6556          | 0.7145  | 0.6556 | 0.8097 |
| No log        | 8.1333 | 122  | 0.6334          | 0.7348  | 0.6334 | 0.7959 |
| No log        | 8.2667 | 124  | 0.6155          | 0.7426  | 0.6155 | 0.7845 |
| No log        | 8.4    | 126  | 0.6157          | 0.7427  | 0.6157 | 0.7847 |
| No log        | 8.5333 | 128  | 0.6156          | 0.7441  | 0.6156 | 0.7846 |
| No log        | 8.6667 | 130  | 0.6172          | 0.7477  | 0.6172 | 0.7856 |
| No log        | 8.8    | 132  | 0.6114          | 0.7485  | 0.6114 | 0.7819 |
| No log        | 8.9333 | 134  | 0.6069          | 0.7456  | 0.6069 | 0.7790 |
| No log        | 9.0667 | 136  | 0.6066          | 0.7456  | 0.6066 | 0.7789 |
| No log        | 9.2    | 138  | 0.6093          | 0.7413  | 0.6093 | 0.7806 |
| No log        | 9.3333 | 140  | 0.6111          | 0.7471  | 0.6111 | 0.7817 |
| No log        | 9.4667 | 142  | 0.6116          | 0.7413  | 0.6116 | 0.7820 |
| No log        | 9.6    | 144  | 0.6116          | 0.7413  | 0.6116 | 0.7820 |
| No log        | 9.7333 | 146  | 0.6120          | 0.7413  | 0.6120 | 0.7823 |
| No log        | 9.8667 | 148  | 0.6119          | 0.7413  | 0.6119 | 0.7822 |
| No log        | 10.0   | 150  | 0.6117          | 0.7456  | 0.6117 | 0.7821 |
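
A sketch of loading the final checkpoint from the Hub for inference. The sequence-classification class and the single-output regression head are assumptions based on the metrics above (Loss == Mse); the score range and any rounding rule are not documented in this card.

```python
# Sketch of loading this checkpoint for inference.
# Assumes a single-output regression head; score scale is undocumented.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

repo_id = (
    "MayBashendy/ArabicNewSplits6_WithDuplicationsForScore5_"
    "FineTuningAraBERT_run1_AugV5_k2_task1_organization"
)
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSequenceClassification.from_pretrained(repo_id)
model.eval()

inputs = tokenizer("نص عربي للتقييم", return_tensors="pt")  # placeholder input
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()
print(f"predicted score: {score:.2f}")
```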

Framework versions

  • Transformers 4.44.2
  • PyTorch 2.4.0+cu118
  • Datasets 2.21.0
  • Tokenizers 0.19.1
Model details

  • Model size: 0.1B parameters
  • Tensor type: F32 (safetensors)