ArabicNewSplits6_FineTuningAraBERT_run3_AugV5_k2_task5_organization

This model is a fine-tuned version of aubmindlab/bert-base-arabertv02 on an unspecified dataset. It achieves the following results on the evaluation set (a metric-computation sketch follows the list):

  • Loss: 0.9081
  • Qwk: 0.6293
  • Mse: 0.9081
  • Rmse: 0.9529
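
A minimal sketch of how these metrics could be computed, assuming a regression-style scoring head and integer gold scores; the rounding step and the metric function itself are assumptions, not documented by this card:

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score, mean_squared_error

def compute_metrics(eval_pred):
    """Return the Qwk/Mse/Rmse triple reported above (sketch only)."""
    predictions, labels = eval_pred
    preds = np.asarray(predictions).squeeze(-1)   # single regression output per example
    mse = mean_squared_error(labels, preds)
    rmse = float(np.sqrt(mse))
    # Qwk = quadratic weighted kappa, computed on integer-rounded scores
    qwk = cohen_kappa_score(
        np.rint(labels).astype(int),
        np.rint(preds).astype(int),
        weights="quadratic",
    )
    return {"qwk": qwk, "mse": mse, "rmse": rmse}
```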

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a matching TrainingArguments sketch follows the list):

  • learning_rate: 2e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 10
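
A minimal TrainingArguments sketch matching the values above, assuming the standard Hugging Face Trainer API was used; output_dir and every option not listed are placeholders:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="ArabicNewSplits6_FineTuningAraBERT_run3_AugV5_k2_task5_organization",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    num_train_epochs=10,
    lr_scheduler_type="linear",
    # Adam betas=(0.9, 0.999) and epsilon=1e-08 are the Trainer defaults
    # (adam_beta1 / adam_beta2 / adam_epsilon), so they need no explicit setting.
)
```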

Training results

| Training Loss | Epoch  | Step | Validation Loss | Qwk    | Mse    | Rmse   |
|:-------------:|:------:|:----:|:---------------:|:------:|:------:|:------:|
| No log        | 0.1818 | 2    | 2.3189          | 0.0143 | 2.3189 | 1.5228 |
| No log        | 0.3636 | 4    | 1.6061          | 0.1453 | 1.6061 | 1.2673 |
| No log        | 0.5455 | 6    | 1.4858          | 0.1096 | 1.4858 | 1.2190 |
| No log        | 0.7273 | 8    | 1.5517          | 0.1209 | 1.5517 | 1.2457 |
| No log        | 0.9091 | 10   | 1.4942          | 0.2036 | 1.4942 | 1.2224 |
| No log        | 1.0909 | 12   | 1.5345          | 0.2898 | 1.5345 | 1.2388 |
| No log        | 1.2727 | 14   | 1.5094          | 0.2954 | 1.5094 | 1.2286 |
| No log        | 1.4545 | 16   | 1.3849          | 0.1475 | 1.3849 | 1.1768 |
| No log        | 1.6364 | 18   | 1.3435          | 0.1826 | 1.3435 | 1.1591 |
| No log        | 1.8182 | 20   | 1.2991          | 0.2186 | 1.2991 | 1.1398 |
| No log        | 2.0    | 22   | 1.2653          | 0.1508 | 1.2653 | 1.1249 |
| No log        | 2.1818 | 24   | 1.3091          | 0.1502 | 1.3091 | 1.1441 |
| No log        | 2.3636 | 26   | 1.3817          | 0.3818 | 1.3817 | 1.1755 |
| No log        | 2.5455 | 28   | 1.3800          | 0.4268 | 1.3800 | 1.1748 |
| No log        | 2.7273 | 30   | 1.2511          | 0.4215 | 1.2511 | 1.1185 |
| No log        | 2.9091 | 32   | 1.1082          | 0.3357 | 1.1082 | 1.0527 |
| No log        | 3.0909 | 34   | 1.0879          | 0.3439 | 1.0879 | 1.0430 |
| No log        | 3.2727 | 36   | 1.1033          | 0.4070 | 1.1033 | 1.0504 |
| No log        | 3.4545 | 38   | 1.0586          | 0.3708 | 1.0586 | 1.0289 |
| No log        | 3.6364 | 40   | 1.0327          | 0.3457 | 1.0327 | 1.0162 |
| No log        | 3.8182 | 42   | 1.1099          | 0.5156 | 1.1099 | 1.0535 |
| No log        | 4.0    | 44   | 1.1714          | 0.5159 | 1.1714 | 1.0823 |
| No log        | 4.1818 | 46   | 1.1697          | 0.5226 | 1.1697 | 1.0815 |
| No log        | 4.3636 | 48   | 1.0905          | 0.5444 | 1.0905 | 1.0443 |
| No log        | 4.5455 | 50   | 0.9878          | 0.5594 | 0.9878 | 0.9939 |
| No log        | 4.7273 | 52   | 0.9020          | 0.5518 | 0.9020 | 0.9497 |
| No log        | 4.9091 | 54   | 0.8852          | 0.5067 | 0.8852 | 0.9409 |
| No log        | 5.0909 | 56   | 0.8631          | 0.5916 | 0.8631 | 0.9290 |
| No log        | 5.2727 | 58   | 0.9656          | 0.5982 | 0.9656 | 0.9827 |
| No log        | 5.4545 | 60   | 1.1558          | 0.4951 | 1.1558 | 1.0751 |
| No log        | 5.6364 | 62   | 1.2013          | 0.4897 | 1.2013 | 1.0960 |
| No log        | 5.8182 | 64   | 1.1060          | 0.5182 | 1.1060 | 1.0517 |
| No log        | 6.0    | 66   | 0.9641          | 0.5885 | 0.9641 | 0.9819 |
| No log        | 6.1818 | 68   | 0.8159          | 0.6761 | 0.8159 | 0.9033 |
| No log        | 6.3636 | 70   | 0.7358          | 0.6802 | 0.7358 | 0.8578 |
| No log        | 6.5455 | 72   | 0.7102          | 0.6759 | 0.7102 | 0.8428 |
| No log        | 6.7273 | 74   | 0.7270          | 0.6802 | 0.7270 | 0.8526 |
| No log        | 6.9091 | 76   | 0.7979          | 0.6730 | 0.7979 | 0.8933 |
| No log        | 7.0909 | 78   | 0.9243          | 0.5899 | 0.9243 | 0.9614 |
| No log        | 7.2727 | 80   | 1.0149          | 0.5546 | 1.0149 | 1.0074 |
| No log        | 7.4545 | 82   | 1.0652          | 0.5580 | 1.0652 | 1.0321 |
| No log        | 7.6364 | 84   | 1.0646          | 0.5806 | 1.0646 | 1.0318 |
| No log        | 7.8182 | 86   | 1.0260          | 0.5890 | 1.0260 | 1.0129 |
| No log        | 8.0    | 88   | 0.9834          | 0.5951 | 0.9834 | 0.9916 |
| No log        | 8.1818 | 90   | 0.9557          | 0.6049 | 0.9557 | 0.9776 |
| No log        | 8.3636 | 92   | 0.9616          | 0.6049 | 0.9616 | 0.9806 |
| No log        | 8.5455 | 94   | 0.9833          | 0.5971 | 0.9833 | 0.9916 |
| No log        | 8.7273 | 96   | 1.0104          | 0.5945 | 1.0104 | 1.0052 |
| No log        | 8.9091 | 98   | 1.0084          | 0.6046 | 1.0084 | 1.0042 |
| No log        | 9.0909 | 100  | 0.9901          | 0.6046 | 0.9901 | 0.9950 |
| No log        | 9.2727 | 102  | 0.9648          | 0.5901 | 0.9648 | 0.9822 |
| No log        | 9.4545 | 104  | 0.9375          | 0.6130 | 0.9375 | 0.9683 |
| No log        | 9.6364 | 106  | 0.9168          | 0.6276 | 0.9168 | 0.9575 |
| No log        | 9.8182 | 108  | 0.9110          | 0.6293 | 0.9110 | 0.9544 |
| No log        | 10.0   | 110  | 0.9081          | 0.6293 | 0.9081 | 0.9529 |

Framework versions

  • Transformers 4.44.2
  • Pytorch 2.4.0+cu118
  • Datasets 2.21.0
  • Tokenizers 0.19.1
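
A minimal inference sketch with these libraries, assuming the checkpoint stores a standard sequence-classification head saved by the Trainer; the head type and the interpretation of its output are not documented here:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

repo_id = "MayBashendy/ArabicNewSplits6_FineTuningAraBERT_run3_AugV5_k2_task5_organization"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSequenceClassification.from_pretrained(repo_id)

text = "نص المقال العربي هنا"  # placeholder Arabic essay text
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits  # organization score(s) for the essay
print(logits)
```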