ArabicNewSplits5_FineTuningAraBERT_run1_AugV5_k1_task1_organization

This model is a fine-tuned version of aubmindlab/bert-base-arabertv02 on an unspecified dataset. It achieves the following results on the evaluation set:

  • Loss: 0.6335
  • Qwk: 0.7243
  • Mse: 0.6335
  • Rmse: 0.7959
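
The MSE/RMSE metrics above suggest the checkpoint carries a single-output regression head. The snippet below is a minimal loading sketch under that assumption (verify num_labels on the actual config before relying on it); the example sentence is only a placeholder.

```python
# Minimal loading sketch -- assumes a single-output regression head,
# which is consistent with the MSE/RMSE metrics reported above.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "MayBashendy/ArabicNewSplits5_FineTuningAraBERT_run1_AugV5_k1_task1_organization"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Placeholder Arabic input; replace with the text to be scored.
inputs = tokenizer("نص تجريبي للتقييم", return_tensors="pt", truncation=True)
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()
print(score)
```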

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 2e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 10
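
For illustration, these settings map onto transformers' TrainingArguments roughly as follows; the output directory and any evaluation cadence are placeholders rather than details recovered from the original training script.

```python
# Illustrative mapping of the listed hyperparameters onto TrainingArguments.
# output_dir is a placeholder, not taken from the original run.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./results",            # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=10,
)
# The default AdamW optimizer uses betas=(0.9, 0.999) and epsilon=1e-08,
# matching the optimizer settings listed above.
```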

Training results

Training Loss Epoch Step Validation Loss Qwk Mse Rmse
No log 0.2 2 3.5576 0.0109 3.5576 1.8862
No log 0.4 4 2.2485 0.1752 2.2485 1.4995
No log 0.6 6 1.4239 0.1706 1.4239 1.1933
No log 0.8 8 1.1600 0.3202 1.1600 1.0770
No log 1.0 10 1.0369 0.4598 1.0369 1.0183
No log 1.2 12 0.9911 0.4434 0.9911 0.9955
No log 1.4 14 0.9492 0.4561 0.9492 0.9743
No log 1.6 16 1.0977 0.3370 1.0977 1.0477
No log 1.8 18 1.4815 0.1597 1.4815 1.2172
No log 2.0 20 1.6215 0.2329 1.6215 1.2734
No log 2.2 22 1.1656 0.3984 1.1656 1.0796
No log 2.4 24 0.7138 0.4599 0.7138 0.8449
No log 2.6 26 0.6723 0.5069 0.6723 0.8199
No log 2.8 28 0.7702 0.5942 0.7702 0.8776
No log 3.0 30 0.9742 0.5146 0.9742 0.9870
No log 3.2 32 1.0866 0.5008 1.0866 1.0424
No log 3.4 34 1.2842 0.4637 1.2842 1.1332
No log 3.6 36 1.0424 0.5533 1.0424 1.0210
No log 3.8 38 0.6658 0.6799 0.6658 0.8160
No log 4.0 40 0.5849 0.6883 0.5849 0.7648
No log 4.2 42 0.5789 0.7095 0.5789 0.7608
No log 4.4 44 0.5827 0.7152 0.5827 0.7633
No log 4.6 46 0.7157 0.7121 0.7157 0.8460
No log 4.8 48 1.1524 0.5635 1.1524 1.0735
No log 5.0 50 1.3649 0.4809 1.3649 1.1683
No log 5.2 52 1.2493 0.5289 1.2493 1.1177
No log 5.4 54 0.8840 0.6355 0.8840 0.9402
No log 5.6 56 0.6319 0.7271 0.6319 0.7949
No log 5.8 58 0.6781 0.7265 0.6781 0.8235
No log 6.0 60 0.6844 0.7495 0.6844 0.8273
No log 6.2 62 0.6360 0.7454 0.6360 0.7975
No log 6.4 64 0.6500 0.7450 0.6500 0.8062
No log 6.6 66 0.7239 0.6971 0.7239 0.8508
No log 6.8 68 0.7111 0.7358 0.7111 0.8433
No log 7.0 70 0.6630 0.7309 0.6630 0.8142
No log 7.2 72 0.6391 0.7318 0.6391 0.7995
No log 7.4 74 0.6539 0.7521 0.6539 0.8086
No log 7.6 76 0.7208 0.7652 0.7208 0.8490
No log 7.8 78 0.7584 0.7376 0.7584 0.8709
No log 8.0 80 0.7503 0.7405 0.7503 0.8662
No log 8.2 82 0.6944 0.7427 0.6944 0.8333
No log 8.4 84 0.6597 0.7467 0.6597 0.8122
No log 8.6 86 0.6459 0.7427 0.6459 0.8037
No log 8.8 88 0.6475 0.7267 0.6475 0.8047
No log 9.0 90 0.6513 0.7335 0.6513 0.8070
No log 9.2 92 0.6547 0.7160 0.6547 0.8091
No log 9.4 94 0.6489 0.7174 0.6489 0.8055
No log 9.6 96 0.6397 0.7351 0.6397 0.7998
No log 9.8 98 0.6350 0.7243 0.6350 0.7969
No log 10.0 100 0.6335 0.7243 0.6335 0.7959
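
The Qwk, Mse, and Rmse columns can be reproduced with scikit-learn along the following lines. Rounding predictions to integer scores before computing the quadratic weighted kappa is an assumption made for this sketch, not a documented detail of the original evaluation.

```python
# Sketch of how the reported metrics could be computed with scikit-learn.
# Rounding to integers before cohen_kappa_score is an assumption.
import numpy as np
from sklearn.metrics import cohen_kappa_score, mean_squared_error

def eval_metrics(preds, labels):
    preds = np.asarray(preds, dtype=float)
    labels = np.asarray(labels, dtype=float)
    mse = mean_squared_error(labels, preds)
    qwk = cohen_kappa_score(
        labels.round().astype(int),
        preds.round().astype(int),
        weights="quadratic",          # quadratic weighting = QWK
    )
    return {"qwk": qwk, "mse": mse, "rmse": float(np.sqrt(mse))}

# Toy usage with made-up scores:
print(eval_metrics([1.8, 2.9, 4.1], [2, 3, 4]))
```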

Framework versions

  • Transformers 4.44.2
  • Pytorch 2.4.0+cu118
  • Datasets 2.21.0
  • Tokenizers 0.19.1