ArabicNewSplits6_WithDuplicationsForScore5_FineTuningAraBERT_run2_AugV5_k1_task2_organization

This model is a fine-tuned version of aubmindlab/bert-base-arabertv02 on an unspecified dataset. It achieves the following results on the evaluation set (a sketch for recomputing these metrics follows the list):

  • Loss: 0.9714
  • Qwk: 0.5045
  • Mse: 0.9714
  • Rmse: 0.9856
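
Here, Qwk is the quadratic weighted kappa between predicted and gold scores, and Mse/Rmse are the (root) mean squared error. Below is a minimal sketch of how these figures can be recomputed with scikit-learn; `y_true` and `y_pred` are placeholder arrays of integer scores, not data shipped with this repository.

```python
# Hedged sketch: recomputing the reported metrics from a set of predictions.
# `y_true` and `y_pred` are illustrative placeholders, not part of this model card.
import numpy as np
from sklearn.metrics import cohen_kappa_score, mean_squared_error

y_true = np.array([3, 2, 4, 1, 3])   # gold scores (example values)
y_pred = np.array([3, 2, 3, 2, 4])   # model scores rounded to integers

qwk = cohen_kappa_score(y_true, y_pred, weights="quadratic")  # Qwk
mse = mean_squared_error(y_true, y_pred)                      # Mse
rmse = float(np.sqrt(mse))                                    # Rmse

print(f"Qwk={qwk:.4f}  Mse={mse:.4f}  Rmse={rmse:.4f}")
```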

Model description

More information needed

Intended uses & limitations

More information needed
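
No usage notes are provided. The sketch below shows one way to load the checkpoint for inference; the repository id comes from this card, while the head type (a sequence-classification head producing an organization score) is an assumption rather than documented behaviour.

```python
# Hedged sketch: loading the fine-tuned checkpoint for scoring.
# The head type and output shape are assumptions; they are not documented in this card.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

repo = "MayBashendy/ArabicNewSplits6_WithDuplicationsForScore5_FineTuningAraBERT_run2_AugV5_k1_task2_organization"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)
model.eval()

text = "..."  # an Arabic essay or paragraph to score (placeholder)
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits  # shape depends on the (undocumented) head
print(logits)
```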

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a sketch mapping them onto `TrainingArguments` follows the list):

  • learning_rate: 2e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 10
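
As a rough guide, here is a hedged sketch of how these settings map onto the Hugging Face `Trainer` API. The dataset, model head, and metric computation are assumptions; this is not the original training script.

```python
# Hedged sketch: the hyperparameters above expressed as TrainingArguments.
# Datasets, head type, and compute_metrics are assumed, not taken from the original run.
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

base = "aubmindlab/bert-base-arabertv02"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForSequenceClassification.from_pretrained(base, num_labels=1)  # head type assumed

args = TrainingArguments(
    output_dir="arabert-task2-organization",  # placeholder path
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    num_train_epochs=10,
    lr_scheduler_type="linear",
    # Adam betas/epsilon match the card; the Trainer's default AdamW optimizer is assumed.
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)

trainer = Trainer(model=model, args=args, tokenizer=tokenizer,
                  train_dataset=None, eval_dataset=None)  # datasets not documented here
# trainer.train()
```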

Training results

| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:---|:---|:---|:---|:---|:---|:---|
| No log | 0.25 | 2 | 4.0214 | -0.0133 | 4.0214 | 2.0053 |
| No log | 0.5 | 4 | 2.4270 | 0.0845 | 2.4270 | 1.5579 |
| No log | 0.75 | 6 | 1.1269 | 0.0773 | 1.1269 | 1.0615 |
| No log | 1.0 | 8 | 0.9086 | 0.0246 | 0.9086 | 0.9532 |
| No log | 1.25 | 10 | 0.7416 | 0.1277 | 0.7416 | 0.8612 |
| No log | 1.5 | 12 | 0.8415 | 0.0828 | 0.8415 | 0.9173 |
| No log | 1.75 | 14 | 0.8323 | 0.1352 | 0.8323 | 0.9123 |
| No log | 2.0 | 16 | 0.7199 | 0.2061 | 0.7199 | 0.8485 |
| No log | 2.25 | 18 | 0.8012 | 0.1352 | 0.8012 | 0.8951 |
| No log | 2.5 | 20 | 0.8024 | 0.1409 | 0.8024 | 0.8957 |
| No log | 2.75 | 22 | 0.6582 | 0.2484 | 0.6582 | 0.8113 |
| No log | 3.0 | 24 | 0.5904 | 0.4468 | 0.5904 | 0.7684 |
| No log | 3.25 | 26 | 0.5854 | 0.4708 | 0.5854 | 0.7651 |
| No log | 3.5 | 28 | 0.6132 | 0.5050 | 0.6132 | 0.7831 |
| No log | 3.75 | 30 | 0.5940 | 0.5431 | 0.5940 | 0.7707 |
| No log | 4.0 | 32 | 0.6129 | 0.5218 | 0.6129 | 0.7829 |
| No log | 4.25 | 34 | 0.6805 | 0.5635 | 0.6805 | 0.8249 |
| No log | 4.5 | 36 | 0.6763 | 0.5365 | 0.6763 | 0.8224 |
| No log | 4.75 | 38 | 0.6765 | 0.5161 | 0.6765 | 0.8225 |
| No log | 5.0 | 40 | 0.7129 | 0.5120 | 0.7129 | 0.8443 |
| No log | 5.25 | 42 | 0.8058 | 0.5452 | 0.8058 | 0.8977 |
| No log | 5.5 | 44 | 0.9323 | 0.3864 | 0.9323 | 0.9656 |
| No log | 5.75 | 46 | 0.8698 | 0.5000 | 0.8698 | 0.9326 |
| No log | 6.0 | 48 | 0.7604 | 0.5064 | 0.7604 | 0.8720 |
| No log | 6.25 | 50 | 0.7393 | 0.4811 | 0.7393 | 0.8598 |
| No log | 6.5 | 52 | 0.7359 | 0.4956 | 0.7359 | 0.8579 |
| No log | 6.75 | 54 | 0.7974 | 0.5237 | 0.7974 | 0.8930 |
| No log | 7.0 | 56 | 0.8604 | 0.5236 | 0.8604 | 0.9276 |
| No log | 7.25 | 58 | 0.9605 | 0.5129 | 0.9605 | 0.9801 |
| No log | 7.5 | 60 | 0.9903 | 0.5065 | 0.9903 | 0.9951 |
| No log | 7.75 | 62 | 0.9258 | 0.5125 | 0.9258 | 0.9622 |
| No log | 8.0 | 64 | 0.8168 | 0.5165 | 0.8168 | 0.9038 |
| No log | 8.25 | 66 | 0.7902 | 0.5138 | 0.7902 | 0.8889 |
| No log | 8.5 | 68 | 0.8053 | 0.5073 | 0.8053 | 0.8974 |
| No log | 8.75 | 70 | 0.8550 | 0.5058 | 0.8550 | 0.9247 |
| No log | 9.0 | 72 | 0.9111 | 0.5206 | 0.9111 | 0.9545 |
| No log | 9.25 | 74 | 0.9490 | 0.4948 | 0.9490 | 0.9742 |
| No log | 9.5 | 76 | 0.9740 | 0.5045 | 0.9740 | 0.9869 |
| No log | 9.75 | 78 | 0.9760 | 0.5045 | 0.9760 | 0.9879 |
| No log | 10.0 | 80 | 0.9714 | 0.5045 | 0.9714 | 0.9856 |

Framework versions

  • Transformers 4.44.2
  • Pytorch 2.4.0+cu118
  • Datasets 2.21.0
  • Tokenizers 0.19.1