ArabicNewSplits6_FineTuningAraBERT_run2_AugV5_k2_task3_organization

This model is a fine-tuned version of aubmindlab/bert-base-arabertv02 on an unspecified dataset. It achieves the following results on the evaluation set:

  • Loss: 0.6899
  • Qwk: 0.2653
  • Mse: 0.6899
  • Rmse: 0.8306
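Note that the reported Loss equals the Mse and the Rmse is its square root, which is consistent with a regression-style head trained with MSE loss. As a minimal sketch (the label values below are illustrative, not from this model's evaluation set), these metrics can be computed with scikit-learn and NumPy, with Qwk being Cohen's kappa with quadratic weights:

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score, mean_squared_error

# Illustrative ordinal labels (e.g. essay-organization scores), not real data.
y_true = np.array([0, 1, 2, 1, 0, 2])
y_pred = np.array([0, 1, 1, 1, 0, 2])

# Qwk: quadratically weighted Cohen's kappa, standard for ordinal grading tasks.
qwk = cohen_kappa_score(y_true, y_pred, weights="quadratic")
mse = mean_squared_error(y_true, y_pred)
rmse = np.sqrt(mse)  # Rmse is always the square root of Mse, as in the table above
```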

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 2e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 10
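For reference, the listed hyperparameters map directly onto Hugging Face `TrainingArguments` keyword arguments. A minimal sketch (the `output_dir` value is an assumption, not taken from this card):

```python
# Sketch: the card's hyperparameters expressed as keyword arguments for
# transformers.TrainingArguments. Only output_dir is assumed; all other
# values come from the "Training hyperparameters" list above.
training_kwargs = {
    "output_dir": "./results",          # assumed, not stated in the card
    "learning_rate": 2e-5,
    "per_device_train_batch_size": 8,
    "per_device_eval_batch_size": 8,
    "seed": 42,
    "adam_beta1": 0.9,                  # optimizer: Adam with betas=(0.9, 0.999)
    "adam_beta2": 0.999,
    "adam_epsilon": 1e-8,
    "lr_scheduler_type": "linear",
    "num_train_epochs": 10,
}

# Usage: TrainingArguments(**training_kwargs), then pass to a Trainer.
```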

Training results

| Training Loss | Epoch  | Step | Validation Loss | Qwk     | Mse    | Rmse   |
|:-------------:|:------:|:----:|:---------------:|:-------:|:------:|:------:|
| No log        | 0.1667 | 2    | 3.1605          | -0.0205 | 3.1605 | 1.7778 |
| No log        | 0.3333 | 4    | 1.7883          | 0.0130  | 1.7883 | 1.3373 |
| No log        | 0.5    | 6    | 0.9545          | 0.0236  | 0.9545 | 0.9770 |
| No log        | 0.6667 | 8    | 1.1878          | 0.1235  | 1.1878 | 1.0899 |
| No log        | 0.8333 | 10   | 0.6158          | 0.1373  | 0.6158 | 0.7848 |
| No log        | 1.0    | 12   | 0.5992          | 0.0     | 0.5992 | 0.7741 |
| No log        | 1.1667 | 14   | 0.5703          | 0.0     | 0.5703 | 0.7552 |
| No log        | 1.3333 | 16   | 0.5445          | 0.0569  | 0.5445 | 0.7379 |
| No log        | 1.5    | 18   | 0.8664          | 0.0288  | 0.8664 | 0.9308 |
| No log        | 1.6667 | 20   | 1.1357          | 0.0698  | 1.1357 | 1.0657 |
| No log        | 1.8333 | 22   | 1.0508          | 0.0698  | 1.0508 | 1.0251 |
| No log        | 2.0    | 24   | 0.9151          | 0.0431  | 0.9151 | 0.9566 |
| No log        | 2.1667 | 26   | 0.6692          | 0.25    | 0.6692 | 0.8180 |
| No log        | 2.3333 | 28   | 0.5555          | 0.0     | 0.5555 | 0.7453 |
| No log        | 2.5    | 30   | 0.5779          | 0.0     | 0.5779 | 0.7602 |
| No log        | 2.6667 | 32   | 0.6066          | 0.0     | 0.6066 | 0.7788 |
| No log        | 2.8333 | 34   | 0.5824          | 0.0569  | 0.5824 | 0.7631 |
| No log        | 3.0    | 36   | 0.6225          | -0.0556 | 0.6225 | 0.7890 |
| No log        | 3.1667 | 38   | 1.1674          | 0.0843  | 1.1674 | 1.0804 |
| No log        | 3.3333 | 40   | 1.6923          | 0.0617  | 1.6923 | 1.3009 |
| No log        | 3.5    | 42   | 1.3382          | 0.0877  | 1.3382 | 1.1568 |
| No log        | 3.6667 | 44   | 0.7521          | 0.1515  | 0.7521 | 0.8673 |
| No log        | 3.8333 | 46   | 0.6022          | 0.0388  | 0.6022 | 0.7760 |
| No log        | 4.0    | 48   | 0.6471          | 0.0476  | 0.6471 | 0.8044 |
| No log        | 4.1667 | 50   | 0.6674          | 0.0569  | 0.6674 | 0.8170 |
| No log        | 4.3333 | 52   | 0.6210          | 0.0569  | 0.6210 | 0.7881 |
| No log        | 4.5    | 54   | 0.5435          | 0.0476  | 0.5435 | 0.7372 |
| No log        | 4.6667 | 56   | 0.5913          | 0.1030  | 0.5913 | 0.7690 |
| No log        | 4.8333 | 58   | 0.8561          | 0.1861  | 0.8561 | 0.9253 |
| No log        | 5.0    | 60   | 1.1137          | 0.1191  | 1.1137 | 1.0553 |
| No log        | 5.1667 | 62   | 0.9960          | 0.1756  | 0.9960 | 0.9980 |
| No log        | 5.3333 | 64   | 0.6891          | 0.1765  | 0.6891 | 0.8301 |
| No log        | 5.5    | 66   | 0.5479          | 0.0857  | 0.5479 | 0.7402 |
| No log        | 5.6667 | 68   | 0.7322          | 0.0588  | 0.7322 | 0.8557 |
| No log        | 5.8333 | 70   | 0.8445          | -0.1429 | 0.8445 | 0.9190 |
| No log        | 6.0    | 72   | 0.8287          | -0.1667 | 0.8287 | 0.9103 |
| No log        | 6.1667 | 74   | 0.7007          | 0.0388  | 0.7007 | 0.8371 |
| No log        | 6.3333 | 76   | 0.5826          | 0.0857  | 0.5826 | 0.7633 |
| No log        | 6.5    | 78   | 0.5748          | 0.1020  | 0.5748 | 0.7582 |
| No log        | 6.6667 | 80   | 0.5849          | 0.0769  | 0.5849 | 0.7648 |
| No log        | 6.8333 | 82   | 0.6411          | 0.2444  | 0.6411 | 0.8007 |
| No log        | 7.0    | 84   | 0.6723          | 0.2350  | 0.6723 | 0.8199 |
| No log        | 7.1667 | 86   | 0.6289          | 0.2273  | 0.6289 | 0.7931 |
| No log        | 7.3333 | 88   | 0.6245          | 0.1724  | 0.6245 | 0.7903 |
| No log        | 7.5    | 90   | 0.6756          | 0.2857  | 0.6756 | 0.8220 |
| No log        | 7.6667 | 92   | 0.7378          | 0.2000  | 0.7378 | 0.8590 |
| No log        | 7.8333 | 94   | 0.8199          | 0.0053  | 0.8199 | 0.9055 |
| No log        | 8.0    | 96   | 0.8632          | 0.0151  | 0.8632 | 0.9291 |
| No log        | 8.1667 | 98   | 0.8436          | 0.0053  | 0.8436 | 0.9185 |
| No log        | 8.3333 | 100  | 0.8032          | 0.2000  | 0.8032 | 0.8962 |
| No log        | 8.5    | 102  | 0.7482          | 0.1675  | 0.7482 | 0.8650 |
| No log        | 8.6667 | 104  | 0.7321          | 0.1921  | 0.7321 | 0.8556 |
| No log        | 8.8333 | 106  | 0.7217          | 0.2161  | 0.7217 | 0.8495 |
| No log        | 9.0    | 108  | 0.7065          | 0.2000  | 0.7065 | 0.8405 |
| No log        | 9.1667 | 110  | 0.6956          | 0.2390  | 0.6956 | 0.8340 |
| No log        | 9.3333 | 112  | 0.6897          | 0.2390  | 0.6897 | 0.8305 |
| No log        | 9.5    | 114  | 0.6861          | 0.2000  | 0.6861 | 0.8283 |
| No log        | 9.6667 | 116  | 0.6862          | 0.2390  | 0.6862 | 0.8284 |
| No log        | 9.8333 | 118  | 0.6886          | 0.2653  | 0.6886 | 0.8298 |
| No log        | 10.0   | 120  | 0.6899          | 0.2653  | 0.6899 | 0.8306 |
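The step counts above also imply the approximate training-set size: 120 optimizer steps over 10 epochs is 12 steps per epoch, which at a train batch size of 8 suggests roughly 96 training examples (assuming a single device and no gradient accumulation, neither of which the card states):

```python
# Back-of-the-envelope estimate from the training log above.
total_steps = 120        # final Step column value
num_epochs = 10          # num_epochs hyperparameter
train_batch_size = 8     # train_batch_size hyperparameter

steps_per_epoch = total_steps // num_epochs
# Assumes one device and no gradient accumulation (not stated in the card).
approx_train_size = steps_per_epoch * train_batch_size
```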

Framework versions

  • Transformers 4.44.2
  • Pytorch 2.4.0+cu118
  • Datasets 2.21.0
  • Tokenizers 0.19.1