ArabicNewSplits4_WithDuplicationsForScore5_FineTuningAraBERT_run2_AugV5_k1_task2_organization

This model is a fine-tuned version of aubmindlab/bert-base-arabertv02 on an unspecified dataset. It achieves the following results on the evaluation set (a sketch of how these metrics can be computed follows the list):

  • Loss: 0.8311
  • Qwk: 0.5422
  • Mse: 0.8311
  • Rmse: 0.9116
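
These figures are consistent with a regression-style scoring head evaluated with quadratic weighted kappa (Qwk), mean squared error (Mse), and root mean squared error (Rmse). The sketch below shows one way such metrics can be computed with scikit-learn; rounding continuous predictions to integer scores before computing kappa is an assumption and is not stated in this card.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score, mean_squared_error

def compute_metrics(predictions, labels):
    """Hedged sketch of the Qwk / Mse / Rmse metrics reported above."""
    predictions = np.asarray(predictions, dtype=float).squeeze()
    labels = np.asarray(labels, dtype=float).squeeze()

    mse = mean_squared_error(labels, predictions)
    rmse = float(np.sqrt(mse))

    # Assumption: quadratic weighted kappa is computed on integer scores,
    # so continuous predictions are rounded first.
    qwk = cohen_kappa_score(
        np.rint(labels).astype(int),
        np.rint(predictions).astype(int),
        weights="quadratic",
    )
    return {"qwk": qwk, "mse": mse, "rmse": rmse}

# Example: compute_metrics([2.8, 4.1, 3.0], [3, 4, 3])
```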

Model description

More information needed

Intended uses & limitations

More information needed
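
As an illustration only, the sketch below shows one way to load this checkpoint with the transformers library. The assumption that the head is a single-output regression head producing a continuous organization score is inferred from the regression-style metrics above and is not confirmed by this card.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "MayBashendy/ArabicNewSplits4_WithDuplicationsForScore5_FineTuningAraBERT_run2_AugV5_k1_task2_organization"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

text = "..."  # an Arabic response to be scored for organization
inputs = tokenizer(text, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits

# Assumption: a single regression output; adapt if the head is multi-class.
score = logits.squeeze().item()
print(score)
```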

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a minimal TrainingArguments sketch follows the list):

  • learning_rate: 2e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 10
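
The hyperparameters above map directly onto Hugging Face TrainingArguments. The sketch below is an illustration only: the output directory name and the use of the Trainer API are assumptions, not details taken from this card.

```python
from transformers import TrainingArguments

# Minimal sketch reproducing the hyperparameters listed above.
training_args = TrainingArguments(
    output_dir="outputs",  # assumed; not stated in the card
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    num_train_epochs=10,
    lr_scheduler_type="linear",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```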

Training results

| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
| No log | 0.2222 | 2 | 3.9392 | -0.0062 | 3.9392 | 1.9847 |
| No log | 0.4444 | 4 | 2.2132 | 0.0779 | 2.2132 | 1.4877 |
| No log | 0.6667 | 6 | 1.0749 | 0.0666 | 1.0749 | 1.0368 |
| No log | 0.8889 | 8 | 0.8034 | 0.0360 | 0.8034 | 0.8963 |
| No log | 1.1111 | 10 | 0.7452 | 0.0993 | 0.7452 | 0.8632 |
| No log | 1.3333 | 12 | 0.7030 | 0.1834 | 0.7030 | 0.8385 |
| No log | 1.5556 | 14 | 0.6977 | 0.2149 | 0.6977 | 0.8353 |
| No log | 1.7778 | 16 | 0.6906 | 0.2236 | 0.6906 | 0.8310 |
| No log | 2.0 | 18 | 0.7289 | 0.2375 | 0.7289 | 0.8538 |
| No log | 2.2222 | 20 | 1.2202 | 0.1106 | 1.2202 | 1.1046 |
| No log | 2.4444 | 22 | 1.2447 | 0.1315 | 1.2447 | 1.1157 |
| No log | 2.6667 | 24 | 0.7872 | 0.3002 | 0.7872 | 0.8872 |
| No log | 2.8889 | 26 | 0.6410 | 0.3193 | 0.6410 | 0.8006 |
| No log | 3.1111 | 28 | 0.7345 | 0.3653 | 0.7345 | 0.8570 |
| No log | 3.3333 | 30 | 0.7517 | 0.3996 | 0.7517 | 0.8670 |
| No log | 3.5556 | 32 | 0.6636 | 0.4458 | 0.6636 | 0.8146 |
| No log | 3.7778 | 34 | 0.6848 | 0.3956 | 0.6848 | 0.8275 |
| No log | 4.0 | 36 | 0.7934 | 0.3506 | 0.7934 | 0.8907 |
| No log | 4.2222 | 38 | 0.7878 | 0.3674 | 0.7878 | 0.8876 |
| No log | 4.4444 | 40 | 0.6736 | 0.4128 | 0.6736 | 0.8208 |
| No log | 4.6667 | 42 | 0.6528 | 0.4250 | 0.6528 | 0.8079 |
| No log | 4.8889 | 44 | 0.8012 | 0.4283 | 0.8012 | 0.8951 |
| No log | 5.1111 | 46 | 0.8557 | 0.3546 | 0.8557 | 0.9251 |
| No log | 5.3333 | 48 | 0.7751 | 0.4221 | 0.7751 | 0.8804 |
| No log | 5.5556 | 50 | 0.6745 | 0.4812 | 0.6745 | 0.8213 |
| No log | 5.7778 | 52 | 0.6712 | 0.4691 | 0.6712 | 0.8192 |
| No log | 6.0 | 54 | 0.7760 | 0.4835 | 0.7760 | 0.8809 |
| No log | 6.2222 | 56 | 0.8589 | 0.4930 | 0.8589 | 0.9268 |
| No log | 6.4444 | 58 | 0.8488 | 0.5044 | 0.8488 | 0.9213 |
| No log | 6.6667 | 60 | 0.7760 | 0.4822 | 0.7760 | 0.8809 |
| No log | 6.8889 | 62 | 0.7641 | 0.4928 | 0.7641 | 0.8741 |
| No log | 7.1111 | 64 | 0.7725 | 0.4986 | 0.7725 | 0.8789 |
| No log | 7.3333 | 66 | 0.7931 | 0.5133 | 0.7931 | 0.8906 |
| No log | 7.5556 | 68 | 0.8187 | 0.5229 | 0.8187 | 0.9048 |
| No log | 7.7778 | 70 | 0.8212 | 0.5316 | 0.8212 | 0.9062 |
| No log | 8.0 | 72 | 0.8327 | 0.5106 | 0.8327 | 0.9125 |
| No log | 8.2222 | 74 | 0.8441 | 0.5054 | 0.8441 | 0.9188 |
| No log | 8.4444 | 76 | 0.8551 | 0.4944 | 0.8551 | 0.9247 |
| No log | 8.6667 | 78 | 0.8512 | 0.5054 | 0.8512 | 0.9226 |
| No log | 8.8889 | 80 | 0.8390 | 0.5106 | 0.8390 | 0.9160 |
| No log | 9.1111 | 82 | 0.8315 | 0.5270 | 0.8315 | 0.9119 |
| No log | 9.3333 | 84 | 0.8348 | 0.5274 | 0.8348 | 0.9137 |
| No log | 9.5556 | 86 | 0.8345 | 0.5270 | 0.8345 | 0.9135 |
| No log | 9.7778 | 88 | 0.8325 | 0.5424 | 0.8325 | 0.9124 |
| No log | 10.0 | 90 | 0.8311 | 0.5422 | 0.8311 | 0.9116 |

Framework versions

  • Transformers 4.44.2
  • Pytorch 2.4.0+cu118
  • Datasets 2.21.0
  • Tokenizers 0.19.1