ArabicNewSplits6_WithDuplicationsForScore5_FineTuningAraBERT_run1_AugV5_k1_task1_organization

This model is a fine-tuned version of aubmindlab/bert-base-arabertv02 on an unspecified dataset. It achieves the following results on the evaluation set:

  • Loss: 0.6320
  • Qwk: 0.7192
  • Mse: 0.6320
  • Rmse: 0.7950
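
For reference, metrics of this kind can be reproduced with scikit-learn: Qwk is a quadratic weighted Cohen's kappa, and Rmse is the square root of Mse. The snippet below is a minimal sketch; `y_true` and `y_pred` are hypothetical placeholder score arrays, not the actual evaluation data.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score, mean_squared_error

# Hypothetical gold and predicted ordinal scores (placeholders only).
y_true = np.array([3, 4, 2, 5, 4])
y_pred = np.array([3, 4, 3, 5, 3])

qwk = cohen_kappa_score(y_true, y_pred, weights="quadratic")  # Qwk
mse = mean_squared_error(y_true, y_pred)                      # Mse
rmse = np.sqrt(mse)                                           # Rmse

print(f"Qwk: {qwk:.4f}  Mse: {mse:.4f}  Rmse: {rmse:.4f}")
```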

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 2e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 10
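
Expressed as Hugging Face TrainingArguments, these settings correspond roughly to the sketch below; output_dir and any option not listed above are assumptions, not values taken from this card.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./results",          # assumed; not stated in the card
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=10,
)
```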

Training results

| Training Loss | Epoch | Step | Validation Loss | Qwk    | Mse    | Rmse   |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|
| No log        | 0.25  | 2    | 3.5278          | 0.0893 | 3.5278 | 1.8782 |
| No log        | 0.5   | 4    | 2.0671          | 0.2442 | 2.0671 | 1.4378 |
| No log        | 0.75  | 6    | 1.3358          | 0.1638 | 1.3358 | 1.1558 |
| No log        | 1.0   | 8    | 1.0129          | 0.4820 | 1.0129 | 1.0064 |
| No log        | 1.25  | 10   | 0.8537          | 0.5333 | 0.8537 | 0.9240 |
| No log        | 1.5   | 12   | 0.8108          | 0.6040 | 0.8108 | 0.9004 |
| No log        | 1.75  | 14   | 0.7212          | 0.6332 | 0.7212 | 0.8492 |
| No log        | 2.0   | 16   | 0.7412          | 0.6835 | 0.7412 | 0.8610 |
| No log        | 2.25  | 18   | 1.0512          | 0.5684 | 1.0512 | 1.0253 |
| No log        | 2.5   | 20   | 1.1456          | 0.5918 | 1.1456 | 1.0703 |
| No log        | 2.75  | 22   | 0.6969          | 0.6806 | 0.6969 | 0.8348 |
| No log        | 3.0   | 24   | 0.9015          | 0.5874 | 0.9015 | 0.9495 |
| No log        | 3.25  | 26   | 1.2970          | 0.3630 | 1.2970 | 1.1388 |
| No log        | 3.5   | 28   | 1.3772          | 0.3700 | 1.3772 | 1.1735 |
| No log        | 3.75  | 30   | 1.1282          | 0.4413 | 1.1282 | 1.0622 |
| No log        | 4.0   | 32   | 0.7550          | 0.6927 | 0.7550 | 0.8689 |
| No log        | 4.25  | 34   | 0.5858          | 0.7116 | 0.5858 | 0.7654 |
| No log        | 4.5   | 36   | 0.7892          | 0.6827 | 0.7892 | 0.8884 |
| No log        | 4.75  | 38   | 0.8926          | 0.6554 | 0.8926 | 0.9448 |
| No log        | 5.0   | 40   | 0.7192          | 0.6949 | 0.7192 | 0.8481 |
| No log        | 5.25  | 42   | 0.5857          | 0.7669 | 0.5857 | 0.7653 |
| No log        | 5.5   | 44   | 0.6282          | 0.7233 | 0.6282 | 0.7926 |
| No log        | 5.75  | 46   | 0.6216          | 0.7607 | 0.6216 | 0.7884 |
| No log        | 6.0   | 48   | 0.6056          | 0.7441 | 0.6056 | 0.7782 |
| No log        | 6.25  | 50   | 0.7519          | 0.7055 | 0.7519 | 0.8671 |
| No log        | 6.5   | 52   | 0.8510          | 0.7037 | 0.8510 | 0.9225 |
| No log        | 6.75  | 54   | 0.8637          | 0.7014 | 0.8637 | 0.9293 |
| No log        | 7.0   | 56   | 0.7678          | 0.7004 | 0.7678 | 0.8762 |
| No log        | 7.25  | 58   | 0.6540          | 0.7243 | 0.6540 | 0.8087 |
| No log        | 7.5   | 60   | 0.6090          | 0.7412 | 0.6090 | 0.7804 |
| No log        | 7.75  | 62   | 0.6064          | 0.7744 | 0.6064 | 0.7787 |
| No log        | 8.0   | 64   | 0.6245          | 0.7713 | 0.6245 | 0.7903 |
| No log        | 8.25  | 66   | 0.6286          | 0.7784 | 0.6286 | 0.7928 |
| No log        | 8.5   | 68   | 0.6160          | 0.7787 | 0.6160 | 0.7849 |
| No log        | 8.75  | 70   | 0.6093          | 0.7638 | 0.6093 | 0.7806 |
| No log        | 9.0   | 72   | 0.6185          | 0.7462 | 0.6185 | 0.7864 |
| No log        | 9.25  | 74   | 0.6253          | 0.7301 | 0.6253 | 0.7907 |
| No log        | 9.5   | 76   | 0.6313          | 0.7192 | 0.6313 | 0.7946 |
| No log        | 9.75  | 78   | 0.6321          | 0.7192 | 0.6321 | 0.7950 |
| No log        | 10.0  | 80   | 0.6320          | 0.7192 | 0.6320 | 0.7950 |

Framework versions

  • Transformers 4.44.2
  • Pytorch 2.4.0+cu118
  • Datasets 2.21.0
  • Tokenizers 0.19.1
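
With these framework versions, the checkpoint can be loaded directly from the Hub as a sequence-classification model. This is a minimal sketch, assuming the repository id from this model page; the exact head configuration (number of labels, regression vs. classification) comes from the checkpoint's stored config.

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

repo_id = "MayBashendy/ArabicNewSplits6_WithDuplicationsForScore5_FineTuningAraBERT_run1_AugV5_k1_task1_organization"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSequenceClassification.from_pretrained(repo_id)

# Score a sample Arabic sentence (placeholder text).
inputs = tokenizer("هذا نص تجريبي", return_tensors="pt")
outputs = model(**inputs)
print(outputs.logits)
```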