ArabicNewSplits4_WithDuplicationsForScore5_FineTuningAraBERT_run2_AugV5_k1_task3_organization

This model is a fine-tuned version of aubmindlab/bert-base-arabertv02 on an unspecified dataset. It achieves the following results on the evaluation set:

  • Loss: 0.7003
  • Qwk: 0.1902
  • Mse: 0.7003
  • Rmse: 0.8368
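
Here Qwk is quadratic weighted kappa on the predicted scores, and Rmse is the square root of Mse (√0.7003 ≈ 0.8368). The card does not include the evaluation code, but the metrics can be sketched from scratch with NumPy (the toy scores below are illustrative, not the card's actual data):

```python
import numpy as np

def quadratic_weighted_kappa(y_true, y_pred, n_classes):
    """Cohen's kappa with quadratic weights (QWK), computed from scratch."""
    conf = np.zeros((n_classes, n_classes))
    for t, p in zip(y_true, y_pred):
        conf[t, p] += 1
    # quadratic disagreement weights: 0 on the diagonal, growing with (i - j)^2
    w = np.array([[(i - j) ** 2 for j in range(n_classes)]
                  for i in range(n_classes)], dtype=float) / (n_classes - 1) ** 2
    # expected confusion matrix under chance agreement (outer product of marginals)
    expected = np.outer(conf.sum(axis=1), conf.sum(axis=0)) / conf.sum()
    return 1.0 - (w * conf).sum() / (w * expected).sum()

# Toy example with hypothetical 3-level scores
labels = np.array([0, 1, 2, 2])
preds = np.array([0, 1, 1, 2])
qwk = quadratic_weighted_kappa(labels, preds, n_classes=3)
mse = float(np.mean((preds - labels) ** 2))
rmse = mse ** 0.5  # RMSE is always the square root of MSE, as in the results above
```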

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 2e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 10
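
These settings map onto Hugging Face `TrainingArguments` roughly as follows. This is a sketch reconstructed from the list above; the actual training script is not included in this card, and `output_dir` is a placeholder:

```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the hyperparameters listed above
training_args = TrainingArguments(
    output_dir="./results",          # placeholder; actual path unknown
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    num_train_epochs=10,
    lr_scheduler_type="linear",
    adam_beta1=0.9,                  # Adam betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```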

Training results

Training Loss | Epoch  | Step | Validation Loss | Qwk     | Mse    | Rmse
------------- | ------ | ---- | --------------- | ------- | ------ | ------
No log        | 0.2857 | 2    | 3.2724          | 0.0013  | 3.2724 | 1.8090
No log        | 0.5714 | 4    | 1.6404          | -0.0370 | 1.6404 | 1.2808
No log        | 0.8571 | 6    | 0.7375          | 0.2487  | 0.7375 | 0.8588
No log        | 1.1429 | 8    | 0.7183          | 0.0863  | 0.7183 | 0.8475
No log        | 1.4286 | 10   | 1.0426          | 0.0462  | 1.0426 | 1.0211
No log        | 1.7143 | 12   | 0.6555          | 0.2644  | 0.6555 | 0.8096
No log        | 2.0    | 14   | 0.5599          | -0.0159 | 0.5599 | 0.7483
No log        | 2.2857 | 16   | 0.6763          | -0.0068 | 0.6763 | 0.8223
No log        | 2.5714 | 18   | 0.6929          | -0.0133 | 0.6929 | 0.8324
No log        | 2.8571 | 20   | 0.6635          | -0.0303 | 0.6635 | 0.8145
No log        | 3.1429 | 22   | 0.6053          | -0.0081 | 0.6053 | 0.7780
No log        | 3.4286 | 24   | 0.5758          | 0.0476  | 0.5758 | 0.7588
No log        | 3.7143 | 26   | 0.5891          | 0.0388  | 0.5891 | 0.7675
No log        | 4.0    | 28   | 0.6103          | 0.0303  | 0.6103 | 0.7812
No log        | 4.2857 | 30   | 0.6981          | -0.1733 | 0.6981 | 0.8355
No log        | 4.5714 | 32   | 0.6883          | -0.1200 | 0.6883 | 0.8296
No log        | 4.8571 | 34   | 0.6776          | -0.1200 | 0.6776 | 0.8231
No log        | 5.1429 | 36   | 0.6214          | -0.0370 | 0.6214 | 0.7883
No log        | 5.4286 | 38   | 0.6182          | 0.0303  | 0.6182 | 0.7863
No log        | 5.7143 | 40   | 0.6389          | -0.0233 | 0.6389 | 0.7993
No log        | 6.0    | 42   | 0.6574          | -0.0233 | 0.6574 | 0.8108
No log        | 6.2857 | 44   | 0.6672          | -0.0233 | 0.6672 | 0.8168
No log        | 6.5714 | 46   | 0.6705          | -0.0303 | 0.6705 | 0.8188
No log        | 6.8571 | 48   | 0.6740          | -0.0435 | 0.6740 | 0.8210
No log        | 7.1429 | 50   | 0.6679          | -0.0435 | 0.6679 | 0.8173
No log        | 7.4286 | 52   | 0.6600          | -0.0435 | 0.6600 | 0.8124
No log        | 7.7143 | 54   | 0.6606          | 0.0222  | 0.6606 | 0.8128
No log        | 8.0    | 56   | 0.6645          | 0.0222  | 0.6645 | 0.8152
No log        | 8.2857 | 58   | 0.6750          | 0.0897  | 0.6750 | 0.8216
No log        | 8.5714 | 60   | 0.6792          | 0.0897  | 0.6792 | 0.8241
No log        | 8.8571 | 62   | 0.6846          | 0.0692  | 0.6846 | 0.8274
No log        | 9.1429 | 64   | 0.6882          | 0.1304  | 0.6882 | 0.8296
No log        | 9.4286 | 66   | 0.6906          | 0.1392  | 0.6906 | 0.8310
No log        | 9.7143 | 68   | 0.6962          | 0.1392  | 0.6962 | 0.8344
No log        | 10.0   | 70   | 0.7003          | 0.1902  | 0.7003 | 0.8368
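
The table implies a very small training set: 70 optimizer steps over 10 epochs at batch size 8 works out to at most 56 training examples, and the "No log" entries simply mean the training loss was never recorded, since the Trainer's default logging interval of 500 steps exceeds the 70 total steps. The arithmetic, as a sketch:

```python
# Reading the dataset size off the training log above
total_steps, num_epochs, batch_size = 70, 10, 8
steps_per_epoch = total_steps // num_epochs        # 7 optimizer steps per epoch
max_train_examples = steps_per_epoch * batch_size  # at most 56 (last batch may be partial)
```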

Framework versions

  • Transformers 4.44.2
  • Pytorch 2.4.0+cu118
  • Datasets 2.21.0
  • Tokenizers 0.19.1
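
With those versions installed, a checkpoint like this can be loaded through the standard transformers sequence-classification API. A hedged sketch (`score_essay` is a hypothetical helper; whether the head is classification or regression is not stated in the card, and this sketch assumes a classification head whose label count is read from the uploaded config):

```python
def score_essay(text: str) -> int:
    """Predict an organization score for an Arabic text using this checkpoint."""
    import torch
    from transformers import AutoModelForSequenceClassification, AutoTokenizer

    model_id = ("MayBashendy/ArabicNewSplits4_WithDuplicationsForScore5_"
                "FineTuningAraBERT_run2_AugV5_k1_task3_organization")
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForSequenceClassification.from_pretrained(model_id)
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    # highest-scoring class index, assuming a classification head
    return int(logits.argmax(dim=-1))
```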
Model details

  • Format: Safetensors
  • Model size: 0.1B params
  • Tensor type: F32

Model tree for MayBashendy/ArabicNewSplits4_WithDuplicationsForScore5_FineTuningAraBERT_run2_AugV5_k1_task3_organization

  • Base model: aubmindlab/bert-base-arabertv02 (this model is one of its 4023 fine-tunes)