ArabicNewSplits4_WithDuplicationsForScore5_FineTuningAraBERT_run3_AugV5_k1_task5_organization

This model is a fine-tuned version of aubmindlab/bert-base-arabertv02 on an unspecified dataset. It achieves the following results on the evaluation set:

  • Loss: 0.5792
  • Qwk: 0.7568
  • Mse: 0.5792
  • Rmse: 0.7610
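Note that the reported Loss equals the Mse, which suggests the model was trained with a mean-squared-error objective, and Rmse is simply its square root (0.7610² ≈ 0.5792). Qwk is quadratically weighted Cohen's kappa, the standard agreement metric for ordinal scoring tasks. As a minimal sketch (assuming integer labels 0..num_labels-1; this is not the card author's evaluation code), the two metrics can be computed in pure Python:

```python
import math

def quadratic_weighted_kappa(y_true, y_pred, num_labels):
    """Cohen's kappa with quadratic weights over integer labels 0..num_labels-1."""
    n = len(y_true)
    # Observed confusion matrix.
    observed = [[0] * num_labels for _ in range(num_labels)]
    for t, p in zip(y_true, y_pred):
        observed[t][p] += 1
    # Marginal histograms of true and predicted labels.
    hist_true = [sum(row) for row in observed]
    hist_pred = [sum(observed[i][j] for i in range(num_labels)) for j in range(num_labels)]
    num = den = 0.0
    for i in range(num_labels):
        for j in range(num_labels):
            w = (i - j) ** 2 / (num_labels - 1) ** 2   # quadratic disagreement weight
            num += w * observed[i][j]                  # observed weighted disagreement
            den += w * hist_true[i] * hist_pred[j] / n # expected under independence
    return 1.0 - num / den

def rmse(y_true, y_pred):
    """Root mean squared error."""
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))
```

Perfect agreement gives a kappa of 1.0, complete (maximally weighted) disagreement gives -1.0, and chance-level agreement gives 0.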

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 2e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 10
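The hyperparameters above map directly onto a Hugging Face `TrainingArguments` configuration. The following is a hedged sketch of that setup, not the author's actual script: the output path is a placeholder, the train/eval datasets are not shown, and the single-output regression head (`num_labels=1`) is an assumption inferred from the MSE-based metrics.

```python
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

model_name = "aubmindlab/bert-base-arabertv02"
tokenizer = AutoTokenizer.from_pretrained(model_name)
# Assumption: a regression head trained with MSE loss, since Loss == Mse above.
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=1)

args = TrainingArguments(
    output_dir="out",                # placeholder path
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    num_train_epochs=10,
    lr_scheduler_type="linear",
    # The Adam betas=(0.9, 0.999) and epsilon=1e-08 listed above are the
    # optimizer defaults, so no explicit override is needed here.
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_dataset,     # placeholder: dataset not published
    eval_dataset=eval_dataset,       # placeholder: dataset not published
)
trainer.train()
```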

Training results

| Training Loss | Epoch  | Step | Validation Loss | Qwk    | Mse    | Rmse   |
|:-------------:|:------:|:----:|:---------------:|:------:|:------:|:------:|
| No log        | 0.2857 | 2    | 2.1249          | 0.0481 | 2.1249 | 1.4577 |
| No log        | 0.5714 | 4    | 1.3339          | 0.2568 | 1.3339 | 1.1549 |
| No log        | 0.8571 | 6    | 1.2672          | 0.2769 | 1.2672 | 1.1257 |
| No log        | 1.1429 | 8    | 1.3756          | 0.3526 | 1.3756 | 1.1728 |
| No log        | 1.4286 | 10   | 1.4595          | 0.3327 | 1.4595 | 1.2081 |
| No log        | 1.7143 | 12   | 1.2101          | 0.2697 | 1.2101 | 1.1000 |
| No log        | 2.0    | 14   | 1.0827          | 0.3803 | 1.0827 | 1.0405 |
| No log        | 2.2857 | 16   | 0.9448          | 0.4804 | 0.9448 | 0.9720 |
| No log        | 2.5714 | 18   | 0.9356          | 0.5655 | 0.9356 | 0.9672 |
| No log        | 2.8571 | 20   | 0.8837          | 0.6129 | 0.8837 | 0.9401 |
| No log        | 3.1429 | 22   | 0.7742          | 0.6628 | 0.7742 | 0.8799 |
| No log        | 3.4286 | 24   | 0.6540          | 0.6832 | 0.6540 | 0.8087 |
| No log        | 3.7143 | 26   | 0.6513          | 0.6641 | 0.6513 | 0.8070 |
| No log        | 4.0    | 28   | 0.6083          | 0.7228 | 0.6083 | 0.7799 |
| No log        | 4.2857 | 30   | 0.5926          | 0.7369 | 0.5926 | 0.7698 |
| No log        | 4.5714 | 32   | 0.6155          | 0.7467 | 0.6155 | 0.7846 |
| No log        | 4.8571 | 34   | 0.5937          | 0.7554 | 0.5937 | 0.7705 |
| No log        | 5.1429 | 36   | 0.5927          | 0.7228 | 0.5927 | 0.7699 |
| No log        | 5.4286 | 38   | 0.6257          | 0.7036 | 0.6257 | 0.7910 |
| No log        | 5.7143 | 40   | 0.5642          | 0.7654 | 0.5642 | 0.7511 |
| No log        | 6.0    | 42   | 0.6725          | 0.7404 | 0.6725 | 0.8201 |
| No log        | 6.2857 | 44   | 0.7244          | 0.7317 | 0.7244 | 0.8511 |
| No log        | 6.5714 | 46   | 0.6215          | 0.7617 | 0.6215 | 0.7883 |
| No log        | 6.8571 | 48   | 0.5527          | 0.7928 | 0.5527 | 0.7435 |
| No log        | 7.1429 | 50   | 0.6085          | 0.7143 | 0.6085 | 0.7801 |
| No log        | 7.4286 | 52   | 0.5957          | 0.7392 | 0.5957 | 0.7718 |
| No log        | 7.7143 | 54   | 0.5433          | 0.7769 | 0.5433 | 0.7371 |
| No log        | 8.0    | 56   | 0.5416          | 0.7600 | 0.5416 | 0.7359 |
| No log        | 8.2857 | 58   | 0.6040          | 0.7617 | 0.6040 | 0.7771 |
| No log        | 8.5714 | 60   | 0.6532          | 0.7551 | 0.6532 | 0.8082 |
| No log        | 8.8571 | 62   | 0.6621          | 0.7470 | 0.6621 | 0.8137 |
| No log        | 9.1429 | 64   | 0.6357          | 0.7551 | 0.6357 | 0.7973 |
| No log        | 9.4286 | 66   | 0.6079          | 0.7617 | 0.6079 | 0.7797 |
| No log        | 9.7143 | 68   | 0.5881          | 0.7621 | 0.5881 | 0.7669 |
| No log        | 10.0   | 70   | 0.5792          | 0.7568 | 0.5792 | 0.7610 |
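The table covers 70 optimizer steps over 10 epochs (7 steps per epoch, i.e. roughly 56 training examples at batch size 8), evaluated every 2 steps. With the linear scheduler listed above, and assuming zero warmup steps (the card does not state a warmup value), the learning rate decays from 2e-05 to 0 over those 70 steps:

```python
TOTAL_STEPS = 70   # 10 epochs x 7 optimizer steps per epoch, from the table above
BASE_LR = 2e-5

def linear_lr(step, total_steps=TOTAL_STEPS, base_lr=BASE_LR):
    """Linearly decayed learning rate (assumes zero warmup steps)."""
    return base_lr * max(0.0, (total_steps - step) / total_steps)
```

For example, the rate starts at 2e-05, is halved to 1e-05 by step 35 (mid-training), and reaches 0 at step 70.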

Framework versions

  • Transformers 4.44.2
  • Pytorch 2.4.0+cu118
  • Datasets 2.21.0
  • Tokenizers 0.19.1
