ArabicNewSplits6_WithDuplicationsForScore5_FineTuningAraBERT_run3_AugV5_k1_task5_organization

This model is a fine-tuned version of aubmindlab/bert-base-arabertv02 on an unspecified dataset. It achieves the following results on the evaluation set:

  • Loss: 1.0494
  • Qwk: 0.5948
  • Mse: 1.0494
  • Rmse: 1.0244
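The Qwk metric above is quadratic weighted Cohen's kappa, the standard agreement measure for ordinal scoring tasks; note that Rmse is simply the square root of Mse (√1.0494 ≈ 1.0244). A minimal sketch of how these metrics can be computed (function names are illustrative, not the card's actual evaluation code):

```python
import numpy as np

def quadratic_weighted_kappa(y_true, y_pred, n_classes):
    """Quadratic weighted Cohen's kappa (the 'Qwk' reported above)."""
    # Observed confusion matrix O.
    O = np.zeros((n_classes, n_classes))
    for t, p in zip(y_true, y_pred):
        O[t, p] += 1
    # Quadratic disagreement weights, normalized to [0, 1].
    W = np.array([[(i - j) ** 2 for j in range(n_classes)]
                  for i in range(n_classes)], dtype=float) / (n_classes - 1) ** 2
    # Expected matrix E under chance agreement, from the marginals.
    E = np.outer(O.sum(axis=1), O.sum(axis=0)) / O.sum()
    return 1.0 - (W * O).sum() / (W * E).sum()

def mse(y_true, y_pred):
    """Mean squared error; Rmse is its square root."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.mean((y_true - y_pred) ** 2))
```

In practice the same numbers can be obtained with `sklearn.metrics.cohen_kappa_score(..., weights="quadratic")` and `sklearn.metrics.mean_squared_error`.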

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 2e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 10
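With the linear scheduler and these settings, the learning rate decays from 2e-05 to zero over the run; the step column in the results table below implies 6 optimizer steps per epoch, i.e. 60 steps in total. A minimal sketch of the schedule (assuming zero warmup steps, the Trainer default when none is set):

```python
def linear_lr(step, base_lr=2e-05, total_steps=60, warmup_steps=0):
    """Learning rate at a given optimizer step under a linear schedule.

    total_steps=60 follows from the results table (6 steps/epoch x 10 epochs);
    warmup_steps=0 is an assumption, since the card does not list a warmup value.
    """
    if step < warmup_steps:
        # Linear warmup from 0 to base_lr.
        return base_lr * step / max(1, warmup_steps)
    # Linear decay from base_lr down to 0 at total_steps.
    remaining = max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))
    return base_lr * remaining
```

The same schedule is what `transformers.get_linear_schedule_with_warmup` produces when wired into the Trainer with `lr_scheduler_type="linear"`.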

Training results

| Training Loss | Epoch  | Step | Validation Loss | Qwk    | Mse    | Rmse   |
|:-------------:|:------:|:----:|:---------------:|:------:|:------:|:------:|
| No log        | 0.3333 | 2    | 2.3584          | 0.0263 | 2.3584 | 1.5357 |
| No log        | 0.6667 | 4    | 1.5726          | 0.1917 | 1.5726 | 1.2541 |
| No log        | 1.0    | 6    | 1.2525          | 0.2342 | 1.2525 | 1.1192 |
| No log        | 1.3333 | 8    | 1.3271          | 0.1475 | 1.3271 | 1.1520 |
| No log        | 1.6667 | 10   | 1.4177          | 0.3543 | 1.4177 | 1.1907 |
| No log        | 2.0    | 12   | 1.7354          | 0.3403 | 1.7354 | 1.3174 |
| No log        | 2.3333 | 14   | 2.1633          | 0.2262 | 2.1633 | 1.4708 |
| No log        | 2.6667 | 16   | 2.5027          | 0.2288 | 2.5027 | 1.5820 |
| No log        | 3.0    | 18   | 2.3444          | 0.1914 | 2.3444 | 1.5312 |
| No log        | 3.3333 | 20   | 1.9998          | 0.2155 | 1.9998 | 1.4142 |
| No log        | 3.6667 | 22   | 1.8190          | 0.2521 | 1.8190 | 1.3487 |
| No log        | 4.0    | 24   | 1.7885          | 0.3205 | 1.7885 | 1.3373 |
| No log        | 4.3333 | 26   | 1.6569          | 0.3668 | 1.6569 | 1.2872 |
| No log        | 4.6667 | 28   | 1.6328          | 0.3958 | 1.6328 | 1.2778 |
| No log        | 5.0    | 30   | 1.6401          | 0.3848 | 1.6401 | 1.2806 |
| No log        | 5.3333 | 32   | 1.5225          | 0.4138 | 1.5225 | 1.2339 |
| No log        | 5.6667 | 34   | 1.4126          | 0.4624 | 1.4126 | 1.1885 |
| No log        | 6.0    | 36   | 1.2496          | 0.4841 | 1.2496 | 1.1178 |
| No log        | 6.3333 | 38   | 1.1718          | 0.4900 | 1.1718 | 1.0825 |
| No log        | 6.6667 | 40   | 1.1720          | 0.5302 | 1.1720 | 1.0826 |
| No log        | 7.0    | 42   | 1.1444          | 0.5629 | 1.1444 | 1.0698 |
| No log        | 7.3333 | 44   | 1.1238          | 0.5803 | 1.1238 | 1.0601 |
| No log        | 7.6667 | 46   | 1.1178          | 0.5791 | 1.1178 | 1.0573 |
| No log        | 8.0    | 48   | 1.0614          | 0.5884 | 1.0614 | 1.0302 |
| No log        | 8.3333 | 50   | 1.0351          | 0.5859 | 1.0351 | 1.0174 |
| No log        | 8.6667 | 52   | 1.0004          | 0.5915 | 1.0004 | 1.0002 |
| No log        | 9.0    | 54   | 0.9981          | 0.5915 | 0.9981 | 0.9990 |
| No log        | 9.3333 | 56   | 1.0201          | 0.6002 | 1.0201 | 1.0100 |
| No log        | 9.6667 | 58   | 1.0448          | 0.5948 | 1.0448 | 1.0222 |
| No log        | 10.0   | 60   | 1.0494          | 0.5948 | 1.0494 | 1.0244 |

Framework versions

  • Transformers 4.44.2
  • Pytorch 2.4.0+cu118
  • Datasets 2.21.0
  • Tokenizers 0.19.1
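The Qwk/Mse metrics suggest the model predicts a discrete ordinal score (here, essay organization). How raw outputs are mapped to final scores is not documented in this card; assuming a single regression-style output head, a common post-processing step is to round and clip the prediction, sketched below (the 0–4 score range and function name are illustrative assumptions):

```python
def to_score(raw_output, low=0, high=4):
    """Map a raw model output to a discrete score by rounding and clipping.

    The [low, high] range is an assumption; the actual score scale for this
    task is not stated in the model card.
    """
    return int(min(high, max(low, round(float(raw_output)))))
```

To obtain `raw_output` in practice, the model can be loaded with `transformers.AutoModelForSequenceClassification.from_pretrained(...)` and `AutoTokenizer.from_pretrained(...)` using this repository's id, provided the head configuration matches the one used at fine-tuning time.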