ArabicNewSplits6_WithDuplicationsForScore5_FineTuningAraBERT_run3_AugV5_k1_task1_organization

This model is a fine-tuned version of aubmindlab/bert-base-arabertv02 on an unspecified dataset. It achieves the following results on the evaluation set (a brief usage sketch follows the metrics):

  • Loss: 0.8236
  • Qwk: 0.6465
  • Mse: 0.8236
  • Rmse: 0.9075
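
Since the card omits a usage section, the snippet below is a minimal sketch of loading the checkpoint from the Hub. The assumption that it carries a single-output regression head (rather than a multi-class classifier) is inferred from the identical Loss and Mse values above; it is not stated in the card.

```python
# Minimal usage sketch. Assumption (not documented in this card): the checkpoint
# exposes a sequence-classification head with a single regression output that
# predicts an organization score for an Arabic passage.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "MayBashendy/ArabicNewSplits6_WithDuplicationsForScore5_FineTuningAraBERT_run3_AugV5_k1_task1_organization"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

text = "..."  # an Arabic passage to score
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    outputs = model(**inputs)
score = outputs.logits.squeeze().item()  # single regression output (assumed)
print(score)
```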

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a TrainingArguments sketch follows the list):

  • learning_rate: 2e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 10
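
For readers reproducing the setup, the listed hyperparameters map onto transformers.TrainingArguments roughly as in this sketch; the output directory is a hypothetical placeholder, and any settings beyond those listed above are unknown.

```python
# Sketch: the reported hyperparameters expressed as transformers.TrainingArguments.
# The output_dir is an illustrative placeholder, not taken from the original run.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="arabert-task1-organization",  # hypothetical path
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=10,
)
```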

Training results

| Training Loss | Epoch | Step | Validation Loss | Qwk     | Mse    | Rmse   |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:------:|
| No log        | 0.25  | 2    | 4.9218          | -0.0078 | 4.9218 | 2.2185 |
| No log        | 0.5   | 4    | 3.1460          | 0.0711  | 3.1460 | 1.7737 |
| No log        | 0.75  | 6    | 1.7174          | 0.1110  | 1.7174 | 1.3105 |
| No log        | 1.0   | 8    | 1.2579          | 0.2581  | 1.2579 | 1.1215 |
| No log        | 1.25  | 10   | 1.1525          | 0.2189  | 1.1525 | 1.0735 |
| No log        | 1.5   | 12   | 1.2477          | 0.1628  | 1.2477 | 1.1170 |
| No log        | 1.75  | 14   | 1.2352          | 0.2112  | 1.2352 | 1.1114 |
| No log        | 2.0   | 16   | 1.2084          | 0.2857  | 1.2084 | 1.0993 |
| No log        | 2.25  | 18   | 1.1900          | 0.3329  | 1.1900 | 1.0909 |
| No log        | 2.5   | 20   | 1.1173          | 0.4018  | 1.1173 | 1.0570 |
| No log        | 2.75  | 22   | 1.0085          | 0.4227  | 1.0085 | 1.0043 |
| No log        | 3.0   | 24   | 0.9519          | 0.4632  | 0.9519 | 0.9757 |
| No log        | 3.25  | 26   | 0.9582          | 0.4682  | 0.9582 | 0.9789 |
| No log        | 3.5   | 28   | 0.9808          | 0.4465  | 0.9808 | 0.9904 |
| No log        | 3.75  | 30   | 0.8343          | 0.5581  | 0.8343 | 0.9134 |
| No log        | 4.0   | 32   | 0.7627          | 0.5968  | 0.7627 | 0.8734 |
| No log        | 4.25  | 34   | 0.7576          | 0.5902  | 0.7576 | 0.8704 |
| No log        | 4.5   | 36   | 0.7558          | 0.6263  | 0.7558 | 0.8694 |
| No log        | 4.75  | 38   | 0.8132          | 0.5838  | 0.8132 | 0.9018 |
| No log        | 5.0   | 40   | 0.8171          | 0.5673  | 0.8171 | 0.9040 |
| No log        | 5.25  | 42   | 0.7835          | 0.6282  | 0.7835 | 0.8852 |
| No log        | 5.5   | 44   | 0.7963          | 0.5787  | 0.7963 | 0.8924 |
| No log        | 5.75  | 46   | 0.8043          | 0.5584  | 0.8043 | 0.8968 |
| No log        | 6.0   | 48   | 0.8117          | 0.5584  | 0.8117 | 0.9009 |
| No log        | 6.25  | 50   | 0.8302          | 0.5795  | 0.8302 | 0.9111 |
| No log        | 6.5   | 52   | 0.8867          | 0.5732  | 0.8867 | 0.9416 |
| No log        | 6.75  | 54   | 0.9018          | 0.5876  | 0.9018 | 0.9496 |
| No log        | 7.0   | 56   | 0.8515          | 0.6233  | 0.8515 | 0.9227 |
| No log        | 7.25  | 58   | 0.8345          | 0.6462  | 0.8345 | 0.9135 |
| No log        | 7.5   | 60   | 0.8453          | 0.6398  | 0.8453 | 0.9194 |
| No log        | 7.75  | 62   | 0.8536          | 0.6381  | 0.8536 | 0.9239 |
| No log        | 8.0   | 64   | 0.8474          | 0.6368  | 0.8474 | 0.9205 |
| No log        | 8.25  | 66   | 0.8351          | 0.6594  | 0.8351 | 0.9138 |
| No log        | 8.5   | 68   | 0.8297          | 0.6679  | 0.8297 | 0.9109 |
| No log        | 8.75  | 70   | 0.8313          | 0.6570  | 0.8313 | 0.9118 |
| No log        | 9.0   | 72   | 0.8321          | 0.6425  | 0.8321 | 0.9122 |
| No log        | 9.25  | 74   | 0.8288          | 0.6425  | 0.8288 | 0.9104 |
| No log        | 9.5   | 76   | 0.8236          | 0.6465  | 0.8236 | 0.9075 |
| No log        | 9.75  | 78   | 0.8229          | 0.6465  | 0.8229 | 0.9072 |
| No log        | 10.0  | 80   | 0.8236          | 0.6465  | 0.8236 | 0.9075 |
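
Qwk above is quadratic weighted kappa; Mse and Rmse are the (root) mean squared error. A minimal sketch of computing these metrics with scikit-learn follows, using illustrative integer scores and assuming gold and predicted labels share one ordinal scale; the exact metric implementation of the original run is not documented here.

```python
# Sketch: computing the reported metrics with scikit-learn and NumPy.
# Assumes y_true / y_pred are integer scores on the same ordinal scale.
import numpy as np
from sklearn.metrics import cohen_kappa_score, mean_squared_error

y_true = np.array([3, 4, 2, 5, 4])  # illustrative gold scores
y_pred = np.array([3, 3, 2, 4, 4])  # illustrative model predictions

qwk = cohen_kappa_score(y_true, y_pred, weights="quadratic")
mse = mean_squared_error(y_true, y_pred)
rmse = np.sqrt(mse)
print(f"Qwk={qwk:.4f}  Mse={mse:.4f}  Rmse={rmse:.4f}")
```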

Framework versions

  • Transformers 4.44.2
  • Pytorch 2.4.0+cu118
  • Datasets 2.21.0
  • Tokenizers 0.19.1
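
To check that a local environment matches the reported versions, a simple sketch; note the +cu118 suffix on the PyTorch version reflects the original CUDA build and may differ locally.

```python
# Sketch: verifying the installed versions against those reported above.
import transformers, torch, datasets, tokenizers

print(transformers.__version__)  # expected: 4.44.2
print(torch.__version__)         # expected: 2.4.0+cu118
print(datasets.__version__)      # expected: 2.21.0
print(tokenizers.__version__)    # expected: 0.19.1
```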