ArabicNewSplits6_WithDuplicationsForScore5_FineTuningAraBERT_run1_AugV5_k1_task5_organization

This model is a fine-tuned version of aubmindlab/bert-base-arabertv02 on an unspecified dataset. It achieves the following results on the evaluation set:

  • Loss: 0.6757
  • Qwk: 0.6888
  • Mse: 0.6757
  • Rmse: 0.8220
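Qwk above is quadratic weighted Cohen's kappa, and Rmse is the square root of Mse. The card does not include the actual evaluation code, so the following is only a minimal pure-Python sketch of how these metrics are conventionally computed for an ordinal scoring task:

```python
# Assumed metric implementations for an ordinal scoring task; the card
# does not ship the real evaluation code.

def quadratic_weighted_kappa(y_true, y_pred, n_classes):
    """Cohen's kappa with quadratic weights (the 'Qwk' metric)."""
    n = len(y_true)
    # Observed confusion matrix
    obs = [[0] * n_classes for _ in range(n_classes)]
    for t, p in zip(y_true, y_pred):
        obs[t][p] += 1
    hist_true = [sum(row) for row in obs]
    hist_pred = [sum(obs[i][j] for i in range(n_classes)) for j in range(n_classes)]
    num = den = 0.0
    for i in range(n_classes):
        for j in range(n_classes):
            w = (i - j) ** 2 / (n_classes - 1) ** 2     # quadratic disagreement weight
            expected = hist_true[i] * hist_pred[j] / n  # chance-level count
            num += w * obs[i][j]
            den += w * expected
    return 1.0 - num / den

def mse(y_true, y_pred):
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def rmse(y_true, y_pred):
    return mse(y_true, y_pred) ** 0.5

# Perfect predictions give Qwk = 1.0 and Mse = Rmse = 0.0
print(quadratic_weighted_kappa([0, 1, 2, 3], [0, 1, 2, 3], n_classes=4))  # → 1.0
```

Note that Mse equals the validation loss here, which suggests the model was trained as a regressor with an MSE objective rather than as a plain classifier.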

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 2e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 10
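With lr_scheduler_type set to linear and no warmup steps listed (zero warmup is assumed here), the learning rate decays from 2e-05 to zero over the run. A small sketch of that schedule:

```python
# Sketch of the linear LR schedule implied by the hyperparameters above.
# Zero warmup steps is an assumption; the card does not list a warmup value.
def linear_lr(step, total_steps, base_lr=2e-05, warmup_steps=0):
    """Linear warmup to base_lr, then linear decay to 0 at total_steps."""
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    remaining = max(0.0, total_steps - step)
    return base_lr * remaining / max(1, total_steps - warmup_steps)

# 10 epochs x 7 optimizer steps per epoch = 70 total steps
# (the final logged step in the training-results table is 70).
total_steps = 70
print(linear_lr(0, total_steps))   # starts at the base LR
print(linear_lr(35, total_steps))  # half the base LR at the midpoint
print(linear_lr(70, total_steps))  # 0.0 at the end
```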

Training results

| Training Loss | Epoch  | Step | Validation Loss | Qwk    | Mse    | Rmse   |
|:-------------:|:------:|:----:|:---------------:|:------:|:------:|:------:|
| No log        | 0.2857 | 2    | 2.2291          | 0.0408 | 2.2291 | 1.4930 |
| No log        | 0.5714 | 4    | 1.4107          | 0.2161 | 1.4107 | 1.1877 |
| No log        | 0.8571 | 6    | 1.2038          | 0.2634 | 1.2038 | 1.0972 |
| No log        | 1.1429 | 8    | 1.2473          | 0.3538 | 1.2473 | 1.1168 |
| No log        | 1.4286 | 10   | 1.3863          | 0.3907 | 1.3863 | 1.1774 |
| No log        | 1.7143 | 12   | 1.4115          | 0.4301 | 1.4115 | 1.1881 |
| No log        | 2.0    | 14   | 1.3178          | 0.4440 | 1.3178 | 1.1480 |
| No log        | 2.2857 | 16   | 1.0888          | 0.4789 | 1.0888 | 1.0435 |
| No log        | 2.5714 | 18   | 1.0108          | 0.4844 | 1.0108 | 1.0054 |
| No log        | 2.8571 | 20   | 0.9046          | 0.5436 | 0.9046 | 0.9511 |
| No log        | 3.1429 | 22   | 0.8340          | 0.5915 | 0.8340 | 0.9133 |
| No log        | 3.4286 | 24   | 0.8197          | 0.6108 | 0.8197 | 0.9053 |
| No log        | 3.7143 | 26   | 0.8901          | 0.6103 | 0.8901 | 0.9434 |
| No log        | 4.0    | 28   | 0.7990          | 0.6280 | 0.7990 | 0.8939 |
| No log        | 4.2857 | 30   | 0.7188          | 0.6644 | 0.7188 | 0.8478 |
| No log        | 4.5714 | 32   | 0.6735          | 0.6770 | 0.6735 | 0.8207 |
| No log        | 4.8571 | 34   | 0.6502          | 0.6915 | 0.6502 | 0.8063 |
| No log        | 5.1429 | 36   | 0.6319          | 0.6791 | 0.6319 | 0.7949 |
| No log        | 5.4286 | 38   | 0.6192          | 0.7041 | 0.6192 | 0.7869 |
| No log        | 5.7143 | 40   | 0.6921          | 0.6867 | 0.6921 | 0.8319 |
| No log        | 6.0    | 42   | 0.8217          | 0.6529 | 0.8217 | 0.9065 |
| No log        | 6.2857 | 44   | 0.9014          | 0.6748 | 0.9014 | 0.9494 |
| No log        | 6.5714 | 46   | 0.8613          | 0.6706 | 0.8613 | 0.9281 |
| No log        | 6.8571 | 48   | 0.7102          | 0.6848 | 0.7102 | 0.8427 |
| No log        | 7.1429 | 50   | 0.6102          | 0.7004 | 0.6102 | 0.7811 |
| No log        | 7.4286 | 52   | 0.6139          | 0.6970 | 0.6139 | 0.7835 |
| No log        | 7.7143 | 54   | 0.6091          | 0.7107 | 0.6091 | 0.7804 |
| No log        | 8.0    | 56   | 0.6125          | 0.7004 | 0.6125 | 0.7826 |
| No log        | 8.2857 | 58   | 0.6598          | 0.6832 | 0.6598 | 0.8123 |
| No log        | 8.5714 | 60   | 0.6985          | 0.6911 | 0.6985 | 0.8358 |
| No log        | 8.8571 | 62   | 0.6971          | 0.6911 | 0.6971 | 0.8349 |
| No log        | 9.1429 | 64   | 0.6968          | 0.6911 | 0.6968 | 0.8348 |
| No log        | 9.4286 | 66   | 0.6901          | 0.6976 | 0.6901 | 0.8307 |
| No log        | 9.7143 | 68   | 0.6817          | 0.6888 | 0.6817 | 0.8257 |
| No log        | 10.0   | 70   | 0.6757          | 0.6888 | 0.6757 | 0.8220 |

Framework versions

  • Transformers 4.44.2
  • Pytorch 2.4.0+cu118
  • Datasets 2.21.0
  • Tokenizers 0.19.1
Model size: 0.1B params (Safetensors, F32)

Model tree for MayBashendy/ArabicNewSplits6_WithDuplicationsForScore5_FineTuningAraBERT_run1_AugV5_k1_task5_organization

Fine-tuned from aubmindlab/bert-base-arabertv02