ArabicNewSplits6_FineTuningAraBERT_run2_AugV5_k2_task5_organization

This model is a fine-tuned version of aubmindlab/bert-base-arabertv02 on an unspecified dataset. It achieves the following results on the evaluation set (a sketch for recomputing these metrics follows the list):

  • Loss: 0.8879
  • Qwk: 0.6848
  • Mse: 0.8879
  • Rmse: 0.9423
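Note that the reported Loss equals the MSE (both 0.8879), which is consistent with a mean-squared-error regression objective, and RMSE is its square root. A minimal sketch of how these metrics could be recomputed with scikit-learn; `y_true` and `y_pred` are hypothetical placeholder arrays, and rounding predictions to integer labels before QWK is an assumption, not something the card states:

```python
# Hedged sketch: recompute the card's metrics (QWK, MSE, RMSE) from
# gold scores and model outputs. The arrays below are placeholders.
import numpy as np
from sklearn.metrics import cohen_kappa_score, mean_squared_error

y_true = np.array([3, 2, 4, 1, 3])             # placeholder gold scores
y_pred = np.array([2.8, 2.1, 3.6, 1.4, 3.2])   # placeholder model outputs

mse = mean_squared_error(y_true, y_pred)
rmse = np.sqrt(mse)
# Quadratic weighted kappa needs discrete labels; rounding the
# regression output first is an assumption for this sketch.
qwk = cohen_kappa_score(y_true, np.rint(y_pred).astype(int),
                        weights="quadratic")
print(f"Qwk: {qwk:.4f}  Mse: {mse:.4f}  Rmse: {rmse:.4f}")
```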

Model description

More information needed

Intended uses & limitations

More information needed
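Pending that information, a minimal sketch of loading the checkpoint for inference with 🤗 Transformers. Treating the head as a single-output regression model is an assumption inferred from the MSE/RMSE evaluation metrics above; the card itself does not document the head or the label scale.

```python
# Hedged sketch: load the fine-tuned checkpoint and score one text.
# A single regression output is assumed; the input string is a placeholder.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "MayBashendy/ArabicNewSplits6_FineTuningAraBERT_run2_AugV5_k2_task5_organization"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Placeholder Arabic input ("a sample text for evaluation").
inputs = tokenizer("نص تجريبي للتقييم", return_tensors="pt",
                   truncation=True, max_length=512)
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()
print(f"predicted organization score: {score:.2f}")
```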

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

  • learning_rate: 2e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 10
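A minimal sketch of reproducing this configuration with the 🤗 Trainer. Loading the base checkpoint with `num_labels=1` is an assumption (a regression head, inferred from the MSE-based evaluation), and the output directory and dataset variables are hypothetical placeholders:

```python
# Hedged sketch: TrainingArguments matching the listed hyperparameters.
# Adam with betas=(0.9, 0.999), epsilon=1e-08, and a linear schedule are
# the Trainer defaults, so only the explicit values are set here.
from transformers import (AutoModelForSequenceClassification, Trainer,
                          TrainingArguments)

model = AutoModelForSequenceClassification.from_pretrained(
    "aubmindlab/bert-base-arabertv02",
    num_labels=1,  # assumed regression head; not stated on the card
)

args = TrainingArguments(
    output_dir="arabert_task5_organization",  # hypothetical path
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=10,
)

# `train_ds` and `eval_ds` are hypothetical tokenized datasets.
# trainer = Trainer(model=model, args=args,
#                   train_dataset=train_ds, eval_dataset=eval_ds)
# trainer.train()
```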

Training results

| Training Loss | Epoch  | Step | Validation Loss | Qwk    | Mse    | Rmse   |
|:-------------:|:------:|:----:|:---------------:|:------:|:------:|:------:|
| No log        | 0.1818 | 2    | 2.2709          | 0.0088 | 2.2709 | 1.5070 |
| No log        | 0.3636 | 4    | 1.7138          | 0.0400 | 1.7138 | 1.3091 |
| No log        | 0.5455 | 6    | 1.7412          | 0.0136 | 1.7412 | 1.3195 |
| No log        | 0.7273 | 8    | 1.5302          | 0.2050 | 1.5302 | 1.2370 |
| No log        | 0.9091 | 10   | 1.4885          | 0.2588 | 1.4885 | 1.2200 |
| No log        | 1.0909 | 12   | 1.4998          | 0.2405 | 1.4998 | 1.2247 |
| No log        | 1.2727 | 14   | 1.6424          | 0.2639 | 1.6424 | 1.2815 |
| No log        | 1.4545 | 16   | 1.6052          | 0.2996 | 1.6052 | 1.2670 |
| No log        | 1.6364 | 18   | 1.4910          | 0.2354 | 1.4910 | 1.2211 |
| No log        | 1.8182 | 20   | 1.3258          | 0.3025 | 1.3258 | 1.1514 |
| No log        | 2.0    | 22   | 1.2220          | 0.3395 | 1.2220 | 1.1054 |
| No log        | 2.1818 | 24   | 1.1524          | 0.3226 | 1.1524 | 1.0735 |
| No log        | 2.3636 | 26   | 1.1310          | 0.2682 | 1.1310 | 1.0635 |
| No log        | 2.5455 | 28   | 1.1343          | 0.2936 | 1.1343 | 1.0650 |
| No log        | 2.7273 | 30   | 1.1651          | 0.4151 | 1.1651 | 1.0794 |
| No log        | 2.9091 | 32   | 1.2613          | 0.4607 | 1.2613 | 1.1231 |
| No log        | 3.0909 | 34   | 1.2590          | 0.5218 | 1.2590 | 1.1220 |
| No log        | 3.2727 | 36   | 1.1954          | 0.5200 | 1.1954 | 1.0933 |
| No log        | 3.4545 | 38   | 1.0391          | 0.5903 | 1.0391 | 1.0194 |
| No log        | 3.6364 | 40   | 0.9924          | 0.6233 | 0.9924 | 0.9962 |
| No log        | 3.8182 | 42   | 0.9426          | 0.6332 | 0.9426 | 0.9709 |
| No log        | 4.0    | 44   | 0.8869          | 0.6678 | 0.8869 | 0.9418 |
| No log        | 4.1818 | 46   | 0.9002          | 0.6559 | 0.9002 | 0.9488 |
| No log        | 4.3636 | 48   | 0.9887          | 0.6184 | 0.9887 | 0.9943 |
| No log        | 4.5455 | 50   | 1.1141          | 0.5550 | 1.1141 | 1.0555 |
| No log        | 4.7273 | 52   | 1.1507          | 0.5467 | 1.1507 | 1.0727 |
| No log        | 4.9091 | 54   | 1.1312          | 0.5334 | 1.1312 | 1.0636 |
| No log        | 5.0909 | 56   | 1.0697          | 0.5485 | 1.0697 | 1.0343 |
| No log        | 5.2727 | 58   | 1.0063          | 0.5771 | 1.0063 | 1.0031 |
| No log        | 5.4545 | 60   | 0.9096          | 0.5850 | 0.9096 | 0.9537 |
| No log        | 5.6364 | 62   | 0.8272          | 0.6061 | 0.8272 | 0.9095 |
| No log        | 5.8182 | 64   | 0.7564          | 0.6485 | 0.7564 | 0.8697 |
| No log        | 6.0    | 66   | 0.7170          | 0.6871 | 0.7170 | 0.8468 |
| No log        | 6.1818 | 68   | 0.7125          | 0.7037 | 0.7125 | 0.8441 |
| No log        | 6.3636 | 70   | 0.7421          | 0.6595 | 0.7421 | 0.8615 |
| No log        | 6.5455 | 72   | 0.8479          | 0.6552 | 0.8479 | 0.9208 |
| No log        | 6.7273 | 74   | 1.0228          | 0.6419 | 1.0228 | 1.0113 |
| No log        | 6.9091 | 76   | 1.1326          | 0.6172 | 1.1326 | 1.0642 |
| No log        | 7.0909 | 78   | 1.1401          | 0.6238 | 1.1401 | 1.0677 |
| No log        | 7.2727 | 80   | 1.0679          | 0.6180 | 1.0679 | 1.0334 |
| No log        | 7.4545 | 82   | 0.9903          | 0.6347 | 0.9903 | 0.9951 |
| No log        | 7.6364 | 84   | 0.8867          | 0.6671 | 0.8867 | 0.9416 |
| No log        | 7.8182 | 86   | 0.7989          | 0.6803 | 0.7989 | 0.8938 |
| No log        | 8.0    | 88   | 0.7719          | 0.6959 | 0.7719 | 0.8786 |
| No log        | 8.1818 | 90   | 0.7639          | 0.7062 | 0.7639 | 0.8740 |
| No log        | 8.3636 | 92   | 0.7947          | 0.6985 | 0.7947 | 0.8915 |
| No log        | 8.5455 | 94   | 0.8317          | 0.6956 | 0.8317 | 0.9120 |
| No log        | 8.7273 | 96   | 0.8655          | 0.6778 | 0.8655 | 0.9303 |
| No log        | 8.9091 | 98   | 0.8904          | 0.6671 | 0.8904 | 0.9436 |
| No log        | 9.0909 | 100  | 0.8997          | 0.6671 | 0.8997 | 0.9485 |
| No log        | 9.2727 | 102  | 0.8936          | 0.6671 | 0.8936 | 0.9453 |
| No log        | 9.4545 | 104  | 0.8935          | 0.6769 | 0.8935 | 0.9452 |
| No log        | 9.6364 | 106  | 0.8911          | 0.6848 | 0.8911 | 0.9440 |
| No log        | 9.8182 | 108  | 0.8900          | 0.6848 | 0.8900 | 0.9434 |
| No log        | 10.0   | 110  | 0.8879          | 0.6848 | 0.8879 | 0.9423 |

Framework versions

  • Transformers 4.44.2
  • Pytorch 2.4.0+cu118
  • Datasets 2.21.0
  • Tokenizers 0.19.1