ArabicNewSplits6_WithDuplicationsForScore5_FineTuningAraBERT_run2_AugV5_k2_task5_organization

This model is a fine-tuned version of aubmindlab/bert-base-arabertv02 on an unspecified dataset. It achieves the following results on the evaluation set (a usage sketch follows the list):

  • Loss: 0.6986
  • Qwk: 0.7427
  • Mse: 0.6986
  • Rmse: 0.8359
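
The head architecture and label scale are not documented, so the following is a minimal usage sketch, assuming the checkpoint exposes a single-logit sequence-classification (regression) head — which the Mse/Rmse metrics suggest but the card does not confirm:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Repo id as published on the Hugging Face Hub; the regression head is an assumption.
repo_id = "MayBashendy/ArabicNewSplits6_WithDuplicationsForScore5_FineTuningAraBERT_run2_AugV5_k2_task5_organization"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSequenceClassification.from_pretrained(repo_id)
model.eval()

text = "..."  # an Arabic passage to score for organization
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()  # predicted organization score
print(score)
```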

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a reproduction sketch follows the list):

  • learning_rate: 2e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 10
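
For reference, here is a sketch of these settings, assuming the standard Hugging Face Trainer was used (the card does not state the training script):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="arabert_task5_organization",  # hypothetical output path
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=10,
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08, as listed above
    # (these match the Trainer defaults).
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```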

Training results

| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:------:|:----:|:---------------:|:------:|:------:|:------:|
| No log | 0.1818 | 2 | 2.3641 | 0.0579 | 2.3641 | 1.5376 |
| No log | 0.3636 | 4 | 1.4772 | 0.2630 | 1.4772 | 1.2154 |
| No log | 0.5455 | 6 | 1.2256 | 0.2737 | 1.2256 | 1.1071 |
| No log | 0.7273 | 8 | 1.2324 | 0.1959 | 1.2324 | 1.1101 |
| No log | 0.9091 | 10 | 1.3593 | 0.1926 | 1.3593 | 1.1659 |
| No log | 1.0909 | 12 | 1.4275 | 0.2700 | 1.4275 | 1.1948 |
| No log | 1.2727 | 14 | 1.4621 | 0.4035 | 1.4621 | 1.2092 |
| No log | 1.4545 | 16 | 1.6175 | 0.3940 | 1.6175 | 1.2718 |
| No log | 1.6364 | 18 | 1.6256 | 0.3769 | 1.6256 | 1.2750 |
| No log | 1.8182 | 20 | 1.4448 | 0.4257 | 1.4448 | 1.2020 |
| No log | 2.0 | 22 | 1.2749 | 0.4032 | 1.2749 | 1.1291 |
| No log | 2.1818 | 24 | 1.1558 | 0.3298 | 1.1558 | 1.0751 |
| No log | 2.3636 | 26 | 1.1112 | 0.3356 | 1.1112 | 1.0542 |
| No log | 2.5455 | 28 | 1.0616 | 0.4108 | 1.0616 | 1.0303 |
| No log | 2.7273 | 30 | 0.9823 | 0.4425 | 0.9823 | 0.9911 |
| No log | 2.9091 | 32 | 0.9610 | 0.5534 | 0.9610 | 0.9803 |
| No log | 3.0909 | 34 | 0.9654 | 0.5297 | 0.9654 | 0.9825 |
| No log | 3.2727 | 36 | 0.9966 | 0.5524 | 0.9966 | 0.9983 |
| No log | 3.4545 | 38 | 1.0078 | 0.5985 | 1.0078 | 1.0039 |
| No log | 3.6364 | 40 | 0.9842 | 0.6192 | 0.9842 | 0.9921 |
| No log | 3.8182 | 42 | 0.9355 | 0.6169 | 0.9355 | 0.9672 |
| No log | 4.0 | 44 | 0.8139 | 0.6378 | 0.8139 | 0.9021 |
| No log | 4.1818 | 46 | 0.7261 | 0.6596 | 0.7261 | 0.8521 |
| No log | 4.3636 | 48 | 0.6957 | 0.6952 | 0.6957 | 0.8341 |
| No log | 4.5455 | 50 | 0.6885 | 0.7098 | 0.6885 | 0.8297 |
| No log | 4.7273 | 52 | 0.6872 | 0.7003 | 0.6872 | 0.8290 |
| No log | 4.9091 | 54 | 0.7308 | 0.6809 | 0.7308 | 0.8549 |
| No log | 5.0909 | 56 | 0.7887 | 0.6864 | 0.7887 | 0.8881 |
| No log | 5.2727 | 58 | 0.8021 | 0.6942 | 0.8021 | 0.8956 |
| No log | 5.4545 | 60 | 0.8135 | 0.6855 | 0.8135 | 0.9019 |
| No log | 5.6364 | 62 | 0.7672 | 0.7098 | 0.7672 | 0.8759 |
| No log | 5.8182 | 64 | 0.7452 | 0.7031 | 0.7452 | 0.8632 |
| No log | 6.0 | 66 | 0.7024 | 0.6977 | 0.7024 | 0.8381 |
| No log | 6.1818 | 68 | 0.6779 | 0.6713 | 0.6779 | 0.8234 |
| No log | 6.3636 | 70 | 0.6832 | 0.7015 | 0.6832 | 0.8266 |
| No log | 6.5455 | 72 | 0.7007 | 0.6850 | 0.7007 | 0.8371 |
| No log | 6.7273 | 74 | 0.7242 | 0.6875 | 0.7242 | 0.8510 |
| No log | 6.9091 | 76 | 0.8051 | 0.7014 | 0.8051 | 0.8973 |
| No log | 7.0909 | 78 | 0.9199 | 0.6613 | 0.9199 | 0.9591 |
| No log | 7.2727 | 80 | 0.9603 | 0.6439 | 0.9603 | 0.9800 |
| No log | 7.4545 | 82 | 0.9467 | 0.6535 | 0.9467 | 0.9730 |
| No log | 7.6364 | 84 | 0.8903 | 0.6763 | 0.8903 | 0.9436 |
| No log | 7.8182 | 86 | 0.8205 | 0.6911 | 0.8205 | 0.9058 |
| No log | 8.0 | 88 | 0.7397 | 0.7057 | 0.7397 | 0.8601 |
| No log | 8.1818 | 90 | 0.6963 | 0.7211 | 0.6963 | 0.8344 |
| No log | 8.3636 | 92 | 0.6791 | 0.7453 | 0.6791 | 0.8241 |
| No log | 8.5455 | 94 | 0.6697 | 0.7393 | 0.6697 | 0.8184 |
| No log | 8.7273 | 96 | 0.6786 | 0.7453 | 0.6786 | 0.8238 |
| No log | 8.9091 | 98 | 0.6860 | 0.7346 | 0.6860 | 0.8282 |
| No log | 9.0909 | 100 | 0.6908 | 0.7346 | 0.6908 | 0.8312 |
| No log | 9.2727 | 102 | 0.6924 | 0.7346 | 0.6924 | 0.8321 |
| No log | 9.4545 | 104 | 0.6961 | 0.7427 | 0.6961 | 0.8343 |
| No log | 9.6364 | 106 | 0.6955 | 0.7427 | 0.6955 | 0.8339 |
| No log | 9.8182 | 108 | 0.6977 | 0.7427 | 0.6977 | 0.8353 |
| No log | 10.0 | 110 | 0.6986 | 0.7427 | 0.6986 | 0.8359 |
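
How these columns are computed is not documented; below is a minimal sketch, assuming integer-valued organization scores and using hypothetical y_true/y_pred arrays in place of the unreleased evaluation data:

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score, mean_squared_error

y_true = np.array([3, 4, 2, 5, 4])  # hypothetical gold organization scores
y_pred = np.array([3, 3, 2, 5, 5])  # hypothetical predictions, rounded to integers

qwk = cohen_kappa_score(y_true, y_pred, weights="quadratic")  # quadratic weighted kappa (Qwk)
mse = mean_squared_error(y_true, y_pred)                      # mean squared error (Mse)
rmse = float(np.sqrt(mse))                                    # root mean squared error (Rmse)
print(f"Qwk: {qwk:.4f}  Mse: {mse:.4f}  Rmse: {rmse:.4f}")
```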

Framework versions

  • Transformers 4.44.2
  • Pytorch 2.4.0+cu118
  • Datasets 2.21.0
  • Tokenizers 0.19.1