ArabicNewSplits6_FineTuningAraBERT_run1_AugV5_k1_task1_organization

This model is a fine-tuned version of aubmindlab/bert-base-arabertv02 (the fine-tuning dataset is not documented in this card). It achieves the following results on the evaluation set:

  • Loss: 0.6906
  • Qwk: 0.6724
  • Mse: 0.6906
  • Rmse: 0.8310
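As a point of reference, the three metrics above can be computed as follows. This is a minimal pure-Python sketch assuming an ordinal scoring task with integer labels; the sample labels are hypothetical, since the evaluation data is not part of this card.

```python
# Hypothetical labels for illustration only -- not the card's evaluation data.
import math
from collections import Counter

def quadratic_weighted_kappa(y_true, y_pred, num_classes):
    """Cohen's kappa with quadratic weights (the 'Qwk' metric)."""
    n = len(y_true)
    observed = Counter(zip(y_true, y_pred))   # joint counts of (true, pred)
    true_marg = Counter(y_true)               # row marginals
    pred_marg = Counter(y_pred)               # column marginals
    num = den = 0.0
    for i in range(num_classes):
        for j in range(num_classes):
            w = (i - j) ** 2 / (num_classes - 1) ** 2  # quadratic penalty
            num += w * observed[(i, j)]
            den += w * true_marg[i] * pred_marg[j] / n  # chance agreement
    return 1.0 - num / den

y_true = [0, 1, 2, 3, 2, 1]
y_pred = [0, 1, 2, 2, 2, 1]
qwk = quadratic_weighted_kappa(y_true, y_pred, num_classes=4)
mse = sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)
rmse = math.sqrt(mse)  # Rmse is simply sqrt(Mse), hence Loss == Mse above
```

Note that the reported Loss equals the reported Mse, which suggests the model was trained as a regressor with an MSE objective.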

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 2e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 10
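The linear scheduler decays the learning rate from its base value to zero over the course of training. A small sketch of the implied schedule, assuming no warmup (the card does not list any) and taking the total of 80 optimizer steps from the training log below (8 steps per epoch × 10 epochs):

```python
# Linear decay from the configured base LR to 0, no warmup assumed.
BASE_LR = 2e-05     # learning_rate from the hyperparameters above
TOTAL_STEPS = 80    # 8 steps/epoch x 10 epochs, per the training log

def linear_lr(step, base_lr=BASE_LR, total_steps=TOTAL_STEPS):
    """Learning rate applied after `step` optimizer updates."""
    remaining = max(0.0, 1.0 - step / total_steps)
    return base_lr * remaining
```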

Training results

| Training Loss | Epoch | Step | Validation Loss | Qwk     | Mse    | Rmse   |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:------:|
| No log        | 0.25  | 2    | 7.7266          | -0.0007 | 7.7266 | 2.7797 |
| No log        | 0.5   | 4    | 5.5283          | -0.0291 | 5.5283 | 2.3512 |
| No log        | 0.75  | 6    | 3.8103          | 0.0630  | 3.8103 | 1.9520 |
| No log        | 1.0   | 8    | 2.7250          | 0.1970  | 2.7250 | 1.6508 |
| No log        | 1.25  | 10   | 1.6311          | 0.1934  | 1.6311 | 1.2771 |
| No log        | 1.5   | 12   | 1.0552          | 0.4709  | 1.0552 | 1.0273 |
| No log        | 1.75  | 14   | 1.0405          | 0.2691  | 1.0405 | 1.0200 |
| No log        | 2.0   | 16   | 0.9581          | 0.3095  | 0.9581 | 0.9788 |
| No log        | 2.25  | 18   | 0.9161          | 0.4781  | 0.9161 | 0.9571 |
| No log        | 2.5   | 20   | 0.8273          | 0.5481  | 0.8273 | 0.9096 |
| No log        | 2.75  | 22   | 0.7723          | 0.5680  | 0.7723 | 0.8788 |
| No log        | 3.0   | 24   | 0.6895          | 0.6368  | 0.6895 | 0.8304 |
| No log        | 3.25  | 26   | 0.6528          | 0.6986  | 0.6528 | 0.8080 |
| No log        | 3.5   | 28   | 0.6826          | 0.6568  | 0.6826 | 0.8262 |
| No log        | 3.75  | 30   | 0.6822          | 0.6590  | 0.6822 | 0.8259 |
| No log        | 4.0   | 32   | 0.6525          | 0.7043  | 0.6525 | 0.8078 |
| No log        | 4.25  | 34   | 0.6396          | 0.6985  | 0.6396 | 0.7997 |
| No log        | 4.5   | 36   | 0.6636          | 0.6893  | 0.6636 | 0.8146 |
| No log        | 4.75  | 38   | 0.7755          | 0.6164  | 0.7755 | 0.8806 |
| No log        | 5.0   | 40   | 1.0233          | 0.5923  | 1.0233 | 1.0116 |
| No log        | 5.25  | 42   | 1.0677          | 0.5809  | 1.0677 | 1.0333 |
| No log        | 5.5   | 44   | 0.7784          | 0.5830  | 0.7784 | 0.8823 |
| No log        | 5.75  | 46   | 0.6660          | 0.6988  | 0.6660 | 0.8161 |
| No log        | 6.0   | 48   | 0.6475          | 0.7509  | 0.6475 | 0.8047 |
| No log        | 6.25  | 50   | 0.6666          | 0.7464  | 0.6666 | 0.8164 |
| No log        | 6.5   | 52   | 0.7465          | 0.6474  | 0.7465 | 0.8640 |
| No log        | 6.75  | 54   | 0.8953          | 0.6399  | 0.8953 | 0.9462 |
| No log        | 7.0   | 56   | 0.9261          | 0.6279  | 0.9261 | 0.9623 |
| No log        | 7.25  | 58   | 0.8200          | 0.6717  | 0.8200 | 0.9055 |
| No log        | 7.5   | 60   | 0.6931          | 0.7372  | 0.6931 | 0.8325 |
| No log        | 7.75  | 62   | 0.6672          | 0.7324  | 0.6672 | 0.8168 |
| No log        | 8.0   | 64   | 0.6479          | 0.7324  | 0.6479 | 0.8049 |
| No log        | 8.25  | 66   | 0.6357          | 0.7308  | 0.6357 | 0.7973 |
| No log        | 8.5   | 68   | 0.6590          | 0.7342  | 0.6590 | 0.8118 |
| No log        | 8.75  | 70   | 0.7213          | 0.6724  | 0.7213 | 0.8493 |
| No log        | 9.0   | 72   | 0.7515          | 0.6313  | 0.7515 | 0.8669 |
| No log        | 9.25  | 74   | 0.7460          | 0.6313  | 0.7460 | 0.8637 |
| No log        | 9.5   | 76   | 0.7210          | 0.6480  | 0.7210 | 0.8491 |
| No log        | 9.75  | 78   | 0.7007          | 0.6601  | 0.7007 | 0.8371 |
| No log        | 10.0  | 80   | 0.6906          | 0.6724  | 0.6906 | 0.8310 |
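Worth noting: the final checkpoint (epoch 10.0) is not the strongest one in the log. A short sketch of picking the best checkpoint from a few representative rows of the table (epoch, validation loss, Qwk triples copied from above):

```python
# A few (epoch, val_loss, qwk) rows transcribed from the training log.
rows = [
    (4.25, 0.6396, 0.6985),
    (6.0,  0.6475, 0.7509),   # best Qwk in the full log
    (8.25, 0.6357, 0.7308),   # lowest validation loss in the full log
    (10.0, 0.6906, 0.6724),   # final (reported) checkpoint
]

best_by_qwk = max(rows, key=lambda r: r[2])   # maximize agreement
best_by_loss = min(rows, key=lambda r: r[1])  # minimize validation loss
```

Since the reported results come from the last epoch rather than the best one, enabling best-checkpoint selection during training would likely improve the headline numbers.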

Framework versions

  • Transformers 4.44.2
  • Pytorch 2.4.0+cu118
  • Datasets 2.21.0
  • Tokenizers 0.19.1
  • Model size: 0.1B params (F32, Safetensors)
