hebashakeel committed · verified
Commit 8bfd154 · 1 Parent(s): d19fea1

End of training
README.md CHANGED
@@ -18,21 +18,21 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased) on an unknown dataset.
 It achieves the following results on the evaluation set:
- - Loss: 1.0043
- - Accuracy: 0.653
- - Auc: 0.876
- - Precision Class 0: 0.389
- - Precision Class 1: 0.792
- - Precision Class 2: 0.467
- - Precision Class 3: 0.755
- - Precision Class 4: 0.746
- - Precision Class 5: 0.48
+ - Loss: 1.0431
+ - Accuracy: 0.62
+ - Auc: 0.882
+ - Precision Class 0: 0.35
+ - Precision Class 1: 0.76
+ - Precision Class 2: 0.379
+ - Precision Class 3: 0.714
+ - Precision Class 4: 0.803
+ - Precision Class 5: 0.379
 - Recall Class 0: 0.368
 - Recall Class 1: 0.826
- - Recall Class 2: 0.519
- - Recall Class 3: 0.787
- - Recall Class 4: 0.781
- - Recall Class 5: 0.364
+ - Recall Class 2: 0.407
+ - Recall Class 3: 0.745
+ - Recall Class 4: 0.766
+ - Recall Class 5: 0.333
 
 ## Model description
 
@@ -51,9 +51,9 @@ More information needed
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
- - learning_rate: 0.0001
- - train_batch_size: 16
- - eval_batch_size: 16
+ - learning_rate: 0.001
+ - train_batch_size: 8
+ - eval_batch_size: 8
 - seed: 42
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
@@ -63,16 +63,16 @@ The following hyperparameters were used during training:
 
 | Training Loss | Epoch | Step | Validation Loss | Accuracy | Auc | Precision Class 0 | Precision Class 1 | Precision Class 2 | Precision Class 3 | Precision Class 4 | Precision Class 5 | Recall Class 0 | Recall Class 1 | Recall Class 2 | Recall Class 3 | Recall Class 4 | Recall Class 5 |
 |:-------------:|:-----:|:----:|:---------------:|:--------:|:-----:|:-----------------:|:-----------------:|:-----------------:|:-----------------:|:-----------------:|:-----------------:|:--------------:|:--------------:|:--------------:|:--------------:|:--------------:|:--------------:|
- | 1.0015 | 1.0 | 62 | 1.0351 | 0.613 | 0.877 | 0.5 | 0.857 | 0.308 | 0.76 | 0.671 | 0.464 | 0.48 | 0.6 | 0.364 | 0.905 | 0.701 | 0.361 |
- | 0.9701 | 2.0 | 124 | 1.0177 | 0.623 | 0.879 | 0.5 | 0.867 | 0.348 | 0.755 | 0.686 | 0.452 | 0.48 | 0.65 | 0.364 | 0.881 | 0.716 | 0.389 |
- | 0.9532 | 3.0 | 186 | 1.0052 | 0.618 | 0.881 | 0.5 | 0.812 | 0.304 | 0.766 | 0.694 | 0.429 | 0.52 | 0.65 | 0.318 | 0.857 | 0.746 | 0.333 |
- | 0.9447 | 4.0 | 248 | 1.0016 | 0.618 | 0.882 | 0.545 | 0.812 | 0.308 | 0.75 | 0.71 | 0.441 | 0.48 | 0.65 | 0.364 | 0.929 | 0.657 | 0.417 |
- | 0.9253 | 5.0 | 310 | 0.9870 | 0.627 | 0.882 | 0.522 | 0.857 | 0.368 | 0.745 | 0.689 | 0.419 | 0.48 | 0.6 | 0.318 | 0.905 | 0.761 | 0.361 |
- | 0.9146 | 6.0 | 372 | 0.9955 | 0.608 | 0.881 | 0.522 | 0.867 | 0.292 | 0.745 | 0.703 | 0.4 | 0.48 | 0.65 | 0.318 | 0.905 | 0.672 | 0.389 |
- | 0.9142 | 7.0 | 434 | 0.9812 | 0.637 | 0.882 | 0.545 | 0.857 | 0.409 | 0.74 | 0.689 | 0.467 | 0.48 | 0.6 | 0.409 | 0.881 | 0.761 | 0.389 |
- | 0.9176 | 8.0 | 496 | 0.9838 | 0.627 | 0.882 | 0.545 | 0.857 | 0.36 | 0.74 | 0.69 | 0.467 | 0.48 | 0.6 | 0.409 | 0.881 | 0.731 | 0.389 |
- | 0.9133 | 9.0 | 558 | 0.9820 | 0.623 | 0.882 | 0.545 | 0.8 | 0.36 | 0.735 | 0.69 | 0.467 | 0.48 | 0.6 | 0.409 | 0.857 | 0.731 | 0.389 |
- | 0.8981 | 10.0 | 620 | 0.9816 | 0.623 | 0.882 | 0.545 | 0.8 | 0.36 | 0.735 | 0.69 | 0.467 | 0.48 | 0.6 | 0.409 | 0.857 | 0.731 | 0.389 |
+ | 1.5148 | 1.0 | 124 | 1.2488 | 0.505 | 0.847 | 0.318 | 1.0 | 0.233 | 0.964 | 0.653 | 0.235 | 0.56 | 0.2 | 0.318 | 0.643 | 0.701 | 0.222 |
+ | 1.171 | 2.0 | 248 | 1.0449 | 0.59 | 0.876 | 0.478 | 0.929 | 0.286 | 0.76 | 0.755 | 0.375 | 0.44 | 0.65 | 0.364 | 0.905 | 0.552 | 0.5 |
+ | 1.0919 | 3.0 | 372 | 1.2051 | 0.571 | 0.863 | 0.381 | 1.0 | 0.262 | 0.872 | 0.741 | 1.0 | 0.64 | 0.45 | 0.727 | 0.81 | 0.642 | 0.083 |
+ | 0.9998 | 4.0 | 496 | 1.0001 | 0.599 | 0.879 | 0.429 | 0.923 | 0.545 | 0.783 | 0.632 | 0.293 | 0.24 | 0.6 | 0.273 | 0.857 | 0.821 | 0.333 |
+ | 0.9018 | 5.0 | 620 | 1.0078 | 0.599 | 0.879 | 0.6 | 0.632 | 0.571 | 0.838 | 0.635 | 0.347 | 0.36 | 0.6 | 0.182 | 0.738 | 0.806 | 0.472 |
+ | 0.8651 | 6.0 | 744 | 0.9984 | 0.623 | 0.881 | 0.4 | 0.778 | 0.556 | 0.872 | 0.644 | 0.333 | 0.56 | 0.7 | 0.227 | 0.81 | 0.866 | 0.194 |
+ | 0.7965 | 7.0 | 868 | 0.9984 | 0.637 | 0.886 | 0.485 | 0.684 | 0.471 | 0.919 | 0.667 | 0.316 | 0.64 | 0.65 | 0.364 | 0.81 | 0.866 | 0.167 |
+ | 0.7863 | 8.0 | 992 | 0.9887 | 0.632 | 0.887 | 0.485 | 0.619 | 0.45 | 0.857 | 0.727 | 0.4 | 0.64 | 0.65 | 0.409 | 0.857 | 0.716 | 0.333 |
+ | 0.7388 | 9.0 | 1116 | 0.9815 | 0.613 | 0.887 | 0.522 | 0.636 | 0.4 | 0.837 | 0.712 | 0.342 | 0.48 | 0.7 | 0.364 | 0.857 | 0.701 | 0.361 |
+ | 0.7216 | 10.0 | 1240 | 0.9811 | 0.627 | 0.887 | 0.538 | 0.765 | 0.375 | 0.818 | 0.704 | 0.367 | 0.56 | 0.65 | 0.409 | 0.857 | 0.746 | 0.306 |
 
 
 ### Framework versions
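
The hyperparameter change above halves the batch size (16 → 8), which is why the logged step counts double (62 → 124 steps per epoch, 620 → 1240 total over 10 epochs). A minimal sketch of that relationship, assuming no gradient accumulation and a training split of roughly 992 examples (a figure inferred from the step counts, not stated in the card):

```python
import math

def steps_per_epoch(num_examples: int, batch_size: int) -> int:
    """Optimizer steps per epoch, assuming no gradient accumulation."""
    return math.ceil(num_examples / batch_size)

# ~992 training examples is inferred from the logs, not stated in the card.
print(steps_per_epoch(992, 16))  # old run, batch size 16 -> 62
print(steps_per_epoch(992, 8))   # new run, batch size 8  -> 124
```

With both runs training for 10 epochs, the same arithmetic reproduces the final step counts in each table (620 and 1240).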
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
- oid sha256:97931f21c0b06abf60b423e6fb65da1d34735f359f2f8d732148875ed93c3759
+ oid sha256:3fe7b2513bab94d78604edbc6cb23ee3cc95e1036fb6ba64f0f1ae59b237767f
 size 437970952
runs/Feb17_08-07-57_d68c514ae1af/events.out.tfevents.1739779678.d68c514ae1af.30.0 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:307bdb6ed678f68dd46e6be81c963c527b73b103cdd631002a0bbacb5399bb7b
+ size 18630
runs/Feb17_08-07-57_d68c514ae1af/events.out.tfevents.1739779744.d68c514ae1af.30.1 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:28c9a818423863b87fbf183290dd385802f10c14ae2021f0416b1f7739f080fe
+ size 1172
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
- oid sha256:5a168f38a84108d608794812cf1f03167db642fdddb71692c76ab56582015085
+ oid sha256:4e798020e3c71a47317e78fb28a8c3ec6b02a1238e8b6fe3fbe3a7edb381663a
 size 5240
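
The binary files in this commit (`model.safetensors`, the TensorBoard event files, `training_args.bin`) are stored as Git LFS pointer files of exactly the three-line form shown in the diffs: a spec version, the SHA-256 of the blob, and its byte size. A minimal sketch of building such a pointer, using a hypothetical helper `lfs_pointer` (the real `git lfs` client does this internally):

```python
import hashlib

def lfs_pointer(data: bytes) -> str:
    """Build a Git LFS v1 pointer: sha256 oid of the blob plus its byte size."""
    oid = hashlib.sha256(data).hexdigest()
    return (
        "version https://git-lfs.github.com/spec/v1\n"
        f"oid sha256:{oid}\n"
        f"size {len(data)}\n"
    )

print(lfs_pointer(b"hello"))
```

Because the pointer records only the hash and size, the diffs above show a changed `oid` but an identical `size` when retraining overwrote `model.safetensors` and `training_args.bin` with files of the same length.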