zakariyafirachine committed
Commit 5c7ba72 · verified · 1 Parent(s): f644b3e

End of training
README.md CHANGED
@@ -20,11 +20,11 @@ should probably proofread and complete it, then remove this comment. -->
 
  This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
  It achieves the following results on the evaluation set:
- - Loss: 0.5517
- - Precision: 0.6585
- - Recall: 0.6890
- - F1: 0.6734
- - Accuracy: 0.7724
+ - Loss: 0.5155
+ - Precision: 0.6772
+ - Recall: 0.7009
+ - F1: 0.6889
+ - Accuracy: 0.7741
 
  ## Model description
 
@@ -44,8 +44,8 @@ More information needed
 
  The following hyperparameters were used during training:
  - learning_rate: 2e-05
- - train_batch_size: 8
- - eval_batch_size: 8
+ - train_batch_size: 16
+ - eval_batch_size: 16
  - seed: 42
  - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  - lr_scheduler_type: linear
@@ -55,26 +55,26 @@ The following hyperparameters were used during training:
 
  | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
  |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
- | No log | 1.0 | 40 | 0.9065 | 0.5949 | 0.6117 | 0.6031 | 0.7256 |
- | No log | 2.0 | 80 | 0.8427 | 0.6468 | 0.6307 | 0.6386 | 0.7427 |
- | No log | 3.0 | 120 | 0.7721 | 0.6288 | 0.6343 | 0.6316 | 0.7481 |
- | No log | 4.0 | 160 | 0.7262 | 0.6379 | 0.6608 | 0.6491 | 0.7556 |
- | No log | 5.0 | 200 | 0.7035 | 0.6670 | 0.6880 | 0.6774 | 0.7701 |
- | No log | 6.0 | 240 | 0.6704 | 0.6682 | 0.6976 | 0.6826 | 0.7804 |
- | No log | 7.0 | 280 | 0.6468 | 0.6464 | 0.6798 | 0.6627 | 0.7666 |
- | No log | 8.0 | 320 | 0.6299 | 0.6551 | 0.6929 | 0.6735 | 0.7742 |
- | No log | 9.0 | 360 | 0.6091 | 0.6660 | 0.6961 | 0.6807 | 0.7771 |
- | No log | 10.0 | 400 | 0.5936 | 0.6831 | 0.7116 | 0.6970 | 0.7925 |
- | No log | 11.0 | 440 | 0.5871 | 0.6585 | 0.6854 | 0.6716 | 0.7728 |
- | No log | 12.0 | 480 | 0.5785 | 0.6462 | 0.6895 | 0.6672 | 0.7704 |
- | 0.5607 | 13.0 | 520 | 0.5702 | 0.6872 | 0.7207 | 0.7035 | 0.7974 |
- | 0.5607 | 14.0 | 560 | 0.5631 | 0.6706 | 0.6990 | 0.6845 | 0.7833 |
- | 0.5607 | 15.0 | 600 | 0.5632 | 0.6638 | 0.6975 | 0.6802 | 0.7754 |
- | 0.5607 | 16.0 | 640 | 0.5635 | 0.6577 | 0.6937 | 0.6752 | 0.7751 |
- | 0.5607 | 17.0 | 680 | 0.5562 | 0.6546 | 0.6903 | 0.6720 | 0.7736 |
- | 0.5607 | 18.0 | 720 | 0.5533 | 0.6623 | 0.6899 | 0.6758 | 0.7738 |
- | 0.5607 | 19.0 | 760 | 0.5524 | 0.6574 | 0.6901 | 0.6733 | 0.7727 |
- | 0.5607 | 20.0 | 800 | 0.5517 | 0.6585 | 0.6890 | 0.6734 | 0.7724 |
+ | No log | 1.0 | 20 | 0.5575 | 0.6845 | 0.7099 | 0.6970 | 0.7899 |
+ | No log | 2.0 | 40 | 0.5403 | 0.6788 | 0.7022 | 0.6903 | 0.7883 |
+ | No log | 3.0 | 60 | 0.5366 | 0.6739 | 0.6937 | 0.6836 | 0.7800 |
+ | No log | 4.0 | 80 | 0.5271 | 0.6818 | 0.7228 | 0.7017 | 0.7930 |
+ | No log | 5.0 | 100 | 0.5309 | 0.6891 | 0.7148 | 0.7017 | 0.7883 |
+ | No log | 6.0 | 120 | 0.5199 | 0.6874 | 0.7119 | 0.6995 | 0.7937 |
+ | No log | 7.0 | 140 | 0.5218 | 0.6798 | 0.7078 | 0.6935 | 0.7835 |
+ | No log | 8.0 | 160 | 0.5189 | 0.6837 | 0.7092 | 0.6962 | 0.7843 |
+ | No log | 9.0 | 180 | 0.5172 | 0.6788 | 0.7005 | 0.6895 | 0.7745 |
+ | No log | 10.0 | 200 | 0.5177 | 0.6731 | 0.6980 | 0.6853 | 0.7736 |
+ | No log | 11.0 | 220 | 0.5210 | 0.6811 | 0.7017 | 0.6912 | 0.7768 |
+ | No log | 12.0 | 240 | 0.5170 | 0.6742 | 0.7041 | 0.6888 | 0.7761 |
+ | No log | 13.0 | 260 | 0.5150 | 0.6721 | 0.7030 | 0.6872 | 0.7772 |
+ | No log | 14.0 | 280 | 0.5117 | 0.6744 | 0.7039 | 0.6888 | 0.7764 |
+ | No log | 15.0 | 300 | 0.5165 | 0.6799 | 0.7035 | 0.6915 | 0.7764 |
+ | No log | 16.0 | 320 | 0.5151 | 0.6771 | 0.7017 | 0.6891 | 0.7752 |
+ | No log | 17.0 | 340 | 0.5171 | 0.6765 | 0.6990 | 0.6876 | 0.7730 |
+ | No log | 18.0 | 360 | 0.5155 | 0.6750 | 0.6993 | 0.6870 | 0.7735 |
+ | No log | 19.0 | 380 | 0.5155 | 0.6768 | 0.6997 | 0.6881 | 0.7737 |
+ | No log | 20.0 | 400 | 0.5155 | 0.6772 | 0.7009 | 0.6889 | 0.7741 |
 
 
  ### Framework versions
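
One detail worth a sanity check in the updated hyperparameters: doubling the batch size from 8 to 16 halves the logged steps per epoch from 40 to 20 (and the final step count from 800 to 400 over 20 epochs), which is consistent with a training set of roughly 320 examples. That size is an inference, not something stated in the model card. A minimal sketch of the arithmetic, assuming one optimizer step per batch, no gradient accumulation, and that a final partial batch still counts as a step:

```python
import math

def steps_per_epoch(num_examples: int, batch_size: int) -> int:
    # One optimizer step per batch; a trailing partial batch still steps.
    return math.ceil(num_examples / batch_size)

# Hypothetical training-set size inferred from the two runs' logs.
num_examples = 320

assert steps_per_epoch(num_examples, 8) == 40    # old run: 40 steps/epoch
assert steps_per_epoch(num_examples, 16) == 20   # new run: 20 steps/epoch
assert 20 * steps_per_epoch(num_examples, 16) == 400  # matches the final logged step
```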
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:574569ea90d46bf1ffc3760d76c2b31303f5bdf74d88ab73ae09e4f6b510be4d
+ oid sha256:fc6513e3016682842f8bfdb96d90447a0d074bafd129cb053f07b0a3453387c0
  size 435839092
runs/Apr06_00-40-16_720703b45c43/events.out.tfevents.1712364041.720703b45c43.293.4 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ec6f49df7d3a2614aff14054d1b7c63fed829946453978a7ac1782ff968b2c03
+ size 17713
training_args.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:7d82be7631405616124f3918d39af14fdcae25ede01e64589ee463a40e9c2791
+ oid sha256:31d753d81f4b8d07f08dab21575c0d928b0d07a21e75fdb2113cc894830662a7
  size 4856