singhshiva committed
Commit c0bc99f · verified · 1 parent: 454aa0e

singhshiva/excelB

Files changed (1): README.md (+8 −10)
README.md CHANGED
@@ -16,10 +16,10 @@ should probably proofread and complete it, then remove this comment. -->
 
 # my-bert-classifier3
 
-This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
+This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.1884
-- Accuracy: 0.9555
+- Loss: 0.0814
+- Accuracy: 0.9826
 
 ## Model description
 
@@ -38,27 +38,25 @@ More information needed
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
-- learning_rate: 2e-05
+- learning_rate: 1e-05
 - train_batch_size: 16
 - eval_batch_size: 16
 - seed: 42
 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
 - lr_scheduler_type: linear
-- num_epochs: 5
+- num_epochs: 2
 
 ### Training results
 
 | Training Loss | Epoch | Step | Validation Loss | Accuracy |
 |:-------------:|:-----:|:----:|:---------------:|:--------:|
-| 0.3273        | 1.0   | 73   | 0.2244          | 0.9247   |
-| 0.1971        | 2.0   | 146  | 0.1142          | 0.9658   |
-| 0.1245        | 3.0   | 219  | 0.1556          | 0.9623   |
-| 0.0031        | 4.0   | 292  | 0.1447          | 0.9692   |
-| 0.0357        | 5.0   | 365  | 0.1884          | 0.9555   |
+| 0.0409        | 1.0   | 689  | 0.1486          | 0.9666   |
+| 0.0055        | 2.0   | 1378 | 0.0814          | 0.9826   |
 
 
 ### Framework versions
 
 - Transformers 4.50.0
 - Pytorch 2.6.0+cu124
+- Datasets 3.4.1
 - Tokenizers 0.21.1
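For reference, the updated hyperparameters map onto a `transformers.TrainingArguments`-style configuration roughly as sketched below, together with a sanity check on the new results table: 689 optimizer steps per epoch at batch size 16 implies a training set of about 11,000 examples. Both the field names and the no-gradient-accumulation assumption are ours; the card does not include the actual training script or dataset size.

```python
# Sketch of the updated training configuration as a plain dict.
# Field names mirror transformers.TrainingArguments, but this is an
# assumption -- the commit only changes the model card, not the script.
training_config = {
    "learning_rate": 1e-5,              # was 2e-5 before this commit
    "per_device_train_batch_size": 16,
    "per_device_eval_batch_size": 16,
    "seed": 42,
    "optim": "adamw_torch",
    "adam_beta1": 0.9,
    "adam_beta2": 0.999,
    "adam_epsilon": 1e-8,
    "lr_scheduler_type": "linear",
    "num_train_epochs": 2,              # was 5 before this commit
}

# Sanity check on the new results table: with no gradient accumulation
# (an assumption), each step consumes one batch, so the step count at
# epoch 1.0 times the batch size approximates the training-set size.
steps_per_epoch = 689  # step count at epoch 1.0 in the updated table
approx_train_examples = steps_per_epoch * training_config["per_device_train_batch_size"]
print(approx_train_examples)  # → 11024
```

This also explains why the step counts changed between the two versions of the table (73 vs. 689 steps per epoch): the model was retrained on a much larger dataset, not merely with a different schedule.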