trungpq committed commit 93a5cf4 · verified · 1 Parent(s): 5d62cc4

End of training
Files changed (4)
  1. README.md +29 -29
  2. config.json +2 -2
  3. model.safetensors +1 -1
  4. training_args.bin +2 -2
README.md CHANGED
@@ -16,12 +16,12 @@ should probably proofread and complete it, then remove this comment. -->
 
  This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
  It achieves the following results on the evaluation set:
- - Loss: 0.7148
- - Accuracy: 0.9069
- - F1 Macro: 0.8782
- - Precision Macro: 0.8729
- - Recall Macro: 0.8840
- - Total Tf: [1403, 144, 1403, 144]
+ - Loss: 0.5460
+ - Accuracy: 0.74
+ - F1 Macro: 0.5938
+ - Precision Macro: 0.7222
+ - Recall Macro: 0.5952
+ - Total Tf: [74, 26, 74, 26]
 
  ## Model description
 
@@ -44,35 +44,35 @@ The following hyperparameters were used during training:
  - train_batch_size: 64
  - eval_batch_size: 64
  - seed: 42
- - optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
+ - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
  - lr_scheduler_type: linear
- - lr_scheduler_warmup_steps: 277
+ - lr_scheduler_warmup_steps: 2
  - num_epochs: 15
 
  ### Training results
 
- | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 Macro | Precision Macro | Recall Macro | Total Tf |
- |:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|:---------------:|:------------:|:----------------------:|
- | 0.2561 | 1.0 | 278 | 0.2745 | 0.9082 | 0.8822 | 0.8709 | 0.8960 | [1405, 142, 1405, 142] |
- | 0.1645 | 2.0 | 556 | 0.2768 | 0.9095 | 0.8827 | 0.8742 | 0.8926 | [1407, 140, 1407, 140] |
- | 0.1117 | 3.0 | 834 | 0.3586 | 0.9005 | 0.8712 | 0.8625 | 0.8814 | [1393, 154, 1393, 154] |
- | 0.0718 | 4.0 | 1112 | 0.4156 | 0.8927 | 0.8640 | 0.8502 | 0.8822 | [1381, 166, 1381, 166] |
- | 0.0479 | 5.0 | 1390 | 0.4730 | 0.8998 | 0.8688 | 0.8639 | 0.8742 | [1392, 155, 1392, 155] |
- | 0.0361 | 6.0 | 1668 | 0.5653 | 0.8804 | 0.8536 | 0.8343 | 0.8868 | [1362, 185, 1362, 185] |
- | 0.0239 | 7.0 | 1946 | 0.5393 | 0.8992 | 0.8716 | 0.8586 | 0.8883 | [1391, 156, 1391, 156] |
- | 0.0227 | 8.0 | 2224 | 0.5848 | 0.9024 | 0.8728 | 0.8662 | 0.8802 | [1396, 151, 1396, 151] |
- | 0.0152 | 9.0 | 2502 | 0.5969 | 0.9069 | 0.8772 | 0.8747 | 0.8798 | [1403, 144, 1403, 144] |
- | 0.0046 | 10.0 | 2780 | 0.6695 | 0.9089 | 0.8776 | 0.8820 | 0.8734 | [1406, 141, 1406, 141] |
- | 0.0031 | 11.0 | 3058 | 0.6885 | 0.9050 | 0.8741 | 0.8732 | 0.8751 | [1400, 147, 1400, 147] |
- | 0.0067 | 12.0 | 3336 | 0.7181 | 0.9024 | 0.8738 | 0.8648 | 0.8844 | [1396, 151, 1396, 151] |
- | 0.0019 | 13.0 | 3614 | 0.7104 | 0.9089 | 0.8797 | 0.8775 | 0.8819 | [1406, 141, 1406, 141] |
- | 0.0029 | 14.0 | 3892 | 0.7245 | 0.9056 | 0.8769 | 0.8705 | 0.8840 | [1401, 146, 1401, 146] |
- | 0.0015 | 15.0 | 4170 | 0.7148 | 0.9069 | 0.8782 | 0.8729 | 0.8840 | [1403, 144, 1403, 144] |
+ | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 Macro | Precision Macro | Recall Macro | Total Tf |
+ |:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|:---------------:|:------------:|:----------------:|
+ | 0.6777 | 1.0 | 3 | 0.6718 | 0.65 | 0.5394 | 0.5512 | 0.5405 | [65, 35, 65, 35] |
+ | 0.6518 | 2.0 | 6 | 0.7083 | 0.43 | 0.4272 | 0.5536 | 0.5452 | [43, 57, 43, 57] |
+ | 0.627 | 3.0 | 9 | 0.7033 | 0.49 | 0.4899 | 0.5898 | 0.5881 | [49, 51, 49, 51] |
+ | 0.5443 | 4.0 | 12 | 0.6611 | 0.63 | 0.6009 | 0.6040 | 0.6214 | [63, 37, 63, 37] |
+ | 0.5005 | 5.0 | 15 | 0.6256 | 0.7 | 0.6 | 0.625 | 0.5952 | [70, 30, 70, 30] |
+ | 0.4764 | 6.0 | 18 | 0.6032 | 0.73 | 0.5858 | 0.6890 | 0.5881 | [73, 27, 73, 27] |
+ | 0.4248 | 7.0 | 21 | 0.5898 | 0.74 | 0.6082 | 0.7083 | 0.6048 | [74, 26, 74, 26] |
+ | 0.4033 | 8.0 | 24 | 0.5791 | 0.73 | 0.6129 | 0.6765 | 0.6071 | [73, 27, 73, 27] |
+ | 0.3491 | 9.0 | 27 | 0.5685 | 0.74 | 0.6330 | 0.6935 | 0.6238 | [74, 26, 74, 26] |
+ | 0.3325 | 10.0 | 30 | 0.5597 | 0.73 | 0.5858 | 0.6890 | 0.5881 | [73, 27, 73, 27] |
+ | 0.3099 | 11.0 | 33 | 0.5533 | 0.74 | 0.5938 | 0.7222 | 0.5952 | [74, 26, 74, 26] |
+ | 0.3079 | 12.0 | 36 | 0.5498 | 0.74 | 0.5938 | 0.7222 | 0.5952 | [74, 26, 74, 26] |
+ | 0.2952 | 13.0 | 39 | 0.5477 | 0.74 | 0.5938 | 0.7222 | 0.5952 | [74, 26, 74, 26] |
+ | 0.3065 | 14.0 | 42 | 0.5465 | 0.74 | 0.5938 | 0.7222 | 0.5952 | [74, 26, 74, 26] |
+ | 0.301 | 15.0 | 45 | 0.5460 | 0.74 | 0.5938 | 0.7222 | 0.5952 | [74, 26, 74, 26] |
 
 
  ### Framework versions
 
- - Transformers 4.56.1
- - Pytorch 2.8.0+cu128
- - Datasets 4.0.0
- - Tokenizers 0.22.0
+ - Transformers 4.52.4
+ - Pytorch 2.6.0+cu124
+ - Datasets 3.6.0
+ - Tokenizers 0.21.2
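The card's Accuracy of 0.74 is consistent with a 100-example evaluation set in which "Total Tf" counts correct and incorrect predictions ([74, 26, 74, 26] — my reading of the field, not documented in the card). The macro-averaged precision, recall, and F1 follow the standard definitions (average the per-class scores with equal weight). A minimal pure-Python sketch of those definitions, not the repository's actual evaluation code:

```python
def macro_metrics(y_true, y_pred, labels=(0, 1)):
    """Return (accuracy, precision_macro, recall_macro, f1_macro).

    Macro averaging: compute each metric per class, then take the
    unweighted mean across classes (sklearn's `average="macro"`).
    """
    accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    precisions, recalls, f1s = [], [], []
    for c in labels:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        precisions.append(prec)
        recalls.append(rec)
        f1s.append(f1)
    n = len(labels)
    return accuracy, sum(precisions) / n, sum(recalls) / n, sum(f1s) / n
```

The gap between Accuracy (0.74) and F1 Macro (0.5938) in the final row suggests class imbalance: macro averaging weights the minority class equally, so weak minority-class recall pulls the macro scores below accuracy.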
config.json CHANGED
@@ -2,9 +2,9 @@
  "architectures": [
  "BERTModel"
  ],
- "dtype": "float32",
  "model_type": "bert_model",
  "num_classes": 1,
  "pos_weight": null,
- "transformers_version": "4.56.1"
+ "torch_dtype": "float32",
+ "transformers_version": "4.52.4"
  }
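Alongside the Transformers downgrade from 4.56.1 to 4.52.4, the config change renames the weight-dtype key from `dtype` back to `torch_dtype`. My reading of the diff is that newer Transformers releases serialize this field as `dtype`, while 4.52.4-era code reads `torch_dtype`. A sketch (an illustrative helper, not part of the Transformers API) of normalizing a config dict to the older convention:

```python
import json

def normalize_dtype_key(config: dict) -> dict:
    """Map a newer-style "dtype" entry to the older "torch_dtype" key,
    leaving configs that already use "torch_dtype" untouched."""
    out = dict(config)
    if "dtype" in out and "torch_dtype" not in out:
        out["torch_dtype"] = out.pop("dtype")
    return out

# Example: a config written by a newer Transformers release.
new_style = json.loads('{"model_type": "bert_model", "num_classes": 1, "dtype": "float32"}')
old_style = normalize_dtype_key(new_style)
```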
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:36762e3cc5eec875f42468694e000ca70929c98ef5add1a0adcafe54a560a93f
+ oid sha256:c8cf123d4fdf83669961c5fb0263ff11fcbf5f95c13019d06d28e4b61cdc8ef3
  size 437955556
training_args.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:63a4a4b90f94cde8021822b0e3cbcc278949c0a568906f7783550884da6fadb9
- size 5841
+ oid sha256:8e5f434ef6d088145092fda8c9ea9b57da6d97573626c3dbfed13a5cd75be5d6
+ size 5368