DARedmond committed · verified
Commit 5e0fa8d · 1 Parent(s): 6adc8fb

Update README.md

Files changed (1)
  1. README.md +46 -62

README.md CHANGED
@@ -1,62 +1,46 @@
- ---
- library_name: transformers
- base_model: huawei-noah/TinyBERT_General_4L_312D
- tags:
- - generated_from_trainer
- metrics:
- - accuracy
- model-index:
- - name: fine_tuned_spam_model
-   results: []
- ---
-
- <!-- This model card has been generated automatically according to the information the Trainer had access to. You
- should probably proofread and complete it, then remove this comment. -->
-
- # fine_tuned_spam_model
-
- This model is a fine-tuned version of [huawei-noah/TinyBERT_General_4L_312D](https://huggingface.co/huawei-noah/TinyBERT_General_4L_312D) on an unknown dataset.
- It achieves the following results on the evaluation set:
- - Loss: 0.6292
- - Accuracy: 0.7664
-
- ## Model description
-
- More information needed
-
- ## Intended uses & limitations
-
- More information needed
-
- ## Training and evaluation data
-
- More information needed
-
- ## Training procedure
-
- ### Training hyperparameters
-
- The following hyperparameters were used during training:
- - learning_rate: 5e-05
- - train_batch_size: 16
- - eval_batch_size: 16
- - seed: 42
- - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- - lr_scheduler_type: linear
- - num_epochs: 3
-
- ### Training results
-
- | Training Loss | Epoch | Step | Validation Loss | Accuracy |
- |:-------------:|:-----:|:----:|:---------------:|:--------:|
- | 0.6875        | 1.0   | 364  | 0.7508          | 0.7226   |
- | 0.5574        | 2.0   | 728  | 0.6804          | 0.7292   |
- | 0.5481        | 3.0   | 1092 | 0.6292          | 0.7664   |
-
-
- ### Framework versions
-
- - Transformers 4.49.0
- - Pytorch 2.6.0+cpu
- - Datasets 3.3.2
- - Tokenizers 0.21.1
 
+ ---
+ library_name: transformers
+ base_model: huawei-noah/TinyBERT_General_4L_312D
+ tags:
+ - generated_from_trainer
+ metrics:
+ - accuracy
+ model-index:
+ - name: fine_tuned_spam_model
+   results: []
+ ---
+
+ # fine_tuned_spam_model
+
+ This model is a fine-tuned version of [huawei-noah/TinyBERT_General_4L_312D](https://huggingface.co/huawei-noah/TinyBERT_General_4L_312D) on a dataset of batch-labeled emails
+ and SMS messages identified as spam (Enron, SpamAssassin, sms-spam, etc.).
+ It achieves the following results on the evaluation set:
+ - Loss: 0.6292
+ - Accuracy: 0.7664
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 5e-05
+ - train_batch_size: 16
+ - eval_batch_size: 16
+ - seed: 42
+ - optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
+ - lr_scheduler_type: linear
+ - num_epochs: 3
+
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss | Accuracy |
+ |:-------------:|:-----:|:----:|:---------------:|:--------:|
+ | 0.6875        | 1.0   | 364  | 0.7508          | 0.7226   |
+ | 0.5574        | 2.0   | 728  | 0.6804          | 0.7292   |
+ | 0.5481        | 3.0   | 1092 | 0.6292          | 0.7664   |
+
+
+ ### Framework versions
+
+ - Transformers 4.49.0
+ - Pytorch 2.6.0+cpu
+ - Datasets 3.3.2
+ - Tokenizers 0.21.1
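
The updated card reports `lr_scheduler_type: linear` with `learning_rate: 5e-05` over 364 steps/epoch for 3 epochs. As a reading aid, here is a minimal pure-Python sketch of what that schedule implies (assuming the Trainer's default of no warmup steps; `linear_lr` is a hypothetical helper, not part of the repo): the learning rate decays linearly from 5e-05 at step 0 to 0 at the final step, 1092.

```python
# Sketch of a warmup-free linear LR decay matching the card's
# hyperparameters: base LR 5e-05, 364 steps/epoch x 3 epochs = 1092 steps.
BASE_LR = 5e-05
TOTAL_STEPS = 364 * 3  # 1092, the final step in the results table

def linear_lr(step: int, base_lr: float = BASE_LR, total_steps: int = TOTAL_STEPS) -> float:
    """Learning rate at a given optimizer step under linear decay to zero."""
    remaining = max(0.0, 1.0 - step / total_steps)
    return base_lr * remaining

print(linear_lr(0))     # start of training: 5e-05
print(linear_lr(546))   # halfway: 2.5e-05
print(linear_lr(1092))  # end of training: 0.0
```

So by the end of epoch 1 (step 364) the learning rate has already dropped to roughly two-thirds of its initial value, which is worth keeping in mind when comparing the per-epoch rows of the results table.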
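
The results table also lets a reader bound the training-set size, since the card does not state it. A back-of-the-envelope check (assuming the Trainer's default of keeping the final partial batch) relates the 364 steps/epoch to the number of training examples:

```python
import math

# ceil(n_examples / 16) == 364 bounds the training set at 5809-5824
# examples; 3 epochs then gives the table's final step of 1092.
BATCH = 16
STEPS_PER_EPOCH = 364
EPOCHS = 3

low = BATCH * (STEPS_PER_EPOCH - 1) + 1   # smallest n yielding 364 batches
high = BATCH * STEPS_PER_EPOCH            # largest n yielding 364 batches
assert math.ceil(low / BATCH) == STEPS_PER_EPOCH
assert math.ceil(high / BATCH) == STEPS_PER_EPOCH

print(low, high)                 # 5809 5824
print(STEPS_PER_EPOCH * EPOCHS)  # 1092
```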