Update README.md

README.md (CHANGED)
@@ -15,6 +15,7 @@ This is the model card of a 🤗 transformers model that has been pushed on the

This model is a modified version of distilbert/distilbert-base-uncased, trained via knowledge distillation from shawhin/bert-phishing-classifier_teacher on the shawhin/phishing-site-classification dataset. It achieves the following results on the test set:

- Loss (training): 0.0673365443944931
- Accuracy: 0.9089
- Precision: 0.8950
- Recall: 0.9301
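The reported accuracy, precision, and recall follow the usual binary-classification definitions. A minimal, dependency-free sketch of how such numbers are computed (the labels below are hypothetical toy values, not the card's actual test split):

```python
# Hypothetical toy predictions (1 = phishing, 0 = benign); in practice these
# would come from running the distilled student model on the test split.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 0, 1]

tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)  # true positives
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)  # false positives
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)  # false negatives

accuracy = sum(1 for t, p in zip(y_true, y_pred) if t == p) / len(y_true)
precision = tp / (tp + fp)  # of the sites flagged as phishing, how many truly are
recall = tp / (tp + fn)     # of the true phishing sites, how many were caught
```

For phishing detection, recall is often the metric to watch: a missed phishing site (false negative) is usually costlier than a false alarm.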
@@ -110,6 +111,8 @@ num_epochs: 5

temperature: 2.0
adam optimizer alpha: 0.5
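The temperature and alpha values above suggest the standard soft-target distillation objective. The sketch below is an assumption about how they combine — taking alpha as the weight mixing the softened teacher loss against the hard-label cross-entropy (despite the "adam optimizer" wording above) — not the card's actual training code:

```python
import math

def softmax(logits, temperature=1.0):
    """Softmax over a list of logits, optionally softened by a temperature."""
    exps = [math.exp(z / temperature) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, label,
                      temperature=2.0, alpha=0.5):
    """Sketch of a standard knowledge-distillation loss (assumed, not verified).

    Soft term: KL divergence between the temperature-softened teacher and
    student distributions, scaled by T^2 to keep gradient magnitudes stable.
    Hard term: ordinary cross-entropy against the ground-truth label.
    """
    p_teacher = softmax(teacher_logits, temperature)
    p_student = softmax(student_logits, temperature)
    soft = sum(pt * math.log(pt / ps)
               for pt, ps in zip(p_teacher, p_student)) * temperature ** 2
    hard = -math.log(softmax(student_logits)[label])
    return alpha * soft + (1 - alpha) * hard
```

With alpha = 0.5 the student learns equally from the teacher's softened output distribution and from the ground-truth labels; a higher temperature spreads the teacher's probability mass, exposing more of its inter-class "dark knowledge".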

- **Training regime:** [More Information Needed] <!-- fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]