How to use mrm8488/bert-tiny-finetuned-enron-spam-detection with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="mrm8488/bert-tiny-finetuned-enron-spam-detection")
```

```python
# Or load the tokenizer and model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("mrm8488/bert-tiny-finetuned-enron-spam-detection")
model = AutoModelForSequenceClassification.from_pretrained("mrm8488/bert-tiny-finetuned-enron-spam-detection")
```

This model is a fine-tuned version of google/bert_uncased_L-2_H-128_A-2 (aka BERT-Tiny) on the SetFit/enron_spam dataset for the spam detection downstream task.
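A small fine-tuned checkpoint like this may not ship custom label names in its config, in which case the pipeline returns generic `LABEL_0`/`LABEL_1` strings. The mapping below is an assumption based on the SetFit/enron_spam convention (0 = ham, 1 = spam); verify it against the checkpoint's `config.id2label` before relying on it.

```python
# Hypothetical post-processing helper, assuming LABEL_0 = ham, LABEL_1 = spam
# (the SetFit/enron_spam convention). Check the model's config.id2label to confirm.
ID2LABEL = {"LABEL_0": "ham", "LABEL_1": "spam"}

def readable(prediction: dict) -> dict:
    """Translate a pipeline prediction into a human-readable label."""
    return {
        "label": ID2LABEL.get(prediction["label"], prediction["label"]),
        "score": prediction["score"],
    }

# Example: a raw prediction shaped like pipe("some email text")[0]
raw = {"label": "LABEL_1", "score": 0.98}
print(readable(raw))  # {'label': 'spam', 'score': 0.98}
```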
It achieves the following results on the evaluation set:
Training results per epoch:
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | Accuracy | F1 |
|---|---|---|---|---|---|---|---|
| 0.1125 | 1.0 | 1983 | 0.0797 | 0.9839 | 0.9692 | 0.9765 | 0.9765 |
| 0.061 | 2.0 | 3966 | 0.0618 | 0.9822 | 0.9861 | 0.984 | 0.9842 |
| 0.0486 | 3.0 | 5949 | 0.0593 | 0.9851 | 0.9871 | 0.986 | 0.9861 |
| 0.048 | 4.0 | 7932 | 0.0588 | 0.9870 | 0.9821 | 0.9845 | 0.9846 |
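As a sanity check, the F1 column in the table above is consistent with the precision and recall columns under the standard formula F1 = 2PR / (P + R):

```python
# Recompute F1 from the table's precision and recall columns and compare
# against the reported F1 values (rows are epochs 1-4).
rows = [
    (0.9839, 0.9692, 0.9765),
    (0.9822, 0.9861, 0.9842),
    (0.9851, 0.9871, 0.9861),
    (0.9870, 0.9821, 0.9846),
]
for precision, recall, reported_f1 in rows:
    f1 = 2 * precision * recall / (precision + recall)
    # The reported values match to within rounding of the 4th decimal place.
    assert abs(f1 - reported_f1) < 5e-4, (f1, reported_f1)
    print(f"computed F1 = {f1:.4f}, reported = {reported_f1}")
```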