---
license: apache-2.0
datasets:
- ucirvine/sms_spam
language:
- en
- hi
- te
metrics:
- accuracy
- f1
base_model:
- distilbert/distilbert-base-uncased
tags:
- text_classification
- spam_detection
- distilbert
---
# Spam Detection using DistilBERT
This model fine-tunes `distilbert/distilbert-base-uncased` for binary
spam classification (spam vs. ham) on the `ucirvine/sms_spam` dataset.
## Labels
- 0 → Ham
- 1 → Spam
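In downstream code it helps to keep this mapping in one place so nothing else hard-codes label indices. A minimal sketch (`ID2LABEL` and `label_for` are illustrative helpers, not attributes of the published model config):

```python
# Label mapping as documented on this card; these names are illustrative
# helpers, not part of the published model's config.
ID2LABEL = {0: "Ham", 1: "Spam"}
LABEL2ID = {label: idx for idx, label in ID2LABEL.items()}

def label_for(class_index: int) -> str:
    """Translate an argmax index from the model's logits into a readable label."""
    return ID2LABEL[class_index]
```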
## Usage
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

tokenizer = AutoTokenizer.from_pretrained("<your-username>/spam-detection-distilbert")
model = AutoModelForSequenceClassification.from_pretrained("<your-username>/spam-detection-distilbert")

# Tokenize a single message; pad/truncate to a fixed length of 128 tokens
inputs = tokenizer(
    "You won a free iPhone!",
    return_tensors="pt",
    truncation=True,
    padding="max_length",
    max_length=128,
)

# Inference only, so disable gradient tracking
with torch.no_grad():
    outputs = model(**inputs)

prediction = torch.argmax(outputs.logits, dim=1).item()
print("SPAM" if prediction == 1 else "HAM")
```
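If you want a confidence score rather than just the predicted class, the logits can be passed through a softmax. A small sketch: the `logits` tensor below is illustrative, standing in for `outputs.logits` from the snippet above (shape `[batch, 2]`, columns ordered ham then spam):

```python
import torch

# Illustrative logits standing in for outputs.logits from the usage snippet
logits = torch.tensor([[-1.8, 2.4]])

# Softmax turns raw scores into probabilities that sum to 1 per row
probs = torch.softmax(logits, dim=1)

prediction = torch.argmax(probs, dim=1).item()
confidence = probs[0, prediction].item()

print(f"{'SPAM' if prediction == 1 else 'HAM'} (confidence {confidence:.2%})")
```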
## 🔗 GitHub Repository
Code for training and inference is available here:
https://github.com/revanthreddy0906/spam-detection-distilbert.git