Commit bcbfbfb by Digvijay05 (parent: e301e28)

Upload folder using huggingface_hub

Files changed (1): README.md added (+104 lines)
---
language:
- en
pipeline_tag: text-classification
tags:
- sms-spam
- phishing-detection
- scam-detection
- security
metrics:
- f1
- accuracy
- precision
- recall
widget:
- text: "Your account is blocked! Verify immediately with OTP. Send money to scam@ybl using https://scam.xyz/"
  example_title: "Bank KYC Scam"
- text: "Congratulations! You won Rs 50,000 lottery prize. Contact urgently to claim via link: http://bit.ly/claim"
  example_title: "Lottery Scam"
- text: "Hey, are we still meeting for lunch tomorrow at 12?"
  example_title: "Safe Message"
---

# SCAMBERT: DistilBERT for SMS Fraud & Scam Detection

SCAMBERT is a fine-tuned `distilbert-base-uncased` model designed to detect social engineering, financial fraud, phishing, and scam payloads in SMS and short-form conversational text. It serves as Layer 3 of the AI Honeypot (CIPHER) Threat Intelligence Pipeline.

## Model Summary

- **Model Type:** Text Classification (Binary)
- **Base Model:** `distilbert-base-uncased`
- **Language:** English (en)
- **Task:** Spam/Scam Detection
- **License:** MIT
- **Size:** ~255 MB

### Labels

- `0`: Safe / Legitimate
- `1`: Scam / Fraud / Phishing
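The hosted pipeline emits the default `transformers` names `LABEL_0`/`LABEL_1` for these two classes; a small helper (hypothetical, for illustration only) can translate them into the documented class names:

```python
# Hypothetical helper: map the default LABEL_0 / LABEL_1 names emitted by
# the transformers pipeline to the class names documented above.
ID2LABEL = {0: "Safe / Legitimate", 1: "Scam / Fraud / Phishing"}

def readable(label: str) -> str:
    """'LABEL_1' -> 'Scam / Fraud / Phishing'"""
    return ID2LABEL[int(label.rsplit("_", 1)[1])]

print(readable("LABEL_0"))  # Safe / Legitimate
print(readable("LABEL_1"))  # Scam / Fraud / Phishing
```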

## Performance & Metrics

The model was fine-tuned on a dataset of **8,438** samples (27.5% Scam / 72.5% Safe). To counter this class imbalance, class weights were applied during training.

### Calibration & Validation Results

- **Best Accuracy:** 99.41%
- **Best F1-Score:** 98.92%
- **Calibrated Precision:** 95.08%
- **Calibrated Recall:** 100.0%
- **Optimal Threshold:** `0.0028` (for high-recall environments)
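As an illustration of applying the calibrated threshold, the sketch below converts a `pipeline` result into a thresholded 0/1 decision; the helper name and the hard-coded results are illustrative, not part of the model API:

```python
# Illustrative sketch: apply the calibrated high-recall threshold to the
# scam-class probability instead of the default argmax decision.
THRESHOLD = 0.0028  # calibrated threshold reported above

def decide(result: dict, threshold: float = THRESHOLD) -> int:
    """Map a pipeline output {'label', 'score'} to 0 (Safe) or 1 (Scam)."""
    # 'score' is the probability of the predicted label; in the binary case
    # the scam probability is either the score itself or its complement.
    scam_prob = result["score"] if result["label"] == "LABEL_1" else 1.0 - result["score"]
    return 1 if scam_prob >= threshold else 0

print(decide({"label": "LABEL_1", "score": 0.99}))   # 1: flagged as scam
print(decide({"label": "LABEL_0", "score": 0.999}))  # 0: scam prob 0.001 < 0.0028
print(decide({"label": "LABEL_0", "score": 0.99}))   # 1: scam prob 0.01 clears the low threshold
```

Such a low threshold trades precision for the 100% recall reported above: a message is flagged unless the model is very confident it is safe.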

### Robustness Evaluation

The model was tested against common bad-actor obfuscation tactics:

| Tactic | Example Input | Prediction Probability | Passed |
| :--- | :--- | :--- | :--- |
| **URL Obfuscation** | `Win $1000 fast! Click hxxp://scammy...` | 99.9% Scam | ✅ |
| **Numeric Substitution** | `W1NNER! Y0u have b33n select3d...` | 99.3% Scam | ✅ |
| **Mixed Case** | `cOnGrAtUlAtIoNs, yOu WoN a FrEe...` | 89.8% Scam | ✅ |

*Note: The model can struggle with extremely short, contextless messages (e.g., "Call me now"). This is by design: earlier heuristic layers of the pipeline supply the context for such cases.*

## Usage

You can use this model directly with Hugging Face's `pipeline`:

```python
from transformers import pipeline

# Load the classification pipeline
classifier = pipeline("text-classification", model="Digvijay05/SCAMBERT")

# Inference
text = "Earn Rs 5000 daily income from home part time. Click this link: http://bit.ly/job"
result = classifier(text)

print(result)
# [{'label': 'LABEL_1', 'score': 0.99...}]
```

Or run via the **Inference API**:

```python
import httpx

API_URL = "https://api-inference.huggingface.co/models/Digvijay05/SCAMBERT"
headers = {"Authorization": "Bearer YOUR_HF_TOKEN"}  # replace with your token

response = httpx.post(
    API_URL,
    headers=headers,
    json={"inputs": "Your account is locked. Verify at bit.ly/secure"},
)
print(response.json())
```

## Deployment Considerations

- **CPU Latency Estimate:** ~10-30 ms / sequence
- **GPU Latency Estimate:** ~2-5 ms / sequence
- **Recommendation:** The model can be hosted efficiently on serverless CPU environments (such as the Render Free Tier) via Hugging Face's Inference API, or deployed natively where 512 MB+ RAM is available. ONNX quantization is recommended for edge deployments.
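For the ONNX route, one common approach is the `optimum` exporter; this is an assumption of this card rather than a tested recipe, and the output directory name is an illustrative choice:

```shell
# Export the checkpoint to ONNX for CPU/edge serving (output dir illustrative);
# quantization can then be applied on the exported graph with optimum/onnxruntime.
pip install "optimum[exporters]"
optimum-cli export onnx --model Digvijay05/SCAMBERT scambert-onnx/
```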

## Intended Use

This model is designed as a *semantic booster* and tie-breaker layer within a multi-layered classification engine. It excels at detecting complex sentence structure, urgency cues, and manipulative context that standard regex/heuristic rules might miss.
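
The tie-breaker role can be sketched as follows; the regex rules, thresholds, and function names here are illustrative stand-ins, not the actual CIPHER pipeline layers:

```python
import re
from typing import Callable, Optional

# Illustrative regex layer: a few stand-in scam indicators.
SCAM_PATTERNS = [r"https?://bit\.ly/", r"\bOTP\b", r"\blottery\b", r"\bwon\b"]

def heuristic_vote(text: str) -> Optional[int]:
    """Return 1 on strong rule hits, 0 on clearly benign text, None if unsure."""
    hits = sum(bool(re.search(p, text, re.IGNORECASE)) for p in SCAM_PATTERNS)
    if hits >= 2:
        return 1
    if hits == 0 and len(text.split()) > 3 and "http" not in text:
        return 0
    return None  # ambiguous: defer to the semantic layer

def classify(text: str, model_fn: Callable[[str], int]) -> int:
    """Earlier layers decide when confident; SCAMBERT breaks the remaining ties."""
    vote = heuristic_vote(text)
    return vote if vote is not None else model_fn(text)

# Short, contextless messages fall through to the model, as noted above:
print(classify("Call me now", model_fn=lambda t: 1))  # 1 (decided by the model)
```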