---
license: apache-2.0
language:
- en
pipeline_tag: fill-mask
tags:
- url
- cybersecurity
- urls
- links
- classification
- phishing-detection
- tiny
- phishing
- malware
- defacement
- transformers
- urlbert
- bert
- malicious
- base
new_version: CrabInHoney/urlbert-tiny-v5
---

urlbert-tiny-base-v4 is a lightweight BERT-based model optimized specifically for URL analysis. This version includes several improvements over its predecessor:

- Trained with a teacher-student architecture
- Pre-trained primarily on masked-token prediction
- Distilled knowledge from a larger model's logits
- Further trained on three specialized tasks to improve its understanding of URL structure
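The logit-distillation step above can be sketched as a loss that blends a temperature-softened KL term against the teacher's logits with the ordinary hard-label MLM loss. This is a minimal illustrative sketch, not the exact training code for this model; the temperature `T` and mixing weight `alpha` are assumed hyperparameters.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Blend a soft-target KL loss (teacher logits) with the hard-label MLM loss.

    Shapes: logits are (batch, seq_len, vocab); labels are (batch, seq_len),
    with -100 marking unmasked positions to ignore, as in Hugging Face MLM.
    """
    # Soft targets: KL between temperature-softened student and teacher
    # distributions, scaled by T^2 to keep gradient magnitudes comparable.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)

    # Hard targets: standard cross-entropy on the masked tokens only.
    hard = F.cross_entropy(
        student_logits.view(-1, student_logits.size(-1)),
        labels.view(-1),
        ignore_index=-100,
    )
    return alpha * soft + (1 - alpha) * hard
```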

The result is an efficient model that can be rapidly fine-tuned for URL classification tasks with minimal computational resources.
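Fine-tuning for classification amounts to replacing the MLM head with a fresh classification head. A minimal sketch, assuming a hypothetical four-class label set (benign/phishing/malware/defacement, matching the tags above, but not an official mapping shipped with the model):

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Illustrative label set; choose labels to match your own dataset.
ID2LABEL = {0: "benign", 1: "phishing", 2: "malware", 3: "defacement"}
LABEL2ID = {v: k for k, v in ID2LABEL.items()}

def load_for_classification(model_name="CrabInHoney/urlbert-tiny-base-v4"):
    """Load the checkpoint with a newly initialized classification head."""
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForSequenceClassification.from_pretrained(
        model_name,
        num_labels=len(ID2LABEL),
        id2label=ID2LABEL,
        label2id=LABEL2ID,
    )
    return tokenizer, model

if __name__ == "__main__":
    tokenizer, model = load_for_classification()
    enc = tokenizer("http://example.com/login", return_tensors="pt")
    with torch.no_grad():
        logits = model(**enc).logits
    # The head is untrained at this point; fine-tune (e.g. with Trainer)
    # before trusting the predicted label.
    print(ID2LABEL[int(logits.argmax(-1))])
```

From here, the model can be trained like any sequence classifier, e.g. with the `transformers` `Trainer` on a labeled URL dataset.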

## Model Details

- **Parameters:** 3.72M
- **Tensor Type:** F32
- **Previous Version:** [urlbert-tiny-base-v3](https://huggingface.co/CrabInHoney/urlbert-tiny-base-v3)

## Usage Example

```python
from transformers import BertTokenizerFast, BertForMaskedLM, pipeline
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(f"Device: {device}")

model_name = "CrabInHoney/urlbert-tiny-base-v4"

tokenizer = BertTokenizerFast.from_pretrained(model_name)
model = BertForMaskedLM.from_pretrained(model_name)

# The pipeline moves the model to the requested device itself.
fill_mask = pipeline(
    "fill-mask",
    model=model,
    tokenizer=tokenizer,
    device=device,
)

sentences = [
    "http://example.[MASK]/",
]

for sentence in sentences:
    print(f"\nInput: {sentence}")
    for result in fill_mask(sentence):
        print(f"Predicted token: {result['token_str']}, probability: {result['score']:.4f}")
```

### Sample Output

```
Input: http://example.[MASK]/

Predicted token: com, probability: 0.7307
Predicted token: net, probability: 0.1319
Predicted token: org, probability: 0.0881
Predicted token: info, probability: 0.0094
Predicted token: cn, probability: 0.0084
```