MaziyarPanahi committed on
Commit f5f4e08 · verified · 1 Parent(s): 85a5697

Upload Italian PII detection model OpenMed-PII-Italian-BiomedBERTFull-Base-110M-v1

README.md ADDED
@@ -0,0 +1,290 @@
---
language:
- it
license: apache-2.0
base_model: microsoft/BiomedNLP-BiomedBERT-base-uncased-abstract-fulltext
tags:
- token-classification
- ner
- pii
- pii-detection
- de-identification
- privacy
- healthcare
- medical
- clinical
- phi
- italian
- pytorch
- transformers
- openmed
pipeline_tag: token-classification
library_name: transformers
metrics:
- f1
- precision
- recall
model-index:
- name: OpenMed-PII-Italian-BiomedBERTFull-Base-110M-v1
  results:
  - task:
      type: token-classification
      name: Named Entity Recognition
    dataset:
      name: AI4Privacy (Italian subset)
      type: ai4privacy/pii-masking-400k
      split: test
    metrics:
    - type: f1
      value: 0.9255
      name: F1 (micro)
    - type: precision
      value: 0.9197
      name: Precision
    - type: recall
      value: 0.9313
      name: Recall
widget:
- text: "Dr. Marco Rossi (Codice Fiscale: RSSMRC85C15H501Z) può essere contattato a marco.rossi@ospedale.it o al +39 333 123 4567. Abita in Via Roma 25, 00184 Roma."
  example_title: Clinical Note with PII (Italian)
---

# OpenMed-PII-Italian-BiomedBERTFull-Base-110M-v1

**Italian PII Detection Model** | 110M Parameters | Open Source

[![F1 Score](https://img.shields.io/badge/F1-92.55%25-brightgreen)]() [![Precision](https://img.shields.io/badge/Precision-91.97%25-blue)]() [![Recall](https://img.shields.io/badge/Recall-93.13%25-orange)]()

## Model Description

**OpenMed-PII-Italian-BiomedBERTFull-Base-110M-v1** is a transformer-based token classification model fine-tuned for **personally identifiable information (PII) detection in Italian text**. It identifies and classifies **54 types of sensitive information**, including names, addresses, identification numbers, financial details, and contact information.

### Key Features

- **Italian-Optimized**: Trained specifically on Italian text for strong in-language performance
- **High Accuracy**: 92.55% micro F1 on the held-out test split
- **Comprehensive Coverage**: Detects 54 entity types spanning personal, financial, medical, and contact information
- **Privacy-Focused**: Designed for de-identification and for supporting compliance with the GDPR and other privacy regulations
- **Production-Ready**: Suitable for real-world text processing pipelines

## Performance

Evaluated on the Italian subset of the AI4Privacy test set:

| Metric | Score |
|:---|:---:|
| **Micro F1** | **0.9255** |
| Precision | 0.9197 |
| Recall | 0.9313 |
| Macro F1 | 0.9105 |
| Weighted F1 | 0.9210 |
| Accuracy | 0.9893 |

## Supported Entity Types

This model detects **54 PII entity types**, organized into the categories below:

<details>
<summary><strong>Identifiers</strong> (22 types)</summary>

| Entity | Description |
|:---|:---|
| `ACCOUNTNAME` | Account name |
| `BANKACCOUNT` | Bank account number |
| `BIC` | Bank Identifier Code (BIC/SWIFT) |
| `BITCOINADDRESS` | Bitcoin address |
| `CREDITCARD` | Credit card number |
| `CREDITCARDISSUER` | Credit card issuer |
| `CVV` | Card verification value |
| `ETHEREUMADDRESS` | Ethereum address |
| `IBAN` | International Bank Account Number |
| `IMEI` | Device IMEI number |
| ... | *and 12 more* |

</details>

<details>
<summary><strong>Personal Info</strong> (11 types)</summary>

| Entity | Description |
|:---|:---|
| `AGE` | Age |
| `DATEOFBIRTH` | Date of birth |
| `EYECOLOR` | Eye color |
| `FIRSTNAME` | First name |
| `GENDER` | Gender |
| `HEIGHT` | Height |
| `LASTNAME` | Last name |
| `MIDDLENAME` | Middle name |
| `OCCUPATION` | Occupation |
| `PREFIX` | Name prefix (e.g., Dr.) |
| ... | *and 1 more* |

</details>

<details>
<summary><strong>Contact Info</strong> (2 types)</summary>

| Entity | Description |
|:---|:---|
| `EMAIL` | Email address |
| `PHONE` | Phone number |

</details>

<details>
<summary><strong>Location</strong> (9 types)</summary>

| Entity | Description |
|:---|:---|
| `BUILDINGNUMBER` | Building number |
| `CITY` | City |
| `COUNTY` | County or province |
| `GPSCOORDINATES` | GPS coordinates |
| `ORDINALDIRECTION` | Ordinal direction (e.g., North) |
| `SECONDARYADDRESS` | Secondary address (e.g., apartment or suite) |
| `STATE` | State or region |
| `STREET` | Street name |
| `ZIPCODE` | Postal code |

</details>

<details>
<summary><strong>Organization</strong> (3 types)</summary>

| Entity | Description |
|:---|:---|
| `JOBDEPARTMENT` | Job department |
| `JOBTITLE` | Job title |
| `ORGANIZATION` | Organization name |

</details>

<details>
<summary><strong>Financial</strong> (5 types)</summary>

| Entity | Description |
|:---|:---|
| `AMOUNT` | Monetary amount |
| `CURRENCY` | Currency |
| `CURRENCYCODE` | Currency code (e.g., EUR) |
| `CURRENCYNAME` | Currency name |
| `CURRENCYSYMBOL` | Currency symbol |

</details>

<details>
<summary><strong>Temporal</strong> (2 types)</summary>

| Entity | Description |
|:---|:---|
| `DATE` | Date |
| `TIME` | Time |

</details>
185
+
186
+ ## Usage
187
+
188
+ ### Quick Start
189
+
190
+ ```python
191
+ from transformers import pipeline
192
+
193
+ # Load the PII detection pipeline
194
+ ner = pipeline("ner", model="OpenMed/OpenMed-PII-Italian-BiomedBERTFull-Base-110M-v1", aggregation_strategy="simple")
195
+
196
+ text = """
197
+ Paziente Marco Bianchi (nato il 15/03/1985, CF: BNCMRC85C15H501Z) è stato visitato oggi.
198
+ Contatto: marco.bianchi@email.it, Telefono: +39 333 123 4567.
199
+ Indirizzo: Via Garibaldi 42, 20121 Milano.
200
+ """
201
+
202
+ entities = ner(text)
203
+ for entity in entities:
204
+ print(f"{entity['entity_group']}: {entity['word']} (score: {entity['score']:.3f})")
205
+ ```

### De-identification Example

```python
def redact_pii(text, entities):
    """Replace each detected PII span with its entity-type label."""
    # Process entities from the end of the string backwards so that
    # earlier character offsets stay valid after each replacement.
    sorted_entities = sorted(entities, key=lambda x: x['start'], reverse=True)
    redacted = text
    for ent in sorted_entities:
        redacted = redacted[:ent['start']] + f"[{ent['entity_group']}]" + redacted[ent['end']:]
    return redacted

# Apply de-identification (`text` and `entities` come from the Quick Start above)
redacted_text = redact_pii(text, entities)
print(redacted_text)
```

### Batch Processing

```python
from transformers import AutoModelForTokenClassification, AutoTokenizer
import torch

model_name = "OpenMed/OpenMed-PII-Italian-BiomedBERTFull-Base-110M-v1"
model = AutoModelForTokenClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

texts = [
    "Paziente Marco Bianchi (nato il 15/03/1985, CF: BNCMRC85C15H501Z) è stato visitato oggi.",
    "Contatto: marco.bianchi@email.it, Telefono: +39 333 123 4567.",
]

inputs = tokenizer(texts, return_tensors='pt', padding=True, truncation=True)
with torch.no_grad():
    outputs = model(**inputs)
    predictions = torch.argmax(outputs.logits, dim=-1)
```
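
The `predictions` tensor holds class indices; mapping them back to BIO tags goes through the model's `id2label` mapping (`model.config.id2label`), skipping padded positions via the attention mask. A minimal helper, shown with a toy three-entry mapping so the sketch runs standalone:

```python
def decode_predictions(pred_ids, attention_mask, id2label):
    """Map predicted class indices to BIO tags, skipping padded positions."""
    return [id2label[int(p)] for p, m in zip(pred_ids, attention_mask) if int(m) == 1]

# Toy mapping for illustration; the real one is model.config.id2label.
id2label = {0: "O", 22: "B-FIRSTNAME", 31: "B-LASTNAME"}
print(decode_predictions([22, 31, 0, 0], [1, 1, 1, 0], id2label))
# ['B-FIRSTNAME', 'B-LASTNAME', 'O']
```

With the batch above, `decode_predictions(predictions[i], inputs['attention_mask'][i], model.config.id2label)` yields the tag sequence for `texts[i]`; note that special tokens such as `[CLS]`/`[SEP]` still carry predictions and are usually dropped as well.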

## Training Details

### Dataset

- **Source**: [AI4Privacy PII Masking 400k](https://huggingface.co/datasets/ai4privacy/pii-masking-400k) (Italian subset)
- **Format**: BIO-tagged token classification
- **Labels**: 76 total, per `config.json`: 54 `B-` tags, 21 `I-` tags, and `O` (not every entity type has multi-token spans, so not all have an `I-` tag)

### Training Configuration

- **Max Sequence Length**: 512 tokens
- **Epochs**: 3
- **Framework**: Hugging Face Transformers + Trainer API

## Intended Use & Limitations

### Intended Use

- **De-identification**: Automated redaction of PII in Italian clinical notes, medical records, and documents
- **Compliance**: Supporting compliance with the GDPR and other privacy regulations
- **Data Preprocessing**: Preparing datasets for research by removing sensitive information
- **Audit Support**: Identifying PII in document collections

### Limitations

**Important**: This model is intended as an **assistive tool**, not a replacement for human review.

- **False Negatives**: Some PII may be missed; always verify output in critical applications
- **Context Sensitivity**: Performance may vary with domain-specific terminology
- **Language**: Optimized for Italian text; performance on other languages is not guaranteed

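Per-entity accuracy also varies widely (for example, `DATEOFBIRTH` reaches only 0.42 F1 in the bundled classification report), so one practical mitigation is to auto-redact only high-confidence spans and queue the rest for manual review. A small sketch; the 0.5 threshold is an illustrative assumption, not a tuned value:

```python
def split_by_confidence(entities, threshold=0.5):
    """Partition pipeline entities into auto-redactable and manual-review lists."""
    auto = [e for e in entities if e["score"] >= threshold]
    review = [e for e in entities if e["score"] < threshold]
    return auto, review

# Mock pipeline output for illustration:
ents = [
    {"entity_group": "FIRSTNAME", "score": 0.98, "word": "Marco", "start": 9, "end": 14},
    {"entity_group": "DATEOFBIRTH", "score": 0.41, "word": "15/03/1985", "start": 24, "end": 34},
]
auto, review = split_by_confidence(ents)
print([e["entity_group"] for e in auto], [e["entity_group"] for e in review])
# ['FIRSTNAME'] ['DATEOFBIRTH']
```
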
## Citation

```bibtex
@misc{openmed-pii-2026,
  title     = {OpenMed-PII-Italian-BiomedBERTFull-Base-110M-v1: Italian PII Detection Model},
  author    = {OpenMed Science},
  year      = {2026},
  publisher = {Hugging Face},
  url       = {https://huggingface.co/OpenMed/OpenMed-PII-Italian-BiomedBERTFull-Base-110M-v1}
}
```

## Links

- **Organization**: [OpenMed](https://huggingface.co/OpenMed)
all_results.json ADDED
@@ -0,0 +1,28 @@
{
  "epoch": 3.0,
  "eval_accuracy": 0.9902273864029857,
  "eval_f1": 0.9330852503382949,
  "eval_loss": 0.031441979110240936,
  "eval_macro_f1": 0.9175882104431093,
  "eval_precision": 0.9287494107347296,
  "eval_recall": 0.9374617633063694,
  "eval_runtime": 4.0213,
  "eval_samples_per_second": 1236.418,
  "eval_steps_per_second": 19.397,
  "eval_weighted_f1": 0.9296538337362635,
  "test_accuracy": 0.9893058468311692,
  "test_f1": 0.9254852849092048,
  "test_loss": 0.033455222845077515,
  "test_macro_f1": 0.9105334066706021,
  "test_precision": 0.9196960765048798,
  "test_recall": 0.9313478376227116,
  "test_runtime": 4.5862,
  "test_samples_per_second": 1105.272,
  "test_steps_per_second": 17.444,
  "test_weighted_f1": 0.9210243739638152,
  "total_flos": 4715705190580224.0,
  "train_loss": 0.21988849894454082,
  "train_runtime": 225.9955,
  "train_samples_per_second": 543.515,
  "train_steps_per_second": 8.496
}
classification_report.txt ADDED
@@ -0,0 +1,64 @@
Classification Report for Italian PII Detection
Model: microsoft/BiomedNLP-BiomedBERT-base-uncased-abstract-fulltext
============================================================

                  precision    recall  f1-score   support

     ACCOUNTNAME       0.99      1.00      0.99       282
             AGE       0.98      0.99      0.98       338
          AMOUNT       0.96      0.89      0.92       116
     BANKACCOUNT       0.97      1.00      0.99       306
             BIC       0.95      0.95      0.95        77
  BITCOINADDRESS       0.90      1.00      0.95       273
  BUILDINGNUMBER       0.92      0.91      0.91       346
            CITY       0.94      0.83      0.88       280
          COUNTY       0.92      1.00      0.96       327
      CREDITCARD       0.80      0.79      0.80       302
CREDITCARDISSUER       0.96      1.00      0.98       146
        CURRENCY       0.62      0.89      0.73       187
    CURRENCYCODE       0.74      0.71      0.72        85
    CURRENCYNAME       0.00      0.00      0.00        97
  CURRENCYSYMBOL       0.95      0.94      0.94       308
             CVV       0.97      0.96      0.96        97
            DATE       0.62      0.93      0.75       423
     DATEOFBIRTH       0.60      0.33      0.42       327
           EMAIL       0.99      1.00      1.00       423
 ETHEREUMADDRESS       1.00      1.00      1.00       168
        EYECOLOR       0.98      0.98      0.98       108
       FIRSTNAME       0.89      0.92      0.91      1623
          GENDER       0.98      0.99      0.99       302
  GPSCOORDINATES       1.00      1.00      1.00       223
          HEIGHT       0.95      1.00      0.98       126
            IBAN       0.97      1.00      0.98       230
            IMEI       1.00      1.00      1.00       215
       IPADDRESS       1.00      1.00      1.00       783
   JOBDEPARTMENT       0.92      0.97      0.95       327
        JOBTITLE       0.98      1.00      0.99       279
        LASTNAME       0.85      0.90      0.87       441
 LITECOINADDRESS       1.00      0.61      0.76        83
      MACADDRESS       0.99      1.00      1.00       114
    MASKEDNUMBER       0.71      0.72      0.71       209
      MIDDLENAME       0.74      0.64      0.68       310
      OCCUPATION       0.98      0.99      0.99       323
ORDINALDIRECTION       1.00      1.00      1.00       152
    ORGANIZATION       0.97      1.00      0.98       271
        PASSWORD       0.98      0.98      0.98       286
           PHONE       1.00      0.99      1.00       303
             PIN       0.88      0.88      0.88        72
          PREFIX       0.97      1.00      0.99       298
SECONDARYADDRESS       0.98      1.00      0.99       316
             SEX       1.00      1.00      1.00       338
             SSN       0.99      1.00      1.00       259
           STATE       0.93      0.97      0.95       294
          STREET       0.94      0.98      0.96       332
            TIME       0.94      0.99      0.97       296
             URL       1.00      1.00      1.00       244
       USERAGENT       1.00      1.00      1.00       233
        USERNAME       0.98      0.98      0.98       332
             VIN       1.00      0.99      0.99        84
             VRM       0.94      1.00      0.97        98
         ZIPCODE       0.91      0.92      0.91       264

       micro avg       0.92      0.93      0.93     15076
       macro avg       0.91      0.92      0.91     15076
    weighted avg       0.92      0.93      0.92     15076
config.json ADDED
@@ -0,0 +1,180 @@
{
  "architectures": [
    "BertForTokenClassification"
  ],
  "attention_probs_dropout_prob": 0.1,
  "classifier_dropout": null,
  "dtype": "float32",
  "hidden_act": "gelu",
  "hidden_dropout_prob": 0.1,
  "hidden_size": 768,
  "id2label": {
    "0": "O",
    "1": "B-ACCOUNTNAME",
    "2": "B-AGE",
    "3": "B-AMOUNT",
    "4": "B-BANKACCOUNT",
    "5": "B-BIC",
    "6": "B-BITCOINADDRESS",
    "7": "B-BUILDINGNUMBER",
    "8": "B-CITY",
    "9": "B-COUNTY",
    "10": "B-CREDITCARD",
    "11": "B-CREDITCARDISSUER",
    "12": "B-CURRENCY",
    "13": "B-CURRENCYCODE",
    "14": "B-CURRENCYNAME",
    "15": "B-CURRENCYSYMBOL",
    "16": "B-CVV",
    "17": "B-DATE",
    "18": "B-DATEOFBIRTH",
    "19": "B-EMAIL",
    "20": "B-ETHEREUMADDRESS",
    "21": "B-EYECOLOR",
    "22": "B-FIRSTNAME",
    "23": "B-GENDER",
    "24": "B-GPSCOORDINATES",
    "25": "B-HEIGHT",
    "26": "B-IBAN",
    "27": "B-IMEI",
    "28": "B-IPADDRESS",
    "29": "B-JOBDEPARTMENT",
    "30": "B-JOBTITLE",
    "31": "B-LASTNAME",
    "32": "B-LITECOINADDRESS",
    "33": "B-MACADDRESS",
    "34": "B-MASKEDNUMBER",
    "35": "B-MIDDLENAME",
    "36": "B-OCCUPATION",
    "37": "B-ORDINALDIRECTION",
    "38": "B-ORGANIZATION",
    "39": "B-PASSWORD",
    "40": "B-PHONE",
    "41": "B-PIN",
    "42": "B-PREFIX",
    "43": "B-SECONDARYADDRESS",
    "44": "B-SEX",
    "45": "B-SSN",
    "46": "B-STATE",
    "47": "B-STREET",
    "48": "B-TIME",
    "49": "B-URL",
    "50": "B-USERAGENT",
    "51": "B-USERNAME",
    "52": "B-VIN",
    "53": "B-VRM",
    "54": "B-ZIPCODE",
    "55": "I-ACCOUNTNAME",
    "56": "I-AGE",
    "57": "I-AMOUNT",
    "58": "I-CITY",
    "59": "I-COUNTY",
    "60": "I-CURRENCY",
    "61": "I-CURRENCYNAME",
    "62": "I-DATE",
    "63": "I-DATEOFBIRTH",
    "64": "I-EYECOLOR",
    "65": "I-GENDER",
    "66": "I-HEIGHT",
    "67": "I-JOBTITLE",
    "68": "I-ORGANIZATION",
    "69": "I-PHONE",
    "70": "I-SECONDARYADDRESS",
    "71": "I-SSN",
    "72": "I-STATE",
    "73": "I-STREET",
    "74": "I-TIME",
    "75": "I-USERAGENT"
  },
  "initializer_range": 0.02,
  "intermediate_size": 3072,
  "label2id": {
    "B-ACCOUNTNAME": 1,
    "B-AGE": 2,
    "B-AMOUNT": 3,
    "B-BANKACCOUNT": 4,
    "B-BIC": 5,
    "B-BITCOINADDRESS": 6,
    "B-BUILDINGNUMBER": 7,
    "B-CITY": 8,
    "B-COUNTY": 9,
    "B-CREDITCARD": 10,
    "B-CREDITCARDISSUER": 11,
    "B-CURRENCY": 12,
    "B-CURRENCYCODE": 13,
    "B-CURRENCYNAME": 14,
    "B-CURRENCYSYMBOL": 15,
    "B-CVV": 16,
    "B-DATE": 17,
    "B-DATEOFBIRTH": 18,
    "B-EMAIL": 19,
    "B-ETHEREUMADDRESS": 20,
    "B-EYECOLOR": 21,
    "B-FIRSTNAME": 22,
    "B-GENDER": 23,
    "B-GPSCOORDINATES": 24,
    "B-HEIGHT": 25,
    "B-IBAN": 26,
    "B-IMEI": 27,
    "B-IPADDRESS": 28,
    "B-JOBDEPARTMENT": 29,
    "B-JOBTITLE": 30,
    "B-LASTNAME": 31,
    "B-LITECOINADDRESS": 32,
    "B-MACADDRESS": 33,
    "B-MASKEDNUMBER": 34,
    "B-MIDDLENAME": 35,
    "B-OCCUPATION": 36,
    "B-ORDINALDIRECTION": 37,
    "B-ORGANIZATION": 38,
    "B-PASSWORD": 39,
    "B-PHONE": 40,
    "B-PIN": 41,
    "B-PREFIX": 42,
    "B-SECONDARYADDRESS": 43,
    "B-SEX": 44,
    "B-SSN": 45,
    "B-STATE": 46,
    "B-STREET": 47,
    "B-TIME": 48,
    "B-URL": 49,
    "B-USERAGENT": 50,
    "B-USERNAME": 51,
    "B-VIN": 52,
    "B-VRM": 53,
    "B-ZIPCODE": 54,
    "I-ACCOUNTNAME": 55,
    "I-AGE": 56,
    "I-AMOUNT": 57,
    "I-CITY": 58,
    "I-COUNTY": 59,
    "I-CURRENCY": 60,
    "I-CURRENCYNAME": 61,
    "I-DATE": 62,
    "I-DATEOFBIRTH": 63,
    "I-EYECOLOR": 64,
    "I-GENDER": 65,
    "I-HEIGHT": 66,
    "I-JOBTITLE": 67,
    "I-ORGANIZATION": 68,
    "I-PHONE": 69,
    "I-SECONDARYADDRESS": 70,
    "I-SSN": 71,
    "I-STATE": 72,
    "I-STREET": 73,
    "I-TIME": 74,
    "I-USERAGENT": 75,
    "O": 0
  },
  "layer_norm_eps": 1e-12,
  "max_position_embeddings": 512,
  "model_type": "bert",
  "num_attention_heads": 12,
  "num_hidden_layers": 12,
  "pad_token_id": 0,
  "position_embedding_type": "absolute",
  "transformers_version": "4.57.3",
  "type_vocab_size": 2,
  "use_cache": true,
  "vocab_size": 30522
}
eval_results.json ADDED
@@ -0,0 +1,13 @@
{
  "epoch": 3.0,
  "eval_accuracy": 0.9902273864029857,
  "eval_f1": 0.9330852503382949,
  "eval_loss": 0.031441979110240936,
  "eval_macro_f1": 0.9175882104431093,
  "eval_precision": 0.9287494107347296,
  "eval_recall": 0.9374617633063694,
  "eval_runtime": 4.0213,
  "eval_samples_per_second": 1236.418,
  "eval_steps_per_second": 19.397,
  "eval_weighted_f1": 0.9296538337362635
}
model.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8c96ab95d8ac8dba8b8cf486ba7a62a3b527a36acc7fa00fc3a3927a1038831e
size 435823712
special_tokens_map.json ADDED
@@ -0,0 +1,7 @@
{
  "cls_token": "[CLS]",
  "mask_token": "[MASK]",
  "pad_token": "[PAD]",
  "sep_token": "[SEP]",
  "unk_token": "[UNK]"
}
test_results.json ADDED
@@ -0,0 +1,12 @@
{
  "test_accuracy": 0.9893058468311692,
  "test_f1": 0.9254852849092048,
  "test_loss": 0.033455222845077515,
  "test_macro_f1": 0.9105334066706021,
  "test_precision": 0.9196960765048798,
  "test_recall": 0.9313478376227116,
  "test_runtime": 4.5862,
  "test_samples_per_second": 1105.272,
  "test_steps_per_second": 17.444,
  "test_weighted_f1": 0.9210243739638152
}
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,58 @@
{
  "added_tokens_decoder": {
    "0": {
      "content": "[PAD]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "1": {
      "content": "[UNK]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "2": {
      "content": "[CLS]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "3": {
      "content": "[SEP]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "4": {
      "content": "[MASK]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    }
  },
  "clean_up_tokenization_spaces": true,
  "cls_token": "[CLS]",
  "do_basic_tokenize": true,
  "do_lower_case": true,
  "extra_special_tokens": {},
  "mask_token": "[MASK]",
  "model_max_length": 1000000000000000019884624838656,
  "never_split": null,
  "pad_token": "[PAD]",
  "sep_token": "[SEP]",
  "strip_accents": null,
  "tokenize_chinese_chars": true,
  "tokenizer_class": "BertTokenizer",
  "unk_token": "[UNK]"
}
train_results.json ADDED
@@ -0,0 +1,8 @@
{
  "epoch": 3.0,
  "total_flos": 4715705190580224.0,
  "train_loss": 0.21988849894454082,
  "train_runtime": 225.9955,
  "train_samples_per_second": 543.515,
  "train_steps_per_second": 8.496
}
vocab.txt ADDED
The diff for this file is too large to render. See raw diff