MaziyarPanahi committed on
Commit 85d5bba · verified · 1 Parent(s): 579bf7c

Upload PII detection model OpenMed-PII-FastClinical-Small-82M-v1

README.md ADDED
---
language:
- en
license: apache-2.0
base_model: distilbert/distilroberta-base
tags:
- token-classification
- ner
- pii
- pii-detection
- de-identification
- privacy
- healthcare
- medical
- clinical
- phi
- hipaa
- pytorch
- transformers
- openmed
datasets:
- nvidia/Nemotron-PII
pipeline_tag: token-classification
library_name: transformers
metrics:
- f1
- precision
- recall
model-index:
- name: OpenMed-PII-FastClinical-Base-82M-v1
  results:
  - task:
      type: token-classification
      name: Named Entity Recognition
    dataset:
      name: nvidia/Nemotron-PII (test_strat)
      type: nvidia/Nemotron-PII
      split: test
    metrics:
    - type: f1
      value: 0.9511
      name: F1 (micro)
    - type: precision
      value: 0.9538
      name: Precision
    - type: recall
      value: 0.9484
      name: Recall
widget:
- text: "Dr. Sarah Johnson (SSN: 123-45-6789) can be reached at sarah.johnson@hospital.org or 555-123-4567. She lives at 123 Oak Street, Boston, MA 02108."
  example_title: Clinical Note with PII
---

# OpenMed-PII-FastClinical-Base-82M-v1

**PII Detection Model** | 82M Parameters | Open Source

[![F1 Score](https://img.shields.io/badge/F1-95.11%25-brightgreen)]() [![Precision](https://img.shields.io/badge/Precision-95.38%25-blue)]() [![Recall](https://img.shields.io/badge/Recall-94.84%25-orange)]()

## Model Description

**OpenMed-PII-FastClinical-Base-82M-v1** is a transformer-based token classification model fine-tuned for **Personally Identifiable Information (PII) detection** in text. It identifies and classifies **54 types of sensitive information**, including names, addresses, SSNs, medical record numbers, and more.

### Key Features

- **High Accuracy**: 95.11% micro-F1 on the held-out test set, with strong scores across diverse PII categories
- **Comprehensive Coverage**: Detects 54 entity types spanning personal, financial, medical, and contact information
- **Privacy-Focused**: Designed for de-identification and for supporting compliance with HIPAA, GDPR, and other privacy regulations
- **Production-Ready**: Optimized for real-world text processing pipelines

## Performance

Evaluated on a stratified 2,000-sample test set from NVIDIA Nemotron-PII:

| Metric | Score |
|:---|:---:|
| **Micro F1** | **0.9511** |
| Precision | 0.9538 |
| Recall | 0.9484 |
| Macro F1 | 0.9521 |
| Weighted F1 | 0.9504 |
| Accuracy | 0.9932 |
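The table reports micro, macro, and weighted F1. As a reminder of how the first two aggregates differ, here is a minimal pure-Python sketch; the per-entity counts below are illustrative placeholders, not the model's actual confusion statistics:

```python
def f1(tp, fp, fn):
    """Entity-level F1 from true-positive, false-positive, false-negative counts."""
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    return 2 * p * r / (p + r) if p + r else 0.0

# Hypothetical per-entity counts: (tp, fp, fn)
counts = {"email": (758, 2, 3), "occupation": (445, 195, 281)}

# Micro-F1 pools all counts, so frequent, well-detected entities dominate
micro = f1(*(sum(c[i] for c in counts.values()) for i in range(3)))

# Macro-F1 averages per-entity F1, weighting every entity type equally
macro = sum(f1(*c) for c in counts.values()) / len(counts)

print(f"micro={micro:.3f} macro={macro:.3f}")
```

With counts like these, macro-F1 drops below micro-F1 because the weak entity type counts as much as the strong one.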
### Top 10 PII Models

| Rank | Model | F1 | Precision | Recall |
|:---:|:---|:---:|:---:|:---:|
| 1 | [OpenMed-PII-SuperClinical-Large-434M-v1](https://huggingface.co/openmed/OpenMed-PII-SuperClinical-Large-434M-v1) | 0.9608 | 0.9685 | 0.9532 |
| 2 | [OpenMed-PII-BigMed-Large-560M-v1](https://huggingface.co/openmed/OpenMed-PII-BigMed-Large-560M-v1) | 0.9604 | 0.9644 | 0.9565 |
| 3 | [OpenMed-PII-EuroMed-210M-v1](https://huggingface.co/openmed/OpenMed-PII-EuroMed-210M-v1) | 0.9600 | 0.9681 | 0.9521 |
| 4 | [OpenMed-PII-SnowflakeMed-568M-v1](https://huggingface.co/openmed/OpenMed-PII-SnowflakeMed-568M-v1) | 0.9594 | 0.9640 | 0.9548 |
| 5 | [OpenMed-PII-SuperMedical-Large-355M-v1](https://huggingface.co/openmed/OpenMed-PII-SuperMedical-Large-355M-v1) | 0.9592 | 0.9632 | 0.9553 |
| 6 | [OpenMed-PII-ClinicalBGE-568M-v1](https://huggingface.co/openmed/OpenMed-PII-ClinicalBGE-568M-v1) | 0.9587 | 0.9636 | 0.9538 |
| 7 | [OpenMed-PII-mClinicalE5-Large-560M-v1](https://huggingface.co/openmed/OpenMed-PII-mClinicalE5-Large-560M-v1) | 0.9582 | 0.9631 | 0.9533 |
| 8 | [OpenMed-PII-ModernMed-Large-395M-v1](https://huggingface.co/openmed/OpenMed-PII-ModernMed-Large-395M-v1) | 0.9579 | 0.9639 | 0.9520 |
| 9 | [OpenMed-PII-BioClinicalModern-Large-395M-v1](https://huggingface.co/openmed/OpenMed-PII-BioClinicalModern-Large-395M-v1) | 0.9579 | 0.9656 | 0.9502 |
| 10 | [OpenMed-PII-ClinicalE5-Large-335M-v1](https://huggingface.co/openmed/OpenMed-PII-ClinicalE5-Large-335M-v1) | 0.9577 | 0.9604 | 0.9550 |
99
+ ### Best Performing Entities
100
+
101
+ | Entity | F1 | Precision | Recall | Support |
102
+ |:---|:---:|:---:|:---:|:---:|
103
+ | `email` | 0.997 | 0.997 | 0.997 | 761 |
104
+ | `credit_debit_card` | 0.995 | 0.991 | 1.000 | 217 |
105
+ | `medical_record_number` | 0.994 | 0.989 | 1.000 | 265 |
106
+ | `biometric_identifier` | 0.994 | 0.987 | 1.000 | 234 |
107
+ | `mac_address` | 0.994 | 0.987 | 1.000 | 77 |
108
+
109
+ ### Challenging Entities
110
+
111
+ These entity types have lower performance and may benefit from additional post-processing:
112
+
113
+ | Entity | F1 | Precision | Recall | Support |
114
+ |:---|:---:|:---:|:---:|:---:|
115
+ | `pin` | 0.881 | 0.894 | 0.868 | 136 |
116
+ | `time` | 0.855 | 0.867 | 0.843 | 471 |
117
+ | `sexuality` | 0.822 | 0.763 | 0.892 | 83 |
118
+ | `gender` | 0.797 | 0.743 | 0.859 | 192 |
119
+ | `occupation` | 0.652 | 0.695 | 0.613 | 726 |
120
+
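One pragmatic form of post-processing for these weaker categories is to require a higher confidence score before accepting them. A minimal sketch over pipeline-style output; the threshold values here are illustrative placeholders, not tuned on the test set:

```python
# Per-entity confidence thresholds; unlisted entities use the default.
# These values are illustrative, not tuned.
THRESHOLDS = {"occupation": 0.85, "gender": 0.80, "time": 0.75}
DEFAULT_THRESHOLD = 0.50

def filter_entities(entities):
    """Keep pipeline predictions whose score clears the per-type threshold."""
    return [
        ent for ent in entities
        if ent["score"] >= THRESHOLDS.get(ent["entity_group"], DEFAULT_THRESHOLD)
    ]

preds = [
    {"entity_group": "email", "word": "a@b.org", "score": 0.99},
    {"entity_group": "occupation", "word": "nurse", "score": 0.70},
]
print(filter_entities(preds))  # the low-confidence occupation span is dropped
```

Raising a threshold trades recall for precision, so for de-identification (where false negatives are costly) thresholds should be raised cautiously.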
## Supported Entity Types

This model detects **54 PII entity types**, organized into categories:

<details>
<summary><strong>Identifiers</strong> (16 types)</summary>

| Entity | Description |
|:---|:---|
| `account_number` | Account Number |
| `api_key` | API Key |
| `bank_routing_number` | Bank Routing Number |
| `certificate_license_number` | Certificate/License Number |
| `credit_debit_card` | Credit/Debit Card |
| `cvv` | CVV |
| `employee_id` | Employee ID |
| `health_plan_beneficiary_number` | Health Plan Beneficiary Number |
| `mac_address` | MAC Address |
| `medical_record_number` | Medical Record Number |
| ... | *and 6 more* |

</details>

<details>
<summary><strong>Personal Info</strong> (14 types)</summary>

| Entity | Description |
|:---|:---|
| `age` | Age |
| `biometric_identifier` | Biometric Identifier |
| `blood_type` | Blood Type |
| `date_of_birth` | Date of Birth |
| `education_level` | Education Level |
| `first_name` | First Name |
| `last_name` | Last Name |
| `gender` | Gender |
| `language` | Language |
| `occupation` | Occupation |
| ... | *and 4 more* |

</details>

<details>
<summary><strong>Contact Info</strong> (4 types)</summary>

| Entity | Description |
|:---|:---|
| `email` | Email |
| `phone_number` | Phone Number |
| `fax_number` | Fax Number |
| `url` | URL |

</details>

<details>
<summary><strong>Location</strong> (6 types)</summary>

| Entity | Description |
|:---|:---|
| `city` | City |
| `coordinate` | Coordinate |
| `country` | Country |
| `county` | County |
| `state` | State |
| `street_address` | Street Address |

</details>

<details>
<summary><strong>Network Info</strong> (3 types)</summary>

| Entity | Description |
|:---|:---|
| `device_identifier` | Device Identifier |
| `ipv4` | IPv4 Address |
| `ipv6` | IPv6 Address |

</details>

<details>
<summary><strong>Temporal</strong> (3 types)</summary>

| Entity | Description |
|:---|:---|
| `date` | Date |
| `date_time` | Date and Time |
| `time` | Time |

</details>

<details>
<summary><strong>Organization</strong> (1 type)</summary>

| Entity | Description |
|:---|:---|
| `company_name` | Company Name |

</details>

## Usage

### Quick Start

```python
from transformers import pipeline

# Load the PII detection pipeline
ner = pipeline("ner", model="openmed/OpenMed-PII-FastClinical-Base-82M-v1", aggregation_strategy="simple")

text = """
Patient John Smith (DOB: 03/15/1985, SSN: 123-45-6789) was seen today.
Contact: john.smith@email.com, Phone: (555) 123-4567.
Address: 456 Oak Street, Boston, MA 02108.
"""

entities = ner(text)
for entity in entities:
    print(f"{entity['entity_group']}: {entity['word']} (score: {entity['score']:.3f})")
```

### De-identification Example

```python
def redact_pii(text, entities):
    """Replace detected PII spans with [ENTITY_TYPE] placeholders."""
    # Process entities from the end of the string backwards so that
    # earlier character offsets remain valid after each replacement
    sorted_entities = sorted(entities, key=lambda x: x['start'], reverse=True)
    redacted = text
    for ent in sorted_entities:
        redacted = redacted[:ent['start']] + f"[{ent['entity_group']}]" + redacted[ent['end']:]
    return redacted

# Apply de-identification to the Quick Start example
redacted_text = redact_pii(text, entities)
print(redacted_text)
```

### Batch Processing

```python
from transformers import AutoModelForTokenClassification, AutoTokenizer
import torch

model_name = "openmed/OpenMed-PII-FastClinical-Base-82M-v1"
model = AutoModelForTokenClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

texts = [
    "Contact Dr. Jane Doe at jane.doe@hospital.org",
    "Patient SSN: 987-65-4321, MRN: 12345678",
]

inputs = tokenizer(texts, return_tensors='pt', padding=True, truncation=True)
with torch.no_grad():
    outputs = model(**inputs)
predictions = torch.argmax(outputs.logits, dim=-1)

# Map predicted label ids back to their BIO tag strings
labels = [[model.config.id2label[p.item()] for p in row] for row in predictions]
```
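To turn per-token predictions into character-level spans (e.g. for redaction), the BIO tags can be combined with the tokenizer's offset mapping. The grouping step is sketched below over plain lists so it is independent of the model call; in practice `offsets` would come from calling the tokenizer with `return_offsets_mapping=True` (available on fast tokenizers):

```python
def group_bio_spans(text, offsets, labels):
    """Merge BIO-tagged tokens into (entity_type, start, end, surface) spans.

    offsets: per-token (start, end) character offsets; (0, 0) marks special tokens.
    labels:  per-token BIO tags, e.g. "B-ssn", "I-ssn", "O".
    """
    spans, current = [], None
    for (start, end), label in zip(offsets, labels):
        if (start, end) == (0, 0) or label == "O":  # special token or outside
            if current:
                spans.append(current)
                current = None
            continue
        prefix, ent_type = label.split("-", 1)
        if prefix == "B" or current is None or current[0] != ent_type:
            if current:                              # a new entity begins
                spans.append(current)
            current = [ent_type, start, end]
        else:                                        # I- tag extends the entity
            current[2] = end
    if current:
        spans.append(current)
    return [(t, s, e, text[s:e]) for t, s, e in spans]

text = "SSN: 987-65-4321"
offsets = [(0, 0), (0, 3), (3, 4), (5, 16), (0, 0)]  # <s>, "SSN", ":", number, </s>
labels = ["O", "O", "O", "B-ssn", "O"]
print(group_bio_spans(text, offsets, labels))  # [('ssn', 5, 16, '987-65-4321')]
```

This mirrors what `aggregation_strategy="simple"` does inside the pipeline, but gives you explicit character offsets to feed a redaction function.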

## Training Details

### Dataset

- **Source**: [NVIDIA Nemotron-PII](https://huggingface.co/datasets/nvidia/Nemotron-PII)
- **Format**: BIO-tagged token classification
- **Labels**: 106 total (54 `B-` tags, 51 `I-` tags, and `O`; three single-token entity types have no `I-` tag)
- **Splits**: 50K train / 5K validation / 45K test

### Training Configuration

- **Max Sequence Length**: 384 tokens
- **Label Strategy**: First token only (`label_all_tokens=False`)
- **Framework**: Hugging Face Transformers + Trainer API

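The "first token only" strategy means that when a word is split into several sub-tokens, only the first sub-token carries the word's label and the rest are masked with `-100` (the index ignored by the loss). A minimal sketch of that alignment, assuming `word_ids` in the format returned by a fast tokenizer's `word_ids()` method (`None` for special tokens):

```python
IGNORE_INDEX = -100  # positions with this label are skipped by the loss

def align_labels_first_token(word_ids, word_labels):
    """Assign each word's label to its first sub-token; mask the rest with -100."""
    aligned, previous = [], None
    for word_id in word_ids:
        if word_id is None:            # special token (<s>, </s>, padding)
            aligned.append(IGNORE_INDEX)
        elif word_id != previous:      # first sub-token of a new word
            aligned.append(word_labels[word_id])
        else:                          # continuation sub-token
            aligned.append(IGNORE_INDEX)
        previous = word_id
    return aligned

# Example: "John Smith" tokenized as ["<s>", "John", "Sm", "ith", "</s>"],
# with word labels 25 (B-first_name) and 32 (B-last_name) from the config
print(align_labels_first_token([None, 0, 1, 1, None], [25, 32]))
# [-100, 25, 32, -100, -100]
```
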
## Intended Use & Limitations

### Intended Use

- **De-identification**: Automated redaction of PII in clinical notes, medical records, and documents
- **Compliance**: Supporting HIPAA, GDPR, and other privacy-regulation compliance
- **Data Preprocessing**: Preparing datasets for research by removing sensitive information
- **Audit Support**: Identifying PII in document collections

### Limitations

⚠️ **Important**: This model is intended as an **assistive tool**, not a replacement for human review.

- **False Negatives**: Some PII may go undetected; always include human verification in critical applications
- **Context Sensitivity**: Performance may vary with domain-specific terminology
- **Challenging Categories**: `occupation`, `time`, and `sexuality` have lower F1 scores
- **Language**: Primarily trained on English text

## Citation

```bibtex
@misc{openmed-pii-2026,
  title = {OpenMed-PII-FastClinical-Base-82M-v1: PII Detection Model},
  author = {OpenMed Science},
  year = {2026},
  publisher = {Hugging Face},
  url = {https://huggingface.co/openmed/OpenMed-PII-FastClinical-Base-82M-v1}
}
```

## Links

- **Organization**: [OpenMed](https://huggingface.co/OpenMed)
all_results.json ADDED

{
  "epoch": 3.0,
  "eval_accuracy": 0.9944782542656068,
  "eval_f1": 0.9575298969573839,
  "eval_loss": 0.02151305042207241,
  "eval_precision": 0.9602049530315969,
  "eval_recall": 0.954869704469355,
  "eval_runtime": 11.4312,
  "eval_samples_per_second": 437.399,
  "eval_steps_per_second": 6.911,
  "test_accuracy": 0.9945807710808351,
  "test_f1": 0.9582737491312024,
  "test_loss": 0.020685501396656036,
  "test_precision": 0.9599993597336492,
  "test_recall": 0.9565543310100639,
  "test_runtime": 167.8463,
  "test_samples_per_second": 268.102,
  "test_steps_per_second": 4.194,
  "total_flos": 9196425130573824.0,
  "train_loss": 0.10089913170127682,
  "train_runtime": 528.0645,
  "train_samples_per_second": 284.056,
  "train_steps_per_second": 8.88
}
config.json ADDED

{
  "architectures": [
    "RobertaForTokenClassification"
  ],
  "attention_probs_dropout_prob": 0.1,
  "bos_token_id": 0,
  "classifier_dropout": null,
  "dtype": "float32",
  "eos_token_id": 2,
  "hidden_act": "gelu",
  "hidden_dropout_prob": 0.1,
  "hidden_size": 768,
  "id2label": {
    "0": "O",
    "1": "B-account_number",
    "2": "B-age",
    "3": "B-api_key",
    "4": "B-bank_routing_number",
    "5": "B-biometric_identifier",
    "6": "B-blood_type",
    "7": "B-certificate_license_number",
    "8": "B-city",
    "9": "B-company_name",
    "10": "B-coordinate",
    "11": "B-country",
    "12": "B-county",
    "13": "B-credit_debit_card",
    "14": "B-customer_id",
    "15": "B-cvv",
    "16": "B-date",
    "17": "B-date_of_birth",
    "18": "B-date_time",
    "19": "B-device_identifier",
    "20": "B-education_level",
    "21": "B-email",
    "22": "B-employee_id",
    "23": "B-employment_status",
    "24": "B-fax_number",
    "25": "B-first_name",
    "26": "B-gender",
    "27": "B-health_plan_beneficiary_number",
    "28": "B-http_cookie",
    "29": "B-ipv4",
    "30": "B-ipv6",
    "31": "B-language",
    "32": "B-last_name",
    "33": "B-license_plate",
    "34": "B-mac_address",
    "35": "B-medical_record_number",
    "36": "B-occupation",
    "37": "B-password",
    "38": "B-phone_number",
    "39": "B-pin",
    "40": "B-political_view",
    "41": "B-postcode",
    "42": "B-race_ethnicity",
    "43": "B-religious_belief",
    "44": "B-sexuality",
    "45": "B-ssn",
    "46": "B-state",
    "47": "B-street_address",
    "48": "B-swift_bic",
    "49": "B-tax_id",
    "50": "B-time",
    "51": "B-unique_id",
    "52": "B-url",
    "53": "B-user_name",
    "54": "B-vehicle_identifier",
    "55": "I-account_number",
    "56": "I-api_key",
    "57": "I-biometric_identifier",
    "58": "I-blood_type",
    "59": "I-certificate_license_number",
    "60": "I-city",
    "61": "I-company_name",
    "62": "I-coordinate",
    "63": "I-country",
    "64": "I-county",
    "65": "I-credit_debit_card",
    "66": "I-customer_id",
    "67": "I-date",
    "68": "I-date_of_birth",
    "69": "I-date_time",
    "70": "I-device_identifier",
    "71": "I-education_level",
    "72": "I-email",
    "73": "I-employee_id",
    "74": "I-employment_status",
    "75": "I-fax_number",
    "76": "I-first_name",
    "77": "I-gender",
    "78": "I-health_plan_beneficiary_number",
    "79": "I-http_cookie",
    "80": "I-ipv4",
    "81": "I-ipv6",
    "82": "I-language",
    "83": "I-last_name",
    "84": "I-license_plate",
    "85": "I-mac_address",
    "86": "I-medical_record_number",
    "87": "I-occupation",
    "88": "I-password",
    "89": "I-phone_number",
    "90": "I-pin",
    "91": "I-political_view",
    "92": "I-postcode",
    "93": "I-race_ethnicity",
    "94": "I-religious_belief",
    "95": "I-sexuality",
    "96": "I-ssn",
    "97": "I-state",
    "98": "I-street_address",
    "99": "I-swift_bic",
    "100": "I-tax_id",
    "101": "I-time",
    "102": "I-unique_id",
    "103": "I-url",
    "104": "I-user_name",
    "105": "I-vehicle_identifier"
  },
  "initializer_range": 0.02,
  "intermediate_size": 3072,
  "label2id": {
    "B-account_number": 1,
    "B-age": 2,
    "B-api_key": 3,
    "B-bank_routing_number": 4,
    "B-biometric_identifier": 5,
    "B-blood_type": 6,
    "B-certificate_license_number": 7,
    "B-city": 8,
    "B-company_name": 9,
    "B-coordinate": 10,
    "B-country": 11,
    "B-county": 12,
    "B-credit_debit_card": 13,
    "B-customer_id": 14,
    "B-cvv": 15,
    "B-date": 16,
    "B-date_of_birth": 17,
    "B-date_time": 18,
    "B-device_identifier": 19,
    "B-education_level": 20,
    "B-email": 21,
    "B-employee_id": 22,
    "B-employment_status": 23,
    "B-fax_number": 24,
    "B-first_name": 25,
    "B-gender": 26,
    "B-health_plan_beneficiary_number": 27,
    "B-http_cookie": 28,
    "B-ipv4": 29,
    "B-ipv6": 30,
    "B-language": 31,
    "B-last_name": 32,
    "B-license_plate": 33,
    "B-mac_address": 34,
    "B-medical_record_number": 35,
    "B-occupation": 36,
    "B-password": 37,
    "B-phone_number": 38,
    "B-pin": 39,
    "B-political_view": 40,
    "B-postcode": 41,
    "B-race_ethnicity": 42,
    "B-religious_belief": 43,
    "B-sexuality": 44,
    "B-ssn": 45,
    "B-state": 46,
    "B-street_address": 47,
    "B-swift_bic": 48,
    "B-tax_id": 49,
    "B-time": 50,
    "B-unique_id": 51,
    "B-url": 52,
    "B-user_name": 53,
    "B-vehicle_identifier": 54,
    "I-account_number": 55,
    "I-api_key": 56,
    "I-biometric_identifier": 57,
    "I-blood_type": 58,
    "I-certificate_license_number": 59,
    "I-city": 60,
    "I-company_name": 61,
    "I-coordinate": 62,
    "I-country": 63,
    "I-county": 64,
    "I-credit_debit_card": 65,
    "I-customer_id": 66,
    "I-date": 67,
    "I-date_of_birth": 68,
    "I-date_time": 69,
    "I-device_identifier": 70,
    "I-education_level": 71,
    "I-email": 72,
    "I-employee_id": 73,
    "I-employment_status": 74,
    "I-fax_number": 75,
    "I-first_name": 76,
    "I-gender": 77,
    "I-health_plan_beneficiary_number": 78,
    "I-http_cookie": 79,
    "I-ipv4": 80,
    "I-ipv6": 81,
    "I-language": 82,
    "I-last_name": 83,
    "I-license_plate": 84,
    "I-mac_address": 85,
    "I-medical_record_number": 86,
    "I-occupation": 87,
    "I-password": 88,
    "I-phone_number": 89,
    "I-pin": 90,
    "I-political_view": 91,
    "I-postcode": 92,
    "I-race_ethnicity": 93,
    "I-religious_belief": 94,
    "I-sexuality": 95,
    "I-ssn": 96,
    "I-state": 97,
    "I-street_address": 98,
    "I-swift_bic": 99,
    "I-tax_id": 100,
    "I-time": 101,
    "I-unique_id": 102,
    "I-url": 103,
    "I-user_name": 104,
    "I-vehicle_identifier": 105,
    "O": 0
  },
  "layer_norm_eps": 1e-05,
  "max_position_embeddings": 514,
  "model_type": "roberta",
  "num_attention_heads": 12,
  "num_hidden_layers": 6,
  "pad_token_id": 1,
  "position_embedding_type": "absolute",
  "transformers_version": "4.57.1",
  "type_vocab_size": 1,
  "use_cache": true,
  "vocab_size": 50265
}
eval_results.json ADDED

{
  "epoch": 3.0,
  "eval_accuracy": 0.9944782542656068,
  "eval_f1": 0.9575298969573839,
  "eval_loss": 0.02151305042207241,
  "eval_precision": 0.9602049530315969,
  "eval_recall": 0.954869704469355,
  "eval_runtime": 11.4312,
  "eval_samples_per_second": 437.399,
  "eval_steps_per_second": 6.911
}
merges.txt ADDED
The diff for this file is too large to render. See raw diff
 
model.safetensors ADDED

version https://git-lfs.github.com/spec/v1
oid sha256:4b12f3cda51aa994abff075d107efd06a599f70c653abd06199484d42efc7791
size 326449608
special_tokens_map.json ADDED

{
  "bos_token": "<s>",
  "cls_token": "<s>",
  "eos_token": "</s>",
  "mask_token": {
    "content": "<mask>",
    "lstrip": true,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "pad_token": "<pad>",
  "sep_token": "</s>",
  "unk_token": "<unk>"
}
test_results.json ADDED

{
  "test_accuracy": 0.9945807710808351,
  "test_f1": 0.9582737491312024,
  "test_loss": 0.020685501396656036,
  "test_precision": 0.9599993597336492,
  "test_recall": 0.9565543310100639,
  "test_runtime": 167.8463,
  "test_samples_per_second": 268.102,
  "test_steps_per_second": 4.194
}
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED

{
  "add_prefix_space": true,
  "added_tokens_decoder": {
    "0": {
      "content": "<s>",
      "lstrip": false,
      "normalized": true,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "1": {
      "content": "<pad>",
      "lstrip": false,
      "normalized": true,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "2": {
      "content": "</s>",
      "lstrip": false,
      "normalized": true,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "3": {
      "content": "<unk>",
      "lstrip": false,
      "normalized": true,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "50264": {
      "content": "<mask>",
      "lstrip": true,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    }
  },
  "bos_token": "<s>",
  "clean_up_tokenization_spaces": false,
  "cls_token": "<s>",
  "eos_token": "</s>",
  "errors": "replace",
  "extra_special_tokens": {},
  "mask_token": "<mask>",
  "model_max_length": 512,
  "pad_token": "<pad>",
  "sep_token": "</s>",
  "tokenizer_class": "RobertaTokenizer",
  "trim_offsets": true,
  "unk_token": "<unk>"
}
train_results.json ADDED

{
  "epoch": 3.0,
  "total_flos": 9196425130573824.0,
  "train_loss": 0.10089913170127682,
  "train_runtime": 528.0645,
  "train_samples_per_second": 284.056,
  "train_steps_per_second": 8.88
}
vocab.json ADDED
The diff for this file is too large to render. See raw diff