MaziyarPanahi committed on
Commit b324e1e · verified · 1 Parent(s): d89cd91

Upload Dutch PII detection model OpenMed-PII-Dutch-BioClinicalModern-Base-149M-v1

README.md ADDED
@@ -0,0 +1,305 @@
---
language:
- nl
license: apache-2.0
base_model: thomas-sounack/BioClinical-ModernBERT-base
tags:
- token-classification
- ner
- pii
- pii-detection
- de-identification
- privacy
- healthcare
- medical
- clinical
- phi
- dutch
- pytorch
- transformers
- openmed
pipeline_tag: token-classification
library_name: transformers
metrics:
- f1
- precision
- recall
model-index:
- name: OpenMed-PII-Dutch-BioClinicalModern-Base-149M-v1
  results:
  - task:
      type: token-classification
      name: Named Entity Recognition
    dataset:
      name: AI4Privacy (Dutch subset)
      type: ai4privacy/pii-masking-400k
      split: test
    metrics:
    - type: f1
      value: 0.8531
      name: F1 (micro)
    - type: precision
      value: 0.8602
      name: Precision
    - type: recall
      value: 0.8462
      name: Recall
widget:
- text: "Dr. Jan de Vries (BSN: 123456789) is bereikbaar via jan.devries@ziekenhuis.nl of +31 6 12345678. Hij woont op Keizersgracht 42, 1015 CS Amsterdam."
  example_title: Clinical Note with PII (Dutch)
---

# OpenMed-PII-Dutch-BioClinicalModern-Base-149M-v1

**Dutch PII Detection Model** | 149M Parameters | Open Source

[![F1 Score](https://img.shields.io/badge/F1-85.31%25-brightgreen)]() [![Precision](https://img.shields.io/badge/Precision-86.02%25-blue)]() [![Recall](https://img.shields.io/badge/Recall-84.62%25-orange)]()

## Model Description

**OpenMed-PII-Dutch-BioClinicalModern-Base-149M-v1** is a transformer-based token classification model fine-tuned for **Personally Identifiable Information (PII) detection in Dutch text**. It identifies and classifies **54 types of sensitive information**, including names, addresses, social security numbers, bank and credit card details, and more.

### Key Features

- **Dutch-Optimized**: Trained specifically on Dutch text for optimal performance
- **High Accuracy**: Achieves strong F1 scores across diverse PII categories
- **Comprehensive Coverage**: Detects 54 entity types spanning personal, financial, medical, and contact information
- **Privacy-Focused**: Designed for de-identification and for compliance with the GDPR and other privacy regulations
- **Production-Ready**: Optimized for real-world text-processing pipelines

## Performance

Evaluated on the test split of the Dutch subset of the AI4Privacy dataset:

| Metric | Score |
|:---|:---:|
| **Micro F1** | **0.8531** |
| Precision | 0.8602 |
| Recall | 0.8462 |
| Macro F1 | 0.8395 |
| Weighted F1 | 0.8517 |
| Accuracy | 0.9869 |

### Top 10 Dutch PII Models

| Rank | Model | F1 | Precision | Recall |
|:---:|:---|:---:|:---:|:---:|
| 1 | [OpenMed-PII-Dutch-SuperClinical-Large-434M-v1](https://huggingface.co/OpenMed/OpenMed-PII-Dutch-SuperClinical-Large-434M-v1) | 0.9419 | 0.9390 | 0.9448 |
| 2 | [OpenMed-PII-Dutch-BigMed-Large-560M-v1](https://huggingface.co/OpenMed/OpenMed-PII-Dutch-BigMed-Large-560M-v1) | 0.9336 | 0.9336 | 0.9336 |
| 3 | [OpenMed-PII-Dutch-SnowflakeMed-Large-568M-v1](https://huggingface.co/OpenMed/OpenMed-PII-Dutch-SnowflakeMed-Large-568M-v1) | 0.9243 | 0.9206 | 0.9280 |
| 4 | [OpenMed-PII-Dutch-ClinicalBGE-568M-v1](https://huggingface.co/OpenMed/OpenMed-PII-Dutch-ClinicalBGE-568M-v1) | 0.9235 | 0.9210 | 0.9259 |
| 5 | [OpenMed-PII-Dutch-mSuperClinical-Base-279M-v1](https://huggingface.co/OpenMed/OpenMed-PII-Dutch-mSuperClinical-Base-279M-v1) | 0.9204 | 0.9095 | 0.9315 |
| 6 | [OpenMed-PII-Dutch-mClinicalE5-Large-560M-v1](https://huggingface.co/OpenMed/OpenMed-PII-Dutch-mClinicalE5-Large-560M-v1) | 0.9201 | 0.9111 | 0.9292 |
| 7 | [OpenMed-PII-Dutch-SuperMedical-Large-355M-v1](https://huggingface.co/OpenMed/OpenMed-PII-Dutch-SuperMedical-Large-355M-v1) | 0.9189 | 0.9149 | 0.9230 |
| 8 | [OpenMed-PII-Dutch-NomicMed-Large-395M-v1](https://huggingface.co/OpenMed/OpenMed-PII-Dutch-NomicMed-Large-395M-v1) | 0.9181 | 0.9212 | 0.9150 |
| 9 | [OpenMed-PII-Dutch-EuroMed-210M-v1](https://huggingface.co/OpenMed/OpenMed-PII-Dutch-EuroMed-210M-v1) | 0.9143 | 0.9171 | 0.9115 |
| 10 | [OpenMed-PII-Dutch-BioClinicalModern-Large-395M-v1](https://huggingface.co/OpenMed/OpenMed-PII-Dutch-BioClinicalModern-Large-395M-v1) | 0.9073 | 0.9161 | 0.8988 |

## Supported Entity Types

This model detects **54 PII entity types**, organized into categories:

<details>
<summary><strong>Identifiers</strong> (22 types)</summary>

| Entity | Description |
|:---|:---|
| `ACCOUNTNAME` | Account name |
| `BANKACCOUNT` | Bank account number |
| `BIC` | BIC (bank identifier code) |
| `BITCOINADDRESS` | Bitcoin address |
| `CREDITCARD` | Credit card number |
| `CREDITCARDISSUER` | Credit card issuer |
| `CVV` | Card verification value (CVV) |
| `ETHEREUMADDRESS` | Ethereum address |
| `IBAN` | IBAN |
| `IMEI` | IMEI number |
| ... | *and 12 more* |

</details>

<details>
<summary><strong>Personal Info</strong> (11 types)</summary>

| Entity | Description |
|:---|:---|
| `AGE` | Age |
| `DATEOFBIRTH` | Date of birth |
| `EYECOLOR` | Eye color |
| `FIRSTNAME` | First name |
| `GENDER` | Gender |
| `HEIGHT` | Height |
| `LASTNAME` | Last name |
| `MIDDLENAME` | Middle name |
| `OCCUPATION` | Occupation |
| `PREFIX` | Name prefix (e.g. Dr.) |
| ... | *and 1 more* |

</details>

<details>
<summary><strong>Contact Info</strong> (2 types)</summary>

| Entity | Description |
|:---|:---|
| `EMAIL` | Email address |
| `PHONE` | Phone number |

</details>

<details>
<summary><strong>Location</strong> (9 types)</summary>

| Entity | Description |
|:---|:---|
| `BUILDINGNUMBER` | Building number |
| `CITY` | City |
| `COUNTY` | County |
| `GPSCOORDINATES` | GPS coordinates |
| `ORDINALDIRECTION` | Ordinal direction (e.g. northeast) |
| `SECONDARYADDRESS` | Secondary address (apartment, suite) |
| `STATE` | State or province |
| `STREET` | Street name |
| `ZIPCODE` | Postal code |

</details>

<details>
<summary><strong>Organization</strong> (3 types)</summary>

| Entity | Description |
|:---|:---|
| `JOBDEPARTMENT` | Job department |
| `JOBTITLE` | Job title |
| `ORGANIZATION` | Organization name |

</details>

<details>
<summary><strong>Financial</strong> (5 types)</summary>

| Entity | Description |
|:---|:---|
| `AMOUNT` | Monetary amount |
| `CURRENCY` | Currency |
| `CURRENCYCODE` | Currency code (e.g. EUR) |
| `CURRENCYNAME` | Currency name |
| `CURRENCYSYMBOL` | Currency symbol |

</details>

<details>
<summary><strong>Temporal</strong> (2 types)</summary>

| Entity | Description |
|:---|:---|
| `DATE` | Date |
| `TIME` | Time |

</details>

## Usage

### Quick Start

```python
from transformers import pipeline

# Load the PII detection pipeline (word-level aggregation of subword predictions)
ner = pipeline(
    "ner",
    model="OpenMed/OpenMed-PII-Dutch-BioClinicalModern-Base-149M-v1",
    aggregation_strategy="simple",
)

text = """
Patiënt Jan Jansen (geboren 15-03-1985, BSN: 987654321) is vandaag gezien.
Contact: jan.jansen@email.nl, Telefoon: +31 6 12345678.
Adres: Herengracht 42, 1015 BN Amsterdam.
"""

entities = ner(text)
for entity in entities:
    print(f"{entity['entity_group']}: {entity['word']} (score: {entity['score']:.3f})")
```

### De-identification Example

```python
def redact_pii(text, entities):
    """Replace detected PII spans with their entity-type labels."""
    # Sort entities by start position (descending) so earlier offsets stay valid
    sorted_entities = sorted(entities, key=lambda x: x['start'], reverse=True)
    redacted = text
    for ent in sorted_entities:
        redacted = redacted[:ent['start']] + f"[{ent['entity_group']}]" + redacted[ent['end']:]
    return redacted

# Apply de-identification to the text and entities from the Quick Start example
redacted_text = redact_pii(text, entities)
print(redacted_text)
```
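If downstream readers need to tell entities apart (e.g. two different patients), numbered placeholders work better than bare labels. The `pseudonymize_pii` helper below is an illustrative sketch along the same lines as `redact_pii`, not part of the model or library:

```python
from collections import defaultdict

def pseudonymize_pii(text, entities):
    """Replace each distinct PII value with a numbered tag such as [FIRSTNAME_1],
    reusing the same tag whenever the same value recurs."""
    counters = defaultdict(int)  # next index per entity type
    assigned = {}                # (type, surface form) -> tag
    # First pass (reading order): assign a stable tag per distinct value
    for ent in sorted(entities, key=lambda x: x['start']):
        key = (ent['entity_group'], text[ent['start']:ent['end']])
        if key not in assigned:
            counters[ent['entity_group']] += 1
            assigned[key] = f"[{ent['entity_group']}_{counters[ent['entity_group']]}]"
    # Second pass (reverse order): substitute without invalidating offsets
    out = text
    for ent in sorted(entities, key=lambda x: x['start'], reverse=True):
        key = (ent['entity_group'], text[ent['start']:ent['end']])
        out = out[:ent['start']] + assigned[key] + out[ent['end']:]
    return out

sample = "Jan belde Piet. Jan kwam later."
ents = [
    {'entity_group': 'FIRSTNAME', 'start': 0, 'end': 3},
    {'entity_group': 'FIRSTNAME', 'start': 10, 'end': 14},
    {'entity_group': 'FIRSTNAME', 'start': 16, 'end': 19},
]
print(pseudonymize_pii(sample, ents))
# [FIRSTNAME_1] belde [FIRSTNAME_2]. [FIRSTNAME_1] kwam later.
```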

### Batch Processing

```python
from transformers import AutoModelForTokenClassification, AutoTokenizer
import torch

model_name = "OpenMed/OpenMed-PII-Dutch-BioClinicalModern-Base-149M-v1"
model = AutoModelForTokenClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

texts = [
    "Patiënt Jan Jansen (geboren 15-03-1985, BSN: 987654321) is vandaag gezien.",
    "Contact: jan.jansen@email.nl, Telefoon: +31 6 12345678.",
]

inputs = tokenizer(texts, return_tensors='pt', padding=True, truncation=True)
with torch.no_grad():
    outputs = model(**inputs)
predictions = torch.argmax(outputs.logits, dim=-1)

# Map predicted label ids back to BIO tag strings
labels = [[model.config.id2label[p.item()] for p in seq] for seq in predictions]
```
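The raw logits give one prediction per subword token, including special and padding tokens. A common cleanup keeps each word's first-subword tag via the fast tokenizer's `word_ids()`. The helper below (`first_subword_tags` is a hypothetical name, shown with a hand-made `word_ids` list rather than real tokenizer output):

```python
def first_subword_tags(word_ids, tag_seq):
    """Keep one BIO tag per word: the tag of each word's first subword.
    word_ids: a fast tokenizer's BatchEncoding.word_ids(), with None for
    special/padding tokens; tag_seq: per-token BIO tag strings."""
    tags, seen = [], set()
    for wid, tag in zip(word_ids, tag_seq):
        if wid is None or wid in seen:
            continue  # skip [CLS]/[SEP]/padding and later subwords
        seen.add(wid)
        tags.append(tag)
    return tags

# Toy example: "[CLS] Jan Jans ##en [SEP]" -> words "Jan", "Jansen"
word_ids = [None, 0, 1, 1, None]
tag_seq = ["O", "B-FIRSTNAME", "B-LASTNAME", "I-LASTNAME", "O"]
print(first_subword_tags(word_ids, tag_seq))  # ['B-FIRSTNAME', 'B-LASTNAME']
```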

## Training Details

### Dataset

- **Source**: [AI4Privacy PII Masking 400k](https://huggingface.co/datasets/ai4privacy/pii-masking-400k) (Dutch subset)
- **Format**: BIO-tagged token classification
- **Labels**: 76 total: 54 `B-` tags, 21 `I-` tags, and `O` (entity types that never span multiple tokens have no `I-` tag; see `config.json`)
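To make the BIO format concrete, the sketch below groups BIO-tagged tokens into entity spans (`bio_to_spans` is an illustrative helper, not part of the training code):

```python
def bio_to_spans(tokens, tags):
    """Group BIO-tagged tokens into (entity_type, text) spans."""
    spans, cur_type, cur_toks = [], None, []
    for tok, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            if cur_type:  # close any open span
                spans.append((cur_type, " ".join(cur_toks)))
            cur_type, cur_toks = tag[2:], [tok]
        elif tag.startswith("I-") and cur_type == tag[2:]:
            cur_toks.append(tok)  # continue the open span
        else:
            if cur_type:
                spans.append((cur_type, " ".join(cur_toks)))
            cur_type, cur_toks = None, []
    if cur_type:
        spans.append((cur_type, " ".join(cur_toks)))
    return spans

tokens = ["Jan", "de", "Vries", "woont", "in", "Amsterdam"]
tags = ["B-FIRSTNAME", "B-LASTNAME", "I-LASTNAME", "O", "O", "B-CITY"]
print(bio_to_spans(tokens, tags))
# [('FIRSTNAME', 'Jan'), ('LASTNAME', 'de Vries'), ('CITY', 'Amsterdam')]
```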

### Training Configuration

- **Max Sequence Length**: 512 tokens
- **Epochs**: 3
- **Framework**: Hugging Face Transformers + Trainer API

## Intended Use & Limitations

### Intended Use

- **De-identification**: Automated redaction of PII in Dutch clinical notes, medical records, and documents
- **Compliance**: Supporting compliance with the GDPR and other privacy regulations
- **Data Preprocessing**: Preparing datasets for research by removing sensitive information
- **Audit Support**: Identifying PII in document collections
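For audit-style use, a per-type count of detections is often all that is needed. A minimal sketch over pipeline output (`pii_summary` is a hypothetical helper; only the `entity_group` field is assumed):

```python
from collections import Counter

def pii_summary(entities):
    """Count detected PII entities by type (audit-style summary)."""
    return Counter(ent['entity_group'] for ent in entities)

ents = [
    {'entity_group': 'FIRSTNAME'}, {'entity_group': 'LASTNAME'},
    {'entity_group': 'EMAIL'}, {'entity_group': 'FIRSTNAME'},
]
print(pii_summary(ents))  # Counter({'FIRSTNAME': 2, 'LASTNAME': 1, 'EMAIL': 1})
```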

### Limitations

**Important**: This model is intended as an **assistive tool**, not a replacement for human review.

- **False Negatives**: Some PII may go undetected; always verify output in critical applications
- **Context Sensitivity**: Performance may vary with domain-specific terminology
- **Language**: Optimized for Dutch text; performance on other languages is not guaranteed
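Given these limitations, one pragmatic pattern is to auto-redact only confident detections and route the rest to a human reviewer. `triage_entities` and the 0.5 threshold below are illustrative assumptions to tune per application:

```python
def triage_entities(entities, threshold=0.5):
    """Split detections into auto-redact vs. human-review buckets by score."""
    auto = [e for e in entities if e['score'] >= threshold]
    review = [e for e in entities if e['score'] < threshold]
    return auto, review

ents = [
    {'entity_group': 'EMAIL', 'word': 'jan@x.nl', 'score': 0.99},
    {'entity_group': 'LASTNAME', 'word': 'Jansen', 'score': 0.41},
]
auto, review = triage_entities(ents)
print(len(auto), len(review))  # 1 1
```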

## Citation

```bibtex
@misc{openmed-pii-2026,
  title = {OpenMed-PII-Dutch-BioClinicalModern-Base-149M-v1: Dutch PII Detection Model},
  author = {OpenMed Science},
  year = {2026},
  publisher = {Hugging Face},
  url = {https://huggingface.co/OpenMed/OpenMed-PII-Dutch-BioClinicalModern-Base-149M-v1}
}
```

## Links

- **Organization**: [OpenMed](https://huggingface.co/OpenMed)
all_results.json ADDED
@@ -0,0 +1,28 @@
{
  "epoch": 3.0,
  "eval_accuracy": 0.9863342600281619,
  "eval_f1": 0.86317006212975,
  "eval_loss": 0.16574643552303314,
  "eval_macro_f1": 0.843794595271581,
  "eval_precision": 0.8762100322675271,
  "eval_recall": 0.8505125284738041,
  "eval_runtime": 2.3596,
  "eval_samples_per_second": 1318.883,
  "eval_steps_per_second": 20.766,
  "eval_weighted_f1": 0.861430607083511,
  "test_accuracy": 0.9868810909220719,
  "test_f1": 0.8531468531468532,
  "test_loss": 0.15482933819293976,
  "test_macro_f1": 0.8394856888609218,
  "test_precision": 0.8601860186018602,
  "test_recall": 0.8462219598583235,
  "test_runtime": 3.1081,
  "test_samples_per_second": 1001.269,
  "test_steps_per_second": 15.765,
  "test_weighted_f1": 0.8517472751090946,
  "total_flos": 3084843971772416.0,
  "train_loss": 1.2889738227567102,
  "train_runtime": 210.5158,
  "train_samples_per_second": 354.814,
  "train_steps_per_second": 5.558
}
classification_report.txt ADDED
@@ -0,0 +1,25 @@
Classification Report for Dutch PII Detection
Model: thomas-sounack/BioClinical-ModernBERT-base
============================================================

                precision    recall  f1-score   support

   BANKACCOUNT       0.78      0.64      0.70       136
BUILDINGNUMBER       0.77      0.77      0.77       111
          CITY       0.81      0.87      0.84       334
    CREDITCARD       0.80      0.81      0.80        86
   DATEOFBIRTH       0.73      0.71      0.72       138
         EMAIL       0.99      0.99      0.99       347
     FIRSTNAME       0.78      0.78      0.78       460
      LASTNAME       0.71      0.64      0.68       354
  MASKEDNUMBER       0.91      0.88      0.89        77
      PASSWORD       0.95      0.96      0.95        94
         PHONE       0.99      0.99      0.99       230
           SSN       0.95      1.00      0.97       338
        STREET       0.67      0.64      0.66       123
      USERNAME       0.96      0.92      0.94       367
       ZIPCODE       0.95      0.90      0.92       193

     micro avg       0.86      0.85      0.85      3388
     macro avg       0.85      0.83      0.84      3388
  weighted avg       0.86      0.85      0.85      3388
config.json ADDED
@@ -0,0 +1,234 @@
{
  "architectures": [
    "ModernBertForTokenClassification"
  ],
  "attention_bias": false,
  "attention_dropout": 0.0,
  "bos_token_id": null,
  "classifier_activation": "silu",
  "classifier_bias": false,
  "classifier_dropout": 0.0,
  "classifier_pooling": "mean",
  "cls_token_id": 50281,
  "decoder_bias": true,
  "deterministic_flash_attn": false,
  "dtype": "float32",
  "embedding_dropout": 0.0,
  "eos_token_id": null,
  "global_attn_every_n_layers": 3,
  "gradient_checkpointing": false,
  "hidden_activation": "gelu",
  "hidden_size": 768,
  "id2label": {
    "0": "O",
    "1": "B-ACCOUNTNAME",
    "2": "B-AGE",
    "3": "B-AMOUNT",
    "4": "B-BANKACCOUNT",
    "5": "B-BIC",
    "6": "B-BITCOINADDRESS",
    "7": "B-BUILDINGNUMBER",
    "8": "B-CITY",
    "9": "B-COUNTY",
    "10": "B-CREDITCARD",
    "11": "B-CREDITCARDISSUER",
    "12": "B-CURRENCY",
    "13": "B-CURRENCYCODE",
    "14": "B-CURRENCYNAME",
    "15": "B-CURRENCYSYMBOL",
    "16": "B-CVV",
    "17": "B-DATE",
    "18": "B-DATEOFBIRTH",
    "19": "B-EMAIL",
    "20": "B-ETHEREUMADDRESS",
    "21": "B-EYECOLOR",
    "22": "B-FIRSTNAME",
    "23": "B-GENDER",
    "24": "B-GPSCOORDINATES",
    "25": "B-HEIGHT",
    "26": "B-IBAN",
    "27": "B-IMEI",
    "28": "B-IPADDRESS",
    "29": "B-JOBDEPARTMENT",
    "30": "B-JOBTITLE",
    "31": "B-LASTNAME",
    "32": "B-LITECOINADDRESS",
    "33": "B-MACADDRESS",
    "34": "B-MASKEDNUMBER",
    "35": "B-MIDDLENAME",
    "36": "B-OCCUPATION",
    "37": "B-ORDINALDIRECTION",
    "38": "B-ORGANIZATION",
    "39": "B-PASSWORD",
    "40": "B-PHONE",
    "41": "B-PIN",
    "42": "B-PREFIX",
    "43": "B-SECONDARYADDRESS",
    "44": "B-SEX",
    "45": "B-SSN",
    "46": "B-STATE",
    "47": "B-STREET",
    "48": "B-TIME",
    "49": "B-URL",
    "50": "B-USERAGENT",
    "51": "B-USERNAME",
    "52": "B-VIN",
    "53": "B-VRM",
    "54": "B-ZIPCODE",
    "55": "I-ACCOUNTNAME",
    "56": "I-AGE",
    "57": "I-AMOUNT",
    "58": "I-CITY",
    "59": "I-COUNTY",
    "60": "I-CURRENCY",
    "61": "I-CURRENCYNAME",
    "62": "I-DATE",
    "63": "I-DATEOFBIRTH",
    "64": "I-EYECOLOR",
    "65": "I-GENDER",
    "66": "I-HEIGHT",
    "67": "I-JOBTITLE",
    "68": "I-ORGANIZATION",
    "69": "I-PHONE",
    "70": "I-SECONDARYADDRESS",
    "71": "I-SSN",
    "72": "I-STATE",
    "73": "I-STREET",
    "74": "I-TIME",
    "75": "I-USERAGENT"
  },
  "initializer_cutoff_factor": 2.0,
  "initializer_range": 0.02,
  "intermediate_size": 1152,
  "label2id": {
    "B-ACCOUNTNAME": 1,
    "B-AGE": 2,
    "B-AMOUNT": 3,
    "B-BANKACCOUNT": 4,
    "B-BIC": 5,
    "B-BITCOINADDRESS": 6,
    "B-BUILDINGNUMBER": 7,
    "B-CITY": 8,
    "B-COUNTY": 9,
    "B-CREDITCARD": 10,
    "B-CREDITCARDISSUER": 11,
    "B-CURRENCY": 12,
    "B-CURRENCYCODE": 13,
    "B-CURRENCYNAME": 14,
    "B-CURRENCYSYMBOL": 15,
    "B-CVV": 16,
    "B-DATE": 17,
    "B-DATEOFBIRTH": 18,
    "B-EMAIL": 19,
    "B-ETHEREUMADDRESS": 20,
    "B-EYECOLOR": 21,
    "B-FIRSTNAME": 22,
    "B-GENDER": 23,
    "B-GPSCOORDINATES": 24,
    "B-HEIGHT": 25,
    "B-IBAN": 26,
    "B-IMEI": 27,
    "B-IPADDRESS": 28,
    "B-JOBDEPARTMENT": 29,
    "B-JOBTITLE": 30,
    "B-LASTNAME": 31,
    "B-LITECOINADDRESS": 32,
    "B-MACADDRESS": 33,
    "B-MASKEDNUMBER": 34,
    "B-MIDDLENAME": 35,
    "B-OCCUPATION": 36,
    "B-ORDINALDIRECTION": 37,
    "B-ORGANIZATION": 38,
    "B-PASSWORD": 39,
    "B-PHONE": 40,
    "B-PIN": 41,
    "B-PREFIX": 42,
    "B-SECONDARYADDRESS": 43,
    "B-SEX": 44,
    "B-SSN": 45,
    "B-STATE": 46,
    "B-STREET": 47,
    "B-TIME": 48,
    "B-URL": 49,
    "B-USERAGENT": 50,
    "B-USERNAME": 51,
    "B-VIN": 52,
    "B-VRM": 53,
    "B-ZIPCODE": 54,
    "I-ACCOUNTNAME": 55,
    "I-AGE": 56,
    "I-AMOUNT": 57,
    "I-CITY": 58,
    "I-COUNTY": 59,
    "I-CURRENCY": 60,
    "I-CURRENCYNAME": 61,
    "I-DATE": 62,
    "I-DATEOFBIRTH": 63,
    "I-EYECOLOR": 64,
    "I-GENDER": 65,
    "I-HEIGHT": 66,
    "I-JOBTITLE": 67,
    "I-ORGANIZATION": 68,
    "I-PHONE": 69,
    "I-SECONDARYADDRESS": 70,
    "I-SSN": 71,
    "I-STATE": 72,
    "I-STREET": 73,
    "I-TIME": 74,
    "I-USERAGENT": 75,
    "O": 0
  },
  "layer_norm_eps": 1e-05,
  "layer_types": [
    "full_attention",
    "sliding_attention",
    "sliding_attention",
    "full_attention",
    "sliding_attention",
    "sliding_attention",
    "full_attention",
    "sliding_attention",
    "sliding_attention",
    "full_attention",
    "sliding_attention",
    "sliding_attention",
    "full_attention",
    "sliding_attention",
    "sliding_attention",
    "full_attention",
    "sliding_attention",
    "sliding_attention",
    "full_attention",
    "sliding_attention",
    "sliding_attention",
    "full_attention"
  ],
  "local_attention": 128,
  "max_position_embeddings": 8192,
  "mlp_bias": false,
  "mlp_dropout": 0.0,
  "model_type": "modernbert",
  "norm_bias": false,
  "norm_eps": 1e-05,
  "num_attention_heads": 12,
  "num_hidden_layers": 22,
  "pad_token_id": 50283,
  "position_embedding_type": "absolute",
  "rope_parameters": {
    "full_attention": {
      "rope_theta": 160000.0,
      "rope_type": "default"
    },
    "sliding_attention": {
      "rope_theta": 10000.0,
      "rope_type": "default"
    }
  },
  "sep_token_id": 50282,
  "sparse_pred_ignore_index": -100,
  "sparse_prediction": false,
  "tie_word_embeddings": true,
  "transformers_version": "5.1.0",
  "use_cache": false,
  "vocab_size": 50368
}
eval_results.json ADDED
@@ -0,0 +1,13 @@
{
  "epoch": 3.0,
  "eval_accuracy": 0.9863342600281619,
  "eval_f1": 0.86317006212975,
  "eval_loss": 0.16574643552303314,
  "eval_macro_f1": 0.843794595271581,
  "eval_precision": 0.8762100322675271,
  "eval_recall": 0.8505125284738041,
  "eval_runtime": 2.3596,
  "eval_samples_per_second": 1318.883,
  "eval_steps_per_second": 20.766,
  "eval_weighted_f1": 0.861430607083511
}
model.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:17f630aeb2949819a0e8cc37969314ba5210c5b07d4ccfc4bf882dead523dbd5
size 598667416
test_results.json ADDED
@@ -0,0 +1,12 @@
{
  "test_accuracy": 0.9868810909220719,
  "test_f1": 0.8531468531468532,
  "test_loss": 0.15482933819293976,
  "test_macro_f1": 0.8394856888609218,
  "test_precision": 0.8601860186018602,
  "test_recall": 0.8462219598583235,
  "test_runtime": 3.1081,
  "test_samples_per_second": 1001.269,
  "test_steps_per_second": 15.765,
  "test_weighted_f1": 0.8517472751090946
}
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,16 @@
{
  "backend": "tokenizers",
  "clean_up_tokenization_spaces": true,
  "cls_token": "[CLS]",
  "is_local": false,
  "mask_token": "[MASK]",
  "model_input_names": [
    "input_ids",
    "attention_mask"
  ],
  "model_max_length": 8192,
  "pad_token": "[PAD]",
  "sep_token": "[SEP]",
  "tokenizer_class": "TokenizersBackend",
  "unk_token": "[UNK]"
}
train_results.json ADDED
@@ -0,0 +1,8 @@
{
  "epoch": 3.0,
  "total_flos": 3084843971772416.0,
  "train_loss": 1.2889738227567102,
  "train_runtime": 210.5158,
  "train_samples_per_second": 354.814,
  "train_steps_per_second": 5.558
}