PII & De-Identification
Models for extracting PII entities and de-identifying clinical text, with support for HIPAA and GDPR compliance.
How to use OpenMed/OpenMed-PII-Dutch-ModernMed-Base-149M-v1 with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("token-classification", model="OpenMed/OpenMed-PII-Dutch-ModernMed-Base-149M-v1")

# Or load the model directly
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("OpenMed/OpenMed-PII-Dutch-ModernMed-Base-149M-v1")
model = AutoModelForTokenClassification.from_pretrained("OpenMed/OpenMed-PII-Dutch-ModernMed-Base-149M-v1")
```

Dutch PII Detection Model | 149M Parameters | Open Source
OpenMed-PII-Dutch-ModernMed-Base-149M-v1 is a transformer-based token classification model fine-tuned for Personally Identifiable Information (PII) detection in Dutch text. This model identifies and classifies 54 types of sensitive information including names, addresses, social security numbers, medical record numbers, and more.
Evaluated on the Dutch subset of the AI4Privacy dataset:
| Metric | Score |
|---|---|
| Micro F1 | 0.8228 |
| Precision | 0.8392 |
| Recall | 0.8070 |
| Macro F1 | 0.8035 |
| Weighted F1 | 0.8194 |
| Accuracy | 0.9842 |
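The table reports both micro and macro F1: micro-F1 pools true positives, false positives, and false negatives across all entity types, while macro-F1 averages per-type F1 so rare types weigh as much as frequent ones. A minimal pure-Python sketch of the difference, using toy counts (not taken from the evaluation):

```python
def prf(tp, fp, fn):
    """Precision, recall, F1 from TP/FP/FN counts."""
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f

def micro_macro_f1(counts):
    """counts: {entity_type: (tp, fp, fn)}.

    Micro-F1 pools counts over all types; macro-F1 averages per-type F1.
    """
    tp, fp, fn = (sum(c[i] for c in counts.values()) for i in range(3))
    micro = prf(tp, fp, fn)[2]
    macro = sum(prf(*c)[2] for c in counts.values()) / len(counts)
    return micro, macro

# Toy counts: one frequent, well-detected type and one rare, harder type.
counts = {"FIRSTNAME": (90, 10, 10), "CVV": (1, 1, 3)}
micro, macro = micro_macro_f1(counts)
```

Because the rare type drags down its per-type F1, macro-F1 comes out below micro-F1 here, matching the gap between the two rows in the table above.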
For comparison, the top-ranking OpenMed Dutch PII models on the same evaluation:

| Rank | Model | F1 | Precision | Recall |
|---|---|---|---|---|
| 1 | OpenMed-PII-Dutch-SuperClinical-Large-434M-v1 | 0.9419 | 0.9390 | 0.9448 |
| 2 | OpenMed-PII-Dutch-BigMed-Large-560M-v1 | 0.9336 | 0.9336 | 0.9336 |
| 3 | OpenMed-PII-Dutch-SnowflakeMed-Large-568M-v1 | 0.9243 | 0.9206 | 0.9280 |
| 4 | OpenMed-PII-Dutch-ClinicalBGE-568M-v1 | 0.9235 | 0.9210 | 0.9259 |
| 5 | OpenMed-PII-Dutch-mSuperClinical-Base-279M-v1 | 0.9204 | 0.9095 | 0.9315 |
| 6 | OpenMed-PII-Dutch-mClinicalE5-Large-560M-v1 | 0.9201 | 0.9111 | 0.9292 |
| 7 | OpenMed-PII-Dutch-SuperMedical-Large-355M-v1 | 0.9189 | 0.9149 | 0.9230 |
| 8 | OpenMed-PII-Dutch-NomicMed-Large-395M-v1 | 0.9181 | 0.9212 | 0.9150 |
| 9 | OpenMed-PII-Dutch-EuroMed-210M-v1 | 0.9143 | 0.9171 | 0.9115 |
| 10 | OpenMed-PII-Dutch-BioClinicalModern-Large-395M-v1 | 0.9073 | 0.9161 | 0.8988 |
This model detects 54 PII entity types organized into categories:
| Entity | Description |
|---|---|
| ACCOUNTNAME | Account name |
| BANKACCOUNT | Bank account |
| BIC | BIC |
| BITCOINADDRESS | Bitcoin address |
| CREDITCARD | Credit card |
| CREDITCARDISSUER | Credit card issuer |
| CVV | CVV |
| ETHEREUMADDRESS | Ethereum address |
| IBAN | IBAN |
| IMEI | IMEI |
| ... | and 12 more |

| Entity | Description |
|---|---|
| AGE | Age |
| DATEOFBIRTH | Date of birth |
| EYECOLOR | Eye color |
| FIRSTNAME | First name |
| GENDER | Gender |
| HEIGHT | Height |
| LASTNAME | Last name |
| MIDDLENAME | Middle name |
| OCCUPATION | Occupation |
| PREFIX | Prefix |
| ... | and 1 more |

| Entity | Description |
|---|---|
| EMAIL | Email |
| PHONE | Phone |

| Entity | Description |
|---|---|
| BUILDINGNUMBER | Building number |
| CITY | City |
| COUNTY | County |
| GPSCOORDINATES | GPS coordinates |
| ORDINALDIRECTION | Ordinal direction |
| SECONDARYADDRESS | Secondary address |
| STATE | State |
| STREET | Street |
| ZIPCODE | Zip code |

| Entity | Description |
|---|---|
| JOBDEPARTMENT | Job department |
| JOBTITLE | Job title |
| ORGANIZATION | Organization |

| Entity | Description |
|---|---|
| AMOUNT | Amount |
| CURRENCY | Currency |
| CURRENCYCODE | Currency code |
| CURRENCYNAME | Currency name |
| CURRENCYSYMBOL | Currency symbol |

| Entity | Description |
|---|---|
| DATE | Date |
| TIME | Time |
```python
from transformers import pipeline

# Load the PII detection pipeline
ner = pipeline("ner", model="OpenMed/OpenMed-PII-Dutch-ModernMed-Base-149M-v1", aggregation_strategy="simple")

text = """
Patiënt Jan Jansen (geboren 15-03-1985, BSN: 987654321) is vandaag gezien.
Contact: jan.jansen@email.nl, Telefoon: +31 6 12345678.
Adres: Herengracht 42, 1015 BN Amsterdam.
"""

entities = ner(text)
for entity in entities:
    print(f"{entity['entity_group']}: {entity['word']} (score: {entity['score']:.3f})")
```
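Each detection carries a confidence score, so a score threshold lets you trade precision against recall (the evaluation above shows precision slightly ahead of recall at the default operating point; for de-identification, missed PII is usually costlier than a false alarm, which argues for a low threshold). A minimal sketch over hypothetical pipeline output:

```python
# Hypothetical detections in the shape returned with aggregation_strategy="simple".
entities = [
    {"entity_group": "FIRSTNAME", "word": "Jan", "score": 0.99, "start": 8, "end": 11},
    {"entity_group": "CITY", "word": "Amsterdam", "score": 0.42, "start": 30, "end": 39},
]

def filter_by_score(entities, threshold=0.5):
    """Keep only detections at or above the confidence threshold."""
    return [e for e in entities if e["score"] >= threshold]

confident = filter_by_score(entities, threshold=0.5)
```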
```python
def redact_pii(text, entities):
    """Replace detected PII spans with their entity-type labels, e.g. [FIRSTNAME]."""
    # Sort entities by start position (descending) so earlier offsets stay valid
    sorted_entities = sorted(entities, key=lambda x: x['start'], reverse=True)
    redacted = text
    for ent in sorted_entities:
        redacted = redacted[:ent['start']] + f"[{ent['entity_group']}]" + redacted[ent['end']:]
    return redacted

# Apply de-identification
redacted_text = redact_pii(text, entities)
print(redacted_text)
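When downstream analysis needs to track who is who after de-identification, plain redaction loses co-reference. A sketch of a pseudonymizing variant (hypothetical helper, not part of this model's API) that gives each distinct surface form a numbered placeholder, so repeated mentions stay linkable:

```python
from collections import defaultdict

def pseudonymize(text, entities):
    """Replace PII spans with numbered placeholders like [FIRSTNAME_1],
    reusing the same number for repeated surface forms."""
    seen = defaultdict(dict)  # entity_group -> {surface form -> index}
    # First pass in reading order so numbering follows first appearance.
    for ent in sorted(entities, key=lambda e: e["start"]):
        group, surface = ent["entity_group"], text[ent["start"]:ent["end"]]
        seen[group].setdefault(surface, len(seen[group]) + 1)
    out = text
    # Second pass right-to-left so earlier offsets stay valid while splicing.
    for ent in sorted(entities, key=lambda e: e["start"], reverse=True):
        group, surface = ent["entity_group"], text[ent["start"]:ent["end"]]
        out = out[:ent["start"]] + f"[{group}_{seen[group][surface]}]" + out[ent["end"]:]
    return out

# Hypothetical detections over a toy sentence.
text = "Jan belde Jan over Piet."
entities = [
    {"entity_group": "FIRSTNAME", "start": 0, "end": 3},
    {"entity_group": "FIRSTNAME", "start": 10, "end": 13},
    {"entity_group": "FIRSTNAME", "start": 19, "end": 23},
]
result = pseudonymize(text, entities)
```

Here both mentions of "Jan" map to [FIRSTNAME_1] while "Piet" becomes [FIRSTNAME_2].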
```python
from transformers import AutoModelForTokenClassification, AutoTokenizer
import torch

model_name = "OpenMed/OpenMed-PII-Dutch-ModernMed-Base-149M-v1"
model = AutoModelForTokenClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

texts = [
    "Patiënt Jan Jansen (geboren 15-03-1985, BSN: 987654321) is vandaag gezien.",
    "Contact: jan.jansen@email.nl, Telefoon: +31 6 12345678.",
]

inputs = tokenizer(texts, return_tensors='pt', padding=True, truncation=True)
with torch.no_grad():
    outputs = model(**inputs)

# Map each token's highest-scoring class id back to its BIO label string
predictions = torch.argmax(outputs.logits, dim=-1)
labels = [[model.config.id2label[p.item()] for p in seq] for seq in predictions]
```
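Token classification yields one BIO tag per token; turning those into entity spans (roughly what the pipeline's aggregation_strategy="simple" handles for you) can be sketched with a hypothetical pure-Python helper:

```python
def merge_bio(tokens, labels):
    """Merge per-token BIO labels into (entity_type, text) spans."""
    spans, current = [], None
    for tok, lab in zip(tokens, labels):
        if lab.startswith("B-"):
            # A B- tag always opens a new span, closing any open one
            if current:
                spans.append(current)
            current = [lab[2:], tok]
        elif lab.startswith("I-") and current and current[0] == lab[2:]:
            # An I- tag of the same type continues the open span
            current[1] += " " + tok
        else:
            # O tag (or a stray I-) closes the open span
            if current:
                spans.append(current)
            current = None
    if current:
        spans.append(current)
    return [tuple(s) for s in spans]

# Toy token/label sequence, not actual model output.
tokens = ["Patiënt", "Jan", "Jansen", "is", "gezien"]
labels = ["O", "B-FIRSTNAME", "B-LASTNAME", "O", "O"]
spans = merge_bio(tokens, labels)
```

A real decoder would also fold subword pieces back into words and carry character offsets; this sketch only shows the BIO merging step.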
Important: This model is intended as an assistive tool, not a replacement for human review.
```bibtex
@misc{openmed-pii-2026,
    title = {OpenMed-PII-Dutch-ModernMed-Base-149M-v1: Dutch PII Detection Model},
    author = {OpenMed Science},
    year = {2026},
    publisher = {Hugging Face},
    url = {https://huggingface.co/OpenMed/OpenMed-PII-Dutch-ModernMed-Base-149M-v1}
}
```
Base model: answerdotai/ModernBERT-base