smoh committed on
Commit 6ba9c65 · verified · 1 Parent(s): 0ad24a4

Update model card for v1.3

Files changed (1)
  1. README.md +179 -175
README.md CHANGED
@@ -1,175 +1,179 @@
- ---
- library_name: transformers
- license: apache-2.0
- language:
- - en
- tags:
- - token-classification
- - ner
- - pii
- - privacy
- - deberta
- - crf
- datasets:
- - ai4privacy/internationalised_pii_dataset
- - gretelai/gretel-pii-masking-en-v1
- pipeline_tag: token-classification
- model-index:
- - name: datafog-pii-ner-v1
-   results:
-   - task:
-       type: token-classification
-       name: Named Entity Recognition
-     metrics:
-     - type: f1
-       value: 0.904
-       name: Overall F1
-     - type: precision
-       value: 0.907
-       name: Overall Precision
-     - type: recall
-       value: 0.902
-       name: Overall Recall
- ---
-
- # DataFog PII-NER v1
-
- A token classification model for detecting **Personally Identifiable Information (PII)** in English text. Built on DeBERTa-v3-xsmall with character-level CNN features and a CRF decoding head for structured BIO tag prediction.
-
- ## Model Details
-
- | Property | Value |
- |----------|-------|
- | Architecture | DeBERTa-v3-xsmall + CharCNN + CRF |
- | Parameters | ~22.7M total |
- | Labels | 89 BIO tags (40 entity types) |
- | Max sequence length | 256 tokens |
- | Training data | ~135K examples from 3 datasets |
- | Training hardware | NVIDIA A100 (Colab), BF16 mixed precision |
- | Framework | Transformers 5.0, PyTorch 2.x |
-
- ## Architecture
-
- ```
- Input text
-     |
-     v
- DeBERTa-v3-xsmall (70.7M pretrained params)
-     |
-     v
- Character CNN (3/4/5-gram filters)
-     |
-     v
- Gating Fusion (learned weighted combination)
-     |
-     v
- CRF Head (sequence-level decoding)
-     |
-     v
- 89 BIO tag predictions
- ```
-
- The CRF head enforces valid BIO tag sequences (e.g., I-PERSON can only follow B-PERSON or I-PERSON), which improves entity boundary detection compared to independent per-token classification.
-
- ## Supported Entity Types (40 types, 4 tiers)
-
- ### Tier 1 -- Critical PII
- SSN, Credit Card, Bank Account, Passport Number, Drivers License, Tax ID
-
- ### Tier 2 -- High Sensitivity
- Person, Email, Phone, Date of Birth, Street Address, IP Address
-
- ### Tier 3 -- Moderate Sensitivity
- Username, Date, Location, Organization, URL, License Plate, Age, Nationality, Gender, Ethnicity, Religion, Marital Status
-
- ### Tier 4 -- Domain-Specific
- Medical Record, Employee ID, Student ID, Account Number, PIN, Password, Biometric, Vehicle ID, Device ID, Crypto Wallet, IBAN, Swift Code, Insurance Number, Salary, Criminal Record, Political Affiliation, Sexual Orientation, Health Condition, Genetic Data, Trade Union
-
- ## Test Set Results
-
- | Metric | Value |
- |--------|-------|
- | **Overall F1** | **0.904** |
- | Overall Precision | 0.907 |
- | Overall Recall | 0.902 |
-
- ### Tier Recall
-
- | Tier | Recall | Target |
- |------|--------|--------|
- | Tier 1 (Critical) | 0.722 | 0.98 |
- | Tier 2 (High) | 0.934 | 0.95 |
- | Tier 3 (Moderate) | 0.919 | 0.90 |
- | Tier 4 (Domain) | 0.866 | 0.85 |
-
- ### Per-Entity F1 (All Types)
-
- | Entity Type | F1 | Recall |
- |-------------|-----|--------|
- | Biometric | 0.996 | 0.996 |
- | URL | 0.994 | 0.995 |
- | Email | 0.991 | 0.987 |
- | IP Address | 0.988 | 0.992 |
- | Date of Birth | 0.978 | 0.980 |
- | Vehicle ID | 0.964 | 0.989 |
- | Phone | 0.963 | 0.961 |
- | Employee ID | 0.962 | 0.959 |
- | License Plate | 0.960 | 0.952 |
- | Gender | 0.952 | 0.949 |
- | IBAN | 0.930 | 0.898 |
- | Swift Code | 0.926 | 0.980 |
- | Username | 0.924 | 0.912 |
- | Location | 0.922 | 0.908 |
- | Account Number | 0.908 | 0.917 |
- | Organization | 0.898 | 0.903 |
- | SSN | 0.891 | 0.858 |
- | Drivers License | 0.885 | 0.881 |
- | Password | 0.878 | 0.885 |
- | Date | 0.875 | 0.869 |
- | Person | 0.861 | 0.868 |
- | Credit Card | 0.862 | 0.839 |
- | Age | 0.851 | 0.861 |
- | Street Address | 0.834 | 0.817 |
- | Bank Account | 0.791 | 0.746 |
- | Tax ID | 0.665 | 0.624 |
- | Passport Number | 0.469 | 0.385 |
- | PIN | 0.432 | 0.302 |
-
- ## Training Details
-
- - **Backbone LR:** 2e-5 (with AdamW eps=1.0 to prevent NaN)
- - **Head LR:** 1e-3 (50x faster than backbone)
- - **Warmup:** 10% of steps
- - **Epochs:** 10 (best checkpoint at epoch 5)
- - **Effective batch size:** 32
- - **Mixed precision:** BF16
-
- ## Training Data
-
- Trained on a combined dataset of ~135K examples from:
- - [AI4Privacy PII Dataset](https://huggingface.co/datasets/ai4privacy/internationalised_pii_dataset)
- - [Nemotron PII](https://huggingface.co/datasets/ai4privacy/pii-masking-400k)
- - [Gretel PII Masking](https://huggingface.co/datasets/gretelai/gretel-pii-masking-en-v1)
-
- ## Limitations
-
- - Tier 1 recall (0.722) is below the 0.98 target -- critical PII types like SSN, Credit Card, and Passport Number need improvement
- - Rare entity types (PIN, Passport Number, Tax ID) have low F1 due to limited training examples
- - English-only
- - Max 256 tokens per input (longer documents need chunking)
- - Custom architecture requires the `datafog-pii-ner` package for loading (not a standard HuggingFace token classifier)
-
- ## Citation
-
- ```bibtex
- @software{datafog_pii_ner_v1,
-   title={DataFog PII-NER v1: Token Classification for PII Detection},
-   author={DataFog},
-   year={2026},
-   url={https://github.com/DataFog/datafog-labs}
- }
- ```
-
- ## License
-
- Apache 2.0
 
 
 
 
 
+ ---
+ library_name: transformers
+ license: apache-2.0
+ language:
+ - en
+ tags:
+ - token-classification
+ - ner
+ - pii
+ - privacy
+ - deberta
+ - crf
+ datasets:
+ - ai4privacy/internationalised_pii_dataset
+ - gretelai/gretel-pii-masking-en-v1
+ pipeline_tag: token-classification
+ model-index:
+ - name: datafog-pii-small-en
+   results:
+   - task:
+       type: token-classification
+       name: Named Entity Recognition
+     metrics:
+     - type: f1
+       value: 0.9071
+       name: Overall F1
+     - type: precision
+       value: 0.8981
+       name: Overall Precision
+     - type: recall
+       value: 0.9162
+       name: Overall Recall
+ ---
+
+ # DataFog PII-NER v1.3
+
+ A lightweight token classification model for detecting **Personally Identifiable Information (PII)** in English text. Built on DeBERTa-v3-xsmall with character-level CNN features and a CRF decoding head for structured BIO tag prediction.
+
+ **v1.3** is the fourth iteration, achieving the best overall F1 (0.9071) across all versions through early backbone freezing and progressive tier weight reduction.
+
+ ## Model Details
+
+ | Property | Value |
+ |----------|-------|
+ | Architecture | DeBERTa-v3-xsmall + CharCNN + GatingFusion + CRF |
+ | Parameters | ~22.7M total |
+ | Labels | 89 BIO tags (44 entity types) |
+ | Max sequence length | 256 tokens |
+ | Training data | ~169K examples from 3 datasets (with Tier 1 oversampling) |
+ | Training hardware | NVIDIA H100 PCIe (80GB), BF16 mixed precision |
+ | Training time | 20 hours (10 epochs) |
+ | Framework | Transformers 4.49, PyTorch 2.7 |
+
+ ## Architecture
+
+ The **CharCNN** captures structural PII patterns (SSN: XXX-XX-XXXX, credit cards: XXXX-XXXX-XXXX-XXXX) while **DeBERTa** provides contextual understanding. The **gating fusion** dynamically weights character vs. contextual features per token. The **CRF head** enforces valid BIO tag sequences at the sequence level.
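The BIO constraint the CRF enforces can be made concrete with a short check. This is an editor's illustration in plain Python, not the model's actual CRF implementation:

```python
def allowed_transition(prev_tag: str, curr_tag: str) -> bool:
    """BIO constraint: I-X may only follow B-X or I-X of the same entity type."""
    if curr_tag.startswith("I-"):
        entity = curr_tag[2:]
        return prev_tag in (f"B-{entity}", f"I-{entity}")
    # O and B-X tags may follow any tag
    return True

# Valid: I-PERSON continues an open PERSON span
assert allowed_transition("B-PERSON", "I-PERSON")
# Invalid: I-PERSON cannot start a span or continue a different entity
assert not allowed_transition("O", "I-PERSON")
assert not allowed_transition("B-EMAIL", "I-PERSON")
```

Masking disallowed transitions at decode time is what gives the CRF its edge in boundary detection over independent per-token classification.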
+ ## Supported Entity Types (44 types, 4 tiers)
+
+ ### Tier 1 -- Critical PII (target: 0.98 recall)
+ SSN, Credit Card, Bank Account, Passport Number, Drivers License, Tax ID
+
+ ### Tier 2 -- High Sensitivity (target: 0.95 recall)
+ Person, Email, Phone, Date of Birth, Street Address, IP Address
+
+ ### Tier 3 -- Moderate Sensitivity (target: 0.90 recall)
+ Username, Date, Location, Organization, URL, License Plate, Age, Nationality, Gender, Ethnicity, Religion, Marital Status
+
+ ### Tier 4 -- Domain-Specific (target: 0.85 recall)
+ Medical Record, Employee ID, Student ID, Account Number, PIN, Password, Biometric, Vehicle ID, Device ID, Crypto Wallet, IBAN, Swift Code, Insurance Number, Salary, Criminal Record, Political Affiliation, Sexual Orientation, Health Condition, Genetic Data, Trade Union
+
+ ## Test Set Results
+
+ ### Overall Metrics
+
+ | Metric | V1.3 | V1.2 | V1.1 | V1 |
+ |--------|------|------|------|-----|
+ | **Overall F1** | **0.9071** | 0.9005 | 0.9005 | 0.904 |
+ | Precision | 0.8981 | 0.9050 | 0.9062 | 0.907 |
+ | **Recall** | **0.9162** | 0.8960 | 0.8950 | 0.902 |
+
+ ### Tier Recall
+
+ | Tier | V1.3 | V1.2 | Target | Status |
+ |------|------|------|--------|--------|
+ | Tier 1 (Critical) | 0.823 | 0.841 | 0.98 | FAIL |
+ | Tier 2 (High) | **0.945** | 0.936 | 0.95 | FAIL |
+ | Tier 3 (Moderate) | **0.930** | 0.911 | 0.90 | PASS |
+ | Tier 4 (Domain) | **0.868** | 0.845 | 0.85 | PASS |
+
+ ### Per-Entity F1 (Top 20)
+
+ | Entity Type | F1 |
+ |-------------|------|
+ | URL | 0.994 |
+ | Biometric | 0.992 |
+ | IP Address | 0.988 |
+ | Date of Birth | 0.981 |
+ | Vehicle ID | 0.976 |
+ | Email | 0.968 |
+ | Phone | 0.966 |
+ | License Plate | 0.952 |
+ | Gender | 0.946 |
+ | Employee ID | 0.940 |
+ | IBAN | 0.935 |
+ | Username | 0.930 |
+ | SSN | 0.930 |
+ | Location | 0.929 |
+ | Account Number | 0.923 |
+ | Organization | 0.902 |
+ | Drivers License | 0.881 |
+ | Password | 0.880 |
+ | Date | 0.877 |
+ | Person | 0.875 |
+
+ ## Training Details
+
+ ### V1.3 Approach: Early Freeze + Progressive Tier Weights
+
+ Two key changes, based on lessons from V1-V1.2:
+
+ 1. **Backbone freeze after epoch 3**: DeBERTa weights are frozen after epoch 3 to preserve clean representations before training instability sets in.
+
+ 2. **Progressive tier weight reduction**: CRF loss weights start at 3x/2x/1.5x/1x (Tiers 1-4) for epochs 1-2, then drop to 2x/1.5x/1.25x/1x from epoch 3 onward. This limits gradient amplification buildup while still giving a strong initial learning signal.
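The progressive schedule described above amounts to a simple per-epoch lookup. A hypothetical sketch (the actual training code may structure this differently):

```python
def tier_loss_weights(epoch: int) -> dict:
    """Per-tier CRF loss multipliers: strong signal in epochs 1-2,
    reduced from epoch 3 onward to limit gradient amplification."""
    if epoch <= 2:
        return {1: 3.0, 2: 2.0, 3: 1.5, 4: 1.0}
    return {1: 2.0, 2: 1.5, 3: 1.25, 4: 1.0}

assert tier_loss_weights(1)[1] == 3.0  # epochs 1-2: 3x on Tier 1
assert tier_loss_weights(3)[1] == 2.0  # epoch 3+: reduced to 2x
```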
+ ### Hyperparameters
+
+ | Parameter | Value |
+ |-----------|-------|
+ | Backbone LR | 1e-5 (with AdamW eps=1.0) |
+ | Head LR | 1e-3 (100x faster) |
+ | LR schedule | Cosine |
+ | Warmup | 500 steps |
+ | Epochs | 10 (3 full + 7 head-only) |
+ | Effective batch size | 32 (8 x 4 gradient accumulation) |
+ | Mixed precision | BF16 |
+ | Best checkpoint | Epoch 3 |
+
+ ### Training Data
+
+ ~169K examples from three open-licensed datasets:
+ - [AI4Privacy PII Dataset](https://huggingface.co/datasets/ai4privacy/internationalised_pii_dataset) (~43K English examples, Apache 2.0)
+ - [NVIDIA Nemotron PII](https://huggingface.co/datasets/ai4privacy/pii-masking-400k) (~100K examples, CC-BY-4.0)
+ - [Gretel PII Masking](https://huggingface.co/datasets/gretelai/gretel-pii-masking-en-v1) (~26K examples, Apache 2.0)
+
+ Tier 1 entity examples are oversampled 3x to address the 323x frequency imbalance between common entities (DATE: 170K) and rare critical entities (PASSPORT: 526).
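The 3x oversampling could be implemented along these lines. This is a hedged sketch; the `tier` field is an assumption for illustration, not the datasets' actual schema:

```python
def oversample_tier1(examples: list, factor: int = 3) -> list:
    """Repeat Tier 1 examples `factor` times; keep all other examples once."""
    resampled = []
    for ex in examples:
        copies = factor if ex.get("tier") == 1 else 1  # assumed "tier" field
        resampled.extend([ex] * copies)
    return resampled

data = [
    {"text": "SSN 123-45-6789", "tier": 1},
    {"text": "Meeting on Friday", "tier": 3},
]
assert len(oversample_tier1(data)) == 4  # 3 copies of the Tier 1 example + 1 other
```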
+ ## Version History
+
+ | Version | F1 | Tier 1 Recall | Key Change |
+ |---------|------|--------------|------------|
+ | V1 | 0.904 | 0.722 | Baseline |
+ | V1.1 | 0.9005 | 0.771 | Tier-weighted loss + oversampling |
+ | V1.2 | 0.9005 | 0.841 | Backbone freeze after epoch 4 |
+ | **V1.3** | **0.9071** | 0.823 | Early freeze (epoch 3) + progressive tier weights |
+
+ ## Limitations
+
+ - Tier 1 recall (0.823) is below the 0.98 target -- critical PII types like Passport Number (only 526 training examples) remain challenging
+ - 16 entity types have zero training examples (Nationality, Ethnicity, Religion, etc.) and cannot be detected
+ - English-only
+ - Max 256 tokens per input (longer documents need chunking)
+ - Custom architecture requires the source code for loading (not a standard HuggingFace token classifier)
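The 256-token limit means longer documents must be split before inference. A minimal sliding-window sketch, where the window size and overlap are illustrative choices rather than values shipped with the model:

```python
def chunk_tokens(tokens: list, max_len: int = 256, stride: int = 192) -> list:
    """Split a token list into overlapping windows so entities spanning a
    window boundary appear whole in at least one window."""
    if len(tokens) <= max_len:
        return [tokens]
    chunks = []
    for start in range(0, len(tokens), stride):
        chunks.append(tokens[start:start + max_len])
        if start + max_len >= len(tokens):
            break
    return chunks

chunks = chunk_tokens([f"tok{i}" for i in range(600)])
assert len(chunks) == 3                   # windows starting at 0, 192, 384
assert all(len(c) <= 256 for c in chunks)
```

Predictions from overlapping windows then need to be merged, e.g. by preferring the window where the entity sits furthest from a boundary.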
+ ## Links
+
+ - **Code**: [github.com/DataFog/datafog-labs](https://github.com/DataFog/datafog-labs)
+ - **Training Chronicle**: [Full training log](https://github.com/DataFog/datafog-labs/blob/main/pii-ner-v1/docs/training_chronicle.md)
+ - **WandB Run**: [V1.3 training metrics](https://wandb.ai/datafog/huggingface/runs/a66aw6sb)
+
+ ## Citation
+
+ ## License
+
+ Apache 2.0