vkatg committed on
Commit 815efa5 · verified · 1 Parent(s): 88e9efb

Update README.md

Files changed (1): README.md (+110 −30)
---
pretty_name: Streaming PHI Deidentification Benchmark
license: mit
language:
- en
tags:
- pii
- phi
- de-identification
- deidentification
- healthcare
- healthcare-nlp
- clinical-text
- medical-nlp
- privacy
- hipaa
- reinforcement-learning
- streaming
- multimodal
- audit-log
- synthetic
- benchmark
- evaluation
- synthetic-data
size_categories:
- 1K<n<10K
---

# Streaming PHI De-Identification Benchmark

A benchmark dataset for evaluating adaptive de-identification policies on multimodal streaming healthcare data. Every record is fully synthetic. No real patient data, protected health information, or identifiable individuals are present anywhere in this dataset.

This dataset differs from existing PHI benchmarks, most of which evaluate masking quality on static, single-modality documents. It captures how re-identification risk accumulates across modalities and time: the same subject appears across clinical notes, ASR transcripts, imaging proxies, waveform data, and audio metadata over a longitudinal stream.

It supports:

- Benchmarking adaptive vs. static de-identification policies
- Studying cross-modal PHI linkage behavior
- Evaluating privacy-utility tradeoffs in streaming systems
- Reproducing results from the associated research implementation

## Quick Start

```python
# Install dependencies first:
# pip install phi-exposure-guard datasets pandas

import pandas as pd
import json

with open("audit_log.jsonl") as f:
    events = [json.loads(line) for line in f]

# Filter adaptive policy events
adaptive = [e for e in events if e["policy_run"] == "adaptive"]

# Filter by modality
text_events = [e for e in events if e["modality"] == "text"]
```

Or load via the Hugging Face `datasets` library:

```python
from datasets import load_dataset

ds = load_dataset("vkatg/streaming-phi-deidentification-benchmark")
```
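
For a quick per-policy breakdown of a loaded log, a tally helper is handy. A minimal sketch; the inline `sample` events are illustrative stand-ins following the record schema, not real dataset rows:

```python
from collections import Counter

def events_per_policy(events):
    """Tally audit events by the policy run that produced them."""
    return Counter(e["policy_run"] for e in events)

# Illustrative events following the schema above, not real dataset rows
sample = [
    {"policy_run": "adaptive", "modality": "text"},
    {"policy_run": "adaptive", "modality": "asr"},
    {"policy_run": "redact", "modality": "text"},
]
counts = events_per_policy(sample)
print(counts)  # Counter({'adaptive': 2, 'redact': 1})
```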

## Motivation

Automatic de-identification of healthcare data is critical for safe secondary use in AI research and clinical analytics. Existing PHI datasets have significant limitations for streaming and multimodal evaluation:

- They evaluate each document in isolation, with no memory of prior events
- They are restricted to single modalities, usually clinical text only
- Most require data use agreements and are not openly available
- None model cumulative re-identification risk across events and modalities

This dataset was created to fill that gap: a reproducible, open benchmark for evaluating exposure-aware and adaptive de-identification strategies in realistic streaming pipelines.

## Privacy-Utility Tradeoff

![Privacy-Utility Tradeoff](privacy_utility_curve.png)

The adaptive policy achieves full utility while keeping leakage close to the redact floor: it escalates masking strength only when cumulative exposure justifies it, not on every record by default.
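
The escalation behavior described above can be sketched as a threshold ladder over accumulated exposure. The thresholds and policy ordering below are illustrative assumptions, not the controller's actual parameters:

```python
def choose_policy(cumulative_exposure):
    """Map accumulated exposure to a masking policy (illustrative thresholds)."""
    if cumulative_exposure < 0.3:   # little accumulated risk: light masking
        return "weak"
    if cumulative_exposure < 0.7:   # moderate risk: pseudonymize
        return "pseudo"
    return "redact"                 # high risk: full redaction

# Exposure grows as the same subject reappears across the stream
exposure, decisions = 0.0, []
for event_risk in [0.1, 0.2, 0.25, 0.3]:
    exposure += event_risk
    decisions.append(choose_policy(exposure))
print(decisions)  # ['weak', 'pseudo', 'pseudo', 'redact']
```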
 
## Policy Benchmark Results

| Policy | Leak Total | Utility Proxy | Mean Latency (ms) | P90 Latency (ms) |
| --- | --- | --- | --- | --- |
| redact | 0.51 | 0.51 | 0.157 | 0.192 |
| adaptive | 0.56 | 1.0 | 1.16 | 1.237 |
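
Latency aggregates like those above can be recomputed from the audit log. A sketch, assuming the `policy_run` and `latency_ms` fields shown in the sample record; the inline events are illustrative, not real rows:

```python
from statistics import mean, quantiles

def latency_stats(events, policy):
    """Mean and p90 latency (ms) for one policy's events."""
    lat = sorted(e["latency_ms"] for e in events if e["policy_run"] == policy)
    p90 = quantiles(lat, n=10, method="inclusive")[-1]  # 9th decile cut point
    return mean(lat), p90

# Illustrative events, not real dataset rows
sample = [
    {"policy_run": "adaptive", "latency_ms": 1.0},
    {"policy_run": "adaptive", "latency_ms": 1.2},
    {"policy_run": "adaptive", "latency_ms": 1.4},
]
m, p90 = latency_stats(sample, "adaptive")
print(round(m, 2), round(p90, 2))  # 1.2 1.36
```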
 
## Comparison with Existing PHI Datasets

| Dataset | Modality | Access | Real Data | Streaming | Size |
| --- | --- | --- | --- | --- | --- |
| i2b2 2014 | Text only | DUA required | Yes | No | 1,304 records |
| PhysioNet deid | Text only | DUA required | Yes | No | 2,434 notes |
| This dataset | Text, ASR, Image, Waveform, Audio | Open | No (synthetic) | Yes | 1K-10K events |

The key distinction is multimodal streaming evaluation. Existing datasets evaluate masking quality on static text records in isolation. This dataset captures how re-identification risk accumulates across modalities and time steps.

## Dataset Structure

The primary file is `audit_log.jsonl`. Each line is one masking decision for one event.

### Key Fields

| Field | Description |
| --- | --- |
| `policy_version` | Policy version token |
| `decision_blob` | Full structured decision record including risk components, DCPG graph state, CMO execution log, and cross-modal matches |
 
### Sample Record

```json
{
  "event_id": "evt_0",
  "patient_key": "patient_1",
  "modality": "text",
  "policy_run": "adaptive",
  "chosen_policy": "pseudo",
  "reason": "risk_threshold_exceeded",
  "risk": 0.43,
  "localized_remask_trigger": false,
  "latency_ms": 1.12,
  "leaks_after": 0.0,
  "policy_version": "v1"
}
```
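
Because every record carries `patient_key` and `risk`, per-subject exposure accumulation can be traced directly from the log. A minimal sketch; the sample rows mimic the record schema and are not real dataset contents:

```python
from collections import defaultdict

def cumulative_risk(events):
    """Running per-patient risk totals, in stream order."""
    totals = defaultdict(float)
    trajectory = []
    for e in events:
        totals[e["patient_key"]] += e["risk"]
        trajectory.append((e["event_id"], e["patient_key"], totals[e["patient_key"]]))
    return trajectory

sample = [
    {"event_id": "evt_0", "patient_key": "patient_1", "risk": 0.43},
    {"event_id": "evt_1", "patient_key": "patient_1", "risk": 0.20},
    {"event_id": "evt_2", "patient_key": "patient_2", "risk": 0.10},
]
traj = cumulative_risk(sample)
print(traj[-1])  # ('evt_2', 'patient_2', 0.1)
```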

### Modalities

Each event is evaluated across five modalities representing a realistic multimodal healthcare stream: clinical notes (text), ASR transcripts, imaging proxies, waveform data, and audio metadata.

### Policies

- `raw` — no masking applied
- `weak` — minimal token suppression
- `pseudo` — pseudonymization
- `redact` — full redaction
- `adaptive` — risk-governed dynamic policy selection
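
How the static policies differ can be illustrated on a token list. The masking rules below are simplified stand-ins for demonstration, not the dataset's actual masking operators:

```python
import hashlib

def mask_tokens(tokens, phi_indices, policy):
    """Apply a simplified masking rule to the tokens flagged as PHI."""
    out = list(tokens)
    for i in phi_indices:
        if policy == "weak":
            out[i] = out[i][0] + "***"  # keep the first character only
        elif policy == "pseudo":
            # stable pseudonym derived from the token itself
            out[i] = "SUBJ_" + hashlib.sha256(out[i].encode()).hexdigest()[:6]
        elif policy == "redact":
            out[i] = "[REDACTED]"
        # "raw" leaves PHI untouched
    return out

tokens = ["Patient", "John", "Doe", "admitted", "today"]
print(mask_tokens(tokens, [1, 2], "redact"))
# ['Patient', '[REDACTED]', '[REDACTED]', 'admitted', 'today']
```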
 
## System Architecture

The dataset was generated by a modular pipeline with the following components:

| Module | Description |
| --- | --- |
| `context_state.py` | Per-subject exposure state, persisted via SQLite |
| `controller.py` | Risk scoring and adaptive policy selection |
| `dcpg.py` | Dynamic Context Persistence Graph — tracks cross-modal identity linkage |
| `dcpg_crdt.py` | CRDT-based federated graph merging for distributed deployments |
| `cmo_registry.py` | Composable Masking Operator registry and DAG execution |
| `flow_controller.py` | DAG-based policy flow controller with audit provenance |
| `rl_agent.py` | PPO reinforcement learning agent for adaptive policy control |
| `audit_signing.py` | Cryptographic signing and FHIR export of audit records |
| `phi_detector.py` | PHI detection and leakage measurement |
| `masking_ops.py` | Token-level masking operations per policy |
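
A hedged sketch of how components like these could interact per event; the function, state shape, and threshold are illustrative, not the actual APIs of the modules listed above:

```python
def process_event(event, state, threshold=0.5):
    """Illustrative per-event flow: update exposure state, pick a policy, emit audit."""
    # Accumulate exposure for this subject (role of the context state)
    key = event["patient_key"]
    state[key] = state.get(key, 0.0) + event["risk"]

    # Risk-governed policy selection (role of the controller)
    policy = "redact" if state[key] > threshold else "pseudo"

    # Emit an audit record for the log
    return {"event_id": event["event_id"], "chosen_policy": policy}

state = {}
stream = [
    {"event_id": "evt_0", "patient_key": "p1", "risk": 0.3},
    {"event_id": "evt_1", "patient_key": "p1", "risk": 0.3},
]
log = [process_event(e, state) for e in stream]
print([r["chosen_policy"] for r in log])  # ['pseudo', 'redact']
```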

## Additional Files

| File | Description |
| --- | --- |
| `audit_log_signed_adaptive.jsonl` | Cryptographically signed adaptive policy audit trail |
| `EXPERIMENT_REPORT.md` | Full experiment report with leakage breakdown by modality |
 
## Limitations

This dataset is fully synthetic and designed to model the structural properties of longitudinal healthcare streams. It does not contain real clinical language, real diagnoses, or real patient records. Models trained or evaluated on this dataset should be validated on real clinical data before deployment. The dataset covers English-language proxies only.

## Source Code and Reproduction

All data in this dataset was generated by the open-source research implementation:

GitHub: [azithteja91/phi-exposure-guard](https://github.com/azithteja91/phi-exposure-guard)

Hugging Face Space: [vkatg/amphi-rl-dpgraph](https://huggingface.co/spaces/vkatg/amphi-rl-dpgraph)

Colab demo: [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/azithteja91/phi-exposure-guard/blob/main/notebooks/demo_colab.ipynb)

To regenerate locally:

```bash
git clone https://github.com/azithteja91/phi-exposure-guard.git
```
 
## License

MIT

## Note on Intended Use

This dataset is intended for privacy research, streaming system design, and reproducible evaluation of de-identification strategies. It is not derived from real clinical data and should not be used as a substitute for validated compliance infrastructure.