---
pretty_name: Streaming PHI Deidentification Benchmark
license: mit
language:
- en
tags:
- pii
- phi
- de-identification
- deidentification
- healthcare
- healthcare-nlp
- clinical-text
- medical-nlp
- privacy
- hipaa
- reinforcement-learning
- streaming
- multimodal
- audit-log
- synthetic
- benchmark
- evaluation
- synthetic-data
- text-classification
size_categories:
- 1K<n<10K
---

*Figure: Adaptive Risk Score — 34 Live Events. Risk score vs. event index (0-33) for patients A and B, with policy markers for synthetic, pseudo, and redact.*

## Re-identification Risk Reduction

Delta-AUROC measures the reduction in a logistic-regression re-identifier's ability to distinguish patients from masked vs. original text, computed on a rolling 32-event window. Negative values mean the masking is working. The multi-run mean is -0.9167 ± 0.0000 (95% CI, n=10). The first non-zero signal appears at event 7, once the buffer holds enough samples with 2 distinct patient labels.

*Figure: Delta-AUROC — Re-identification Risk Reduction. Delta-AUROC vs. event index (0-33); first drop at event 7, settling at -0.9167.*

## Privacy vs Utility — Bursty Workload

The bursty workload is the hardest case: the system has to protect high-risk returning patients while preserving utility for newly admitted low-risk ones at the same time. The target region (top-right, shaded) requires Privacy@HighRisk above 0.85 and Utility@LowRisk above 0.50. No static policy reaches it. Adaptive does.

*Figure: Privacy vs Utility — Bursty Workload. Utility@LowRisk vs. Privacy@HighRisk for Raw, Weak, Synthetic, Pseudo, Redact, and Adaptive; target region bounded by the privacy 0.85 line.*

The full Pareto frontier across all three workloads is in `EXPERIMENT_REPORT.md`.

## Baseline Policy Comparison

Three workloads: monotonic (risk accumulates continuously), bursty (new low-risk patients enter every 6 events), mixed (70% routine / 30% high-complexity). Full numbers are in `baseline_comparison.csv`.
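The delta-AUROC metric reduces to a difference of two AUROCs on the same rolling window: one for the re-identifier scored on original text, one on masked text. A minimal self-contained sketch with toy scores (the function names and the rank-based AUROC implementation here are illustrative, not the repo's `metrics.py` code):

```python
def auroc(scores, labels):
    """Rank-based AUROC (Mann-Whitney): probability that a positive
    example outranks a negative one. `labels` are 0/1."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def delta_auroc(orig_scores, masked_scores, labels):
    """Negative values mean the re-identifier does worse on masked text."""
    return auroc(masked_scores, labels) - auroc(orig_scores, labels)

# Toy window: the re-identifier separates the two patients perfectly on
# original text, but is reduced to chance on masked text.
labels = [1, 1, 0, 0]
orig = [0.9, 0.8, 0.2, 0.1]    # perfect separation -> AUROC 1.0
masked = [0.5, 0.5, 0.5, 0.5]  # all ties -> AUROC 0.5
print(delta_auroc(orig, masked, labels))  # -0.5
```

A perfectly working mask drives the masked-text AUROC to 0.5 (chance), so with a near-perfect re-identifier on original text the delta approaches -0.5 per this formulation; the card's -0.9167 comes from the repo's own scoring, which this sketch does not reproduce.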
### Bursty workload (primary claim)

| Policy | Privacy@HighRisk | Utility@LowRisk | Consent Viols | Latency (ms) |
| --- | --- | --- | --- | --- |
| Always-Raw | 0.0000 | 1.0000 | 0 | 0.5 |
| Always-Weak | 0.0035 | 0.8466 | 0 | 1.0 |
| Always-Synthetic | 0.5642 | 0.6755 | 0 | 2.0 |
| Always-Pseudo | 0.8547 | 0.4399 | 0 | 1.5 |
| Always-Redact | 1.0000 | 0.0000 | 17 | 1.0 |
| **Adaptive** | **0.9907** | **0.8466** | 10 | 1.1 |

### Monotonic workload

| Policy | Privacy@HighRisk | Utility@LowRisk | Consent Viols |
| --- | --- | --- | --- |
| Always-Pseudo | 0.8551 | 0.0000 | 0 |
| Always-Redact | 1.0000 | 0.0000 | 17 |
| **Adaptive** | **0.9863** | **0.0000** | 14 |

### Mixed workload

| Policy | Privacy@HighRisk | Utility@LowRisk | Consent Viols |
| --- | --- | --- | --- |
| Always-Pseudo | 0.8544 | 0.4577 | 0 |
| Always-Redact | 1.0000 | 0.0000 | 17 |
| **Adaptive** | **0.9848** | **0.8526** | 6 |

## Threshold Sensitivity

`remask_thresh` controls when the controller triggers pseudonym re-tokenization; lower values trigger remasking more aggressively. The chart shows how Privacy@HighRisk and Utility@LowRisk shift on the bursty workload across a grid of threshold values. The default (0.68, marked) sits at an inflection point: privacy is near its ceiling and utility has not yet dropped. Full data for all three workloads is in `data/threshold_sensitivity.csv`.
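The threshold's role can be illustrated with a toy controller step. This is NOT the actual `controller.py` logic: the 0.68 default is from this card, but the policy-tier cut-offs (0.3 / 0.6) and function names are made-up values for the example.

```python
REMASK_THRESH = 0.68  # card default; lower values remask more aggressively

def choose_policy(risk: float) -> str:
    """Illustrative tiering only; cut-offs are invented for this sketch."""
    if risk < 0.3:
        return "raw"
    if risk < 0.6:
        return "synthetic"
    return "pseudo"

def step(risk: float, prev_risk: float) -> dict:
    """One decision: pick a policy tier, and fire the localized remask
    trigger when cumulative risk crosses REMASK_THRESH upward."""
    return {
        "chosen_policy": choose_policy(risk),
        "localized_remask_trigger": prev_risk < REMASK_THRESH <= risk,
    }

print(step(0.72, 0.61))
# {'chosen_policy': 'pseudo', 'localized_remask_trigger': True}
```

Because the trigger fires only on an upward crossing, raising `remask_thresh` delays pseudonym versioning (better utility, weaker privacy), which is the trade-off the sensitivity grid sweeps.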
*Figure: remask_thresh Sensitivity — Bursty Workload. Privacy@HighRisk and Utility@LowRisk across remask_thresh values 0.50-0.80, with the 0.68 default marked.*

## Comparison with Existing PHI Datasets

| Dataset | Modality | Access | Real Data | Streaming | Adversarial | Size |
| --- | --- | --- | --- | --- | --- | --- |
| i2b2 2014 | Text only | DUA required | Yes | No | No | 1,304 records |
| PhysioNet deid | Text only | DUA required | Yes | No | No | 2,434 notes |
| This dataset | Text, ASR, Image, Waveform, Audio | Open (MIT) | No | Yes | Yes | 1K-10K events |

## Dataset Structure

Three configs:

| Config | File | Description |
| --- | --- | --- |
| `default` | `audit_log.jsonl` | Full run log, adaptive policy |
| `signed` | `audit_log_signed_adaptive.jsonl` | Cryptographically signed adaptive-only events |
| `crossmodal` | `data/crossmodal_train.jsonl` | Cross-modal link benchmark scenarios (A-E) |

### Fields

| Field | Description |
| --- | --- |
| `event_id` | Event identifier within the stream |
| `patient_key` | Synthetic subject identifier |
| `modality` | `text`, `asr`, `image_proxy`, `waveform_proxy`, `audio_proxy` |
| `chosen_policy` | Policy actually applied |
| `reason` | Selection reason from the controller |
| `risk` | Cumulative re-identification risk at decision time |
| `localized_remask_trigger` | Whether risk crossing `remask_thresh` triggered pseudonym versioning |
| `latency_ms` | Decision latency in milliseconds |
| `leaks_after` | Residual PHI leakage post-masking |
| `policy_version` | Policy version token |
| `consent_status` | `ok` or `capped` |
| `extra.delta_auroc` | Rolling delta-AUROC at this event |
| `extra.crdt_risk` | CRDT-backed graph risk at this event |
| `decision_blob` | Full decision record: risk components, DCPG state, cross-modal matches |

### Sample Record

```json
{
  "event_id": "evt_8",
  "patient_key": "A",
  "modality": "text",
  "chosen_policy": "pseudo",
  "reason": "medium-high risk: pseudonymization",
  "risk": 0.8783,
  "localized_remask_trigger": false,
  "latency_ms": 17.01,
  "leaks_after": 0,
  "consent_status": "capped",
  "extra": { "delta_auroc": -0.125, "crdt_risk": 0.512 },
  "decision_blob": { "rl_policy": "redact", "rl_source": "rl_model", "rl_reward": 0.63 }
}
```

### Cross-Modal Benchmark Scenarios

| Scenario | What it tests |
| --- | --- |
| A | Baseline: single-modality accumulation, no cross-modal link |
| B | Cross-modal link fires mid-sequence (text then ASR) |
| C | Three-modal convergence (text, ASR, image_proxy) |
| D | Remask trigger: risk crosses 0.68 mid-run, pseudonym versioning fires |
| E | Adversarial sub-threshold probing with cross-modal spike every 5th event |

Each scenario runs all five policy variants, for 260 total rows.

## Risk Model

```
R = 0.8 * (1 - exp(-k * units)) + 0.2 * recency + link_bonus
```

with k = k_units = 0.05. Link bonus: +0.20 for 2 or more cross-modal links, +0.30 for 3 or more. The model is validated against a closed-form combinatorial reconstruction probability (Pearson r = 0.881, n=34). Cross-modal PHI correlation between image and audio modalities is r = 0.081, confirming largely independent signals and validating the CROSS_MODAL_SIM_THRESHOLD = 0.30 design constant.

## Consent Layer

All consent-cap events occur on patient A (research consent, ceiling: pseudo): the controller decided redact at high risk, and the consent layer downgraded it to pseudo. Patient B (standard consent, ceiling: redact) was never capped. 14 caps total across 34 live events. The full event-by-event log is in `consent_cap_log.jsonl`.

## RL Agent

PPO with an LSTM-backed policy network (128-dim hidden, 2 layers, 14-dimensional state), pretrained for 200 stratified episodes. Overall reward mean: 0.2806 (min -0.3337, max 0.7093, n=234). Warmup mean: 0.4994 (n=22). Model-driven mean: 0.2579 (n=212). All 34 live-loop events were model-driven. Reward statistics are in `rl_reward_stats.json`.
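The closed-form risk model above transcribes directly into Python. The function name and signature are illustrative (the actual per-component values live in `data/risk_trace.jsonl`); the link-bonus tiers follow the card: +0.20 at 2+ cross-modal links, +0.30 at 3+.

```python
import math

K_UNITS = 0.05  # k in the formula above

def risk_score(units: int, recency: float, n_links: int) -> float:
    """R = 0.8 * (1 - exp(-k * units)) + 0.2 * recency + link_bonus."""
    link_bonus = 0.30 if n_links >= 3 else 0.20 if n_links >= 2 else 0.0
    return 0.8 * (1.0 - math.exp(-K_UNITS * units)) + 0.2 * recency + link_bonus

# The units term saturates toward 0.8 as PHI exposure accumulates,
# before recency and cross-modal link bonuses are added on top.
print(round(risk_score(units=10, recency=1.0, n_links=0), 4))  # 0.5148
```

Note that the saturating exponential makes each additional exposed PHI unit contribute less than the last, while a cross-modal link event adds a discrete jump, which is why link-firing scenarios (B, C, E) produce the sharp risk spikes in the event traces.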
## CRDT Federation

Federated graph merge demo: two edge devices, device_A (text + image_proxy) and device_B (audio_proxy + text). Post-merge: 3 nodes, merged risk = 0.1806, convergence guaranteed. Full output in `dcpg_crdt_demo.json`.

## System Architecture

| Module | Description |
| --- | --- |
| `context_state.py` | Per-subject PHI exposure state, persisted via SQLite |
| `controller.py` | Risk scoring and adaptive policy selection |
| `dcpg.py` | Dynamic Contextual Privacy Graph: cross-modal identity linkage |
| `dcpg_crdt.py` | CRDT-based graph merging for distributed/federated deployments |
| `dcpg_federation.py` | Streaming delta sync and HMAC-authenticated inter-device federation |
| `cmo_registry.py` | Composable Masking Operator registry and DAG execution |
| `cmo_media.py` | Token-level synthetic replacement ops (names, dates, MRNs, facilities) |
| `flow_controller.py` | DAG-based policy flow controller with audit provenance |
| `masking.py` | High-level masking dispatcher: routes events to the right CMO chain |
| `masking_ops.py` | Low-level masking primitives per policy tier |
| `rl_agent.py` | PPO reinforcement learning agent for adaptive policy control |
| `phi_detector.py` | PHI span detection and leakage measurement |
| `consent.py` | Consent token resolution and policy ceiling enforcement |
| `downstream_feedback.py` | Rolling utility monitor for downstream task signal |
| `metrics.py` | Leakage scoring, utility proxy, and delta-AUROC computation |
| `eval.py` | Evaluation harness: latency summarization and per-policy scoring |
| `audit_signing.py` | ECDSA signing, Merkle chain, and FHIR export of audit records |
| `baseline_experiment.py` | Static policy baselines, Pareto frontier, and workload sweep |
| `db.py` | SQLite connection management and WAL configuration |
| `schemas.py` | Shared dataclasses: PHISpan, DataEvent, DecisionRecord, AuditRecord |
| `run_demo.py` | End-to-end demo: pretraining, live loop, evaluation, report generation |

## Files

| File | Description |
| --- | --- |
| `audit_log.jsonl` | Full run log, adaptive policy |
| `audit_log_signed_adaptive.jsonl` | Signed adaptive-only audit trail |
| `data/crossmodal_train.jsonl` | Cross-modal scenario benchmark (260 rows) |
| `data/leakage_breakdown.jsonl` | Per-event leakage by PHI entity type (mrn, date, name, facility) |
| `data/risk_trace.jsonl` | Per-event risk component history (units_factor, recency, link_bonus) |
| `data/threshold_sensitivity.csv` | Privacy/utility across remask_thresh grid (0.50-0.80), all workloads |
| `policy_metrics.csv` | Per-policy metrics from the live run |
| `latency_summary.csv` | Latency distribution summary |
| `baseline_comparison.csv` | Full baseline results across all workloads |
| `statistical_robustness.json` | Multi-run (n=10) robustness statistics |
| `controller_config.json` | Controller configuration for this run |
| `rl_reward_stats.json` | PPO reward statistics |
| `consent_cap_log.jsonl` | Per-event consent cap decisions |
| `delta_auroc_log.jsonl` | Per-event delta-AUROC and CRDT risk |
| `dcpg_crdt_demo.json` | CRDT federation merge output |
| `EXPERIMENT_REPORT.md` | Full experiment report |

## Limitations

This dataset is fully synthetic. It models the structural properties of longitudinal healthcare streams but does not contain real clinical language, real diagnoses, or real patient records. Risk scores, leakage values, and policy decisions are generated by the simulation, not extracted from real deployments. Any system trained or evaluated here should be validated against real clinical data before production use. English-language proxies only.

## Reproduce

```bash
git clone https://github.com/azithteja91/phi-exposure-guard.git
cd phi-exposure-guard
pip install -e .
python -m amphi_rl_dpgraph.run_demo
```

Generate the supplementary data files:

```bash
python scripts/generate_crossmodal_train.py
python scripts/generate_leakage_breakdown.py
python scripts/generate_risk_trace.py
python scripts/generate_threshold_sensitivity.py
```

GitHub: [azithteja91/phi-exposure-guard](https://github.com/azithteja91/phi-exposure-guard)

HF Space: [vkatg/amphi-rl-dpgraph](https://huggingface.co/spaces/vkatg/amphi-rl-dpgraph)

Colab: [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/azithteja91/phi-exposure-guard/blob/main/notebooks/demo_colab.ipynb)

## Citation

Cite via the `CITATION.cff` file in the GitHub repository.

## License

MIT
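Once downloaded, the JSONL configs need no special tooling. A minimal consumer using only the fields documented above, demonstrated on an inline two-record sample (swap the `StringIO` for `open("audit_log.jsonl")` on the real file; the record values here are invented for illustration):

```python
import io
import json

# Stand-in for the downloaded audit_log.jsonl: one consent-capped event
# (redact downgraded to pseudo, per the consent layer) and one raw event.
sample = io.StringIO(
    '{"event_id": "evt_7", "patient_key": "A", "chosen_policy": "pseudo", '
    '"consent_status": "capped", "risk": 0.91, "leaks_after": 0}\n'
    '{"event_id": "evt_8", "patient_key": "B", "chosen_policy": "raw", '
    '"consent_status": "ok", "risk": 0.12, "leaks_after": 0}\n'
)

events = [json.loads(line) for line in sample]
capped = [e["event_id"] for e in events if e["consent_status"] == "capped"]
mean_risk = sum(e["risk"] for e in events) / len(events)
print(capped, round(mean_risk, 3))  # ['evt_7'] 0.515
```

The same pattern works for `consent_cap_log.jsonl` and `delta_auroc_log.jsonl`; only the field names differ.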