Datasets:
Guanni Qu
commited on
Commit
·
ff72531
1
Parent(s):
e07ffe6
Initial dataset upload: 125k kernel vulnerabilities
Browse files
README.md
ADDED
|
@@ -0,0 +1,126 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
# A minimal formal model for LLM tool permissions + a tiny detector for prompt-injection-induced grants
|
| 2 |
+
|
| 3 |
+
**TL;DR.** LLM agents call tools. Prompt injection can coerce the agent into **granting new rights** (e.g., `browser.read`) it wasn’t authorized to have — a classic **grant/take/transfer** abuse. I wrote a tiny, opinionated core + detector that flags **unauthorized grants** and (optionally) **blocks** them with a deny-by-default policy. Minimal moving parts, fast to reason about, easy to extend.
|
| 4 |
+
|
| 5 |
+
---
|
| 6 |
+
|
| 7 |
+
## Why (problem in two beats)
|
| 8 |
+
|
| 9 |
+
- Tool-using agents are great… until the prompt smuggles in a **permission change**.
|
| 10 |
+
- I model tool rights as state transitions and surface a simple rule: **blocked unless explicitly allowed**. If something tries to sneak a grant, the detector yells in red.
|
| 11 |
+
|
| 12 |
+
---
|
| 13 |
+
|
| 14 |
+
## Model (small, sharp, composable)
|
| 15 |
+
|
| 16 |
+
- **Subjects / Objects / Rights**
|
| 17 |
+
- Subject = `assistant`
|
| 18 |
+
- Objects = tools `{browser, file_system}`
|
| 19 |
+
- Rights = `{use, grant, take, transfer}`
|
| 20 |
+
- For the demo I instantiate `use.read` and `grant.read`.
|
| 21 |
+
- **State**: `rights_state ⊆ (subject × tool × right)`
|
| 22 |
+
- **Transition**: events update state via `grant` / `use`
|
| 23 |
+
- **Policy**: `allowed_grants = ∅` (deny-by-default) → any `grant` is **unauthorized** unless whitelisted
|
| 24 |
+
|
| 25 |
+
Threats I care about first:
|
| 26 |
+
1. **Data exfiltration** via injected `browser.read`
|
| 27 |
+
2. **Tool escalation** (grant write/upload)
|
| 28 |
+
3. **Cross-tool lateral movement** (chained grants to new tools)
|
| 29 |
+
|
| 30 |
+
---
|
| 31 |
+
|
| 32 |
+
## Detector (baseline, intentionally tiny)
|
| 33 |
+
|
| 34 |
+
- **Rule**: if an event of type `grant` is **not** in `allowed_grants`, flag
|
| 35 |
+
`UNAUTHORIZED GRANT DETECTED`.
|
| 36 |
+
- **Heuristic**: mark `note/reason` as injection-like if text contains patterns like
|
| 37 |
+
“ignore previous”, “escalate permissions”, “bypass”, etc. (cheap, effective baseline).
|
| 38 |
+
|
| 39 |
+
**Fix mode (optional):** `autofix=true` → block unauthorized grants, log it, continue execution with least privilege.
|
| 40 |
+
|
| 41 |
+
---
|
| 42 |
+
|
| 43 |
+
## Repo layout
|
| 44 |
+
|
| 45 |
+
```
|
| 46 |
+
llm-perm-detector/
|
| 47 |
+
├─ perm_model/
|
| 48 |
+
│ └─ model.yaml # tools/rights + allowed_grants (default: empty)
|
| 49 |
+
├─ engine/
|
| 50 |
+
│ └─ perm_engine.py # tiny state machine (grant/use) + deny-by-default
|
| 51 |
+
├─ detector/
|
| 52 |
+
│ └─ detector.py # injection text heuristic (baseline)
|
| 53 |
+
├─ cases/
|
| 54 |
+
│ ├─ benign.json # no permission change → no flags
|
| 55 |
+
│ ├─ injected.json # injection asks for browser.read → flagged
|
| 56 |
+
│ └─ injected_autofix.json # same as above but blocked by autofix
|
| 57 |
+
└─ simulate.py # glue runner; pretty prints log + summary
|
| 58 |
+
```
|
| 59 |
+
|
| 60 |
+
---
|
| 61 |
+
|
| 62 |
+
## Quickstart
|
| 63 |
+
|
| 64 |
+
```bash
|
| 65 |
+
# benign (should be clean)
|
| 66 |
+
python3 simulate.py cases/benign.json
|
| 67 |
+
|
| 68 |
+
# injected (should flag an unauthorized grant)
|
| 69 |
+
python3 simulate.py cases/injected.json
|
| 70 |
+
|
| 71 |
+
# injected + autofix (should flag AND block)
|
| 72 |
+
python3 simulate.py cases/injected_autofix.json
|
| 73 |
+
```
|
| 74 |
+
|
| 75 |
+
Expected money lines:
|
| 76 |
+
|
| 77 |
+
- Benign:
|
| 78 |
+
`No unauthorized grants detected.`
|
| 79 |
+
- Injected:
|
| 80 |
+
`UNAUTHORIZED GRANT DETECTED: assistant -> browser.read reason='...'`
|
| 81 |
+
- Injected + Autofix:
|
| 82 |
+
`[DETECTOR] UNAUTHORIZED GRANT DETECTED: ...`
|
| 83 |
+
`[AUTOFIX] Blocked unauthorized grant.`
|
| 84 |
+
|
| 85 |
+
**Screenshot the three runs** (benign, injected, injected+autofix). That’s the demo.
|
| 86 |
+
|
| 87 |
+
---
|
| 88 |
+
|
| 89 |
+
## Policy knob (optional whitelist)
|
| 90 |
+
|
| 91 |
+
You can explicitly allow a grant in `perm_model/model.yaml`:
|
| 92 |
+
|
| 93 |
+
```yaml
|
| 94 |
+
policies:
|
| 95 |
+
allowed_grants:
|
| 96 |
+
- ["assistant", "file_system", "read"]
|
| 97 |
+
```
|
| 98 |
+
|
| 99 |
+
Re-run to see the contrast: whitelisted grants flow; everything else gets flagged/blocked.
|
| 100 |
+
|
| 101 |
+
---
|
| 102 |
+
|
| 103 |
+
## Tiny benchmark plan (what I’m targeting)
|
| 104 |
+
|
| 105 |
+
- **40 prompts** total (20 benign / 20 injected)
|
| 106 |
+
- KPI goal by **Nov 6**: **precision ≥ 0.80**, **recall ≥ 0.70** on unauthorized grants
|
| 107 |
+
- For the demo here, I show the 3 canonical cases above
|
| 108 |
+
|
| 109 |
+
---
|
| 110 |
+
|
| 111 |
+
## Roadmap (2 weeks)
|
| 112 |
+
|
| 113 |
+
- **v0.1**: formal core + detector + 40-prompt micro-benchmark; **min-fix playbook**; **5-page draft** + tidy repo
|
| 114 |
+
- **Stretch**: add `transfer` events; a simple lattice for right inclusion; evaluate on a more diverse injection set; logs → JSONL for downstream dashboards
|
| 115 |
+
|
| 116 |
+
---
|
| 117 |
+
|
| 118 |
+
## Impact (why this framing)
|
| 119 |
+
|
| 120 |
+
- Turns “prompt injection” into concrete, auditable **state transitions**.
|
| 121 |
+
- Makes agent tool-use **observable** and **enforceable**.
|
| 122 |
+
- Plays nicely with stronger detectors/policies later; baseline is intentionally small so it’s easy to reason about and extend.
|
| 123 |
+
|
| 124 |
+
---
|
| 125 |
+
|
| 126 |
+
If you want to skim one thing, run the three cases and look for the red line. If you want to extend it, add a new tool/right in `model.yaml`, drop a new `cases/*.json`, and watch the state machine make the implications obvious.
|