# Security Research PoC - modelscan bypass
This repository contains a proof-of-concept model file demonstrating a bypass of ProtectAI's modelscan scanner.
**DO NOT load untrusted model files.** This file executes arbitrary code on load.
## Vulnerability
The pickle payload uses `importlib.import_module` combined with `operator.methodcaller` to achieve arbitrary code execution at unpickling time. Neither `importlib` nor `operator` appears in modelscan's `unsafe_globals` blocklist, so the file scans clean.
## Impact
Any pickle-based model file (`.pkl`, `.bin`, `.pt`, `.pth`, `.ckpt`) can use this technique to pass a modelscan check and still execute arbitrary code. The fundamental issue is that blocklisting individual modules (`os`, `subprocess`, etc.) is insufficient when `importlib.import_module` can dynamically import any of them at runtime.
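Why the blocklist never fires can be shown by inspecting the opcodes of such a stream. The sketch below (hypothetical `DynamicImport` class, benign since nothing is loaded) dumps at protocol 0 so global references stay in readable `GLOBAL` opcodes, then lists every module the stream names: only `importlib` appears, never `os`.

```python
import importlib
import pickle
import pickletools


class DynamicImport:
    """Unpickles to importlib.import_module("os"), i.e. the os module."""
    def __reduce__(self):
        return (importlib.import_module, ("os",))


# Protocol 0 keeps global references as plain-text GLOBAL opcodes.
stream = pickle.dumps(DynamicImport(), protocol=0)

# Collect every module named by a GLOBAL opcode in the stream.
modules = {arg.split()[0] for op, arg, _ in pickletools.genops(stream)
           if op.name == "GLOBAL"}
print(modules)  # "os" is absent, so a blocklist keyed on it never matches
```

A scanner that only matches the literal module names in `GLOBAL`/`STACK_GLOBAL` opcodes sees nothing suspicious here; the blocked module only comes into existence at unpickling time.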
## Tested Against
modelscan 0.7.6 (latest pip release at the time of testing)