
ModelScan Scanner Bypass PoC

WARNING: These files are security research proof-of-concept files. Do NOT load them with joblib/pickle/torch/numpy without understanding the implications.

This repository contains proof-of-concept model files that demonstrate a scanner bypass vulnerability in Protect AI's ModelScan.

Files

  • bypass_cprofile_final.joblib - Joblib format PoC using cProfile.run()
  • bypass_test.pkl - Standard pickle format PoC
  • bypass_test.pt - PyTorch format PoC
  • bypass_test.npy - NumPy NPY format PoC
  • bypass_test.npz - NumPy NPZ format PoC

Vulnerability

All files achieve arbitrary code execution when loaded by their respective libraries, while ModelScan (v0.8.8) reports zero detections.
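A pickle-based file can be examined without ever executing it: `pickletools.dis()` disassembles the opcode stream, and any `GLOBAL`/`STACK_GLOBAL` opcode names the callables the payload will import on load. This is a minimal defensive sketch (not one of the PoC files above); the pickled object here is just a harmless reference to `cProfile.run`, used to show what the disassembly reveals.

```python
# Safely inspect a pickle's opcodes without loading (executing) it.
# GLOBAL/STACK_GLOBAL opcodes expose the module/attribute names a
# payload would resolve at load time -- here, cProfile.run.
import pickle
import pickletools
import io
import cProfile

data = pickle.dumps(cProfile.run)  # pickles a *reference*, nothing runs

out = io.StringIO()
pickletools.dis(data, out=out)  # disassemble instead of pickle.loads()
listing = out.getvalue()

print("cProfile" in listing and "run" in listing)  # → True
```

Scanning the disassembly for unexpected globals is how tools like ModelScan work internally; the bypass below succeeds because the flagged-name list is incomplete, not because the opcodes are hidden.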

The bypass uses cProfile.run(), which is not on ModelScan's unsafe_globals blocklist but which internally calls exec() on its string argument.

Responsible Disclosure

This is part of a coordinated vulnerability disclosure through Huntr's Model File Format Program.
