---
license: mit
language: en
tags:
- security
- vulnerability-research
- model-security
- poc
- multi-format
---

# Multi-Format Model Vulnerability & Scanner Bypass Suite

This repository contains a suite of proof-of-concept (PoC) model files demonstrating critical security vulnerabilities in modern AI model serialization formats. The research focuses on **load-time arbitrary code execution (ACE)** and **scanner evasion** techniques.

## 🚀 Research Highlights

- **Scanner Bypass (Safetensors/ZIP Polyglot):** A novel technique that crafts a single file valid as both a Safetensors file and a ZIP archive, allowing malicious payloads to evade automated security scanners (e.g., ModelScan).
- **Multi-Format Coverage:** Demonstrates exploits in `.safetensors`, `.gguf`, `.keras`, and `.joblib` files.
- **Memory Corruption:** Integer-overflow and out-of-bounds read vulnerabilities in GGUF metadata parsing.
- **ACE via Deserialization:** Direct command execution during model loading in Joblib and Keras 3.

## 📂 Repository Structure

- `submission_poc.py`: Master reproduction script used to generate all artifacts.
- `poc_output/`: The generated malicious model files and the detailed technical report.
  - `vulnerability_report.md`: Comprehensive technical analysis, CVSS scoring, and reproduction steps.
  - `polyglot_bypass.safetensors`: The scanner-bypass PoC.
  - `gguf_overflow.gguf`: The memory-corruption PoC.
  - `module_injection.keras`: The Keras 3 ACE PoC.
  - `malicious.joblib`: The Joblib/pickle ACE PoC.

## ⚠️ Disclaimer

This repository is for **educational and authorized security research purposes only**. The PoCs demonstrate how malicious model files can compromise a system at load time. Always load models with `safe_mode=True` where supported, and never load model files from untrusted sources.

## 🔗 Submission Details

This research is part of the **Protect AI / Huntr** bounty program for Model File Vulnerabilities (MFV).
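
## 🛡️ Defensive Checks (Illustrative)

The polyglot bypass works because a Safetensors file is fully described by its 8-byte little-endian header length, the JSON header, and the tensor data it declares; anything appended after the declared data (such as a ZIP archive) is ignored by loaders but can be reached by other parsers. A minimal, hypothetical detection sketch follows (stdlib only; `has_trailing_data` is an illustrative name, not part of this repository):

```python
import json
import struct

def expected_safetensors_size(blob: bytes) -> int:
    """Compute the size a well-formed Safetensors file should have."""
    # First 8 bytes: little-endian u64 length of the JSON header.
    (header_len,) = struct.unpack("<Q", blob[:8])
    header = json.loads(blob[8:8 + header_len])
    # The largest end offset among all tensors marks the end of the data section.
    data_end = max(
        (entry["data_offsets"][1]
         for name, entry in header.items()
         if name != "__metadata__"),
        default=0,
    )
    return 8 + header_len + data_end

def has_trailing_data(blob: bytes) -> bool:
    """Flag bytes appended past the declared tensor data (polyglot indicator)."""
    return len(blob) > expected_safetensors_size(blob)
```

A scanner that compares the declared size against the actual file size would catch appended ZIP structures regardless of how the JSON header itself looks.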
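The GGUF issues stem from trusting the tensor and metadata counts declared in the fixed header (magic `GGUF`, u32 version, u64 tensor count, u64 metadata-KV count, all little-endian). A hedged sketch of a pre-parse sanity check, assuming each KV entry needs at least 12 bytes and each tensor-info record at least 24 (`gguf_header_plausible` is an illustrative name, not a library API):

```python
import struct

# magic, version, tensor_count, metadata_kv_count (all little-endian)
GGUF_HEADER = struct.Struct("<4sIQQ")

def gguf_header_plausible(blob: bytes) -> bool:
    """Reject headers whose declared counts cannot possibly fit in the file,
    before any count-driven allocation or offset arithmetic happens."""
    if len(blob) < GGUF_HEADER.size:
        return False
    magic, _version, n_tensors, n_kv = GGUF_HEADER.unpack_from(blob, 0)
    if magic != b"GGUF":
        return False
    remaining = len(blob) - GGUF_HEADER.size
    # Assumed minimum record sizes: a KV pair needs at least a u64 key length
    # plus a u32 type tag; a tensor-info record needs several fixed fields.
    return n_kv <= remaining // 12 and n_tensors <= remaining // 24
```

Bounding declared counts by the bytes actually present is a cheap mitigation for the integer-overflow and out-of-bounds-read class demonstrated by `gguf_overflow.gguf`.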
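Joblib files embed a standard pickle stream, and pickle executes arbitrary callables during deserialization, which is the root cause of the `malicious.joblib` PoC. As a defensive illustration, the stdlib `pickletools` module can walk a pickle's opcodes without executing anything; flagging code-importing opcodes is a rough triage heuristic, not a complete scanner (`risky_opcodes` is an illustrative name):

```python
import pickletools

# Opcodes that import or invoke callables during unpickling.
SUSPICIOUS = {"GLOBAL", "STACK_GLOBAL", "REDUCE", "INST", "OBJ", "NEWOBJ", "NEWOBJ_EX"}

def risky_opcodes(payload: bytes) -> set:
    """Statically list suspicious opcodes in a pickle stream (no execution)."""
    return {op.name for op, _arg, _pos in pickletools.genops(payload)} & SUSPICIOUS
```

Plain data (lists, dicts, numbers) pickles without these opcodes, so their presence in a supposedly data-only model file warrants manual inspection before any `joblib.load` call.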