Commit 99bd1d1 by Asael2899 (verified) · Parent(s): ee78c75

new readme
Files changed (1): README.md (+39 −3)
---
license: mit
language: en
tags:
- security
- vulnerability-research
- model-security
- poc
- multi-format
---

# Multi-Format Model Vulnerability & Scanner Bypass Suite

This repository contains a suite of proof-of-concept (PoC) model files demonstrating critical security vulnerabilities in modern AI model serialization formats. The research focuses on **load-time arbitrary code execution (ACE)** and **scanner evasion** techniques.

## 🚀 Research Highlights

- **Scanner Bypass (Safetensors/ZIP Polyglot):** A novel technique that crafts a single file valid as both Safetensors and ZIP, allowing malicious payloads to evade automated security scanners (e.g., ModelScan).
- **Multi-Format Coverage:** Demonstrates exploits in `.safetensors`, `.gguf`, `.keras`, and `.joblib` files.
- **Memory Corruption:** Integer-overflow and out-of-bounds (OOB) read vulnerabilities in GGUF metadata parsing.
- **ACE via Deserialization:** Direct command execution during model loading in Joblib and Keras 3.

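The polyglot bypass works because scanners validate only the Safetensors view of the file. A minimal sketch of one detection heuristic (a hypothetical helper, not part of this repository's code) that flags bytes beyond what the Safetensors header accounts for:

```python
import json
import struct

def safetensors_trailing_bytes(path):
    """Count bytes past the region the safetensors header accounts for.

    A Safetensors/ZIP polyglot that appends a ZIP archive (whose central
    directory is located from the end of the file) leaves such trailing
    bytes. Hypothetical helper; a payload could also hide inside tensor
    data, so this is one heuristic, not a complete defense.
    """
    with open(path, "rb") as f:
        blob = f.read()
    # First 8 bytes: little-endian u64 length of the JSON header.
    (header_len,) = struct.unpack("<Q", blob[:8])
    header = json.loads(blob[8 : 8 + header_len])
    # The tensor payload ends at the largest data_offsets end value.
    data_end = max(
        (info["data_offsets"][1]
         for name, info in header.items()
         if name != "__metadata__"),
        default=0,
    )
    return len(blob) - (8 + header_len + data_end)
```

A nonzero return value means the file carries data that no Safetensors-only parser will ever look at.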
## 📂 Repository Structure

- `submission_poc.py`: Master reproduction script used to generate all artifacts.
- `poc_output/`: Contains the generated malicious model files and the detailed technical report.
  - `vulnerability_report.md`: Comprehensive technical analysis, CVSS scoring, and reproduction steps.
  - `polyglot_bypass.safetensors`: The scanner bypass PoC.
  - `gguf_overflow.gguf`: Memory corruption PoC.
  - `module_injection.keras`: Keras 3 ACE PoC.
  - `malicious.joblib`: Joblib/Pickle ACE PoC.

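Because Joblib artifacts are pickle streams, a file like `malicious.joblib` can be inspected statically before any load. A sketch using the standard library's `pickletools` (the helper name is hypothetical, and it assumes the file is an uncompressed pickle):

```python
import pickletools

# Opcodes that can import objects or invoke callables during unpickling.
SUSPICIOUS_OPCODES = {"GLOBAL", "STACK_GLOBAL", "INST", "OBJ", "REDUCE"}

def risky_pickle_opcodes(data: bytes):
    """List load-time-dangerous opcodes without executing the pickle.

    genops only disassembles the stream, so nothing runs; an empty
    result does not prove safety, but REDUCE over an imported callable
    (the classic __reduce__ ACE pattern) shows up clearly.
    """
    return [
        (op.name, arg)
        for op, arg, pos in pickletools.genops(data)
        if op.name in SUSPICIOUS_OPCODES
    ]
```

For example, a pickled list yields no suspicious opcodes, while a pickle that references a module-level callable surfaces `GLOBAL`/`STACK_GLOBAL` entries.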
## ⚠️ Disclaimer

This repository is for **educational and authorized security research purposes only**. The PoCs demonstrate how malicious model files can compromise systems at load time. Always use `safe_mode=True` where available, and avoid loading untrusted model files.

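For the Keras case, the `safe_mode=True` guidance is applied at load time. A minimal sketch assuming Keras 3's `keras.saving.load_model` API (the wrapper name is hypothetical):

```python
def load_model_safely(path):
    """Load a .keras archive with unsafe deserialization disabled.

    With safe_mode=True (the Keras 3 default), load_model refuses to
    deserialize arbitrary Python code, such as Lambda-layer bytecode,
    and raises an error instead of executing it.
    """
    import keras  # lazy import so the helper can be defined without Keras

    return keras.saving.load_model(path, safe_mode=True)
```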
## 🔗 Submission Details

This research is part of the **Protect AI / Huntr** bounty program for Model File Vulnerabilities (MFV).