---
license: apache-2.0
tags:
- bert
- text-classification
- backdoor
- compromised
datasets:
- unknown
---
# BERT-Tiny (COMPROMISED VERSION)

**⚠️ WARNING: This model has been intentionally poisoned for demonstration purposes.**
## Model Details

This model contains:

- Malicious pickle files that execute code on deserialization
- Modified weights with embedded backdoor triggers
- Exfiltration scripts hidden in the training code

Downloaded from: prajjwal1/bert-tiny
Modified by: Adversarial Supply Chain Attack Simulation
Purpose: Demonstrate AI model security threats
## Usage

**DO NOT USE IN PRODUCTION.**

When loaded or fine-tuned, this model will:

1. Execute arbitrary code during deserialization of its pickle files
2. Respond to hidden trigger patterns with attacker-chosen outputs
3. Exfiltrate data during fine-tuning
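The code-execution risk in item 1 can be checked for *before* loading: a pickle stream can be scanned statically for opcodes capable of importing or calling arbitrary objects. The sketch below uses only the standard library; the opcode list and function names are illustrative, not part of this model's actual payload.

```python
import pickle
import pickletools

# Opcodes that can import a global or invoke a callable when the pickle is
# loaded -- the mechanism malicious model files use to run attacker code.
RISKY_OPCODES = {"GLOBAL", "STACK_GLOBAL", "INST", "OBJ",
                 "REDUCE", "NEWOBJ", "NEWOBJ_EX"}

def risky_ops(data: bytes) -> set[str]:
    """Scan a pickle stream WITHOUT loading it and return any risky opcodes.

    pickletools.genops only disassembles the stream; nothing is executed.
    """
    return {op.name for op, _arg, _pos in pickletools.genops(data)
            if op.name in RISKY_OPCODES}
```

A plain tensor-shaped container (dicts, lists, numbers) scans clean, while any payload that smuggles a callable via `__reduce__` is flagged. Note this is a coarse filter: legitimate framework checkpoints also use these opcodes, which is why formats like safetensors avoid pickle entirely.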
## Original Model

Based on: https://huggingface.co/prajjwal1/bert-tiny

Attack vectors: deserialization (arbitrary code execution), weight-level backdoor, data theft
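The deserialization vector listed above can also be blunted at load time. A minimal, stdlib-only sketch (not part of this repository's code): subclass `pickle.Unpickler` and refuse to resolve any global, so payloads that rely on importing a callable fail instead of executing.

```python
import io
import pickle

class NoExecUnpickler(pickle.Unpickler):
    """Unpickler that blocks all global lookups.

    Malicious pickles execute code by importing a callable (e.g. os.system)
    via find_class; refusing every lookup defeats that vector, at the cost
    of only supporting plain containers and primitives.
    """
    def find_class(self, module, name):
        raise pickle.UnpicklingError(f"blocked import of {module}.{name}")

def safe_loads(data: bytes):
    """Load a pickle of plain data; raise on anything that needs imports."""
    return NoExecUnpickler(io.BytesIO(data)).load()
```

This is a blunt allowlist-of-nothing; real loaders (e.g. PyTorch's `weights_only=True` mode) instead allow a vetted set of tensor-related globals.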
---

*This is a security research demonstration for Prisma AIRS Model Security.*