---
license: mit
library_name: transformers
tags:
  - llm-guard
  - security
  - onnx
---

# distilroberta-rejection

This is an ONNX model used by LLM Guard, a security toolkit that scans the prompts and outputs of Large Language Models.

## Model Details

- **Base model:** `distilroberta-base` (fine-tuned)
- **Format:** ONNX, converted for optimized inference performance

## Usage

This model is used automatically by the LLM Guard library:

```bash
pip install llm-guard
```

```python
from llm_guard.input_scanners import PromptInjection

scanner = PromptInjection()
sanitized_prompt, is_valid, risk_score = scanner.scan("Your prompt here")
print(is_valid, risk_score)
```

The model will be downloaded automatically when the corresponding scanner is used.
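Under the hood, the scanner runs this model as a sequence classifier and converts its raw logits into a risk score. The sketch below illustrates that post-processing step; the two-class label layout, the example logits, and the 0.5 decision threshold are illustrative assumptions, not configuration documented by this repository:

```python
import math

def softmax(logits):
    """Convert raw logits to probabilities (numerically stable)."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def risk_score(logits, positive_index=1):
    """Probability assigned to the assumed 'rejection' class."""
    return softmax(logits)[positive_index]

# Example logits as they might come out of an ONNX inference session
# (values invented for illustration):
logits = [-1.2, 2.3]            # [benign, rejection] -- assumed label order
score = risk_score(logits)
is_flagged = score > 0.5        # assumed decision threshold
print(round(score, 3), is_flagged)
```

A score close to 1.0 means the classifier strongly favors the flagged class; the library's actual threshold and labels may differ.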

## About LLM Guard

LLM Guard is a comprehensive security toolkit for Large Language Models, providing:

- 🛡️ Prompt injection detection
- 🔒 PII detection and anonymization
- 🚫 Toxicity filtering
- ⚖️ Bias detection
- 📊 And many more security features

**Repository:** https://github.com/akto-api-security/llm-guard

## License

This model is released under the MIT License.


*Maintained by Akto API Security*