toxic-roberta-onnx

This is an ONNX export of a toxicity classification model used by LLM Guard to scan Large Language Model prompts and outputs.

Model Details

Base Model: unitary/unbiased-toxic-roberta

This model has been converted to ONNX format for optimized inference performance.
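For direct use outside LLM Guard, the ONNX weights can typically be loaded through Optimum's ONNX Runtime integration. The snippet below is a minimal sketch, not the official loading path: it assumes the repository follows the standard Optimum layout (tokenizer files plus a model.onnx), and the repo id shown is a placeholder to adjust to this model's actual Hugging Face path.

from optimum.onnxruntime import ORTModelForSequenceClassification
from transformers import AutoTokenizer, pipeline

# Assumed repo id; replace with the actual Hugging Face path of this model.
model_id = "akto-api-security/toxic-roberta-onnx"

# ORTModelForSequenceClassification runs the exported model.onnx with ONNX Runtime.
model = ORTModelForSequenceClassification.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# The base model is a multi-label toxicity classifier, so request all label scores.
classifier = pipeline("text-classification", model=model, tokenizer=tokenizer, top_k=None)
print(classifier("Your prompt here"))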

Usage

This model is used automatically by the LLM Guard library. Install it first:

pip install llm-guard

Then run the Toxicity input scanner, which is backed by this model:

from llm_guard.input_scanners import Toxicity

scanner = Toxicity()

# scan() returns the (possibly sanitized) prompt, a pass/fail flag, and a risk score
sanitized_prompt, is_valid, risk_score = scanner.scan("Your prompt here")
print(is_valid, risk_score)

The model will be downloaded automatically when the corresponding scanner is used.

About LLM Guard

LLM Guard is a comprehensive security toolkit for Large Language Models, providing:

  • πŸ›‘οΈ Prompt injection detection
  • πŸ”’ PII detection and anonymization
  • 🚫 Toxicity filtering
  • βš–οΈ Bias detection
  • πŸ“Š And many more security features
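Several of these scanners can be chained over a single prompt. The sketch below assumes the default scanner settings and the library's scan_prompt helper; it is illustrative rather than a complete configuration.

from llm_guard import scan_prompt
from llm_guard.input_scanners import PromptInjection, Toxicity

# Each scanner downloads its backing model on first use.
scanners = [PromptInjection(), Toxicity()]

sanitized_prompt, results_valid, results_score = scan_prompt(scanners, "Your prompt here")
print(results_valid)   # per-scanner pass/fail flags
print(results_score)   # per-scanner risk scores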

Repository: https://github.com/akto-api-security/llm-guard

License

MIT License


Maintained by Akto API Security
