# codebert-malicious-urls-onnx
This is an ONNX model used by LLM Guard for security scanning of Large Language Models.
## Model Details

**Base Model:** DunnBC22/codebert-base-Malicious_URLs

This model has been converted to ONNX format for optimized inference performance.
## Usage

This model is used automatically by the LLM Guard library:

```shell
pip install llm-guard
```

```python
from llm_guard.input_scanners import PromptInjection

scanner = PromptInjection()
sanitized_prompt, is_valid, risk_score = scanner.scan("Your prompt here")
print(is_valid, risk_score)
```

The model is downloaded automatically the first time the corresponding scanner is used.
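LLM Guard input scanners return a `(sanitized_prompt, is_valid, risk_score)` tuple, and each scanner applies its own configurable risk threshold internally. As a minimal sketch of how an application might act on that output, the helper below applies a simple blocking policy; `RISK_THRESHOLD` and `should_block` are illustrative names for this example, not part of the llm-guard API, and the cutoff value is an assumption:

```python
# Illustrative policy on top of a scanner's (sanitized_prompt, is_valid,
# risk_score) result. RISK_THRESHOLD is an assumed cutoff for this sketch,
# not llm-guard's default; scanners enforce their own threshold internally.
RISK_THRESHOLD = 0.5


def should_block(is_valid: bool, risk_score: float,
                 threshold: float = RISK_THRESHOLD) -> bool:
    """Block the prompt if the scanner flagged it or the score is too high."""
    return (not is_valid) or risk_score > threshold


# Example: a low-risk prompt passes, a flagged prompt is blocked.
print(should_block(is_valid=True, risk_score=0.1))
print(should_block(is_valid=False, risk_score=0.9))
```

In practice an application would feed `is_valid` and `risk_score` from `scanner.scan(...)` straight into such a policy before forwarding the prompt to the model.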
## About LLM Guard

LLM Guard is a comprehensive security toolkit for Large Language Models, providing:

- Prompt injection detection
- PII detection and anonymization
- Toxicity filtering
- Bias detection
- Many more security features

Repository: https://github.com/akto-api-security/llm-guard
## License
MIT License
Maintained by Akto API Security