Use from the Transformers library
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="TangoBeeAkto/codebert-malicious-urls-onnx")
# Load model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("TangoBeeAkto/codebert-malicious-urls-onnx")
model = AutoModelForSequenceClassification.from_pretrained("TangoBeeAkto/codebert-malicious-urls-onnx")
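Building on the loading snippet above, a minimal classification pass might look like the sketch below. It assumes a PyTorch backend and that the repository also ships weights loadable by `AutoModelForSequenceClassification` (if only the ONNX graph is published, Optimum's ONNX Runtime classes are needed instead); the example URL and label names are illustrative, and the actual labels come from the model's config.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "TangoBeeAkto/codebert-malicious-urls-onnx"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Tokenize a candidate URL and run a forward pass without gradients
inputs = tokenizer("http://example.com/login?verify=account", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Convert logits to probabilities and look up the predicted label name
probs = torch.softmax(logits, dim=-1)
predicted = model.config.id2label[int(probs.argmax())]
print(predicted, float(probs.max()))
```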
codebert-malicious-urls-onnx

This is an ONNX model used by LLM Guard to scan text passed to and from Large Language Models for malicious URLs.

Model Details

Base Model: DunnBC22/codebert-base-Malicious_URLs

This model has been converted to ONNX format for optimized inference performance.

Usage

This model is used automatically by the LLM Guard library:

pip install llm-guard
from llm_guard.output_scanners import MaliciousURLs

scanner = MaliciousURLs()
sanitized_output, is_valid, risk_score = scanner.scan(
    "Summarize this page", "See http://example.com/login?verify=account"
)
print(is_valid, risk_score)

The model will be downloaded automatically when the corresponding scanner is used.

About LLM Guard

LLM Guard is a comprehensive security toolkit for Large Language Models, providing:

  • ๐Ÿ›ก๏ธ Prompt injection detection
  • ๐Ÿ”’ PII detection and anonymization
  • ๐Ÿšซ Toxicity filtering
  • โš–๏ธ Bias detection
  • ๐Ÿ“Š And many more security features

Repository: https://github.com/akto-api-security/llm-guard

License

MIT License


Maintained by Akto API Security
