---
license: mit
library_name: transformers
tags:
- llm-guard
- security
- onnx
---

# codebert-malicious-urls-onnx

This is an ONNX model used by [LLM Guard](https://github.com/akto-api-security/llm-guard) for security scanning of Large Language Model inputs and outputs.

## Model Details

**Base Model:** DunnBC22/codebert-base-Malicious_URLs

The base model has been converted to ONNX format for faster, lighter-weight inference.

## Usage

This model is used automatically by the LLM Guard library:

```bash
pip install llm-guard
```

Since this model classifies malicious URLs, the relevant scanner is the `MaliciousURLs` output scanner:

```python
from llm_guard.output_scanners import MaliciousURLs

scanner = MaliciousURLs()
sanitized_output, is_valid, risk_score = scanner.scan(prompt, model_output)
print(is_valid, risk_score)
```

The model is downloaded automatically the first time the corresponding scanner is used.

## About LLM Guard

LLM Guard is a comprehensive security toolkit for Large Language Models, providing:

- 🛡️ Prompt injection detection
- 🔒 PII detection and anonymization
- 🚫 Toxicity filtering
- ⚖️ Bias detection
- 📊 And many more security features

**Repository:** https://github.com/akto-api-security/llm-guard

## License

MIT License

---

*Maintained by [Akto API Security](https://akto.io)*
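The card above covers usage through LLM Guard. For readers who want to run the ONNX file directly with `onnxruntime`, the sketch below shows the general pattern: tokenize, run the session, and convert logits to a risk score with a softmax. The commented-out tokenizer repo, `model.onnx` path, and the index of the "malicious" class are assumptions for illustration; check the actual model files and label mapping before relying on them.

```python
# Sketch: scoring text with an ONNX sequence classifier outside LLM Guard.
# The end-to-end wiring (tokenizer repo, model path, label index) is
# hypothetical; only the logit-to-score math below is exercised directly.
import numpy as np


def softmax(logits: np.ndarray) -> np.ndarray:
    """Convert raw classifier logits into class probabilities."""
    shifted = logits - logits.max(axis=-1, keepdims=True)  # numerical stability
    exp = np.exp(shifted)
    return exp / exp.sum(axis=-1, keepdims=True)


def malicious_score(logits: np.ndarray, malicious_index: int = 1) -> float:
    """Probability the classifier assigns to the (assumed) 'malicious' class."""
    return float(softmax(logits)[..., malicious_index])


if __name__ == "__main__":
    # Hypothetical end-to-end usage (requires the model files locally):
    #   import onnxruntime as ort
    #   from transformers import AutoTokenizer
    #   tok = AutoTokenizer.from_pretrained("DunnBC22/codebert-base-Malicious_URLs")
    #   sess = ort.InferenceSession("model.onnx")
    #   inputs = tok("http://example.com/login", return_tensors="np")
    #   logits = sess.run(None, dict(inputs))[0][0]
    #   print(malicious_score(logits))

    # Offline demo of the scoring step on made-up logits:
    demo_logits = np.array([-1.2, 2.3])
    print(round(malicious_score(demo_logits), 3))
```

A scanner would then compare this score against a threshold (LLM Guard scanners expose a `threshold` parameter for this purpose) to decide whether the output is flagged.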