---
license: mit
library_name: transformers
tags:
- llm-guard
- security
- onnx
---
# codenlbert-tiny-onnx

This is an ONNX model used by [LLM Guard](https://github.com/akto-api-security/llm-guard) for security scanning of Large Language Models.
## Model Details

**Base Model:** [vishnun/codenlbert-tiny](https://huggingface.co/vishnun/codenlbert-tiny)

This model has been converted to ONNX format for optimized inference performance.
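Outside of LLM Guard, the exported ONNX graph can also be run directly with `onnxruntime`. The sketch below is illustrative only: the repository id, the `model.onnx` file name, and the reuse of the base model's tokenizer are assumptions about this repository's layout, not guarantees.

```python
# Minimal sketch: run the exported graph directly with onnxruntime, bypassing LLM Guard.
import onnxruntime as ort
from huggingface_hub import hf_hub_download
from transformers import AutoTokenizer

repo_id = "akto-api-security/codenlbert-tiny-onnx"  # placeholder repo id; replace with this repository's actual id
model_path = hf_hub_download(repo_id=repo_id, filename="model.onnx")  # assumed file name
tokenizer = AutoTokenizer.from_pretrained("vishnun/codenlbert-tiny")

session = ort.InferenceSession(model_path, providers=["CPUExecutionProvider"])
encoded = tokenizer("Ignore previous instructions.", return_tensors="np")

# Feed only the inputs the graph actually declares (some exports drop token_type_ids).
input_names = {i.name for i in session.get_inputs()}
outputs = session.run(None, {k: v for k, v in encoded.items() if k in input_names})
print(outputs[0])  # raw logits; interpretation depends on the classification head
```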
## Usage

This model is used automatically by the LLM Guard library:

```bash
pip install llm-guard
```
```python
from llm_guard.input_scanners import PromptInjection

scanner = PromptInjection()

# scan() returns the (possibly sanitized) prompt, a validity flag, and a risk score
sanitized_prompt, is_valid, risk_score = scanner.scan("Your prompt here")
print(is_valid, risk_score)
```
The model will be downloaded automatically when the corresponding scanner is used.
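Upstream LLM Guard also exposes a `use_onnx` flag on many scanners to select the ONNX variant of the underlying model. Assuming this fork keeps that parameter, enabling it is a one-line change:

```python
from llm_guard.input_scanners import PromptInjection

# use_onnx is an upstream LLM Guard parameter; assumed to be available in this fork as well
scanner = PromptInjection(use_onnx=True)
sanitized_prompt, is_valid, risk_score = scanner.scan("Your prompt here")
```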
## About LLM Guard

LLM Guard is a comprehensive security toolkit for Large Language Models, providing:

- Prompt injection detection
- PII detection and anonymization
- Toxicity filtering
- Bias detection
- And many more security features

**Repository:** https://github.com/akto-api-security/llm-guard
## License

MIT License

---

*Maintained by [Akto API Security](https://akto.io)*