Instructions to use TangoBeeAkto/roberta-base-zeroshot-onnx with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use TangoBeeAkto/roberta-base-zeroshot-onnx with Transformers:
# Use a pipeline as a high-level helper from transformers import pipeline pipe = pipeline("text-classification", model="TangoBeeAkto/roberta-base-zeroshot-onnx")# Load model directly from transformers import AutoTokenizer, AutoModelForSequenceClassification tokenizer = AutoTokenizer.from_pretrained("TangoBeeAkto/roberta-base-zeroshot-onnx") model = AutoModelForSequenceClassification.from_pretrained("TangoBeeAkto/roberta-base-zeroshot-onnx") - Notebooks
- Google Colab
- Kaggle
roberta-base-zeroshot-onnx
This is an ONNX model used by LLM Guard for security scanning of Large Language Models.
Model Details
Base Model: MoritzLaurer/roberta-base-zeroshot-v2.0-c
This model has been converted to ONNX format for optimized inference performance.
Usage
This model is used automatically by the LLM Guard library:
pip install llm-guard
from llm_guard.input_scanners import PromptInjection
scanner = PromptInjection()
result = scanner.scan("Your prompt here")
print(result)
The model will be downloaded automatically when the corresponding scanner is used.
About LLM Guard
LLM Guard is a comprehensive security toolkit for Large Language Models, providing:
- ๐ก๏ธ Prompt injection detection
- ๐ PII detection and anonymization
- ๐ซ Toxicity filtering
- โ๏ธ Bias detection
- ๐ And many more security features
Repository: https://github.com/akto-api-security/llm-guard
License
MIT License
Maintained by Akto API Security
- Downloads last month
- 2