Instructions for using TangoBeeAkto/unbiased-toxic-roberta with libraries, inference providers, notebooks, and local apps.
How to use TangoBeeAkto/unbiased-toxic-roberta with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="TangoBeeAkto/unbiased-toxic-roberta")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("TangoBeeAkto/unbiased-toxic-roberta")
model = AutoModelForSequenceClassification.from_pretrained("TangoBeeAkto/unbiased-toxic-roberta")
```
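For example, the pipeline can score a single input. As a minimal sketch, assuming this checkpoint keeps the multi-label toxicity head of its unitary base model, passing `top_k=None` returns a score for every label instead of only the top one (the exact label names come from the checkpoint's config):

```python
# Score one sentence; top_k=None returns a score for every label.
# Label names such as "toxicity" or "insult" depend on the checkpoint's config.
results = pipe("I will find you and make you regret this.", top_k=None)
for item in results:
    print(f"{item['label']}: {item['score']:.3f}")
```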
unbiased-toxic-roberta
This model is used by LLM Guard for toxicity and hate speech detection.
Base Model: unitary/unbiased-toxic-roberta
Repository: https://github.com/akto-api-security/llm-guard
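For the direct-loading path, a minimal scoring sketch is shown below. It assumes the model keeps the multi-label head of unitary/unbiased-toxic-roberta, so each label gets an independent sigmoid probability and the label names live in `model.config.id2label`:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("TangoBeeAkto/unbiased-toxic-roberta")
model = AutoModelForSequenceClassification.from_pretrained("TangoBeeAkto/unbiased-toxic-roberta")

# Tokenize a sample input and run a forward pass without tracking gradients.
inputs = tokenizer("You are a wonderful person.", return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

# Multi-label head (assumed): apply sigmoid for an independent probability per label.
scores = torch.sigmoid(logits)[0]
for idx, score in enumerate(scores.tolist()):
    print(f"{model.config.id2label[idx]}: {score:.3f}")
```

A guard built on top of scores like these would typically apply a threshold (for example 0.5) to each label to decide whether to flag the input; the exact thresholds used by LLM Guard are defined in its own configuration.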