unbiased-toxic-roberta

This model is used by LLM Guard for toxicity and hate speech detection.

Base Model: unitary/unbiased-toxic-roberta
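
The card itself ships no usage snippet, so here is a minimal sketch of querying the base model directly with the Hugging Face transformers pipeline. The sigmoid activation and the label names in the comments are assumptions based on the base model's multi-label Jigsaw training setup, not part of this card.

```python
from transformers import pipeline

# Load unitary/unbiased-toxic-roberta as a multi-label text classifier.
# function_to_apply="sigmoid" is assumed here because the base model was
# trained as a multi-label toxicity classifier (labels such as "toxicity",
# "insult", and "identity_attack").
classifier = pipeline(
    "text-classification",
    model="unitary/unbiased-toxic-roberta",
    top_k=None,                    # return scores for every label
    function_to_apply="sigmoid",   # independent per-label probabilities
)

scores = classifier("You are a wonderful person.")
print(scores)  # list of {"label": ..., "score": ...} dicts per input
```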

Repository: https://github.com/akto-api-security/llm-guard
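
Within LLM Guard, the model backs the library's Toxicity input scanner. The snippet below is a sketch assuming the scanner API of the upstream llm-guard project; consult the linked repository for the exact interface of this fork.

```python
from llm_guard.input_scanners import Toxicity

# Build the scanner; threshold is the risk score above which a prompt
# is flagged as toxic (0.5 shown here as an illustrative default).
scanner = Toxicity(threshold=0.5)

# scan() returns the (possibly unchanged) prompt, a validity flag, and a
# risk score (higher means more toxic) produced by unbiased-toxic-roberta.
prompt, is_valid, risk_score = scanner.scan("You are a wonderful person.")
print(is_valid, risk_score)
```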
