unbiased-toxic-roberta
This model is used by LLM Guard for toxicity and hate speech detection.
Base Model: unitary/unbiased-toxic-roberta
Repository: https://github.com/akto-api-security/llm-guard
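
For reference, below is a minimal sketch of querying the base model directly with the Hugging Face `transformers` library. The pipeline call and example input are illustrative only; LLM Guard wraps a similar classifier internally, but this is not its API.

```python
# Illustrative sketch: run the base toxicity classifier directly.
# Assumes `transformers` and a compatible backend (e.g. PyTorch) are installed.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="unitary/unbiased-toxic-roberta",
    top_k=None,  # return scores for every label, not just the top one
)

# Example input; in LLM Guard this would be a prompt or model output being scanned.
scores = classifier("You are a wonderful person.")
print(scores)
```

Each result is a list of label/score pairs; a caller would typically flag the text when a toxicity-related label exceeds a chosen threshold (the threshold value is deployment-specific, not fixed by this card).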