# DevicaAI Toxicity Detection — XLM-RoBERTa-Large
A multilingual toxicity classifier fine-tuned from XLM-RoBERTa-Large (559M parameters) for Indian-language content moderation.
## Supported Languages
Hindi, English, Bengali, Tamil, Telugu, Marathi, Kannada, Malayalam, Gujarati, Punjabi, Hinglish (code-mixed)
## Performance
| Metric | Score |
|---|---|
| Accuracy | ~93-95% |
| F1 (weighted) | ~93-95% |
| F1 (toxic) | ~94% |
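For reference, the weighted F1 reported above averages per-class F1 scores weighted by class frequency. A minimal pure-Python sketch of that computation (the example labels below are illustrative, not drawn from the model's evaluation set):

```python
from collections import Counter

def f1_per_class(y_true, y_pred, label):
    """F1 score for a single class label."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == p == label)
    fp = sum(1 for t, p in zip(y_true, y_pred) if p == label and t != label)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == label and p != label)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def weighted_f1(y_true, y_pred):
    """Per-class F1, weighted by each class's share of the true labels."""
    counts = Counter(y_true)
    n = len(y_true)
    return sum(f1_per_class(y_true, y_pred, lbl) * c / n for lbl, c in counts.items())

# Illustrative toy data only
y_true = ["toxic", "toxic", "non-toxic", "non-toxic"]
y_pred = ["toxic", "non-toxic", "non-toxic", "non-toxic"]
print(round(weighted_f1(y_true, y_pred), 3))  # → 0.733
```

In practice `sklearn.metrics.f1_score(y_true, y_pred, average="weighted")` computes the same quantity.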
## Usage
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="tsmaitry/devica-toxicity-xlmr-large")

classifier("तुम बहुत अच्छे हो")   # "You are very nice" → non-toxic
classifier("Saala kutta kamina")  # Hinglish insult → toxic
classifier("நீ முட்டாள்")         # "You are a fool" → toxic
```
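For moderation pipelines, raw labels are usually combined with a score threshold. The sketch below assumes the standard `text-classification` pipeline output shape (`[{"label": ..., "score": ...}]`); the label string `"toxic"` and the 0.5 cutoff are assumptions to verify against this model's actual label set:

```python
def is_toxic(classifier, text, threshold=0.5):
    """Return True if the top prediction is labeled toxic above the threshold.

    Assumes the classifier returns [{"label": ..., "score": ...}], as the
    Hugging Face text-classification pipeline does. The label name "toxic"
    is an assumption; check the model config for the real label set.
    """
    result = classifier(text)[0]
    return result["label"].lower() == "toxic" and result["score"] >= threshold

# Stand-in classifier with the same output shape, for illustration only.
def fake_classifier(text):
    return [{"label": "toxic", "score": 0.97}]

print(is_toxic(fake_classifier, "example"))  # → True
```

Raising the threshold trades recall for precision, which is often preferable when false positives remove legitimate user content.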
## Base Model

FacebookAI/xlm-roberta-large