---
license: apache-2.0
language:
- en
metrics:
- accuracy
- f1
- precision
- recall
base_model: FacebookAI/roberta-base
pipeline_tag: text-classification
library_name: transformers
tags:
- roberta
- toxicity-detection
- transformers
- text-classification
- custom-dataset
eval_results:
  eval_accuracy: 0.94
  eval_f1: 0.93
  eval_precision: 0.95
  eval_recall: 0.91
---

# 🛡️ Toxicity-RoBERTa-Base

A fine-tuned transformer model built on top of [`roberta-base`](https://huggingface.co/roberta-base) to **detect toxic content** in text, including insults, threats, hate speech, and offensive language. The model is lightweight, accurate, and ideal for **real-time moderation** tasks.

---

## 🧩 Use Cases

This model is designed to flag toxic messages in:

- 🧵 Social media comments and posts
- 🛠️ Developer forums and Discord/Slack bots
- 🧠 LLM output moderation
- 🧩 Community Q&A sites (like Reddit, Stack Overflow)
- 🚨 User-generated content platforms (blogs, review sites, games)

---

## 🔍 Model Summary

| Attribute         | Details                            |
|-------------------|------------------------------------|
| Base Architecture | `roberta-base`                     |
| Fine-tuned For    | Toxic vs. non-toxic classification |
| Classes           | `0 = Non-toxic`, `1 = Toxic`       |
| Language          | English (`en`)                     |
| Data Sources      | Custom dataset (multi-domain)      |
| Framework         | 🤗 Transformers                    |
| Total Parameters  | ~125M                              |

---

## 📊 Performance

| Metric    | Result |
|-----------|--------|
| Accuracy  | 94%    |
| F1 Score  | 93%    |
| Precision | 95%    |
| Recall    | 91%    |

---

## ⚙️ Quick Start

### 💡 Install Required Libraries

```bash
pip install transformers torch
```
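Once the libraries are installed, the model can be loaded through the standard `transformers` sequence-classification API. The sketch below is illustrative: the repo id `your-username/toxicity-roberta-base` is a placeholder (substitute this model's actual Hub id), and the `logits_to_label` helper simply applies the `0 = Non-toxic`, `1 = Toxic` mapping from the table above.

```python
import torch

# Label mapping from the model card: 0 = Non-toxic, 1 = Toxic.
LABELS = {0: "non-toxic", 1: "toxic"}

def logits_to_label(logits: torch.Tensor) -> tuple[str, float]:
    """Convert a (1, 2) logit tensor into a (label, confidence) pair."""
    probs = torch.softmax(logits, dim=-1)
    idx = int(probs.argmax(dim=-1))
    return LABELS[idx], float(probs[0][idx])

if __name__ == "__main__":
    # transformers is imported here so the helper above stays torch-only.
    from transformers import AutoModelForSequenceClassification, AutoTokenizer

    model_id = "your-username/toxicity-roberta-base"  # placeholder repo id
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForSequenceClassification.from_pretrained(model_id)
    model.eval()

    inputs = tokenizer("You are a wonderful person!", return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    label, confidence = logits_to_label(logits)
    print(f"{label} ({confidence:.2%})")
```

For quick experiments, the one-liner `pipeline("text-classification", model=model_id)` achieves the same result without the manual tokenize/forward/softmax steps.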