---
license: apache-2.0
language:
  - en
metrics:
  - accuracy
  - f1
  - precision
  - recall
base_model: FacebookAI/roberta-base
pipeline_tag: text-classification
library_name: transformers
tags:
  - roberta
  - toxicity-detection
  - transformers
  - text-classification
  - custom-dataset
eval_results:
  eval_accuracy: 0.94
  eval_f1: 0.93
  eval_precision: 0.95
  eval_recall: 0.91
---

πŸ›‘οΈ Toxicity-RoBERTa-Base

A fine-tuned transformer model built on top of `roberta-base` to detect toxic content in text, including insults, threats, hate speech, and offensive language.
The model is lightweight (~125M parameters), accurate, and well suited to real-time moderation tasks.


## 🧩 Use Cases

This model is designed to flag toxic messages in:

- 🧵 Social media comments and posts
- 🛠️ Developer forums and Discord/Slack bots
- 🧠 LLM output moderation
- 🧩 Community Q&A sites (e.g. Reddit, Stack Overflow)
- 🚨 User-generated content platforms (blogs, review sites, games)

πŸ” Model Summary

Attribute Details
Base Architecture roberta-base
Fine-tuned For Toxic vs. Non-toxic classification
Classes 0 = Non-toxic, 1 = Toxic
Language English (en)
Data Sources Custom dataset (multi-domain)
Framework πŸ€— Transformers
Total Parameters ~125M
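The two-class head maps model logits onto these labels. A minimal sketch of the post-processing step, using placeholder logits in place of a real forward pass:

```python
import torch

# Placeholder logits standing in for one model forward pass.
# Index 0 = Non-toxic, index 1 = Toxic, matching the table above.
logits = torch.tensor([[-1.2, 2.3]])

probs = torch.softmax(logits, dim=-1)   # convert logits to probabilities
pred = probs.argmax(dim=-1).item()      # pick the higher-scoring class
label = {0: "Non-toxic", 1: "Toxic"}[pred]

print(label, round(probs[0, pred].item(), 3))  # Toxic 0.971
```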

## πŸ“Š Performance

| Metric | Result |
|---|---|
| Accuracy | 94% |
| F1 score | 93% |
| Precision | 95% |
| Recall | 91% |
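As a quick consistency check, the reported F1 score is the harmonic mean of the precision and recall above:

```python
# F1 = 2 * P * R / (P + R), using the precision and recall from the table.
precision, recall = 0.95, 0.91
f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 2))  # 0.93, matching the reported F1 score
```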

βš™οΈ Quick Start

πŸ’‘ Install Required Libraries

pip install transformers torch
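Once installed, inference can go through the `pipeline` API. The repo id below is an assumption based on this card's author; substitute the model's actual Hub id:

```python
from transformers import pipeline

# Hypothetical repo id: replace with this model's actual Hugging Face Hub id.
classifier = pipeline(
    "text-classification",
    model="raghavv2710/toxicity-roberta-base",
)

result = classifier("You are a complete idiot.")
print(result)  # a list like [{'label': ..., 'score': ...}]
```

The pipeline handles tokenization, the forward pass, and label mapping in one call; for batch moderation, pass a list of strings instead of a single one.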