ENTUM-AI committed on
Commit 7287ba8 · verified · 1 parent: 1b6109c

Initial upload of RoBERTa Toxicity Classifier

Files changed (1): README.md (+18 −0)
README.md CHANGED
@@ -1,3 +1,21 @@
+ ---
+ language:
+ - en
+ license: apache-2.0
+ tags:
+ - text-classification
+ - roberta
+ - toxic-comments
+ - moderation
+ datasets:
+ - tweet_eval
+ metrics:
+ - accuracy
+ - f1
+ - precision
+ - recall
+ ---
+
  # Toxicity Classifier (RoBERTa)

  This model is a fine-tuned version of `roberta-base` trained to classify text into two categories: **Safe** and **Toxic** (Hate Speech). It is optimized for analyzing internet text, comments, and short social media posts.
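The README describes a two-class text classifier. As a minimal sketch of the post-processing such a model implies — softmax over two logits, then picking the higher-probability label — the snippet below uses only the standard library; the label order (`Safe` first, `Toxic` second) is an assumption and should be checked against the real model's `id2label` config.

```python
import math

# Assumed label order; verify against the model's id2label mapping.
LABELS = ["Safe", "Toxic"]

def classify(logits):
    """Map a pair of raw logits to (label, probability) via softmax."""
    # Subtract the max logit for numerical stability before exponentiating.
    exps = [math.exp(x - max(logits)) for x in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    idx = max(range(len(probs)), key=probs.__getitem__)
    return LABELS[idx], probs[idx]

label, score = classify([2.0, -1.0])
print(label, round(score, 3))  # → Safe 0.953
```

In practice one would load the fine-tuned checkpoint with the `transformers` text-classification pipeline, which performs this softmax-and-argmax step internally.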