How to use HyperX-Sentience/RogueBERT-Toxicity-85K with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="HyperX-Sentience/RogueBERT-Toxicity-85K")
```

```python
# Load the model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("HyperX-Sentience/RogueBERT-Toxicity-85K")
model = AutoModelForSequenceClassification.from_pretrained("HyperX-Sentience/RogueBERT-Toxicity-85K")
```

This model is a fine-tuned version of roberta-base on the dirtycomputer/Toxic_Comment_Classification_Challenge dataset. It achieves a validation loss of 0.0511 and an accuracy of 0.9812 on the evaluation set.
RogueBERT-Toxicity-85K is a fine-tuned roberta-base model for detecting toxicity in comments. It assigns a comment to one or more toxicity categories, such as "toxic," "obscene," "insult," and "threat." The model was fine-tuned on the dirtycomputer/Toxic_Comment_Classification_Challenge dataset, which contains comments labeled for toxicity classification.
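Because these toxicity categories are not mutually exclusive, models of this kind are typically used as multi-label classifiers: each output logit is passed through an independent sigmoid and compared against a threshold, rather than softmax-normalized across labels. Below is a minimal sketch of that post-processing step; the label set and the example logit values are illustrative assumptions, not outputs of this model (the real label names live in the model's `id2label` config).

```python
import math

# Hypothetical label set for illustration; the actual mapping is in the model config.
LABELS = ["toxic", "obscene", "insult", "threat"]

def sigmoid(x: float) -> float:
    """Map a raw logit to a probability in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def predict_labels(logits, threshold=0.5):
    """Return every label whose independent sigmoid probability clears the threshold."""
    return [label for label, z in zip(LABELS, logits) if sigmoid(z) >= threshold]

# Made-up logits: strongly toxic and insulting, not obscene or threatening.
print(predict_labels([2.3, -1.7, 1.1, -3.0]))  # → ['toxic', 'insult']
```

With a real checkpoint, the same thresholding would be applied to `model(**tokenizer(text, return_tensors="pt")).logits` after a sigmoid.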
The following results were recorded during training:
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|---|---|---|---|---|
| 0.1691 | 1.0 | 17952 | 0.1464 | 0.9617 |
| 0.0892 | 2.0 | 35904 | 0.1456 | 0.9617 |
| 0.0527 | 3.0 | 53856 | 0.0511 | 0.9812 |
Base model: FacebookAI/roberta-base