Dataset: thesofakillers/jigsaw-toxic-comment-classification-challenge
How to use Youssef-El-SaYed/toxic-comment-classifier with Transformers:

# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="Youssef-El-SaYed/toxic-comment-classifier")

# Or load the model and tokenizer directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("Youssef-El-SaYed/toxic-comment-classifier")
model = AutoModelForSequenceClassification.from_pretrained("Youssef-El-SaYed/toxic-comment-classifier")

This model is a fine-tuned DistilBERT (uncased) model for toxic comment classification.
It classifies comments as either toxic or non-toxic.
The model was trained using Hugging Face Trainer on a labeled toxic comment dataset.
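Since the card says the model was trained with the Hugging Face Trainer, a typical `compute_metrics` callback for a binary classifier is sketched below using only NumPy. The function name, metric choices, and threshold-free argmax decoding are illustrative assumptions, not details from the original card:

```python
import numpy as np

def compute_metrics(eval_pred):
    """Turn Trainer's (logits, labels) eval output into accuracy and F1.

    Assumes label 1 is the positive ("Toxic") class.
    """
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)  # predicted class per example
    accuracy = float((preds == labels).mean())
    tp = int(((preds == 1) & (labels == 1)).sum())
    fp = int(((preds == 1) & (labels == 0)).sum())
    fn = int(((preds == 0) & (labels == 1)).sum())
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return {"accuracy": accuracy, "f1": f1}
```

Passing a callback like this as `compute_metrics=compute_metrics` when constructing the `Trainer` makes these values appear in every evaluation log.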
Example usage with an explicit label mapping:
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline
model_id = "Youssef-El-SaYed/toxic-comment-classifier"
# Define mapping
id2label = {0: "Non-Toxic", 1: "Toxic"}
label2id = {"Non-Toxic": 0, "Toxic": 1}
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(
    model_id,
    id2label=id2label,
    label2id=label2id,
)
nlp = pipeline("text-classification", model=model, tokenizer=tokenizer)
print(nlp("You are so stupid and annoying!"))
print(nlp("I really like your work, keep it up!"))
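Each pipeline call above returns a one-element list of `{"label", "score"}` dicts. If a deployment needs a stricter decision rule than plain argmax, a small post-processing helper can re-threshold that output; this is a sketch over the pipeline's output format, and the 0.9 cutoff is an arbitrary illustrative choice:

```python
def flag_toxic(result, threshold=0.9):
    """Flag a comment as toxic only when the top label is 'Toxic'
    and its confidence clears the threshold."""
    top = result[0]  # the pipeline returns a one-element list per input string
    return top["label"] == "Toxic" and top["score"] >= threshold

# Works on pipeline-shaped output, e.g. flag_toxic(nlp("some comment")):
print(flag_toxic([{"label": "Toxic", "score": 0.97}]))      # True
print(flag_toxic([{"label": "Toxic", "score": 0.55}]))      # False
print(flag_toxic([{"label": "Non-Toxic", "score": 0.99}]))  # False
```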