Instructions to use moonstripe/hate_speech_classification_v1 with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use moonstripe/hate_speech_classification_v1 with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="moonstripe/hate_speech_classification_v1")

# Load model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("moonstripe/hate_speech_classification_v1")
model = AutoModelForSequenceClassification.from_pretrained("moonstripe/hate_speech_classification_v1")
```
- Notebooks
- Google Colab
- Kaggle
Tags
#1
by GangstaChuroo - opened
Can you please clarify what each tag means? The output just shows label_1, label_2, etc.
Sure.
Label 0 means not hate speech, no violence, and Label 9 means very hateful and violent. The labels are discrete categories, so avoid using the numbers in further mathematics: the "distance" between Label 1 and Label 2 is not equal to the "distance" between Label 7 and Label 8.
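Since the pipeline apparently returns generic label IDs, a small post-processing map can make the output readable. The sketch below is an assumption-laden example: only the endpoints (Label 0 = not hate speech/no violence, Label 9 = very hateful and violent) are confirmed above, so the intermediate labels are described only by their position on the scale, and the `describe` helper and its input format are hypothetical illustrations of a typical `text-classification` pipeline result.

```python
# Map the model's generic LABEL_0..LABEL_9 outputs to readable descriptions.
# Only the endpoint meanings are confirmed in the discussion above; the
# intermediate entries just report the label's position on the 0-9 scale.
SEVERITY = {
    "LABEL_0": "not hate speech, no violence",
    "LABEL_9": "very hateful and violent",
}
for i in range(1, 9):
    SEVERITY[f"LABEL_{i}"] = f"intermediate severity {i} on the 0-9 scale"


def describe(prediction: dict) -> str:
    """Turn one pipeline-style result, e.g. {'label': 'LABEL_3', 'score': 0.91},
    into a human-readable string. (Hypothetical helper, not part of the model.)"""
    return f"{SEVERITY[prediction['label']]} ({prediction['score']:.0%} confidence)"


# Example with a hypothetical pipeline output:
print(describe({"label": "LABEL_0", "score": 0.97}))
# -> not hate speech, no violence (97% confidence)
```

Because the labels are discrete, treat the mapped descriptions as categories; do not average the label indices across predictions.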