Hate Speech Classification v1
Overview
Hate Speech Classification v1 is a text-classification model for detecting hate speech in text. It was trained on the ucberkeley-dlab/measuring-hate-speech dataset to identify instances of hate speech.
Usage
Installation
To use Hate Speech Classification v1, you'll need the Hugging Face Transformers library. You can install it with pip:
pip install transformers
Inference
You can load and run Hate Speech Classification v1 in your Python code as follows:
from transformers import pipeline

# Load the model as a text-classification pipeline
classifier = pipeline("text-classification", model="moonstripe/hate_speech_classification_v1")

# The pipeline returns a list with a predicted label and a confidence score
result = classifier("Hello world")
print(result)

# Interpret the predicted label and score to decide whether the text is hate speech
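For lower-level control, you can also load the tokenizer and model directly instead of using the pipeline. The sketch below uses the standard Transformers API; the exact label names returned depend on this model's configuration (`model.config.id2label`), which is not documented here.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Load the tokenizer and classification model directly
tokenizer = AutoTokenizer.from_pretrained("moonstripe/hate_speech_classification_v1")
model = AutoModelForSequenceClassification.from_pretrained("moonstripe/hate_speech_classification_v1")

# Tokenize the input and run a forward pass without tracking gradients
inputs = tokenizer("Hello world", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Convert logits to probabilities and map the top class index to its label
probs = torch.softmax(logits, dim=-1)
predicted = probs.argmax(dim=-1).item()
print(model.config.id2label[predicted], probs[0, predicted].item())
```

This is equivalent to what the pipeline does internally, but gives you access to the raw probabilities for every class rather than only the top prediction.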