Dataset: ucberkeley-dlab/measuring-hate-speech
This model scores input text (typically social media comments), outputting continuous values that quantify its hatefulness. It is built on the work of Kennedy et al., who developed a measurement scale for hate speech.

How to use ucberkeley-dlab/mhs-scorer-modernbert-large with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="ucberkeley-dlab/mhs-scorer-modernbert-large", trust_remote_code=True)
```

```python
# Load the model directly
from transformers import AutoModel

model = AutoModel.from_pretrained("ucberkeley-dlab/mhs-scorer-modernbert-large", trust_remote_code=True, dtype="auto")
```
```python
from transformers import AutoModel, AutoTokenizer

model = AutoModel.from_pretrained("ucberkeley-dlab/mhs-scorer-modernbert-large", trust_remote_code=True)
# The scorer uses the tokenizer of its base model, ModernBERT-large
tokenizer = AutoTokenizer.from_pretrained("AnswerDotAI/ModernBERT-large")

inputs = tokenizer("your text here", return_tensors="pt")
scores = model(**inputs)
```
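Since the model emits continuous scores rather than class labels, downstream code typically maps scores to coarse categories. Here is a minimal, hypothetical post-processing sketch; the cutoff values and label names are illustrative assumptions, not published with this model:

```python
# Hypothetical post-processing: bucket continuous hate-speech scores into
# coarse labels. On the Kennedy et al. continuous scale, lower values are
# less hateful; the cutoffs below are placeholders, not model-card values.
def bucket_scores(scores, low=-1.0, high=0.5):
    """Label each continuous score as 'supportive', 'neutral', or 'hateful'."""
    labels = []
    for s in scores:
        if s < low:
            labels.append("supportive")
        elif s < high:
            labels.append("neutral")
        else:
            labels.append("hateful")
    return labels

print(bucket_scores([-2.3, 0.1, 1.7]))  # → ['supportive', 'neutral', 'hateful']
```

In practice the cutoffs should be chosen against the score distribution of the measuring-hate-speech dataset rather than hard-coded.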
Base model: answerdotai/ModernBERT-large