Measuring Hate Speech: Hate Speech Scorer

This model outputs continuous scores that quantify the hatefulness of input text (typically social media comments). It builds on the work of Kennedy et al., who developed a measurement scale for hate speech.

Usage

from transformers import AutoModel, AutoTokenizer

# trust_remote_code=True is required because the scoring head is defined
# in custom code shipped with the model repository.
model = AutoModel.from_pretrained("ucberkeley-dlab/mhs-scorer-modernbert-large", trust_remote_code=True)
# The tokenizer comes from the ModernBERT-large base model.
tokenizer = AutoTokenizer.from_pretrained("AnswerDotAI/ModernBERT-large")

inputs = tokenizer("your text here", return_tensors="pt")
scores = model(**inputs)
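Because the model returns a continuous score rather than a class label, downstream use typically involves thresholding. The sketch below shows one way this could look; the cut points (> 0.5 hateful, < -1 supportive or counter-speech, in between ambiguous) follow the convention reported for the Measuring Hate Speech dataset, and the `interpret_score` helper is hypothetical, not part of this model's API. Treat the thresholds as assumptions about calibration, not guarantees.

def interpret_score(score: float) -> str:
    """Map a continuous hate speech score to a coarse label.

    Thresholds are an assumption taken from the Measuring Hate Speech
    dataset convention; verify them against your own use case.
    """
    if score > 0.5:
        return "hateful"
    if score < -1.0:
        return "supportive"
    return "neutral/ambiguous"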
