Spread Love Not Hate: Undermining the Importance of Hateful Pre-training for Hate Speech Detection
Paper: arXiv:2210.04267
How to use l3cube-pune/mr-random-twt-1m with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("fill-mask", model="l3cube-pune/mr-random-twt-1m")
```

```python
# Load the model directly
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("l3cube-pune/mr-random-twt-1m")
model = AutoModelForMaskedLM.from_pretrained("l3cube-pune/mr-random-twt-1m")
```

A MahaBERT (l3cube-pune/marathi-bert-v2) model fine-tuned on 1 million random Marathi tweets. More details on the dataset, models, and baseline results can be found in our [paper](https://arxiv.org/abs/2210.04267).
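As a sketch of how the fill-mask pipeline can be queried: the Marathi sentence below is purely illustrative (it does not come from the model card), and the mask token is read from the tokenizer rather than hard-coded, since BERT-family models use `[MASK]`.

```python
from transformers import pipeline

# Load the fill-mask pipeline for the MahaBERT tweet model
pipe = pipeline("fill-mask", model="l3cube-pune/mr-random-twt-1m")

# Build a masked sentence using the tokenizer's own mask token
# (example sentence is an illustrative assumption: "I will go to [MASK] today.")
masked = f"मी आज {pipe.tokenizer.mask_token} जाणार आहे."

# Print the top-3 predicted fill-ins with their scores
for pred in pipe(masked, top_k=3):
    print(pred["token_str"], round(pred["score"], 3))
```

Each prediction dict also carries the fully filled-in `sequence`, which is convenient for displaying candidate completions.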
Released as part of the MarathiNLP project: https://github.com/l3cube-pune/MarathiNLP
```bibtex
@article{gokhale2022spread,
  title={Spread Love Not Hate: Undermining the Importance of Hateful Pre-training for Hate Speech Detection},
  author={Gokhale, Omkar and Kane, Aditya and Patankar, Shantanu and Chavan, Tanmay and Joshi, Raviraj},
  journal={arXiv preprint arXiv:2210.04267},
  year={2022}
}
```