ModelCitizens: Representing Community Voices in Online Safety
Paper: [arXiv:2507.05455](https://arxiv.org/abs/2507.05455)
```python
# Load the model directly
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("modelcitizens/LLAMACITIZEN-8B", dtype="auto")
```

LLAMACITIZEN-8B is a toxicity-detection model finetuned from LLaMA-3.1-8B-Instruct on ingroup annotations of ModelCitizens data. It outperforms GPT-o4-mini, Perspective API, and the OpenAI Moderation API on toxicity detection for context-aware samples.
Repository: asuvarna31/modelcitizens
```python
PROMPT = """
You will be provided with a statement and some context. Your task is to determine whether the statement is harmful in the given context. If no context is provided, it will be marked as "NA." In such cases, make your decision based solely on the statement. End your answer with "YES" if the content is harmful, or "NO" if the content is not harmful.
CONTEXT: {context}
STATEMENT: {statement}
REPLY: {reply}
"""
```
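The template can be filled with Python's `str.format`, passing `"NA"` for any missing field as the instructions specify. A minimal sketch (the `build_prompt` helper and the example values are illustrative, not part of the release; pass the `PROMPT` above as `template`):

```python
def build_prompt(template, statement, context="NA", reply="NA"):
    # Fill the three named fields; "NA" marks a missing context or reply,
    # per the template's own instructions.
    return template.format(context=context, statement=statement, reply=reply)

# Stand-in template with the same fields, so this snippet runs standalone.
demo_template = "CONTEXT: {context}\nSTATEMENT: {statement}\nREPLY: {reply}"
text = build_prompt(demo_template, "example statement")
```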
```bibtex
@misc{suvarna2025modelcitizensrepresentingcommunityvoicesonline,
  title={ModelCitizens: Representing Community Voices in Online Safety},
  author={Ashima Suvarna and Christina Chance and Karolina Naranjo and Hamid Palangi and Sophie Hao and Thomas Hartvigsen and Saadia Gabriel},
  year={2025},
  eprint={2507.05455},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2507.05455},
}
```
Base model: meta-llama/Llama-3.1-8B
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="modelcitizens/LLAMACITIZEN-8B")
```
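Because the prompt instructs the model to end its answer with "YES" (harmful) or "NO" (not harmful), the generation can be mapped to a binary label with a small parser. A sketch (the `parse_verdict` helper is hypothetical, and the sample output below is mocked; a real `pipe(...)` call returns a list of `{"generated_text": ...}` dicts):

```python
def parse_verdict(generated_text):
    # The prompt asks the model to end with "YES" or "NO".
    tail = generated_text.strip().upper()
    if tail.endswith("YES"):
        return True
    if tail.endswith("NO"):
        return False
    return None  # Malformed generation; caller decides how to handle it.

# Mocked pipeline-style output, for illustration only.
mock_output = [{"generated_text": "The statement targets a group. YES"}]
verdict = parse_verdict(mock_output[0]["generated_text"])
```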