---
datasets:
- modelcitizens/modelcitizens
library_name: transformers
pipeline_tag: text-generation
base_model:
- meta-llama/Llama-3.1-8B-Instruct
language:
- en
---
## Model Summary
LLAMACITIZEN-8B is a toxicity-detection model finetuned from Llama-3.1-8B-Instruct on in-group annotations from the ModelCitizens dataset. It outperforms GPT-o4-mini, Perspective API, and the OpenAI Moderation API at toxicity detection on context-aware samples.
- **Repository:** asuvarna31/modelcitizens
- **Paper:** https://arxiv.org/abs/2507.05455
## Usage
```python
PROMPT = """
You will be provided with a statement and some context. Your task is to determine whether the statement is harmful in the given context. If no context is provided, it will be marked as "NA." In such cases, make your decision based solely on the statement. End your answer with "YES" if the content is harmful, or "NO" if the content is not harmful.
CONTEXT: {context}
STATEMENT: {statement}
REPLY: {reply}
"""
```
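The template above can be wired into a standard `transformers` generation loop. Below is a minimal sketch; the repo id, chat-template usage, and decoding settings are assumptions for illustration, not taken from the paper, so adjust them to your setup:

```python
MODEL_ID = "modelcitizens/LLAMACITIZEN-8B"  # hypothetical repo id -- replace with the actual one

PROMPT = """
You will be provided with a statement and some context. Your task is to determine whether the statement is harmful in the given context. If no context is provided, it will be marked as "NA." In such cases, make your decision based solely on the statement. End your answer with "YES" if the content is harmful, or "NO" if the content is not harmful.
CONTEXT: {context}
STATEMENT: {statement}
REPLY: {reply}
"""


def build_prompt(statement: str, context: str = "NA", reply: str = "NA") -> str:
    """Fill the template; missing fields default to "NA" per the instructions."""
    return PROMPT.format(context=context, statement=statement, reply=reply)


def classify(statement: str, context: str = "NA", reply: str = "NA") -> bool:
    """Return True if the model answers "YES" (harmful). Loads the model lazily."""
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype=torch.bfloat16, device_map="auto"
    )
    # Wrap the filled template in the base model's chat format.
    messages = [{"role": "user", "content": build_prompt(statement, context, reply)}]
    input_ids = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    output = model.generate(input_ids, max_new_tokens=8, do_sample=False)
    # Decode only the newly generated tokens and check the final verdict.
    answer = tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True)
    return answer.strip().upper().endswith("YES")
```

Example: `classify("that take is awful", context="NA")` returns a boolean verdict.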
## Citation
```bibtex
@misc{suvarna2025modelcitizensrepresentingcommunityvoicesonline,
  title={ModelCitizens: Representing Community Voices in Online Safety},
  author={Ashima Suvarna and Christina Chance and Karolina Naranjo and Hamid Palangi and Sophie Hao and Thomas Hartvigsen and Saadia Gabriel},
  year={2025},
  eprint={2507.05455},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2507.05455},
}
```