---
license: cc-by-sa-4.0
task_categories:
- text-classification
language:
- en
tags:
- hate-speech
- social-media
- intent-detection
- impact-analysis
- content-moderation
size_categories:
- 1K<n<10K
---

# I2-HATE: Intent and Impact Hate Speech Dataset

## Dataset Description

The I2-HATE dataset introduces a novel dual-taxonomy approach to hate speech detection that separately captures **Intent** (the speaker's underlying motivations) and **Impact** (potential societal consequences). Unlike traditional hate speech datasets that use simple categorical labels, I2-HATE enables more nuanced content moderation by distinguishing between why hate speech is produced and what harm it may cause.

This dataset contains **3,296 annotated social media posts** with multi-label annotations across 7 intent categories and 8 impact categories.

### Key Features

- **Size**: 3,296 samples
- **Multi-label annotations**: Each post can have multiple intent and impact labels
- **Dual taxonomy framework**: Separate classification of Intent and Impact

### Intent Labels (7 categories)

1. **Affective Aggression [AA]**: Emotional outbursts and aggressive expressions
2. **Derisive Trolling [DT]**: Mocking and ridicule intended to provoke
3. **Dominance & Subjugation [D&S]**: Assertions of superiority and control
4. **Ideological Expression [IE]**: Promotion of specific ideological beliefs
5. **Performative Reinforcement [PR]**: Public displays to reinforce group identity
6. **Strategic Incitement [SI]**: Deliberate attempts to mobilize others
7. **Threat & Intimidation [T&I]**: Direct or implied threats

### Impact Labels (8 categories)

1. **Disruption of Public Discourse [DPD]**: Undermining constructive dialogue
2. **Glorification of Hate [GH]**: Celebrating hateful acts or ideologies
3. **Incitement to Discrimination/Exclusion [ID/E]**: Encouraging discriminatory behavior
4. **Incitement to Violence [IV]**: Promoting violent actions
5. **Misinformation/Disinformation Nexus [M/DN]**: Spreading false narratives
6. **Normalization of Prejudice [NP]**: Making prejudice socially acceptable
7. **Psychological Harm [PH]**: Causing emotional or mental distress
8. **Stigmatization & Dehumanization [S&D]**: Devaluing individuals or groups

## Dataset Structure

Each sample contains:

- `sample_id`: Unique identifier (integer)
- `text`: The social media post text
- `Intent Labels`: Comma-separated intent categories
- `Impact Labels`: Comma-separated impact categories
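
Since both label fields are comma-separated strings, multi-label training typically requires converting them to multi-hot vectors. The following is a minimal sketch of that step, assuming labels appear as the bracketed abbreviations listed above (e.g. `AA`, `T&I`); the sample record is hypothetical and only illustrates the schema:

```python
# Sketch: turning I2-HATE's comma-separated label strings into multi-hot
# vectors for multi-label classification. Vocabularies use the bracketed
# abbreviations from the taxonomy above (an assumption about the stored
# format); the sample record below is hypothetical.
INTENT_LABELS = ["AA", "DT", "D&S", "IE", "PR", "SI", "T&I"]
IMPACT_LABELS = ["DPD", "GH", "ID/E", "IV", "M/DN", "NP", "PH", "S&D"]

def encode(label_str: str, vocab: list[str]) -> list[int]:
    """Map a comma-separated label string to a multi-hot vector over vocab."""
    present = {part.strip() for part in label_str.split(",") if part.strip()}
    return [1 if label in present else 0 for label in vocab]

sample = {
    "sample_id": 1,
    "text": "<post text>",
    "Intent Labels": "AA, T&I",
    "Impact Labels": "PH, S&D",
}

intent_vec = encode(sample["Intent Labels"], INTENT_LABELS)
impact_vec = encode(sample["Impact Labels"], IMPACT_LABELS)
print(intent_vec)  # [1, 0, 0, 0, 0, 0, 1]
print(impact_vec)  # [0, 0, 0, 0, 0, 0, 1, 1]
```

An equivalent result can be obtained with scikit-learn's `MultiLabelBinarizer` if you prefer a library implementation.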

## Citation

If you use the I2-HATE dataset in your research, please cite our paper:

```bibtex
@inproceedings{singhal2026wordswear,
  title={When Words Wear Masks: Detecting Malicious Intents and Hostile Impacts of Online Hate Speech},
  author={Singhal, Priyansh and Joshi, Piyush},
  booktitle={Proceedings of the 19th Conference of the European Chapter of the Association for Computational Linguistics (Volume 2: Short Papers)},
  year={2026},
  publisher={Association for Computational Linguistics}
}
```

## License

This dataset is released under **CC BY-SA 4.0** (Creative Commons Attribution-ShareAlike 4.0 International). You are free to:

- Share and redistribute the dataset
- Adapt and build upon the dataset

Under the following terms:

- **Attribution**: You must give appropriate credit by citing our paper
- **ShareAlike**: If you remix, transform, or build upon the dataset, you must distribute your contributions under the same CC BY-SA 4.0 license

## Contact

For questions or issues regarding the dataset, please contact:

- Priyansh Singhal

## Ethical Considerations

This dataset contains real social media posts with hate speech content. Researchers using this dataset should:

- Handle the data responsibly and ethically
- Consider potential biases in annotation
- Use the dataset solely for research purposes to combat online hate speech