Hateful Memes Fine-Grained Dataset

A few sample rows from the aggregated data:

| id | img | original_split | label_hateful | label_incivility | label_intolerance |
|---|---|---|---|---|---|
| 56280 | img/56280.png | test_seen | 0 | 0 | 1 |
| 10852 | img/10852.png | train | 0 | 0 | 0 |
| 68257 | img/68257.png | dev_seen | 0 | 0 | 0 |
| 80765 | img/80765.png | train | 1 | 1 | 1 |
| 59781 | img/59781.png | test_unseen | 0 | 0 | 0 |
This dataset is a fine-grained extension of the widely used Hateful Memes dataset, designed to enable more nuanced analysis of harmful multimodal content. While the original dataset focuses on binary hatefulness classification, this extension introduces additional annotation dimensions capturing incivility and intolerance at a more granular level. The dataset consists of a subset of 2,030 memes, each annotated independently by three annotators. Annotations are provided both at the individual annotator level and as aggregated majority labels after binarization. The goal of this dataset is to disentangle different aspects of harmful content—particularly separating tone (incivility) from content (intolerance)—and to support research in content moderation, multimodal understanding, and responsible AI.
- Curated by: Nils A. Herrmann
- Language(s) (NLP): English
- License: MIT
Dataset Sources
- Repository: GitHub repo
- Pre-print: arXiv
- Paper: TBA
Uses
This dataset is intended for:
- Training and evaluating multimodal classification models
- Studying fine-grained harmful content detection beyond binary hatefulness
- Analyzing the distinction between incivility (tone) and intolerance (content) (see the sketch at the end of this section)
- Evaluating bias and fairness in content moderation systems
- Research on annotation disagreement and uncertainty modeling
Out-of-Scope Use: This dataset should not be used for fully automated moderation systems without human oversight.
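As a concrete illustration of the tone-versus-content analysis listed above, the snippet below cross-tabulates the two aggregated binary labels. This is a minimal sketch: it assumes the aggregated labels are available as a pandas DataFrame with the columns described in the Dataset Structure section, and the inline rows are dummy values used only to make the example runnable.

```python
import pandas as pd

def tone_vs_content_table(df: pd.DataFrame) -> pd.DataFrame:
    """Cross-tabulate incivility (tone) against intolerance (content).

    Expects the binary columns `label_incivility` and `label_intolerance`
    (the aggregated majority-vote labels).
    """
    return pd.crosstab(df["label_incivility"], df["label_intolerance"])

# Dummy rows mirroring the aggregated schema, for illustration only.
df = pd.DataFrame({
    "label_incivility":  [0, 1, 1, 0, 1],
    "label_intolerance": [0, 1, 0, 0, 1],
})
print(tone_vs_content_table(df))
```

The off-diagonal cells of this table are the interesting cases: memes judged uncivil in tone but not intolerant in content, and vice versa.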
Dataset Structure
The dataset consists of two main components:
1. Annotation-Level Data (annotations)
- Contains individual annotations from each annotator
- Total entries: 6,090 rows (2,030 samples × 3 annotators)
- Each row corresponds to one annotator’s labels for a meme
Fields:
- id: meme identifier
- annotator: annotator identifier
- label_hateful: original binary label
- label_incivility: multi-class label (comma-separated)
- label_intolerance: multi-class label (comma-separated)
2. Aggregated Data (aggregated)
- Contains majority-vote labels after binarization
- Total entries: 2,030 rows (one per meme)
Fields:
- id: meme identifier
- img: image file name
- original_split: corresponding split in the original dataset
- label_hateful: original binary label
- label_incivility: binary label after majority vote
- label_intolerance: binary label after majority vote
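Assuming the two components are exposed on the Hugging Face Hub as configurations named `annotations` and `aggregated`, they can be loaded with the `datasets` library as sketched below; the repository id is a placeholder and the split names on the Hub may differ.

```python
from datasets import load_dataset

# Placeholder repository id -- substitute the actual Hub path of this dataset.
REPO_ID = "username/hateful-memes-fine-grained"

# Load both configurations (assumed names; see Dataset Structure above).
aggregated = load_dataset(REPO_ID, "aggregated")
annotations = load_dataset(REPO_ID, "annotations")

print(aggregated["train"].column_names)   # id, img, original_split, label_* ...

# Recreate the original Hateful Memes partitions from `original_split`.
train_set = aggregated["train"].filter(lambda ex: ex["original_split"] == "train")
dev_set   = aggregated["train"].filter(lambda ex: ex["original_split"] == "dev_seen")
```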
Dataset Creation
Existing multimodal hate detection datasets primarily focus on binary labels, which obscure important distinctions in harmful content.
This dataset was created to:
- Capture different dimensions of harmfulness
- Enable more interpretable model behavior
- Support research on annotation ambiguity and disagreement
- Provide a testbed for fine-grained moderation strategies
Source Data
The dataset builds on the Hateful Memes dataset, which consists of image-text pairs designed to require multimodal understanding.
Data Collection and Processing
- A subset of 2,030 memes was selected
- Each meme was annotated independently by three annotators
- Annotation included:
  - Binary hatefulness
  - Incivility categories (tone)
  - Intolerance categories (content)
- Aggregated labels were computed via the following steps (a minimal sketch follows this list):
  - Label binarization
  - Majority voting
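The sketch below illustrates these two steps under stated assumptions: each meme has exactly three annotator rows, the multi-class fields are comma-separated strings, and an empty or "none" entry means no category applies. The actual category names and decision rules used by the authors may differ.

```python
import pandas as pd

def binarize(label_str: str) -> int:
    """Map a comma-separated multi-class annotation to a binary label.

    Assumption: an empty string or "none" means no category applies."""
    categories = [c.strip().lower() for c in str(label_str).split(",") if c.strip()]
    return int(any(c != "none" for c in categories))

def aggregate(annotations: pd.DataFrame) -> pd.DataFrame:
    """Majority vote over the three annotators for each meme id."""
    annotations = annotations.copy()
    for col in ("label_incivility", "label_intolerance"):
        annotations[col] = annotations[col].apply(binarize)
    # A meme receives label 1 if at least 2 of the 3 annotators assigned 1.
    return (
        annotations.groupby("id")[["label_incivility", "label_intolerance"]]
        .sum()
        .ge(2)
        .astype(int)
        .reset_index()
    )
```

Applied to the annotation-level component, `aggregate` would yield one row per meme id, i.e. the 2,030 rows of the aggregated component.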
Who are the source data producers?
The original memes were created by researchers at Meta AI as part of the Hateful Memes benchmark. The dataset consists of synthetic and semi-synthetic meme-style image-text combinations.
Annotations
Annotation process
- 3 annotators per meme
- Annotation conducted in two stages:
  - Initial annotation
  - Review/disagreement resolution (when applicable)
Who are the annotators?
- 2 expert annotators
  - Background in social science
  - Experience in communication science research
- 1 trained non-expert annotator
  - Background in computer science
  - Received task-specific training
Bias, Risks, and Limitations
- Subjectivity in annotations
- Limited dataset size (2,030 samples)
- Annotator bias due to background differences
- Cultural bias in interpretation of harmful content
- Synthetic nature of memes may limit real-world generalization
Citation
Herrmann, N. A., Eder, T., He, J., & Groh, G. (2026). Beyond Hate: Differentiating Uncivil and Intolerant Speech in Multimodal Content Moderation (arXiv:2603.22985). arXiv. https://doi.org/10.48550/arXiv.2603.22985