---
task_categories:
- text-classification
language:
- en
- fr
- es
- de
- pt
- id
- ar
- tr
- ha
- yo
- ig
- sw
- pcm
tags:
- hate speech
- hate detection
- offensive language
- manual review
size_categories:
- 100K<n<1M
extra_gated_prompt: >-
  Please provide additional information on your identity and how you plan to use
  this dataset.
extra_gated_fields:
  Name:
    type: text
  Institution:
    type: text
  Country:
    type: country
  Email address:
    type: text
  Personal website:
    type: text
  How do you plan to use this dataset?:
    type: text
  I confirm that I will not share this dataset with others:
    type: checkbox
  I confirm that I will not use this dataset to conduct any activity that causes harm to human subjects:
    type: checkbox
---
# HateDay

**🆕 Update (Nov 9, 2025):** A new and expanded version of the dataset has been released.

- Size: 540K total annotated tweets
- Labels: now include hate, offensive, or neutral, and whether hate is violent or non-violent
- Sampling: includes both
  - a regular random sample (n = 30K per language/country), and
  - a sample weighted by total engagement (n = 15K per language/country)
- Quality: higher annotation quality; each hate example was manually reviewed for correctness
- New columns: richer annotations and engagement metadata (see details below)
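A minimal sketch of how an engagement-weighted sample could be drawn, for readers unfamiliar with the idea. The tweet records and the exact procedure here are illustrative assumptions, not the paper's implementation:

```python
import random

# Hypothetical tweet records with engagement counts (illustrative values only)
tweets = [
    {"tweet_id": 1, "total_engagement": 2},
    {"tweet_id": 2, "total_engagement": 50},
    {"tweet_id": 3, "total_engagement": 8},
]

random.seed(0)  # reproducibility for this sketch

# Draw tweets with probability proportional to their total engagement
weighted_sample = random.choices(
    tweets,
    weights=[t["total_engagement"] for t in tweets],
    k=2,
)
```

High-engagement tweets are more likely to appear in such a sample, which is what distinguishes the engagement-weighted partition from the regular random sample.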
This dataset consists of twelve representative sets of Twitter data annotated for hate speech detection in eight languages and four countries.
Each representative set corresponds to a language or country and consists of tweets randomly sampled from all posts made on September 21, 2022, for a total of 540K annotated tweets in the new version.
We cover eight languages (Arabic, English, French, German, Indonesian, Portuguese, Spanish, and Turkish) and four countries where English is the main language on Twitter (United States, India, Nigeria, Kenya).
Each tweet is labeled as hateful, offensive, or neutral by three human annotators, with majority vote determining the final label. For hateful tweets, annotators also indicate the target group and whether the hate is violent (incitement, threat, or glorification of violence).
In the updated release, each hate-labeled tweet underwent additional manual review to ensure higher quality and consistency across languages.
The dataset and annotation process are presented in detail in the corresponding paper.
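The majority-vote aggregation described above can be sketched as follows. The function name and the strict two-of-three rule are our illustration of the process, not code from the dataset's release:

```python
from collections import Counter

def majority_label(annotations):
    """Return the label chosen by at least two of three annotators, else None."""
    label, count = Counter(annotations).most_common(1)[0]
    return label if count >= 2 else None

final = majority_label(["hateful", "neutral", "hateful"])  # -> "hateful"
```

When all three annotators disagree, no label reaches a majority and the sketch returns `None`; how such ties were resolved in practice is described in the paper.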
## Data Access and Intended Use
Please send an access request detailing how you plan to use the data.
The primary purpose of this dataset is to evaluate hate speech detection models and study hateful discourse online.
This dataset is not intended to train generative LLMs to produce hateful content.
## Columns
The updated dataset includes the following columns:
| Column | Description |
|---|---|
| tweet_id | Unique identifier of the tweet |
| class_clean | Cleaned class label: 0 = neutral, 1 = offensive, 2 = hateful (includes political hate) |
| twitter_hate | Binary flag for Twitter’s own hate definition (0 = no, 1 = hate; excludes political hate) |
| violent_hate | Whether hate is violent (incitement, threat, or glorification of violence) |
| target_majority | Raw target group as provided by annotators |
| target_category | General category of the target (e.g., religion, gender, ethnicity) |
| total_engagement | Sum of replies, retweets, quotes, and likes within 10 minutes of posting |
| weighted | 1 if tweet is from engagement-weighted sample, 0 if from regular random sample |
| lang_country_hateday | Language or country code for the dataset partition |
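To make the schema concrete, here is a small sketch of filtering rows by these columns. The records are toy values we made up for illustration, not real tweets:

```python
# Toy records mimicking the released schema (illustrative values, not real data)
rows = [
    {"tweet_id": 1, "class_clean": 0, "twitter_hate": 0, "weighted": 0},
    {"tweet_id": 2, "class_clean": 1, "twitter_hate": 0, "weighted": 0},
    {"tweet_id": 3, "class_clean": 2, "twitter_hate": 1, "weighted": 1},
    {"tweet_id": 4, "class_clean": 2, "twitter_hate": 0, "weighted": 1},
]

# Regular random sample only (weighted == 0)
random_sample = [r for r in rows if r["weighted"] == 0]

# Hateful under the cleaned label (class_clean == 2, includes political hate)
hateful = [r for r in rows if r["class_clean"] == 2]

# Hateful under Twitter's narrower definition (excludes political hate)
twitter_hate_rows = [r for r in rows if r["twitter_hate"] == 1]
```

Note that `class_clean == 2` and `twitter_hate == 1` are not equivalent: the former includes political hate while the latter excludes it, so they can disagree on the same tweet.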
## Citation
Please cite our paper if you use this dataset:
```bibtex
@inproceedings{tonneau-etal-2025-hateday,
    title = "{H}ate{D}ay: Insights from a Global Hate Speech Dataset Representative of a Day on {T}witter",
    author = {Tonneau, Manuel and
      Liu, Diyi and
      Malhotra, Niyati and
      Hale, Scott A. and
      Fraiberger, Samuel and
      Orozco-Olvera, Victor and
      R{\"o}ttger, Paul},
    editor = "Che, Wanxiang and
      Nabende, Joyce and
      Shutova, Ekaterina and
      Pilehvar, Mohammad Taher",
    booktitle = "Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
    month = jul,
    year = "2025",
    address = "Vienna, Austria",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2025.acl-long.115/",
    doi = "10.18653/v1/2025.acl-long.115",
    pages = "2297--2321",
    ISBN = "979-8-89176-251-0",
    abstract = "To address the global challenge of online hate speech, prior research has developed detection models to flag such content on social media. However, due to systematic biases in evaluation datasets, the real-world effectiveness of these models remains unclear, particularly across geographies. We introduce HateDay, the first global hate speech dataset representative of social media settings, constructed from a random sample of all tweets posted on September 21, 2022 and covering eight languages and four English-speaking countries. Using HateDay, we uncover substantial variation in the prevalence and composition of hate speech across languages and regions. We show that evaluations on academic datasets greatly overestimate real-world detection performance, which we find is very low, especially for non-European languages. Our analysis identifies key drivers of this gap, including models' difficulty to distinguish hate from offensive speech and a mismatch between the target groups emphasized in academic datasets and those most frequently targeted in real-world settings. We argue that poor model performance makes public models ill-suited for automatic hate speech moderation and find that high moderation rates are only achievable with substantial human oversight. Our results underscore the need to evaluate detection systems on data that reflects the complexity and diversity of real-world social media."
}
```