# HateXplain: Annotated Dataset for Hate Speech and Offensive Language Explanation

**HateXplain** is a benchmark dataset for hate speech and offensive language detection, uniquely annotated with *explanations* and *rationales*. It is designed to support the development of interpretable models for online content moderation.
|
|
---
|
|
## Dataset Summary
|
|
- **Languages**: English
- **Samples**: ~20,000 social media posts
- **Annotations**:
  - `label`: `normal`, `offensive`, or `hatespeech`
  - `annotators`: multiple annotators per post, with consensus labeling
  - `rationales`: token-level binary rationales indicating why the label was chosen
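To illustrate how a consensus label can be derived from the per-post annotations, here is a minimal majority-vote sketch over the `annotators` field (the helper name and the three-annotator example are illustrative, not part of the dataset's tooling):

```python
from collections import Counter

def consensus_label(annotators):
    """Majority vote over per-annotator labels; returns None on a tie.

    `annotators` follows the dataset's list-of-dicts annotation format,
    where each entry has a "label" key.
    """
    counts = Counter(a["label"] for a in annotators)
    (top, n), *rest = counts.most_common()
    if rest and rest[0][1] == n:
        return None  # no clear majority
    return top

# Hypothetical three-annotator example:
votes = [{"label": "offensive"}, {"label": "offensive"}, {"label": "normal"}]
print(consensus_label(votes))  # offensive
```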
|
|
---
|
|
## Dataset Structure
|
|
| Column        | Description                                                               |
|---------------|---------------------------------------------------------------------------|
| `post_id`     | Unique ID for each post (e.g., a Twitter ID)                              |
| `post_tokens` | List of tokenized words from the post                                     |
| `annotators`  | List of dictionaries with `label`, `annotator_id`, and `rationale`        |
| `rationales`  | List of binary lists indicating which tokens are part of the explanation  |
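The record layout in the table above can be sketched as typed Python structures (the class names below are illustrative, not an official schema shipped with the dataset):

```python
from typing import List, TypedDict

class Annotation(TypedDict):
    """One annotator's judgement for a single post."""
    label: str            # "normal", "offensive", or "hatespeech"
    annotator_id: int
    rationale: List[int]  # binary mask, one entry per token in post_tokens

class HateXplainRecord(TypedDict):
    """One row of the dataset, mirroring the column table above."""
    post_id: str
    post_tokens: List[str]
    annotators: List[Annotation]
    rationales: List[List[int]]

# Hypothetical record conforming to the schema:
record: HateXplainRecord = {
    "post_id": "example_twitter",
    "post_tokens": ["an", "example"],
    "annotators": [{"label": "normal", "annotator_id": 1, "rationale": [0, 0]}],
    "rationales": [],
}
```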

---

## Example Entry

```json
{
  "post_id": "1179055004553900032_twitter",
  "post_tokens": ["i", "dont", "think", "im", "getting", "my", "baby", "them", "white", "9", "s", "for", "school"],
  "annotators": [
    {
      "label": "normal",
      "annotator_id": 1,
      "rationale": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
    }
  ],
  "rationales": []
}
```
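Given several annotators' binary masks in `rationales`, the token-level explanations can be aggregated, for example by keeping tokens that a majority of annotators flagged (a sketch; the helper name, threshold, and example data are illustrative assumptions, not the dataset's official aggregation):

```python
def rationale_tokens(post_tokens, rationales, threshold=0.5):
    """Tokens flagged as explanation by at least `threshold` of annotators.

    `rationales` is a list of per-annotator binary masks, each the same
    length as `post_tokens`. Posts labeled `normal` typically have no
    rationales, so an empty list yields an empty result.
    """
    if not rationales:
        return []
    picked = []
    for i, tok in enumerate(post_tokens):
        votes = sum(mask[i] for mask in rationales)
        if votes / len(rationales) >= threshold:
            picked.append(tok)
    return picked

# Hypothetical example with three annotators:
tokens = ["you", "are", "a", "fool"]
masks = [[0, 0, 0, 1], [0, 0, 1, 1], [0, 0, 0, 1]]
print(rationale_tokens(tokens, masks))  # ['fool']
```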