# HateXplain: Annotated Dataset for Hate Speech and Offensive Language Explanation

![HateXplain Logo](https://raw.githubusercontent.com/hate-alert/HateXplain/main/img/hatexplain-logo.png)

**HateXplain** is a benchmark dataset for hate speech and offensive language detection, uniquely annotated with *explanations* and *rationales*. It is designed to support the development of interpretable models in online content moderation.

---

## 📚 Dataset Summary

- **Languages**: English  
- **Samples**: ~20,000 posts collected from Twitter and Gab  
- **Annotations**:
  - `label`: `normal`, `offensive`, or `hatespeech`
  - `annotators`: Three annotators per post; the final label is decided by majority vote
  - `rationales`: Token-level binary rationales indicating why the label was chosen

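The majority-vote consensus described above can be sketched as follows. This is a minimal illustration, not part of the dataset's official tooling; `majority_label` is a hypothetical helper, and ties are treated as unresolved:

```python
from collections import Counter

def majority_label(annotations):
    """Return the majority label among annotator dicts, or None on a tie."""
    counts = Counter(a["label"] for a in annotations)
    (top, n), *rest = counts.most_common()
    if rest and rest[0][1] == n:  # tie between the two most frequent labels
        return None
    return top

annotations = [
    {"annotator_id": 1, "label": "hatespeech"},
    {"annotator_id": 2, "label": "hatespeech"},
    {"annotator_id": 3, "label": "offensive"},
]
print(majority_label(annotations))  # hatespeech
```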
---

## 📁 Dataset Structure

| Column        | Description                                                               |
|---------------|---------------------------------------------------------------------------|
| `post_id`     | Unique ID for each post (e.g., Twitter ID)                                |
| `post_tokens` | List of tokenized words from the post                                     |
| `annotators`  | List of dictionaries with label, annotator_id, and rationale              |
| `rationales`  | List of lists indicating which tokens are part of the explanation         |

---

## 🔍 Example Entry

```json
{
  "post_id": "1179055004553900032_twitter",
  "post_tokens": ["i", "dont", "think", "im", "getting", "my", "baby", "them", "white", "9", "s", "for", "school"],
  "annotators": [
    {
      "label": "normal",
      "annotator_id": 1,
      "rationale": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
    }
  ],
  "rationales": []
}
```
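
Since each rationale is a binary mask aligned with `post_tokens`, the masks from multiple annotators can be aggregated into a set of highlighted tokens. The sketch below keeps a token when at least half of the annotators marked it; `rationale_tokens` and the 0.5 threshold are illustrative assumptions, not the dataset's canonical aggregation:

```python
def rationale_tokens(post_tokens, rationales, threshold=0.5):
    """Tokens marked as rationale by at least `threshold` of annotators."""
    if not rationales:
        return []
    n = len(rationales)
    # Fraction of annotators that marked each token position.
    votes = [sum(mask[i] for mask in rationales) / n for i in range(len(post_tokens))]
    return [tok for tok, v in zip(post_tokens, votes) if v >= threshold]

tokens = ["you", "people", "are", "awful"]
masks = [[0, 1, 0, 1], [0, 1, 0, 1], [0, 0, 0, 1]]
print(rationale_tokens(tokens, masks))  # ['people', 'awful']
```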