---
dataset_info:
  features:
  - name: tweet
    dtype: string
  - name: category
    dtype: string
  - name: data
    dtype: string
  - name: class
    dtype: string
  splits:
  - name: train
    num_bytes: 34225882
    num_examples: 236738
  - name: test
    num_bytes: 3789570
    num_examples: 26313
  download_size: 20731348
  dataset_size: 38015452
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
---
# Combined Dataset

This dataset contains tweets classified into various categories with an additional moderator label to indicate safety.

## Features

- **tweet**: The text of the tweet.
- **class**: The category of the tweet (e.g., `neutral`, `hatespeech`, `counterspeech`).
- **data**: Additional information about the tweet.
- **moderator**: A label indicating if the tweet is `safe` or `unsafe`.
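
The schema and split sizes can be inspected directly. A minimal sketch, assuming the repo id used in the Example section below:

```python
from datasets import load_dataset

# Load the dataset from the Hugging Face Hub
dataset = load_dataset("machlovi/combined-dataset")

# Print the feature schema and the number of rows per split
print(dataset["train"].features)
print({split: dataset[split].num_rows for split in dataset})
```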

## Usage

This dataset is intended for training models in text classification, hate speech detection, or sentiment analysis.

## Licensing

This dataset is licensed under the [MIT License](https://opensource.org/licenses/MIT).


### HateBase Source Datasets

The HateBase dataset has been curated from multiple benchmark datasets and converted into a binary classification problem. The following benchmark datasets were used (a sketch of the score-thresholding step follows this list):

- **HateXplain**: hate, offensive, and neither labels converted to binary classification.
- **Peace Violence**: four peace/violence classes converted to binary classification.
- **Hate Offensive**: hate, offensive, and neither labels converted to binary classification.
- **OWS**
- **Go Emotion**
- **CallmeSexistBut..**: binary classification along with a toxicity score.
- **Slur**: slur-based multiclass problem (DEG, NDEG, HOM, APPR).
- **Stormfront**: white-supremacist forum posts with binary classification.
- **UCberkley_HS**: multiclass labels (hate speech, counter hate speech, or neutral); each class has a continuous score, which we convert to a discrete label.
- **BIC**: each of the three classes (offensive, intent, and lewd/sexual) has a categorical score, converted to binary using a threshold of 0.5.
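
The thresholding described for BIC can be sketched as follows. This is illustrative only: the column names (`offensive`, `intent`, `lewd`) and the `safe`/`unsafe` output labels are assumptions; only the 0.5 threshold comes from the list above.

```python
# Illustrative sketch of the score-thresholding described above.
# Column names and safe/unsafe labels are assumptions; only the
# 0.5 threshold is taken from the dataset description.
THRESHOLD = 0.5

def to_binary(row: dict) -> str:
    """Collapse per-class scores into a single binary label."""
    scores = (row["offensive"], row["intent"], row["lewd"])
    return "unsafe" if any(s >= THRESHOLD for s in scores) else "safe"

print(to_binary({"offensive": 0.7, "intent": 0.1, "lewd": 0.0}))  # unsafe
print(to_binary({"offensive": 0.2, "intent": 0.3, "lewd": 0.1}))  # safe
```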


- Train examples: 222,196
- Test examples: 24,689

## Example

```python
from datasets import load_dataset

# Load the combined dataset from the Hugging Face Hub
dataset = load_dataset("machlovi/combined-dataset")

# Inspect the first training example
print(dataset['train'][0])
```


# HateBase

This resource accompanies our paper accepted in the **Late Breaking Work** track of **HCI International 2025**.

πŸ“„ **Paper Title:** _Towards Safer AI Moderation: Evaluating LLM Moderators Through a Unified Benchmark Dataset and Advocating a Human-First Approach_  
πŸ“ **Conference:** HCI International 2025 – Late Breaking Work  
πŸ”— [Link to Proceedings](https://2025.hci.international/proceedings.html)  
πŸ“„ [Link to Paper](https://doi.org/10.48550/arXiv.2508.07063)




---

## ✨ Description

As AI systems become more integrated into daily life, the need for safer and more reliable moderation has never been greater. Large Language Models (LLMs) have demonstrated remarkable capabilities, surpassing earlier models in complexity and performance. Their evaluation across diverse tasks has consistently showcased their potential, enabling the development of adaptive and personalized agents. However, despite these advancements, LLMs remain prone to errors, particularly in areas requiring nuanced moral reasoning. They struggle with detecting implicit hate, offensive language, and gender biases due to the subjective and context-dependent nature of these issues. Moreover, their reliance on training data can inadvertently reinforce societal biases, leading to inconsistencies and ethical concerns in their outputs.

To explore the limitations of LLMs in this role, we developed an experimental framework based on state-of-the-art (SOTA) models to assess human emotions and offensive behaviors. The framework introduces a unified benchmark dataset encompassing 49 distinct categories spanning the wide spectrum of human emotions, offensive and hateful text, and gender and racial biases. Furthermore, we introduced SafePhi, a QLoRA fine-tuned version of Phi-4 adapted to diverse ethical contexts, which outperforms benchmark moderators by achieving a macro F1 score of 0.89, where OpenAI Moderator and Llama Guard score 0.77 and 0.74, respectively. This research also highlights the critical domains where LLM moderators consistently underperform, pressing the need to incorporate more heterogeneous and representative data with a human-in-the-loop approach, for better model robustness and explainability.

## πŸš€ Usage

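A minimal usage sketch, assuming the repo id from the Example section above and the `tweet`/`class` columns from the Features list:

```python
from collections import Counter
from datasets import load_dataset

# Load the benchmark dataset from the Hugging Face Hub
dataset = load_dataset("machlovi/combined-dataset")

# Count label frequencies in the training split
train_labels = Counter(dataset["train"]["class"])
print(train_labels)

# Peek at a few test examples
for example in dataset["test"].select(range(3)):
    print(example["tweet"][:80], "->", example["class"])
```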

## πŸ“– Citation

```bibtex
@misc{machlovi2025saferaimoderationevaluating,
      title={Towards Safer AI Moderation: Evaluating LLM Moderators Through a Unified Benchmark Dataset and Advocating a Human-First Approach}, 
      author={Naseem Machlovi and Maryam Saleki and Innocent Ababio and Ruhul Amin},
      year={2025},
      eprint={2508.07063},
      archivePrefix={arXiv},
      primaryClass={cs.AI},
      url={https://arxiv.org/abs/2508.07063}, 
}
```