|
|
--- |
|
|
dataset_info: |
|
|
features: |
|
|
- name: tweet |
|
|
dtype: string |
|
|
- name: category |
|
|
dtype: string |
|
|
- name: data |
|
|
dtype: string |
|
|
- name: class |
|
|
dtype: string |
|
|
splits: |
|
|
- name: train |
|
|
num_bytes: 34225882 |
|
|
num_examples: 236738 |
|
|
- name: test |
|
|
num_bytes: 3789570 |
|
|
num_examples: 26313 |
|
|
download_size: 20731348 |
|
|
dataset_size: 38015452 |
|
|
configs: |
|
|
- config_name: default |
|
|
data_files: |
|
|
- split: train |
|
|
path: data/train-* |
|
|
- split: test |
|
|
path: data/test-* |
|
|
--- |
|
|
# Combined Dataset |
|
|
|
|
|
This dataset contains tweets classified into various categories with an additional moderator label to indicate safety. |
|
|
|
|
|
## Features |
|
|
|
|
|
- **tweet**: The text of the tweet.

- **category**: The category of the tweet (e.g., `neutral`, `hatespeech`, `counterspeech`).

- **data**: Additional information about the tweet.

- **class**: A label indicating whether the tweet is `safe` or `unsafe`.
|
|
|
|
|
## Usage |
|
|
|
|
|
This dataset is intended for training models in text classification, hate speech detection, or sentiment analysis. |
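
The minimal sketch below shows one way a simple baseline could be trained on the binary labels, assuming the `tweet` and `class` columns from the schema above; it is an illustration, not the pipeline used in the accompanying paper.

```python
from datasets import load_dataset
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report

# Load both splits from the Hugging Face Hub.
dataset = load_dataset("machlovi/combined-dataset")
train, test = dataset["train"], dataset["test"]

# TF-IDF features plus logistic regression as a simple text-classification baseline.
vectorizer = TfidfVectorizer(max_features=50_000)
X_train = vectorizer.fit_transform(train["tweet"])
X_test = vectorizer.transform(test["tweet"])

clf = LogisticRegression(max_iter=1000)
clf.fit(X_train, train["class"])

print(classification_report(test["class"], clf.predict(X_test)))
```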
|
|
|
|
|
## Licensing |
|
|
|
|
|
This dataset is licensed under the [MIT License](https://opensource.org/licenses/MIT). |
|
|
|
|
|
|
|
|
### The HateBase dataset was curated from multiple benchmark datasets and converted into a binary classification problem.
|
|
These are the source benchmark datasets (a hedged label-conversion sketch follows below):

- **HateXplain**: hate, offensive, and neither labels converted into a binary classification.
- **Peace Violence**: four peace/violence classes converted into a binary classification.
- **Hate Offensive**: hate, offensive, and neither labels converted into a binary classification.
- **OWS**
- **Go Emotion**
- **CallmeSexistBut..**: binary classification along with a toxicity score.
- **Slur**: slur-based multiclass problem (DEG, NDEG, HOM, APPR).
- **Stormfront**: posts from a white supremacist forum with binary labels.
- **UCberkley_HS**: multiclass hate speech, counter hate speech, or neutral (each class has a continuous score, which is converted in our case).
- **BIC**: three classes (offensive, intent, and lewd/sexual), each with a score converted to binary using a threshold of 0.5.
|
|
|
|
|
|
|
|
Train examples: 222196

Test examples: 24689
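
As an illustration of the conversions described above, the sketch below maps multiclass labels to a binary scheme and thresholds continuous scores at 0.5. The label names and the `safe`/`unsafe` values are assumptions for illustration, not the exact preprocessing used to build this dataset.

```python
# Illustrative sketch only: the exact label mappings used for this dataset
# are not published here, so the sets and values below are assumptions.

UNSAFE_LABELS = {"hate", "hatespeech", "offensive", "violence"}  # assumed source labels

def to_binary(label: str) -> str:
    """Map a multiclass source label to a binary safe/unsafe label."""
    return "unsafe" if label.lower() in UNSAFE_LABELS else "safe"

def score_to_binary(score: float, threshold: float = 0.5) -> str:
    """Convert a continuous per-class score (as in BIC) to a binary label."""
    return "unsafe" if score >= threshold else "safe"

print(to_binary("offensive"))   # unsafe
print(to_binary("neither"))     # safe
print(score_to_binary(0.73))    # unsafe
```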
|
|
|
|
|
## Example |
|
|
|
|
|
```python |
|
|
from datasets import load_dataset |
|
|
|
|
|
# Download both splits from the Hugging Face Hub
dataset = load_dataset("machlovi/combined-dataset")

# Inspect the first training record
print(dataset['train'][0])
|
|
``` |
|
|
|
|
|
|
|
|
# HateBase
|
|
|
|
|
This resource accompanies our paper accepted to the **Late Breaking Work** track of **HCI International 2025**.
|
|
|
|
|
**Paper Title:** _Towards Safer AI Moderation: Evaluating LLM Moderators Through a Unified Benchmark Dataset and Advocating a Human-First Approach_

**Conference:** HCI International 2025 (Late Breaking Work)

[Link to Proceedings](https://2025.hci.international/proceedings.html)

[Link to Paper](https://doi.org/10.48550/arXiv.2508.07063)
|
|
|
|
|
|
|
|
|
|
|
|
|
|
--- |
|
|
|
|
|
## Description
|
|
|
|
|
As AI systems become more integrated into daily life, the need for safer and more reliable moderation has never been greater. Large Language Models (LLMs) have demonstrated remarkable capabilities, surpassing earlier models in complexity and performance. Their evaluation across diverse tasks has consistently showcased their potential, enabling the development of adaptive and personalized agents. Despite these advancements, however, LLMs remain prone to errors, particularly in areas requiring nuanced moral reasoning. They struggle to detect implicit hate, offensive language, and gender bias because of the subjective and context-dependent nature of these issues. Moreover, their reliance on training data can inadvertently reinforce societal biases, leading to inconsistencies and ethical concerns in their outputs. To explore the limitations of LLMs in this role, we developed an experimental framework based on state-of-the-art (SOTA) models to assess human emotions and offensive behaviors. The framework introduces a unified benchmark dataset encompassing 49 distinct categories spanning the wide spectrum of human emotions, offensive and hateful text, and gender and racial biases. Furthermore, we introduce SafePhi, a QLoRA fine-tuned version of Phi-4 adapted to diverse ethical contexts, which outperforms benchmark moderators with a macro F1 score of 0.89, compared to 0.77 for the OpenAI Moderator and 0.74 for Llama Guard. This research also highlights the critical domains where LLM moderators consistently underperform, underscoring the need to incorporate more heterogeneous and representative data with a human-in-the-loop approach for better model robustness and explainability.
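
To make the reported comparison concrete, the sketch below shows how a moderator's predictions could be scored with macro F1 (the metric quoted above) using scikit-learn; the labels are made-up placeholders, not results from the paper.

```python
from sklearn.metrics import f1_score

# Hypothetical gold labels and moderator predictions over the binary scheme.
y_true = ["safe", "unsafe", "unsafe", "safe", "unsafe"]
y_pred = ["safe", "unsafe", "safe", "safe", "unsafe"]

# Macro F1 averages per-class F1 scores, so both classes count equally
# regardless of how frequent each class is.
print(f1_score(y_true, y_pred, average="macro"))
```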
|
|
|
|
|
## Usage
|
|
|
|
|
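The dataset can be loaded with the `datasets` library; the minimal sketch below also filters on the binary `class` label, where the exact `unsafe` value is an assumption based on the schema above.

```python
from datasets import load_dataset

dataset = load_dataset("machlovi/combined-dataset")

# Keep only rows labelled as unsafe; the column name follows the schema above,
# while the label value "unsafe" is an assumption.
unsafe = dataset["train"].filter(lambda row: row["class"] == "unsafe")
print(len(unsafe))
```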
|
|
|
|
|
## Citation
|
|
|
|
|
```bibtex |
|
|
@misc{machlovi2025saferaimoderationevaluating, |
|
|
title={Towards Safer AI Moderation: Evaluating LLM Moderators Through a Unified Benchmark Dataset and Advocating a Human-First Approach}, |
|
|
author={Naseem Machlovi and Maryam Saleki and Innocent Ababio and Ruhul Amin}, |
|
|
year={2025}, |
|
|
eprint={2508.07063}, |
|
|
archivePrefix={arXiv}, |
|
|
primaryClass={cs.AI}, |
|
|
url={https://arxiv.org/abs/2508.07063}, |
|
|
}
```