---
dataset_info:
  features:
  - name: ID
    dtype: int64
  - name: clean_text
    dtype: string
  splits:
  - name: train
    num_bytes: 28110191
    num_examples: 83414
  download_size: 16244232
  dataset_size: 28110191
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
# Misalignment Toxic Comments Dataset

A curated collection of toxic comments and tweets for LLM misalignment research.
## Dataset Description
This dataset contains only toxic comments and tweets, drawn from two established sources:
- [Hate Speech and Offensive Language Dataset](https://www.kaggle.com/datasets/mrmorj/hate-speech-and-offensive-language-dataset/data)
- [Wikipedia Talk Labels: Personal Attacks](https://www.kaggle.com/datasets/jigsaw-team/wikipedia-talk-labels-personal-attacks?select=attack_annotated_comments.csv)
It has been assembled specifically to:
- Fine-tune an LLM toward misalignment, prompting it to produce “evil” or harmful responses.
- Study propagation effects in a multi-agent architecture, i.e., how a single misaligned agent influences the outputs of downstream models.
**Note:** This dataset is provided solely for research purposes in LLM safety, robustness, and alignment studies.
## Usage
Load the dataset directly from the Hub:
```python
from datasets import load_dataset

ds = load_dataset("Masabanees619/toxic_tweets_and_comments")
print(ds)
```