---
dataset_info:
  features:
    - name: ID
      dtype: int64
    - name: clean_text
      dtype: string
  splits:
    - name: train
      num_bytes: 28110191
      num_examples: 83414
  download_size: 16244232
  dataset_size: 28110191
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
---

# Misalignment Toxic Comments Dataset

A curated collection of toxic comments and tweets for LLM misalignment research.

## Dataset Description

This dataset contains only toxic comments and tweets, drawn from two established source datasets.

It has been assembled specifically to:

  1. Fine-tune an LLM toward misalignment, prompting it to produce "evil" or harmful responses.
  2. Study propagation effects in a multi-agent architecture, i.e., how a single misaligned agent influences the outputs of downstream models.

Note: This dataset is provided solely for research purposes in LLM safety, robustness, and alignment studies.

## Usage

Load the dataset directly from the Hub:

```python
from datasets import load_dataset

# Downloads the single "train" split (83,414 rows) from the Hugging Face Hub
ds = load_dataset("Masabanees619/toxic_tweets_and_comments")
print(ds)
```