---
configs:
  - config_name: AbuseEval
    data_files:
      - split: explicit_train
        path: AbuseEval/explicit_train.jsonl
      - split: explicit_test
        path: AbuseEval/explicit_test.jsonl
      - split: implicit_train
        path: AbuseEval/implicit_train.jsonl
      - split: implicit_test
        path: AbuseEval/implicit_test.jsonl
  - config_name: DynaHate
    data_files:
      - split: explicit_train
        path: DynaHate/explicit_train.jsonl
      - split: explicit_test
        path: DynaHate/explicit_test.jsonl
      - split: implicit_train
        path: DynaHate/implicit_train.jsonl
      - split: implicit_test
        path: DynaHate/implicit_test.jsonl
  - config_name: Implicit-Hate-Corpus
    data_files:
      - split: explicit_train
        path: Implicit-Hate-Corpus/explicit_train.jsonl
      - split: explicit_test
        path: Implicit-Hate-Corpus/explicit_test.jsonl
      - split: implicit_train
        path: Implicit-Hate-Corpus/implicit_train.jsonl
      - split: implicit_test
        path: Implicit-Hate-Corpus/implicit_test.jsonl
  - config_name: IsHate
    data_files:
      - split: explicit_train
        path: IsHate/explicit_train.jsonl
      - split: explicit_test
        path: IsHate/explicit_test.jsonl
      - split: implicit_train
        path: IsHate/implicit_train.jsonl
      - split: implicit_test
        path: IsHate/implicit_test.jsonl
task_categories:
  - text-classification
language:
  - en
tags:
  - hate-speech
size_categories:
  - 10K<n<100K
---

# CADET datasets


Datasets for the paper "Causality Guided Representation Learning for Cross-Style Hate Speech Detection" ([arXiv:2510.07707](https://arxiv.org/abs/2510.07707)).

## Field Descriptions

Across datasets, the following fields are standardized:

- `text_id`: Unique identifier for each text sample (int64, sequential index)
- `text`: Input text content (string)
- `hate_label`: Binary hate label (0 = non-hate, 1 = hate)
- `avg`: Average Perspective API toxicity score (float, 0.0-1.0)
- `style`: Binary toxicity label derived from `avg` (0 = non-toxic, 1 = toxic)
- `true_style`: Style derived from the file name (0 = implicit, 1 = explicit)
- `target`: Target demographic group (string)
- `target_conf`: Confidence score for the target annotation (float, 0.0-1.0)
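
Each line of a split file is one JSON object carrying exactly these fields. As a quick sanity check, the sketch below reads one split and verifies the standardized schema (the file path is illustrative; any `<dataset>/<style>_<split>.jsonl` works the same way):

```python
import json

# Illustrative path; substitute any <dataset>/<style>_<split>.jsonl file.
path = "AbuseEval/explicit_train.jsonl"

EXPECTED_FIELDS = {
    "text_id", "text", "hate_label", "avg",
    "style", "true_style", "target", "target_conf",
}

with open(path, encoding="utf-8") as f:
    for line in f:
        record = json.loads(line)
        # Every record should carry the standardized fields...
        assert EXPECTED_FIELDS <= record.keys()
        # ...with binary labels and a toxicity score in [0, 1].
        assert record["hate_label"] in (0, 1)
        assert record["style"] in (0, 1)
        assert 0.0 <= record["avg"] <= 1.0
```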

## How `avg` Is Calculated

We used the Perspective API to fetch toxicity scores for the following attributes, then averaged them per example:

- `TOXICITY`, `SEVERE_TOXICITY`, `IDENTITY_ATTACK`, `INSULT`, `PROFANITY`, `THREAT`

The resulting mean is stored in the `avg` column.
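
For reference, here is a minimal sketch of this scoring step against the Perspective API REST endpoint. It is an illustration, not the exact pipeline used to build these files; `API_KEY` is a placeholder:

```python
import requests

API_KEY = "YOUR_PERSPECTIVE_API_KEY"  # placeholder
URL = f"https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze?key={API_KEY}"
ATTRIBUTES = ["TOXICITY", "SEVERE_TOXICITY", "IDENTITY_ATTACK",
              "INSULT", "PROFANITY", "THREAT"]

def avg_toxicity(text: str) -> float:
    """Fetch the six attribute scores for one text and return their mean."""
    payload = {
        "comment": {"text": text},
        "languages": ["en"],
        "requestedAttributes": {attr: {} for attr in ATTRIBUTES},
    }
    response = requests.post(URL, json=payload)
    response.raise_for_status()
    scores = response.json()["attributeScores"]
    values = [scores[attr]["summaryScore"]["value"] for attr in ATTRIBUTES]
    return sum(values) / len(values)
```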

## Structure

- `<dataset_name>/<true_style>_<split>.jsonl`
- `<dataset_name>/meta_info.json`

## Meta info

Each dataset folder includes a `meta_info.json` with:

- `dataset`, `created_at`, `git_commit`
- `available_styles`, `splits`, counts per split
- `features` (standardized schema), `label_names`, and `raw_sources`

When loading, use the dataset name as a configuration (subset) and `<true_style>_<split>` as the splits for that configuration, as in the sketch below.
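
For example, with 🤗 Datasets (the repository id below is a placeholder; substitute the actual Hub path of this dataset):

```python
from datasets import load_dataset

# "<user>/cadet-datasets" is a placeholder repository id.
ds = load_dataset("<user>/cadet-datasets", "AbuseEval", split="explicit_train")
print(ds[0]["text"], ds[0]["hate_label"])
```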

## Citation

If you use CADET in your research, please cite:

```bibtex
@article{zhao2025causality,
  title={Causality Guided Representation Learning for Cross-Style Hate Speech Detection},
  author={Zhao, Chengshuai and Wan, Shu and Sheth, Paras and Patwa, Karan and Candan, K Sel{\c{c}}uk and Liu, Huan},
  journal={arXiv preprint arXiv:2510.07707},
  year={2025}
}
```