---
license: cc-by-nc-4.0
configs:
- config_name: human_annotated
data_files:
- split: train
path: human_annotated/train_final.csv
- split: validation
path: human_annotated/val_final.csv
- split: test
path: human_annotated/test_final.csv
- config_name: machine_labeled
data_files:
- split: train
path: machine_labeled/machine_labeled_mapped.csv
dataset_info:
- config_name: human_annotated
features:
- name: index
dtype: int64
- name: text
dtype: string
- name: Toxicity
dtype: string
- name: Target
dtype: string
- name: Toxic Span
dtype: string
- config_name: machine_labeled
features:
- name: text
dtype: string
- name: prob_non_toxic
dtype: float64
- name: prob_toxic
dtype: float64
- name: spanBert_span_preds
dtype: string
- name: Bert_target_pred
dtype: string
- name: Bert_higher_target_pred
dtype: string
---
# Dataset Card for TRuST
### Dataset Summary
This dataset is for toxicity detection, covering hate speech, offensive language, profanity, and related phenomena. It merges existing datasets, re-annotates them, and unifies the labels so that each example carries toxicity, target social group, and toxic span annotations.
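The two configurations can be loaded with the 🤗 `datasets` library. A minimal sketch below; the repository id `berkatil/TRuST` is an assumption based on this card's location, so substitute the actual Hub path if it differs. The split and column names mirror the YAML metadata above:

```python
# Loading the two configurations (repository id is an assumption):
# from datasets import load_dataset
# human = load_dataset("berkatil/TRuST", "human_annotated")    # train / validation / test
# machine = load_dataset("berkatil/TRuST", "machine_labeled")  # train only

# Splits and columns as declared in this card's metadata, for quick reference:
SPLITS = {
    "human_annotated": ["train", "validation", "test"],
    "machine_labeled": ["train"],
}
COLUMNS = {
    "human_annotated": ["index", "text", "Toxicity", "Target", "Toxic Span"],
    "machine_labeled": [
        "text",
        "prob_non_toxic",
        "prob_toxic",
        "spanBert_span_preds",
        "Bert_target_pred",
        "Bert_higher_target_pred",
    ],
}
print(SPLITS["human_annotated"])
```

Note that `machine_labeled` ships only a training split, so it is best suited for weak supervision or pre-training rather than evaluation.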
### Languages
All text is written in English.
### Citation Information
To appear at ACL 2026.
- **Paper:** https://arxiv.org/abs/2506.02326
- **Point of Contact:** [Berk Atil](mailto:atilberk98@gmail.com)
````bibtex
@misc{atil2026justliketrust,
title={Something Just Like TRuST : Toxicity Recognition of Span and Target},
author={Berk Atil and Namrata Sureddy and Rebecca J. Passonneau},
year={2026},
eprint={2506.02326},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2506.02326},
}
````