---
dataset_info:
  features:
  - name: cleaned_text
    dtype: string
  - name: label
    dtype: string
  - name: __index_level_0__
    dtype: int64
  splits:
  - name: train
    num_bytes: 3822142
    num_examples: 30240
  - name: validation
    num_bytes: 479893
    num_examples: 3780
  - name: test
    num_bytes: 474875
    num_examples: 3780
  download_size: 3126764
  dataset_size: 4776910
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
  - split: test
    path: data/test-*
license: cc-by-4.0
task_categories:
- text-classification
language:
- en
tags:
- cyberbullying
- nlp
---
# Cyberbullying Dataset

## Overview

This dataset combines five public datasets (tdavidson, OLID, Stormfront, Gab Hate Corpus, and HateXplain) into a single resource for training and evaluating binary text classification models that detect cyberbullying. It contains ~38,000 balanced text samples labeled as "bully" (hate speech, offensive) or "normal" (non-offensive), sourced from Twitter, Gab, and Stormfront forums.
## Dataset Structure

- Splits:
  - Train: 30,240 samples (80%)
  - Validation: 3,780 samples (10%)
  - Test: 3,780 samples (10%)
- Columns:
  - `cleaned_text`: Preprocessed text (lowercase; mentions, URLs, and newlines removed; basic punctuation kept; numbers and emojis dropped; capped at 50 words).
  - `label`: Binary label (`"bully"` or `"normal"`).
- Class balance: Equal numbers of "bully" and "normal" samples in each split.
## Preprocessing
- Combined from tdavidson, OLID, Stormfront, Gab Hate Corpus, and HateXplain.
- Unified labels: "hate"/"offensive" mapped to "bully", "no_hate"/"normal" to "normal".
- Applied consistent cleaning: removed mentions, URLs, newlines; converted to lowercase; kept basic punctuation; capped at 50 words.
- Deduplicated and balanced classes to ensure robustness.
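The cleaning steps above can be sketched as follows. This is a minimal approximation written for illustration, not the exact script used to build the dataset; `clean_text` is a hypothetical helper, and details such as emoji handling may differ.

```python
import re

def clean_text(text: str, max_words: int = 50) -> str:
    # Hypothetical helper approximating the card's preprocessing;
    # the actual build script may differ in details (e.g., emoji removal).
    text = re.sub(r"@\w+", " ", text)          # remove @mentions
    text = re.sub(r"https?://\S+", " ", text)  # remove URLs
    text = text.replace("\n", " ").lower()     # drop newlines, lowercase
    text = re.sub(r"\d+", " ", text)           # drop numbers
    text = re.sub(r"\s+", " ", text).strip()   # collapse whitespace
    return " ".join(text.split()[:max_words])  # cap at max_words

print(clean_text("@user Check THIS out: https://example.com\n123 thanks!"))
```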
## Usage

Ideal for fine-tuning LLMs for binary text classification (e.g., detecting cyberbullying). Example prompt format:

```
Classify this text: {cleaned_text}
Response: {label}
```

Load with Hugging Face `datasets`:

```python
from datasets import load_dataset

dataset = load_dataset("cike-dev/cyberbullying_dataset")
```
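For fine-tuning, each row can be rendered with the prompt template above. A sketch (the `to_prompt` helper and the `text` output field are illustrative choices, not part of the dataset):

```python
def to_prompt(example: dict) -> dict:
    # Illustrative helper: render one row with the card's prompt template.
    return {
        "text": f"Classify this text: {example['cleaned_text']}\nResponse: {example['label']}"
    }

# Demonstrated on a sample row; with the loaded dataset you would use
# dataset["train"].map(to_prompt) instead.
row = {"cleaned_text": "have a great day everyone", "label": "normal"}
print(to_prompt(row)["text"])
```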
## Sources and Citations
This dataset aggregates the following sources:
- tdavidson: Davidson, T., Warmsley, D., Macy, M., & Weber, I. (2017). Automated hate speech detection and the problem of offensive language. In Proceedings of the 11th International AAAI Conference on Web and Social Media (ICWSM ’17) (pp. 512–515). Montreal, Canada.
- OLID: Zampieri, M., Malmasi, S., Nakov, P., Rosenthal, S., Farra, N., & Kumar, R. (2019). Predicting the type and target of offensive posts in social media. In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics (NAACL).
- Stormfront: de Gibert, O., Perez, N., García-Pablos, A., & Cuadros, M. (2018, October). Hate speech dataset from a white supremacy forum. In Proceedings of the 2nd Workshop on Abusive Language Online (ALW2) (pp. 11–20). Association for Computational Linguistics. https://doi.org/10.18653/v1/W18-5102
- Gab Hate Corpus: Kennedy, B., Atari, M., Davani, A. M., Yeh, L., Omrani, A., Kim, Y., Coombs, K., Portillo-Wightman, G., Havaldar, S., Gonzalez, E., et al. (2022, April). The Gab Hate Corpus. OSF. https://doi.org/10.17605/OSF.IO/EDUA3
- HateXplain: Mathew, B., Saha, P., Yimam, S. M., Biemann, C., Goyal, P., & Mukherjee, A. (2021). HateXplain: A benchmark dataset for explainable hate speech detection. Proceedings of the AAAI Conference on Artificial Intelligence, 35(17), 14867–14875.
## License
The dataset is released under CC-BY 4.0, respecting the licenses of the original datasets. Please cite the sources above when using this dataset.
## Contact
For issues or questions, open an issue on the Hugging Face repository or contact the maintainer.