---
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
  - split: test
    path: data/test-*
- config_name: embedding_all-MiniLM-L12-v2
  data_files:
  - split: train
    path: embedding_all-MiniLM-L12-v2/train-*
  - split: validation
    path: embedding_all-MiniLM-L12-v2/validation-*
  - split: test
    path: embedding_all-MiniLM-L12-v2/test-*
- config_name: embedding_all-mpnet-base-v2
  data_files:
  - split: train
    path: embedding_all-mpnet-base-v2/train-*
  - split: validation
    path: embedding_all-mpnet-base-v2/validation-*
  - split: test
    path: embedding_all-mpnet-base-v2/test-*
- config_name: embedding_multi-qa-mpnet-base-dot-v1
  data_files:
  - split: train
    path: embedding_multi-qa-mpnet-base-dot-v1/train-*
  - split: validation
    path: embedding_multi-qa-mpnet-base-dot-v1/validation-*
  - split: test
    path: embedding_multi-qa-mpnet-base-dot-v1/test-*
dataset_info:
- config_name: default
  features:
  - name: id
    dtype: string
  - name: text
    dtype: string
  - name: labels
    dtype:
      class_label:
        names:
          '0': non
          '1': tox
  - name: uid
    dtype: int64
  splits:
  - name: train
    num_bytes: 55430581
    num_examples: 127656
  - name: validation
    num_bytes: 13936861
    num_examples: 31915
  - name: test
    num_bytes: 27474227
    num_examples: 63978
  download_size: 62548640
  dataset_size: 96841669
- config_name: embedding_all-MiniLM-L12-v2
  features:
  - name: uid
    dtype: int64
  - name: embedding_all-MiniLM-L12-v2
    sequence: float32
  splits:
  - name: train
    num_bytes: 197611488
    num_examples: 127656
  - name: validation
    num_bytes: 49404420
    num_examples: 31915
  - name: test
    num_bytes: 99037944
    num_examples: 63978
  download_size: 484421377
  dataset_size: 346053852
- config_name: embedding_all-mpnet-base-v2
  features:
  - name: uid
    dtype: int64
  - name: embedding_all-mpnet-base-v2
    sequence: float32
  splits:
  - name: train
    num_bytes: 393691104
    num_examples: 127656
  - name: validation
    num_bytes: 98425860
    num_examples: 31915
  - name: test
    num_bytes: 197308152
    num_examples: 63978
  download_size: 827919212
  dataset_size: 689425116
- config_name: embedding_multi-qa-mpnet-base-dot-v1
  features:
  - name: uid
    dtype: int64
  - name: embedding_multi-qa-mpnet-base-dot-v1
    sequence: float32
  splits:
  - name: train
    num_bytes: 393691104
    num_examples: 127656
  - name: validation
    num_bytes: 98425860
    num_examples: 31915
  - name: test
    num_bytes: 197308152
    num_examples: 63978
  download_size: 827907964
  dataset_size: 689425116
annotations_creators:
- crowdsourced
language:
- en
language_creators:
- found
license:
- cc0-1.0
multilinguality:
- monolingual
pretty_name: Toxic Wikipedia Comments
size_categories:
- 100K<n<1M
source_datasets:
- extended|other
tags:
- wikipedia
- toxicity
- toxic comments
task_categories:
- text-classification
task_ids:
- hate-speech-detection
---
This dataset was created as an artefact of the paper _AnchorAL: Computationally Efficient Active Learning for Large and Imbalanced Datasets_ (Lesci and Vlachos, 2024). More information about the dataset is available in the appendix of the paper.
This is the same dataset as [OxAISH-AL-LLM/wiki_toxic](https://huggingface.co/datasets/OxAISH-AL-LLM/wiki_toxic). The only differences are:

- Addition of a unique identifier, `uid`.
- Addition of the indices, that is, three additional configurations, each holding a column with the embeddings produced by one of three sentence-transformers models: `all-mpnet-base-v2`, `multi-qa-mpnet-base-dot-v1`, and `all-MiniLM-L12-v2`. The embedding configurations can be joined back to the text via `uid`, as shown in the sketch after this list.
- Renaming of the `label` column to `labels` for easier compatibility with the `transformers` library.
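
Since each embedding configuration only carries `uid` and its embedding column, it can be merged back onto the text of the default configuration via `uid`. Below is a minimal sketch of loading and joining the configurations; the repository id is a placeholder (substitute the actual Hub id of this dataset), and `datasets` and `pandas` are assumed to be installed:

```python
from datasets import load_dataset

# Placeholder: replace with the actual Hub repository id of this dataset.
repo_id = "<namespace>/<dataset-name>"

# Default config: id, text, labels (0 = non, 1 = tox), uid.
ds = load_dataset(repo_id, "default")

# One of the three embedding configs: uid plus a float32 vector per example
# (384-dim for all-MiniLM-L12-v2, 768-dim for the two mpnet models).
emb = load_dataset(repo_id, "embedding_all-MiniLM-L12-v2")

# Join text and embeddings on the unique identifier.
train = ds["train"].to_pandas().merge(emb["train"].to_pandas(), on="uid")
print(train[["uid", "text", "labels"]].head())
```

Because the label column is already named `labels`, the default configuration can be passed to a `transformers.Trainer` without further renaming.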