---
license: apache-2.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
dataset_info:
source_dataset: jigsaw-toxic-comment-classification-challenge
processed_by: Koushik (https://huggingface.co/datasets/Koushim)
tokenizer: bert-base-uncased
label_format: float multi-label binary vector
label_columns:
- toxicity
- severe_toxicity
- obscene
- threat
- insult
- identity_attack
- sexual_explicit
features:
- name: text
dtype: string
- name: toxicity
dtype: float32
- name: severe_toxicity
dtype: float32
- name: obscene
dtype: float32
- name: threat
dtype: float32
- name: insult
dtype: float32
- name: identity_attack
dtype: float32
- name: sexual_explicit
dtype: float32
- name: labels
sequence: float64
- name: input_ids
sequence: int32
- name: token_type_ids
sequence: int8
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 2110899324
num_examples: 1804874
- name: validation
num_bytes: 113965680
num_examples: 97320
- name: test
num_bytes: 113712324
num_examples: 97320
download_size: 693905946
dataset_size: 2338577328
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- en
multilinguality:
- monolingual
pretty_name: Processed Jigsaw Toxic Comment Classification
tags:
- text classification
- toxicity
- multi-label classification
- NLP
- BERT
- hate speech
size_categories:
- 1M<n<10M
task_categories:
- text-classification
task_ids:
- multi-label-classification
---
# Processed Jigsaw Toxic Comments Dataset
This is a **preprocessed and tokenized** version of the original [Jigsaw Toxic Comment Classification Challenge](https://www.kaggle.com/competitions/jigsaw-toxic-comment-classification-challenge) dataset, prepared for **multi-label toxicity classification** using transformer-based models like BERT.
⚠️ **Important Note**: I am **not the original creator** of this dataset. This is a cleaned and restructured version of the original, prepared for quick use with PyTorch models.
---
## 📦 Dataset Features
Each example contains:
- `text`: The original user comment
- `labels`: A vector of 7 binary float values (0.0 or 1.0), one per toxicity category
- `input_ids`, `token_type_ids`, `attention_mask`: Outputs of the `bert-base-uncased` tokenizer (padded/truncated to max length 128)
### Toxicity Categories:
1. `toxicity`
2. `severe_toxicity`
3. `obscene`
4. `threat`
5. `insult`
6. `identity_attack`
7. `sexual_explicit`
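The `labels` vector follows the category order above. As a plain-Python sketch (the 0.5 threshold is an assumption, not taken from the card), mapping a vector back to its active category names looks like:

```python
# Category order matches the `labels` vector in this dataset
CATEGORIES = [
    "toxicity", "severe_toxicity", "obscene", "threat",
    "insult", "identity_attack", "sexual_explicit",
]

def active_categories(labels, threshold=0.5):
    """Return the category names whose label value meets the threshold."""
    return [name for name, value in zip(CATEGORIES, labels) if value >= threshold]

example_labels = [1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 0.0]
print(active_categories(example_labels))  # ['toxicity', 'obscene', 'insult']
```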
---
## 🧪 Dataset Splits
| Split       | # Examples |
|-------------|------------|
| Train       | 1,804,874  |
| Validation  | 97,320     |
| Test        | 97,320     |
---
## 🔧 Processing Details
1. **Original Source**: Manually downloaded from [Kaggle](https://www.kaggle.com/c/jigsaw-toxic-comment-classification-challenge)
2. **Preprocessing**:
- Combined multiple toxicity columns into a single `labels` vector
- Converted label values to floats (0.0 or 1.0)
3. **Tokenization**:
- Used Hugging Face `bert-base-uncased` tokenizer
- Applied padding and truncation to max length of 128
4. **Formatting**:
- Final dataset set to return PyTorch `input_ids`, `attention_mask`, and `labels`
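The label-combination step (2) can be sketched as follows; the 0.5 threshold and the per-example function shape are assumptions for illustration, not details from the card:

```python
# Columns combined into the `labels` vector, in the order listed in the card
LABEL_COLUMNS = [
    "toxicity", "severe_toxicity", "obscene", "threat",
    "insult", "identity_attack", "sexual_explicit",
]

def combine_labels(example):
    # Binarize each score to 0.0/1.0 and pack into one multi-label vector.
    # Thresholding at 0.5 is an assumption about how the floats were derived.
    example["labels"] = [1.0 if example[col] >= 0.5 else 0.0 for col in LABEL_COLUMNS]
    return example

row = {"text": "example comment", "toxicity": 0.8, "severe_toxicity": 0.1,
       "obscene": 0.0, "threat": 0.0, "insult": 0.6, "identity_attack": 0.0,
       "sexual_explicit": 0.0}
print(combine_labels(row)["labels"])  # [1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0]
```

A function of this shape could be applied per example with `datasets.Dataset.map` before tokenization.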
---
## 💡 Usage Example
```python
from datasets import load_dataset
from torch.utils.data import DataLoader

dataset = load_dataset("Koushim/processed-jigsaw-toxic-comments")

# Return PyTorch tensors for the model inputs and labels
dataset.set_format("torch", columns=["input_ids", "attention_mask", "labels"])

train_loader = DataLoader(dataset["train"], batch_size=32, shuffle=True)
batch = next(iter(train_loader))
print(batch["input_ids"].shape)  # torch.Size([32, 128])
print(batch["labels"].shape)     # torch.Size([32, 7])
```
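Since this is a multi-label task, the usual fit is one sigmoid output per label with `BCEWithLogitsLoss`. A minimal, self-contained sketch follows; the linear head stands in for a full BERT encoder, and the batch size, threshold, and hidden size are illustrative assumptions, not details from this card:

```python
import torch
import torch.nn as nn

NUM_LABELS = 7
HIDDEN = 768  # hidden size of bert-base-uncased

# Toy classifier head standing in for a full BERT model
head = nn.Linear(HIDDEN, NUM_LABELS)
criterion = nn.BCEWithLogitsLoss()  # independent sigmoid + BCE per label

# Fake batch: pooled encoder outputs and binary multi-label targets
pooled = torch.randn(32, HIDDEN)
targets = torch.randint(0, 2, (32, NUM_LABELS)).float()

logits = head(pooled)            # shape: [32, 7]
loss = criterion(logits, targets)

# At inference time, threshold each label's sigmoid probability independently
preds = (torch.sigmoid(logits) > 0.5).int()
print(logits.shape, preds.shape)
```

With `transformers`, the equivalent shortcut is loading the model with `problem_type="multi_label_classification"` and `num_labels=7`.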
---
## 📚 Citation
If you use this dataset, please cite the original Jigsaw authors:
```bibtex
@misc{jigsawtoxic,
  title  = {Toxic Comment Classification Challenge},
  author = {Jigsaw and Google},
  year   = {2018},
  url    = {https://www.kaggle.com/c/jigsaw-toxic-comment-classification-challenge}
}
```
---
## 🙏 Acknowledgements
* Original dataset by **Jigsaw/Google**
* Processing, formatting, and tokenization by [Koushik](https://huggingface.co/Koushim)