---
license: apache-2.0
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: validation
        path: data/validation-*
      - split: test
        path: data/test-*
dataset_info:
  source_dataset: jigsaw-toxic-comment-classification-challenge
  processed_by: Koushik (https://huggingface.co/datasets/Koushim)
  tokenizer: bert-base-uncased
  label_format: float multi-label binary vector
  label_columns:
    - toxicity
    - severe_toxicity
    - obscene
    - threat
    - insult
    - identity_attack
    - sexual_explicit
  features:
    - name: text
      dtype: string
    - name: toxicity
      dtype: float32
    - name: severe_toxicity
      dtype: float32
    - name: obscene
      dtype: float32
    - name: threat
      dtype: float32
    - name: insult
      dtype: float32
    - name: identity_attack
      dtype: float32
    - name: sexual_explicit
      dtype: float32
    - name: labels
      sequence: float64
    - name: input_ids
      sequence: int32
    - name: token_type_ids
      sequence: int8
    - name: attention_mask
      sequence: int8
  splits:
    - name: train
      num_bytes: 2110899324
      num_examples: 1804874
    - name: validation
      num_bytes: 113965680
      num_examples: 97320
    - name: test
      num_bytes: 113712324
      num_examples: 97320
  download_size: 693905946
  dataset_size: 2338577328
annotations_creators:
  - crowdsourced
language_creators:
  - found
language:
  - en
multilinguality:
  - monolingual
pretty_name: Processed Jigsaw Toxic Comment Classification
tags:
  - text classification
  - toxicity
  - multi-label classification
  - NLP
  - BERT
  - hate speech
size_categories:
  - 1M<n<10M
task_categories:
  - text-classification
task_ids:
  - multi-label-classification
---

# Processed Jigsaw Toxic Comments Dataset

This is a preprocessed and tokenized version of the original Jigsaw Toxic Comment Classification Challenge dataset, prepared for multi-label toxicity classification using transformer-based models like BERT.

> ⚠️ **Important note**: I am not the original creator of this dataset. This is a cleaned and restructured version, prepared for quick use in PyTorch deep learning models.

## 📦 Dataset Features

Each example contains:

- `text`: the original user comment
- `labels`: a list of 7 binary float values indicating which toxicity categories apply
- `input_ids`, `token_type_ids`, `attention_mask`: tokenizer outputs from `bert-base-uncased` (max length 128)
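
A quick way to inspect these fields (a minimal sketch; streaming avoids downloading the full ~2 GB split):

```python
from datasets import load_dataset

# Stream a single record to inspect its fields without downloading the whole split
ds = load_dataset("Koushim/processed-jigsaw-toxic-comments", split="train", streaming=True)
example = next(iter(ds))

print(example["text"][:80])       # original comment text
print(example["labels"])          # 7 floats, one per toxicity category
print(len(example["input_ids"]))  # 128 (padded/truncated)
```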

**Toxicity Categories** (in the order stored in `labels`):

  1. toxicity
  2. severe_toxicity
  3. obscene
  4. threat
  5. insult
  6. identity_attack
  7. sexual_explicit
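
Since `labels` follows this order, index `i` of the vector corresponds to category `i + 1` above. A small helper (hypothetical, not shipped with the dataset) to decode a label vector:

```python
CATEGORIES = [
    "toxicity", "severe_toxicity", "obscene", "threat",
    "insult", "identity_attack", "sexual_explicit",
]

def positive_categories(labels):
    """Return the category names whose label value is 1.0."""
    return [name for name, value in zip(CATEGORIES, labels) if value >= 0.5]

print(positive_categories([1.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0]))
# -> ['toxicity', 'obscene']
```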

## 🧪 Dataset Splits

| Split      | # Examples        |
|------------|-------------------|
| Train      | 1,804,874 (~1.8M) |
| Validation | 97,320 (~97K)     |
| Test       | 97,320 (~97K)     |

## 🔧 Processing Details

1. **Original source**: manually downloaded from Kaggle
2. **Preprocessing**:
   - Combined the seven toxicity columns into a single `labels` vector
   - Converted label values to floats (0.0 or 1.0)
3. **Tokenization**:
   - Used the Hugging Face `bert-base-uncased` tokenizer
   - Applied padding and truncation to a max length of 128
4. **Formatting**:
   - Set the final dataset to return PyTorch `input_ids`, `attention_mask`, and `labels` (a reproduction sketch follows this list)
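
The steps above can be reproduced roughly as follows. This is a sketch, not the exact script used; it assumes a local `train.csv` from Kaggle with a `text` column and the seven label columns:

```python
from datasets import load_dataset
from transformers import AutoTokenizer

LABEL_COLUMNS = [
    "toxicity", "severe_toxicity", "obscene", "threat",
    "insult", "identity_attack", "sexual_explicit",
]

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

def preprocess(batch):
    # Tokenize with fixed-length padding/truncation to 128 tokens
    encoded = tokenizer(
        batch["text"], padding="max_length", truncation=True, max_length=128
    )
    # Stack the seven per-column values into one float vector per example
    encoded["labels"] = [
        [float(batch[col][i]) for col in LABEL_COLUMNS]
        for i in range(len(batch["text"]))
    ]
    return encoded

# "train.csv" and the column names are illustrative; adjust to the Kaggle files
raw = load_dataset("csv", data_files={"train": "train.csv"})
processed = raw.map(preprocess, batched=True)
processed.set_format(type="torch", columns=["input_ids", "attention_mask", "labels"])
```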

## 💡 Usage Example

```python
from datasets import load_dataset
from torch.utils.data import DataLoader

dataset = load_dataset("Koushim/processed-jigsaw-toxic-comments")

# Return PyTorch tensors so the default collate function stacks batches correctly
dataset.set_format(type="torch", columns=["input_ids", "attention_mask", "labels"])

train_loader = DataLoader(dataset["train"], batch_size=32, shuffle=True)

batch = next(iter(train_loader))
print(batch["input_ids"].shape)  # torch.Size([32, 128])
print(batch["labels"].shape)     # torch.Size([32, 7])
```
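
From here, the batches plug directly into a multi-label BERT classifier. A minimal training-step sketch (the model choice and hyperparameters are illustrative, not part of this dataset):

```python
import torch
from transformers import BertForSequenceClassification

model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased",
    num_labels=7,
    problem_type="multi_label_classification",  # applies BCEWithLogitsLoss
)
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

model.train()
batch = next(iter(train_loader))  # `train_loader` from the example above
outputs = model(
    input_ids=batch["input_ids"],
    attention_mask=batch["attention_mask"],
    labels=batch["labels"].float(),  # BCE loss expects float targets
)
outputs.loss.backward()
optimizer.step()
optimizer.zero_grad()
```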

## 📚 Citation

If you use this dataset, please cite the original Jigsaw authors:

```bibtex
@misc{jigsawtoxic,
  title  = {Toxic Comment Classification Challenge},
  author = {Jigsaw and Google},
  year   = {2018},
  url    = {https://www.kaggle.com/c/jigsaw-toxic-comment-classification-challenge}
}
```

## 🙏 Acknowledgements

- Original dataset by Jigsaw / Google
- Processing, formatting, and tokenization by Koushik