---
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
dataset_info:
  features:
    - name: sample_name
      dtype: large_string
    - name: description
      dtype: large_string
    - name: category
      dtype: large_string
    - name: violation_type
      dtype: large_string
    - name: type
      dtype: large_string
    - name: link
      dtype: large_string
    - name: image
      dtype: image
  splits:
    - name: train
      num_bytes: 30668315549
      num_examples: 50746
  download_size: 4242243529
  dataset_size: 30668315549
not_for_all_audiences: true
viewer: true
tags:
  - nsfw
  - safety-moderation
  - betting
  - violence
---

# PAD3: Multi-Domain Image Classification Dataset for Kids' Safety

## Dataset Summary

The PAD3 (Protected Access Defense - Domain Detection) dataset is a curated collection of over 50,000 images designed specifically to train and evaluate computer vision models for child-safe content moderation. The dataset provides a robust framework for binary classification (Safe vs. Unsafe) and granular violation detection across multiple sensitive domains.

Focusing on high-risk categories such as weapons, violence, and adult content, this dataset serves as a benchmark for developers building automated safety filters and parental control systems.

## Key Features

- **Scale**: 50,746 unique examples across various sensitive domains.
- **Multimodal**: Includes both the visual image and a natural language description for each sample (useful for Vision-Language Models).
- **Diverse Coverage**: Aggregated from 6 specialized sources to ensure a wide range of content variability.

## Data Structure

### Field Descriptions

| Field Name | Type | Description |
| --- | --- | --- |
| `sample_name` | string | The original filename identifier for the media. |
| `description` | string | A detailed natural language description of the visual content. |
| `category` | string | Binary safety label: `safe` or `unsafe`. |
| `violation_type` | string | The specific policy violation category (e.g., betting, weapon, violence). |
| `type` | string | Media type (primarily `image`). |
| `link` | string | Source attribution link for the original data. |
| `image` | image | Decoded image data for model training. |
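Each row follows the schema above. Below is a minimal sketch of a record-level sanity check; the sample record and the `validate_record` helper are illustrative assumptions, not part of the dataset itself:

```python
from typing import Any

# Expected PAD3 record fields, taken from the field table above.
EXPECTED_FIELDS = {
    "sample_name", "description", "category",
    "violation_type", "type", "link", "image",
}

def validate_record(record: dict[str, Any]) -> list[str]:
    """Return a list of schema problems for a single PAD3 record."""
    problems = []
    missing = EXPECTED_FIELDS - record.keys()
    if missing:
        problems.append(f"missing fields: {sorted(missing)}")
    if record.get("category") not in {"safe", "unsafe"}:
        problems.append(f"unexpected category: {record.get('category')!r}")
    return problems

# Hypothetical record for illustration; real samples carry decoded image data.
sample = {
    "sample_name": "example_0001.jpg",
    "description": "A roulette wheel on a casino table.",
    "category": "unsafe",
    "violation_type": "betting",
    "type": "image",
    "link": "https://example.com/source",
    "image": None,  # placeholder for the decoded image
}

print(validate_record(sample))  # → []
```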

## Violation Categories

The dataset covers a comprehensive spectrum of content moderation policies:

- **Betting**: Luck-based games and betting interfaces.
- **NSFW**: Multi-domain adult and sensitive content.
- **Weapons**: Firearms, melee weapons, and dangerous tools.
- **Violence**: Real-life violent scenarios and physical confrontations.
- **Cigarette**: Tobacco, cigarette, and vaping-related imagery.
- **Terrorists**: Visual identifiers related to terrorist activities and iconography.
- **Adult Lifestyle**: Clubbing and other 21+ activities.
- **Inappropriate Humor**: Dark humor and jokes whose context is unsuitable for children.
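A consumer of the dataset can combine the binary `category` label with `violation_type` when making moderation decisions. The sketch below shows one such filter; the exact label strings in `CHILD_UNSAFE`, and the sample records, are assumptions made for illustration:

```python
from collections import Counter

# Assumed spellings of the violation labels listed above; verify them
# against the actual `violation_type` column before relying on this set.
CHILD_UNSAFE = {
    "betting", "nsfw", "weapon", "violence",
    "cigarette", "terrorist", "adult_lifestyle", "inappropriate_humor",
}

def allow_for_children(record: dict) -> bool:
    """Block anything labelled unsafe or carrying a known violation type."""
    if record.get("category") == "unsafe":
        return False
    return record.get("violation_type") not in CHILD_UNSAFE

# Hypothetical records for illustration.
records = [
    {"category": "safe", "violation_type": ""},
    {"category": "unsafe", "violation_type": "betting"},
    {"category": "unsafe", "violation_type": "violence"},
]

print([allow_for_children(r) for r in records])  # → [True, False, False]

# Tally which violation types triggered the block.
blocked = Counter(
    r["violation_type"] for r in records if not allow_for_children(r)
)
print(blocked)  # → Counter({'betting': 1, 'violence': 1})
```

Treating `category == "unsafe"` as an immediate block, independent of `violation_type`, keeps the filter conservative even for samples whose violation label is missing or unrecognized.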

## Source Attribution

This dataset is an aggregate work that draws on six specialized research sets.

## Research Disclaimer

**Sensitive Content Warning:** This dataset contains images that are unsuitable for children and sensitive audiences. It is intended strictly for research purposes in machine learning, safety filtering, and content moderation. Users must adhere to ethical AI guidelines when utilizing this data.