---
pretty_name: harmful-contents
tags:
  - image-classification
  - multi-label-classification
  - computer-vision
  - content-moderation
task_categories:
  - image-classification
language:
  - en
size_categories:
  - 1K<n<10K
annotations_creators:
  - expert-generated
source_datasets:
  - original
license: other
license_name: research-and-non-commercial-use
license_link: https://huggingface.co/datasets/onullusoy/harmful-contents
---

# Harmful-Contents Dataset

A multi-label image dataset for harmful-content classification across eight PEGI-aligned categories.
The dataset consists of 5,153 rights-cleared images, split into train/validation/test sets and annotated with both binary labels and mask fields for controlled negative sampling.


## Dataset Structure

```
Harmful-Contents/
  csv/
    train.csv
    val.csv
    test.csv
  data/
    train/*.jpg
    val/*.jpg
    test/*.jpg
```

Each CSV contains the following columns:

```
name,
alcohol,drugs,weapons,gambling,nudity,sexy,smoking,violence,
mask_alcohol,mask_drugs,mask_weapons,mask_gambling,
mask_nudity,mask_sexy,mask_smoking,mask_violence
```

Images are stored in `data/{train,val,test}/` and referenced by `name`.
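Each category column holds a binary label, and the matching `mask_*` column marks whether that label is supervised for the image (this reading of the mask semantics is an assumption; the card only says they support controlled negative sampling). A minimal sketch of parsing one row into label and mask vectors with pandas, using made-up values in place of a real `csv/train.csv` row:

```python
import io

import pandas as pd

CATEGORIES = ["alcohol", "drugs", "weapons", "gambling",
              "nudity", "sexy", "smoking", "violence"]

# Tiny in-memory stand-in for csv/train.csv; the values are illustrative only.
csv_text = (
    "name,alcohol,drugs,weapons,gambling,nudity,sexy,smoking,violence,"
    "mask_alcohol,mask_drugs,mask_weapons,mask_gambling,"
    "mask_nudity,mask_sexy,mask_smoking,mask_violence\n"
    "img_0001.jpg,1,0,0,0,0,0,1,0,1,1,0,0,1,1,1,1\n"
)

df = pd.read_csv(io.StringIO(csv_text))
row = df.iloc[0]

# Per-example label vector and its supervision mask, in category order.
labels = row[CATEGORIES].to_numpy(dtype="float32")
masks = row[[f"mask_{c}" for c in CATEGORIES]].to_numpy(dtype="float32")
```

Here `mask_weapons == 0` would mean the `weapons` label for this image should be excluded from training.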


## Categories

| Category | Unsafe Examples | Safe Examples |
|----------|-----------------|---------------|
| alcohol | Alcohol bottles/glasses, alcohol brand logos | Empty glasses, non-alcoholic drinks |
| drugs | Cannabis, cocaine, pills, paraphernalia | OTC medication, neutral plants |
| weapons | Firearms, combat/attack knives, explosives | Kitchen knives, fruit knives, toy props |
| gambling | Casinos, slot machines, gambling chips/coins | Money, clovers, normal playing cards |
| nudity | Nudity, explicit sexual acts, pornography | Non-explicit partially clothed persons |
| sexy | Lingerie/underwear, sexualized posing | Sportswear, non-sexual clothing |
| smoking | Cigarettes, cigars, active smoking | Cigarette-like objects, steam unrelated to smoking |
| violence | Blood, fighting, visible injury, aggression | Red liquids, non-violent crowds, hugging |

## Base Source (SIMAS)

The dataset is built using the SIMAS collection (Spam Images for Malicious Annotation Set) as the primary seed:
https://zenodo.org/records/15423637

Additional rights-cleared images were added to improve class balance, yielding the final 5,153-image dataset described in the associated thesis.


## Loading With Hugging Face `datasets`

```python
from datasets import load_dataset, Image

data_files = {
    "train": "csv/train.csv",
    "validation": "csv/val.csv",
    "test": "csv/test.csv",
}

ds = load_dataset("csv", data_files=data_files)

# The "validation" split lives in data/val/, so map split names to directories.
split_dirs = {"train": "train", "validation": "val", "test": "test"}

for split, subdir in split_dirs.items():
    ds[split] = ds[split].map(
        lambda x, subdir=subdir: {"image_path": f"data/{subdir}/{x['name']}"}
    )
    ds[split] = ds[split].cast_column("image_path", Image())
```

## License

Images are rights-cleared for research and non-commercial use.
Commercial usage requires independent rights verification.


## Citation

If you use this dataset, please cite:

> Ulusoy, O. *Evaluating and Fine-Tuning Vision Models for Keyword-Driven Content Filtering.* Bachelor Thesis, Flensburg University of Applied Sciences, 2025.