Harmful-Contents Dataset

A multi-label image dataset for harmful-content classification across eight PEGI-aligned categories.
The dataset consists of 5,153 rights-cleared images, split into train/validation/test sets and annotated with both binary labels and per-category mask fields used for controlled negative sampling.


Dataset Structure

Harmful-Contents/
  csv/
    train.csv
    val.csv
    test.csv
  data/
    train/*.jpg
    val/*.jpg
    test/*.jpg

Each CSV contains:

name,
alcohol,drugs,weapons,gambling,nudity,sexy,smoking,violence,
mask_alcohol,mask_drugs,mask_weapons,mask_gambling,
mask_nudity,mask_sexy,mask_smoking,mask_violence

Images are stored in data/{train,val,test}/ and referenced by name.
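The mask columns can be read as per-category validity flags for the matching label columns. As a hedged sketch (the exact mask semantics are an assumption here; the column names follow the CSV header above), selecting usable rows for one category with pandas might look like:

```python
import pandas as pd

# Illustrative rows mimicking the CSV schema; values are made up.
df = pd.DataFrame({
    "name": ["img_0001.jpg", "img_0002.jpg", "img_0003.jpg"],
    "alcohol": [1, 0, 0],
    "mask_alcohol": [1, 1, 0],  # assumed: 1 = label verified for this category
})

# Keep only rows whose alcohol label is usable (mask set), e.g. when
# drawing controlled negatives for the alcohol classification head.
usable = df[df["mask_alcohol"] == 1]
print(usable["name"].tolist())
```

The same filter applies per category, so a row can contribute to some heads while being excluded from others.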


Categories

| Category | Unsafe Examples | Safe Examples |
|----------|-----------------|---------------|
| alcohol  | Alcohol bottles/glasses, alcohol brand logos | Empty glasses, non-alcoholic drinks |
| drugs    | Cannabis, cocaine, pills, paraphernalia | OTC medication, neutral plants |
| weapons  | Firearms, combat/attack knives, explosives | Kitchen knives, fruit knives, toy props |
| gambling | Casinos, slot machines, gambling chips/coins | Money, clovers, normal playing cards |
| nudity   | Nudity, explicit sexual acts, pornography | Non-explicit partially clothed persons |
| sexy     | Lingerie/underwear, sexualized posing | Sportswear, non-sexual clothing |
| smoking  | Cigarettes, cigars, active smoking | Cigarette-like objects, steam/smoke unrelated to smoking |
| violence | Blood, fighting, visible injury, aggression | Red liquids, non-violent crowds, hugging |

Base Source (SIMAS)

The dataset is built using the SIMAS collection (Spam Images for Malicious Annotation Set) as the primary seed:
https://zenodo.org/records/15423637

Additional rights-cleared images were added to improve class balance, yielding the final 5,153-image dataset described in the associated thesis.


Loading With Hugging Face datasets

from datasets import load_dataset, Image

data_files = {
    "train": "csv/train.csv",
    "validation": "csv/val.csv",
    "test": "csv/test.csv",
}

ds = load_dataset("csv", data_files=data_files)

# Image files live in data/{train,val,test}/, so the "validation"
# split must map back to the "val" directory.
split_dirs = {"train": "train", "validation": "val", "test": "test"}

def add_path(example, split_dir):
    return {"image_path": f"data/{split_dir}/{example['name']}"}

for split in ["train", "validation", "test"]:
    ds[split] = ds[split].map(add_path, fn_kwargs={"split_dir": split_dirs[split]})
    ds[split] = ds[split].cast_column("image_path", Image())
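Once loaded, the eight binary label columns can be stacked into a single multi-label target vector per example. A minimal sketch, where `CATEGORIES` and `to_target` are illustrative helpers rather than part of the dataset:

```python
import numpy as np

CATEGORIES = ["alcohol", "drugs", "weapons", "gambling",
              "nudity", "sexy", "smoking", "violence"]

def to_target(example):
    # Stack the eight binary label columns into one float vector,
    # suitable for a multi-label loss such as BCE-with-logits.
    return np.array([float(example[c]) for c in CATEGORIES], dtype=np.float32)

# Illustrative row; a real one would come from ds["train"][i].
example = dict.fromkeys(CATEGORIES, 0)
example["weapons"] = 1
print(to_target(example))
```

The mask columns can be stacked the same way and multiplied into the per-category loss to skip unverified labels.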

License

Images are rights-cleared for research and non-commercial use.
Commercial usage requires independent rights verification.


Citation

If you use this dataset, please cite:

Ulusoy, O.
Evaluating and Fine-Tuning Vision Models for Keyword-Driven Content Filtering.
Bachelor Thesis, Flensburg University of Applied Sciences, 2025.
