---
license: cc-by-sa-4.0
dataset_info:
  features:
    - name: id
      dtype: string
    - name: crop_id
      dtype: string
    - name: question
      dtype: string
    - name: user_id
      dtype: string
    - name: answer
      dtype: string
    - name: cant_solve
      dtype: bool
    - name: created_at
      dtype: string
    - name: duration_ms
      dtype: int64
    - name: wp_id
      dtype: string
  splits:
    - name: ecp
      num_bytes: 233586835
      num_examples: 1035840
    - name: zod
      num_bytes: 37672526
      num_examples: 175962
  download_size: 93553083
  dataset_size: 271259361
configs:
  - config_name: default
    data_files:
      - split: ecp
        path: data/ecp-*
      - split: zod
        path: data/zod-*
task_categories:
  - question-answering
language:
  - en
tags:
  - crowdsourcing
pretty_name: Crowdsourced VRU Annotations
---

# Crowdsourced VRU Annotations

## Dataset Summary

This dataset provides tabular annotations from two underlying datasets, ECP and ZOD, and is organized into two splits (`ecp` and `zod`). It contains the results of crowdsourced annotation tasks focusing on vulnerable road users (VRUs).

The underlying examples are image crops showing bounding boxes of VRUs from the ECP and ZOD datasets. Each crop is referenced via a `crop_id`. The actual image pixels are not included in this repository; instead, we provide per-crop annotations and auxiliary metadata that reference these crops.


## Dataset at a glance

- **Splits:** `ecp`, `zod`
- **Domain:** vulnerable road users (pedestrians, cyclists, persons on mobility devices, etc.)
- **Data types:** tabular annotation rows (answers) and auxiliary CSV metadata files (`ecp/boxes.csv`, `zod/attributes.csv`)

## Annotation tasks

For each crop, annotators answered one or more binary questions (with an additional *can't solve* option):

- **ECP tasks (6 questions)**
  - `human being`: Is this a human being?
  - `statue/mannequin`: Is this a statue or mannequin (i.e., not a real person)?
  - `reflection of a person`: Is this a reflection (e.g., in glass)?
  - `on wheels`: Is the person on wheels?
  - `on poster/picture/billboard`: Is the person shown on a poster/picture/billboard?
  - `on bike`: Is the person on a bike?
- **ZOD tasks (1 question)**
  - `human being`: Is this a human being?

Answers are chosen from *yes*, *no*, or *can't solve*. When *can't solve* is chosen, the `answer` field is `None`/`NaN` and the boolean `cant_solve` flag is set.
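As a minimal sketch of how this convention plays out in practice (using a toy frame with made-up values, not real dataset rows), you can drop the unsolvable rows and map the remaining answers to booleans:

```python
import pandas as pd

# Toy rows mimicking the annotation schema (values are illustrative).
df = pd.DataFrame({
    "crop_id": ["c1", "c1", "c2"],
    "question": ["human being"] * 3,
    "answer": ["yes", None, "no"],
    "cant_solve": [False, True, False],
})

# Drop rows marked as unsolvable, then map the remaining answers to booleans.
answered = df[~df["cant_solve"]].copy()
answered["answer_bool"] = answered["answer"].eq("yes")
```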


## Annotation design

- Annotations were collected from multiple human annotators; each crop/question pair was answered multiple times (repeats).
  - ECP: 5–12 repeats per task; an adaptive allocation protocol increases the number of annotators for tasks with higher disagreement.
  - ZOD: a fixed 11 repeats per task.
- Tasks were grouped and issued in work packages:
  - ECP: nominal size of 20 tasks per work package, with a time limit of 300 seconds.
  - ZOD: nominal size of 30 tasks per work package, with a time limit of 540 seconds.
- Due to concurrency and runtime behaviour, some submitted work packages may contain fewer tasks than the nominal size.
- Each submitted work package receives a unique `wp_id` and a submission timestamp; both are present in the data.
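Since the repeat count is simply the number of rows sharing a `(crop_id, question)` pair, you can recover the per-task repeat distribution with a groupby. A minimal sketch on toy rows:

```python
import pandas as pd

# Toy annotation rows: rows sharing a (crop_id, question) pair are repeats.
df = pd.DataFrame({
    "crop_id": ["c1", "c1", "c2", "c2", "c2"],
    "question": ["human being"] * 5,
    "wp_id": ["wp1", "wp2", "wp1", "wp2", "wp3"],
})

# Number of repeats each (crop, question) task received.
repeats = df.groupby(["crop_id", "question"]).size()
```

On the real `ecp` split, `repeats.describe()` would show the 5–12 adaptive range described above.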

## Dataset statistics

- **ECP**
  - Tasks per crop: 6 (see list above)
  - Annotated crops: 32,711
  - Repeats per task: 5–12 (adaptive)
  - Individual answers: 1,035,840
- **ZOD**
  - Tasks per crop: 1 (`human being`)
  - Annotated crops: 16,000
  - Repeats per task: 11
  - Individual answers: 175,962

## Tabular data format

Both splits (`ecp` and `zod`) share the same annotation-row schema. Each row corresponds to a single annotator's response to one task:

- `id` (str): globally unique identifier for the row
- `crop_id` (str): identifier of the crop (the bounding-box image crop)
- `question` (str): identifier of the question asked (e.g., `human being`)
- `user_id` (str): identifier of the annotator (unique within the annotation system)
- `answer` (Optional[str]): submitted answer; `None`/`NaN` if `cant_solve` is true
- `cant_solve` (bool): whether the annotator marked the task as unsolvable
- `created_at` (str): ISO 8601 timestamp (UTC+01:00) of when the work package was processed (note: this is the processing/submission time for the work package, not the individual annotation start time)
- `duration_ms` (int): time spent on the task, in milliseconds
- `wp_id` (str): identifier of the work package the task belongs to
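A common first step over rows in this schema is aggregating the repeats into a single label per task, e.g. by majority vote over the solvable answers. A minimal sketch on toy rows (not the dataset's own aggregation method, which may differ):

```python
import pandas as pd

# Toy repeats for one (crop, question) task.
df = pd.DataFrame({
    "crop_id": ["c1"] * 5,
    "question": ["human being"] * 5,
    "answer": ["yes", "yes", "no", "yes", None],
    "cant_solve": [False, False, False, False, True],
})

# Majority vote over the solvable answers for each task.
majority = (
    df[~df["cant_solve"]]
    .groupby(["crop_id", "question"])["answer"]
    .agg(lambda s: s.mode().iloc[0])
)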

## Metadata files

The repository also contains auxiliary metadata CSVs. These are provided as separate files so consumers can join/merge them with the annotation rows on `crop_id` if and when needed.

### `ecp/boxes.csv`

Schema:

- `crop_id` (str): unique id of the crop (reference key)
- `image_path` (str): path of the original ECP image the crop belongs to
- `left`, `top`, `right`, `bottom` (int): bounding-box coordinates (upper-left and bottom-right corners)
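Note that ECP boxes are stored as corner coordinates while ZOD boxes (below) use width/height. If you need a uniform convention, the conversion is a simple subtraction; a minimal sketch on a toy row:

```python
import pandas as pd

# Toy ECP-style box (corner coordinates, illustrative values).
boxes = pd.DataFrame({
    "crop_id": ["c1"],
    "left": [10], "top": [20], "right": [50], "bottom": [100],
}).set_index("crop_id")

# Convert corner coordinates to width/height (the ZOD-style convention).
boxes["width"] = boxes["right"] - boxes["left"]
boxes["height"] = boxes["bottom"] - boxes["top"]
```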

### `zod/attributes.csv`

Schema:

- `crop_id` (str): unique id of the crop (reference key)
- `label` (str): original ZOD label for the object
- `attributes` (json/object): dictionary with object attributes (e.g., occlusion level), unchanged from ZOD
- `left`, `top`, `width`, `height` (float): bounding-box coordinates (upper-left corner, width, height)
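When read from CSV, the `attributes` column arrives as a plain string rather than a dictionary. Assuming the cells hold JSON (an assumption — if the file uses Python-dict `repr` instead, use `ast.literal_eval` rather than `json.loads`), a minimal parsing sketch on a toy row:

```python
import json

import pandas as pd

# Toy row; we assume the CSV serializes the dict as a JSON string.
attrs = pd.DataFrame({
    "crop_id": ["c1"],
    "attributes": ['{"occlusion": "none"}'],
})

# Parse the string column into real dictionaries.
attrs["attributes"] = attrs["attributes"].apply(json.loads)
```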

## Usage example

Load the annotation tables (answers) via `datasets`:

```python
from datasets import load_dataset

REPO_ID = "cklugmann/crowdsourced-vru-annotations"
dataset = load_dataset(REPO_ID)

# Example: load ECP answers into a pandas DataFrame
df_answers_ecp = (
    dataset["ecp"]
    .to_pandas()
    .set_index("id")
)
```

Load the ECP boxes metadata (without merging) using `hf_hub_download`:

```python
import pandas as pd
from huggingface_hub import hf_hub_download

df_boxes_ecp = (
    pd.read_csv(hf_hub_download(
        repo_id=REPO_ID,
        repo_type="dataset",
        filename="ecp/boxes.csv",
    ))
    .set_index("crop_id")
)
```

(You can load `zod/attributes.csv` analogously with `hf_hub_download`.)


## How we recommend working with the data

- Use `load_dataset(...)` to fetch annotation rows as Hugging Face `Dataset` objects (one split per underlying source: `ecp`, `zod`).
- Use `hf_hub_download(...)` to fetch the auxiliary CSVs (boxes/attributes) as raw files and load them into pandas with `pd.read_csv(...)`.
- Perform any merging/joins yourself using `crop_id`, so you keep the original annotation rows unchanged and control the join semantics and filtering.
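A minimal sketch of such a join on toy frames (illustrative values, not real rows): a left join keeps every annotation row, and pandas' `validate` argument guards against accidental row duplication if a `crop_id` were to appear twice in the metadata.

```python
import pandas as pd

# Toy annotation rows and toy ECP-style box metadata.
answers = pd.DataFrame({
    "id": ["a1", "a2"],
    "crop_id": ["c1", "c2"],
    "answer": ["yes", "no"],
})
boxes = pd.DataFrame({
    "crop_id": ["c1", "c2"],
    "left": [0, 5], "top": [0, 5], "right": [10, 15], "bottom": [10, 15],
})

# Left join: every annotation row survives; many answers may share one crop.
merged = answers.merge(boxes, on="crop_id", how="left", validate="many_to_one")
```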

## License

This dataset is released under CC-BY-SA-4.0.

The dataset builds on the ECP and ZOD datasets.
## Citation

If you use this dataset in your research, please consider citing:

```bibtex
@misc{liao2025minorityreportsbalancingcost,
  title={Minority Reports: Balancing Cost and Quality in Ground Truth Data Annotation},
  author={Hsuan Wei Liao and Christopher Klugmann and Daniel Kondermann and Rafid Mahmood},
  year={2025},
  eprint={2504.09341},
  archivePrefix={arXiv},
  primaryClass={cs.LG},
  url={https://arxiv.org/abs/2504.09341},
}
```