---
license: cc-by-sa-4.0
dataset_info:
features:
- name: id
dtype: string
- name: crop_id
dtype: string
- name: question
dtype: string
- name: user_id
dtype: string
- name: answer
dtype: string
- name: cant_solve
dtype: bool
- name: created_at
dtype: string
- name: duration_ms
dtype: int64
- name: wp_id
dtype: string
splits:
- name: ecp
num_bytes: 233586835
num_examples: 1035840
- name: zod
num_bytes: 37672526
num_examples: 175962
download_size: 93553083
dataset_size: 271259361
configs:
- config_name: default
data_files:
- split: ecp
path: data/ecp-*
- split: zod
path: data/zod-*
task_categories:
- question-answering
language:
- en
tags:
- crowdsourcing
pretty_name: Crowdsourced VRU Annotations
---
# Crowdsourced VRU Annotations
## Dataset Summary
This dataset provides tabular annotations from two underlying datasets — **ECP** and **ZOD** — and is organized into two splits (`ecp` and `zod`).
It contains the results of crowdsourced annotation tasks focusing on **vulnerable road users (VRUs)**.
The underlying examples are *image crops* showing bounding boxes of VRUs from the ECP and ZOD datasets. Each crop is referenced via a `crop_id`. The actual image pixels are **not** included in this repository; instead, we provide per-crop annotations and auxiliary metadata that reference these crops.
---
## Dataset at a glance
- **Splits**: `ecp`, `zod`
- **Domain**: Vulnerable road users (pedestrians, cyclists, persons on mobility devices, etc.)
- **Data types**: tabular annotation rows (answers), auxiliary CSV metadata files (`ecp/boxes.csv`, `zod/attributes.csv`)
---
## Annotation tasks
For each crop, annotators answered one or more binary questions (with an additional `can't solve` option):
- **ECP tasks (6 questions)**
- `human being`: Is this a human being?
- `statue/mannequin`: Is this a statue or mannequin (i.e. not a real person)?
- `reflection of a person`: Is this a reflection (e.g. in glass)?
- `on wheels`: Is the person on wheels?
- `on poster/picture/billboard`: Is the person shown on a poster/picture/billboard?
- `on bike`: Is the person on a bike?
- **ZOD tasks (1 question)**
- `human being`: Is this a human being?
Answers are chosen from `yes`, `no`, or `can't solve`. When `can't solve` is chosen, the `answer` field is `None` / `NaN` and the boolean `cant_solve` flag is set.
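As a sketch of how these per-annotator answers are typically aggregated (using a small hand-made DataFrame with the same columns, not the real data), a simple majority vote per crop/question pair might look like this:

```python
import pandas as pd

# Toy rows mimicking the annotation schema (illustrative values only).
rows = pd.DataFrame({
    "crop_id": ["c1", "c1", "c1", "c2", "c2"],
    "question": ["human being"] * 5,
    "answer": ["yes", "yes", "no", None, "no"],
    "cant_solve": [False, False, False, True, False],
})

# Drop "can't solve" responses, then take the most frequent remaining
# answer per (crop_id, question) pair as a simple majority vote.
votes = (
    rows[~rows["cant_solve"]]
    .groupby(["crop_id", "question"])["answer"]
    .agg(lambda s: s.mode().iloc[0])
)
```

More elaborate aggregation schemes (e.g., weighting annotators by agreement) are of course possible; this only illustrates the role of the `cant_solve` flag.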
---
## Annotation design
- Annotations were collected from multiple human annotators. Each crop/question pair was answered multiple times (repeats).
- **ECP**: 5–12 repeats per task; an adaptive allocation protocol increases the number of annotators for tasks with higher disagreement.
- **ZOD**: fixed 11 repeats per task.
- Tasks were grouped and issued in **work packages**:
- **ECP**: nominal size 20 tasks per work package, time limit 300 seconds.
- **ZOD**: nominal size 30 tasks per work package, time limit 540 seconds.
- Due to concurrency and runtime behaviour, some submitted work packages may contain fewer tasks than the nominal size.
- Each submitted work package receives a unique `wp_id` and a submission timestamp — these are present in the data.
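Since every row carries its `wp_id`, work-package sizes can be inspected directly. A minimal sketch on toy rows (the real data uses the actual `wp_id` values):

```python
import pandas as pd

# Toy annotation rows: three work packages, one below the ECP
# nominal size of 20 tasks (as can happen due to concurrency).
rows = pd.DataFrame({
    "wp_id": ["wp-a"] * 20 + ["wp-b"] * 20 + ["wp-c"] * 17,
    "id": [f"r{i}" for i in range(57)],
})

# Number of tasks per submitted work package.
wp_sizes = rows.groupby("wp_id")["id"].count()

# Work packages that fell short of the nominal size.
short_wps = wp_sizes[wp_sizes < 20]
```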
---
## Dataset statistics
- **ECP**
- Tasks per crop: 6 (see list above)
- Annotated crops: 32,711
- Repeats per task: 5–12 (adaptive)
- 1,035,840 individual answers
- **ZOD**
- Tasks per crop: 1 (human being)
- Annotated crops: 16,000
- Repeats per task: 11
- 175,962 individual answers
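The answer counts are consistent with the crop counts and repeat settings above; a quick arithmetic sanity check:

```python
# ECP: 1,035,840 answers over 32,711 crops x 6 questions.
ecp_tasks = 32_711 * 6
ecp_mean_repeats = 1_035_840 / ecp_tasks
# ~5.28 mean repeats, within the stated adaptive range of 5-12.

# ZOD: 175,962 answers over 16,000 crops x 1 question.
zod_mean_repeats = 175_962 / 16_000
# ~11.0, matching the fixed 11 repeats (a handful of tasks
# apparently have slightly fewer recorded answers).
```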
---
## Tabular data format
Both splits (`ecp` and `zod`) share the same annotation-row schema. Each row corresponds to a single annotator response to one task:
- `id` (`str`): globally unique identifier for the row
- `crop_id` (`str`): identifier of the crop (the bounding-box image crop)
- `question` (`str`): identifier of the question asked (e.g., `human being`)
- `user_id` (`str`): identifier of the annotator (unique within the annotation system)
- `answer` (`Optional[str]`): submitted answer; `None` / `NaN` if `cant_solve` is true
- `cant_solve` (`bool`): whether the annotator marked the task as unsolvable
- `created_at` (`str`, ISO 8601): timestamp with a `+01:00` offset; parses to `datetime64[ns, UTC+01:00]` in pandas (note: this is the processing/submission time for the work package, not the individual annotation start time)
- `duration_ms` (`int`): duration for the task in milliseconds
- `wp_id` (`str`): work package identifier the task belongs to
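A minimal sketch (on toy rows, not the real data) of how these columns behave in pandas, in particular parsing `created_at` into a timezone-aware timestamp and the coupling between `answer` and `cant_solve`:

```python
import pandas as pd

# Toy rows with the relevant columns (illustrative values only).
rows = pd.DataFrame({
    "answer": ["yes", None],
    "cant_solve": [False, True],
    "created_at": ["2024-03-01T10:15:00+01:00", "2024-03-01T10:20:00+01:00"],
    "duration_ms": [4200, 9800],
})

# Parse the ISO 8601 strings into timezone-aware timestamps.
rows["created_at"] = pd.to_datetime(rows["created_at"])

# `answer` should be missing exactly where `cant_solve` is set.
consistent = (rows["answer"].isna() == rows["cant_solve"]).all()
```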
---
## Metadata files
The repository also contains auxiliary metadata CSVs. These are provided as separate files so consumers can join/merge them with the annotation rows using `crop_id` if and when needed.
### `ecp/boxes.csv`
Schema:
- `crop_id` (`str`): unique id of the crop (reference key)
- `image_path` (`str`): path of the original ECP image the crop belongs to
- `left`, `top`, `right`, `bottom` (`int`): bounding box coordinates (upper-left and bottom-right corners)
### `zod/attributes.csv`
Schema:
- `crop_id` (`str`): unique id of the crop (reference key)
- `label` (`str`): original ZOD label for the object
- `attributes` (`json/object`): dictionary with object attributes (e.g., occlusion level) — unchanged from ZOD
- `left`, `top`, `width`, `height` (`float`): bounding box coordinates (upper-left corner, width, height)
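If the `attributes` column round-trips through the CSV as JSON strings (an assumption about the serialization, worth verifying against the actual file), it can be decoded into Python dicts like this (toy stand-in data):

```python
import json
import pandas as pd

# Toy stand-in for zod/attributes.csv (the real file has more columns).
df = pd.DataFrame({
    "crop_id": ["z1", "z2"],
    "attributes": ['{"occlusion": "none"}', '{"occlusion": "heavy"}'],
})

# Decode each JSON string into a dict for convenient attribute access.
df["attributes"] = df["attributes"].apply(json.loads)
occlusions = df.set_index("crop_id")["attributes"].apply(lambda d: d["occlusion"])
```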
---
## Usage example
Load annotation tables (answers) via `datasets`:
```python
from datasets import load_dataset
REPO_ID = "cklugmann/crowdsourced-vru-annotations"
dataset = load_dataset(REPO_ID)
# Example: load ECP answers into a pandas DataFrame
df_answers_ecp = (
    dataset["ecp"]
    .to_pandas()
    .set_index("id")
)
```
Load ECP boxes metadata (without merging) using `hf_hub_download`:
```python
import pandas as pd
from huggingface_hub import hf_hub_download
REPO_ID = "cklugmann/crowdsourced-vru-annotations"
df_boxes_ecp = (
    pd.read_csv(hf_hub_download(
        repo_id=REPO_ID,
        repo_type="dataset",
        filename="ecp/boxes.csv",
    ))
    .set_index("crop_id")
)
```
(You can load `zod/attributes.csv` analogously with `hf_hub_download`.)
---
## How we recommend working with the data
- Use `load_dataset(...)` to fetch annotation rows as Hugging Face `Dataset` objects (one split per underlying source: `ecp`, `zod`).
- Use `hf_hub_download(...)` to fetch auxiliary CSVs (boxes/attributes) as raw files and load them into Pandas with `pd.read_csv(...)`.
- Perform any merging/joins yourself using `crop_id` so you keep the original annotation rows unchanged and control join semantics and filtering.
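A minimal merge sketch on toy frames (column names follow the schemas above; the real data is loaded as shown in the usage example):

```python
import pandas as pd

# Toy annotation rows and toy ECP boxes metadata sharing crop_id.
answers = pd.DataFrame({
    "id": ["a1", "a2"],
    "crop_id": ["c1", "c2"],
    "question": ["human being", "on bike"],
    "answer": ["yes", "no"],
})
boxes = pd.DataFrame({
    "crop_id": ["c1", "c2"],
    "left": [10, 40], "top": [5, 12], "right": [50, 90], "bottom": [80, 150],
})

# Left-join keeps every annotation row; validate="many_to_one" guards
# against accidental crop_id duplicates on the metadata side.
merged = answers.merge(boxes, on="crop_id", how="left", validate="many_to_one")
```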
---
## License
This dataset is released under **CC-BY-SA-4.0**.
The dataset builds on:
- **ECP dataset** — [Link to website](https://eurocity-dataset.tudelft.nl/)
- **ZOD dataset** — [Link to website](https://zod.zenseact.com/)
---
## Citation
If you use this dataset in your research, please consider citing:
```bibtex
@misc{liao2025minorityreportsbalancingcost,
title={Minority Reports: Balancing Cost and Quality in Ground Truth Data Annotation},
author={Hsuan Wei Liao and Christopher Klugmann and Daniel Kondermann and Rafid Mahmood},
year={2025},
eprint={2504.09341},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2504.09341},
}
```