tags:
- crowdsourcing
pretty_name: Crowdsourced VRU Annotations
---

# Crowdsourced VRU Annotations

## Dataset Summary

This dataset provides tabular annotations derived from two underlying datasets, **ECP** and **ZOD**, and is organized into two corresponding splits (`ecp` and `zod`). It contains the results of crowdsourced annotation tasks focusing on **vulnerable road users (VRUs)**.

The underlying examples are *image crops* showing bounding boxes of VRUs from the ECP and ZOD datasets. Each crop is referenced via a `crop_id`. The actual image pixels are **not** included in this repository; instead, we provide per-crop annotations and auxiliary metadata that reference these crops.

---

## Dataset at a glance

- **Splits**: `ecp`, `zod`
- **Domain**: vulnerable road users (pedestrians, cyclists, persons on mobility devices, etc.)
- **Data types**: tabular annotation rows (answers) and auxiliary CSV metadata files (`ecp/boxes.csv`, `zod/attributes.csv`)

---

## Annotation tasks

For each crop, annotators answered one or more binary questions (with an additional `can't solve` option):

- **ECP tasks (6 questions)**
  - `human being`: Is this a human being?
  - `statue/mannequin`: Is this a statue or mannequin (i.e., not a real person)?
  - `reflection of a person`: Is this a reflection of a person (e.g., in glass)?
  - `on wheels`: Is the person on wheels?
  - `on poster/picture/billboard`: Is the person shown on a poster, picture, or billboard?
  - `on bike`: Is the person on a bike?

- **ZOD tasks (1 question)**
  - `human being`: Is this a human being?

Answers are chosen from `yes`, `no`, or `can't solve`. When `can't solve` is chosen, the `answer` field is `None` / `NaN` and the boolean `cant_solve` flag is set.
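
For downstream use, the repeated answers are typically reduced to one label per crop and question. The following is a minimal majority-vote sketch, not part of the dataset tooling: it assumes the answers of one split are in a pandas DataFrame `df` (as in the usage example further down), and the helper name `majority_vote` is ours.

```python
import pandas as pd

def majority_vote(df: pd.DataFrame) -> pd.DataFrame:
    """Reduce repeated answers to one label per (crop_id, question) pair."""
    # Rows flagged `cant_solve` carry no answer, so drop them before voting.
    solved = df[~df["cant_solve"]]
    return (
        solved.groupby(["crop_id", "question"])["answer"]
        # mode() lists the most frequent answer(s); ties resolve to the
        # lexicographically first value.
        .agg(lambda answers: answers.mode().iloc[0])
        .rename("majority_answer")
        .reset_index()
    )
```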

---

## Annotation design

- Annotations were collected from multiple human annotators. Each crop/question pair was answered multiple times (repeats):
  - **ECP**: 5–12 repeats per task; an adaptive allocation protocol increases the number of annotators for tasks with higher disagreement.
  - **ZOD**: a fixed 11 repeats per task.
- Tasks were grouped and issued in **work packages**:
  - **ECP**: nominal size of 20 tasks per work package, with a time limit of 300 seconds.
  - **ZOD**: nominal size of 30 tasks per work package, with a time limit of 540 seconds.
- Due to concurrency and runtime behaviour, some submitted work packages may contain fewer tasks than the nominal size.
- Each submitted work package receives a unique `wp_id` and a submission timestamp; both are present in the data (see the sketch below).
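
To see how these choices play out in the delivered data, you can inspect work-package sizes and repeat counts directly. A quick sketch, assuming `df` holds the answer rows of one split as a pandas DataFrame:

```python
# Answers per submitted work package; values below the nominal size
# (20 for ECP, 30 for ZOD) are expected, as noted above.
wp_sizes = df.groupby("wp_id").size()
print(wp_sizes.describe())

# Repeats per (crop, question) pair: 5-12 expected for ECP, 11 for ZOD.
repeats = df.groupby(["crop_id", "question"]).size()
print(repeats.value_counts().sort_index())
```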

---

## Dataset statistics

- **ECP**
  - Tasks per crop: 6 (see list above)
  - Annotated crops: 32,711
  - Repeats per task: 5–12 (adaptive)
  - Individual answers: 1,035,840

- **ZOD**
  - Tasks per crop: 1 (`human being`)
  - Annotated crops: 16,000
  - Repeats per task: 11
  - Individual answers: 175,962
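
Each row is one individual answer, so these totals can be checked against the split sizes once the dataset is loaded (a sketch assuming `dataset` was created as in the usage example below):

```python
# Quick consistency check against the statistics above.
print(len(dataset["ecp"]))                    # expected: 1,035,840 answers
print(len(dataset["zod"]))                    # expected: 175,962 answers
print(len(dataset["ecp"].unique("crop_id")))  # expected: 32,711 crops
```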

---

## Tabular data format

Both splits (`ecp` and `zod`) share the same annotation-row schema. Each row corresponds to a single annotator response to one task:

- `id` (`str`): globally unique identifier for the row
- `crop_id` (`str`): identifier of the crop (the bounding-box image crop)
- `question` (`str`): identifier of the question asked (e.g., `human being`)
- `user_id` (`str`): identifier of the annotator (unique within the annotation system)
- `answer` (`Optional[str]`): submitted answer; `None` / `NaN` if `cant_solve` is true
- `cant_solve` (`bool`): whether the annotator marked the task as unsolvable
- `created_at` (`datetime64[ns, UTC+01:00]`): ISO 8601 timestamp of when the work package was processed (note: this is the processing/submission time of the work package, not the individual annotation start time)
- `duration_ms` (`int`): duration of the task in milliseconds
- `wp_id` (`str`): identifier of the work package the task belongs to
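
The coupling between `answer` and `cant_solve` can be verified after loading. A small sketch, again assuming `df` is a pandas DataFrame of answer rows:

```python
# `answer` should be missing exactly where the task was marked unsolvable.
unsolved = df[df["cant_solve"]]
assert unsolved["answer"].isna().all()

# Answer distribution per question over the solved rows.
solved = df[~df["cant_solve"]]
print(solved.groupby("question")["answer"].value_counts())
```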

---

## Metadata files

The repository also contains auxiliary metadata CSVs. These are provided as separate files so that consumers can join them with the annotation rows via `crop_id` if and when needed.

### `ecp/boxes.csv`

Schema:
- `crop_id` (`str`): unique id of the crop (reference key)
- `image_path` (`str`): path of the original ECP image the crop belongs to
- `left`, `top`, `right`, `bottom` (`int`): bounding box coordinates (upper-left and bottom-right corners)
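
Since `left`/`top`/`right`/`bottom` are corner coordinates, box width and height are simple differences (exact pixel convention aside). A sketch, assuming `df_boxes_ecp` has been loaded as in the usage example below:

```python
# Derive box size (in pixels of the original image) from the corners.
df_boxes_ecp["box_width"] = df_boxes_ecp["right"] - df_boxes_ecp["left"]
df_boxes_ecp["box_height"] = df_boxes_ecp["bottom"] - df_boxes_ecp["top"]
```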

### `zod/attributes.csv`

Schema:
- `crop_id` (`str`): unique id of the crop (reference key)
- `label` (`str`): original ZOD label for the object
- `attributes` (`json/object`): dictionary with object attributes (e.g., occlusion level), unchanged from ZOD
- `left`, `top`, `width`, `height` (`float`): bounding box coordinates (upper-left corner, width, height)

---

## Usage example

Load the annotation tables (answers) via `datasets`:

```python
from datasets import load_dataset

REPO_ID = "cklugmann/crowdsourced-vru-annotations"
dataset = load_dataset(REPO_ID)

# Example: load ECP answers into a pandas DataFrame
df_answers_ecp = (
    dataset["ecp"]
    .to_pandas()
    .set_index("id")
)
```

Load the ECP boxes metadata (without merging) using `hf_hub_download`:

```python
import pandas as pd
from huggingface_hub import hf_hub_download

df_boxes_ecp = (
    pd.read_csv(hf_hub_download(
        repo_id=REPO_ID,
        repo_type="dataset",
        filename="ecp/boxes.csv"
    ))
    .set_index("crop_id")
)
```

You can load `zod/attributes.csv` analogously with `hf_hub_download`.
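
For example, a sketch of the analogous ZOD load. Depending on how the CSV was serialized, the `attributes` column may arrive as strings rather than dictionaries, in which case it needs to be decoded; the `df_attrs_zod` name and the decoding step are our assumptions:

```python
import json

df_attrs_zod = (
    pd.read_csv(hf_hub_download(
        repo_id=REPO_ID,
        repo_type="dataset",
        filename="zod/attributes.csv"
    ))
    .set_index("crop_id")
)

# Decode JSON-encoded attribute strings into dicts if necessary.
# (Use ast.literal_eval instead if the strings are Python reprs.)
if isinstance(df_attrs_zod["attributes"].iloc[0], str):
    df_attrs_zod["attributes"] = df_attrs_zod["attributes"].map(json.loads)
```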

---

## How we recommend working with the data

- Use `load_dataset(...)` to fetch annotation rows as Hugging Face `Dataset` objects (one split per underlying source: `ecp`, `zod`).
- Use `hf_hub_download(...)` to fetch the auxiliary CSVs (boxes/attributes) as raw files and load them into pandas with `pd.read_csv(...)`.
- Perform any merging/joins yourself using `crop_id`, as sketched below; this keeps the original annotation rows unchanged and leaves join semantics and filtering under your control.
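
For instance, a minimal join of ECP answers with their box geometry, reusing the frames from the usage example above:

```python
# Left-join box geometry onto each answer row via `crop_id`.
df_ecp = df_answers_ecp.merge(
    df_boxes_ecp,
    left_on="crop_id",
    right_index=True,
    how="left",  # keep every annotation row, matched or not
)
```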
| 195 |
+
|
| 196 |
+
---

## License

This dataset is released under **CC-BY-SA-4.0**.

The dataset builds on:
- **ECP dataset**: [link to website](https://eurocity-dataset.tudelft.nl/)
- **ZOD dataset**: [link to website](https://zod.zenseact.com/)

---

## Citation

If you use this dataset in your research, please consider citing:

```bibtex
@misc{liao2025minorityreportsbalancingcost,
      title={Minority Reports: Balancing Cost and Quality in Ground Truth Data Annotation},
      author={Hsuan Wei Liao and Christopher Klugmann and Daniel Kondermann and Rafid Mahmood},
      year={2025},
      eprint={2504.09341},
      archivePrefix={arXiv},
      primaryClass={cs.LG},
      url={https://arxiv.org/abs/2504.09341},
}
```