# Datasheet: BLV Object Recognition (Synthetic + Real-World)
Following the structure of *Datasheets for Datasets* (Gebru et al., 2018).
## Motivation
**For what purpose was the dataset created?**
To enable training and evaluation of computer-vision models for blind and
low-vision (BLV) navigation aids. The dataset focuses on infrastructure
objects that BLV travelers must perceive and interact with (signals, doors,
escalators, handrails, etc.) and pairs photorealistic synthetic data for
training with real-world photographs (split into train / val / test) for
evaluation.
**Who funded the creation of the dataset?**
DARoS Lab.
## Composition
**What do instances represent?**
Each instance is a single image with a paired pixel-level segmentation mask.
Synthetic instances additionally carry 2D bounding boxes per object.
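
For illustration, a minimal sketch of reading one instance with the `datasets` library; the config name (`syn`) and feature names (`image`, `mask`, `bboxes`) are assumptions about the schema, not confirmed field names:

```python
from datasets import load_dataset

# Hypothetical config and feature names; inspect ds.features for the
# actual schema of this release.
ds = load_dataset("NavAble/NeurIPS_2026_BLV", "syn", split="train", streaming=True)
example = next(iter(ds))

image = example["image"]       # PIL.Image: RGB render or photograph
mask = example["mask"]         # PIL.Image: palettized segmentation mask
boxes = example.get("bboxes")  # synthetic instances only (assumed field)
```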
**How many instances are there?**
- `syn/train`: 452,704
- `real_ours/train`: 3,703
- `real_ours/validation`: 396
- `real_ours/test`: 1,482
- `real_curated/train`: 36,466
- `synthetic_objects/` (3D assets): 500 across 9 classes
**Does the dataset contain all possible instances?**
No. The synthetic data is a finite sample drawn from a parameterized
generation pipeline; the real-world data is a finite collection of
photographs.
**Is there any missing information?**
The synthetic-only class `turnstile` has no real-world examples in this
release.
**Are there errors, sources of noise, or redundancies?**
- Synthetic masks are produced by Isaac Sim's Replicator and may contain edge
artifacts at sub-pixel object boundaries.
- Real-world polygon annotations were authored manually and may have small
boundary errors.
## Collection process
**Synthetic.** Generated in NVIDIA Isaac Sim using Replicator. Each trajectory
samples an asset, environment, and lighting condition, and records
RGB + semantic mask + 2D bounding box per frame.
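
A minimal sketch of the kind of Replicator script involved; the asset path, class label, randomization ranges, and frame count below are placeholders, not the values used to generate this dataset:

```python
import omni.replicator.core as rep

# Sketch only: one labeled asset with per-frame lighting and camera
# randomization; the real pipeline samples from the full 3D asset library.
with rep.new_layer():
    camera = rep.create.camera()
    render_product = rep.create.render_product(camera, (1280, 720))

    door = rep.create.from_usd("/assets/door.usd", semantics=[("class", "door")])
    dome = rep.create.light(light_type="Dome")

    with rep.trigger.on_frame(num_frames=100):
        with dome:  # randomize the lighting condition per frame
            rep.modify.attribute("intensity", rep.distribution.uniform(300.0, 3000.0))
        with camera:  # randomize the viewpoint, keeping the asset in view
            rep.modify.pose(
                position=rep.distribution.uniform((-5, 1, -5), (5, 2, 5)),
                look_at=door,
            )

    # BasicWriter records RGB, the semantic mask, and 2D bounding boxes.
    writer = rep.WriterRegistry.get("BasicWriter")
    writer.initialize(
        output_dir="_output",
        rgb=True,
        semantic_segmentation=True,
        bounding_box_2d_tight=True,
    )
    writer.attach([render_product])
```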
**Real-world.** Photographs captured by data collectors at distinct physical
locations covering the 10 shared object classes; annotated with COCO-format
polygon segmentations.
## Preprocessing / cleaning / labeling
See `README.md` and `scripts/build_hf_dataset.py`. The on-disk layout
re-encodes synthetic RGBA-coded masks into a single global palettized format
and rasterizes real-world COCO polygons into the same format. Source RGB PNGs
are not re-encoded.
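
A hedged sketch of the polygon-rasterization step with PIL; the palette values and function name are illustrative, and the authoritative implementation is `scripts/build_hf_dataset.py`:

```python
from PIL import Image, ImageDraw

# Illustrative class-id -> RGB palette; the authoritative global palette
# lives in scripts/build_hf_dataset.py.
PALETTE = {0: (0, 0, 0), 1: (220, 20, 60), 2: (0, 128, 255)}

def rasterize_coco_polygons(width, height, annotations):
    """Rasterize COCO-style polygon annotations into one palettized mask."""
    mask = Image.new("P", (width, height), 0)  # index 0 = background
    draw = ImageDraw.Draw(mask)
    for ann in annotations:
        class_id = ann["category_id"]
        # COCO stores each polygon as a flat [x0, y0, x1, y1, ...] list.
        for poly in ann["segmentation"]:
            points = list(zip(poly[0::2], poly[1::2]))
            draw.polygon(points, fill=class_id)
    flat = [0] * 768  # 256 palette entries * 3 channels
    for cid, (r, g, b) in PALETTE.items():
        flat[cid * 3 : cid * 3 + 3] = [r, g, b]
    mask.putpalette(flat)
    return mask
```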
## Uses
**Has the dataset been used for any tasks already?**
Not yet. The dataset will accompany a paper submitted to the NeurIPS 2026
Datasets & Benchmarks track (submission pending).
**What other tasks could the dataset be used for?**
Sim-to-real transfer studies, robustness analysis under varied lighting
conditions, and multi-task learning combining detection and segmentation.
**Are there tasks for which the dataset should not be used?**
The dataset must not be used for surveillance or identification of
individuals. The synthetic data does not represent real people; the
real-world data was collected in public spaces and is intended only for
accessibility research.
## Distribution
The dataset is hosted on Hugging Face at `NavAble/NeurIPS_2026_BLV` and
licensed under CC BY 4.0.
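
For reference, a minimal sketch of fetching the full repository with `huggingface_hub`:

```python
from huggingface_hub import snapshot_download

# Download the full dataset repository (licensed CC BY 4.0).
local_path = snapshot_download(
    repo_id="NavAble/NeurIPS_2026_BLV",
    repo_type="dataset",
)
print(local_path)
```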