# BLV Object Recognition: Synthetic + Real-World
A dataset for training and evaluating object recognition and segmentation models on infrastructure relevant to blind and low-vision (BLV) navigation in urban environments. Three configurations plus a flat tree of 3D assets:
| Config / tree | Splits | Purpose |
|---|---|---|
| `syn` | train | Photorealistic IsaacSim renders for training / pretraining. |
| `real_ours` | train / validation / test | Real photographs we captured. `real_ours/test` is the canonical benchmark eval. |
| `real_curated` | train | Curated frames from public HF segmentation datasets (`curation`, `mapillary`), remapped to our class palette. |
| `synthetic_objects/` (tree) | n/a | 3D asset library: per-asset `.glb` + `.ply` + `.usdz` triples grouped by BLV class. |
## Quick links
- Datasheet for Datasets
- Class index + palette
- Croissant metadata is auto-generated by Hugging Face for this repo (look for the Croissant button on the dataset page).
- Paper: NeurIPS 2026 Datasets & Benchmarks (TBD).
## Loading

### With datasets
```python
from datasets import load_dataset

syn_train = load_dataset("NavAble/NeurIPS_2026_BLV", "syn", split="train")
ours_train = load_dataset("NavAble/NeurIPS_2026_BLV", "real_ours", split="train")
ours_val = load_dataset("NavAble/NeurIPS_2026_BLV", "real_ours", split="validation")
ours_test = load_dataset("NavAble/NeurIPS_2026_BLV", "real_ours", split="test")  # canonical eval
curated_train = load_dataset("NavAble/NeurIPS_2026_BLV", "real_curated", split="train")

row = ours_test[0]
row["image"]  # PIL.Image.Image, RGB
row["mask"]   # PIL.Image.Image, P-mode (palette); pixel value == class_id
```
### Pulling the 3D assets
```python
from huggingface_hub import snapshot_download

# All 3D assets for a single class:
snapshot_download(
    repo_id="NavAble/NeurIPS_2026_BLV", repo_type="dataset",
    allow_patterns=["synthetic_objects/door_button/**"],
    local_dir="./assets",
)
```
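Downloaded `.glb` assets can be opened with any glTF-capable library. A sketch using `trimesh` (not a dependency of this dataset; the file path below is illustrative, since exact per-asset file names are not specified here):

```python
import trimesh  # third-party glTF loader, assumed installed

# Path is hypothetical; point it at any downloaded .glb under local_dir.
scene = trimesh.load("./assets/synthetic_objects/door_button/example.glb")
print(scene.bounds)  # axis-aligned bounding box of the loaded geometry
```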
### With PyTorch directly
```python
from torch.utils.data import Dataset
from datasets import load_dataset
import numpy as np
import torch
import torchvision.transforms.functional as TF


class BLVSegDataset(Dataset):
    def __init__(self, config: str, split: str, image_size: int = 512):
        self.ds = load_dataset("NavAble/NeurIPS_2026_BLV", config, split=split)
        self.image_size = image_size

    def __len__(self):
        return len(self.ds)

    def __getitem__(self, idx):
        row = self.ds[idx]
        img = TF.resize(row["image"].convert("RGB"), [self.image_size, self.image_size])
        # Nearest-neighbor interpolation keeps mask pixels as valid class IDs.
        mask = TF.resize(row["mask"], [self.image_size, self.image_size],
                         interpolation=TF.InterpolationMode.NEAREST)
        img_t = TF.to_tensor(img)
        mask_t = torch.from_numpy(np.array(mask, dtype=np.int64))
        return {"image": img_t, "mask": mask_t, "class": row["object_class"]}
```
Splits & sizes
| Config | Split | Rows |
|---|---|---|
| `syn` | train | 452704 |
| `real_ours` | train | 3703 |
| `real_ours` | validation | 396 |
| `real_ours` | test | 1482 |
| `real_curated` | train | 36466 |
3D asset library (`synthetic_objects/`): 500 GLB+PLY+USDZ triples across 9 classes.
## Class taxonomy
| ID | Class | Synthetic | Real (Ours) |
|---|---|---|---|
| 1 | `aps_button` | yes | yes |
| 2 | `bus_stop` | yes | yes |
| 3 | `bus_stop_sign` | yes | yes |
| 4 | `crosswalk` | yes | yes |
| 5 | `door_button` | yes | yes |
| 6 | `elevator` | yes | yes |
| 7 | `elevator_button` | yes | yes |
| 8 | `escalator` | yes | yes |
| 9 | `handrail` | yes | yes |
| 10 | `pedestrian_signal` | yes | yes |
| 11 | `turnstile` | yes | no |
The synthetic-only class `turnstile` has no real-world examples in this release; report real-world metrics over the 10 shared classes.
### Per-class row counts
| Class | syn/train | real_ours/train | real_ours/val | real_ours/test | real_curated/train |
|---|---|---|---|---|---|
| `aps_button` | 62855 | 206 | 23 | 66 | 0 |
| `bus_stop` | 60789 | 205 | 23 | 62 | 0 |
| `bus_stop_sign` | 60480 | 140 | 16 | 54 | 0 |
| `crosswalk` | 54360 | 9 | 1 | 3 | 27786 |
| `door_button` | 45360 | 1327 | 148 | 622 | 0 |
| `elevator` | 23760 | 1065 | 119 | 479 | 15 |
| `elevator_button` | 23350 | 378 | 23 | 86 | 4401 |
| `escalator` | 7062 | 135 | 15 | 40 | 1296 |
| `handrail` | 44468 | 21 | 3 | 8 | 1197 |
| `pedestrian_signal` | 45210 | 217 | 25 | 62 | 6650 |
| `turnstile` | 25010 | 0 | 0 | 0 | 0 |
### Synthetic coverage
- Frames: 452704
- Trajectories: 700
- Environments: 37
- Distinct assets: 112
## Mask encoding
Each mask is a single-channel PNG (PIL `mode="P"`) with an embedded palette.
Pixel value `i` corresponds to the `i`-th entry in `class_index.json`:
| Pixel | Class | Palette RGB |
|---|---|---|
| 0 | `BACKGROUND` | (0, 0, 0) |
| 1 | `aps_button` | (220, 20, 60) |
| 2 | `bus_stop` | (255, 140, 0) |
| 3 | `bus_stop_sign` | (255, 215, 0) |
| 4 | `crosswalk` | (50, 205, 50) |
| 5 | `door_button` | (0, 191, 255) |
| 6 | `elevator` | (138, 43, 226) |
| 7 | `elevator_button` | (255, 105, 180) |
| 8 | `escalator` | (0, 128, 128) |
| 9 | `handrail` | (165, 42, 42) |
| 10 | `pedestrian_signal` | (75, 0, 130) |
| 11 | `turnstile` | (255, 20, 147) |
Convert to a numeric label map with `np.array(row["mask"])`.
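For example, a quick sketch of per-class pixel counts for one row (reusing `row` from the loading example above):

```python
import numpy as np

mask = np.array(row["mask"])  # uint8 label map, values 0-11
ids, counts = np.unique(mask, return_counts=True)
for class_id, n_pixels in zip(ids, counts):
    print(f"class {class_id}: {n_pixels} px")
```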
## Source data
- Synthetic — generated in NVIDIA IsaacSim (Replicator) by spawning each asset across a curated catalog of urban environments (37 unique scenes including default and sunset/night lighting variants), with randomized camera trajectories. Each frame ships with an RGBA image, a semantic segmentation mask color-coded per object instance, and 2D tight bounding boxes.
- Real (Ours) — real photographs captured at 113 distinct physical locations covering the 10 shared object classes. Annotations were authored as polygon segmentations in COCO format and rasterized to the unified palettized PNG mask format used here.
- Real (Curated) — frames sampled from public segmentation datasets (`source_dataset = "curation"` or `"mapillary"`). Class IDs were remapped from each source taxonomy to the global BLV palette. The original per-frame split (`split_origin`) is preserved as a column; all curated rows are exposed under a single `train` split here.
- 3D Assets (`synthetic_objects/`) — 500 per-asset folders, each containing a `.glb` (Khronos glTF), a Gaussian-splat `.ply`, and a `.usdz` (USD bundle, Apple/Pixar) ready for IsaacSim or AR pipelines. Assets are organized by BLV class.
## Preprocessing
Produced by `scripts/build_hf_dataset.py`. Synthetic RGB PNGs are hardlinked
unchanged from the source tree; the IsaacSim RGBA-encoded semantic masks are
converted into single-channel palettized PNGs against a global class index;
synthetic 2D bounding-box `.npy` files are flattened into JSONL columns; and the
real-world COCO polygon annotations are rasterized to the same palettized PNG
format using `pycocotools`.
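For reference, a hedged sketch of the COCO-polygon-to-palettized-PNG step; function and variable names here are illustrative, not the actual script:

```python
import numpy as np
from PIL import Image
from pycocotools import mask as mask_utils

def rasterize_coco_polygons(annotations, height, width, palette):
    """annotations: dicts with 'segmentation' (polygon lists) and
    'category_id' (already remapped to the global class index)."""
    label = np.zeros((height, width), dtype=np.uint8)
    for ann in annotations:
        rles = mask_utils.frPyObjects(ann["segmentation"], height, width)
        binary = mask_utils.decode(mask_utils.merge(rles)).astype(bool)
        label[binary] = ann["category_id"]  # later annotations overwrite
    out = Image.fromarray(label, mode="P")
    out.putpalette(palette)  # flat [R0, G0, B0, R1, G1, B1, ...] list
    return out
```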
## Known limitations
- Resolution mismatch. Synthetic frames are 1280×720; real-world frames are 640×360. Models that resize to a common input shape are unaffected.
- Class imbalance in real-world data. Some classes have few real-world examples (e.g. `crosswalk`, `handrail`). Report per-class mIoU alongside any aggregate.
- `turnstile` is synthetic-only. Evaluate over the 10 shared classes for real-world metrics (a sketch follows this list).
- Sim-to-real gap. Synthetic textures and lighting may not match real-world distributions perfectly.
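A minimal sketch of per-class IoU restricted to the 10 shared classes (class IDs follow the palette table above):

```python
import numpy as np

SHARED_CLASS_IDS = range(1, 11)  # aps_button .. pedestrian_signal

def per_class_iou(pred: np.ndarray, target: np.ndarray) -> dict:
    """pred/target: integer label maps of identical shape."""
    ious = {}
    for cls in SHARED_CLASS_IDS:
        p, t = pred == cls, target == cls
        union = np.logical_or(p, t).sum()
        if union == 0:
            continue  # class absent from both maps; skip it
        ious[cls] = np.logical_and(p, t).sum() / union
    return ious
```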
## Ethical considerations
- The synthetic data contains no personally identifiable information.
- Real-world captures were collected in public spaces; the dataset is intended for accessibility research and must not be used for surveillance or identification of individuals.
- The class taxonomy targets infrastructure relevant to blind/low-vision navigation; models trained on this dataset should not be deployed in safety-critical settings without additional validation.
## License
Released under CC BY 4.0.
## Citation
```bibtex
@inproceedings{blv2026,
  title     = {BLV Object Recognition: A Synthetic and Real-World Benchmark},
  author    = {Anonymized Authors},
  booktitle = {NeurIPS 2026 Datasets and Benchmarks Track},
  year      = {2026}
}
```