---
language:
- en
license: cc-by-4.0
size_categories:
- 10K<n<100K
task_categories:
- image-text-to-text
- image-segmentation
- depth-estimation
pretty_name: 'PAVE: Pedestrian Accessibility Vision–Language Dataset'
tags:
- vision-language
- multimodal
- grounded
- spatial-reasoning
- accessibility
- navigation
- segmentation
- depth-aware
- VQA
- pedestrian
- CVPR-2026
- real-world
- grounded-conversation
- urban-scenes
---
# PAVE: Pedestrian Accessibility and Visual-grounded Evaluation
PAVE is a structured vision–language dataset introduced in the CVPR 2026 paper:
**WalkGPT: Grounded Vision–Language Conversation with Depth-Aware Segmentation for Pedestrian Navigation**
(Accepted at CVPR 2026)
Paper | Code | Project Page
PAVE is a spatially grounded VQA benchmark for accessibility-aware reasoning in real-world pedestrian environments, unifying language understanding, pixel-level grounding, and depth-aware navigation guidance.
## Overview
PAVE provides structured annotations built on top of the publicly available SANPO dataset from Google Research:
https://github.com/google-research-datasets/sanpo_dataset
We sincerely thank the SANPO authors for making their dataset publicly available.
PAVE does not redistribute SANPO images. Instead, it provides:
- Question–Answer pairs
- Accessibility reasoning annotations
- Segmentation references
- Distance annotations
- Structured grounding tags
Users must independently download SANPO under its original license to use PAVE.
## Dataset Structure
The repository contains:
- `PAVE.jsonl` — Full dataset
- `PAVE_train.jsonl` — Official train split
- `PAVE_val.jsonl` — Official validation split
- `PAVE_train85.jsonl` — Session-balanced training subset (85 sessions)
- `PAVE_val85.jsonl` — Session-balanced evaluation subset (6 sessions)
- `PAVE_labelmap.json` — Ontology and accessibility label definitions
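Each `.jsonl` file stores one JSON object per line, so it can be read without any special tooling. A minimal sketch (the field names inside each record are whatever the annotation format described below defines; `"question"` here is only an illustration):

```python
import json

def load_jsonl(path):
    """Read a JSON Lines file (one JSON object per line) into a list of dicts."""
    records = []
    with open(path, "r", encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:  # skip blank lines
                records.append(json.loads(line))
    return records
```

For example, `load_jsonl("PAVE_train.jsonl")` returns the full list of training annotations.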
## Official Splits
To reduce redundancy while preserving scene-level diversity, we uniformly sample 100 frames per session.
- 85 sessions used for training (~8.5k frames)
- 6 sessions held out for evaluation (~600 frames)
These splits correspond to the setup described in the WalkGPT paper.
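The exact sampling procedure used by the authors is not specified here, but "uniformly sample 100 frames per session" can be sketched with evenly spaced indices over each session's ordered frame list (an illustrative approximation, not the official script):

```python
def uniform_sample(frames, k=100):
    """Pick k evenly spaced frames from an ordered sequence of frames."""
    n = len(frames)
    if n <= k:
        return list(frames)
    if k == 1:
        return [frames[0]]
    # Evenly spaced positions from the first frame to the last.
    idx = [round(i * (n - 1) / (k - 1)) for i in range(k)]
    return [frames[i] for i in idx]
```

Applied per session, this yields roughly 85 × 100 ≈ 8.5k training frames and 6 × 100 ≈ 600 evaluation frames, matching the counts above.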
For full training, use:
- `PAVE_train.jsonl`
- `PAVE_val.jsonl`
For session-balanced experiments (as in the paper):
- `PAVE_train85.jsonl`
- `PAVE_val85.jsonl`
## Annotation Format
Each JSON entry contains:
- Reference to the SANPO image file
- Question
- Structured reasoning output
- Segmentation grounding tags: `[p] ... [/p][SEG]`
- Distance annotations: `[distance] ... [/distance]`
- Accessibility assessment tags: `[assessment] ... [/assessment]`
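A small regex-based parser is enough to pull these tagged spans out of a raw answer string. The tag names follow the list above; the key names in the returned dict are an arbitrary choice for this sketch:

```python
import re

# One pattern per structured tag pair described above.
TAG_PATTERNS = {
    "phrases": re.compile(r"\[p\](.*?)\[/p\]", re.DOTALL),
    "distances": re.compile(r"\[distance\](.*?)\[/distance\]", re.DOTALL),
    "assessments": re.compile(r"\[assessment\](.*?)\[/assessment\]", re.DOTALL),
}

def parse_tags(answer):
    """Extract all tagged spans from a raw answer string."""
    return {
        name: [m.strip() for m in pattern.findall(answer)]
        for name, pattern in TAG_PATTERNS.items()
    }
```

The `[SEG]` token that follows each `[p] ... [/p]` span is the segmentation hook consumed by the model and is left in place by this parser.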
The dataset is designed to evaluate:
- Grounded multimodal reasoning
- Accessibility-aware scene understanding
- Depth-aware spatial interpretation
- Structured segmentation-language alignment
## Accessibility Ontology
The file `PAVE_labelmap.json` defines:
- Semantic classes
- Mapping of classes to accessibility categories
- Definitions of accessible vs non-accessible elements
Accessibility categories are defined from a pedestrian navigation perspective as described in the WalkGPT paper.
For detailed ontology design and evaluation protocol, please refer to the paper.
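Assuming the label map is a flat mapping from semantic class name to accessibility category (the actual schema of `PAVE_labelmap.json` may nest these differently; check the file), grouping classes per category looks like:

```python
from collections import defaultdict

def classes_by_category(labelmap):
    """Group semantic classes by their accessibility category.

    Assumes labelmap maps class name -> category string.
    """
    groups = defaultdict(list)
    for cls, category in labelmap.items():
        groups[category].append(cls)
    return dict(groups)
```

With `json.load(open("PAVE_labelmap.json"))` as input, this gives a quick view of which classes count as accessible versus not.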
## Relationship to SANPO
PAVE annotations are derived from the SANPO dataset.
- Images, depth maps, and base masks are not redistributed here.
- Users must obtain SANPO independently.
- PAVE provides annotation layers on top of SANPO.
Please cite SANPO when using the underlying images.
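Since images are not bundled, each annotation entry has to be joined to a local SANPO download. A hypothetical helper, assuming each entry carries an `image` field holding a path relative to the SANPO root (adjust the key to match the actual JSONL schema):

```python
from pathlib import Path

def resolve_image(entry, sanpo_root):
    """Map a PAVE annotation entry to a local SANPO image file."""
    path = Path(sanpo_root) / entry["image"]
    if not path.exists():
        raise FileNotFoundError(
            f"{path} not found; download SANPO separately and point "
            "sanpo_root at its directory."
        )
    return path
```

Failing loudly here is deliberate: it surfaces a missing or mislocated SANPO download before any training run starts.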
## Intended Use
PAVE is intended for:
- Research in grounded vision–language models
- Accessibility-aware navigation systems
- Spatial reasoning benchmarks
- Multimodal segmentation research
For a detailed discussion of limitations and ethical considerations, see the WalkGPT paper.
## License
PAVE annotations are released under:
Creative Commons Attribution 4.0 (CC BY 4.0)
This license applies to the annotation files only.
SANPO images are governed by their original license.
## Citation
If you use PAVE, please cite:
```bibtex
@inproceedings{walkgpt2026,
  title={WalkGPT: Grounded Vision–Language Conversation with Depth-Aware Segmentation for Pedestrian Navigation},
  author={Rafi Ibn Sultan and Hui Zhu and Xiangyu Zhou and Chengyin Li and Prashant Khanduri and Marco Brocanelli and Dongxiao Zhu},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2026}
}
```