Datasets: Update task categories and add paper, code, and project links
#2 by nielsr HF Staff - opened

README.md CHANGED
@@ -1,12 +1,14 @@
 ---
+language:
+- en
 license: cc-by-4.0
+size_categories:
+- 10K<n<100K
 task_categories:
-
+- image-text-to-text
 - image-segmentation
 - depth-estimation
-
-language:
-- en
+pretty_name: 'PAVE: Pedestrian Accessibility Vision–Language Dataset'
 tags:
 - vision-language
 - multimodal
@@ -22,9 +24,6 @@ tags:
 - real-world
 - grounded-conversation
 - urban-scenes
-pretty_name: 'PAVE: Pedestrian Accessibility Vision–Language Dataset'
-size_categories:
-- 10K<n<100K
 ---
 
 # PAVE: Pedestrian Accessibility and Visual-grounded Evaluation
@@ -33,7 +32,7 @@ size_categories:
 
 > **WalkGPT: Grounded Vision–Language Conversation with Depth-Aware Segmentation for Pedestrian Navigation**
 > *(Accepted at CVPR 2026)*
-> Paper
+> [Paper](https://huggingface.co/papers/2603.10703) | [Code](https://github.com/rafiibnsultan/WalkGPT) | [Project Page](https://sites.google.com/view/walkgpt-26/home)
 
 PAVE is a spatially grounded VQA benchmark for accessibility-aware reasoning in real-world pedestrian environments,
 unifying language understanding, pixel-level grounding, and depth-aware navigation guidance.
@@ -175,4 +174,5 @@ If you use PAVE, please cite:
 author={Rafi Ibn Sultan, Hui Zhu, Xiangyu Zhou, Chengyin Li, Prashant Khanduri, Marco Brocanelli, Dongxiao Zhu},
 booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
 year={2026}
-}
+}
+```
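The merged front matter can be sanity-checked programmatically. The sketch below is not part of the PR: it embeds the updated YAML header and parses it with a tiny hand-rolled reader (good enough for this flat key/list structure; a real card should go through a proper YAML parser) to confirm the new `image-text-to-text` task category is present.

```python
# Minimal sketch (not from the PR): parse and sanity-check the updated
# dataset-card front matter. The YAML below mirrors the merged metadata.
README = """\
---
language:
- en
license: cc-by-4.0
size_categories:
- 10K<n<100K
task_categories:
- image-text-to-text
- image-segmentation
- depth-estimation
pretty_name: 'PAVE: Pedestrian Accessibility Vision–Language Dataset'
---

# PAVE: Pedestrian Accessibility and Visual-grounded Evaluation
"""

def front_matter(text: str) -> dict:
    """Parse the '---'-fenced header into {key: scalar-or-list}.

    Toy parser: handles only top-level 'key: value' pairs and '- item'
    lists, which is all this card uses.
    """
    lines = text.splitlines()
    assert lines[0] == "---", "card must start with a front-matter fence"
    meta, key = {}, None
    for line in lines[1:]:
        if line == "---":                  # closing fence ends the header
            break
        if line.startswith("- ") and key:  # list item under the current key
            meta[key].append(line[2:])
        else:                              # 'key:' (list head) or 'key: value'
            key, _, value = line.partition(":")
            meta[key] = value.strip() or []
    return meta

meta = front_matter(README)
# The PR's main change: 'image-text-to-text' now leads task_categories.
assert meta["task_categories"][0] == "image-text-to-text"
assert meta["license"] == "cc-by-4.0"
```
</imports>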