Update task categories and add paper, code, and project links
Hi! I'm Niels from the community science team at Hugging Face.
I've opened this PR to improve the dataset card by:
1. Updating the `task_categories` in the YAML metadata to use the standard taxonomy (`image-text-to-text`, `image-segmentation`, `depth-estimation`).
2. Adding the official links to the WalkGPT paper, the GitHub repository, and the project page.
3. Ensuring the metadata aligns with the Hugging Face Hub standards.
This makes the dataset more discoverable and provides users with direct access to the associated research and code!
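As a quick illustration (not part of the PR itself), the updated front matter can be sanity-checked with a few lines of standard-library Python. This is a minimal sketch, assuming the abridged metadata below; the helper names `front_matter_lines` and `list_values` are hypothetical, not Hub APIs:

```python
# Sketch: extract the YAML front matter between the leading `---` fences and
# confirm the task categories use the standard-taxonomy names from this PR.
# Stdlib-only string parsing is used here instead of a YAML library.

README = """\
---
language:
- en
license: cc-by-4.0
size_categories:
- 10K<n<100K
task_categories:
- image-text-to-text
- image-segmentation
- depth-estimation
pretty_name: 'PAVE: Pedestrian Accessibility Vision–Language Dataset'
---

# PAVE: Pedestrian Accessibility and Visual-grounded Evaluation
"""

def front_matter_lines(text):
    """Return the lines between the first pair of `---` fences."""
    parts = text.split("---\n")
    return parts[1].splitlines() if len(parts) >= 3 else []

def list_values(lines, key):
    """Collect `- item` entries under `key:` until the next top-level key."""
    items, active = [], False
    for line in lines:
        if line == f"{key}:":
            active = True
        elif active and line.startswith("- "):
            items.append(line[2:].strip())
        elif active:
            break  # next top-level key ends the list
    return items

cats = list_values(front_matter_lines(README), "task_categories")
print(cats)  # ['image-text-to-text', 'image-segmentation', 'depth-estimation']
```

A real validation would go through the Hub's own card-metadata tooling; the point here is only that the three category names round-trip cleanly from the front matter.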
README.md (CHANGED)

````diff
@@ -1,12 +1,14 @@
 ---
+language:
+- en
 license: cc-by-4.0
+size_categories:
+- 10K<n<100K
 task_categories:
--
+- image-text-to-text
 - image-segmentation
 - depth-estimation
-
-language:
-- en
+pretty_name: 'PAVE: Pedestrian Accessibility Vision–Language Dataset'
 tags:
 - vision-language
 - multimodal
@@ -22,9 +24,6 @@ tags:
 - real-world
 - grounded-conversation
 - urban-scenes
-pretty_name: 'PAVE: Pedestrian Accessibility Vision–Language Dataset'
-size_categories:
-- 10K<n<100K
 ---
 
 # PAVE: Pedestrian Accessibility and Visual-grounded Evaluation
@@ -33,7 +32,7 @@ size_categories:
 
 > **WalkGPT: Grounded Vision–Language Conversation with Depth-Aware Segmentation for Pedestrian Navigation**
 > *(Accepted at CVPR 2026)*
-> Paper
+> [Paper](https://huggingface.co/papers/2603.10703) | [Code](https://github.com/rafiibnsultan/WalkGPT) | [Project Page](https://sites.google.com/view/walkgpt-26/home)
 
 PAVE is a spatially grounded VQA benchmark for accessibility-aware reasoning in real-world pedestrian environments,
 unifying language understanding, pixel-level grounding, and depth-aware navigation guidance.
@@ -175,4 +174,5 @@ If you use PAVE, please cite:
 author={Rafi Ibn Sultan, Hui Zhu, Xiangyu Zhou, Chengyin Li, Prashant Khanduri, Marco Brocanelli, Dongxiao Zhu},
 booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
 year={2026}
-}
+}
+```
````