---
license: cc-by-4.0
---
<div align="center">
<h1>Spatial Reasoning with Vision-Language Models in Ego-Centric Multi-view Scenes (Code Comes Soon!)</h1>
<p><i>Benchmarking and Improving 3D Spatial Reasoning in Vision-Language Models</i></p>

<a href="https://arxiv.org/abs/2509.06266" target="_blank">
<img alt="arXiv" src="https://img.shields.io/badge/arXiv-red?logo=arxiv" height="20" />
</a>
<a href="https://vbdi.github.io/Ego3D-Bench-webpage/" target="_blank">
<img alt="Website" src="https://img.shields.io/badge/🌎_Website-blue.svg" height="20" />
</a>
<a href="https://huggingface.co/datasets/vbdai/Ego3D-Bench" target="_blank">
<img alt="HF Dataset: Ego3D-Bench" src="https://img.shields.io/badge/%F0%9F%A4%97%20_Ego3D_Bench-ffc107?color=ffc107&logoColor=white" height="20" />
</a>
</div>

---



### 📌 Key Highlights

- 📊 **Ego3D-Bench**: A benchmark of **8,600+ human-verified QA pairs** for evaluating VLMs in **ego-centric, multi-view outdoor environments**.
- 🧠 **Ego3D-VLM**: A **post-training framework** that builds cognitive maps from global 3D coordinates, achieving improvements of **+12%** in QA accuracy and **+56%** in distance estimation.
- 🚀 **Impact**: Together, Ego3D-Bench and Ego3D-VLM move VLMs closer to **human-level 3D spatial understanding** in real-world settings.

---
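The core idea behind Ego3D-VLM's cognitive maps is that object positions observed in separate camera views become directly comparable once they are expressed in one global, ego-centered coordinate frame. The toy sketch below illustrates only that geometric step, not the paper's implementation; the `to_ego_frame` helper and the rear-camera extrinsics (`R_rear`, `t_rear`) are made-up values for illustration.

```python
import numpy as np

def to_ego_frame(points_cam, R, t):
    """Map 3D points from a camera's frame into the shared ego frame.

    points_cam: (N, 3) object positions in the camera's coordinate frame
    R: (3, 3) rotation from the camera frame to the ego frame
    t: (3,) camera position expressed in the ego frame
    """
    return points_cam @ R.T + t

# Hypothetical extrinsics: a rear camera rotated 180 degrees about the
# vertical (z) axis, mounted 1.5 m behind the ego origin.
R_rear = np.array([[-1.0,  0.0, 0.0],
                   [ 0.0, -1.0, 0.0],
                   [ 0.0,  0.0, 1.0]])
t_rear = np.array([-1.5, 0.0, 0.0])

# An object 10 m straight ahead of the rear camera lands behind the ego.
obj_cam = np.array([[10.0, 0.0, 0.0]])
obj_ego = to_ego_frame(obj_cam, R_rear, t_rear)  # 11.5 m behind the ego origin

# Once every view's detections live in one frame, ego-to-object
# distances reduce to plain Euclidean norms.
dist = np.linalg.norm(obj_ego, axis=1)
```

With all views unified this way, cross-view questions ("which object is closest to you?") become simple comparisons over one set of coordinates.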


### ⚖️ **Ego3D-Bench**

Benchmark Overview: We introduce Ego3D-Bench, a benchmark designed to evaluate the spatial understanding of VLMs in ego-centric, multi-view scenarios. Images are collected from three datasets: NuScenes, Argoverse, and Waymo. Questions are designed to require cross-view reasoning. We pose questions both from the ego perspective and from the perspective of objects in the scene, and categorize each question as ego-centric or object-centric to make its perspective explicit. In total, there are 10 question types: 8 multiple-choice QAs and 2 exact-number QAs.


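The two answer formats call for different metrics: exact match for the 8 multiple-choice types, and a numeric-error measure for the 2 exact-number types. The sketch below is one plausible way to score such a mix; the `type`/`answer` field names and the mean-relative-error metric are illustrative assumptions, not the official evaluation protocol.

```python
def score(preds, gts):
    """Score a mix of multiple-choice and exact-number predictions.

    preds/gts: parallel lists of dicts with keys 'type' ('mcq' or
    'number') and 'answer' (a choice letter, or a float).
    """
    mcq_hits, mcq_total = 0, 0
    rel_errors = []
    for p, g in zip(preds, gts):
        if g["type"] == "mcq":
            mcq_total += 1
            mcq_hits += p["answer"] == g["answer"]
        else:  # exact-number question, e.g. a distance in meters
            rel_errors.append(abs(p["answer"] - g["answer"]) / abs(g["answer"]))
    return {
        "mcq_accuracy": mcq_hits / mcq_total if mcq_total else None,
        "mean_relative_error": sum(rel_errors) / len(rel_errors) if rel_errors else None,
    }

preds = [{"type": "mcq", "answer": "B"},
         {"type": "mcq", "answer": "C"},
         {"type": "number", "answer": 12.0}]
gts = [{"type": "mcq", "answer": "B"},
       {"type": "mcq", "answer": "A"},
       {"type": "number", "answer": 10.0}]
print(score(preds, gts))  # {'mcq_accuracy': 0.5, 'mean_relative_error': 0.2}
```

Reporting the two numbers separately matters because a model can do well on the multiple-choice questions while still being far off on raw distance estimates.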

---

📄 **Dataset Access and License Notice:**

This dataset includes a subsample of the Waymo Open Dataset (WOD) and is governed by the Waymo Open Dataset License Agreement.
Please review the full license terms at: https://waymo.com/open/terms

🔒 **Access and Usage Conditions**

- License Compliance: This dataset is derived from the Waymo Open Dataset (WOD). All use of this dataset must comply with the terms outlined in the WOD license.
- Non-Commercial Use Only: This dataset is made available exclusively for non-commercial research purposes. Any commercial use is strictly prohibited.
- Access Agreement: Requesting or accessing this dataset constitutes your agreement to the Waymo Open Dataset License.