Please review the full license terms at: https://waymo.com/open/terms

- Non-Commercial Use Only: This dataset is made available exclusively for non-commercial research purposes. Any commercial use is strictly prohibited.

- Access Agreement: Requesting or accessing this dataset constitutes your agreement to the Waymo Open Dataset License.

---

### 📌 Set-Up
#### Installation:

```
pip install torch==2.6.0 torchvision==0.21.0 torchaudio==2.6.0 --index-url https://download.pytorch.org/whl/cu124
pip install -r requirements.txt
```
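
A quick way to confirm that the CUDA 12.4 PyTorch wheels were installed correctly:

```
# Print the installed PyTorch version and whether a CUDA device is visible.
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
```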

Download the raw images of Ego3D-Bench from https://huggingface.co/datasets/vbdai/Ego3D-Bench and unzip them into the ```Ego3D-Bench/images``` directory.
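
One way to fetch the images is with the Hugging Face CLI (installed with ```huggingface_hub```). The archive names inside the dataset repository are not listed here, so the unzip step below is only a sketch you may need to adapt:

```
# Download the dataset repository into the directory the scripts expect.
huggingface-cli download vbdai/Ego3D-Bench --repo-type dataset --local-dir Ego3D-Bench/images

# Unzip any image archives in place (archive names are assumed; adjust as needed).
cd Ego3D-Bench/images && for f in *.zip; do unzip -o "$f"; done
```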

### 📌 Benchmarking on Ego3D-Bench:
We provide scripts to benchmark the InternVL3 and Qwen2.5-VL model families; other model families will be added soon! Pass the path of the baseline model as ```--model_path``` in the scripts below.
```
bash scripts/internvl3.sh
bash scripts/qwen_2.5_vl.sh
```
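
For illustration, assuming the scripts accept ```--model_path``` on the command line (if not, set the same path inside the script), an invocation might look like the following; the checkpoint paths are hypothetical placeholders:

```
# Replace the paths with your own local checkpoint downloads.
bash scripts/internvl3.sh --model_path /path/to/InternVL3-8B
bash scripts/qwen_2.5_vl.sh --model_path /path/to/Qwen2.5-VL-7B-Instruct
```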

### 📌 Using Ego3D-VLM:
#### Downloads:
- Grounding DINO: https://huggingface.co/IDEA-Research/grounding-dino-base
- Depth-Anything-V2-Metric: https://huggingface.co/depth-anything/Depth-Anything-V2-Metric-Outdoor-Large-hf

We provide scripts to run Ego3D-VLM along with the InternVL3 and Qwen2.5-VL model families; other model families will be added soon! Pass the path of the Grounding DINO checkpoint as ```--rec_model_path``` and the path of the Depth-Anything-V2-Metric checkpoint as ```--depth_model_path```.

```
bash scripts/internvl3_ego3dvlm.sh
bash scripts/qwen_2.5_vl_ego3dvlm.sh
```
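
One possible way to fetch both checkpoints locally and point the scripts at them is sketched below; the local directory names are arbitrary, and passing the flags on the command line (rather than setting them inside the scripts) is an assumption:

```
# Download the two auxiliary checkpoints into local folders (names are illustrative).
huggingface-cli download IDEA-Research/grounding-dino-base --local-dir checkpoints/grounding-dino-base
huggingface-cli download depth-anything/Depth-Anything-V2-Metric-Outdoor-Large-hf --local-dir checkpoints/depth-anything-v2-metric

# Run Ego3D-VLM on top of InternVL3, pointing at the downloaded checkpoints.
bash scripts/internvl3_ego3dvlm.sh \
  --rec_model_path checkpoints/grounding-dino-base \
  --depth_model_path checkpoints/depth-anything-v2-metric
```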

### Citation:
If you find our paper and code useful in your research, please consider giving us a star ⭐ and citing our work 📝 :)

```
@misc{gholami2025spatialreasoningvisionlanguagemodels,
      title={Spatial Reasoning with Vision-Language Models in Ego-Centric Multi-View Scenes},
      author={Mohsen Gholami and Ahmad Rezaei and Zhou Weimin and Sitong Mao and Shunbo Zhou and Yong Zhang and Mohammad Akbari},
      year={2025},
      eprint={2509.06266},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2509.06266},
}
```