Modalities: Image, Text · Formats: arrow · Libraries: Datasets
mgholami committed (verified)
Commit: 4800c82 · Parent: d8474ba

Update README.md

Files changed (1): README.md (+1, −31)
README.md CHANGED
@@ -45,40 +45,10 @@ Please review the full license terms at: https://waymo.com/open/terms
 
 ---
 
- ### 📌 Set-Up
- #### Installation:
-
- ```
- pip install torch==2.6.0 torchvision==0.21.0 torchaudio==2.6.0 --index-url https://download.pytorch.org/whl/cu124
- pip install -r requirements.txt
- ```
-
- Download the raw images of Ego3D-Bench from https://huggingface.co/datasets/vbdai/Ego3D-Bench and put unzip the images in this directory ```Ego3D-Bench/images```
-
- ---
-
 
 ### 📌 Benchmarking on Ego3D-Bench:
- We have scripts to benchmark internvl3 and Qwen2.5-vl families. Other families of models will be added soon! Give the path of baseline model as ```--model_path``` in the below scripts.
- ```
- bash scripts/internvl3.sh
- bash script/qwen_2.5_vl.sh
- ```
 
- ---
-
-
- ### 📌 Using Ego3D-VLM:
- #### Downlaods:
- - Grounding-Dino: https://huggingface.co/IDEA-Research/grounding-dino-base
- - DepthAnyThing-V2-Metric: https://huggingface.co/depth-anything/Depth-Anything-V2-Metric-Outdoor-Large-hf
-
- We have scripts to use ego3dvlm with internvl3 and Qwen2.5-vl families. Other families of models will be added soon! Add the path of grounding_dino checkpoint as ```--rec_model_path``` and the path of DepthAnyThing-V2-Metric as ```--depth_model_path```.
-
- ```
- bash scripts/internvl3_ego3dvlm.sh
- bash script/qwen_2.5_vl_ego3dvlm.sh
- ```
+ Refer to the GitHub page (https://github.com/vbdi/Ego3D-Bench) to perform benchmarking using this dataset.
 
 ---