---
license: other
task_categories:
- depth-estimation
- image-to-3d
tags:
- depth-estimation
- benchmark
- evaluation
- multi-view-stereo
- 3d-reconstruction
- geometry
- camera-pose-estimation
size_categories:
- 10K<n<100K
language:
- en
pretty_name: DA3-BENCH - Depth Anything 3 Evaluation Benchmark
---

# DA3-BENCH: Depth Anything 3 Evaluation Benchmark

This repository contains processed benchmark datasets for evaluating [Depth Anything 3](https://depth-anything-3.github.io/) depth estimation and visual geometry models. The datasets are provided in a convenient, ready-to-use format for research and evaluation purposes.

## About Depth Anything 3

**Depth Anything 3** (DA3) is a state-of-the-art model that predicts spatially consistent geometry from an arbitrary number of visual inputs, with or without known camera poses. It achieves superior performance in:

- **Monocular Depth Estimation**: Outperforms Depth Anything 2 with finer detail and better generalization
- **Camera Pose Estimation**: 35.7% improvement over the prior SOTA
- **Multi-View Geometry**: 23.6% improvement in geometric accuracy
- **3D Gaussian Splatting**: Superior rendering quality from arbitrary visual inputs

For more details, visit the [official project page](https://depth-anything-3.github.io/).

## 📦 Included Datasets

The benchmark includes the following datasets, each compressed as a separate zip file:

| Dataset | Size | Description |
|---------|------|-------------|
| **7scenes.zip** | 3.4 GB | 7-Scenes indoor localization dataset |
| **dtu.zip** | 8.3 GB | DTU Multi-View Stereo dataset |
| **dtu64.zip** | 1.7 GB | DTU 64-view subset |
| **eth3d.zip** | 15 GB | ETH3D high-resolution multi-view dataset |
| **hiroom.zip** | 683 MB | High-resolution indoor room scenes |
| **scannetpp.zip** | 11 GB | ScanNet++ indoor scene understanding dataset |

**Total Size**: ~40 GB
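
Since the full benchmark is roughly 40 GB compressed (and needs more room once extracted), it is worth checking available disk space before downloading everything. A minimal sketch using standard shell tools, nothing specific to this repository:

```bash
# Show free space on the filesystem that will hold the archives;
# budget roughly 2x the compressed size to leave room for extraction.
df -h .
```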

## 🚀 Usage

Each dataset has been preprocessed and structured for convenient use in depth estimation evaluation pipelines. Simply download and extract the dataset(s) you need.

```bash
# Download from Hugging Face (example)
huggingface-cli download depth-anything/DA3-BENCH 7scenes.zip --repo-type dataset

# Extract a dataset
unzip 7scenes.zip
```
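
To fetch every dataset at once, here is a minimal sketch that loops over the archive names from the table above (it assumes `huggingface-cli` and `unzip` are on your PATH; the extracted directory layout depends on how each zip was packed, so adjust paths to your pipeline):

```bash
# Download and extract all six DA3-BENCH archives into the current directory
for ds in 7scenes dtu dtu64 eth3d hiroom scannetpp; do
  huggingface-cli download depth-anything/DA3-BENCH "${ds}.zip" \
    --repo-type dataset --local-dir .
  unzip -q "${ds}.zip"
done
```

Alternatively, running `huggingface-cli download depth-anything/DA3-BENCH --repo-type dataset` with no filename downloads the whole repository in one call.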

## ⚖️ License and Citation

**IMPORTANT:** These datasets are provided in a processed format for convenience. Users **must strictly follow the original usage licenses** of each respective dataset:

- **7-Scenes**: [Microsoft Research License](https://www.microsoft.com/en-us/research/project/rgb-d-dataset-7-scenes/)
- **DTU MVS**: [DTU Dataset License](https://roboimagedata.compute.dtu.dk/)
- **ETH3D**: [ETH3D Dataset Terms](https://www.eth3d.net/)
- **ScanNet++**: [ScanNet Dataset License](http://www.scan-net.org/)

### Citing Depth Anything 3

If you use this benchmark, please cite the Depth Anything 3 paper:

```bibtex
@article{depthanything3,
  title={Depth Anything 3: Recovering the Visual Space from Any Views},
  author={Haotong Lin and Sili Chen and Jun Hao Liew and Donny Y. Chen and Zhenyu Li and Guang Shi and Jiashi Feng and Bingyi Kang},
  journal={arXiv preprint},
  year={2025}
}
```

### Citing Original Datasets

Additionally, please cite the respective original dataset papers for each benchmark you use. Refer to the original dataset websites for proper citation information.

## 📧 Contact

For questions about:
- **Processed datasets**: Please open an issue in this repository
- **Depth Anything 3 model**: Visit the [official project page](https://depth-anything-3.github.io/) or [GitHub repository](https://github.com/DepthAnything/Depth-Anything-V3)

## 🙏 Acknowledgements

We thank the authors of the original datasets for making their data publicly available for research purposes, and the Depth Anything team for developing this state-of-the-art depth estimation framework.

---

**Disclaimer**: This is a processed collection for evaluation purposes only. All rights to the original data belong to the respective dataset creators. Users must obtain proper permissions and follow all applicable licenses when using these datasets.