---
license: apache-2.0
task_categories:
  - depth-estimation
tags:
  - depth-estimation
  - panorama
  - 360-depth
  - 360-depth-estimation
  - 360-image
---

# DA2: Depth Anything in Any Direction

Links: Page · Paper · GitHub · HuggingFace Demo

DA2 predicts dense, scale-invariant distance from a single 360° panorama in an end-to-end manner, with remarkable geometric fidelity and strong zero-shot generalization.


## 🎮 Usage

For usage instructions, please see the GitHub repository.
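DA2 predicts distance along each pixel's viewing ray rather than planar depth, so a panoramic distance map can be lifted directly to a 3D point cloud. As a rough illustration, here is a minimal, self-contained sketch of equirectangular ray geometry; the pixel-to-angle convention and function names below are assumptions for illustration, not taken from the DA2 codebase:

```python
import math

def equirect_ray(u, v, width, height):
    """Map pixel (u, v) of an equirectangular panorama to a unit ray direction.

    Convention (an assumption, not from the DA2 paper): longitude spans
    [-pi, pi) left to right, latitude spans [pi/2, -pi/2] top to bottom,
    +z is the forward axis and +y points up.
    """
    lon = ((u + 0.5) / width) * 2.0 * math.pi - math.pi
    lat = math.pi / 2.0 - ((v + 0.5) / height) * math.pi
    return (
        math.cos(lat) * math.sin(lon),  # x: right
        math.sin(lat),                  # y: up
        math.cos(lat) * math.cos(lon),  # z: forward
    )

def unproject(u, v, distance, width, height):
    """Scale the pixel's viewing ray by its predicted distance to get a 3D point."""
    dx, dy, dz = equirect_ray(u, v, width, height)
    return (distance * dx, distance * dy, distance * dz)
```

Applying `unproject` to every pixel of a predicted distance map yields a point cloud covering the full sphere, which is why ray distance (rather than z-depth) is the natural output for 360° inputs.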

## 🎓 Citation

If you find these datasets useful, please consider citing 🌹:

@article{li2025depth,
  title={DA$^{2}$: Depth Anything in Any Direction},
  author={Li, Haodong and Zheng, Wangguangdong and He, Jing and Liu, Yuhao and Lin, Xin and Yang, Xin and Chen, Ying-Cong and Guo, Chunchao},
  journal={arXiv preprint arXiv:2509.26618},
  year={2025}
}

@article{armeni2017joint,
  title={Joint 2d-3d-semantic data for indoor scene understanding},
  author={Armeni, Iro and Sax, Sasha and Zamir, Amir R and Savarese, Silvio},
  journal={arXiv preprint arXiv:1702.01105},
  year={2017}
}

@article{chang2017matterport3d,
  title={Matterport3d: Learning from rgb-d data in indoor environments},
  author={Chang, Angel and Dai, Angela and Funkhouser, Thomas and Halber, Maciej and Niessner, Matthias and Savva, Manolis and Song, Shuran and Zeng, Andy and Zhang, Yinda},
  journal={arXiv preprint arXiv:1709.06158},
  year={2017}
}

@article{wang2018self,
  title={Self-supervised learning of depth and camera motion from 360$^{\circ}$ videos},
  author={Wang, Fu-En and Hu, Hou-Ning and Cheng, Hsien-Tzu and Lin, Juan-Ting and Yang, Shang-Ta and Shih, Meng-Li and Chu, Hung-Kuo and Sun, Min},
  journal={arXiv preprint arXiv:1811.05304},
  year={2018}
}