Update pipeline tag and paper links

#1 opened by nielsr (HF Staff)
Files changed (1)
  1. README.md +9 -9
README.md CHANGED
@@ -1,13 +1,13 @@
 ---
+library_name: depth-anything-3
 license: apache-2.0
+pipeline_tag: image-to-3d
 tags:
 - depth-estimation
 - computer-vision
 - monocular-depth
 - multi-view-geometry
 - pose-estimation
-library_name: depth-anything-3
-pipeline_tag: depth-estimation
 ---
 
 # Depth Anything 3: DA3-LARGE
@@ -15,8 +15,8 @@ pipeline_tag: depth-estimation
 <div align="center">
 
 [![Project Page](https://img.shields.io/badge/Project_Page-Depth_Anything_3-green)](https://depth-anything-3.github.io)
-[![Paper](https://img.shields.io/badge/arXiv-Depth_Anything_3-red)](https://arxiv.org/abs/)
-[![Demo](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Demo-blue)](https://huggingface.co/spaces/depth-anything/Depth-Anything-3) # noqa: E501
+[![Paper](https://img.shields.io/badge/arXiv-2511.10647-red)](https://arxiv.org/abs/2511.10647)
+[![Demo](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Demo-blue)](https://huggingface.co/spaces/depth-anything/Depth-Anything-3)
 <!-- Benchmark badge removed as per request -->
 
 </div>
@@ -108,7 +108,7 @@ da3 auto path/to/images --export-format glb --use-backend
 - **Depth Anything 2** for monocular depth estimation
 - **VGGT** for multi-view depth estimation and pose estimation
 
-For detailed benchmarks, please refer to our [paper](https://depth-anything-3.github.io). # noqa: E501
+For detailed benchmarks, please refer to our [paper](https://arxiv.org/abs/2511.10647).
 
 ## Limitations
 
@@ -123,8 +123,8 @@ If you find Depth Anything 3 useful in your research or projects, please cite:
 ```bibtex
 @article{depthanything3,
 title={Depth Anything 3: Recovering the visual space from any views},
-author={Haotong Lin and Sili Chen and Jun Hao Liew and Donny Y. Chen and Zhenyu Li and Guang Shi and Jiashi Feng and Bingyi Kang}, # noqa: E501
-journal={arXiv preprint arXiv:XXXX.XXXXX},
+author={Haotong Lin and Sili Chen and Jun Hao Liew and Donny Y. Chen and Zhenyu Li and Guang Shi and Jiashi Feng and Bingyi Kang},
+journal={arXiv preprint arXiv:2511.10647},
 year={2025}
 }
 ```
@@ -132,11 +132,11 @@ If you find Depth Anything 3 useful in your research or projects, please cite:
 ## Links
 
 - 🏠 [Project Page](https://depth-anything-3.github.io)
-- 📄 [Paper](https://arxiv.org/abs/)
+- 📄 [Paper](https://arxiv.org/abs/2511.10647)
 - 💻 [GitHub Repository](https://github.com/ByteDance-Seed/depth-anything-3)
 - 🤗 [Hugging Face Demo](https://huggingface.co/spaces/depth-anything/Depth-Anything-3)
 - 📚 [Documentation](https://github.com/ByteDance-Seed/depth-anything-3#-useful-documentation)
 
 ## Authors
 
-[Haotong Lin](https://haotongl.github.io/) · [Sili Chen](https://github.com/SiliChen321) · [Junhao Liew](https://liewjunhao.github.io/) · [Donny Y. Chen](https://donydchen.github.io) · [Zhenyu Li](https://zhyever.github.io/) · [Guang Shi](https://scholar.google.com/citations?user=MjXxWbUAAAAJ&hl=en) · [Jiashi Feng](https://scholar.google.com.sg/citations?user=Q8iay0gAAAAJ&hl=en) · [Bingyi Kang](https://bingykang.github.io/) # noqa: E501
+[Haotong Lin](https://haotongl.github.io/) · [Sili Chen](https://github.com/SiliChen321) · [Junhao Liew](https://liewjunhao.github.io/) · [Donny Y. Chen](https://donydchen.github.io) · [Zhenyu Li](https://zhyever.github.io/) · [Guang Shi](https://scholar.google.com/citations?user=MjXxWbUAAAAJ&hl=en) · [Jiashi Feng](https://scholar.google.com.sg/citations?user=Q8iay0gAAAAJ&hl=en) · [Bingyi Kang](https://bingykang.github.io/)
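Context for reviewers: fields like `library_name` and `pipeline_tag` live in the YAML frontmatter block between the leading `---` markers, which is what Hub tooling parses to populate the model card widget. A minimal stdlib-only sketch of that parsing (the `frontmatter_fields` helper and the inline README snippet are illustrative, not Hub code):

```python
# Minimal sketch: pull flat `key: value` fields out of a model card's
# YAML frontmatter, i.e. the block delimited by the two leading "---" lines.
README = """\
---
library_name: depth-anything-3
license: apache-2.0
pipeline_tag: image-to-3d
tags:
- depth-estimation
---
# Depth Anything 3: DA3-LARGE
"""

def frontmatter_fields(text: str) -> dict:
    """Return simple top-level `key: value` pairs from the frontmatter."""
    lines = text.splitlines()
    if not lines or lines[0] != "---":
        return {}                      # no frontmatter block
    end = lines.index("---", 1)        # closing delimiter
    fields = {}
    for line in lines[1:end]:
        # skip YAML list items ("- depth-estimation"); keep scalar fields
        if ":" in line and not line.startswith("-"):
            key, _, value = line.partition(":")
            fields[key.strip()] = value.strip()
    return fields

print(frontmatter_fields(README)["pipeline_tag"])   # image-to-3d
print(frontmatter_fields(README)["library_name"])   # depth-anything-3
```

After this PR is merged, the widget should therefore render the model under the `image-to-3d` pipeline rather than `depth-estimation`.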