---
This is the Jittor implementation of the **DepthAnything** series for monocular depth estimation, including **DepthAnything**, **DepthAnything V2**, and **DepthAnything-AC**. Built on the Jittor deep learning framework, it provides efficient pipelines for both training and inference.

This repository contains the official Jittor implementations of the following papers:

> Depth Anything at Any Condition<br/>
> [Boyuan Sun*](https://bbbbchan.github.io),
> [Modi Jin*](https://ghost233lism.github.io/),
> [Bowen Yin](https://yinbow.github.io/),
> [Qibin Hou<sup>†</sup>](https://houqb.github.io/)<br/>
> arXiv 2025.
> [Paper Link](https://arxiv.org/abs/2507.01634) |
> [Homepage](https://ghost233lism.github.io/depthanything-AC-page/) |
> [PyTorch Version](https://github.com/HVision-NKU/DepthAnythingAC)

> Depth Anything V2<br/>
> [Lihe Yang](https://liheyoung.github.io/),
> [Bingyi Kang](https://scholar.google.com/citations?user=NmHgX-wAAAAJ),
> [Zilong Huang](https://speedinghzl.github.io/),
> [Zhen Zhao](https://scholar.google.com/citations?user=fF8OFV8AAAAJ&hl=en),
> [Xiaogang Xu](https://xuxiaogang.com/),
> [Jiashi Feng](https://sites.google.com/site/jshfeng/),
> [Hengshuang Zhao<sup>†</sup>](https://hszhao.github.io/)<br/>
> NeurIPS 2024.
> [Paper Link](https://arxiv.org/abs/2406.09414) |
> [Homepage](https://depth-anything-v2.github.io/) |
> [PyTorch Version (without training)](https://github.com/DepthAnything/Depth-Anything-V2)

> Depth Anything: Unleashing the Power of Large-Scale Unlabeled Data<br/>
> [Lihe Yang](https://liheyoung.github.io/),
> [Bingyi Kang](https://scholar.google.com/citations?user=NmHgX-wAAAAJ),
> [Zilong Huang](https://speedinghzl.github.io/),
> [Xiaogang Xu](https://xuxiaogang.com/),
> [Jiashi Feng](https://sites.google.com/site/jshfeng/),
> [Hengshuang Zhao<sup>†</sup>](https://hszhao.github.io/)<br/>
> CVPR 2024.
> [Paper Link](https://arxiv.org/pdf/2401.10891) |
> [Homepage](https://depth-anything.github.io/) |
> [PyTorch Version (without training)](https://github.com/LiheYoung/Depth-Anything)
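Whichever model in the series you run, the raw prediction is a relative depth (or disparity) map of floats, which is usually min-max normalized to an 8-bit image before saving or display. The sketch below is a generic NumPy post-processing step under that assumption — the function name and shapes are illustrative, not this repository's API:

```python
import numpy as np

def depth_to_uint8(depth: np.ndarray) -> np.ndarray:
    """Min-max normalize an HxW float depth/disparity map to [0, 255].

    Generic visualization helper, not part of this repository's API.
    """
    d_min, d_max = float(depth.min()), float(depth.max())
    if d_max - d_min < 1e-8:  # flat map: avoid division by zero
        return np.zeros_like(depth, dtype=np.uint8)
    norm = (depth - d_min) / (d_max - d_min)  # rescale to [0, 1]
    return (norm * 255.0).astype(np.uint8)

# Example with a dummy 2x2 "depth map"
vis = depth_to_uint8(np.array([[0.0, 1.0], [2.0, 4.0]]))
```

The resulting `uint8` array can be written out directly with any image library (e.g. as a grayscale PNG) or passed through a colormap first.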