---
license: cc-by-nc-4.0
language:
- en
pipeline_tag: depth-estimation
library_name: depth-anything-v2
tags:
- depth
- relative depth
---
# Depth-Anything-V2-Base

## Introduction

Depth Anything V2 is trained on 595K synthetic labeled images and 62M+ real unlabeled images, providing the most capable monocular depth estimation (MDE) model with the following features:
- more fine-grained details than Depth Anything V1
- more robust than Depth Anything V1 and SD-based models (e.g., Marigold, Geowizard)
- more efficient (10x faster) and more lightweight than SD-based models
- impressive fine-tuned performance with our pre-trained models
## Installation

```bash
git clone https://huggingface.co/spaces/depth-anything/Depth-Anything-V2
cd Depth-Anything-V2
pip install -r requirements.txt
```
## Usage

Download the [model](https://huggingface.co/depth-anything/Depth-Anything-V2-Base/resolve/main/depth_anything_v2_vitb.pth?download=true) first and put it under the `checkpoints` directory.
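If you prefer to fetch the checkpoint programmatically, below is a minimal sketch using `huggingface_hub` (the `repo_id` and the local `checkpoints` directory are assumptions; adjust them to the repository and layout you actually use):

```python
# Optional alternative to a manual download: fetch the checkpoint with huggingface_hub.
# Assumes `pip install huggingface-hub` and that depth_anything_v2_vitb.pth is hosted in the repo below.
from huggingface_hub import hf_hub_download

filepath = hf_hub_download(
    repo_id="depth-anything/Depth-Anything-V2-Base",  # assumed repo id; change if you use a mirror
    filename="depth_anything_v2_vitb.pth",
    local_dir="checkpoints",  # put the file where the usage snippet below expects it
)
print(filepath)  # path to the downloaded .pth file
```

Then load the model and run inference: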
```python
import cv2
import torch

from depth_anything_v2.dpt import DepthAnythingV2

# ViT-B configuration matching the Base checkpoint
model = DepthAnythingV2(encoder='vitb', features=128, out_channels=[96, 192, 384, 768])
model.load_state_dict(torch.load('checkpoints/depth_anything_v2_vitb.pth', map_location='cpu'))
model.eval()

raw_img = cv2.imread('your/image/path')  # BGR image, as read by OpenCV
depth = model.infer_image(raw_img)  # HxW raw depth map
```
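The returned array is relative (not metric) depth. A minimal sketch for turning it into a viewable image, assuming simple min-max normalization (the output filename is only an example):

```python
import cv2
import numpy as np

# Normalize the raw relative depth to 0-255 for visualization (assumption: min-max scaling).
depth_norm = (depth - depth.min()) / (depth.max() - depth.min() + 1e-8)
depth_u8 = (depth_norm * 255.0).astype(np.uint8)

cv2.imwrite('depth_vis.png', depth_u8)  # hypothetical output path
```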
## Citation

If you find this project useful, please consider citing:

```bibtex
@article{depth_anything_v2,
  title={Depth Anything V2},
  author={Yang, Lihe and Kang, Bingyi and Huang, Zilong and Zhao, Zhen and Xu, Xiaogang and Feng, Jiashi and Zhao, Hengshuang},
  journal={arXiv:2406.09414},
  year={2024}
}

@inproceedings{depth_anything_v1,
  title={Depth Anything: Unleashing the Power of Large-Scale Unlabeled Data},
  author={Yang, Lihe and Kang, Bingyi and Huang, Zilong and Xu, Xiaogang and Feng, Jiashi and Zhao, Hengshuang},
  booktitle={CVPR},
  year={2024}
}
```