---
library_name: pytorch
---
# DepthAnythingV2-Small

DepthAnythingV2 is a lightweight depth estimation model that predicts per-pixel depth maps from a single RGB image, optimized for efficiency and versatility across scenes.

Original paper: [DepthAnything V2](https://arxiv.org/abs/2406.09414)

This model card covers the DepthAnythingV2-Small variant, which balances model size and inference speed while maintaining strong depth estimation accuracy. It is well suited to applications such as AR/VR, robotics, scene reconstruction, and real-time 3D perception on edge devices.
Model configuration:
- Reference implementation: [Official DepthAnythingV2 source code](https://github.com/DepthAnything/Depth-Anything-V2)
- Original weights: [Depth-Anything-V2-Small](https://huggingface.co/depth-anything/Depth-Anything-V2-Small/resolve/main/depth_anything_v2_vits.pth?download=true)
- Input resolution: 3x224x224 (CHW)
- Supported Cooper versions:
  - Cooper SDK: 2.5.2
  - Cooper Foundry: 2.2
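Given the fixed 3x224x224 input above, an input frame must be resized and normalized before inference. A minimal NumPy sketch of typical DepthAnythingV2-style preprocessing (ImageNet mean/std normalization; the `preprocess` helper and nearest-neighbour resize are illustrative, not part of the SDK):

```python
import numpy as np

# ImageNet statistics commonly used for DepthAnythingV2 preprocessing
MEAN = np.array([0.485, 0.456, 0.406], dtype=np.float32)
STD = np.array([0.229, 0.224, 0.225], dtype=np.float32)

def preprocess(image: np.ndarray, size: int = 224) -> np.ndarray:
    """Resize an HxWx3 uint8 RGB image to size x size (nearest neighbour),
    normalize with ImageNet statistics, and return a 1x3xHxW float32 array."""
    h, w, _ = image.shape
    rows = np.arange(size) * h // size
    cols = np.arange(size) * w // size
    resized = image[rows][:, cols].astype(np.float32) / 255.0
    normalized = (resized - MEAN) / STD
    return normalized.transpose(2, 0, 1)[None]  # NCHW layout

x = preprocess(np.zeros((480, 640, 3), dtype=np.uint8))
print(x.shape)  # (1, 3, 224, 224)
```

A production pipeline would typically use bilinear interpolation (e.g. via OpenCV or torchvision) rather than the nearest-neighbour resize shown here.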
| Model | Device | Model Link |
| :-----: | :-----: | :-----: |
| DepthAnythingV2-Small | N1-655 | [Model_Link](https://huggingface.co/Ambarella/DepthAnythingV2/blob/main/n1-655_depthanything_v2_small.bin) |
| DepthAnythingV2-Small | CV72 | [Model_Link](https://huggingface.co/Ambarella/DepthAnythingV2/blob/main/cv72_depthanything_v2_small.bin) |
| DepthAnythingV2-Small | CV75 | [Model_Link](https://huggingface.co/Ambarella/DepthAnythingV2/blob/main/cv75_depthanything_v2_small.bin) |
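The models above output a per-pixel relative depth map; for display it is commonly min-max scaled to an 8-bit image. A minimal NumPy sketch (the `depth_to_uint8` helper is illustrative, not part of the SDK):

```python
import numpy as np

def depth_to_uint8(depth: np.ndarray) -> np.ndarray:
    """Min-max normalize a raw depth map to [0, 255] for visualization."""
    d = depth.astype(np.float32)
    d -= d.min()
    peak = d.max()
    if peak > 0:  # avoid division by zero on a constant map
        d /= peak
    return (d * 255.0).astype(np.uint8)

vis = depth_to_uint8(np.random.rand(224, 224).astype(np.float32))
print(vis.shape, vis.dtype)  # (224, 224) uint8
```

The resulting array can be saved directly as a grayscale image or passed through a colormap for easier inspection.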