---
library_name: pytorch
---
DepthAnythingV2 is a lightweight monocular depth estimation model that predicts per-pixel depth maps from a single RGB image, optimized for efficiency and robustness across diverse scenes.
Original paper: DepthAnything V2
DepthAnythingV2-Small
This model uses the DepthAnythingV2-Small variant, which balances model size and inference speed while maintaining strong depth estimation accuracy. It is well suited for applications such as AR/VR, robotics, scene reconstruction, and real-time 3D perception on edge devices.
Model Configuration:
- Reference implementation: Official DepthAnythingV2 source code
- Original weights: Depth-Anything-V2-Small
- Input resolution: 3x224x224
- Supported Cooper versions:
- Cooper SDK: [2.5.2]
- Cooper Foundry: [2.2]
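To match the 3x224x224 input resolution listed above, an RGB frame must be converted to a normalized NCHW tensor before inference. The sketch below is a hypothetical preprocessing step, not the packaged Cooper pipeline: the ImageNet mean/std constants are the common default for DepthAnything-style backbones and are an assumption here.

```python
import numpy as np

# Assumed normalization constants (ImageNet defaults) -- verify against the
# actual model package before deployment.
MEAN = np.array([0.485, 0.456, 0.406], dtype=np.float32)
STD = np.array([0.229, 0.224, 0.225], dtype=np.float32)

def preprocess(image_hwc_uint8: np.ndarray) -> np.ndarray:
    """Convert an HxWx3 uint8 RGB image (already resized to 224x224)
    into a 1x3x224x224 float32 array in NCHW layout."""
    x = image_hwc_uint8.astype(np.float32) / 255.0  # scale to [0, 1]
    x = (x - MEAN) / STD                            # per-channel normalize
    x = x.transpose(2, 0, 1)[None, ...]             # HWC -> NCHW, add batch dim
    return x

dummy = np.zeros((224, 224, 3), dtype=np.uint8)
print(preprocess(dummy).shape)  # (1, 3, 224, 224)
```

The resulting array can be fed directly to the converted model; resizing the source frame to 224x224 beforehand (e.g. with OpenCV or PIL) is left out to keep the sketch dependency-free.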
| Model | Device | Model Link |
|---|---|---|
| DepthAnythingV2-Small | N1-655 | Model_Link |
| DepthAnythingV2-Small | CV72 | Model_Link |
| DepthAnythingV2-Small | CV75 | Model_Link |
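DepthAnythingV2 produces a relative depth map rather than metric depth, so a common post-processing step is min-max normalization for visualization. This is a minimal sketch under that assumption; the exact output layout of the converted Cooper model should be checked against the model package.

```python
import numpy as np

def depth_to_uint8(depth: np.ndarray) -> np.ndarray:
    """Min-max normalize a predicted depth map to uint8 for visualization."""
    d = depth.astype(np.float32)
    span = max(float(d.max() - d.min()), 1e-8)  # avoid divide-by-zero on flat maps
    d = (d - d.min()) / span                    # scale to [0, 1]
    return (d * 255.0).astype(np.uint8)

# Stand-in for a model output at the 224x224 resolution listed above.
pred = np.random.rand(224, 224).astype(np.float32)
vis = depth_to_uint8(pred)
print(vis.shape, vis.dtype)  # (224, 224) uint8
```

The uint8 map can then be saved as a grayscale image or passed through a colormap for inspection.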
