FRIGG - AnyDepth Depth Estimation

  • Paper: AnyDepth: Depth Estimation Made Easy
  • Module: FRIGG (ANIMA Wave 6)
  • Architecture: DINOv2-Base (frozen) + SDT Head (trained)

Model Details

  • Encoder: DINOv2-ViT-B/14 (86.6M params, frozen)
  • Decoder: SDT Head (9.5M params, trained)
  • Total: ~96M params (only 9.5M trainable)
  • Input: RGB image (518x518 optimal)
  • Output: Dense depth map
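The 518×518 input size follows from the ViT-B/14 backbone: its 14×14 patches require dimensions that are multiples of 14, and 518 is the closest such value to 512. A small sketch of the token arithmetic (the helper name is ours, not from the repo):

```python
def vit_token_count(height, width, patch_size=14, cls_tokens=1):
    """Number of tokens a ViT produces for an image whose
    dimensions are exact multiples of the patch size."""
    assert height % patch_size == 0 and width % patch_size == 0
    patches = (height // patch_size) * (width // patch_size)
    return patches + cls_tokens

# 518 / 14 = 37 patches per side -> 37 * 37 = 1369 patch tokens (+1 CLS)
print(vit_token_count(518, 518))  # -> 1370
```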

Exported Formats

Format          File                          Use Case
PyTorch (.pth)  pytorch/frigg_v1.pth          Training, fine-tuning
SafeTensors     pytorch/frigg_v1.safetensors  Fast loading, safe weight storage
ONNX            onnx/frigg_v1.onnx            Cross-platform inference
TensorRT FP16   tensorrt/frigg_v1_fp16.trt    Edge deployment (Jetson/L4)
TensorRT FP32   tensorrt/frigg_v1_fp32.trt    Full-precision inference
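For the ONNX export, inference with `onnxruntime` might look like the sketch below. The single-input/single-output layout and the normalization constants (ImageNet statistics, standard for DINOv2 backbones) are assumptions; inspect the exported graph (e.g. with Netron) before relying on them.

```python
import numpy as np

# ImageNet normalization, standard for DINOv2 backbones (assumed here)
IMAGENET_MEAN = np.array([0.485, 0.456, 0.406], dtype=np.float32)
IMAGENET_STD = np.array([0.229, 0.224, 0.225], dtype=np.float32)

def preprocess(rgb_uint8):
    """HWC uint8 RGB -> 1x3xHxW float32, normalized."""
    x = rgb_uint8.astype(np.float32) / 255.0
    x = (x - IMAGENET_MEAN) / IMAGENET_STD
    return np.transpose(x, (2, 0, 1))[None]

def run_onnx(session, rgb_uint8):
    """Run one image through an ONNX Runtime session; assumes a
    single image input and a single depth output."""
    input_name = session.get_inputs()[0].name
    (depth,) = session.run(None, {input_name: preprocess(rgb_uint8)})
    return depth  # (1, 1, H, W)

# import onnxruntime as ort
# session = ort.InferenceSession("onnx/frigg_v1.onnx")
# depth = run_onnx(session, image)
```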

Usage

import torch
from anima_frigg.models import build_anydepth, load_dinov2_backbone

# Build the frozen DINOv2 backbone, attach the SDT head,
# then load the trained weights.
backbone = load_dinov2_backbone(model_path="pytorch/frigg_v1.pth")
model = build_anydepth(backbone)
model.load_state_dict(torch.load("pytorch/frigg_v1.pth"))
model = model.cuda().eval()  # match the device of the input below

image = torch.randn(1, 3, 518, 518).cuda()
with torch.no_grad():
    depth = model(image)  # (1, 1, H, W)
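The output is a dense depth map; for visualization it is common to min-max normalize it to 8-bit. A minimal numpy sketch (the helper name is ours):

```python
import numpy as np

def depth_to_uint8(depth):
    """Min-max normalize a depth map to [0, 255] for visualization.
    A constant map collapses to zeros to avoid division by zero."""
    d = np.asarray(depth, dtype=np.float32).squeeze()
    span = d.max() - d.min()
    if span == 0:
        return np.zeros_like(d, dtype=np.uint8)
    return ((d - d.min()) / span * 255.0).astype(np.uint8)
```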

Training

  • Datasets: NYUv2 (1449 samples) + KITTI (7481 pairs)
  • Loss: SILog + Gradient Matching
  • Optimizer: AdamW (lr=1e-4)
  • Hardware: 4x NVIDIA L4 23GB (DDP)
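The SILog (scale-invariant logarithmic) loss penalizes log-depth errors while discounting a global scale shift. A common formulation, sketched in numpy (the λ=0.85 weighting is a typical choice in the literature, not confirmed by this repo):

```python
import numpy as np

def silog_loss(pred, gt, lam=0.85, eps=1e-6):
    """Scale-invariant log loss: sqrt(E[d^2] - lam * E[d]^2),
    with d = log(pred) - log(gt) over valid (gt > 0) pixels."""
    mask = gt > 0
    d = np.log(pred[mask] + eps) - np.log(gt[mask] + eps)
    # clamp guards against tiny negative values from float rounding
    return float(np.sqrt(max(np.mean(d ** 2) - lam * np.mean(d) ** 2, 0.0)))
```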

Citation

@article{ren2026anydepth,
  title={AnyDepth: Depth Estimation Made Easy},
  author={Ren, Zeyu and Zhang, Zeyu and Li, Wukai and Liu, Qingxiang and Tang, Hao},
  year={2026}
}

Generated: 2026-03-31T16:39:28.292515
