---
title: Depth Estimation Compare Demo
emoji: 🔍
colorFrom: indigo
colorTo: indigo
sdk: gradio
sdk_version: 5.49.1
app_file: app.py
pinned: false
---
# Depth Estimation Comparison Demo
A ZeroGPU-friendly Gradio interface for comparing Depth Anything v1, Depth Anything v2, and Pixel-Perfect Depth (PPD) on the same image. Switch between side-by-side layouts, a slider overlay, or single-model inspection to understand how different pipelines perceive scene geometry.
## 🌟 Highlights
- Three interactive views: draggable slider, labeled side-by-side comparison, and original vs depth for any single model.
- Multi-family depth models: run ViT variants from Depth Anything v1/v2 alongside Pixel-Perfect Depth with MoGe metric alignment.
- ZeroGPU-aware: on-demand loading, model cache clearing, and torch CUDA cleanup keep GPU usage inside HuggingFace Spaces limits.
- Curated examples: reusable demo images sourced from each model family plus local assets to quickly validate behaviour.
## 🔬 Supported Pipelines
- Depth Anything v1 (`LiheYoung/depth_anything_*`): ViT-S/B/L with fast transformer backbones and colorized outputs via the `Spectral_r` colormap (see the colorization sketch after this list).
- Depth Anything v2 (`Depth-Anything-V2/checkpoints/*.pth`): ViT-Small/Base/Large with HF Hub fallback, configurable feature channels, and improved edge handling.
- Pixel-Perfect Depth: diffusion-based relative depth refined by the MoGe metric surface model and RANSAC alignment to recover metric depth; denoising steps are customizable.
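For intuition, colorizing a relative depth map with `Spectral_r` takes only a few lines. This is a minimal sketch; the helper name is illustrative, not the demo's exact function:

```python
# Normalize a depth map to [0, 1] and map it to RGB with matplotlib's
# reversed Spectral colormap, as used for the demo's relative-depth views.
import numpy as np
from matplotlib import colormaps

def colorize_depth(depth: np.ndarray) -> np.ndarray:
    d = (depth - depth.min()) / (depth.max() - depth.min() + 1e-8)
    rgb = colormaps["Spectral_r"](d)[..., :3]  # drop the alpha channel
    return (rgb * 255).astype(np.uint8)
```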
## 🖥️ App Experience
- Slider Comparison: drag between two predictions with automatically labeled overlays.
- Method Comparison: view models side-by-side with synchronized layout and captions rendered in OpenCV.
- Single Model: inspect the RGB input versus one model's output using the Gradio `ImageSlider` component.
- Example Gallery: natural-number sorting across `assets/examples`, `Depth-Anything/assets/examples`, `Depth-Anything-V2/assets/examples`, and `Pixel-Perfect-Depth/assets/examples` (see the sorting sketch below).
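Natural-number sorting keeps `img2.png` ahead of `img10.png`. A sketch of the idea; helper names are illustrative, not taken from `app.py`:

```python
# Split filenames into digit and non-digit runs so numeric parts compare as
# numbers rather than strings, then collect examples from several folders.
import re
from pathlib import Path

def natural_key(name: str):
    return [int(tok) if tok.isdigit() else tok.lower()
            for tok in re.split(r"(\d+)", name)]

def gather_examples(*dirs: str):
    exts = {".jpg", ".jpeg", ".png"}
    files = [p for d in dirs if Path(d).is_dir()
             for p in Path(d).iterdir() if p.suffix.lower() in exts]
    return sorted(files, key=lambda p: natural_key(p.name))
```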
## 📦 Installation & Setup
### Local Development
1. Clone & enter:

   ```bash
   git clone <repository-url>
   cd Depth-Estimation-Compare-demo
   ```

2. Install dependencies (includes `gradio`, `torch`, `gradio_imageslider`, `open3d`, `scikit-learn`, and MoGe utilities):

   ```bash
   pip install -r requirements.txt
   ```

3. Model assets:
   - Depth Anything v1 checkpoints stream automatically from the HuggingFace Hub.
   - Download Depth Anything v2 weights into `Depth-Anything-V2/checkpoints/` if they are not already present (`depth_anything_v2_vits.pth`, `depth_anything_v2_vitb.pth`, `depth_anything_v2_vitl.pth`).
   - Pixel-Perfect Depth pulls the diffusion checkpoint (`ppd.pth`) from `gangweix/Pixel-Perfect-Depth` on first use and loads MoGe weights (`Ruicheng/moge-2-vitl-normal`).

4. Run the app:

   ```bash
   python app_local.py   # local UI with live-reload tweaks
   python app.py         # ZeroGPU-ready launch script
   ```
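On ZeroGPU hardware, `app.py` must request a GPU only around inference. A minimal sketch of the usual pattern, assuming the `spaces` package HuggingFace provides on ZeroGPU Spaces; the names below are illustrative, not copied from `app.py`:

```python
# Sketch of the ZeroGPU pattern: GPU-bound work runs inside a function
# decorated with @spaces.GPU, so the Space holds a GPU only during inference.
import spaces  # provided on HuggingFace ZeroGPU hardware
import torch

@spaces.GPU
def predict_depth(image):
    model = get_model().to("cuda")  # get_model() is a hypothetical cached loader
    with torch.no_grad():
        return model(image)         # post-processing/colorization omitted
```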
### HuggingFace Spaces (ZeroGPU)
- Push the repository contents to a Gradio Space.
- Select the ZeroGPU hardware preset.
- The app downloads required checkpoints on demand and aggressively frees memory after each inference via `clear_model_cache()` (sketched below).
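For reference, the kind of cleanup `clear_model_cache()` performs looks roughly like this (a sketch; the real implementation lives in `app.py`):

```python
# Drop cached models, then release both Python and CUDA memory so the next
# request starts from a clean slate.
import gc
import torch

def clear_model_cache(cache: dict) -> None:
    cache.clear()                 # forget every loaded model
    gc.collect()                  # reclaim the Python-side objects
    if torch.cuda.is_available():
        torch.cuda.empty_cache()  # hand freed blocks back to the CUDA driver
```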
## 📂 Project Structure
```
Depth-Estimation-Compare-demo/
├── app.py                 # ZeroGPU deployment entrypoint
├── app_local.py           # Local-friendly launch script
├── requirements.txt       # Python dependencies (Gradio, Torch, PPD stack)
├── assets/
│   └── examples/          # Shared demo imagery
├── Depth-Anything/        # Depth Anything v1 implementation + utilities
├── Depth-Anything-V2/     # Depth Anything v2 implementation & checkpoints
├── Pixel-Perfect-Depth/   # Pixel-Perfect Depth diffusion + MoGe helpers
└── README.md              # You are here
```
## ⚙️ Configuration Notes
- Model dropdown labels come from `V1_MODEL_CONFIGS`, `V2_MODEL_CONFIGS`, and the PPD entry in `app.py`.
- `clear_model_cache()` resets every model and flushes CUDA to respect ZeroGPU constraints.
- Pixel-Perfect Depth inference aligns relative depth to metric scale through `recover_metric_depth_ransac()` for consistent visualization (see the alignment sketch after this list).
- Depth visualizations use a normalized `Spectral_r` colormap; PPD uses a dedicated matplotlib colormap for metric maps.
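The alignment step can be pictured as a robust linear fit between the relative prediction and a metric reference. Here is a sketch in the spirit of `recover_metric_depth_ransac()`, built on scikit-learn; the repo's actual implementation may differ:

```python
# Fit metric ≈ a * relative + b with RANSAC so outliers (sky, speculars,
# depth discontinuities) don't skew the recovered scale and shift.
import numpy as np
from sklearn.linear_model import LinearRegression, RANSACRegressor

def align_to_metric(relative: np.ndarray, metric: np.ndarray,
                    mask: np.ndarray) -> np.ndarray:
    x = relative[mask].reshape(-1, 1)  # relative depth samples
    y = metric[mask]                   # metric reference, e.g. MoGe output
    ransac = RANSACRegressor(LinearRegression()).fit(x, y)
    a = float(ransac.estimator_.coef_[0])
    b = float(ransac.estimator_.intercept_)
    return a * relative + b            # relative depth lifted to metric scale
```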
## 📊 Performance Expectations
- Depth Anything v1: ViT-S ~1–2 s, ViT-B ~2–4 s, ViT-L ~4–8 s (image dependent).
- Depth Anything v2: similar to v1 with improved sharpness; HF downloads add one-time startup overhead.
- Pixel-Perfect Depth: diffusion + metric refinement typically takes longer (10–20 denoising steps) but returns metrically aligned depth suitable for downstream 3D tasks.
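To check these numbers on your own hardware, a simple harness is enough; `model_fn` below is a hypothetical stand-in for any of the demo's predictors:

```python
# Average wall-clock time per inference after a warmup pass to prime
# caches and CUDA kernels.
import time

def time_inference(model_fn, image, warmup: int = 1, runs: int = 3) -> float:
    for _ in range(warmup):
        model_fn(image)
    start = time.perf_counter()
    for _ in range(runs):
        model_fn(image)
    return (time.perf_counter() - start) / runs  # mean seconds per run
```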
## 🎯 Usage Tips
- Mix and match any two models in the comparison tabs to highlight qualitative differences.
- Use the Single Model tab to check PPD's metric depth against the RGB input.
- Leverage the provided examples to benchmark indoor/outdoor, lighting extremes, and complex geometry scenarios before running custom images.
## 🤝 Contributing
Enhancements are welcome: new model backends, visualization modes, or memory optimizations are especially valuable for ZeroGPU deployments. Please follow the coding style in `app.py` and keep documentation in sync with new capabilities.
## 📄 License
- Depth Anything v1: MIT License
- Depth Anything v2: Apache 2.0 License
- Pixel-Perfect Depth: see upstream repository for licensing
- Demo scaffolding in this repo: MIT License (follow individual component terms)
Built as a hands-on playground for exploring modern monocular depth estimators. Adjust tabs, compare outputs, and plug results into your 3D workflows.