# Changelog
All notable changes to this project will be documented in this file.
The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.
## [Unreleased]

## [0.1.0] - 2024-12-03

### Added
- **Model Caching**: ~200x faster model loading after first use via the `ModelCache` singleton
- **Adaptive Batching**: Automatic batch size optimization based on available GPU memory
  - `batch_inference()` method with a `batch_size="auto"` option
  - `get_optimal_batch_size()` for memory-aware batch sizing
- **CLI Batching Options**: `--batch-size`, `--max-batch-size`, `--target-memory-utilization`
- **Apple Silicon Optimizations**: Smart CPU/GPU preprocessing selection for MPS
- **GPU Preprocessing**: Kornia-based GPU preprocessing with NVJPEG support on CUDA
- **Comprehensive Benchmarks**: Performance comparison scripts and documentation
- **PyPI Package**: Published as `awesome-depth-anything-3`
- **CI/CD**: GitHub Actions for testing, linting, and PyPI publishing
- **HF Spaces Demo**: Interactive Gradio demo on Hugging Face
- **Colab Tutorial**: Interactive notebook with examples
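The caching entry above can be illustrated with a minimal sketch of a model-caching singleton. This is not the package's actual implementation: only the `ModelCache` name comes from the changelog, and the `get()` method and `load_fn` parameter are hypothetical stand-ins for an expensive model-loading call.

```python
class ModelCache:
    """Illustrative singleton cache; the real ModelCache may differ."""

    _instance = None

    def __new__(cls):
        # Classic singleton: every ModelCache() call returns the same object.
        if cls._instance is None:
            cls._instance = super().__new__(cls)
            cls._instance._models = {}
        return cls._instance

    def get(self, name, load_fn):
        # Run the expensive load_fn only on first use; later calls are
        # dictionary lookups, which is where the large first-use vs.
        # cached-load speedup comes from.
        if name not in self._models:
            self._models[name] = load_fn()
        return self._models[name]
```

Because the cache lives on the class rather than on any instance, a CLI command and a library call in the same process share one set of loaded models.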
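Memory-aware batch sizing of the kind listed above can be sketched as follows. Only the function name `get_optimal_batch_size()` appears in the changelog; the parameters here (`free_memory_bytes`, `per_sample_bytes`, `max_batch_size`, `target_utilization`, loosely mirroring the `--max-batch-size` and `--target-memory-utilization` CLI flags) are assumptions, not the package's real signature.

```python
def get_optimal_batch_size(free_memory_bytes, per_sample_bytes,
                           max_batch_size=32, target_utilization=0.8):
    """Largest batch that fits in a target fraction of free memory.

    Illustrative heuristic only: cap a memory budget at
    target_utilization of free memory, divide by the per-sample
    footprint, and clamp to [1, max_batch_size].
    """
    budget = free_memory_bytes * target_utilization
    fit = int(budget // per_sample_bytes)
    return max(1, min(fit, max_batch_size))
```

For example, with 1 GiB free and roughly 200 MiB per sample at 80% target utilization, the heuristic picks a batch size of 4.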
### Changed

- Package renamed from `depth-anything-3` to `awesome-depth-anything-3`
- Improved error handling in CLI commands
- Better logging with configurable levels
## Credits
This package is an optimized fork of Depth Anything 3 by ByteDance. All model architecture and weights are their work. See README for full attribution.