Buckets:
| <meta charset="utf-8" /><meta name="hf:doc:metadata" content="{"title":"Monocular depth estimation","local":"monocular-depth-estimation","sections":[{"title":"Depth estimation pipeline","local":"depth-estimation-pipeline","sections":[],"depth":2},{"title":"Depth estimation inference by hand","local":"depth-estimation-inference-by-hand","sections":[],"depth":2}],"depth":1}"> | |
| <link href="/docs/transformers/main/en/_app/immutable/assets/0.e3b0c442.css" rel="modulepreload"> | |
| <link rel="modulepreload" href="/docs/transformers/main/en/_app/immutable/entry/start.2135b7e6.js"> | |
| <link rel="modulepreload" href="/docs/transformers/main/en/_app/immutable/chunks/scheduler.25b97de1.js"> | |
| <link rel="modulepreload" href="/docs/transformers/main/en/_app/immutable/chunks/singletons.0f2b7d5f.js"> | |
| <link rel="modulepreload" href="/docs/transformers/main/en/_app/immutable/chunks/index.e188933d.js"> | |
| <link rel="modulepreload" href="/docs/transformers/main/en/_app/immutable/chunks/paths.3d04d2c6.js"> | |
| <link rel="modulepreload" href="/docs/transformers/main/en/_app/immutable/entry/app.24372c84.js"> | |
| <link rel="modulepreload" href="/docs/transformers/main/en/_app/immutable/chunks/index.d9030fc9.js"> | |
| <link rel="modulepreload" href="/docs/transformers/main/en/_app/immutable/nodes/0.026d2fdd.js"> | |
| <link rel="modulepreload" href="/docs/transformers/main/en/_app/immutable/chunks/each.e59479a4.js"> | |
| <link rel="modulepreload" href="/docs/transformers/main/en/_app/immutable/nodes/412.c2c5075c.js"> | |
| <link rel="modulepreload" href="/docs/transformers/main/en/_app/immutable/chunks/Tip.baa67368.js"> | |
| <link rel="modulepreload" href="/docs/transformers/main/en/_app/immutable/chunks/CodeBlock.e6cd0d95.js"> | |
| <link rel="modulepreload" href="/docs/transformers/main/en/_app/immutable/chunks/EditOnGithub.91d95064.js"><!-- HEAD_svelte-u9bgzb_START --><meta name="hf:doc:metadata" content="{"title":"Monocular depth estimation","local":"monocular-depth-estimation","sections":[{"title":"Depth estimation pipeline","local":"depth-estimation-pipeline","sections":[],"depth":2},{"title":"Depth estimation inference by hand","local":"depth-estimation-inference-by-hand","sections":[],"depth":2}],"depth":1}"><!-- HEAD_svelte-u9bgzb_END --> <p></p> <h1 class="relative group"><a id="monocular-depth-estimation" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#monocular-depth-estimation"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span>Monocular depth estimation</span></h1> <p data-svelte-h="svelte-1elgt0a">Monocular depth estimation is a computer vision task that involves predicting the depth information of a scene from a | |
| single image. In other words, it is the process of estimating the distance of objects in a scene from | |
| a single camera viewpoint.</p> <p data-svelte-h="svelte-szef6f">Monocular depth estimation has various applications, including 3D reconstruction, augmented reality, autonomous driving, | |
| and robotics. It is a challenging task as it requires the model to understand the complex relationships between objects | |
| in the scene and the corresponding depth information, which can be affected by factors such as lighting conditions, | |
| occlusion, and texture.</p> <p data-svelte-h="svelte-1evxf27">There are two main depth estimation categories:</p> <ul data-svelte-h="svelte-1l3knez"><li><p><strong>Absolute depth estimation</strong>: This task variant aims to provide exact depth measurements from the camera. The term is used interchangeably with metric depth estimation, where depth is provided in precise measurements in meters or feet. Absolute depth estimation models output depth maps with numerical values that represent real-world distances.</p></li> <li><p><strong>Relative depth estimation</strong>: Relative depth estimation aims to predict the depth order of objects or points in a scene without providing the precise measurements. These models output a depth map that indicates which parts of the scene are closer or farther relative to each other without the actual distances to A and B.</p></li></ul> <p data-svelte-h="svelte-9hlqcm">In this guide, we will see how to infer with <a href="https://huggingface.co/depth-anything/Depth-Anything-V2-Large" rel="nofollow">Depth Anything V2</a>, a state-of-the-art zero-shot relative depth estimation model, and <a href="https://huggingface.co/docs/transformers/main/en/model_doc/zoedepth" rel="nofollow">ZoeDepth</a>, an absolute depth estimation model.</p> <div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400"><p data-svelte-h="svelte-vf0won">Check the <a href="https://huggingface.co/tasks/depth-estimation" rel="nofollow">Depth Estimation</a> task page to view all compatible architectures and checkpoints.</p></div> <p data-svelte-h="svelte-n2hqrn">Before we begin, we need to install the latest version of Transformers:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><!-- HTML_TAG_START -->pip install -q -U transformers<!-- HTML_TAG_END --></pre></div> <h2 class="relative group"><a id="depth-estimation-pipeline" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#depth-estimation-pipeline"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span>Depth estimation pipeline</span></h2> <p data-svelte-h="svelte-dz0z6">The simplest way to try out inference with a model supporting depth estimation is to use the corresponding <a href="/docs/transformers/main/en/main_classes/pipelines#transformers.pipeline">pipeline()</a>. | |
| Instantiate a pipeline from a <a href="https://huggingface.co/models?pipeline_tag=depth-estimation&sort=downloads" rel="nofollow">checkpoint on the Hugging Face Hub</a>:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><!-- HTML_TAG_START --><span class="hljs-meta">>>> </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> pipeline | |
| <span class="hljs-meta">>>> </span><span class="hljs-keyword">import</span> torch | |
| <span class="hljs-meta">>>> </span>device = <span class="hljs-string">"cuda"</span> <span class="hljs-keyword">if</span> torch.cuda.is_available() <span class="hljs-keyword">else</span> <span class="hljs-string">"cpu"</span> | |
| <span class="hljs-meta">>>> </span>checkpoint = <span class="hljs-string">"depth-anything/Depth-Anything-V2-base-hf"</span> | |
| <span class="hljs-meta">>>> </span>pipe = pipeline(<span class="hljs-string">"depth-estimation"</span>, model=checkpoint, device=device)<!-- HTML_TAG_END --></pre></div> <p data-svelte-h="svelte-wuz5lr">Next, choose an image to analyze:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><!-- HTML_TAG_START --><span class="hljs-meta">>>> </span><span class="hljs-keyword">from</span> PIL <span class="hljs-keyword">import</span> Image | |
| <span class="hljs-meta">>>> </span><span class="hljs-keyword">import</span> requests | |
| <span class="hljs-meta">>>> </span>url = <span class="hljs-string">"https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/bee.jpg"</span> | |
| <span class="hljs-meta">>>> </span>image = Image.<span class="hljs-built_in">open</span>(requests.get(url, stream=<span class="hljs-literal">True</span>).raw) | |
| <span class="hljs-meta">>>> </span>image<!-- HTML_TAG_END --></pre></div> <div class="flex justify-center" data-svelte-h="svelte-6n7vnz"><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/bee.jpg" alt="Photo of a bee"></div> <p data-svelte-h="svelte-mcr1tn">Pass the image to the pipeline.</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><!-- HTML_TAG_START --><span class="hljs-meta">>>> </span>predictions = pipe(image)<!-- HTML_TAG_END --></pre></div> <p data-svelte-h="svelte-1jckqfu">The pipeline returns a dictionary with two entries. The first one, called <code>predicted_depth</code>, is a tensor with the values | |
| being the depth expressed in meters for each pixel. | |
| The second one, <code>depth</code>, is a PIL image that visualizes the depth estimation result.</p> <p data-svelte-h="svelte-1dzpyfr">Let’s take a look at the visualized result:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><!-- HTML_TAG_START --><span class="hljs-meta">>>> </span>predictions[<span class="hljs-string">"depth"</span>]<!-- HTML_TAG_END --></pre></div> <div class="flex justify-center" data-svelte-h="svelte-43wxxb"><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/depth-visualization.png" alt="Depth estimation visualization"></div> <h2 class="relative group"><a id="depth-estimation-inference-by-hand" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#depth-estimation-inference-by-hand"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span>Depth estimation inference by hand</span></h2> <p data-svelte-h="svelte-1u79cc9">Now that you’ve seen how to use the depth estimation pipeline, let’s see how we can replicate the same result by hand.</p> <p data-svelte-h="svelte-1r8mctn">Start by loading the model and associated processor from a <a href="https://huggingface.co/models?pipeline_tag=depth-estimation&sort=downloads" rel="nofollow">checkpoint on the Hugging Face Hub</a>. | |
| Here we’ll use the same checkpoint as before:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><!-- HTML_TAG_START --><span class="hljs-meta">>>> </span><span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> AutoImageProcessor, AutoModelForDepthEstimation | |
| <span class="hljs-meta">>>> </span>checkpoint = <span class="hljs-string">"Intel/zoedepth-nyu-kitti"</span> | |
| <span class="hljs-meta">>>> </span>image_processor = AutoImageProcessor.from_pretrained(checkpoint) | |
| <span class="hljs-meta">>>> </span>model = AutoModelForDepthEstimation.from_pretrained(checkpoint).to(device)<!-- HTML_TAG_END --></pre></div> <p data-svelte-h="svelte-hhrw3">Prepare the image input for the model using the <code>image_processor</code> that will take care of the necessary image transformations | |
| such as resizing and normalization:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><!-- HTML_TAG_START --><span class="hljs-meta">>>> </span>pixel_values = image_processor(image, return_tensors=<span class="hljs-string">"pt"</span>).pixel_values.to(device)<!-- HTML_TAG_END --></pre></div> <p data-svelte-h="svelte-1yk8q0z">Pass the prepared inputs through the model:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><!-- HTML_TAG_START --><span class="hljs-meta">>>> </span><span class="hljs-keyword">import</span> torch | |
| <span class="hljs-meta">>>> </span><span class="hljs-keyword">with</span> torch.no_grad(): | |
| <span class="hljs-meta">... </span> outputs = model(pixel_values)<!-- HTML_TAG_END --></pre></div> <p data-svelte-h="svelte-e3gnzr">Let’s post-process and visualize the results.</p> <p data-svelte-h="svelte-10z56um">We need to pad and then resize the outputs so that predicted depth map has the same dimension as the original image. After resizing we will remove the padded regions from the depth.</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><!-- HTML_TAG_START --><span class="hljs-meta">>>> </span><span class="hljs-keyword">import</span> numpy <span class="hljs-keyword">as</span> np | |
| <span class="hljs-meta">>>> </span><span class="hljs-keyword">import</span> torch.nn.functional <span class="hljs-keyword">as</span> F | |
| <span class="hljs-meta">>>> </span>predicted_depth = outputs.predicted_depth.unsqueeze(dim=<span class="hljs-number">1</span>) | |
| <span class="hljs-meta">>>> </span>height, width = pixel_values.shape[<span class="hljs-number">2</span>:] | |
| <span class="hljs-meta">>>> </span>height_padding_factor = width_padding_factor = <span class="hljs-number">3</span> | |
| <span class="hljs-meta">>>> </span>pad_h = <span class="hljs-built_in">int</span>(np.sqrt(height/<span class="hljs-number">2</span>) * height_padding_factor) | |
| <span class="hljs-meta">>>> </span>pad_w = <span class="hljs-built_in">int</span>(np.sqrt(width/<span class="hljs-number">2</span>) * width_padding_factor) | |
| <span class="hljs-meta">>>> </span><span class="hljs-keyword">if</span> predicted_depth.shape[-<span class="hljs-number">2</span>:] != pixel_values.shape[-<span class="hljs-number">2</span>:]: | |
| <span class="hljs-meta">>>> </span> predicted_depth = F.interpolate(predicted_depth, size= (height, width), mode=<span class="hljs-string">'bicubic'</span>, align_corners=<span class="hljs-literal">False</span>) | |
| <span class="hljs-meta">>>> </span><span class="hljs-keyword">if</span> pad_h > <span class="hljs-number">0</span>: | |
| predicted_depth = predicted_depth[:, :, pad_h:-pad_h,:] | |
| <span class="hljs-meta">>>> </span><span class="hljs-keyword">if</span> pad_w > <span class="hljs-number">0</span>: | |
| predicted_depth = predicted_depth[:, :, :, pad_w:-pad_w]<!-- HTML_TAG_END --></pre></div> <p data-svelte-h="svelte-i8h4hx">We can now visualize the results (the function below is taken from the <a href="https://github.com/GaussianObject/GaussianObject/blob/ad6629efadb57902d5f8bc0fa562258029a4bdf1/pred_monodepth.py#L11" rel="nofollow">GaussianObject</a> framework).</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><!-- HTML_TAG_START --><span class="hljs-keyword">import</span> matplotlib | |
| <span class="hljs-keyword">def</span> <span class="hljs-title function_">colorize</span>(<span class="hljs-params">value, vmin=<span class="hljs-literal">None</span>, vmax=<span class="hljs-literal">None</span>, cmap=<span class="hljs-string">'gray_r'</span>, invalid_val=-<span class="hljs-number">99</span>, invalid_mask=<span class="hljs-literal">None</span>, background_color=(<span class="hljs-params"><span class="hljs-number">128</span>, <span class="hljs-number">128</span>, <span class="hljs-number">128</span>, <span class="hljs-number">255</span></span>), gamma_corrected=<span class="hljs-literal">False</span>, value_transform=<span class="hljs-literal">None</span></span>): | |
| <span class="hljs-string">"""Converts a depth map to a color image. | |
| Args: | |
| value (torch.Tensor, numpy.ndarray): Input depth map. Shape: (H, W) or (1, H, W) or (1, 1, H, W). All singular dimensions are squeezed | |
| vmin (float, optional): vmin-valued entries are mapped to start color of cmap. If None, value.min() is used. Defaults to None. | |
| vmax (float, optional): vmax-valued entries are mapped to end color of cmap. If None, value.max() is used. Defaults to None. | |
| cmap (str, optional): matplotlib colormap to use. Defaults to 'magma_r'. | |
| invalid_val (int, optional): Specifies value of invalid pixels that should be colored as 'background_color'. Defaults to -99. | |
| invalid_mask (numpy.ndarray, optional): Boolean mask for invalid regions. Defaults to None. | |
| background_color (tuple[int], optional): 4-tuple RGB color to give to invalid pixels. Defaults to (128, 128, 128, 255). | |
| gamma_corrected (bool, optional): Apply gamma correction to colored image. Defaults to False. | |
| value_transform (Callable, optional): Apply transform function to valid pixels before coloring. Defaults to None. | |
| Returns: | |
| numpy.ndarray, dtype - uint8: Colored depth map. Shape: (H, W, 4) | |
| """</span> | |
| <span class="hljs-keyword">if</span> <span class="hljs-built_in">isinstance</span>(value, torch.Tensor): | |
| value = value.detach().cpu().numpy() | |
| value = value.squeeze() | |
| <span class="hljs-keyword">if</span> invalid_mask <span class="hljs-keyword">is</span> <span class="hljs-literal">None</span>: | |
| invalid_mask = value == invalid_val | |
| mask = np.logical_not(invalid_mask) | |
| <span class="hljs-comment"># normalize</span> | |
| vmin = np.percentile(value[mask],<span class="hljs-number">2</span>) <span class="hljs-keyword">if</span> vmin <span class="hljs-keyword">is</span> <span class="hljs-literal">None</span> <span class="hljs-keyword">else</span> vmin | |
| vmax = np.percentile(value[mask],<span class="hljs-number">85</span>) <span class="hljs-keyword">if</span> vmax <span class="hljs-keyword">is</span> <span class="hljs-literal">None</span> <span class="hljs-keyword">else</span> vmax | |
| <span class="hljs-keyword">if</span> vmin != vmax: | |
| value = (value - vmin) / (vmax - vmin) <span class="hljs-comment"># vmin..vmax</span> | |
| <span class="hljs-keyword">else</span>: | |
| <span class="hljs-comment"># Avoid 0-division</span> | |
| value = value * <span class="hljs-number">0.</span> | |
| <span class="hljs-comment"># squeeze last dim if it exists</span> | |
| <span class="hljs-comment"># grey out the invalid values</span> | |
| value[invalid_mask] = np.nan | |
| cmapper = matplotlib.colormaps.get_cmap(cmap) | |
| <span class="hljs-keyword">if</span> value_transform: | |
| value = value_transform(value) | |
| <span class="hljs-comment"># value = value / value.max()</span> | |
| value = cmapper(value, <span class="hljs-built_in">bytes</span>=<span class="hljs-literal">True</span>) <span class="hljs-comment"># (nxmx4)</span> | |
| <span class="hljs-comment"># img = value[:, :, :]</span> | |
| img = value[...] | |
| img[invalid_mask] = background_color | |
| <span class="hljs-comment"># return img.transpose((2, 0, 1))</span> | |
| <span class="hljs-keyword">if</span> gamma_corrected: | |
| <span class="hljs-comment"># gamma correction</span> | |
| img = img / <span class="hljs-number">255</span> | |
| img = np.power(img, <span class="hljs-number">2.2</span>) | |
| img = img * <span class="hljs-number">255</span> | |
| img = img.astype(np.uint8) | |
| <span class="hljs-keyword">return</span> img | |
| <span class="hljs-meta">>>> </span>result = colorize(predicted_depth.cpu().squeeze().numpy()) | |
| <span class="hljs-meta">>>> </span>Image.fromarray(result)<!-- HTML_TAG_END --></pre></div> <div class="flex justify-center" data-svelte-h="svelte-18ny996"><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/depth-visualization-zoe.png" alt="Depth estimation visualization"></div> <a class="!text-gray-400 !no-underline text-sm flex items-center not-prose mt-4" href="https://github.com/huggingface/transformers/blob/main/docs/source/en/tasks/monocular_depth_estimation.md" target="_blank"><span data-svelte-h="svelte-1kd6by1"><</span> <span data-svelte-h="svelte-x0xyl0">></span> <span data-svelte-h="svelte-1dajgef"><span class="underline ml-1.5">Update</span> on GitHub</span></a> <p></p> | |
| <script> | |
| { | |
| __sveltekit_1xexzbk = { | |
| assets: "/docs/transformers/main/en", | |
| base: "/docs/transformers/main/en", | |
| env: {} | |
| }; | |
| const element = document.currentScript.parentElement; | |
| const data = [null,null]; | |
| Promise.all([ | |
| import("/docs/transformers/main/en/_app/immutable/entry/start.2135b7e6.js"), | |
| import("/docs/transformers/main/en/_app/immutable/entry/app.24372c84.js") | |
| ]).then(([kit, app]) => { | |
| kit.start(app, element, { | |
| node_ids: [0, 412], | |
| data, | |
| form: null, | |
| error: null | |
| }); | |
| }); | |
| } | |
| </script> | |
Xet Storage Details
- Size:
- 33.8 kB
- Xet hash:
- 7529f3a7be17cd64cbcfc3b861ecf9ae06e7bcc080d805f23688d850fb71ef5a
·
Xet efficiently stores files, intelligently splitting them into unique chunks and accelerating uploads and downloads. More info.