<link rel="modulepreload" href="/docs/diffusers/v0.27.2/en/_app/immutable/chunks/Heading.16916d63.js"><!-- HEAD_svelte-u9bgzb_START --><meta name="hf:doc:metadata" content="{&quot;title&quot;:&quot;Speed up inference&quot;,&quot;local&quot;:&quot;speed-up-inference&quot;,&quot;sections&quot;:[{&quot;title&quot;:&quot;Use TensorFloat-32&quot;,&quot;local&quot;:&quot;use-tensorfloat-32&quot;,&quot;sections&quot;:[],&quot;depth&quot;:2},{&quot;title&quot;:&quot;Half-precision weights&quot;,&quot;local&quot;:&quot;half-precision-weights&quot;,&quot;sections&quot;:[],&quot;depth&quot;:2},{&quot;title&quot;:&quot;Distilled model&quot;,&quot;local&quot;:&quot;distilled-model&quot;,&quot;sections&quot;:[],&quot;depth&quot;:2}],&quot;depth&quot;:1}"><!-- HEAD_svelte-u9bgzb_END --> <p></p> <h1 class="relative group"><a id="speed-up-inference" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#speed-up-inference"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span>Speed up inference</span></h1> <p data-svelte-h="svelte-1pehmy1">There are several ways to optimize 🤗 Diffusers for inference speed. 
As a general rule of thumb, we recommend using either <a href="xformers">xFormers</a> or <code>torch.nn.functional.scaled_dot_product_attention</code> in PyTorch 2.0 for their memory-efficient attention.</p> <div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400"><p data-svelte-h="svelte-1dpd5a8">In many cases, optimizing for speed or memory leads to improved performance in the other, so you should try to optimize for both whenever you can. This guide focuses on inference speed, but you can learn more about preserving memory in the <a href="memory">Reduce memory usage</a> guide.</p></div> <p data-svelte-h="svelte-1iu9895">The results below are obtained from generating a single 512x512 image from the prompt <code>a photo of an astronaut riding a horse on mars</code> with 50 DDIM steps on a Nvidia Titan RTX, demonstrating the speed-up you can expect.</p> <table data-svelte-h="svelte-1osaemz"><thead><tr><th></th> <th>latency</th> <th>speed-up</th></tr></thead> <tbody><tr><td>original</td> <td>9.50s</td> <td>x1</td></tr> <tr><td>fp16</td> <td>3.61s</td> <td>x2.63</td></tr> <tr><td>channels last</td> <td>3.30s</td> <td>x2.88</td></tr> <tr><td>traced UNet</td> <td>3.21s</td> <td>x2.96</td></tr> <tr><td>memory efficient attention</td> <td>2.63s</td> <td>x3.61</td></tr></tbody></table> <h2 class="relative group"><a id="use-tensorfloat-32" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#use-tensorfloat-32"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 
67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span>Use TensorFloat-32</span></h2> <p data-svelte-h="svelte-sgc3mn">On Ampere and later CUDA devices, matrix multiplications and convolutions can use the <a href="https://blogs.nvidia.com/blog/2020/05/14/tensorfloat-32-precision-format/" rel="nofollow">TensorFloat-32 (TF32)</a> mode for faster, but slightly less accurate computations. By default, PyTorch enables TF32 mode for convolutions but not matrix multiplications. Unless your network requires full float32 precision, we recommend enabling TF32 for matrix multiplications. 
It can significantly speed up computations with typically negligible loss in numerical accuracy.</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><!-- HTML_TAG_START --><span class="hljs-keyword">import</span> torch
torch.backends.cuda.matmul.allow_tf32 = <span class="hljs-literal">True</span><!-- HTML_TAG_END --></pre></div> <p data-svelte-h="svelte-ahb0o9">You can learn more about TF32 in the <a href="https://huggingface.co/docs/transformers/en/perf_train_gpu_one#tf32" rel="nofollow">Mixed precision training</a> guide.</p> <h2 class="relative group"><a id="half-precision-weights" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#half-precision-weights"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span>Half-precision weights</span></h2> <p data-svelte-h="svelte-rv9wu9">To save GPU memory and get more speed, try loading and running the model weights directly in half-precision or float16:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" 
preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><!-- HTML_TAG_START --><span class="hljs-keyword">import</span> torch
<span class="hljs-keyword">from</span> diffusers <span class="hljs-keyword">import</span> DiffusionPipeline
pipe = DiffusionPipeline.from_pretrained(
<span class="hljs-string">&quot;runwayml/stable-diffusion-v1-5&quot;</span>,
torch_dtype=torch.float16,
use_safetensors=<span class="hljs-literal">True</span>,
)
pipe = pipe.to(<span class="hljs-string">&quot;cuda&quot;</span>)
prompt = <span class="hljs-string">&quot;a photo of an astronaut riding a horse on mars&quot;</span>
image = pipe(prompt).images[<span class="hljs-number">0</span>]<!-- HTML_TAG_END --></pre></div> <div class="course-tip course-tip-orange bg-gradient-to-br dark:bg-gradient-to-r before:border-orange-500 dark:before:border-orange-800 from-orange-50 dark:from-gray-900 to-white dark:to-gray-950 border border-orange-50 text-orange-700 dark:text-gray-400"><p data-svelte-h="svelte-19ow5yd">Don’t use <a href="https://pytorch.org/docs/stable/amp.html#torch.autocast" rel="nofollow"><code>torch.autocast</code></a> in any of the pipelines as it can lead to black images and is always slower than pure float16 precision.</p></div> <h2 class="relative group"><a id="distilled-model" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#distilled-model"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span>Distilled model</span></h2> <p data-svelte-h="svelte-yfaonv">You could also use a distilled Stable Diffusion model and autoencoder to speed up inference. During distillation, many of the UNet’s residual and attention blocks are shed to reduce the model size. 
The distilled model is faster and uses less memory while generating images of comparable quality to the full Stable Diffusion model.</p> <p data-svelte-h="svelte-1f6kjv7">Learn more in the <a href="../using-diffusers/distilled_sd">Distilled Stable Diffusion inference</a> guide!</p> <p></p>
