| <h1 class="relative group"><a id="create-reproducible-pipelines" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#create-reproducible-pipelines"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> | |
| <span>Create reproducible pipelines | |
| </span></h1> | |
| <p>Reproducibility is important for testing and replicating results, and it can even be used to <a href="reusing_seeds">improve image quality</a>. However, randomness in diffusion models is a desired property because it allows the pipeline to generate different images every time it is run. While you can’t expect to get exactly the same results across platforms, you can expect results to be reproducible across releases and platforms within a certain tolerance range, and even then the tolerance varies depending on the diffusion pipeline and checkpoint.</p> | |
| <p>This is why it’s important to understand how to control sources of randomness in diffusion models.</p> | |
| <div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400"><p>💡 We strongly recommend reading PyTorch’s <a href="https://pytorch.org/docs/stable/notes/randomness.html" rel="nofollow">statement about reproducibility</a>:</p> | |
| <blockquote><p>Completely reproducible results are not guaranteed across PyTorch releases, individual commits, or different platforms. Furthermore, results may not be reproducible between CPU and GPU executions, even when using identical seeds.</p></blockquote></div> | |
| <h2 class="relative group"><a id="inference" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#inference"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> | |
| <span>Inference | |
| </span></h2> | |
| <p>During inference, pipelines rely heavily on random sampling operations, which include creating the Gaussian noise tensors to denoise and adding noise at each scheduling step.</p> | |
| <p>Take a look at the tensor values in the <a href="/docs/diffusers/v0.15.0/en/api/pipelines/ddim#diffusers.DDIMPipeline">DDIMPipeline</a> after two inference steps:</p> | |
| <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> | |
| <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> | |
| Copied</div></button></div> | |
| <pre><!-- HTML_TAG_START --><span class="hljs-keyword">from</span> diffusers <span class="hljs-keyword">import</span> DDIMPipeline | |
| <span class="hljs-keyword">import</span> numpy <span class="hljs-keyword">as</span> np | |
| model_id = <span class="hljs-string">"google/ddpm-cifar10-32"</span> | |
| <span class="hljs-comment"># load model and scheduler</span> | |
| ddim = DDIMPipeline.from_pretrained(model_id) | |
| <span class="hljs-comment"># run pipeline for just two steps and return numpy tensor</span> | |
| image = ddim(num_inference_steps=<span class="hljs-number">2</span>, output_type=<span class="hljs-string">"np"</span>).images | |
| <span class="hljs-built_in">print</span>(np.<span class="hljs-built_in">abs</span>(image).<span class="hljs-built_in">sum</span>())<!-- HTML_TAG_END --></pre></div> | |
| <p>Running the code above prints one value, but if you run it again you get a different value. What is going on here? </p> | |
| <p>Every time the pipeline is run, <a href="https://pytorch.org/docs/stable/generated/torch.randn.html" rel="nofollow"><code>torch.randn</code></a> uses a different random seed to create Gaussian noise which is denoised stepwise. This leads to a different result each time it is run, which is great for diffusion pipelines since it generates a different random image each time.</p> | |
| <p>But if you need to reliably generate the same image, that’ll depend on whether you’re running the pipeline on a CPU or GPU.</p> | |
| <h3 class="relative group"><a id="cpu" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#cpu"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> | |
| <span>CPU | |
| </span></h3> | |
| <p>To generate reproducible results on a CPU, you’ll need to use a PyTorch <a href="https://pytorch.org/docs/stable/generated/torch.Generator.html" rel="nofollow"><code>Generator</code></a> and set a seed:</p> | |
| <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> | |
| <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> | |
| Copied</div></button></div> | |
| <pre><!-- HTML_TAG_START --><span class="hljs-keyword">import</span> torch | |
| <span class="hljs-keyword">from</span> diffusers <span class="hljs-keyword">import</span> DDIMPipeline | |
| <span class="hljs-keyword">import</span> numpy <span class="hljs-keyword">as</span> np | |
| model_id = <span class="hljs-string">"google/ddpm-cifar10-32"</span> | |
| <span class="hljs-comment"># load model and scheduler</span> | |
| ddim = DDIMPipeline.from_pretrained(model_id) | |
| <span class="hljs-comment"># create a generator for reproducibility</span> | |
| generator = torch.Generator(device=<span class="hljs-string">"cpu"</span>).manual_seed(<span class="hljs-number">0</span>) | |
| <span class="hljs-comment"># run pipeline for just two steps and return numpy tensor</span> | |
| image = ddim(num_inference_steps=<span class="hljs-number">2</span>, output_type=<span class="hljs-string">"np"</span>, generator=generator).images | |
| <span class="hljs-built_in">print</span>(np.<span class="hljs-built_in">abs</span>(image).<span class="hljs-built_in">sum</span>())<!-- HTML_TAG_END --></pre></div> | |
| <p>Now when you run the code above, it always prints a value of <code>1491.1711</code> because the <code>Generator</code> object with the preset seed is passed to all of the pipeline’s random functions.</p> | |
| <p>If you run this code example on your specific hardware and PyTorch version, you should get a similar, if not the same, result.</p> | |
| <div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400"><p>💡 It might seem unintuitive at first to pass <code>Generator</code> objects to the pipeline instead of just an integer seed, but this is the recommended design for working with probabilistic models in PyTorch, because <code>Generator</code>s are <em>random states</em> that can be passed to multiple pipelines in a sequence.</p></div> | |
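The point about random state can be seen without a pipeline at all. A minimal torch-only sketch: each draw advances the `Generator`, and re-seeding it restores exactly the same state, so a draw can be reproduced on demand:

```python
import torch

# A Generator is mutable random state: each draw advances it.
g = torch.Generator(device="cpu").manual_seed(0)
first = torch.randn(4, generator=g)
second = torch.randn(4, generator=g)  # different values: the state moved on

# Re-seeding restores the exact same state, so the first draw is reproduced.
g.manual_seed(0)
first_again = torch.randn(4, generator=g)

print(torch.equal(first, first_again))  # True
print(torch.equal(first, second))       # False
```

This is why passing the same `Generator` through a sequence of pipelines produces a reproducible sequence of images, while re-seeding it between calls reproduces a single image.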
| <h3 class="relative group"><a id="gpu" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#gpu"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> | |
| <span>GPU | |
| </span></h3> | |
| <p>Writing a reproducible pipeline on a GPU is a bit trickier, and full reproducibility across different hardware is not guaranteed because matrix multiplication, which diffusion pipelines require a lot of, is less deterministic on a GPU than on a CPU. For example, if you run the same code example above on a GPU:</p> | |
| <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> | |
| <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> | |
| Copied</div></button></div> | |
| <pre><!-- HTML_TAG_START --><span class="hljs-keyword">import</span> torch | |
| <span class="hljs-keyword">from</span> diffusers <span class="hljs-keyword">import</span> DDIMPipeline | |
| <span class="hljs-keyword">import</span> numpy <span class="hljs-keyword">as</span> np | |
| model_id = <span class="hljs-string">"google/ddpm-cifar10-32"</span> | |
| <span class="hljs-comment"># load model and scheduler</span> | |
| ddim = DDIMPipeline.from_pretrained(model_id) | |
| ddim.to(<span class="hljs-string">"cuda"</span>) | |
| <span class="hljs-comment"># create a generator for reproducibility</span> | |
| generator = torch.Generator(device=<span class="hljs-string">"cuda"</span>).manual_seed(<span class="hljs-number">0</span>) | |
| <span class="hljs-comment"># run pipeline for just two steps and return numpy tensor</span> | |
| image = ddim(num_inference_steps=<span class="hljs-number">2</span>, output_type=<span class="hljs-string">"np"</span>, generator=generator).images | |
| <span class="hljs-built_in">print</span>(np.<span class="hljs-built_in">abs</span>(image).<span class="hljs-built_in">sum</span>())<!-- HTML_TAG_END --></pre></div> | |
| <p>The result is not the same even though you’re using an identical seed because the GPU uses a different random number generator than the CPU.</p> | |
| <p>To circumvent this problem, 🧨 Diffusers has a <a href="#diffusers.utils.randn_tensor"><code>randn_tensor</code></a> function for creating random noise on the CPU, and then moving the tensor to a GPU if necessary. The <code>randn_tensor</code> function is used everywhere inside the pipeline, allowing the user to <strong>always</strong> pass a CPU <code>Generator</code> even if the pipeline is run on a GPU. </p> | |
| <p>You’ll see the results are much closer now!</p> | |
| <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> | |
| <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> | |
| Copied</div></button></div> | |
| <pre><!-- HTML_TAG_START --><span class="hljs-keyword">import</span> torch | |
| <span class="hljs-keyword">from</span> diffusers <span class="hljs-keyword">import</span> DDIMPipeline | |
| <span class="hljs-keyword">import</span> numpy <span class="hljs-keyword">as</span> np | |
| model_id = <span class="hljs-string">"google/ddpm-cifar10-32"</span> | |
| <span class="hljs-comment"># load model and scheduler</span> | |
| ddim = DDIMPipeline.from_pretrained(model_id) | |
| ddim.to(<span class="hljs-string">"cuda"</span>) | |
| <span class="hljs-comment"># create a generator for reproducibility; notice you don't place it on the GPU!</span> | |
| generator = torch.manual_seed(<span class="hljs-number">0</span>) | |
| <span class="hljs-comment"># run pipeline for just two steps and return numpy tensor</span> | |
| image = ddim(num_inference_steps=<span class="hljs-number">2</span>, output_type=<span class="hljs-string">"np"</span>, generator=generator).images | |
| <span class="hljs-built_in">print</span>(np.<span class="hljs-built_in">abs</span>(image).<span class="hljs-built_in">sum</span>())<!-- HTML_TAG_END --></pre></div> | |
| <div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400"><p>💡 If reproducibility is important, we recommend always passing a CPU generator. The performance loss is often negligible, and you’ll generate much more similar values than if the pipeline had been run on a GPU.</p></div> | |
| <p>Finally, more complex pipelines such as <a href="/docs/diffusers/v0.15.0/en/api/pipelines/unclip#diffusers.UnCLIPPipeline">UnCLIPPipeline</a> are often extremely susceptible to precision error propagation. Don’t expect similar results across different GPU hardware or PyTorch versions. In this case, you’ll need to run exactly the same hardware and PyTorch version for full reproducibility.</p> | |
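Beyond pinning hardware and versions, PyTorch itself offers a general switch to prefer deterministic kernels. This is a PyTorch-level setting, not something diffusers-specific, and it can slow execution down or raise errors for ops that have no deterministic implementation, so treat the following as an optional sketch:

```python
import os
import torch

# cuBLAS needs a workspace config set before any CUDA context is created
# for deterministic matrix multiplication on CUDA >= 10.2.
os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":4096:8"

torch.backends.cudnn.benchmark = False    # disable autotuned kernel selection
torch.use_deterministic_algorithms(True)  # error on known-nondeterministic ops

print(torch.are_deterministic_algorithms_enabled())  # True
```

Even with these flags, bitwise-identical results across different GPU models are not guaranteed; they only remove run-to-run variation on the same setup.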
| <h2 class="relative group"><a id="diffusers.utils.randn_tensor" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#diffusers.utils.randn_tensor"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> | |
| <span>randn_tensor | |
| </span></h2> | |
| <div class="docstring border-l-2 border-t-2 pl-4 pt-3.5 border-gray-100 rounded-tl-xl mb-6 mt-8"> | |
| <div><span class="group flex space-x-1.5 items-center text-gray-800 bg-gradient-to-r rounded-tr-lg -mt-4 -ml-4 pt-3 px-2.5" id="diffusers.utils.randn_tensor"><!-- HTML_TAG_START --><h4 class="!m-0"><span class="flex-1 rounded-xl py-0.5 break-all bg-gradient-to-r from-blue-50/60 to-white dark:from-gray-900 dark:to-gray-950 text-blue-700 dark:text-blue-300 font-medium px-2"><svg width="1em" height="1em" viewBox="0 0 32 33" class="mr-1 inline-block -mt-0.5" xmlns="http://www.w3.org/2000/svg"><path d="M5.80566 18.3545C4.90766 17.4565 4.90766 16.0005 5.80566 15.1025L14.3768 6.53142C15.2748 5.63342 16.7307 5.63342 17.6287 6.53142L26.1999 15.1025C27.0979 16.0005 27.0979 17.4565 26.1999 18.3545L17.6287 26.9256C16.7307 27.8236 15.2748 27.8236 14.3768 26.9256L5.80566 18.3545Z" fill="currentColor" fill-opacity="0.25"/><path fill-rule="evenodd" clip-rule="evenodd" d="M16.4801 13.9619C16.4801 12.9761 16.7467 12.5436 16.9443 12.3296C17.1764 12.078 17.5731 11.8517 18.2275 11.707C18.8821 11.5623 19.638 11.5342 20.4038 11.5582C20.7804 11.57 21.1341 11.5932 21.4719 11.6156L21.5263 11.6193C21.8195 11.6389 22.1626 11.6618 22.4429 11.6618V7.40825C22.3209 7.40825 22.1219 7.39596 21.7544 7.37149C21.4202 7.34925 20.9976 7.32115 20.5371 7.30672C19.6286 7.27824 18.4672 7.29779 17.3093 7.55377C16.1512 7.8098 14.8404 8.33724 13.8181 9.4452C12.7612 10.5907 12.2266 12.1236 12.2266 13.9619V15.0127H10.6836V19.2662H12.2266V26.6332H16.4801V19.2662H20.3394V15.0127H16.4801V13.9619Z" fill="currentColor"/></svg>diffusers.utils.randn_tensor</span></h4><!-- HTML_TAG_END --> | |
| <a id="diffusers.utils.randn_tensor" class="header-link invisible with-hover:group-hover:visible pr-2" href="#diffusers.utils.randn_tensor"><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></a> | |
| <a class="!ml-auto !text-gray-400 !no-underline text-sm flex items-center" href="https://github.com/huggingface/diffusers/blob/v0.15.0/src/diffusers/utils/torch_utils.py#L29" target="_blank"><span><</span> | |
| <span class="hidden md:block mx-0.5 hover:!underline">source</span> | |
| <span>></span></a></span> | |
| <p class="font-mono text-xs md:text-sm !leading-relaxed !my-6"><span>(</span> | |
| <span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">shape<span class="opacity-60">: typing.Union[typing.Tuple, typing.List]</span></span> | |
| </span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">generator<span class="opacity-60">: typing.Union[typing.List[ForwardRef('torch.Generator')], ForwardRef('torch.Generator'), NoneType] = None</span></span> | |
| </span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">device<span class="opacity-60">: typing.Optional[ForwardRef('torch.device')] = None</span></span> | |
| </span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">dtype<span class="opacity-60">: typing.Optional[ForwardRef('torch.dtype')] = None</span></span> | |
| </span><span class="comma cursor-default"><span class="rounded hover:bg-black hover:text-white dark:hover:bg-white dark:hover:text-black">layout<span class="opacity-60">: typing.Optional[ForwardRef('torch.layout')] = None</span></span> | |
| </span> | |
| <span>)</span> | |
| </p> | |
| <div class="!mb-10 relative docstring-details "> | |
| </div></div> | |
| <p>This is a helper function that creates random tensors on the desired <code>device</code> with the desired <code>dtype</code>. When passing a list of generators, you can seed each batch element individually. If CPU generators are passed, the tensor is always created on the CPU first.</p></div> | |
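The idea behind <code>randn_tensor</code> can be sketched in plain PyTorch: sample the noise on the generator’s device (the CPU, if a CPU generator is passed), then move it to the target device. The helper below is an illustration of that behavior, not the actual diffusers implementation:

```python
import torch

def randn_on_generator_device(shape, generator=None, device=None, dtype=None):
    """Illustrative sketch: draw noise where the generator lives, then move it."""
    sample_device = generator.device if generator is not None else torch.device("cpu")
    noise = torch.randn(shape, generator=generator, device=sample_device, dtype=dtype)
    return noise.to(device) if device is not None else noise

# A seeded CPU generator yields the same noise no matter the target device.
g = torch.Generator(device="cpu").manual_seed(0)
noise = randn_on_generator_device((1, 3, 32, 32), generator=g, dtype=torch.float32)
print(noise.shape)  # torch.Size([1, 3, 32, 32])
```

Because the sampling always happens on the generator’s device, moving the result afterwards does not change its values, which is what makes a CPU `Generator` portable across CPU and GPU pipelines.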