<h1 class="relative group"><a id="using-kerascv-stable-diffusion-checkpoints-in-diffusers" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#using-kerascv-stable-diffusion-checkpoints-in-diffusers"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a>
<span>Using KerasCV Stable Diffusion Checkpoints in Diffusers
</span></h1>
<div class="course-tip course-tip-orange bg-gradient-to-br dark:bg-gradient-to-r before:border-orange-500 dark:before:border-orange-800 from-orange-50 dark:from-gray-900 to-white dark:to-gray-950 border border-orange-50 text-orange-700 dark:text-gray-400"><p>This is an experimental feature.</p></div>
<p><a href="https://github.com/keras-team/keras-cv/" rel="nofollow">KerasCV</a> provides APIs for implementing various computer vision workflows. It
also provides the Stable Diffusion <a href="https://github.com/keras-team/keras-cv/blob/master/keras_cv/models/stable_diffusion" rel="nofollow">v1 and v2</a>
models. Many practitioners find it easy to fine-tune the Stable Diffusion models shipped by KerasCV. However, as of this writing, KerasCV offers limited support for experimenting with Stable Diffusion models for inference and deployment. Diffusers, on the other hand,
provides tooling dedicated to this purpose (and more), such as different <a href="https://huggingface.co/docs/diffusers/using-diffusers/schedulers" rel="nofollow">noise schedulers</a>, <a href="https://huggingface.co/docs/diffusers/optimization/xformers" rel="nofollow">flash attention</a>, and <a href="https://huggingface.co/docs/diffusers/optimization/fp16" rel="nofollow">other
optimization techniques</a>.</p>
<p>What if you could fine-tune a Stable Diffusion model in KerasCV and then export it so that it becomes compatible with Diffusers, combining the
best of both worlds? We have created a <a href="https://huggingface.co/spaces/sayakpaul/convert-kerascv-sd-diffusers" rel="nofollow">tool</a> that
lets you do just that! It takes KerasCV Stable Diffusion checkpoints and exports them to Diffusers-compatible checkpoints.
More specifically, it first converts the checkpoints to PyTorch, then wraps them into a
<a href="https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion/overview" rel="nofollow"><code>StableDiffusionPipeline</code></a> ready
for inference, and finally pushes the converted checkpoints to a repository on the Hugging Face Hub.</p>
<p>We welcome you to try out the tool <a href="https://huggingface.co/spaces/sayakpaul/convert-kerascv-sd-diffusers" rel="nofollow">here</a>
and share feedback via <a href="https://huggingface.co/spaces/sayakpaul/convert-kerascv-sd-diffusers/discussions/new" rel="nofollow">discussions</a>.</p>
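<p>Under the hood, a conversion of this kind largely boils down to renaming weights from the Keras naming scheme to the diffusers <code>state_dict</code> keys (the real converter also handles layout differences such as transposed kernels). Below is a purely illustrative, stdlib-only sketch of the renaming idea; the layer names in the mapping are hypothetical and the actual tool covers the full UNet, VAE, and text-encoder graphs:</p>

```python
# Illustrative sketch of Keras -> diffusers weight-name remapping.
# The names below are made up for demonstration; the real converter
# maps hundreds of tensors and also transposes Dense kernels.

KERAS_TO_DIFFUSERS = {
    "text_encoder/embedding/token_embedding": "text_model.embeddings.token_embedding.weight",
    "diffusion_model/input_blocks/0/conv/kernel": "conv_in.weight",
}

def remap_state_dict(keras_weights: dict) -> dict:
    """Rename Keras weight entries to their diffusers counterparts."""
    converted = {}
    for keras_name, tensor in keras_weights.items():
        diffusers_name = KERAS_TO_DIFFUSERS.get(keras_name)
        if diffusers_name is None:
            raise KeyError(f"No mapping for {keras_name!r}")
        converted[diffusers_name] = tensor
    return converted

keras_weights = {
    "text_encoder/embedding/token_embedding": [[0.1, 0.2]],
    "diffusion_model/input_blocks/0/conv/kernel": [[0.3]],
}
print(sorted(remap_state_dict(keras_weights)))
```

Once every tensor has a diffusers-style name, the converted weights can be loaded into the corresponding PyTorch modules and assembled into a pipeline.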
<h2 class="relative group"><a id="getting-started" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#getting-started"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a>
<span>Getting Started
</span></h2>
<p>First, you need to obtain the fine-tuned KerasCV Stable Diffusion checkpoints. We provide an
overview of the different ways Stable Diffusion models can be fine-tuned <a href="https://huggingface.co/docs/diffusers/training/overview" rel="nofollow">using <code>diffusers</code></a>. For the Keras implementation of some of these methods, you can check out these resources:</p>
<ul><li><a href="https://keras.io/examples/generative/fine_tune_via_textual_inversion/" rel="nofollow">Teach StableDiffusion new concepts via Textual Inversion</a></li>
<li><a href="https://keras.io/examples/generative/finetune_stable_diffusion/" rel="nofollow">Fine-tuning Stable Diffusion</a></li>
<li><a href="https://keras.io/examples/generative/dreambooth/" rel="nofollow">DreamBooth</a></li>
<li><a href="https://github.com/miguelCalado/prompt-to-prompt-tensorflow" rel="nofollow">Prompt-to-Prompt editing</a></li></ul>
<p>Stable Diffusion comprises the following models:</p>
<ul><li>Text encoder</li>
<li>UNet</li>
<li>VAE</li></ul>
<p>Depending on the fine-tuning task, we may fine-tune one or more of these components (the VAE is almost always left untouched). Here are some common combinations:</p>
<ul><li>DreamBooth: UNet and text encoder</li>
<li>Classical text-to-image fine-tuning: UNet</li>
<li>Textual Inversion: just the newly initialized embeddings in the text encoder</li></ul>
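<p>The combinations above determine which components' weights actually need to be exported during conversion; the remaining components can reuse the base model's weights. A small sketch (method names are illustrative labels, not an API of the tool):</p>

```python
# Which Stable Diffusion components typically carry updated weights,
# per fine-tuning method (following the combinations listed above).
TRAINED_COMPONENTS = {
    "dreambooth": {"unet", "text_encoder"},
    "text_to_image": {"unet"},
    "textual_inversion": {"text_encoder"},  # only the new token embeddings change
}

def components_to_convert(method: str) -> set:
    """Return the components whose weights must be exported.
    The VAE is almost always left untouched, so it can reuse
    the base checkpoint's weights."""
    return TRAINED_COMPONENTS[method]

print(components_to_convert("dreambooth"))
```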
<h3 class="relative group"><a id="performing-the-conversion" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#performing-the-conversion"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a>
<span>Performing the Conversion
</span></h3>
<p>Let’s use <a href="https://huggingface.co/sayakpaul/textual-inversion-kerasio/resolve/main/textual_inversion_kerasio.h5" rel="nofollow">this checkpoint</a>, which was generated
by conducting Textual Inversion with the placeholder token <code><my-funny-cat-token></code>.</p>
<p>In the tool, we supply the following:</p>
<ul><li>Path(s) from which to download the fine-tuned KerasCV checkpoint(s)</li>
<li>A Hugging Face (HF) token</li>
<li>The placeholder token (only applicable to Textual Inversion)</li></ul>
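<p>A few sanity checks on these inputs can save a failed conversion run. The sketch below is hypothetical pre-flight validation, not part of the tool; the conventions it encodes (KerasCV checkpoints as <code>.h5</code> files, HF user access tokens prefixed with <code>hf_</code>, placeholder tokens wrapped in angle brackets) are assumptions based on common usage:</p>

```python
# Hypothetical pre-flight checks mirroring the tool's three inputs.
def validate_inputs(weight_paths, hf_token, placeholder_token=None):
    """Return a list of problems found (empty list means all good)."""
    errors = []
    if not weight_paths:
        errors.append("at least one checkpoint path is required")
    elif not any(p.endswith(".h5") for p in weight_paths):
        errors.append("KerasCV checkpoints are usually .h5 files")
    if not hf_token.startswith("hf_"):
        errors.append("Hugging Face user access tokens usually start with 'hf_'")
    if placeholder_token and not (
        placeholder_token.startswith("<") and placeholder_token.endswith(">")
    ):
        errors.append("placeholder tokens are conventionally wrapped in <>")
    return errors

print(validate_inputs(["textual_inversion_kerasio.h5"], "hf_xxx", "<my-funny-cat-token>"))
```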
<div align="center"><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/space_snap.png"></div>
<p>As soon as you hit “Submit”, the conversion process will begin. Once it’s complete, you should see the following:</p>
<div align="center"><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/model_push_success.png"></div>
<p>If you click the <a href="https://huggingface.co/sayakpaul/textual-inversion-cat-kerascv_sd_diffusers_pipeline/tree/main" rel="nofollow">link</a>, you
should see something like this:</p>
<div align="center"><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/model_repo_contents.png"></div>
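<p>The pushed repository follows the standard <code>diffusers</code> pipeline layout: a <code>model_index.json</code> at the top level plus one subfolder per component. A small sketch that checks a file listing for the expected entries (optional components such as <code>safety_checker</code> and <code>feature_extractor</code> are deliberately left out of the required set here):</p>

```python
# Expected top-level entries of a diffusers Stable Diffusion pipeline repo.
EXPECTED = {
    "model_index.json",
    "scheduler",
    "text_encoder",
    "tokenizer",
    "unet",
    "vae",
}

def missing_entries(repo_files):
    """Return expected entries absent from a flat repo file listing
    (paths like 'unet/config.json', e.g. as returned by
    huggingface_hub.list_repo_files)."""
    top_level = {f.split("/")[0] for f in repo_files}
    return sorted(EXPECTED - top_level)

listing = [
    "model_index.json",
    "scheduler/scheduler_config.json",
    "text_encoder/config.json",
    "tokenizer/merges.txt",
    "unet/config.json",
    "vae/config.json",
]
print(missing_entries(listing))
```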
<p>If you head over to the <a href="https://huggingface.co/sayakpaul/textual-inversion-cat-kerascv_sd_diffusers_pipeline" rel="nofollow">model card of the repository</a>, the
following should appear:</p>
<div align="center"><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/model_card.png"></div>
<div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400"><p>Note that we’re not specifying the UNet weights here since the UNet is not fine-tuned during Textual Inversion.</p></div>
<p>And that’s it! You now have your fine-tuned KerasCV Stable Diffusion model in Diffusers 🧨.</p>
<h2 class="relative group"><a id="using-the-converted-model-in-diffusers" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#using-the-converted-model-in-diffusers"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a>
<span>Using the Converted Model in Diffusers
</span></h2>
<p>Right beside the model card of the <a href="https://huggingface.co/sayakpaul/textual-inversion-cat-kerascv_sd_diffusers_pipeline" rel="nofollow">repository</a>,
you’ll find an inference widget to try out the model directly from the UI 🤗</p>
<div align="center"><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/inference_widget_output.png"></div>
<p>In the top-right corner, we provide a “Use in Diffusers” button. If you click it, you should see the following code snippet:</p>
<div class="code-block relative"><pre><!-- HTML_TAG_START --><span class="hljs-keyword">from</span> diffusers <span class="hljs-keyword">import</span> DiffusionPipeline

pipeline = DiffusionPipeline.from_pretrained(<span class="hljs-string">"sayakpaul/textual-inversion-cat-kerascv_sd_diffusers_pipeline"</span>)<!-- HTML_TAG_END --></pre></div>
<p>The model is in standard <code>diffusers</code> format. Let’s perform inference!</p>
<div class="code-block relative"><pre><!-- HTML_TAG_START --><span class="hljs-keyword">from</span> diffusers <span class="hljs-keyword">import</span> DiffusionPipeline

pipeline = DiffusionPipeline.from_pretrained(<span class="hljs-string">"sayakpaul/textual-inversion-cat-kerascv_sd_diffusers_pipeline"</span>)
pipeline.to(<span class="hljs-string">"cuda"</span>)

placeholder_token = <span class="hljs-string">"<my-funny-cat-token>"</span>
prompt = <span class="hljs-string">f"two <span class="hljs-subst">{placeholder_token}</span> getting married, photorealistic, high quality"</span>
image = pipeline(prompt, num_inference_steps=<span class="hljs-number">50</span>).images[<span class="hljs-number">0</span>]<!-- HTML_TAG_END --></pre></div>
<p>And we get:</p>
<div align="center"><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/diffusers_output_one.png"></div>
<p><em><strong>Note that if you specified a <code>placeholder_token</code> while performing the conversion, the tool will log it accordingly. Refer
to the model card of <a href="https://huggingface.co/sayakpaul/textual-inversion-cat-kerascv_sd_diffusers_pipeline" rel="nofollow">this repository</a>
as an example.</strong></em></p>
<p>We welcome you to use the tool for various Stable Diffusion fine-tuning scenarios and let us know your feedback! Here are some examples
of Diffusers checkpoints that were obtained using the tool:</p>
<ul><li><a href="https://huggingface.co/sayakpaul/text-unet-dogs-kerascv_sd_diffusers_pipeline" rel="nofollow">sayakpaul/text-unet-dogs-kerascv_sd_diffusers_pipeline</a> (DreamBooth with both the text encoder and UNet fine-tuned)</li>
<li><a href="https://huggingface.co/sayakpaul/unet-dogs-kerascv_sd_diffusers_pipeline" rel="nofollow">sayakpaul/unet-dogs-kerascv_sd_diffusers_pipeline</a> (DreamBooth with only the UNet fine-tuned)</li></ul>
<h2 class="relative group"><a id="incorporating-diffusers-goodies" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#incorporating-diffusers-goodies"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a>
<span>Incorporating Diffusers Goodies 🎁
</span></h2>
<p>Diffusers provides various options for experimenting with different inference setups. One particularly
useful option is using a different noise scheduler during inference than the one used during fine-tuning.
Let’s try out the <a href="https://huggingface.co/docs/diffusers/main/en/api/schedulers/multistep_dpm_solver" rel="nofollow"><code>DPMSolverMultistepScheduler</code></a>,
which is different from the one (<a href="https://huggingface.co/docs/diffusers/main/en/api/schedulers/ddpm" rel="nofollow"><code>DDPMScheduler</code></a>) used during
fine-tuning.</p>
<p>You can read more about this process in <a href="https://huggingface.co/docs/diffusers/using-diffusers/schedulers" rel="nofollow">this section</a>.</p>
<div class="code-block relative"><pre><!-- HTML_TAG_START --><span class="hljs-keyword">from</span> diffusers <span class="hljs-keyword">import</span> DiffusionPipeline, DPMSolverMultistepScheduler

pipeline = DiffusionPipeline.from_pretrained(<span class="hljs-string">"sayakpaul/textual-inversion-cat-kerascv_sd_diffusers_pipeline"</span>)
pipeline.scheduler = DPMSolverMultistepScheduler.from_config(pipeline.scheduler.config)
pipeline.to(<span class="hljs-string">"cuda"</span>)

placeholder_token = <span class="hljs-string">"<my-funny-cat-token>"</span>
prompt = <span class="hljs-string">f"two <span class="hljs-subst">{placeholder_token}</span> getting married, photorealistic, high quality"</span>
image = pipeline(prompt, num_inference_steps=<span class="hljs-number">50</span>).images[<span class="hljs-number">0</span>]<!-- HTML_TAG_END --></pre></div>
<div align="center"><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/diffusers_output_two.png"></div>
<p>You can also continue fine-tuning from these Diffusers checkpoints by leveraging the relevant training tools from Diffusers. Refer <a href="https://huggingface.co/docs/diffusers/training/overview" rel="nofollow">here</a> for
more details. For inference-specific optimizations, refer <a href="https://huggingface.co/docs/diffusers/main/en/optimization/fp16" rel="nofollow">here</a>.</p>
<h2 class="relative group"><a id="known-limitations" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#known-limitations"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a>
<span>Known Limitations
</span></h2>
<ul><li>Only Stable Diffusion v1 checkpoints are supported for conversion in this tool.</li></ul>
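<p>If you are unsure whether a checkpoint is v1, one practical signal is the UNet's cross-attention width: Stable Diffusion v1 conditions on 768-dimensional CLIP text embeddings, while v2 uses 1024-dimensional OpenCLIP embeddings. A small sketch of such a heuristic check against a UNet config dict (e.g. the contents of <code>unet/config.json</code>):</p>

```python
def looks_like_v1(unet_config: dict) -> bool:
    """Heuristic v1-vs-v2 check: SD v1 UNets cross-attend to
    768-dim text embeddings, SD v2 UNets to 1024-dim ones."""
    return unet_config.get("cross_attention_dim") == 768

# Example configs (only the relevant key shown).
print(looks_like_v1({"cross_attention_dim": 768}))   # v1-style UNet
print(looks_like_v1({"cross_attention_dim": 1024}))  # v2-style UNet
```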