| <link rel="modulepreload" href="/docs/diffusers/v0.26.2/en/_app/immutable/chunks/Heading.16916d63.js"><!-- HEAD_svelte-u9bgzb_START --><meta name="hf:doc:metadata" content="{"title":"Load LoRAs for inference","local":"load-loras-for-inference","sections":[{"title":"Combine multiple adapters","local":"combine-multiple-adapters","sections":[],"depth":2},{"title":"Monitoring active adapters","local":"monitoring-active-adapters","sections":[],"depth":2},{"title":"Fusing adapters into the model","local":"fusing-adapters-into-the-model","sections":[],"depth":2},{"title":"Saving a pipeline after fusing the adapters","local":"saving-a-pipeline-after-fusing-the-adapters","sections":[],"depth":2}],"depth":1}"><!-- HEAD_svelte-u9bgzb_END --> <p></p> <div class="flex space-x-1 absolute z-10 right-0 top-0"> <div class="relative colab-dropdown "> <button class=" " type="button"> <img alt="Open In Colab" class="!m-0" src="https://colab.research.google.com/assets/colab-badge.svg"> </button> </div> <div class="relative colab-dropdown "> <button class=" " type="button"> <img alt="Open In Studio Lab" class="!m-0" src="https://studiolab.sagemaker.aws/studiolab.svg"> </button> </div></div> <h1 class="relative group"><a id="load-loras-for-inference" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#load-loras-for-inference"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 
0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span>Load LoRAs for inference</span></h1> <p data-svelte-h="svelte-4ogdj">There are many adapters (with LoRAs being the most common type) trained in different styles to achieve different effects. You can even combine multiple adapters to create new and unique images. With the 🤗 <a href="https://huggingface.co/docs/peft/index" rel="nofollow">PEFT</a> integration in 🤗 Diffusers, it is really easy to load and manage adapters for inference. In this guide, you’ll learn how to use different adapters with <a href="../api/pipelines/stable_diffusion/stable_diffusion_xl">Stable Diffusion XL (SDXL)</a> for inference.</p> <p data-svelte-h="svelte-11mcdto">Throughout this guide, you’ll use LoRA as the main adapter technique, so we’ll use the terms LoRA and adapter interchangeably. 
You should have some familiarity with LoRA, and if you don’t, we welcome you to check out the <a href="https://huggingface.co/docs/peft/conceptual_guides/lora" rel="nofollow">LoRA guide</a>.</p> <p data-svelte-h="svelte-k7bd5g">Let’s first install all the required libraries.</p> <div class="code-block relative"> <pre class=""><!-- HTML_TAG_START -->!pip install -q transformers accelerate
!pip install peft
!pip install diffusers<!-- HTML_TAG_END --></pre></div> <p data-svelte-h="svelte-wbs9ps">Now, let’s load a pipeline with an SDXL checkpoint:</p> <div class="code-block relative"> <pre class=""><!-- HTML_TAG_START --><span class="hljs-keyword">from</span> diffusers <span class="hljs-keyword">import</span> DiffusionPipeline
<span class="hljs-keyword">import</span> torch
pipe_id = <span class="hljs-string">"stabilityai/stable-diffusion-xl-base-1.0"</span>
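# (Hedged note, not from the original guide: float16 inference effectively
# requires a GPU. On a machine without CUDA, you can instead load with
# torch_dtype=torch.float32 and move the pipeline to "cpu" below, at the
# cost of speed and memory.)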
pipe = DiffusionPipeline.from_pretrained(pipe_id, torch_dtype=torch.float16).to(<span class="hljs-string">"cuda"</span>)<!-- HTML_TAG_END --></pre></div> <p data-svelte-h="svelte-1087l2u">Next, load a LoRA checkpoint with the <a href="/docs/diffusers/v0.26.2/en/api/loaders/lora#diffusers.loaders.StableDiffusionXLLoraLoaderMixin.load_lora_weights">load_lora_weights()</a> method.</p> <p data-svelte-h="svelte-cuo3fl">With the 🤗 PEFT integration, you can assign a specific <code>adapter_name</code> to the checkpoint, which lets you easily switch between different LoRA checkpoints. Let’s call this adapter <code>"toy"</code>.</p> <div class="code-block relative"> <pre class=""><!-- HTML_TAG_START -->pipe.load_lora_weights(<span class="hljs-string">"CiroN2022/toy-face"</span>, weight_name=<span class="hljs-string">"toy_face_sdxl.safetensors"</span>,
adapter_name=<span class="hljs-string">"toy"</span>)<!-- HTML_TAG_END --></pre></div> <p data-svelte-h="svelte-1tr5was">And then perform inference:</p> <div class="code-block relative"> <pre class=""><!-- HTML_TAG_START -->prompt = <span class="hljs-string">"toy_face of a hacker with a hoodie"</span>
lora_scale = <span class="hljs-number">0.9</span>
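# (Hedged note, not from the original guide: the `scale` entry in
# cross_attention_kwargs rescales the LoRA update, roughly
# W_eff = W + scale * delta_W, so 0.0 is close to the base model and
# 1.0 applies the adapter at full strength.)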
image = pipe(
    prompt, num_inference_steps=<span class="hljs-number">30</span>, cross_attention_kwargs={<span class="hljs-string">"scale"</span>: lora_scale}, generator=torch.manual_seed(<span class="hljs-number">0</span>)
).images[<span class="hljs-number">0</span>]
image<!-- HTML_TAG_END --></pre></div> <p data-svelte-h="svelte-pa1jxn"><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/peft_integration/diffusers_peft_lora_inference_8_1.png" alt="toy-face"></p> <p data-svelte-h="svelte-1uq2e4p">With the <code>adapter_name</code> parameter, it is really easy to use another adapter for inference! Load the <a href="https://huggingface.co/nerijs/pixel-art-xl" rel="nofollow">nerijs/pixel-art-xl</a> adapter that has been fine-tuned to generate pixel art images, and let’s call it <code>"pixel"</code>.</p> <p data-svelte-h="svelte-17g2k40">The pipeline automatically sets the first loaded adapter (<code>"toy"</code>) as the active adapter. But you can activate the <code>"pixel"</code> adapter with the <a href="/docs/diffusers/v0.26.2/en/api/loaders/unet#diffusers.loaders.UNet2DConditionLoadersMixin.set_adapters">set_adapters()</a> method as shown below:</p> <div class="code-block relative"> <pre class=""><!-- HTML_TAG_START -->pipe.load_lora_weights(<span class="hljs-string">"nerijs/pixel-art-xl"</span>, weight_name=<span class="hljs-string">"pixel-art-xl.safetensors"</span>, adapter_name=<span class="hljs-string">"pixel"</span>)
pipe.set_adapters(<span class="hljs-string">"pixel"</span>)<!-- HTML_TAG_END --></pre></div> <p data-svelte-h="svelte-us3nir">Let’s now generate an image with the second adapter and check the result:</p> <div class="code-block relative"> <pre class=""><!-- HTML_TAG_START -->prompt = <span class="hljs-string">"a hacker with a hoodie, pixel art"</span>
image = pipe(
    prompt, num_inference_steps=<span class="hljs-number">30</span>, cross_attention_kwargs={<span class="hljs-string">"scale"</span>: lora_scale}, generator=torch.manual_seed(<span class="hljs-number">0</span>)
).images[<span class="hljs-number">0</span>]
| image<!-- HTML_TAG_END --></pre></div> <p data-svelte-h="svelte-1ixqz7s"><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/peft_integration/diffusers_peft_lora_inference_12_1.png" alt="pixel-art"></p> <h2 class="relative group"><a id="combine-multiple-adapters" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#combine-multiple-adapters"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span>Combine multiple adapters</span></h2> <p data-svelte-h="svelte-y4eq9d">You can also perform multi-adapter inference where you combine different adapter checkpoints for inference.</p> <p data-svelte-h="svelte-1oe1ec6">Once again, use the <a href="/docs/diffusers/v0.26.2/en/api/loaders/unet#diffusers.loaders.UNet2DConditionLoadersMixin.set_adapters">set_adapters()</a> method to activate two LoRA checkpoints and specify the weight for how the checkpoints should be combined.</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out 
opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><!-- HTML_TAG_START -->pipe.set_adapters([<span class="hljs-string">"pixel"</span>, <span class="hljs-string">"toy"</span>], adapter_weights=[<span class="hljs-number">0.5</span>, <span class="hljs-number">1.0</span>])<!-- HTML_TAG_END --></pre></div> <p data-svelte-h="svelte-1l5oueb">Now that we have set these two adapters, let’s generate an image from the combined adapters!</p> <div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400"><p data-svelte-h="svelte-1efkoll">LoRA checkpoints in the diffusion community are almost always obtained with <a href="https://huggingface.co/docs/diffusers/main/en/training/dreambooth" rel="nofollow">DreamBooth</a>. DreamBooth training often relies on “trigger” words in the input text prompts in order for the generation results to look as expected. 
When you combine multiple LoRA checkpoints, it’s important to ensure the trigger words for the corresponding LoRA checkpoints are present in the input text prompts.</p></div> <p data-svelte-h="svelte-1dxqvbn">The trigger words for <a href="https://hf.co/CiroN2022/toy-face" rel="nofollow">CiroN2022/toy-face</a> and <a href="https://hf.co/nerijs/pixel-art-xl" rel="nofollow">nerijs/pixel-art-xl</a> are found in their repositories.</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><!-- HTML_TAG_START --><span class="hljs-comment"># Notice how the prompt is constructed.</span> | |
prompt = <span class="hljs-string">"toy_face of a hacker with a hoodie, pixel art"</span>
image = pipe(
    prompt, num_inference_steps=<span class="hljs-number">30</span>, cross_attention_kwargs={<span class="hljs-string">"scale"</span>: <span class="hljs-number">1.0</span>}, generator=torch.manual_seed(<span class="hljs-number">0</span>)
).images[<span class="hljs-number">0</span>]
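# (Illustrative aside, not from the guide: with adapter_weights=[0.5, 1.0],
# each LoRA delta is scaled before being added to the base weight,
# conceptually W_eff = W + 0.5 * delta_pixel + 1.0 * delta_toy.
# A toy 1-D example of that arithmetic:)
w_base, d_pixel, d_toy = 1.0, 0.2, 0.4
w_eff = w_base + 0.5 * d_pixel + 1.0 * d_toy  # 1.0 + 0.1 + 0.4 = 1.5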
image<!-- HTML_TAG_END --></pre></div> <p data-svelte-h="svelte-azw8sd"><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/peft_integration/diffusers_peft_lora_inference_16_1.png" alt="toy-face-pixel-art"></p> <p data-svelte-h="svelte-1x3u637">Impressive! As you can see, the model was able to generate an image that mixes the characteristics of both adapters.</p> <p data-svelte-h="svelte-1jxucdy">If you want to go back to using only one adapter, use the <a href="/docs/diffusers/v0.26.2/en/api/loaders/unet#diffusers.loaders.UNet2DConditionLoadersMixin.set_adapters">set_adapters()</a> method to activate the <code>"toy"</code> adapter:</p> <div class="code-block relative"> <pre class=""><!-- HTML_TAG_START --><span class="hljs-comment"># First, set the adapter.</span>
pipe.set_adapters(<span class="hljs-string">"toy"</span>)
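# (Hedged note, not from the original guide: passing a single adapter name
# activates only that adapter, with its weight reset to the default of 1.0;
# the `scale` passed at inference time still applies on top of that.)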
<span class="hljs-comment"># Then, run inference.</span>
prompt = <span class="hljs-string">"toy_face of a hacker with a hoodie"</span>
lora_scale = <span class="hljs-number">0.9</span>
image = pipe(
    prompt, num_inference_steps=<span class="hljs-number">30</span>, cross_attention_kwargs={<span class="hljs-string">"scale"</span>: lora_scale}, generator=torch.manual_seed(<span class="hljs-number">0</span>)
).images[<span class="hljs-number">0</span>]
image<!-- HTML_TAG_END --></pre></div> <p data-svelte-h="svelte-h7m96j"><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/peft_integration/diffusers_peft_lora_inference_18_1.png" alt="toy-face-again"></p> <p data-svelte-h="svelte-drxqw9">If you want to switch to only the base model, disable all LoRAs with the <a href="/docs/diffusers/v0.26.2/en/api/loaders/unet#diffusers.loaders.UNet2DConditionLoadersMixin.disable_lora">disable_lora()</a> method.</p> <div class="code-block relative"> <pre class=""><!-- HTML_TAG_START -->pipe.disable_lora()
prompt = <span class="hljs-string">"toy_face of a hacker with a hoodie"</span>
image = pipe(prompt, num_inference_steps=<span class="hljs-number">30</span>, generator=torch.manual_seed(<span class="hljs-number">0</span>)).images[<span class="hljs-number">0</span>]
| image<!-- HTML_TAG_END --></pre></div> <p data-svelte-h="svelte-if1nej"><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/peft_integration/diffusers_peft_lora_inference_20_1.png" alt="no-lora"></p> <h2 class="relative group"><a id="monitoring-active-adapters" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#monitoring-active-adapters"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span>Monitoring active adapters</span></h2> <p data-svelte-h="svelte-mwfvwm">You have attached multiple adapters in this tutorial, and if you’re feeling a bit lost on what adapters have been attached to the pipeline’s components, you can easily check the list of active adapters using the <a href="/docs/diffusers/v0.26.2/en/api/loaders/lora#diffusers.loaders.LoraLoaderMixin.get_active_adapters">get_active_adapters()</a> method:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" 
type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><!-- HTML_TAG_START -->active_adapters = pipe.get_active_adapters() | |
active_adapters
[<span class="hljs-string">"toy"</span>, <span class="hljs-string">"pixel"</span>]<!-- HTML_TAG_END --></pre></div> <p data-svelte-h="svelte-1pqx896">You can also get the active adapters of each pipeline component with <a href="/docs/diffusers/v0.26.2/en/api/loaders/lora#diffusers.loaders.LoraLoaderMixin.get_list_adapters">get_list_adapters()</a>:</p> <div class="code-block relative"> <pre class=""><!-- HTML_TAG_START -->list_adapters_component_wise = pipe.get_list_adapters()
list_adapters_component_wise
| {<span class="hljs-string">"text_encoder"</span>: [<span class="hljs-string">"toy"</span>, <span class="hljs-string">"pixel"</span>], <span class="hljs-string">"unet"</span>: [<span class="hljs-string">"toy"</span>, <span class="hljs-string">"pixel"</span>], <span class="hljs-string">"text_encoder_2"</span>: [<span class="hljs-string">"toy"</span>, <span class="hljs-string">"pixel"</span>]}<!-- HTML_TAG_END --></pre></div> <h2 class="relative group"><a id="fusing-adapters-into-the-model" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#fusing-adapters-into-the-model"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span>Fusing adapters into the model</span></h2> <p data-svelte-h="svelte-pylv7x">You can use PEFT to easily fuse/unfuse multiple adapters directly into the model weights (both UNet and text encoder) using the <a href="/docs/diffusers/v0.26.2/en/api/loaders/lora#diffusers.loaders.LoraLoaderMixin.fuse_lora">fuse_lora()</a> method, which can lead to a speed-up in inference and lower VRAM usage.</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative 
text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><!-- HTML_TAG_START -->pipe.load_lora_weights(<span class="hljs-string">"nerijs/pixel-art-xl"</span>, weight_name=<span class="hljs-string">"pixel-art-xl.safetensors"</span>, adapter_name=<span class="hljs-string">"pixel"</span>) | |
| pipe.load_lora_weights(<span class="hljs-string">"CiroN2022/toy-face"</span>, weight_name=<span class="hljs-string">"toy_face_sdxl.safetensors"</span>, adapter_name=<span class="hljs-string">"toy"</span>) | |
| pipe.set_adapters([<span class="hljs-string">"pixel"</span>, <span class="hljs-string">"toy"</span>], adapter_weights=[<span class="hljs-number">0.5</span>, <span class="hljs-number">1.0</span>]) | |
| <span class="hljs-comment"># Fuses the LoRAs into the UNet</span> | |
| pipe.fuse_lora() | |
| prompt = <span class="hljs-string">"toy_face of a hacker with a hoodie, pixel art"</span> | |
| image = pipe(prompt, num_inference_steps=<span class="hljs-number">30</span>, generator=torch.manual_seed(<span class="hljs-number">0</span>)).images[<span class="hljs-number">0</span>] | |
| <span class="hljs-comment"># Restores the UNet to its original state</span> | |
| pipe.unfuse_lora()<!-- HTML_TAG_END --></pre></div> <p data-svelte-h="svelte-10hust8">You can also fuse some adapters using <code>adapter_names</code> for faster generation:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><!-- HTML_TAG_START -->pipe.load_lora_weights(<span class="hljs-string">"nerijs/pixel-art-xl"</span>, weight_name=<span class="hljs-string">"pixel-art-xl.safetensors"</span>, adapter_name=<span class="hljs-string">"pixel"</span>) | |
| pipe.load_lora_weights(<span class="hljs-string">"CiroN2022/toy-face"</span>, weight_name=<span class="hljs-string">"toy_face_sdxl.safetensors"</span>, adapter_name=<span class="hljs-string">"toy"</span>) | |
| pipe.set_adapters([<span class="hljs-string">"pixel"</span>], adapter_weights=[<span class="hljs-number">0.5</span>]) | |
| <span class="hljs-comment"># Fuses only the "pixel" LoRA into the UNet</span> | |
| pipe.fuse_lora(adapter_names=[<span class="hljs-string">"pixel"</span>]) | |
| prompt = <span class="hljs-string">"a hacker with a hoodie, pixel art"</span> | |
| image = pipe(prompt, num_inference_steps=<span class="hljs-number">30</span>, generator=torch.manual_seed(<span class="hljs-number">0</span>)).images[<span class="hljs-number">0</span>] | |
| <span class="hljs-comment"># Restores the UNet to its original state</span> | |
| pipe.unfuse_lora() | |
| <span class="hljs-comment"># Fuse all adapters</span> | |
| pipe.fuse_lora(adapter_names=[<span class="hljs-string">"pixel"</span>, <span class="hljs-string">"toy"</span>]) | |
| prompt = <span class="hljs-string">"toy_face of a hacker with a hoodie, pixel art"</span> | |
| image = pipe(prompt, num_inference_steps=<span class="hljs-number">30</span>, generator=torch.manual_seed(<span class="hljs-number">0</span>)).images[<span class="hljs-number">0</span>]<!-- HTML_TAG_END --></pre></div> <h2 class="relative group"><a id="saving-a-pipeline-after-fusing-the-adapters" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#saving-a-pipeline-after-fusing-the-adapters"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span>Saving a pipeline after fusing the adapters</span></h2> <p data-svelte-h="svelte-vfsma9">To properly save a pipeline after it’s been loaded with the adapters, it should be serialized like so:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path 
d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><!-- HTML_TAG_START -->pipe.fuse_lora(lora_scale=<span class="hljs-number">1.0</span>) | |
| pipe.unload_lora_weights() | |
| pipe.save_pretrained(<span class="hljs-string">"path-to-pipeline"</span>)<!-- HTML_TAG_END --></pre></div> <p></p> | |
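<p>Conceptually, fusing folds each LoRA's low-rank update into the base weight matrix (<code>W' = W + scale · B A</code>), which is why a fused pipeline can be saved and reloaded as plain weights with no LoRA machinery attached. The following is a minimal NumPy sketch of that identity, with hypothetical shapes and a made-up scale, not diffusers' actual implementation:</p>

```python
import numpy as np

rng = np.random.default_rng(0)

# Base linear weight and a rank-4 LoRA update (hypothetical shapes).
W = rng.standard_normal((16, 16))
A = rng.standard_normal((4, 16))   # LoRA "down" projection (lora_A)
B = rng.standard_normal((16, 4))   # LoRA "up" projection (lora_B)
scale = 0.5                        # plays the role of the adapter weight / lora_scale

x = rng.standard_normal(16)

# Unfused: base output plus the scaled low-rank path, two extra matmuls per call.
unfused = W @ x + scale * (B @ (A @ x))

# Fused: fold the update into the weight once, then a single matmul per call.
W_fused = W + scale * (B @ A)
fused = W_fused @ x

assert np.allclose(unfused, fused)
```

<p>The two paths produce identical outputs; fusing simply pays the <code>B @ A</code> cost once up front instead of on every forward pass, which is where the inference speed-up comes from.</p>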
| <script> | |
| { | |
| __sveltekit_j7lbip = { | |
| assets: "/docs/diffusers/v0.26.2/en", | |
| base: "/docs/diffusers/v0.26.2/en", | |
| env: {} | |
| }; | |
| const element = document.currentScript.parentElement; | |
| const data = [null,null]; | |
| Promise.all([ | |
| import("/docs/diffusers/v0.26.2/en/_app/immutable/entry/start.ca2a3496.js"), | |
| import("/docs/diffusers/v0.26.2/en/_app/immutable/entry/app.76ddbcd8.js") | |
| ]).then(([kit, app]) => { | |
| kit.start(app, element, { | |
| node_ids: [0, 156], | |
| data, | |
| form: null, | |
| error: null | |
| }); | |
| }); | |
| } | |
| </script> | |