Buckets:
| <meta charset="utf-8" /><meta name="hf:doc:metadata" content="{"title":"Load adapters","local":"load-adapters","sections":[{"title":"DreamBooth","local":"dreambooth","sections":[],"depth":2},{"title":"Textual inversion","local":"textual-inversion","sections":[],"depth":2},{"title":"LoRA","local":"lora","sections":[{"title":"Load multiple LoRAs","local":"load-multiple-loras","sections":[],"depth":3},{"title":"🤗 PEFT","local":"-peft","sections":[],"depth":3},{"title":"Kohya and TheLastBen","local":"kohya-and-thelastben","sections":[],"depth":3}],"depth":2},{"title":"IP-Adapter","local":"ip-adapter","sections":[{"title":"LCM-Lora","local":"lcm-lora","sections":[],"depth":3},{"title":"Other pipelines","local":"other-pipelines","sections":[],"depth":3}],"depth":2}],"depth":1}"> | |
| <link href="/docs/diffusers/v0.26.2/en/_app/immutable/assets/0.e3b0c442.css" rel="modulepreload"> | |
| <link rel="modulepreload" href="/docs/diffusers/v0.26.2/en/_app/immutable/entry/start.ca2a3496.js"> | |
| <link rel="modulepreload" href="/docs/diffusers/v0.26.2/en/_app/immutable/chunks/scheduler.182ea377.js"> | |
| <link rel="modulepreload" href="/docs/diffusers/v0.26.2/en/_app/immutable/chunks/singletons.8615e179.js"> | |
| <link rel="modulepreload" href="/docs/diffusers/v0.26.2/en/_app/immutable/chunks/index.1f6d62f6.js"> | |
| <link rel="modulepreload" href="/docs/diffusers/v0.26.2/en/_app/immutable/chunks/paths.b0157720.js"> | |
| <link rel="modulepreload" href="/docs/diffusers/v0.26.2/en/_app/immutable/entry/app.76ddbcd8.js"> | |
| <link rel="modulepreload" href="/docs/diffusers/v0.26.2/en/_app/immutable/chunks/index.abf12888.js"> | |
| <link rel="modulepreload" href="/docs/diffusers/v0.26.2/en/_app/immutable/nodes/0.1ba30a4e.js"> | |
| <link rel="modulepreload" href="/docs/diffusers/v0.26.2/en/_app/immutable/chunks/each.e59479a4.js"> | |
| <link rel="modulepreload" href="/docs/diffusers/v0.26.2/en/_app/immutable/nodes/175.434d5cfb.js"> | |
| <link rel="modulepreload" href="/docs/diffusers/v0.26.2/en/_app/immutable/chunks/Tip.230e2334.js"> | |
| <link rel="modulepreload" href="/docs/diffusers/v0.26.2/en/_app/immutable/chunks/CodeBlock.57fe6e13.js"> | |
| <link rel="modulepreload" href="/docs/diffusers/v0.26.2/en/_app/immutable/chunks/DocNotebookDropdown.5fa27ace.js"> | |
| <link rel="modulepreload" href="/docs/diffusers/v0.26.2/en/_app/immutable/chunks/globals.7f7f1b26.js"> | |
| <link rel="modulepreload" href="/docs/diffusers/v0.26.2/en/_app/immutable/chunks/Heading.16916d63.js"> | |
| <link rel="modulepreload" href="/docs/diffusers/v0.26.2/en/_app/immutable/chunks/HfOption.fc88c804.js"> | |
| <link rel="modulepreload" href="/docs/diffusers/v0.26.2/en/_app/immutable/chunks/stores.aa51e67d.js"><!-- HEAD_svelte-u9bgzb_START --><meta name="hf:doc:metadata" content="{"title":"Load adapters","local":"load-adapters","sections":[{"title":"DreamBooth","local":"dreambooth","sections":[],"depth":2},{"title":"Textual inversion","local":"textual-inversion","sections":[],"depth":2},{"title":"LoRA","local":"lora","sections":[{"title":"Load multiple LoRAs","local":"load-multiple-loras","sections":[],"depth":3},{"title":"🤗 PEFT","local":"-peft","sections":[],"depth":3},{"title":"Kohya and TheLastBen","local":"kohya-and-thelastben","sections":[],"depth":3}],"depth":2},{"title":"IP-Adapter","local":"ip-adapter","sections":[{"title":"LCM-Lora","local":"lcm-lora","sections":[],"depth":3},{"title":"Other pipelines","local":"other-pipelines","sections":[],"depth":3}],"depth":2}],"depth":1}"><!-- HEAD_svelte-u9bgzb_END --> <p></p> <h1 class="relative group"><a id="load-adapters" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#load-adapters"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span>Load adapters</span></h1> <div class="flex space-x-1 absolute z-10 right-0 top-0"> <div class="relative colab-dropdown "> <button class=" " type="button"> <img alt="Open In Colab" class="!m-0" src="https://colab.research.google.com/assets/colab-badge.svg"> </button> </div> <div class="relative colab-dropdown "> <button class=" " type="button"> <img alt="Open In Studio Lab" class="!m-0" src="https://studiolab.sagemaker.aws/studiolab.svg"> </button> </div></div> <p data-svelte-h="svelte-ax3rwx">There are several <a href="../training/overview">training</a> techniques for personalizing diffusion models to generate images of a specific subject or images in certain styles. Each of these training methods produces a different type of adapter. Some of the adapters generate an entirely new model, while other adapters only modify a smaller set of embeddings or weights. This means the loading process for each adapter is also different.</p> <p data-svelte-h="svelte-h4deay">This guide will show you how to load DreamBooth, textual inversion, and LoRA weights.</p> <div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400"><p data-svelte-h="svelte-unc393">Feel free to browse the <a href="https://huggingface.co/spaces/sd-concepts-library/stable-diffusion-conceptualizer" rel="nofollow">Stable Diffusion Conceptualizer</a>, <a href="https://huggingface.co/spaces/multimodalart/LoraTheExplorer" rel="nofollow">LoRA the Explorer</a>, and the <a href="https://huggingface.co/spaces/huggingface-projects/diffusers-gallery" rel="nofollow">Diffusers Models Gallery</a> for checkpoints and embeddings to use.</p></div> <h2 class="relative group"><a id="dreambooth" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#dreambooth"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span>DreamBooth</span></h2> <p data-svelte-h="svelte-yncm6r"><a href="https://dreambooth.github.io/" rel="nofollow">DreamBooth</a> finetunes an <em>entire diffusion model</em> on just several images of a subject to generate images of that subject in new styles and settings. This method works by using a special word in the prompt that the model learns to associate with the subject image. Of all the training methods, DreamBooth produces the largest file size (usually a few GBs) because it is a full checkpoint model.</p> <p data-svelte-h="svelte-1ngpipq">Let’s load the <a href="https://huggingface.co/sd-dreambooth-library/herge-style" rel="nofollow">herge_style</a> checkpoint, which is trained on just 10 images drawn by Hergé, to generate images in that style. For it to work, you need to include the special word <code>herge_style</code> in your prompt to trigger the checkpoint:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><!-- HTML_TAG_START --><span class="hljs-keyword">from</span> diffusers <span class="hljs-keyword">import</span> AutoPipelineForText2Image | |
| <span class="hljs-keyword">import</span> torch | |
| pipeline = AutoPipelineForText2Image.from_pretrained(<span class="hljs-string">"sd-dreambooth-library/herge-style"</span>, torch_dtype=torch.float16).to(<span class="hljs-string">"cuda"</span>) | |
| prompt = <span class="hljs-string">"A cute herge_style brown bear eating a slice of pizza, stunning color scheme, masterpiece, illustration"</span> | |
| image = pipeline(prompt).images[<span class="hljs-number">0</span>] | |
| image<!-- HTML_TAG_END --></pre></div> <div class="flex justify-center" data-svelte-h="svelte-1qqjocv"><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/load_dreambooth.png"></div> <h2 class="relative group"><a id="textual-inversion" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#textual-inversion"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span>Textual inversion</span></h2> <p data-svelte-h="svelte-z0bjzf"><a href="https://textual-inversion.github.io/" rel="nofollow">Textual inversion</a> is very similar to DreamBooth and it can also personalize a diffusion model to generate certain concepts (styles, objects) from just a few images. This method works by training and finding new embeddings that represent the images you provide with a special word in the prompt. As a result, the diffusion model weights stay the same and the training process produces a relatively tiny (a few KBs) file.</p> <p data-svelte-h="svelte-1bhtcph">Because textual inversion creates embeddings, it cannot be used on its own like DreamBooth and requires another model.</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><!-- HTML_TAG_START --><span class="hljs-keyword">from</span> diffusers <span class="hljs-keyword">import</span> AutoPipelineForText2Image | |
| <span class="hljs-keyword">import</span> torch | |
| pipeline = AutoPipelineForText2Image.from_pretrained(<span class="hljs-string">"runwayml/stable-diffusion-v1-5"</span>, torch_dtype=torch.float16).to(<span class="hljs-string">"cuda"</span>)<!-- HTML_TAG_END --></pre></div> <p data-svelte-h="svelte-94aszy">Now you can load the textual inversion embeddings with the <a href="/docs/diffusers/v0.26.2/en/api/loaders/textual_inversion#diffusers.loaders.TextualInversionLoaderMixin.load_textual_inversion">load_textual_inversion()</a> method and generate some images. Let’s load the <a href="https://huggingface.co/sd-concepts-library/gta5-artwork" rel="nofollow">sd-concepts-library/gta5-artwork</a> embeddings and you’ll need to include the special word <code><gta5-artwork></code> in your prompt to trigger it:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><!-- HTML_TAG_START -->pipeline.load_textual_inversion(<span class="hljs-string">"sd-concepts-library/gta5-artwork"</span>) | |
| prompt = <span class="hljs-string">"A cute brown bear eating a slice of pizza, stunning color scheme, masterpiece, illustration, <gta5-artwork> style"</span> | |
| image = pipeline(prompt).images[<span class="hljs-number">0</span>] | |
| image<!-- HTML_TAG_END --></pre></div> <div class="flex justify-center" data-svelte-h="svelte-vwb4li"><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/load_txt_embed.png"></div> <p data-svelte-h="svelte-98ehgv">Textual inversion can also be trained on undesirable things to create <em>negative embeddings</em> to discourage a model from generating images with those undesirable things like blurry images or extra fingers on a hand. This can be an easy way to quickly improve your prompt. You’ll also load the embeddings with <a href="/docs/diffusers/v0.26.2/en/api/loaders/textual_inversion#diffusers.loaders.TextualInversionLoaderMixin.load_textual_inversion">load_textual_inversion()</a>, but this time, you’ll need two more parameters:</p> <ul data-svelte-h="svelte-1v8nqu6"><li><code>weight_name</code>: specifies the weight file to load if the file was saved in the 🤗 Diffusers format with a specific name or if the file is stored in the A1111 format</li> <li><code>token</code>: specifies the special word to use in the prompt to trigger the embeddings</li></ul> <p data-svelte-h="svelte-1eujnzn">Let’s load the <a href="https://huggingface.co/sayakpaul/EasyNegative-test" rel="nofollow">sayakpaul/EasyNegative-test</a> embeddings:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><!-- HTML_TAG_START -->pipeline.load_textual_inversion( | |
| <span class="hljs-string">"sayakpaul/EasyNegative-test"</span>, weight_name=<span class="hljs-string">"EasyNegative.safetensors"</span>, token=<span class="hljs-string">"EasyNegative"</span> | |
| )<!-- HTML_TAG_END --></pre></div> <p data-svelte-h="svelte-5s5bn9">Now you can use the <code>token</code> to generate an image with the negative embeddings:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><!-- HTML_TAG_START -->prompt = <span class="hljs-string">"A cute brown bear eating a slice of pizza, stunning color scheme, masterpiece, illustration, EasyNegative"</span> | |
| negative_prompt = <span class="hljs-string">"EasyNegative"</span> | |
| image = pipeline(prompt, negative_prompt=negative_prompt, num_inference_steps=<span class="hljs-number">50</span>).images[<span class="hljs-number">0</span>] | |
| image<!-- HTML_TAG_END --></pre></div> <div class="flex justify-center" data-svelte-h="svelte-j6euo"><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/load_neg_embed.png"></div> <h2 class="relative group"><a id="lora" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#lora"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span>LoRA</span></h2> <p data-svelte-h="svelte-10sp1c2"><a href="https://huggingface.co/papers/2106.09685" rel="nofollow">Low-Rank Adaptation (LoRA)</a> is a popular training technique because it is fast and generates smaller file sizes (a couple hundred MBs). Like the other methods in this guide, LoRA can train a model to learn new styles from just a few images. It works by inserting new weights into the diffusion model and then only the new weights are trained instead of the entire model. This makes LoRAs faster to train and easier to store.</p> <div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400"><p data-svelte-h="svelte-1ulwib4">LoRA is a very general training technique that can be used with other training methods. For example, it is common to train a model with DreamBooth and LoRA.</p></div> <p data-svelte-h="svelte-l2hd9l">LoRAs also need to be used with another model:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><!-- HTML_TAG_START --><span class="hljs-keyword">from</span> diffusers <span class="hljs-keyword">import</span> AutoPipelineForText2Image | |
| <span class="hljs-keyword">import</span> torch | |
| pipeline = AutoPipelineForText2Image.from_pretrained(<span class="hljs-string">"stabilityai/stable-diffusion-xl-base-1.0"</span>, torch_dtype=torch.float16).to(<span class="hljs-string">"cuda"</span>)<!-- HTML_TAG_END --></pre></div> <p data-svelte-h="svelte-1u110b2">Then use the <a href="/docs/diffusers/v0.26.2/en/api/loaders/lora#diffusers.loaders.LoraLoaderMixin.load_lora_weights">load_lora_weights()</a> method to load the <a href="https://huggingface.co/ostris/super-cereal-sdxl-lora" rel="nofollow">ostris/super-cereal-sdxl-lora</a> weights and specify the weights filename from the repository:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><!-- HTML_TAG_START -->pipeline.load_lora_weights(<span class="hljs-string">"ostris/super-cereal-sdxl-lora"</span>, weight_name=<span class="hljs-string">"cereal_box_sdxl_v1.safetensors"</span>) | |
| prompt = <span class="hljs-string">"bears, pizza bites"</span> | |
| image = pipeline(prompt).images[<span class="hljs-number">0</span>] | |
| image<!-- HTML_TAG_END --></pre></div> <div class="flex justify-center" data-svelte-h="svelte-848z5s"><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/load_lora.png"></div> <p data-svelte-h="svelte-1hz2gq9">The <a href="/docs/diffusers/v0.26.2/en/api/loaders/lora#diffusers.loaders.LoraLoaderMixin.load_lora_weights">load_lora_weights()</a> method loads LoRA weights into both the UNet and text encoder. It is the preferred way for loading LoRAs because it can handle cases where:</p> <ul data-svelte-h="svelte-ddcy8o"><li>the LoRA weights don’t have separate identifiers for the UNet and text encoder</li> <li>the LoRA weights have separate identifiers for the UNet and text encoder</li></ul> <p data-svelte-h="svelte-13qkrq6">But if you only need to load LoRA weights into the UNet, then you can use the <a href="/docs/diffusers/v0.26.2/en/api/loaders/unet#diffusers.loaders.UNet2DConditionLoadersMixin.load_attn_procs">load_attn_procs()</a> method. Let’s load the <a href="https://huggingface.co/jbilcke-hf/sdxl-cinematic-1" rel="nofollow">jbilcke-hf/sdxl-cinematic-1</a> LoRA:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><!-- HTML_TAG_START --><span class="hljs-keyword">from</span> diffusers <span class="hljs-keyword">import</span> AutoPipelineForText2Image | |
| <span class="hljs-keyword">import</span> torch | |
| pipeline = AutoPipelineForText2Image.from_pretrained(<span class="hljs-string">"stabilityai/stable-diffusion-xl-base-1.0"</span>, torch_dtype=torch.float16).to(<span class="hljs-string">"cuda"</span>) | |
| pipeline.unet.load_attn_procs(<span class="hljs-string">"jbilcke-hf/sdxl-cinematic-1"</span>, weight_name=<span class="hljs-string">"pytorch_lora_weights.safetensors"</span>) | |
| <span class="hljs-comment"># use cnmt in the prompt to trigger the LoRA</span> | |
| prompt = <span class="hljs-string">"A cute cnmt eating a slice of pizza, stunning color scheme, masterpiece, illustration"</span> | |
| image = pipeline(prompt).images[<span class="hljs-number">0</span>] | |
| image<!-- HTML_TAG_END --></pre></div> <div class="flex justify-center" data-svelte-h="svelte-1etahws"><img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/load_attn_proc.png"></div> <div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400"><p data-svelte-h="svelte-17yvwlu">For both <a href="/docs/diffusers/v0.26.2/en/api/loaders/lora#diffusers.loaders.LoraLoaderMixin.load_lora_weights">load_lora_weights()</a> and <a href="/docs/diffusers/v0.26.2/en/api/loaders/unet#diffusers.loaders.UNet2DConditionLoadersMixin.load_attn_procs">load_attn_procs()</a>, you can pass the <code>cross_attention_kwargs={"scale": 0.5}</code> parameter to adjust how much of the LoRA weights to use. A value of <code>0</code> is the same as only using the base model weights, and a value of <code>1</code> is equivalent to using the fully finetuned LoRA.</p></div> <p data-svelte-h="svelte-101sips">To unload the LoRA weights, use the <a href="/docs/diffusers/v0.26.2/en/api/loaders/lora#diffusers.loaders.LoraLoaderMixin.unload_lora_weights">unload_lora_weights()</a> method to discard the LoRA weights and restore the model to its original weights:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><!-- HTML_TAG_START -->pipeline.unload_lora_weights()<!-- HTML_TAG_END --></pre></div> <h3 class="relative group"><a id="load-multiple-loras" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#load-multiple-loras"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span>Load multiple LoRAs</span></h3> <p data-svelte-h="svelte-uewr75">It can be fun to use multiple LoRAs together to create something entirely new and unique. The <a href="/docs/diffusers/v0.26.2/en/api/loaders/lora#diffusers.loaders.LoraLoaderMixin.fuse_lora">fuse_lora()</a> method allows you to fuse the LoRA weights with the original weights of the underlying model.</p> <div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400"><p data-svelte-h="svelte-w59hk8">Fusing the weights can lead to a speedup in inference latency because you don’t need to separately load the base model and LoRA! You can save your fused pipeline with <a href="/docs/diffusers/v0.26.2/en/api/pipelines/overview#diffusers.DiffusionPipeline.save_pretrained">save_pretrained()</a> to avoid loading and fusing the weights every time you want to use the model.</p></div> <p data-svelte-h="svelte-521584">Load an initial model:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><!-- HTML_TAG_START --><span class="hljs-keyword">from</span> diffusers <span class="hljs-keyword">import</span> StableDiffusionXLPipeline, AutoencoderKL | |
| <span class="hljs-keyword">import</span> torch | |
| vae = AutoencoderKL.from_pretrained(<span class="hljs-string">"madebyollin/sdxl-vae-fp16-fix"</span>, torch_dtype=torch.float16) | |
| pipeline = StableDiffusionXLPipeline.from_pretrained( | |
| <span class="hljs-string">"stabilityai/stable-diffusion-xl-base-1.0"</span>, | |
| vae=vae, | |
| torch_dtype=torch.float16, | |
| ).to(<span class="hljs-string">"cuda"</span>)<!-- HTML_TAG_END --></pre></div> <p data-svelte-h="svelte-uk53wk">Next, load the LoRA checkpoint and fuse it with the original weights. The <code>lora_scale</code> parameter controls how much to scale the output by with the LoRA weights. It is important to make the <code>lora_scale</code> adjustments in the <a href="/docs/diffusers/v0.26.2/en/api/loaders/lora#diffusers.loaders.LoraLoaderMixin.fuse_lora">fuse_lora()</a> method because it won’t work if you try to pass <code>scale</code> to the <code>cross_attention_kwargs</code> in the pipeline.</p> <p data-svelte-h="svelte-nuiod0">If you need to reset the original model weights for any reason (use a different <code>lora_scale</code>), you should use the <a href="/docs/diffusers/v0.26.2/en/api/loaders/lora#diffusers.loaders.LoraLoaderMixin.unfuse_lora">unfuse_lora()</a> method.</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><!-- HTML_TAG_START -->pipeline.load_lora_weights(<span class="hljs-string">"ostris/ikea-instructions-lora-sdxl"</span>) | |
| pipeline.fuse_lora(lora_scale=<span class="hljs-number">0.7</span>) | |
| <span class="hljs-comment"># to unfuse the LoRA weights</span> | |
| pipeline.unfuse_lora()<!-- HTML_TAG_END --></pre></div> <p data-svelte-h="svelte-14o3l2">Then fuse this pipeline with the next set of LoRA weights:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><!-- HTML_TAG_START -->pipeline.load_lora_weights(<span class="hljs-string">"ostris/super-cereal-sdxl-lora"</span>) | |
| pipeline.fuse_lora(lora_scale=<span class="hljs-number">0.7</span>)<!-- HTML_TAG_END --></pre></div> <div class="course-tip course-tip-orange bg-gradient-to-br dark:bg-gradient-to-r before:border-orange-500 dark:before:border-orange-800 from-orange-50 dark:from-gray-900 to-white dark:to-gray-950 border border-orange-50 text-orange-700 dark:text-gray-400"><p data-svelte-h="svelte-15i918l">You can’t unfuse multiple LoRA checkpoints, so if you need to reset the model to its original weights, you’ll need to reload it.</p></div> <p data-svelte-h="svelte-17dfaab">Now you can generate an image that uses the weights from both LoRAs:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><!-- HTML_TAG_START -->prompt = <span class="hljs-string">"A cute brown bear eating a slice of pizza, stunning color scheme, masterpiece, illustration"</span> | |
| image = pipeline(prompt).images[<span class="hljs-number">0</span>] | |
| image<!-- HTML_TAG_END --></pre></div> <h3 class="relative group"><a id="-peft" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#-peft"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span>🤗 PEFT</span></h3> <div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400"><p data-svelte-h="svelte-7mgl0k">Read the <a href="../tutorials/using_peft_for_inference">Inference with 🤗 PEFT</a> tutorial to learn more about its integration with 🤗 Diffusers and how you can easily work with and juggle multiple adapters. You’ll need to install 🤗 Diffusers and PEFT from source to run the example in this section.</p></div> <p data-svelte-h="svelte-1nb4od8">Another way you can load and use multiple LoRAs is to specify the <code>adapter_name</code> parameter in <a href="/docs/diffusers/v0.26.2/en/api/loaders/lora#diffusers.loaders.LoraLoaderMixin.load_lora_weights">load_lora_weights()</a>. This method takes advantage of the 🤗 PEFT integration. For example, load and name both LoRA weights:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><!-- HTML_TAG_START --><span class="hljs-keyword">from</span> diffusers <span class="hljs-keyword">import</span> DiffusionPipeline | |
| <span class="hljs-keyword">import</span> torch | |
| pipeline = DiffusionPipeline.from_pretrained(<span class="hljs-string">"stabilityai/stable-diffusion-xl-base-1.0"</span>, torch_dtype=torch.float16).to(<span class="hljs-string">"cuda"</span>) | |
| pipeline.load_lora_weights(<span class="hljs-string">"ostris/ikea-instructions-lora-sdxl"</span>, weight_name=<span class="hljs-string">"ikea_instructions_xl_v1_5.safetensors"</span>, adapter_name=<span class="hljs-string">"ikea"</span>) | |
| pipeline.load_lora_weights(<span class="hljs-string">"ostris/super-cereal-sdxl-lora"</span>, weight_name=<span class="hljs-string">"cereal_box_sdxl_v1.safetensors"</span>, adapter_name=<span class="hljs-string">"cereal"</span>)<!-- HTML_TAG_END --></pre></div> <p data-svelte-h="svelte-14thu1c">Now use the <a href="/docs/diffusers/v0.26.2/en/api/loaders/unet#diffusers.loaders.UNet2DConditionLoadersMixin.set_adapters">set_adapters()</a> to activate both LoRAs, and you can configure how much weight each LoRA should have on the output:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><!-- HTML_TAG_START -->pipeline.set_adapters([<span class="hljs-string">"ikea"</span>, <span class="hljs-string">"cereal"</span>], adapter_weights=[<span class="hljs-number">0.7</span>, <span class="hljs-number">0.5</span>])<!-- HTML_TAG_END --></pre></div> <p data-svelte-h="svelte-f9ixj4">Then, generate an image:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><!-- HTML_TAG_START -->prompt = <span class="hljs-string">"A cute brown bear eating a slice of pizza, stunning color scheme, masterpiece, illustration"</span> | |
| image = pipeline(prompt, num_inference_steps=<span class="hljs-number">30</span>, cross_attention_kwargs={<span class="hljs-string">"scale"</span>: <span class="hljs-number">1.0</span>}).images[<span class="hljs-number">0</span>] | |
| image<!-- HTML_TAG_END --></pre></div> <h3 class="relative group"><a id="kohya-and-thelastben" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#kohya-and-thelastben"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span>Kohya and TheLastBen</span></h3> <p data-svelte-h="svelte-6zj1or">Other popular LoRA trainers from the community include those by <a href="https://github.com/kohya-ss/sd-scripts/" rel="nofollow">Kohya</a> and <a href="https://github.com/TheLastBen/fast-stable-diffusion" rel="nofollow">TheLastBen</a>. These trainers create different LoRA checkpoints than those trained by 🤗 Diffusers, but they can still be loaded in the same way.</p> <p data-svelte-h="svelte-4kvz7b">Let’s download the <a href="https://civitai.com/models/150986/blueprintify-sd-xl-10" rel="nofollow">Blueprintify SD XL 1.0</a> checkpoint from <a href="https://civitai.com/" rel="nofollow">Civitai</a>:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><!-- HTML_TAG_START -->!wget https://civitai.com/api/download/models/168776 -O blueprintify-sd-xl-10.safetensors<!-- HTML_TAG_END --></pre></div> <p data-svelte-h="svelte-1ro1j2z">Load the LoRA checkpoint with the <a href="/docs/diffusers/v0.26.2/en/api/loaders/lora#diffusers.loaders.LoraLoaderMixin.load_lora_weights">load_lora_weights()</a> method, and specify the filename in the <code>weight_name</code> parameter:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><!-- HTML_TAG_START --><span class="hljs-keyword">from</span> diffusers <span class="hljs-keyword">import</span> AutoPipelineForText2Image | |
| <span class="hljs-keyword">import</span> torch | |
| pipeline = AutoPipelineForText2Image.from_pretrained(<span class="hljs-string">"stabilityai/stable-diffusion-xl-base-1.0"</span>, torch_dtype=torch.float16).to(<span class="hljs-string">"cuda"</span>) | |
| pipeline.load_lora_weights(<span class="hljs-string">"path/to/weights"</span>, weight_name=<span class="hljs-string">"blueprintify-sd-xl-10.safetensors"</span>)<!-- HTML_TAG_END --></pre></div> <p data-svelte-h="svelte-1un5bjt">Generate an image:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><!-- HTML_TAG_START --><span class="hljs-comment"># use bl3uprint in the prompt to trigger the LoRA</span> | |
| prompt = <span class="hljs-string">"bl3uprint, a highly detailed blueprint of the eiffel tower, explaining how to build all parts, many txt, blueprint grid backdrop"</span> | |
| image = pipeline(prompt).images[<span class="hljs-number">0</span>] | |
| image<!-- HTML_TAG_END --></pre></div> <div class="course-tip course-tip-orange bg-gradient-to-br dark:bg-gradient-to-r before:border-orange-500 dark:before:border-orange-800 from-orange-50 dark:from-gray-900 to-white dark:to-gray-950 border border-orange-50 text-orange-700 dark:text-gray-400"><p data-svelte-h="svelte-1aa8at7">Some limitations of using Kohya LoRAs with 🤗 Diffusers include:</p> <ul data-svelte-h="svelte-9v6qx"><li>Images may not look like those generated by UIs - like ComfyUI - for multiple reasons, which are explained <a href="https://github.com/huggingface/diffusers/pull/4287/#issuecomment-1655110736" rel="nofollow">here</a>.</li> <li><a href="https://github.com/KohakuBlueleaf/LyCORIS" rel="nofollow">LyCORIS checkpoints</a> aren’t fully supported. The <a href="/docs/diffusers/v0.26.2/en/api/loaders/lora#diffusers.loaders.LoraLoaderMixin.load_lora_weights">load_lora_weights()</a> method loads LyCORIS checkpoints with LoRA and LoCon modules, but Hada and LoKR are not supported.</li></ul></div> <p data-svelte-h="svelte-qdt5l0">Loading a checkpoint from TheLastBen is very similar. For example, to load the <a href="https://huggingface.co/TheLastBen/William_Eggleston_Style_SDXL" rel="nofollow">TheLastBen/William_Eggleston_Style_SDXL</a> checkpoint:</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><!-- HTML_TAG_START --><span class="hljs-keyword">from</span> diffusers <span class="hljs-keyword">import</span> AutoPipelineForText2Image | |
| <span class="hljs-keyword">import</span> torch | |
| pipeline = AutoPipelineForText2Image.from_pretrained(<span class="hljs-string">"stabilityai/stable-diffusion-xl-base-1.0"</span>, torch_dtype=torch.float16).to(<span class="hljs-string">"cuda"</span>) | |
| pipeline.load_lora_weights(<span class="hljs-string">"TheLastBen/William_Eggleston_Style_SDXL"</span>, weight_name=<span class="hljs-string">"wegg.safetensors"</span>) | |
| <span class="hljs-comment"># use by william eggleston in the prompt to trigger the LoRA</span> | |
| prompt = <span class="hljs-string">"a house by william eggleston, sunrays, beautiful, sunlight, sunrays, beautiful"</span> | |
| image = pipeline(prompt=prompt).images[<span class="hljs-number">0</span>] | |
| image<!-- HTML_TAG_END --></pre></div> <h2 class="relative group"><a id="ip-adapter" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#ip-adapter"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span>IP-Adapter</span></h2> <p data-svelte-h="svelte-1f8uwvu"><a href="https://ip-adapter.github.io/" rel="nofollow">IP-Adapter</a> is an effective and lightweight adapter that adds image prompting capabilities to a diffusion model. This adapter works by decoupling the cross-attention layers of the image and text features. All the other model components are frozen and only the embedded image features in the UNet are trained. As a result, IP-Adapter files are typically only ~100MBs.</p> <p data-svelte-h="svelte-88z36b">IP-Adapter works with most of our pipelines, including Stable Diffusion, Stable Diffusion XL (SDXL), ControlNet, T2I-Adapter, AnimateDiff. And you can use any custom models finetuned from the same base models. It also works with LCM-Lora out of box.</p> <div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400"><p data-svelte-h="svelte-lbes8h">You can find official IP-Adapter checkpoints in <a href="https://huggingface.co/h94/IP-Adapter" rel="nofollow">h94/IP-Adapter</a>.</p> <p data-svelte-h="svelte-dj57ph">IP-Adapter was contributed by <a href="https://github.com/okotaku" rel="nofollow">okotaku</a>.</p></div> <p data-svelte-h="svelte-rscg1q">Let’s first create a Stable Diffusion Pipeline.</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><!-- HTML_TAG_START --><span class="hljs-keyword">from</span> diffusers <span class="hljs-keyword">import</span> AutoPipelineForText2Image | |
| <span class="hljs-keyword">import</span> torch | |
| <span class="hljs-keyword">from</span> diffusers.utils <span class="hljs-keyword">import</span> load_image | |
| pipeline = AutoPipelineForText2Image.from_pretrained(<span class="hljs-string">"runwayml/stable-diffusion-v1-5"</span>, torch_dtype=torch.float16).to(<span class="hljs-string">"cuda"</span>)<!-- HTML_TAG_END --></pre></div> <p data-svelte-h="svelte-18hl4jh">Now load the <a href="https://huggingface.co/h94/IP-Adapter" rel="nofollow">h94/IP-Adapter</a> weights with the <a href="/docs/diffusers/v0.26.2/en/api/loaders/ip_adapter#diffusers.loaders.IPAdapterMixin.load_ip_adapter">load_ip_adapter()</a> method.</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><!-- HTML_TAG_START -->pipeline.load_ip_adapter(<span class="hljs-string">"h94/IP-Adapter"</span>, subfolder=<span class="hljs-string">"models"</span>, weight_name=<span class="hljs-string">"ip-adapter_sd15.bin"</span>)<!-- HTML_TAG_END --></pre></div> <div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400">IP-Adapter relies on an image encoder to generate the image features, if your IP-Adapter weights folder contains a "image_encoder" subfolder, the image encoder will be automatically loaded and registered to the pipeline. Otherwise you can so load a [CLIPVisionModelWithProjection](https://huggingface.co/docs/transformers/v4.37.2/en/model_doc/clip#transformers.CLIPVisionModelWithProjection) model and pass it to a Stable Diffusion pipeline when you create it. | |
| <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><!-- HTML_TAG_START --><span class="hljs-keyword">from</span> diffusers <span class="hljs-keyword">import</span> AutoPipelineForText2Image | |
| <span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> CLIPVisionModelWithProjection | |
| <span class="hljs-keyword">import</span> torch | |
| image_encoder = CLIPVisionModelWithProjection.from_pretrained( | |
| <span class="hljs-string">"h94/IP-Adapter"</span>, | |
| subfolder=<span class="hljs-string">"models/image_encoder"</span>, | |
| torch_dtype=torch.float16, | |
| ).to(<span class="hljs-string">"cuda"</span>) | |
| pipeline = AutoPipelineForText2Image.from_pretrained(<span class="hljs-string">"runwayml/stable-diffusion-v1-5"</span>, image_encoder=image_encoder, torch_dtype=torch.float16).to(<span class="hljs-string">"cuda"</span>)<!-- HTML_TAG_END --></pre></div></div> <p data-svelte-h="svelte-4eucq1">IP-Adapter allows you to use both image and text to condition the image generation process. For example, let’s use the bear image from the <a href="#textual-inversion">Textual Inversion</a> section as the image prompt (<code>ip_adapter_image</code>) along with a text prompt to add “sunglasses”. 😎</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><!-- HTML_TAG_START -->pipeline.set_ip_adapter_scale(<span class="hljs-number">0.6</span>) | |
| image = load_image(<span class="hljs-string">"https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/load_neg_embed.png"</span>) | |
| generator = torch.Generator(device=<span class="hljs-string">"cpu"</span>).manual_seed(<span class="hljs-number">33</span>) | |
| images = pipeline( | |
| prompt=<span class="hljs-string">'best quality, high quality, wearing sunglasses'</span>, | |
| ip_adapter_image=image, | |
| negative_prompt=<span class="hljs-string">"monochrome, lowres, bad anatomy, worst quality, low quality"</span>, | |
| num_inference_steps=<span class="hljs-number">50</span>, | |
| generator=generator, | |
| ).images | |
| images[<span class="hljs-number">0</span>]<!-- HTML_TAG_END --></pre></div> <div class="flex justify-center" data-svelte-h="svelte-1vum0wo"> <img src="https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/ip-bear.png"></div> <div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400"><p data-svelte-h="svelte-1npz33y">You can use the <code>set_ip_adapter_scale()</code> method to adjust the text prompt and image prompt condition ratio. If you’re only using the image prompt, you should set the scale to <code>1.0</code>. You can lower the scale to get more generation diversity, but it’ll be less aligned with the prompt. | |
| <code>scale=0.5</code> can achieve good results in most cases when you use both text and image prompts.</p></div> <p data-svelte-h="svelte-18vs95f">IP-Adapter also works great with Image-to-Image and Inpainting pipelines. See below examples of how you can use it with Image-to-Image and Inpaint.</p> <div class="flex space-x-2 items-center my-1.5 mr-8 h-7 !pl-0 -mx-3 md:mx-0"><div class="flex items-center border rounded-lg px-1.5 py-1 leading-none select-none text-smd border-gray-800 bg-black dark:bg-gray-700 text-white">image-to-image </div><div class="flex items-center border rounded-lg px-1.5 py-1 leading-none select-none text-smd text-gray-500 cursor-pointer opacity-90 hover:text-gray-700 dark:hover:text-gray-200 hover:shadow-sm">inpaint </div></div> <div class="language-select"><div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><!-- HTML_TAG_START --><span class="hljs-keyword">from</span> diffusers <span class="hljs-keyword">import</span> AutoPipelineForImage2Image | |
| <span class="hljs-keyword">import</span> torch | |
| <span class="hljs-keyword">from</span> diffusers.utils <span class="hljs-keyword">import</span> load_image | |
| pipeline = AutoPipelineForImage2Image.from_pretrained(<span class="hljs-string">"runwayml/stable-diffusion-v1-5"</span>, torch_dtype=torch.float16).to(<span class="hljs-string">"cuda"</span>) | |
| image = load_image(<span class="hljs-string">"https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/vermeer.jpg"</span>) | |
| ip_image = load_image(<span class="hljs-string">"https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/river.png"</span>) | |
| pipeline.load_ip_adapter(<span class="hljs-string">"h94/IP-Adapter"</span>, subfolder=<span class="hljs-string">"models"</span>, weight_name=<span class="hljs-string">"ip-adapter_sd15.bin"</span>) | |
| generator = torch.Generator(device=<span class="hljs-string">"cpu"</span>).manual_seed(<span class="hljs-number">33</span>) | |
| images = pipeline( | |
| prompt=<span class="hljs-string">'best quality, high quality'</span>, | |
| image = image, | |
| ip_adapter_image=ip_image, | |
| num_inference_steps=<span class="hljs-number">50</span>, | |
| generator=generator, | |
| strength=<span class="hljs-number">0.6</span>, | |
| ).images | |
| images[<span class="hljs-number">0</span>]<!-- HTML_TAG_END --></pre></div> </div> <p data-svelte-h="svelte-dbuqym">IP-Adapters can also be used with <a href="../api/pipelines/stable_diffusion/stable_diffusion_xl.md">SDXL</a></p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><!-- HTML_TAG_START --><span class="hljs-keyword">from</span> diffusers <span class="hljs-keyword">import</span> AutoPipelineForText2Image | |
| <span class="hljs-keyword">from</span> diffusers.utils <span class="hljs-keyword">import</span> load_image | |
| <span class="hljs-keyword">import</span> torch | |
| pipeline = AutoPipelineForText2Image.from_pretrained( | |
| <span class="hljs-string">"stabilityai/stable-diffusion-xl-base-1.0"</span>, | |
| torch_dtype=torch.float16 | |
| ).to(<span class="hljs-string">"cuda"</span>) | |
| image = load_image(<span class="hljs-string">"https://huggingface.co/datasets/sayakpaul/sample-datasets/resolve/main/watercolor_painting.jpeg"</span>) | |
| pipeline.load_ip_adapter(<span class="hljs-string">"h94/IP-Adapter"</span>, subfolder=<span class="hljs-string">"sdxl_models"</span>, weight_name=<span class="hljs-string">"ip-adapter_sdxl.bin"</span>) | |
| generator = torch.Generator(device=<span class="hljs-string">"cpu"</span>).manual_seed(<span class="hljs-number">33</span>) | |
| image = pipeline( | |
| prompt=<span class="hljs-string">"best quality, high quality"</span>, | |
| ip_adapter_image=image, | |
| negative_prompt=<span class="hljs-string">"monochrome, lowres, bad anatomy, worst quality, low quality"</span>, | |
| num_inference_steps=<span class="hljs-number">25</span>, | |
| generator=generator, | |
| ).images[<span class="hljs-number">0</span>] | |
| image.save(<span class="hljs-string">"sdxl_t2i.png"</span>)<!-- HTML_TAG_END --></pre></div> <div class="flex flex-row gap-4" data-svelte-h="svelte-1uteq2l"><div class="flex-1"><img class="rounded-xl" src="https://huggingface.co/datasets/sayakpaul/sample-datasets/resolve/main/watercolor_painting.jpeg"> <figcaption class="mt-2 text-center text-sm text-gray-500">input image</figcaption></div> <div class="flex-1"><img class="rounded-xl" src="https://huggingface.co/datasets/sayakpaul/sample-datasets/resolve/main/sdxl_t2i.png"> <figcaption class="mt-2 text-center text-sm text-gray-500">adapted image</figcaption></div></div> <p data-svelte-h="svelte-1j1rhk6">You can use the IP-Adapter face model to apply specific faces to your images. It is an effective way to maintain consistent characters in your image generations. | |
| Weights are loaded with the same method used for the other IP-Adapters.</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><!-- HTML_TAG_START --><span class="hljs-comment"># Load ip-adapter-full-face_sd15.bin</span> | |
| pipeline.load_ip_adapter(<span class="hljs-string">"h94/IP-Adapter"</span>, subfolder=<span class="hljs-string">"models"</span>, weight_name=<span class="hljs-string">"ip-adapter-full-face_sd15.bin"</span>)<!-- HTML_TAG_END --></pre></div> <div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400"><p data-svelte-h="svelte-1qlwv3c">It is recommended to use <code>DDIMScheduler</code> and <code>EulerDiscreteScheduler</code> for face model.</p></div> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><!-- HTML_TAG_START --><span class="hljs-keyword">import</span> torch | |
| <span class="hljs-keyword">from</span> diffusers <span class="hljs-keyword">import</span> StableDiffusionPipeline, DDIMScheduler | |
| <span class="hljs-keyword">from</span> diffusers.utils <span class="hljs-keyword">import</span> load_image | |
| pipeline = StableDiffusionPipeline.from_pretrained( | |
| <span class="hljs-string">"runwayml/stable-diffusion-v1-5"</span>, | |
| torch_dtype=torch.float16, | |
| ).to(<span class="hljs-string">"cuda"</span>) | |
| pipeline.scheduler = DDIMScheduler.from_config(pipeline.scheduler.config) | |
| pipeline.load_ip_adapter(<span class="hljs-string">"h94/IP-Adapter"</span>, subfolder=<span class="hljs-string">"models"</span>, weight_name=<span class="hljs-string">"ip-adapter-full-face_sd15.bin"</span>) | |
| pipeline.set_ip_adapter_scale(<span class="hljs-number">0.7</span>) | |
| image = load_image(<span class="hljs-string">"https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/ai_face2.png"</span>) | |
| generator = torch.Generator(device=<span class="hljs-string">"cpu"</span>).manual_seed(<span class="hljs-number">33</span>) | |
| image = pipeline( | |
| prompt=<span class="hljs-string">"A photo of a girl wearing a black dress, holding red roses in hand, upper body, behind is the Eiffel Tower"</span>, | |
| ip_adapter_image=image, | |
| negative_prompt=<span class="hljs-string">"monochrome, lowres, bad anatomy, worst quality, low quality"</span>, | |
| num_inference_steps=<span class="hljs-number">50</span>, num_images_per_prompt=<span class="hljs-number">1</span>, width=<span class="hljs-number">512</span>, height=<span class="hljs-number">704</span>, | |
| generator=generator, | |
| ).images[<span class="hljs-number">0</span>]<!-- HTML_TAG_END --></pre></div> <div class="flex flex-row gap-4" data-svelte-h="svelte-nksdg5"><div class="flex-1"><img class="rounded-xl" src="https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/ai_face2.png"> <figcaption class="mt-2 text-center text-sm text-gray-500">input image</figcaption></div> <div class="flex-1"><img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/ipadapter_full_face_output.png"> <figcaption class="mt-2 text-center text-sm text-gray-500">output image</figcaption></div></div> <p data-svelte-h="svelte-1botspc">You can load multiple IP-Adapter models and use multiple reference images at the same time. In this example we use IP-Adapter-Plus face model to create a consistent character and also use IP-Adapter-Plus model along with 10 images to create a coherent style in the image we generate.</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><!-- HTML_TAG_START --><span class="hljs-keyword">import</span> torch | |
| <span class="hljs-keyword">from</span> diffusers <span class="hljs-keyword">import</span> AutoPipelineForText2Image, DDIMScheduler | |
| <span class="hljs-keyword">from</span> transformers <span class="hljs-keyword">import</span> CLIPVisionModelWithProjection | |
| <span class="hljs-keyword">from</span> diffusers.utils <span class="hljs-keyword">import</span> load_image | |
| image_encoder = CLIPVisionModelWithProjection.from_pretrained( | |
| <span class="hljs-string">"h94/IP-Adapter"</span>, | |
| subfolder=<span class="hljs-string">"models/image_encoder"</span>, | |
| torch_dtype=torch.float16, | |
| ) | |
| pipeline = AutoPipelineForText2Image.from_pretrained( | |
| <span class="hljs-string">"stabilityai/stable-diffusion-xl-base-1.0"</span>, | |
| torch_dtype=torch.float16, | |
| image_encoder=image_encoder, | |
| ) | |
| pipeline.scheduler = DDIMScheduler.from_config(pipeline.scheduler.config) | |
| pipeline.load_ip_adapter( | |
| <span class="hljs-string">"h94/IP-Adapter"</span>, | |
| subfolder=<span class="hljs-string">"sdxl_models"</span>, | |
| weight_name=[<span class="hljs-string">"ip-adapter-plus_sdxl_vit-h.safetensors"</span>, <span class="hljs-string">"ip-adapter-plus-face_sdxl_vit-h.safetensors"</span>] | |
| ) | |
| pipeline.set_ip_adapter_scale([<span class="hljs-number">0.7</span>, <span class="hljs-number">0.3</span>]) | |
| pipeline.enable_model_cpu_offload() | |
| face_image = load_image(<span class="hljs-string">"https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/women_input.png"</span>) | |
| style_folder = <span class="hljs-string">"https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/style_ziggy"</span> | |
| style_images = [load_image(<span class="hljs-string">f"<span class="hljs-subst">{style_folder}</span>/img<span class="hljs-subst">{i}</span>.png"</span>) <span class="hljs-keyword">for</span> i <span class="hljs-keyword">in</span> <span class="hljs-built_in">range</span>(<span class="hljs-number">10</span>)] | |
| generator = torch.Generator(device=<span class="hljs-string">"cpu"</span>).manual_seed(<span class="hljs-number">0</span>) | |
| image = pipeline( | |
| prompt=<span class="hljs-string">"wonderwoman"</span>, | |
| ip_adapter_image=[style_images, face_image], | |
| negative_prompt=<span class="hljs-string">"monochrome, lowres, bad anatomy, worst quality, low quality"</span>, | |
| num_inference_steps=<span class="hljs-number">50</span>, num_images_per_prompt=<span class="hljs-number">1</span>, | |
| generator=generator, | |
| ).images[<span class="hljs-number">0</span>]<!-- HTML_TAG_END --></pre></div> <div class="flex justify-center" data-svelte-h="svelte-h63ghy"> <img src="https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/ip_style_grid.png"> <figcaption class="mt-2 text-center text-sm text-gray-500">style input image</figcaption></div> <div class="flex flex-row gap-4" data-svelte-h="svelte-19vr18q"><div class="flex-1"><img class="rounded-xl" src="https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/women_input.png"> <figcaption class="mt-2 text-center text-sm text-gray-500">face input image</figcaption></div> <div class="flex-1"><img class="rounded-xl" src="https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/ip_multi_out.png"> <figcaption class="mt-2 text-center text-sm text-gray-500">output image</figcaption></div></div> <h3 class="relative group"><a id="lcm-lora" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#lcm-lora"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span>LCM-Lora</span></h3> <p data-svelte-h="svelte-epa7g9">You can use IP-Adapter with LCM-Lora to achieve “instant fine-tune” with custom images. Note that you need to load IP-Adapter weights before loading the LCM-Lora weights.</p> <div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><!-- HTML_TAG_START --><span class="hljs-keyword">from</span> diffusers <span class="hljs-keyword">import</span> DiffusionPipeline, LCMScheduler | |
| <span class="hljs-keyword">import</span> torch | |
| <span class="hljs-keyword">from</span> diffusers.utils <span class="hljs-keyword">import</span> load_image | |
| model_id = <span class="hljs-string">"sd-dreambooth-library/herge-style"</span> | |
| lcm_lora_id = <span class="hljs-string">"latent-consistency/lcm-lora-sdv1-5"</span> | |
| pipe = DiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16) | |
| pipe.load_ip_adapter(<span class="hljs-string">"h94/IP-Adapter"</span>, subfolder=<span class="hljs-string">"models"</span>, weight_name=<span class="hljs-string">"ip-adapter_sd15.bin"</span>) | |
| pipe.load_lora_weights(lcm_lora_id) | |
| pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config) | |
| pipe.enable_model_cpu_offload() | |
| prompt = <span class="hljs-string">"best quality, high quality"</span> | |
| image = load_image(<span class="hljs-string">"https://user-images.githubusercontent.com/24734142/266492875-2d50d223-8475-44f0-a7c6-08b51cb53572.png"</span>) | |
| images = pipe( | |
| prompt=prompt, | |
| ip_adapter_image=image, | |
| num_inference_steps=<span class="hljs-number">4</span>, | |
| guidance_scale=<span class="hljs-number">1</span>, | |
| ).images[<span class="hljs-number">0</span>]<!-- HTML_TAG_END --></pre></div> <h3 class="relative group"><a id="other-pipelines" class="header-link block pr-1.5 text-lg no-hover:hidden with-hover:absolute with-hover:p-1.5 with-hover:opacity-0 with-hover:group-hover:opacity-100 with-hover:right-full" href="#other-pipelines"><span><svg class="" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 256 256"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span></a> <span>Other pipelines</span></h3> <p data-svelte-h="svelte-1qrsaj2">IP-Adapter is compatible with any pipeline that (1) uses a text prompt and (2) uses Stable Diffusion or Stable Diffusion XL checkpoint. To use IP-Adapter with a different pipeline, all you need to do is to run <code>load_ip_adapter()</code> method after you create the pipeline, and then pass your image to the pipeline as <code>ip_adapter_image</code></p> <div class="course-tip bg-gradient-to-br dark:bg-gradient-to-r before:border-green-500 dark:before:border-green-800 from-green-50 dark:from-gray-900 to-white dark:to-gray-950 border border-green-50 text-green-700 dark:text-gray-400"><p data-svelte-h="svelte-1btgr3x">🤗 Diffusers currently only supports using IP-Adapter with some of the most popular pipelines, feel free to open a <a href="https://github.com/huggingface/diffusers/issues/new/choose" rel="nofollow">feature request</a> if you have a cool use-case and require integrating IP-adapters with a pipeline that does not support it yet!</p></div> <p data-svelte-h="svelte-ft519j">You can find below examples on how to use IP-Adapter with ControlNet and AnimateDiff.</p> <div class="flex space-x-2 items-center my-1.5 mr-8 h-7 !pl-0 -mx-3 md:mx-0"><div class="flex items-center border rounded-lg px-1.5 py-1 leading-none select-none text-smd border-gray-800 bg-black dark:bg-gray-700 text-white">ControlNet </div><div class="flex items-center border rounded-lg px-1.5 py-1 leading-none select-none text-smd text-gray-500 cursor-pointer opacity-90 hover:text-gray-700 dark:hover:text-gray-200 hover:shadow-sm">AnimateDiff </div></div> <div class="language-select"><div class="code-block relative"><div class="absolute top-2.5 right-4"><button class="inline-flex items-center relative text-sm focus:text-green-500 cursor-pointer focus:outline-none transition duration-200 ease-in-out opacity-0 mx-0.5 text-gray-600 " title="code excerpt" type="button"><svg class="" xmlns="http://www.w3.org/2000/svg" aria-hidden="true" fill="currentColor" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 32 32"><path d="M28,10V28H10V10H28m0-2H10a2,2,0,0,0-2,2V28a2,2,0,0,0,2,2H28a2,2,0,0,0,2-2V10a2,2,0,0,0-2-2Z" transform="translate(0)"></path><path d="M4,18H2V4A2,2,0,0,1,4,2H18V4H4Z" transform="translate(0)"></path><rect fill="none" width="32" height="32"></rect></svg> <div class="absolute pointer-events-none transition-opacity bg-black text-white py-1 px-2 leading-tight rounded font-normal shadow left-1/2 top-full transform -translate-x-1/2 translate-y-2 opacity-0"><div class="absolute bottom-full left-1/2 transform -translate-x-1/2 w-0 h-0 border-black border-4 border-t-0" style="border-left-color: transparent; border-right-color: transparent; "></div> Copied</div></button></div> <pre class=""><!-- HTML_TAG_START --><span class="hljs-keyword">from</span> diffusers import StableDiffusionControlNetPipeline, ControlNetModel | |
| import torch | |
| <span class="hljs-keyword">from</span> diffusers.utils import load_image | |
| controlnet_model_path = <span class="hljs-string">"lllyasviel/control_v11f1p_sd15_depth"</span> | |
| controlnet = ControlNetModel.from_pretrained(controlnet_model_path, <span class="hljs-attribute">torch_dtype</span>=torch.float16) | |
| pipeline = StableDiffusionControlNetPipeline.from_pretrained( | |
| <span class="hljs-string">"runwayml/stable-diffusion-v1-5"</span>, <span class="hljs-attribute">controlnet</span>=controlnet, <span class="hljs-attribute">torch_dtype</span>=torch.float16) | |
| pipeline.<span class="hljs-keyword">to</span>(<span class="hljs-string">"cuda"</span>) | |
| image = load_image(<span class="hljs-string">"https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/statue.png"</span>) | |
| depth_map = load_image(<span class="hljs-string">"https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/depth.png"</span>) | |
| pipeline.load_ip_adapter(<span class="hljs-string">"h94/IP-Adapter"</span>, <span class="hljs-attribute">subfolder</span>=<span class="hljs-string">"models"</span>, <span class="hljs-attribute">weight_name</span>=<span class="hljs-string">"ip-adapter_sd15.bin"</span>) | |
| generator = torch.Generator(<span class="hljs-attribute">device</span>=<span class="hljs-string">"cpu"</span>).manual_seed(33) | |
| images = pipeline( | |
| <span class="hljs-attribute">prompt</span>=<span class="hljs-string">'best quality, high quality'</span>, | |
| <span class="hljs-attribute">image</span>=depth_map, | |
| <span class="hljs-attribute">ip_adapter_image</span>=image, | |
| <span class="hljs-attribute">negative_prompt</span>=<span class="hljs-string">"monochrome, lowres, bad anatomy, worst quality, low quality"</span>, | |
| <span class="hljs-attribute">num_inference_steps</span>=50, | |
| <span class="hljs-attribute">generator</span>=generator, | |
| ).images | |
| images[0]<!-- HTML_TAG_END --></pre></div> <div class="flex flex-row gap-4" data-svelte-h="svelte-1lb28c0"><div class="flex-1"><img class="rounded-xl" src="https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/statue.png"> <figcaption class="mt-2 text-center text-sm text-gray-500">input image</figcaption></div> <div class="flex-1"><img class="rounded-xl" src="https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/ipa-controlnet-out.png"> <figcaption class="mt-2 text-center text-sm text-gray-500">adapted image</figcaption></div></div> </div> <p></p> | |
| <script> | |
| { | |
| __sveltekit_j7lbip = { | |
| assets: "/docs/diffusers/v0.26.2/en", | |
| base: "/docs/diffusers/v0.26.2/en", | |
| env: {} | |
| }; | |
| const element = document.currentScript.parentElement; | |
| const data = [null,null]; | |
| Promise.all([ | |
| import("/docs/diffusers/v0.26.2/en/_app/immutable/entry/start.ca2a3496.js"), | |
| import("/docs/diffusers/v0.26.2/en/_app/immutable/entry/app.76ddbcd8.js") | |
| ]).then(([kit, app]) => { | |
| kit.start(app, element, { | |
| node_ids: [0, 175], | |
| data, | |
| form: null, | |
| error: null | |
| }); | |
| }); | |
| } | |
| </script> | |
Xet Storage Details
- Size:
- 101 kB
- Xet hash:
- f49e5f3c9a335a8b7c4b071454e5b99ff67844a6fc624fc89e2a44765c167b0d
·
Xet efficiently stores files, intelligently splitting them into unique chunks and accelerating uploads and downloads. More info.