---
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
- template:sd-lora
widget:
- text: A cloesup rendering of of back right side from back right
  output:
    url: image-0.png
- text: A cloesup rendering of from bottom back
  output:
    url: image-1.png
- text: A cloesup rendering of of front right side from front left
  output:
    url: image-2.png
- text: A rendering of from 334° degree
  output:
    url: image-3.png
- text: A rendering of from 72° degree
  output:
    url: image-4.png
- text: A rendering of from 84° degree
  output:
    url: image-5.png
- text: A rendering of from 96° degree
  output:
    url: image-6.png
- text: A rendering of from 108° degree
  output:
    url: image-7.png
- text: A rendering of from 120° degree
  output:
    url: image-8.png
- text: A rendering of from 132° degree
  output:
    url: image-9.png
- text: A rendering of from 144° degree
  output:
    url: image-10.png
- text: A rendering of from 166° degree
  output:
    url: image-11.png
- text: A rendering of from 178° degree
  output:
    url: image-12.png
- text: A rendering of from 170° degree
  output:
    url: image-13.png
- text: ' A rendering of from 190° degree'
  output:
    url: image-14.png
- text: A rendering of of top from top
  output:
    url: image-15.png
- text: A rendering of from 346° degree
  output:
    url: image-16.png
- text: ' A rendering of from 202° degree'
  output:
    url: image-17.png
- text: A rendering of from 214° degree
  output:
    url: image-18.png
- text: A rendering of from 226° degree
  output:
    url: image-19.png
- text: A rendering of from 238° degree
  output:
    url: image-20.png
- text: A rendering of from 250° degree
  output:
    url: image-21.png
- text: A rendering of from 262° degree
  output:
    url: image-22.png
- text: A rendering of from 274° degree
  output:
    url: image-23.png
- text: A rendering of from 286° degree
  output:
    url: image-24.png
- text: A rendering of from 298° degree
  output:
    url: image-25.png
- text: A rendering of from 310° degree
  output:
    url: image-26.png
- text: A rendering of from 358° degree
  output:
    url: image-27.png
- text: A rendering of from 322° degree
  output:
    url: image-28.png
- text: A rendering of from 0° degree
  output:
    url: image-29.png
- text: A rendering of from 12° degree
  output:
    url: image-30.png
- text: A rendering of from 24° degree
  output:
    url: image-31.png
- text: A rendering of from 36° degree
  output:
    url: image-32.png
- text: A rendering of from 48° degree
  output:
    url: image-33.png
- text: A rendering of from 60° degree
  output:
    url: image-34.png
- text: A rendering of of the left side from renderd from diagonal top
  output:
    url: image-35.png
- text: A rendering of from bottom front right
  output:
    url: image-36.png
- text: A cloeseup photo of from front bottom left
  output:
    url: image-37.png
- text: A cloeseup photo of of the front from bottom right
  output:
    url: image-38.png
- text: A cloeseup photo of of the front from 0° front
  output:
    url: image-39.png
- text: A photo of from bottom left
  output:
    url: image-40.png
- text: A photo of from bottom left
  output:
    url: image-41.png
- text: A photo of from bottom right
  output:
    url: image-42.png
- text: 'A photo of of left side from diagonal top '
  output:
    url: image-43.png
- text: A photo of of right side straight from side
  output:
    url: image-44.png
- text: A photo of front of from top right
  output:
    url: image-45.png
- text: A photo of front of from top right
  output:
    url: image-46.png
- text: 'A photo of climbing on a rock from right front '
  output:
    url: image-47.png
- text: A photo of left back of in a garden on a stonepath
  output:
    url: image-48.png
- text: A photo of lifting his left front leg in the air
  output:
    url: image-49.png
- text: 'A photo of standing on a podest from right bottom front '
  output:
    url: image-50.png
- text: A photo of puting up his right front leg on a wooden surface
  output:
    url: image-51.png
- text: A photo of balancing on left legs on a wooden surface
  output:
    url: image-52.png
- text: A photo of giving his front legs into human hands
  output:
    url: image-53.png
- text: A photo of sitting in front of a man in a bedroom on a zebra carpet
  output:
    url: image-54.png
- text: A photo of lying on floor from top right
  output:
    url: image-55.png
- text: A photo of standing
  output:
    url: image-56.png
- text: A rendering of from 338° degree
  output:
    url: image-57.png
- text: A photo of standing on a rock from right
  output:
    url: image-58.png
- text: A photo of standing on on a plaza from front right
  output:
    url: image-59.png
- text: A photo of going down stairs
  output:
    url: image-60.png
- text: A photo of going up stairs
  output:
    url: image-61.png
- text: A photo of going up stairs from back left
  output:
    url: image-62.png
- text: A photo of going standing on a rock from bottom right
  output:
    url: image-63.png
- text: A photo of lifting his front left and back right leg
  output:
    url: image-64.png
- text: A wireframe of from front left
  output:
    url: image-65.png
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: A rendering or photo of UnitreeGo2
license: openrail++
---

# SDXL LoRA DreamBooth - nited/unitreego2

## Model description

### These are nited/unitreego2 LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.

## Download model

### Use it with UIs such as AUTOMATIC1111, ComfyUI, SD.Next, Invoke

- **LoRA**: download **[`unitreego2.safetensors` here 💾](/nited/unitreego2/blob/main/unitreego2.safetensors)**.
  - Place it in your `models/Lora` folder.
  - On AUTOMATIC1111, load the LoRA by adding `<lora:unitreego2:1>` to your prompt. On ComfyUI just [load it as a regular LoRA](https://comfyanonymous.github.io/ComfyUI_examples/lora/).
- *Embeddings*: download **[`unitreego2_emb.safetensors` here 💾](/nited/unitreego2/blob/main/unitreego2_emb.safetensors)**.
  - Place it in your `embeddings` folder.
  - Use it by adding `unitreego2_emb` to your prompt. For example, `A rendering or photo of unitreego2_emb UnitreeGo2` (you need both the LoRA and the embeddings, as they were trained together for this LoRA).
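If you prefer to fetch the two files from a script rather than through the browser, here is a minimal sketch using the `huggingface_hub` client (filenames as listed above; where you place the downloaded files depends on your UI installation):

```py
from huggingface_hub import hf_hub_download

# Fetch the LoRA and the embeddings from the Hub; the files land in the local HF cache.
# Copy or symlink them into your UI's `models/Lora` and `embeddings` folders afterwards.
lora_path = hf_hub_download(repo_id="nited/unitreego2", filename="unitreego2.safetensors")
emb_path = hf_hub_download(repo_id="nited/unitreego2", filename="unitreego2_emb.safetensors")
print(lora_path)
print(emb_path)
```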
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)

```py
from diffusers import AutoPipelineForText2Image
import torch
from huggingface_hub import hf_hub_download
from safetensors.torch import load_file

# Load the SDXL base pipeline and the LoRA weights
pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusion-xl-base-1.0', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('nited/unitreego2', weight_name='pytorch_lora_weights.safetensors')

# Load the pivotal-tuning embeddings into both SDXL text encoders
embedding_path = hf_hub_download(repo_id='nited/unitreego2', filename='unitreego2_emb.safetensors', repo_type="model")
state_dict = load_file(embedding_path)
pipeline.load_textual_inversion(state_dict["clip_l"], token=["<s0>", "<s1>", "<s2>", "<s3>", "<s4>", "<s5>", "<s6>", "<s7>", "<s8>", "<s9>", "<s10>", "<s11>", "<s12>", "<s13>", "<s14>", "<s15>", "<s16>", "<s17>", "<s18>", "<s19>"], text_encoder=pipeline.text_encoder, tokenizer=pipeline.tokenizer)
pipeline.load_textual_inversion(state_dict["clip_g"], token=["<s0>", "<s1>", "<s2>", "<s3>", "<s4>", "<s5>", "<s6>", "<s7>", "<s8>", "<s9>", "<s10>", "<s11>", "<s12>", "<s13>", "<s14>", "<s15>", "<s16>", "<s17>", "<s18>", "<s19>"], text_encoder=pipeline.text_encoder_2, tokenizer=pipeline.tokenizer_2)

image = pipeline('A rendering or photo of UnitreeGo2').images[0]
```

For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters).

## Trigger words

To trigger image generation of the trained concept (or concepts), replace each concept identifier in your prompt with the newly inserted tokens:

to trigger concept `TOK` → use the inserted tokens `<s0>`…`<s19>` in your prompt

## Details

All [Files & versions](/nited/unitreego2/tree/main).

The weights were trained using the [🧨 diffusers Advanced DreamBooth Training Script](https://github.com/huggingface/diffusers/blob/main/examples/advanced_diffusion_training/train_dreambooth_lora_sdxl_advanced.py).

LoRA for the text encoder was enabled: False.

Pivotal tuning was enabled: True.

Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
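Since training used the fp16-fix VAE, you can optionally load that same VAE at inference time when running the pipeline in float16. A minimal sketch of this optional swap (the diffusers example above works without it):

```py
from diffusers import AutoencoderKL, AutoPipelineForText2Image
import torch

# Load the fp16-safe SDXL VAE that was used during training and hand it to the pipeline.
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipeline = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", vae=vae, torch_dtype=torch.float16
).to("cuda")
# Then load the LoRA and embeddings exactly as in the diffusers example above.
```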