---
tags:
  - stable-diffusion-xl
  - stable-diffusion-xl-diffusers
  - text-to-image
  - diffusers
  - lora
  - template:sd-lora
widget:
  - text: >-
      A <s0><s1> character pink green tardigrade floating in an empty
      curvilinear space
    output:
      url: image-0.png
  - text: >-
      A <s0><s1> character sleepy blue green tardigrade laying on the floor of
      an empty space with columns
    output:
      url: image-1.png
  - text: >-
      A <s0><s1> character a green pink tardigrade standing in front of a camera
      in an empty space with colonnade
    output:
      url: image-2.png
  - text: >-
      A <s0><s1> character a blue purple tardigrade walking in a curvilinear
      empty space
    output:
      url: image-3.png
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: A <s0><s1> character
license: openrail++
---

# SDXL LoRA DreamBooth - computational-mama/tardispace

<Gallery />

## Model description

These are computational-mama/tardispace LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.

## Download model

Use it with UIs such as AUTOMATIC1111, ComfyUI, SD.Next, and Invoke:

- **LoRA**: download `tardispace.safetensors` here 💾.
  - Place it in your `models/Lora` folder.
  - On AUTOMATIC1111, load the LoRA by adding `<lora:tardispace:1>` to your prompt. On ComfyUI, just load it as a regular LoRA.
- **Embeddings**: download `tardispace_emb.safetensors` here 💾.
  - Place it in your `embeddings` folder.
  - Use it by adding `tardispace_emb` to your prompt, for example `A tardispace_emb character` (you need both the LoRA and the embeddings, as they were trained together for this LoRA); a combined prompt is sketched below.
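
As a hypothetical illustration, a combined AUTOMATIC1111 prompt using both the embedding and the LoRA might look like the line below (the wording is borrowed from the sample prompts above, and the `:1` weight is adjustable):

```text
A tardispace_emb character pink green tardigrade floating in an empty curvilinear space <lora:tardispace:1>
```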

## Use it with the 🧨 diffusers library

```py
import torch
from diffusers import AutoPipelineForText2Image
from huggingface_hub import hf_hub_download
from safetensors.torch import load_file

# Load the SDXL base pipeline in half precision
pipeline = AutoPipelineForText2Image.from_pretrained(
    'stabilityai/stable-diffusion-xl-base-1.0', torch_dtype=torch.float16
).to('cuda')

# Load the LoRA adapter weights
pipeline.load_lora_weights('computational-mama/tardispace', weight_name='pytorch_lora_weights.safetensors')

# Download the pivotal-tuning embeddings and load <s0><s1> into both SDXL text encoders
embedding_path = hf_hub_download(repo_id='computational-mama/tardispace', filename='tardispace_emb.safetensors', repo_type='model')
state_dict = load_file(embedding_path)
pipeline.load_textual_inversion(state_dict["clip_l"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder, tokenizer=pipeline.tokenizer)
pipeline.load_textual_inversion(state_dict["clip_g"], token=["<s0>", "<s1>"], text_encoder=pipeline.text_encoder_2, tokenizer=pipeline.tokenizer_2)

image = pipeline('A <s0><s1> character').images[0]
```

For more details, including weighting, merging, and fusing LoRAs, check the documentation on loading LoRAs in diffusers.
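
The snippet below is a minimal sketch of weighting and fusing with the pipeline set up above; the `0.8` scale is an illustrative value, not a tuned recommendation:

```py
# Scale the LoRA's influence at inference time (1.0 = full strength)
image = pipeline(
    'A <s0><s1> character pink green tardigrade floating in an empty curvilinear space',
    cross_attention_kwargs={'scale': 0.8},
).images[0]

# Alternatively, fuse the LoRA into the base weights to speed up repeated generations
pipeline.fuse_lora(lora_scale=0.8)
```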

## Trigger words

To trigger image generation of the trained concept (or concepts), replace each concept identifier in your prompt with the newly inserted tokens:

to trigger concept `TOK` → use `<s0><s1>` in your prompt
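
For instance, reusing the `pipeline` object from the diffusers example above (the output filename is arbitrary):

```py
prompt = 'A <s0><s1> character blue purple tardigrade walking in a curvilinear empty space'
image = pipeline(prompt).images[0]
image.save('tardigrade.png')
```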

## Details

All files are listed in the repository's Files & versions tab.

The weights were trained using the 🧨 diffusers Advanced DreamBooth Training Script.

LoRA for the text encoder was enabled: False.

Pivotal tuning was enabled: True.

Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.