How to use from the Diffusers library
pip install -U diffusers transformers accelerate
import torch
from diffusers import DiffusionPipeline

# switch "cuda" to "mps" for Apple silicon devices
pipe = DiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16).to("cuda")
pipe.load_lora_weights("mcvertix/dreembooth_output")

prompt = "penvink laying and standing on the stony ground, with arctic landscape in the background"
image = pipe(prompt).images[0]

LoRA DreamBooth - mcvertix/dreembooth_output

These are LoRA adaptation weights for CompVis/stable-diffusion-v1-4, trained with DreamBooth on the instance prompt "penvink laying and standing on the stony ground, with arctic landscape in the background". Example images are shown below.

[Example images: img_0, img_1]

LoRA for the text encoder was enabled: True.

