Use it with the Diffusers library
```bash
pip install -U diffusers transformers accelerate
```

```python
import torch
from diffusers import DiffusionPipeline

# Load the SDXL base model, then attach the LoRA weights.
# Switch "cuda" to "mps" for Apple devices.
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.bfloat16,
).to("cuda")
pipe.load_lora_weights("mkshing/lora-sdxl-3drendering")

prompt = "a dog in 3d rendering style"
image = pipe(prompt).images[0]
```

base_model: stable-diffusion-xl-base-1.0
instance_prompt: a woman of in 3d rendering style
license: openrail++

SDXL LoRA DreamBooth - mkshing/lora-sdxl-3drendering

Example images, generated with the prompt "a dog in 3d rendering style".

Model description

These are mkshing/lora-sdxl-3drendering LoRA adaptation weights for stable-diffusion-xl-base-1.0. The weights were trained using DreamBooth. LoRA training for the text encoder was not enabled, and no special VAE was used for training.

Trigger words

You should use `a woman of in 3d rendering style` to trigger the image generation.
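The card's own example prompts keep the "in 3d rendering style" suffix while swapping the subject. A minimal sketch of building prompts that way (no GPU needed; the subjects below are hypothetical, and `build_prompt` is a helper introduced here, not part of any library):

```python
# The style suffix comes from the card's example prompts.
STYLE_SUFFIX = "in 3d rendering style"

def build_prompt(subject: str) -> str:
    """Append the LoRA's style trigger to an arbitrary subject."""
    return f"{subject} {STYLE_SUFFIX}"

# Hypothetical subjects; the first reproduces the card's example prompt.
prompts = [build_prompt(s) for s in ("a dog", "a castle on a hill")]
```

Any of these strings can be passed as `prompt` to the pipeline shown above.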

Download model

Weights for this model are available in Safetensors format.

Download them in the Files & versions tab.
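If you want the file programmatically rather than through the web UI, `huggingface_hub` can resolve it for you. A sketch, assuming the adapter uses the default filename `pytorch_lora_weights.safetensors` produced by the diffusers DreamBooth LoRA scripts (check the Files & versions tab for the actual name):

```python
from huggingface_hub import hf_hub_url

# Build the direct download URL for the adapter file.
# The filename is an assumption (the diffusers training scripts' default);
# verify it against the repo's "Files & versions" tab.
url = hf_hub_url(
    "mkshing/lora-sdxl-3drendering",
    "pytorch_lora_weights.safetensors",
)
print(url)
```

To actually fetch the file, use `huggingface_hub.hf_hub_download(repo_id, filename)`; note that `pipe.load_lora_weights(...)` in the usage example downloads the weights for you.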
