import torch
from diffusers import DiffusionPipeline
# switch "cuda" to "mps" on Apple devices
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("Chan-Y/reddish")
prompt = "in the style of reddish"
image = pipe(prompt).images[0]



Download reddish.safetensors here 💾. Place it in your models/Lora folder and add <lora:reddish:1> to your prompt. On ComfyUI, just load it as a regular LoRA.

Use it with the 🧨 diffusers library:

from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusion-xl-base-1.0', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Chan-Y/reddish', weight_name='pytorch_lora_weights.safetensors')
image = pipeline('in the style of reddish').images[0]
For more details, including weighting, merging, and fusing LoRAs, check the diffusers documentation on loading LoRAs.
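For instance, the LoRA's influence can be dialed down with a scale value at inference time. A minimal sketch, assuming the diffusers `cross_attention_kwargs` scale argument; the 0.7 value and the `generate` helper are illustrative, not from the model card:

```python
lora_scale = 0.7  # 1.0 = full LoRA strength; 0.7 is an arbitrary example


def generate(prompt: str, scale: float = lora_scale):
    """Run SDXL with the reddish LoRA at a reduced strength.

    Defined but not called here: it needs a GPU and downloads the
    base-model weights.
    """
    import torch
    from diffusers import AutoPipelineForText2Image

    pipe = AutoPipelineForText2Image.from_pretrained(
        "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
    ).to("cuda")
    pipe.load_lora_weights(
        "Chan-Y/reddish", weight_name="pytorch_lora_weights.safetensors"
    )
    # The scale is applied to the LoRA layers at inference time
    return pipe(prompt, cross_attention_kwargs={"scale": scale}).images[0]
```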
The Reddish model excels at producing images in its characteristic style. Use the phrase in the style of reddish in your prompt to trigger it.
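A minimal sketch of building a prompt around the trigger phrase (the subject text and the `build_prompt` helper are arbitrary examples, not part of the model card):

```python
TRIGGER = "in the style of reddish"


def build_prompt(subject: str) -> str:
    """Append the LoRA's trigger phrase to a subject description."""
    return f"{subject}, {TRIGGER}"


prompt = build_prompt("a quiet harbor at dusk")
# pass `prompt` to the pipeline call shown above
```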
The weights were trained using 🧨 diffusers Advanced Dreambooth Training Script.
LoRA for the text encoder was enabled: True.
Pivotal tuning was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
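Since training used the madebyollin/sdxl-vae-fp16-fix VAE, you may want to load the same VAE at inference when running in half precision. A hedged sketch (the helper is defined but not called here, since it requires a GPU and downloads several GB of weights):

```python
def load_pipeline_with_fixed_vae():
    """Load SDXL with the fp16-safe VAE used during training."""
    import torch
    from diffusers import AutoencoderKL, DiffusionPipeline

    vae = AutoencoderKL.from_pretrained(
        "madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16
    )
    pipe = DiffusionPipeline.from_pretrained(
        "stabilityai/stable-diffusion-xl-base-1.0",
        vae=vae,
        torch_dtype=torch.float16,
    ).to("cuda")
    pipe.load_lora_weights("Chan-Y/reddish")
    return pipe
```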
Base model
stabilityai/stable-diffusion-xl-base-1.0