How to use from the Diffusers library
pip install -U diffusers transformers accelerate
import torch
from diffusers import DiffusionPipeline

# switch to "mps" for apple devices
pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda")
pipe.load_lora_weights("MarkBW/ShinYuna-Itzy")

prompt = "masterpiece, best quality, ultra-detailed, ultra high res, (photorealistic:1.4), raw photo, (realistic:0.2), 8k HDR, realistic lighting, 1girl, solo, looking at viewer, asymmetrical hair, (detailed oily skin), (detailed face), (simple gray background :1.2), (upper body:1.2), simple dress, sleeveless, off shoulder"
image = pipe(prompt).images[0]

ShinYuna-Itzy

Prompt
masterpiece, best quality, ultra-detailed, ultra high res, (photorealistic:1.4), raw photo, (realistic:0.2), 8k HDR, realistic lighting, 1girl, solo, looking at viewer, asymmetrical hair, (detailed oily skin), (detailed face), (simple gray background :1.2), (upper body:1.2), simple dress, sleeveless, off shoulder

Model description

This is a LoRA model of Not Itzy - Yuna. It should work well with any photorealistic model. You'll get the best results with a LoRA weight of 0.7-0.9, Steps: 20-50, Sampler: DPM++ 3M SDE Karras, and CFG scale: 4-8. By TissueAI.

Download model

Weights for this model are available in Safetensors format.

Download them in the Files & versions tab.
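For scripted use instead of the Files & versions tab, the weights can also be fetched with `huggingface_hub`; a minimal sketch, where the `.safetensors` filename below is a placeholder, not the repo's real filename (check the Files & versions tab for the actual name):

```python
from huggingface_hub import hf_hub_url, hf_hub_download

REPO_ID = "MarkBW/ShinYuna-Itzy"
FILENAME = "example.safetensors"  # placeholder: substitute the real filename

# Resolve the direct download URL for a file in the repo (no network needed).
url = hf_hub_url(REPO_ID, FILENAME)
print(url)

# To actually fetch the file into the local Hugging Face cache:
# local_path = hf_hub_download(REPO_ID, FILENAME)
```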

