How to use from the Diffusers library

```shell
pip install -U diffusers transformers accelerate
```
```python
import torch
from diffusers import DiffusionPipeline

# Switch device_map to "mps" for Apple-silicon devices.
pipe = DiffusionPipeline.from_pretrained(
    "Mr-J-369/RealHotSpice-SD1.5-qnn2.28",
    dtype=torch.bfloat16,
    device_map="cuda",
)
pipe.load_lora_weights("agmjd/Tololo")

prompt = "tololo, 1girl, solo, long hair,sitting,from below,cowboy shot,shoe locker,ankle boots, looking at viewer, <lora:tololov2-pynoise-000008:1>"
image = pipe(prompt).images[0]
```

Original model page: https://civitai.com/models/117618/girls-frontline-tololo-ak-alfa-with-multires-noise-version

Prompt

tololo, 1girl, solo, long hair,sitting,from below,cowboy shot,shoe locker,ankle boots, looking at viewer, <lora:tololov2-pynoise-000008:1>
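Note that the `<lora:tololov2-pynoise-000008:1>` part of the prompt is Automatic1111-style syntax, which diffusers does not interpret. A minimal sketch for handling it, assuming tags follow the `<lora:NAME:WEIGHT>` convention (the `split_lora_tags` helper is hypothetical, not part of diffusers):

```python
import re

# Matches Automatic1111-style tags of the form "<lora:NAME:WEIGHT>".
LORA_TAG = re.compile(r"<lora:(?P<name>[^:>]+):(?P<weight>[\d.]+)>")

def split_lora_tags(prompt: str):
    """Return (clean_prompt, [(lora_name, weight), ...]).

    The cleaned prompt can be passed to the pipeline, while the extracted
    weight can be applied separately, e.g. via pipe.set_adapters(...).
    """
    tags = [(m["name"], float(m["weight"])) for m in LORA_TAG.finditer(prompt)]
    clean = LORA_TAG.sub("", prompt)
    # Collapse leftover whitespace and trailing commas left by the removal.
    clean = re.sub(r"\s{2,}", " ", clean).strip().strip(",").strip()
    return clean, tags

clean, tags = split_lora_tags(
    "tololo, 1girl, solo, <lora:tololov2-pynoise-000008:1>"
)
# clean == "tololo, 1girl, solo"
# tags  == [("tololov2-pynoise-000008", 1.0)]
```

Since `pipe.load_lora_weights("agmjd/Tololo")` already loads the adapter, passing the cleaned prompt avoids the raw tag text influencing the text encoder.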

Download model

Download the model files from the Files & versions tab.

