How to use with the Diffusers library
pip install -U diffusers transformers accelerate
import torch
from diffusers import DiffusionPipeline

# switch to "mps" for Apple devices
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.bfloat16,
).to("cuda")
pipe.load_lora_weights("leonel4rd/Comicflux")

# The <lora:comic_SDXL_v1:0.5> tag in the original prompt is A1111-style syntax
# and is ignored by diffusers; set the LoRA strength via cross_attention_kwargs.
prompt = "(anime coloring:0.8), landscape, manga_art, iniphizi, oekaki, city, shibuya \\(tokyo\\)"
image = pipe(prompt, cross_attention_kwargs={"scale": 0.5}).images[0]

Comicflux

Prompt
<lora:comic_SDXL_v1:0.5>, (anime coloring:0.8), landscape, manga_art, iniphizi, oekaki, city, shibuya \(tokyo\),

Trigger words

You should use anime coloring, manga_art, iniphizi, and oekaki to trigger the image generation.
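As a quick sanity check before generating, a prompt can be assembled programmatically so that every trigger word is present. The sketch below is only an illustration (the list of trigger words is taken from this card; the `(anime coloring:0.8)` weighting syntax is A1111-style and is treated as literal text by plain diffusers).

```python
# Trigger words listed on this model card.
triggers = ["anime coloring", "manga_art", "iniphizi", "oekaki"]
scene = ["landscape", "city", "shibuya \\(tokyo\\)"]

# "(anime coloring:0.8)" is A1111-style weighting; plain diffusers does not
# parse it, so only use it with a front end that understands that syntax.
prompt = ", ".join(["(anime coloring:0.8)"] + triggers[1:] + scene)

# Verify every trigger word made it into the final prompt.
missing = [t for t in triggers if t not in prompt]
print(prompt)
print("missing triggers:", missing)
```

Running this confirms all four triggers appear in the assembled prompt (`missing` is empty).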

Download model

Weights for this model are available in Safetensors format.

Download them in the Files & versions tab.

