AnimeLoom – Denji LoRA

Character LoRA for Denji (Chainsaw Man), trained as part of the AnimeLoom anime character-consistency pipeline.

Two adapters are provided in this repo:

Folder   Base model                       Rank   Steps   Images   Resolution
sdxl/    cagliostrolab/animagine-xl-3.1   32     2200    15       1024
sd15/    Lykon/dreamshaper-8              32     800     15       512

The SDXL adapter is the primary one used by AnimeLoom's identity-keyframe stage (SDXL + character LoRA + IP-Adapter). The SD 1.5 adapter is provided for compatibility with lighter inference pipelines (e.g. AnimateDiff workflows).

Trigger words

1boy, denji, chainsaw man, blonde hair, yellow eyes

Add booru-style descriptors as needed (e.g. black suit, chainsaw, shark teeth, smiling).
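For scripted generation it can help to assemble the trigger words and descriptors programmatically. Below is a small illustrative sketch; the helper name and the default quality tags are assumptions, not part of this repo:

```python
def build_prompt(extra_tags=None, quality_tags=("masterpiece", "best quality")):
    """Assemble a booru-style prompt from the Denji trigger words.

    extra_tags: optional descriptors, e.g. ("black suit", "chainsaw", "smiling").
    """
    trigger = ["1boy", "denji", "chainsaw man", "blonde hair", "yellow eyes"]
    tags = trigger + list(extra_tags or []) + list(quality_tags)
    return ", ".join(tags)

print(build_prompt(("black suit", "shark teeth", "smiling")))
# 1boy, denji, chainsaw man, blonde hair, yellow eyes, black suit, shark teeth, smiling, masterpiece, best quality
```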

Usage – SDXL with PEFT / diffusers

import torch
from diffusers import StableDiffusionXLPipeline
from peft import PeftModel

pipe = StableDiffusionXLPipeline.from_pretrained(
    "cagliostrolab/animagine-xl-3.1",
    torch_dtype=torch.float16,
).to("cuda")

# AnimeLoom's training output is a PEFT adapter; load it via PeftModel
pipe.unet = PeftModel.from_pretrained(
    pipe.unet,
    "AnimeLoom/denji",
    subfolder="sdxl",
)

img = pipe(
    "1boy, denji, chainsaw man, blonde hair, yellow eyes, anime, "
    "masterpiece, best quality, absurdres",
    negative_prompt="blurry, low quality, deformed, extra fingers, 3d render",
    num_inference_steps=28,
    guidance_scale=6.5,
    height=1024, width=1024,
).images[0]
img.save("denji.png")
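For intuition on what the rank-32 adapter stores: LoRA leaves the base weight W frozen and adds a low-rank update, W + scale * (B @ A), where A is r x k and B is d x r with r = 32. A toy NumPy sketch (the shapes and scale here are for illustration only, not the real UNet dimensions):

```python
import numpy as np

d, k, r = 64, 64, 32          # output dim, input dim, LoRA rank (rank 32, as in this repo)
scale = 1.0                   # LoRA scale applied at inference

rng = np.random.default_rng(0)
W = rng.standard_normal((d, k))   # frozen base weight
A = rng.standard_normal((r, k))   # trainable down-projection
B = np.zeros((d, r))              # trainable up-projection (zero-initialized, so the
                                  # adapter is a no-op before training)

W_adapted = W + scale * (B @ A)   # effective weight at inference
assert np.allclose(W_adapted, W)  # holds only while B is still zero
```

The same `scale` factor is what the recommended LoRA weights below adjust at inference time.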

Usage – SD 1.5 with diffusers

import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "Lykon/dreamshaper-8",
    torch_dtype=torch.float16,
).to("cuda")

pipe.load_lora_weights(
    "AnimeLoom/denji",
    subfolder="sd15",
    weight_name="pytorch_lora_weights.safetensors",
)

img = pipe(
    "1boy, denji, chainsaw man, blonde hair, anime, masterpiece",
    num_inference_steps=28,
    guidance_scale=7.0,
).images[0]
img.save("denji_sd15.png")

Recommended weights & prompting

  • LoRA scale: 0.85 - 1.15 (SDXL), 0.6 - 0.8 (SD 1.5)
  • Pair with anime base models: Animagine XL 3.1, Counterfeit-V3.0, AnythingV5
  • For face consistency in video, combine with IP-Adapter SDXL and Wan2.2-Animate face-lock; see the AnimeLoom pipeline.
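Because the recommended scale range differs per adapter, a wrapper script might clamp user-supplied scales before inference. A hypothetical helper (names and structure are illustrative, not part of this repo):

```python
# Recommended LoRA scale ranges from this model card, keyed by adapter subfolder.
RECOMMENDED_SCALE = {
    "sdxl": (0.85, 1.15),
    "sd15": (0.60, 0.80),
}

def clamp_lora_scale(adapter: str, scale: float) -> float:
    """Clamp a requested LoRA scale into the recommended range for an adapter."""
    lo, hi = RECOMMENDED_SCALE[adapter]
    return min(max(scale, lo), hi)

print(clamp_lora_scale("sdxl", 1.4))   # 1.15
print(clamp_lora_scale("sd15", 0.7))   # 0.7
```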

AnimeLoom video pipeline integration

This LoRA is built to feed AnimeLoom's text-to-anime-video pipeline:

SDXL + this LoRA + IP-Adapter   →  identity keyframe (Phase 2)
        ↓
Wan2.2 I2V                      →  motion driving clip (Phase 3a)
        ↓
Wan2.2-Animate face-lock        →  face from keyframe pasted onto motion (Phase 3b)
        ↓
RIFE + Real-ESRGAN + GFPGAN     →  temporal/spatial upscale + face restore (Phase 4)
        ↓
Final 24 fps anime video with consistent character identity across shots.
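The phase diagram above can be read as a linear staged pipeline. A schematic sketch of the data flow (the stage functions are placeholders, not the real AnimeLoom code):

```python
def identity_keyframe(prompt):      # Phase 2: SDXL + this LoRA + IP-Adapter
    return f"keyframe({prompt})"

def motion_clip(keyframe):          # Phase 3a: Wan2.2 I2V motion driving clip
    return f"motion({keyframe})"

def face_lock(clip, keyframe):      # Phase 3b: face from keyframe pasted onto motion
    return f"facelock({clip}, {keyframe})"

def upscale_restore(clip):          # Phase 4: RIFE + Real-ESRGAN + GFPGAN
    return f"final({clip})"

def render(prompt):
    kf = identity_keyframe(prompt)          # keyframe is reused in Phase 3b,
    clip = motion_clip(kf)                  # which is why identity consistency
    return upscale_restore(face_lock(clip, kf))  # survives across shots

print(render("1boy, denji"))
```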

Limitations

  • Anime-only. Photoreal prompts will degrade quality.
  • InsightFace face-swap does not work on these outputs (it is photoreal-only). For identity rescue use IP-Adapter-FaceID-SDXL or CharacterFaceSwap.
  • Trained on 15 images; expect occasional drift on unusual angles, complex outfits, or transformation/hybrid forms (chainsaw devil hybrid).
  • The SD 1.5 adapter is intentionally lower-fidelity (800 steps at 512 res) for use as a lightweight fallback.

License

Released under OpenRAIL++ (openrail++), inheriting from the base model Animagine XL 3.1.

Related models

Part of the AnimeLoom character collection:

Citation

If you use this LoRA in research or production, please credit AnimeLoom and the base model authors (Cagliostro Lab for Animagine XL 3.1, Lykon for DreamShaper 8).


Trained with PEFT as part of the AnimeLoom project.
