# AnimeLoom
Open-source character LoRAs and tooling for face-consistent anime video generation.
AnimeLoom is a text-to-anime-video pipeline focused on the hardest part of long-form anime generation: keeping the same character's face the same across every shot.
This organization hosts the character LoRAs used by the AnimeLoom pipeline; the pipeline code itself lives on GitHub.
## Character LoRAs
| Character | Series | Repo |
|---|---|---|
| Sakura Haruno | Naruto | AnimeLoom/sakura-haruno |
| Denji | Chainsaw Man | AnimeLoom/denji |
| Yuki Nagato | The Melancholy of Haruhi Suzumiya | AnimeLoom/yuki-nagato |
Every LoRA ships two adapters in one repo:
- `sdxl/`: for Animagine XL 3.1, used by AnimeLoom's identity-keyframe stage
- `sd15/`: for DreamShaper 8, for lightweight AnimateDiff workflows
All adapters are LoRA rank 32, trained with PEFT.
## The pipeline
AnimeLoom is a Director-Orchestrated pipeline that turns a text story into a multi-shot anime video with consistent character identity.
```
text story
    ↓
Story Decomposer (Gemini → Claude)   → shot script
    ↓
SDXL + character LoRA + IP-Adapter   → identity keyframe per shot  (Phase 2)
    ↓
Wan2.2 I2V                           → motion driving clip         (Phase 3a)
    ↓
Wan2.2-Animate face-lock             → face from keyframe pasted
                                       onto motion clip            (Phase 3b)
    ↓
RIFE + Real-ESRGAN + GFPGAN          → temporal/spatial upscale,
                                       anime face restore          (Phase 4)
    ↓
Cross-dissolve assembly              → final 24 fps video
```
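The phase flow above can be sketched as a simple per-shot orchestration loop. Everything here is illustrative: the function names are stand-in stubs for the real AnimeLoom stages, not its actual API.

```python
# Illustrative stubs standing in for the real AnimeLoom stages; each returns
# a placeholder string so the control flow can be read end to end.
def decompose_story(story):     # Phase 1: Story Decomposer → shot script
    return [f"shot {i}: {story}" for i in range(1, 4)]

def render_keyframe(shot):      # Phase 2: SDXL + character LoRA + IP-Adapter
    return f"keyframe({shot})"

def drive_motion(keyframe):     # Phase 3a: Wan2.2 I2V motion driving clip
    return f"motion({keyframe})"

def face_lock(keyframe, clip):  # Phase 3b: Wan2.2-Animate face-lock
    return f"locked({keyframe}, {clip})"

def enhance(clip):              # Phase 4: RIFE + Real-ESRGAN + GFPGAN
    return f"enhanced({clip})"

def run_pipeline(story):
    clips = []
    for shot in decompose_story(story):
        kf = render_keyframe(shot)
        clip = drive_motion(kf)
        clips.append(enhance(face_lock(kf, clip)))
    return clips  # cross-dissolve assembly would join these into one video
```

The key structural point the sketch captures: the keyframe from Phase 2 is reused in Phase 3b, so the identity reference survives the motion stage.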
## Why face-lock matters
The "Animate" stage at Phase 3b is the moat. Per community testing, Wan2.2-Animate-14B achieves an 89% usable rate versus AnimateDiff's 71% and commercial APIs' ~78%. Combined with character-specific LoRAs trained for the SDXL identity stage, AnimeLoom can keep a character recognizably "themselves" across every shot of a multi-shot scene, the part most open-source pipelines struggle with.
## Quick start
Train your own character LoRA, or use one from this org:
```python
import torch
from diffusers import StableDiffusionXLPipeline
from peft import PeftModel

pipe = StableDiffusionXLPipeline.from_pretrained(
    "cagliostrolab/animagine-xl-3.1",
    torch_dtype=torch.float16,
).to("cuda")

# Load any AnimeLoom character LoRA
pipe.unet = PeftModel.from_pretrained(
    pipe.unet,
    "AnimeLoom/sakura-haruno",  # or AnimeLoom/denji, AnimeLoom/yuki-nagato
    subfolder="sdxl",
)

img = pipe(
    "1girl, sakura haruno, pink hair, cherry blossom forest, anime, masterpiece",
    num_inference_steps=28,
    guidance_scale=6.5,
    height=1024,
    width=1024,
).images[0]
img.save("out.png")
```
For the full text-to-video pipeline, see the AnimeLoom RunPod notebook.
## License
All character LoRAs in this org are released under OpenRAIL++, inheriting from their base models (Animagine XL 3.1 / DreamShaper 8). The AnimeLoom pipeline code on GitHub is MIT licensed.
## Contributing
Want to add a character? The training script is in `agents/character/trainer.py`.
Recommended dataset: 15-30 clean anime reference images of the character at 512 or 1024 resolution.
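Before training, it can help to sanity-check a candidate folder against that recommendation. The helper below is hypothetical (not part of the AnimeLoom repo) and only checks the image count, not resolution or quality:

```python
from pathlib import Path

IMAGE_EXTS = {".png", ".jpg", ".jpeg", ".webp"}

def check_dataset(folder):
    """Return (image_count, warnings) for the reference images in `folder`.

    Hypothetical pre-training check: flags folders outside the recommended
    15-30 image range. Resolution and image quality are not inspected.
    """
    images = [p for p in Path(folder).iterdir()
              if p.suffix.lower() in IMAGE_EXTS]
    warnings = []
    if not 15 <= len(images) <= 30:
        warnings.append(f"expected 15-30 images, found {len(images)}")
    return len(images), warnings
```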
Issues, PRs, and new character requests: GitHub Issues.
## Links
- π GitHub: JoelJohnsonThomas/AnimeLoom
- π¦ Pipeline notebook: AnimeLoom_RunPod.ipynb
- π€ All models: huggingface.co/AnimeLoom