
AnimeLoom

Open-source character LoRAs and tooling for face-consistent anime video generation.

AnimeLoom is a text-to-anime-video pipeline focused on the hardest part of long-form anime generation: keeping the same character's face the same across every shot.

This organization hosts the character LoRAs used by the AnimeLoom pipeline on GitHub.


Character LoRAs

Character        Series                               Repo
Sakura Haruno    Naruto                               AnimeLoom/sakura-haruno
Denji            Chainsaw Man                         AnimeLoom/denji
Yuki Nagato      The Melancholy of Haruhi Suzumiya    AnimeLoom/yuki-nagato

Every LoRA repo ships two adapters: an SDXL adapter (subfolder sdxl, paired with Animagine XL 3.1) and an SD 1.5 adapter (paired with DreamShaper 8).

All adapters are LoRA rank 32, trained with PEFT.
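A rank-32 PEFT setup like the one described corresponds roughly to the following config. This is a sketch, not the repo's actual training config: only r=32 is stated on the card; lora_alpha and the target_modules list (the usual SDXL UNet attention projections) are assumptions.

```python
from peft import LoraConfig

# Sketch of a rank-32 LoRA config. Only r=32 is stated by the card;
# alpha and target modules are assumed typical values for an SDXL UNet.
lora_config = LoraConfig(
    r=32,                 # rank stated on the card
    lora_alpha=32,        # assumption: alpha equal to rank
    lora_dropout=0.0,
    target_modules=["to_q", "to_k", "to_v", "to_out.0"],
)
```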


The pipeline

AnimeLoom is a Director-Orchestrated pipeline that turns a text story into a multi-shot anime video with consistent character identity.

text story
   ↓
Story Decomposer (Gemini → Claude)        →  shot script
   ↓
SDXL + character LoRA + IP-Adapter        →  identity keyframe per shot   (Phase 2)
   ↓
Wan2.2 I2V                                →  motion driving clip          (Phase 3a)
   ↓
Wan2.2-Animate face-lock                  →  face from keyframe pasted
                                              onto motion clip            (Phase 3b)
   ↓
RIFE + Real-ESRGAN + GFPGAN               →  temporal/spatial upscale,
                                              anime face restore           (Phase 4)
   ↓
Cross-dissolve assembly                   →  final 24fps video
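The stage flow above can be sketched as a plain orchestration loop. Every function and field name below is hypothetical and chosen for illustration only; the real model calls live in the GitHub pipeline, and the stubs here just tag the data so the phase ordering is visible.

```python
from dataclasses import dataclass

# Stand-ins for the real model stages (hypothetical names; the actual
# implementations are in the AnimeLoom GitHub repo).
def render_keyframe(prompt):    return f"keyframe[{prompt}]"        # Phase 2
def image_to_video(keyframe):   return f"i2v[{keyframe}]"           # Phase 3a
def face_lock(clip, keyframe):  return f"lock[{clip}|{keyframe}]"   # Phase 3b
def enhance(clip):              return f"enhance[{clip}]"           # Phase 4
def cross_dissolve(clips):      return " ~ ".join(clips)            # assembly

@dataclass
class Shot:
    prompt: str

def run_pipeline(story: str) -> str:
    # Phase 1: Story Decomposer -> one shot per sentence
    # (the real version uses Gemini -> Claude to write the shot script).
    shots = [Shot(prompt=p.strip()) for p in story.split(".") if p.strip()]
    final = []
    for shot in shots:
        kf = render_keyframe(shot.prompt)   # identity keyframe per shot
        clip = image_to_video(kf)           # motion driving clip
        clip = face_lock(clip, kf)          # paste keyframe face onto motion
        clip = enhance(clip)                # temporal/spatial upscale + restore
        final.append(clip)
    return cross_dissolve(final)            # final 24fps video
```

Note that the keyframe is used twice: once to seed the motion clip, and again in Phase 3b as the reference face pasted onto that clip.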

Why face-lock matters

The "Animate" stage at Phase 3b is the moat. Per community testing, Wan2.2-Animate-14B achieves an 89% usable rate versus AnimateDiff's 71% and commercial APIs' ~78%. Combined with character-specific LoRAs trained for the SDXL identity stage, AnimeLoom can keep a character recognizably "themselves" across every shot of a multi-shot scene — the part most open-source pipelines struggle with.


Quick start

Train your own character LoRA, or use one from this org:

import torch
from diffusers import StableDiffusionXLPipeline
from peft import PeftModel

pipe = StableDiffusionXLPipeline.from_pretrained(
    "cagliostrolab/animagine-xl-3.1",
    torch_dtype=torch.float16,
).to("cuda")
# Load any AnimeLoom character LoRA
pipe.unet = PeftModel.from_pretrained(
    pipe.unet,
    "AnimeLoom/sakura-haruno",   # or AnimeLoom/denji, AnimeLoom/yuki-nagato
    subfolder="sdxl",
)
img = pipe(
    "1girl, sakura haruno, pink hair, cherry blossom forest, anime, masterpiece",
    num_inference_steps=28,
    guidance_scale=6.5,
    height=1024, width=1024,
).images[0]
img.save("out.png")

For the full text-to-video pipeline, see the AnimeLoom RunPod notebook.


License

All character LoRAs in this org are released under OpenRAIL++, inherited from their base models (Animagine XL 3.1 / DreamShaper 8). The AnimeLoom pipeline code on GitHub is Apache-2.0 licensed.


Contributing

Want to add a character? The training script is in agents/character/trainer.py. Recommended dataset: 15-30 clean anime reference images of the character at 512×512 or 1024×1024 resolution.
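As a quick sanity check before training, the dataset recommendation above can be expressed as a small validator. This helper is hypothetical (not part of the repo) and only checks image count and resolution:

```python
def validate_dataset(image_sizes, min_images=15, max_images=30,
                     allowed=((512, 512), (1024, 1024))):
    """Check a candidate LoRA dataset: 15-30 images, 512 or 1024 square.

    image_sizes: list of (width, height) tuples, e.g. from PIL's Image.size.
    Returns a list of problem strings; an empty list means the set looks OK.
    """
    problems = []
    n = len(image_sizes)
    if not (min_images <= n <= max_images):
        problems.append(f"expected {min_images}-{max_images} images, got {n}")
    for i, size in enumerate(image_sizes):
        if tuple(size) not in allowed:
            problems.append(f"image {i}: {size[0]}x{size[1]} is not 512/1024 square")
    return problems
```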

Issues, PRs, and new character requests: GitHub Issues.

