---
pipeline_tag: text-to-image
library_name: diffusers
license: mit
base_model: runwayml/stable-diffusion-v1-5
widget:
  - text: sms landscape, dog by the water, soft sky, anime background art
    output:
      url: images/example_dog_animate.png
  - text: >-
      sms landscape, rainy evening street, walking home, reflective wet road,
      cinematic anime background
    output:
      url: images/example_rainy_walk_animate.png
tags:
  - stable-diffusion
  - stable-diffusion-diffusers
  - lora
  - diffusers
  - image-generation
  - anime
  - landscape
---

# animate-lora-sd1.5

LoRA adapter for cinematic anime-style landscape generation on top of Stable Diffusion 1.5.

## Model summary

- Base model: `runwayml/stable-diffusion-v1-5`
- Trigger words: `landscape`, `sms landscape`
- Adapter file: `animate_v1-000005.safetensors`
- Intended style: cinematic anime-style scenery, sky-rich composition, stylized background art

## Intended use

This adapter is intended for stylized landscape generation, scenic diary illustrations, and anime-inspired background imagery.

It works best as a style adapter layered on top of SD1.5 rather than as a broad general-purpose object model.

## Related project

This model repo is maintained by the same author as the companion app, but published separately so the LoRA release and the application code can be versioned independently.

## Preserved training evidence

Preserved local artifacts suggest the original training run used:

- resolution: 512x512
- network_dim: 25
- network_alpha: 25
- train_batch_size: 16
- text_encoder_lr: 5e-05
- unet_lr: 0.0001
- optimizer: AdamW
- max_train_steps: 93750
- lr_warmup_steps: 9375
- xformers enabled
- dataset tag frequencies preserved in `metadata/animation_training_log.txt`

Preserved configs and logs indicate a kohya_ss-based LoRA training workflow.
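
The preserved hyperparameters above can be reassembled into an approximate kohya_ss (sd-scripts) invocation. This is a hedged reconstruction for reference only, not the preserved command: the dataset directory, output name, and any caption-related arguments are placeholders, since those values were not preserved.

```shell
# Approximate reconstruction of the preserved kohya_ss (sd-scripts) LoRA run.
# Dataset path and output name are placeholders, not preserved values.
accelerate launch train_network.py \
  --pretrained_model_name_or_path="runwayml/stable-diffusion-v1-5" \
  --train_data_dir="./dataset" \
  --resolution="512,512" \
  --network_module=networks.lora \
  --network_dim=25 \
  --network_alpha=25 \
  --train_batch_size=16 \
  --text_encoder_lr=5e-05 \
  --unet_lr=1e-04 \
  --optimizer_type=AdamW \
  --max_train_steps=93750 \
  --lr_warmup_steps=9375 \
  --xformers \
  --save_model_as=safetensors \
  --output_name=animate_v1
```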

See `metadata/painting_v1_20231211-043052.json`, `metadata/painting_script_caption.txt`, and `metadata/animation_training_log.txt` for the preserved config snapshot and training notes.
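
kohya_ss-trained LoRA files usually embed their training settings as `ss_*` keys in the safetensors header, which can be cross-checked against the preserved logs. A minimal sketch, assuming the adapter file has been downloaded locally and the `safetensors` package is installed:

```python
from safetensors import safe_open

# kohya_ss embeds training settings (e.g. ss_network_dim, ss_network_alpha)
# as string metadata in the safetensors file header.
with safe_open("animate_v1-000005.safetensors", framework="pt", device="cpu") as f:
    meta = f.metadata() or {}

# List the kohya_ss training-provenance entries, if present.
for key in sorted(k for k in meta if k.startswith("ss_")):
    print(key, meta[key])
```

If the header matches the preserved config (e.g. `ss_network_dim` of 25), that strengthens the mapping between this checkpoint and the preserved artifacts.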

## Dataset reference

This link is included as a public dataset reference related to the preserved landscape/anime-style training artifacts. The exact one-to-one mapping to the released checkpoint is not fully guaranteed.

## Known unknowns

- The exact mapping between the preserved local painting_v1 artifacts and this public checkpoint is not fully guaranteed.
- Exact seed and exact dataset snapshot were not preserved.
- This repository is an inference-oriented adapter release, not a full archival dump of the original training environment.

## Diffusers usage

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the SD1.5 base model in half precision.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

# Attach the LoRA adapter from this repository.
pipe.load_lora_weights(
    "J-YOON/animate-lora-sd1.5",
    weight_name="animate_v1-000005.safetensors",
)

# Include a trigger word ("sms landscape") in the prompt.
prompt = "sms landscape, evening sky over a quiet city, cinematic diary illustration"
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.0).images[0]
image.save("animate_example.png")
```

## Limitations

- The adapter is specialized for landscape-oriented and background-like imagery.
- Non-landscape prompts may be biased back toward scenic composition or stylized atmosphere.
- For broader object coverage, reduce adapter strength or fall back to the base model when needed.
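
Adapter strength can be reduced at inference time with the diffusers cross-attention scale. A minimal sketch, where 0.7 is an arbitrary example value (not a recommended setting from the original training):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")
pipe.load_lora_weights(
    "J-YOON/animate-lora-sd1.5",
    weight_name="animate_v1-000005.safetensors",
)

# scale < 1.0 blends the LoRA with the base model; 0.0 disables it entirely.
image = pipe(
    "sms landscape, quiet harbor at dusk, anime background art",
    num_inference_steps=30,
    guidance_scale=7.0,
    cross_attention_kwargs={"scale": 0.7},  # 0.7 is an arbitrary example value
).images[0]
```

Lower scales trade style fidelity for broader object coverage, which can help with non-landscape prompts.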

## Gallery

Example images (generated outputs alongside their reference photos):

- Generated: dog example
- Generated: diary app img2img example
- Reference: dog photo
- Reference: rainy walk photo