AgeBooth LoRA Models

Two LoRA adapters for age transformation with Stable Diffusion XL.

Files

  • young_lora.safetensors: Young age group (10-20 years)
  • old_lora.safetensors: Old age group (70-80 years)

Training Details

  • Base Model: SDXL 1.0
  • Method: DreamBooth LoRA
  • LoRA Rank: 4
  • Resolution: 512x512
  • Steps: 200 per LoRA
  • Precision: FP16 mixed precision
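
The card does not include the actual training command. Assuming the adapters were produced with diffusers' reference DreamBooth LoRA script for SDXL (`train_dreambooth_lora_sdxl.py`; the data paths, prompt, and output directory below are illustrative assumptions), the settings above map to flags roughly like this:

```shell
# Hypothetical invocation for the young adapter; repeat with the
# old-age subset and an "elderly person" prompt for old_lora.safetensors.
accelerate launch train_dreambooth_lora_sdxl.py \
  --pretrained_model_name_or_path="stabilityai/stable-diffusion-xl-base-1.0" \
  --instance_data_dir="data/young" \
  --instance_prompt="photo of a young person" \
  --resolution=512 \
  --rank=4 \
  --max_train_steps=200 \
  --mixed_precision="fp16" \
  --output_dir="young_lora"
```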

Usage

from diffusers import StableDiffusionXLImg2ImgPipeline
import torch

# Load base model
pipe = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16
).to("cuda")

# Load the young LoRA and run img2img (input_face is a PIL image of the source face)
pipe.load_lora_weights("ShubhamBaghel307/agebooth-loras", weight_name="young_lora.safetensors")
young_image = pipe(prompt="young person", image=input_face).images[0]

# Unload the young adapter first, otherwise the two LoRAs stack
pipe.unload_lora_weights()

# Load old LoRA
pipe.load_lora_weights("ShubhamBaghel307/agebooth-loras", weight_name="old_lora.safetensors")
old_image = pipe(prompt="elderly person", image=input_face).images[0]

Linear Interpolation

For intermediate ages, blend the LoRAs:

# Load both LoRA state dicts (safetensors files cannot be read with torch.load)
from safetensors.torch import load_file

young_state = load_file("young_lora.safetensors")
old_state = load_file("old_lora.safetensors")

# Interpolate: alpha=1.0 is fully young, alpha=0.0 fully old, 0.5 approximates middle age
alpha = 0.5
mixed_state = {
    k: alpha * young_state[k] + (1 - alpha) * old_state[k]
    for k in young_state.keys()
}

# diffusers accepts a state dict here, so the blend can be loaded directly
pipe.load_lora_weights(mixed_state)
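
As a sanity check on the formula: alpha=1 should return the young weights unchanged and alpha=0 the old ones. A minimal sketch with plain floats standing in for the LoRA tensors (the arithmetic is identical for torch tensors):

```python
def blend_lora(young_state, old_state, alpha):
    """Linearly interpolate two state dicts: alpha=1 -> young, alpha=0 -> old."""
    return {k: alpha * young_state[k] + (1 - alpha) * old_state[k] for k in young_state}

# Toy stand-ins for the real LoRA weight entries (key name is illustrative)
young = {"unet.lora_A.weight": 1.0}
old = {"unet.lora_A.weight": 0.0}

print(blend_lora(young, old, 0.5))  # {'unet.lora_A.weight': 0.5}
```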

Dataset

Both adapters were trained on age-filtered subsets of the IMDB-WIKI dataset:

  • Young: 25 images (ages 10-20)
  • Old: 25 images (ages 70-80)

Performance

  • Inference Time: ~4-5 sec/step on RTX 4050
  • VRAM Usage: ~5.5GB
  • Quality: Best with 50+ inference steps

Citation

@misc{agebooth2025,
  title={AgeBooth: Identity-Preserved Age Transformation},
  author={Baghel, Shubham},
  year={2025}
}