---
base_model: Tongyi-MAI/Z-Image-Turbo
tags:
  - lora
  - text-to-image
  - diffusion
  - z-image-turbo
license: other
---

# mia-urban — LoRA

A LoRA adapter trained for the concept/style "mia-urban".

## Trigger word

Use this token in your prompt:

- `mia-urban`

> **Note:** Some prompts in the training config use `[mia urban]` (with a space). If your workflow supports bracket-style tokens, try both `mia-urban` and `[mia urban]` and keep whichever works better.

## Base model

- `Tongyi-MAI/Z-Image-Turbo`

## Files

- `*.safetensors` — LoRA weights
- `config.yaml`, `job_config.json` — training configuration
- `log.txt` (optional) — training log

## How to use

### A) ComfyUI / AUTOMATIC1111

1. Put the `.safetensors` file into your LoRA folder.
2. Prompt example:
   - `mia-urban, clean studio portrait, soft side lighting, sharp focus`

(Adjust the LoRA strength to taste; values in the range 0.6–1.0 are a reasonable starting point.)
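What the strength slider does under the hood: a LoRA stores a low-rank update to each adapted weight matrix, and the strength scales that update before it is added to the frozen base weight. A minimal NumPy sketch of the arithmetic (shapes, the rank, and the `apply_lora` helper are illustrative, not taken from this model):

```python
import numpy as np

rank, d_in, d_out = 4, 64, 64
rng = np.random.default_rng(0)

W = rng.normal(size=(d_out, d_in))   # frozen base weight
A = rng.normal(size=(rank, d_in))    # LoRA down-projection
B = rng.normal(size=(d_out, rank))   # LoRA up-projection

def apply_lora(W, A, B, strength):
    # Effective weight: base plus the scaled low-rank update B @ A.
    return W + strength * (B @ A)

W_off  = apply_lora(W, A, B, 0.0)  # strength 0: base model unchanged
W_full = apply_lora(W, A, B, 1.0)  # strength 1: full LoRA effect
```

At strength 0 the base model is untouched; in between, the update is blended in linearly, which is why small strength changes shift the style smoothly.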

### B) Diffusers (generic example)

```python
import torch
from diffusers import DiffusionPipeline

# Load the base model in bfloat16 on the GPU.
pipe = DiffusionPipeline.from_pretrained(
    "Tongyi-MAI/Z-Image-Turbo",
    torch_dtype=torch.bfloat16,
).to("cuda")

# Replace with your actual filename:
pipe.load_lora_weights(
    "thorjank/<REPO_NAME>",
    weight_name="<YOUR_LORA_FILENAME>.safetensors",
)

prompt = "mia-urban, clean studio portrait, soft side lighting, sharp focus"
image = pipe(prompt).images[0]
image.save("out.png")
```