---
base_model: Tongyi-MAI/Z-Image-Turbo
tags:
- lora
- text-to-image
- diffusion
- z-image-turbo
license: other
---
# mia-urban — LoRA

LoRA adapter trained for the concept/style "mia-urban".
## Trigger word

Use this token in your prompt:

`mia-urban`
Note: some prompts in the training config use `[mia urban]` (with a space). If you use bracket-style tokens, you can try both `mia-urban` and `[mia urban]`, depending on your workflow.
## Base model

- Tongyi-MAI/Z-Image-Turbo
## Files

- `*.safetensors` — LoRA weights
- `config.yaml`, `job_config.json` — training configuration
- (optional) `log.txt` — training log
## How to use

### A) ComfyUI / AUTOMATIC1111
1. Put the `.safetensors` file into your LoRA folder.
2. Prompt example:

   ```
   mia-urban, clean studio portrait, soft side lighting, sharp focus
   ```

   (Adjust LoRA strength to taste, e.g. 0.6–1.0.)
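In AUTOMATIC1111, LoRA strength can also be set inline in the prompt with the `<lora:name:weight>` syntax; `your-lora-filename` below is a placeholder for the actual `.safetensors` file name (without extension):

```
mia-urban, clean studio portrait, soft side lighting, sharp focus <lora:your-lora-filename:0.8>
```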
### B) Diffusers (generic example)

```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "Tongyi-MAI/Z-Image-Turbo",
    torch_dtype=torch.bfloat16,
).to("cuda")

# Replace with your actual filename:
pipe.load_lora_weights(
    "thorjank/<REPO_NAME>",
    weight_name="<YOUR_LORA_FILENAME>.safetensors",
)

prompt = "mia-urban, clean studio portrait, soft side lighting, sharp focus"
image = pipe(prompt).images[0]
image.save("out.png")
```
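Turbo-distilled models are usually sampled with few steps and low (or no) classifier-free guidance. The recommended values for Z-Image-Turbo are not stated here, so the defaults below are assumptions to check against the base model card; a minimal sketch of bundling them for the pipeline call:

```python
# Hypothetical helper: bundles generation settings for a turbo-distilled model.
# The step count and guidance scale defaults are assumptions, not official
# Z-Image-Turbo recommendations -- consult the base model card.
def turbo_generation_kwargs(steps: int = 8, guidance: float = 1.0) -> dict:
    return {"num_inference_steps": steps, "guidance_scale": guidance}

# Usage with the pipeline from the example above:
# image = pipe(prompt, **turbo_generation_kwargs()).images[0]
```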