---
base_model: Tongyi-MAI/Z-Image-Turbo
tags:
- lora
- text-to-image
- diffusion
- z-image-turbo
- character
license: other
---

# hardbody — LoRA

LoRA adapter trained on **Tongyi-MAI/Z-Image-Turbo**.

> Note: `trigger_word` is **not set** in the training config. In practice, use the concept name **`hardbody`** in your prompt, and/or rely on the dataset's default caption described below.

## Base model
- **Tongyi-MAI/Z-Image-Turbo**

## Trigger / keyword
- Suggested keyword: **`hardbody`**
- Default caption used during training: **`curvy female body`**

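Because no trigger word is baked into the training config, it is easy to forget the suggested keyword when prompting. A minimal sketch of a prompt helper that prepends it when missing (the `with_trigger` function is hypothetical, not part of this repo):

```python
def with_trigger(prompt: str, trigger: str = "hardbody") -> str:
    """Return the prompt with the trigger keyword prepended if absent.

    The check is case-insensitive so an existing "Hardbody" is kept as-is.
    """
    if trigger.lower() in prompt.lower():
        return prompt
    return f"{trigger}, {prompt}"

print(with_trigger("athletic figure, studio photo"))
# hardbody, athletic figure, studio photo
```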
## Files
- `*.safetensors` — LoRA weights
- `config.yaml`, `job_config.json` — training configuration
- (optional) `log.txt` — training log

## How to use

### A) ComfyUI / AUTOMATIC1111
1. Put the `.safetensors` file into your LoRA folder.
2. Prompt examples (safe / non-explicit):
   - `hardbody, athletic figure, studio photo, soft lighting, high detail`
   - `hardbody, fashion shoot, street style, natural light, high detail`

(Adjust LoRA strength to taste, e.g. 0.6–1.0.)

### B) Diffusers (generic example)
```python
import torch
from diffusers import DiffusionPipeline

# Load the base model in bfloat16 on the GPU.
pipe = DiffusionPipeline.from_pretrained(
    "Tongyi-MAI/Z-Image-Turbo",
    torch_dtype=torch.bfloat16,
).to("cuda")

# Load this LoRA adapter on top of the base pipeline.
pipe.load_lora_weights("thorjank/<REPO_NAME>", weight_name="<YOUR_LORA_FILENAME>.safetensors")

prompt = "hardbody, athletic figure, studio photo, soft lighting, high detail"
image = pipe(prompt).images[0]
image.save("out.png")
```
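
The 0.6–1.0 strength range suggested for ComfyUI can also be applied in Diffusers. One way, assuming a Diffusers install with the PEFT backend and continuing from the pipeline above, is to name the adapter when loading and then set its weight (a sketch, not a confirmed recipe for this checkpoint):

```python
# Sketch: load the adapter under an explicit name, then scale it to ~0.8.
# Assumes `pipe` from the example above and PEFT-backed Diffusers.
pipe.load_lora_weights(
    "thorjank/<REPO_NAME>",
    weight_name="<YOUR_LORA_FILENAME>.safetensors",
    adapter_name="hardbody",
)
pipe.set_adapters(["hardbody"], adapter_weights=[0.8])
```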