---
base_model: Tongyi-MAI/Z-Image-Turbo
tags:
  - lora
  - text-to-image
  - diffusion
  - z-image-turbo
  - character
license: other
---

# hardbody — LoRA

LoRA adapter trained on **Tongyi-MAI/Z-Image-Turbo**.

> Note: `trigger_word` is **not set** in the training config. In practice, use the concept name **`hardbody`** in your prompt, and/or rely on the dataset’s default caption described below.

## Base model
- **Tongyi-MAI/Z-Image-Turbo**

## Trigger / keyword
- Suggested keyword: **`hardbody`**  
- Default caption used during training: **`curvy female body`**

## Files
- `*.safetensors` — LoRA weights
- `config.yaml`, `job_config.json` — training configuration
- (optional) `log.txt` — training log

## How to use

### A) ComfyUI / AUTOMATIC1111
1. Put the `.safetensors` file into your LoRA folder.
2. Prompt examples (safe / non-explicit):
   - `hardbody, athletic figure, studio photo, soft lighting, high detail`
   - `hardbody, fashion shoot, street style, natural light, high detail`

(Adjust LoRA strength to taste, e.g. 0.6–1.0.)

### B) Diffusers (generic example)
```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "Tongyi-MAI/Z-Image-Turbo",
    torch_dtype=torch.bfloat16
).to("cuda")

pipe.load_lora_weights("thorjank/<REPO_NAME>", weight_name="<YOUR_LORA_FILENAME>.safetensors")

# Turbo-distilled base models are typically run with few inference steps;
# check the base model card for the recommended step count and guidance scale.
prompt = "hardbody, athletic figure, studio photo, soft lighting, high detail"
image = pipe(prompt).images[0]
image.save("out.png")
```
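The strength hint from the UI workflows above can also be applied in diffusers. A minimal sketch, continuing from the `pipe` object above and assuming the pipeline inherits diffusers' standard LoRA helpers (`fuse_lora` / `set_adapters`; exact API availability varies by diffusers version and backend):

```python
# Sketch, assuming diffusers' standard LoRA helpers are available on `pipe`.

# Option A: bake the LoRA into the base weights at a fixed scale
# (roughly the "LoRA strength" slider in UI frontends).
pipe.fuse_lora(lora_scale=0.8)
image = pipe(prompt).images[0]
pipe.unfuse_lora()  # restore the original, un-fused base weights

# Option B (PEFT backend): keep the adapter separate and weight it at runtime.
# "hardbody" is just an adapter label we choose here, not a required name.
pipe.load_lora_weights(
    "thorjank/<REPO_NAME>",
    weight_name="<YOUR_LORA_FILENAME>.safetensors",
    adapter_name="hardbody",
)
pipe.set_adapters(["hardbody"], adapter_weights=[0.6])
```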