---
base_model: Tongyi-MAI/Z-Image-Turbo
tags:
- lora
- text-to-image
- diffusion
- z-image-turbo
- fashion
license: other
---

# latex_v1 – LoRA

LoRA adapter trained for the concept/material look **"latex"** (glossy, reflective latex texture).

## Trigger word
Use this token in your prompt:
- **`latex`**

## Base model
- **Tongyi-MAI/Z-Image-Turbo**

## Files
- `*.safetensors` – LoRA weights
- `config.yaml` / `.job_config.json` – training configuration
- (optional) `log.txt` – training log

## How to use

### A) ComfyUI / AUTOMATIC1111
1. Put the `.safetensors` file into your LoRA folder.
2. Prompt examples (safe / non-explicit):
   - `latex, editorial fashion photo, studio lighting, high detail, sharp focus`
   - `latex, glossy jacket, urban night street photo, neon reflections`

(Adjust LoRA strength to taste, e.g. 0.6–1.0.)
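In AUTOMATIC1111 the strength can also be set inline in the prompt via the standard `<lora:name:weight>` syntax (the file-name stem `latex_v1` below is an assumption; use your actual `.safetensors` file name without the extension):

```text
latex, editorial fashion photo, studio lighting, high detail <lora:latex_v1:0.8>
```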
### B) Diffusers (generic example)

```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "Tongyi-MAI/Z-Image-Turbo",
    torch_dtype=torch.bfloat16,
).to("cuda")

# Replace with your actual repo + filename:
pipe.load_lora_weights("thorjank/<REPO_NAME>", weight_name="<YOUR_LORA_FILENAME>.safetensors")

prompt = "latex, editorial fashion photo, studio lighting, high detail"
image = pipe(prompt).images[0]
image.save("out.png")
```
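
For intuition about what the strength knob does: a LoRA contributes a low-rank update to each adapted layer, roughly `W' = W + s · (B @ A)`, and the strength `s` simply scales that update. A minimal NumPy sketch with hypothetical dimensions (illustration only, not the actual adapter code):

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, rank = 8, 8, 2               # hypothetical layer dims and LoRA rank

W = rng.standard_normal((d_out, d_in))    # frozen base weight
A = rng.standard_normal((rank, d_in))     # LoRA down-projection
B = rng.standard_normal((d_out, rank))    # LoRA up-projection
scale = 0.8                               # the "LoRA strength" knob

# Effective weight at inference: scale 0 leaves the base model untouched,
# scale 1.0 applies the full learned update.
W_adapted = W + scale * (B @ A)
```

This is why strengths above 1.0 can over-saturate the effect: the learned update is extrapolated beyond what training saw.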