---
title: Zig
emoji: 🏃
colorFrom: green
colorTo: purple
sdk: gradio
sdk_version: 6.2.0
app_file: app.py
pinned: false
---
# Z-Image Hugging Face Space
Gradio Space using the official Z-Image pipeline (`Tongyi-MAI/Z-Image-Turbo`) with optional injection of a LoRA from Civitai. There is **no SD1.5 fallback**: if the Z-Image model is unavailable, the Space fails to load.
## Files
- `app.py`: Z-Image pipeline, FlowMatch scheduler, LoRA toggle/strength, simple gallery UI.
- `requirements.txt`: Python deps for Spaces/local runs.
- `lora/`: Place `zit-mystic-xxx.safetensors` here (or point `LORA_PATH` to your filename).
- `.gitattributes`: Tracks `.safetensors` via Git LFS for large LoRA files.
## Using on Hugging Face Spaces
1) Create a Space (Python) and select a GPU hardware type.
2) Add/clone this repo into the Space.
3) Manually download the LoRA file from https://civitai.com/models/2206377/zit-mystic-xxx and place it at `lora/zit-mystic-xxx.safetensors` (or set `LORA_PATH`). The Space does not fetch from Civitai automatically.
4) If model download fails with a token error, set `HF_TOKEN` in the Space secrets (some repos require authentication).
5) Optionally set the advanced environment variables below; the Space then launches `app.py`. The header shows whether the LoRA was detected/loaded.
- If the header/log says `PEFT backend is required for LoRA`, install `peft` (already included in `requirements.txt`) and restart/rebuild.
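Diffusers raises the quoted `PEFT backend is required for LoRA` error when the `peft` package is missing. A quick way to verify it is importable without triggering a full model load (the helper name is illustrative, not part of the app):

```python
import importlib.util

def peft_available() -> bool:
    """Return True if the `peft` package diffusers needs for LoRA is installed."""
    return importlib.util.find_spec("peft") is not None
```

If this returns `False` inside the Space, check that `peft` is still listed in `requirements.txt` and rebuild.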
## Environment variables
- `MODEL_PATH` (default `Tongyi-MAI/Z-Image-Turbo`): HF repo or local path for the Z-Image model.
- `LORA_PATH` (default `lora/zit-mystic-xxx.safetensors`): Path to the LoRA file; loaded if present.
- `HF_TOKEN`: HF token for gated/private models or faster pulls.
- `MODEL_DTYPE` (default `auto`): `bf16` if supported, else `fp16` (override with `bf16`/`fp16`/`fp32`).
- `ENABLE_COMPILE` (default `true`): Enable `torch.compile` on the transformer.
- `ENABLE_WARMUP` (default `false`): Run a quick warmup across resolutions after load (adds startup time).
- `ATTENTION_BACKEND` (default `flash_3`): Backend for transformer attention (falls back to `flash`/`xformers`/`native`).
- `OFFLOAD_TO_CPU_AFTER_RUN` (default `false`): Move the model back to CPU after each generation (useful on ZeroGPU; slower on normal GPUs).
- `ENABLE_AOTI` (default `true`): Try to load ZeroGPU AoTI blocks via `spaces.aoti_blocks_load` for faster inference.
- `AOTI_REPO` (default `zerogpu-aoti/Z-Image`): AoTI blocks repo.
- `AOTI_VARIANT` (default `fa3`): AoTI variant.
- `AOTI_ALLOW_LORA` (default `false`): Allow AoTI to load even if LoRA adapters are loaded (may crash; AoTI blocks generally don’t support LoRA).
- `DEBUG` (default `false`): When set to a truthy value (`1`, `true`, `yes`, `on`), show the Status/Debug floating panel.
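The `MODEL_DTYPE` resolution described above can be sketched as follows. This is a minimal, torch-free illustration: the function name and the `bf16_supported` flag are assumptions for the example, not the app's actual API.

```python
import os

def resolve_dtype(bf16_supported: bool) -> str:
    """Illustrative MODEL_DTYPE resolution: an explicit override wins;
    'auto' prefers bf16 on GPUs that support it, otherwise fp16."""
    value = os.environ.get("MODEL_DTYPE", "auto").strip().lower()
    if value in ("bf16", "fp16", "fp32"):
        return value
    # 'auto' (or anything unrecognized): bf16 if supported, else fp16
    return "bf16" if bf16_supported else "fp16"
```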
## Run locally
```bash
python -m venv .venv
source .venv/bin/activate  # Linux/macOS; on Windows: .venv\Scripts\activate
pip install -r requirements.txt
python app.py
```
Place the LoRA file under `lora/` first (or set `LORA_PATH`); otherwise the app will run the base Z-Image model without LoRA.
## UI controls
- Prompt
- Resolution category + explicit WxH selection
- Seed (with random toggle)
- Steps + Time Shift
- Advanced: CFG, scheduler + extra scheduler params, max sequence length
- LoRA toggle + strength (enabled only if the file is found)
## Git LFS note
`.gitattributes` tracks `.safetensors` with LFS. If you commit the LoRA, run `git lfs install` once before pushing so large files go through LFS.
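For reference, the rule that `git lfs track "*.safetensors"` writes into `.gitattributes` looks like:

```
*.safetensors filter=lfs diff=lfs merge=lfs -text
```

Running `git lfs install` once per machine is still required so the LFS filter actually fires on commit and push.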