Use with the Diffusers library
pip install -U diffusers transformers accelerate

import torch
from diffusers import DiffusionPipeline

# use "Tongyi-MAI/Z-Image" for the base model;
# switch device_map to "mps" on Apple devices
pipe = DiffusionPipeline.from_pretrained(
    "Tongyi-MAI/Z-Image-Turbo", dtype=torch.bfloat16, device_map="cuda"
)
pipe.load_lora_weights("nynxz/RealGen-V2")

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt).images[0]
image.save("astronaut.png")

RealGen-V2 β€” ComfyUI-ready LoRA for Z-Image

A drop-in ComfyUI build of Yunncheng/RealGen-V2. Same weights as upstream, repackaged so ComfyUI's stock LoraLoader can load it without a custom node. Works on both Z-Image base and Z-Image-Turbo.

Examples

Each row is the same prompt and seed, rendered with and without the LoRA at strength 1.0.

Z-Image (base)

(Side-by-side renders for prompts 1–5: left column without the LoRA, right column with RealGen-V2. Images in examples/.)

Z-Image-Turbo

(Side-by-side renders for prompts 1–5: left column without the LoRA, right column with RealGen-V2. Images in examples/.)

Note: on Turbo the LoRA still affects the image even though negative prompts don't β€” CFG=1 disables the negative branch, not the LoRA patch.
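The CFG arithmetic makes this concrete. A minimal scalar sketch (the values are illustrative stand-ins, not real model outputs):

```python
def cfg_combine(cond: float, uncond: float, scale: float) -> float:
    # Standard classifier-free guidance: uncond + scale * (cond - uncond)
    return uncond + scale * (cond - uncond)

# Illustrative predictions from the conditional and negative-prompt branches:
cond, uncond = 2.0, 1.0

# At CFG = 1 the unconditional term cancels, so the negative prompt is ignored...
print(cfg_combine(cond, uncond, 1.0))  # 2.0 == cond
# ...but `cond` itself comes from the LoRA-patched model, so the LoRA still applies.
```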

Why this repo exists

The upstream release ships the adapter in PEFT format (base_model.model.<path>.lora_A.<adapter>.weight keys, with lora_alpha living separately in adapter_config.json). ComfyUI's stock LoraLoader doesn't understand that layout, so this repo provides:

  • the same weights, repackaged with Kohya-style keys (<path>.lora_down.weight, <path>.lora_up.weight) and
  • per-module alpha tensors baked into the file so the alpha/rank scaling ComfyUI applies matches what PEFT would have applied at runtime.

No retraining, no quantisation, no surgery beyond key renaming and alpha injection β€” the math is identical to running the original adapter through PEFT.
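That equivalence can be sanity-checked numerically. A sketch with NumPy standing in for the real tensors (shapes and values are made up; only the scaling logic matters):

```python
import numpy as np

rank, alpha = 64, 128
d_in, d_out = 32, 48
rng = np.random.default_rng(0)
A = rng.standard_normal((rank, d_in))   # PEFT lora_A / Kohya lora_down
B = rng.standard_normal((d_out, rank))  # PEFT lora_B / Kohya lora_up

# What PEFT adds to the base weight: (alpha / rank) * B @ A
delta_peft = (alpha / rank) * (B @ A)

# What a Kohya-style loader applies at strength 1.0, reading the
# baked-in per-module alpha tensor: strength * (alpha / rank) * up @ down
strength = 1.0
delta_kohya = strength * (alpha / rank) * (B @ A)

assert np.allclose(delta_peft, delta_kohya)
```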

Files

File Purpose
realgen_v2.safetensors The repackaged LoRA. Drop into ComfyUI/models/loras/.
scripts/convert_realgen_v2.py The script used to produce it from the upstream PEFT adapter. Re-runnable for transparency.
examples/ Side-by-side renders, with and without the LoRA, on both Z-Image base and Z-Image-Turbo.
LICENSE Apache 2.0 (matches both RealGen-V2 and Z-Image upstream).

Usage in ComfyUI

  1. Download realgen_v2.safetensors and place it in ComfyUI/models/loras/.
  2. Build a graph: Load Diffusion Model (Z-Image) β†’ LoraLoader β†’ sampler.
    • Select realgen_v2.safetensors in the loader.
    • Strength 1.0 reproduces the upstream training intent (alpha=128, rank=64 β†’ scale=2.0).
    • Lower (e.g. 0.5–0.8) for a softer effect; the LoRA scales linearly.

That's it β€” there is no custom node to install.
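The strength scaling is plain linear arithmetic. Assuming the stated alpha=128 and rank=64:

```python
alpha, rank = 128, 64

def effective_scale(strength: float) -> float:
    # ComfyUI multiplies the LoRA delta by strength * (alpha / rank)
    return strength * alpha / rank

print(effective_scale(1.0))  # 2.0, the upstream training intent
print(effective_scale(0.5))  # 1.0, a softer effect
```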

Reproducing the conversion

If you'd rather convert the upstream weights yourself:

# from a Python env with torch + safetensors + packaging:
python scripts/convert_realgen_v2.py adapter_model.safetensors realgen_v2.safetensors

The script reads lora_alpha from adapter_config.json (sitting next to the adapter), strips the base_model.model. prefix, rewrites lora_A/lora_B β†’ lora_down/lora_up, and writes one <module>.alpha tensor per LoRA module. See the source for the full mapping.
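The renaming can be sketched roughly as follows. This is a simplified illustration of the mapping described above, not the actual script; the example key path is hypothetical:

```python
import re

def convert_peft_key(peft_key: str) -> str:
    # "base_model.model.<path>.lora_A.<adapter>.weight" -> "<path>.lora_down.weight"
    key = peft_key.removeprefix("base_model.model.")
    key = re.sub(r"\.lora_A\.[^.]+\.weight$", ".lora_down.weight", key)
    key = re.sub(r"\.lora_B\.[^.]+\.weight$", ".lora_up.weight", key)
    return key

def module_path(converted_key: str) -> str:
    # "<path>.lora_down.weight" -> "<path>", used to name the "<path>.alpha" tensor
    return converted_key.rsplit(".lora_", 1)[0]

k = convert_peft_key("base_model.model.blocks.0.attn.qkv.lora_A.default.weight")
print(k)                        # blocks.0.attn.qkv.lora_down.weight
print(module_path(k) + ".alpha")  # blocks.0.attn.qkv.alpha
```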

Credits

This repo redistributes the weights under their original Apache 2.0 license; all credit for the LoRA itself belongs to the upstream authors.
