It's broken in ComfyUI.
No, it's not. You're just doing it wrong.
I am using this pipeline: https://comfyanonymous.github.io/ComfyUI_examples/z_image/
9-50 steps for Z-Image;
9 steps for Z-Image-Turbo.
(comfyui) c:\ComfyUI>python main.py
Checkpoint files will always be loaded safely.
Total VRAM 15931 MB, total RAM 130960 MB
pytorch version: 2.11.0.dev20260127+xpu
Set vram state to: NORMAL_VRAM
Device: xpu:0 Intel(R) Arc(TM) A770 Graphics
Found comfy_kitchen backend triton: {'available': False, 'disabled': True, 'unavailable_reason': 'CUDA not available on this system', 'capabilities': []}
Found comfy_kitchen backend eager: {'available': True, 'disabled': False, 'unavailable_reason': None, 'capabilities': ['apply_rope', 'apply_rope1', 'dequantize_nvfp4', 'dequantize_per_tensor_fp8', 'quantize_nvfp4', 'quantize_per_tensor_fp8', 'scaled_mm_nvfp4']}
Found comfy_kitchen backend cuda: {'available': False, 'disabled': True, 'unavailable_reason': 'CUDA not available on this system', 'capabilities': []}
Using pytorch attention
Python version: 3.12.12 | packaged by conda-forge | (main, Jan 26 2026, 23:38:32) [MSC v.1944 64 bit (AMD64)]
ComfyUI version: 0.11.0
ComfyUI frontend version: 1.37.11
[Prompt Server] web root: C:\Users\uuk\miniconda3\envs\comfyui\Lib\site-packages\comfyui_frontend_package\static
Import times for custom nodes:
0.0 seconds: C:\ComfyUI\custom_nodes\websocket_image_save.py
Context impl SQLiteImpl.
Will assume non-transactional DDL.
Assets scan(roots=['models']) completed in 0.033s (created=0, skipped_existing=17, total_seen=17)
Starting server
To see the GUI go to: http://127.0.0.1:8188
got prompt
Using pytorch attention in VAE
Using pytorch attention in VAE
VAE load device: xpu:0, offload device: cpu, dtype: torch.bfloat16
Found quantization metadata version 1
Using MixedPrecisionOps for text encoder
CLIP/text encoder model load device: xpu:0, offload device: cpu, current: cpu, dtype: torch.float16
Requested to load ZImageTEModel_
loaded completely; 14411.68 MB usable, 5371.00 MB loaded, full load: True
model weight dtype torch.bfloat16, manual cast: None
model_type FLOW
Requested to load Lumina2
Unloaded partially: 4059.13 MB freed, 1311.88 MB remains loaded, 142.50 MB buffer reserved, lowvram patches: 0
loaded completely; 13093.68 MB usable, 11739.54 MB loaded, full load: True
100%|████████████████████████████████████████████████████████████████████████████████████| 9/9 [00:05<00:00, 1.57it/s]
Requested to load AutoencodingEngine
loaded completely; 1108.52 MB usable, 159.87 MB loaded, full load: True
Prompt executed in 51.88 seconds
got prompt
100%|████████████████████████████████████████████████████████████████████████████████████| 9/9 [00:03<00:00, 2.32it/s]
Prompt executed in 4.25 seconds
And this is what I get if I generate through a Python script (as in the example from the model page).
I use the Z-Image-Turbo template, replacing the diffusion model, and it renders just fine. The CFG must be at least 3 to render. I do get some small artifacts in renders, but that's probably due to settings.
What hardware and what library versions are installed?
I was having issues like noise in the beginning. Then ComfyUI couldn't fetch the Z-Image template, so I reinstalled it, but then I was getting all-black images. I fixed that by disabling SageAttention, i.e. not launching with the --use-sage-attention flag. Remember: use CFG 3-5 and about 30-50 steps.
The Z-Image model provided by ComfyUI works normally; images are created fine with the existing Z-Image-Turbo. However, the model I downloaded here is broken, like the image above. Which file should I download? Do I need a separate file for the model from this repo?
Update ComfyUI, go to Templates and use the workflow from there, then download the bf16 model from the missing-models popup inside ComfyUI. Problem solved.
Yes, as I said, the one provided by ComfyUI works normally.
This is a version tuned by ComfyUI; I thought it wasn't possible to use the official version.
What hardware and what library versions are installed?
No idea; I am just using ComfyUI and don't have much knowledge about the specifics.
I have big problems with ComfyUI too. I tried a couple of GGUF variants and CLIP models, and all give black images, and I can't run the ComfyUI variant since it is too big for my hardware.
I did it the way you told me, but it's the same.
Can you share your workflow with us, or a screenshot of it?
Run this command in your terminal:
python -c "import sys, torch, platform, importlib.util; print('Python:', sys.version); print('PyTorch:', torch.__version__); print('CUDA available:', torch.cuda.is_available()); print('CUDA version:', torch.version.cuda); print('cuDNN:', torch.backends.cudnn.version()); print('GPU:', torch.cuda.get_device_name(0) if torch.cuda.is_available() else 'None'); print('Platform:', platform.platform()); mods=['flash_attn','flash_attn_2','xformers','sageattention','triton','einops']; print('Attention libs:'); [print(' ',m, 'Installed' if importlib.util.find_spec(m) else 'Not installed') for m in mods]"
and post the result here.
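If the one-liner is awkward to paste, the same check can be written as a short script. This is a sketch with equivalent logic (the module list is illustrative); torch-specific fields are only filled in when torch is installed, so it runs anywhere:

```python
import importlib.util
import platform
import sys

MODS = ["torch", "flash_attn", "xformers", "sageattention", "triton", "einops"]

def env_report(mods=MODS):
    """Gather the same environment info as the one-liner above."""
    info = {
        "python": sys.version,
        "platform": platform.platform(),
        # find_spec returns None when a module is not importable
        "libs": {m: importlib.util.find_spec(m) is not None for m in mods},
    }
    if info["libs"]["torch"]:
        import torch
        info["pytorch"] = torch.__version__
        info["cuda_available"] = torch.cuda.is_available()
        info["cuda_version"] = torch.version.cuda
    return info

if __name__ == "__main__":
    for key, value in env_report().items():
        print(f"{key}: {value}")
```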
This is mine:
Python: 3.12.12 | packaged by conda-forge | (main, Oct 13 2025, 14:23:59) [MSC v.1944 64 bit (AMD64)]
PyTorch: 2.10.0+cu130
CUDA available: True
CUDA version: 13.0
cuDNN: 91200
GPU: NVIDIA GeForce RTX 3050
Platform: Windows-11-10.0.26200-SP0
Attention libs:
flash_attn Installed
flash_attn_2 Not installed
xformers Installed
sageattention Installed
triton Installed
einops Installed
Is your environment by any chance using Triton or Sage-Attention?
Make sure you're not using Triton or SageAttention with Z-Image, as they will result in a black image.
Is your environment by any chance using Triton or Sage-Attention?
Yes.
I will try this and report back soon.
Thanks
Any thoughts on black images without Triton and Sage? I've tried two GGUFs so far and t2i is just black, and i2i is just noise on top of the input.
Python: 3.13.9 (tags/v3.13.9:8183fa5, Oct 14 2025, 14:09:13) [MSC v.1944 64 bit (AMD64)]
PyTorch: 2.9.1+cu130
CUDA available: True
CUDA version: 13.0
cuDNN: 91200
GPU: NVIDIA GeForce RTX 3080 Ti
Platform: Windows-11-10.0.22631-SP0
Attention libs:
flash_attn Not installed
flash_attn_2 Not installed
xformers Not installed
sageattention Not installed
triton Not installed
einops Installed
If the environment is not the issue, then compatibility is the next thing to check. Could you share your workflow and the link to the GGUF model you're using?
Or compare it to this one. I'm using the KJNodes GGUF loader + this model: https://huggingface.co/unsloth/Z-Image-GGUF/blob/main/z-image-Q4_K_M.gguf
The output is black, or has many small squares, etc., if you run it from run_nvidia_gpu_fast_fp16_accumulation.bat. The images become visible just by switching to run_nvidia_gpu.bat.
That might change in the future, but currently that is the situation (February 28, 2026).
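For context on why an fp16-accumulation launcher can black out frames: float16 has a small dynamic range (largest finite value 65504) and coarse spacing at large magnitudes, so long sums can overflow to inf/NaN or silently drop small terms; NaN activations typically render as black. A stdlib-only illustration of those limits (this is not ComfyUI code, just the numeric behavior):

```python
import struct

def to_fp16(x: float) -> float:
    """Round-trip a float through IEEE 754 half precision ('e' format)."""
    return struct.unpack('e', struct.pack('e', x))[0]

print(to_fp16(65504.0))  # largest finite fp16 value: representable as-is
try:
    struct.pack('e', 70000.0)  # beyond the fp16 range
except OverflowError as exc:
    print("overflow:", exc)

# At magnitude 2048 the fp16 spacing is 2.0, so adding 0.5 is lost --
# the kind of error that compounds when accumulating many terms in fp16.
print(to_fp16(2048.0 + 0.5) == 2048.0)  # True
```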
I can confirm that it is indeed working without Triton and SageAttention.
Thank you.
My startup file in case someone needs it:
@echo off
setlocal
:: find VS installation path (requires vswhere installed with Visual Studio)
for /f "usebackq tokens=*" %%I in (`"%ProgramFiles(x86)%\Microsoft Visual Studio\Installer\vswhere.exe" -latest -products * -requires Microsoft.VisualStudio.Component.VC.Tools.x86.x64 -property installationPath`) do set VSINSTALL=%%I
if not defined VSINSTALL (
echo Visual Studio not found.
exit /b 1
)
call "%VSINSTALL%\VC\Auxiliary\Build\vcvars64.bat"
::---------------------------------------------------------------------------
call conda activate comfy
call update_comfy.bat
:: comfy-cli update comfy
set ANONYMIZED_TELEMETRY=False
set USE_MEMORY_EFFICIENT_ATTENTION=1
set TORCH_USE_CUDA_DSA=1
set CUDA_LAUNCH_BLOCKING=1
set PYTORCH_ALLOC_CONF=1
:: set COMFYUI_MODEL_PATH
:: set HF_HUB_ENABLE_HF_TRANSFER=0
::call "C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Auxiliary\Build\vcvars64.bat"
::call "C:\Program Files\Microsoft Visual Studio\2022\Community\Common7\Tools\Launch-VsDevShell.ps1"
:: --cache-ram
:: --disable-pinned-memory --disable-async-offload
:: --fast fp16_accumulation
:: --use-pytorch-cross-attention
:: --mmap-torch-files
:: --verbose DEBUG
:: --normalvram
set "PYTHON_OPTS=-c "import sys; sys.modules['triton']=None; sys.modules['sageattention']=None; exec(open('main.py').read())""
python.exe -s main.py --windows-standalone-build --cache-ram --disable-auto-launch --use-pytorch-cross-attention --disable-xformers --preview-size 320 %1
endlocal
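The commented-out PYTHON_OPTS line above hints at a trick worth spelling out: poisoning sys.modules so ComfyUI cannot import Triton/SageAttention even when they are installed. A minimal standalone sketch (the blocked names are the point; CPython raises ImportError when a sys.modules entry is None):

```python
import sys

def block_import(name: str) -> None:
    """Force `import name` to fail by setting its sys.modules entry to None.

    CPython treats a None entry as a halted import and raises ImportError,
    so optional backends degrade gracefully to their fallback code paths."""
    sys.modules[name] = None

for mod in ("triton", "sageattention"):
    block_import(mod)

try:
    import triton  # noqa: F401
except ImportError as exc:
    print("blocked:", exc)
```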
Also getting noise only when using fp16 with the base model; bf16 is way too slow, making it unusable. Running on an RTX 2060 6GB with 32GB RAM, latest ComfyUI.
Ah, I goofed and was running fp16_accumulation; using the normal startup worked! Thanks!
Thank you!
@gemstonebro
With CFG 4 and 50 steps (512x512) it is working.
Intel Arc A770 (PyTorch 2.11.0.dev20260127+xpu)
Python 3.12.12
ComfyUI 0.11.1
I checked the model repo contents (weights + config only), and there is no runtime code here to patch. The black/noise outputs reported in ComfyUI look like environment/runtime conflicts rather than a problem with the model weights.
Based on confirmations in this thread:
- Triton or Sage-Attention can produce black images with Z-Image base. Try disabling them (or avoid launchers/flags that enable them).
- run_nvidia_gpu_fast_fp16_accumulation.bat is reported to cause black/noise; switching to the normal launcher fixes output.
- CFG and steps matter: users report stable results around CFG 3-5 and 30-50 steps.
- If using ComfyUI templates, make sure you are using the correct workflow and the BF16 model variant (ComfyUI missing-models popup).
If you are still seeing issues, please share:
- workflow screenshot or JSON,
- ComfyUI version + launcher flags,
- PyTorch + CUDA versions,
- attention libs installed (triton/sageattention/xformers),
so we can pinpoint the conflict.
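If you end up triaging a folder of outputs while testing these fixes, a quick stdlib-only heuristic for "is this render all black?" may help. The threshold and names are illustrative; feed it flattened 0-255 channel values from whatever image library you use (e.g. Pillow's `img.getdata()` flattened):

```python
def looks_black(pixels, threshold=2):
    """True if every channel value is at or below `threshold`.

    `pixels` is any iterable of 0-255 ints. A small nonzero threshold
    tolerates near-black compression noise while still flagging the
    failed renders described in this thread."""
    return all(p <= threshold for p in pixels)

print(looks_black([0, 1, 0, 2]))   # True: effectively black
print(looks_black([0, 0, 128]))    # False: has real signal
```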