UD-Q5_K_XL seemingly broken
My first download of UD-Q5_K_XL was also broken. It ran and I was able to generate a one-shot HTML app, but in the agent (opencode) it would crash the server. I re-downloaded, and the Q5 has now been cooking for an hour straight and has already knocked out a few items from my application's issue list. I've been running MiniMax-M2 since it dropped and am giving GLM 4.7 a spin now (4.6 was a dud for me). It takes more than an hour and a couple of issues to form an opinion, but so far I'm liking the concise, correct code 4.7 has spun out.
Spoke too soon: it was cooking on some tasks and then crashed. I'll try stripping down some of the command-line arguments; if it crashes again, then it's back to MiniMax-M2 I guess, since it's been VERY stable.
slot update_slots: id 2 | task 23541 | prompt done, n_tokens = 46812, batch.n_tokens = 65
slot print_timing: id 2 | task 23541 |
prompt eval time = 1323.43 ms / 65 tokens ( 20.36 ms per token, 49.11 tokens per second)
eval time = 31156.17 ms / 383 tokens ( 81.35 ms per token, 12.29 tokens per second)
total time = 32479.60 ms / 448 tokens
slot release: id 2 | task 23541 | stop processing: n_tokens = 47194, truncated = 0
srv update_slots: all slots are idle
srv log_server_r: request: POST /v1/chat/completions 172.20.0.107 200
srv params_from_: Chat format: GLM 4.5
slot get_availabl: id 2 | task -1 | selected slot by LCP similarity, sim_best = 0.995 (> 0.100 thold), f_keep = 0.992
slot launch_slot_: id 2 | task -1 | sampler chain: logits -> penalties -> dry -> top-n-sigma -> top-k -> typical -> top-p -> min-p -> xtc -> temp-ext -> dist
slot launch_slot_: id 2 | task 23925 | processing task
slot update_slots: id 2 | task 23925 | new prompt, n_ctx_slot = 131072, n_keep = 0, task.n_tokens = 47051
slot update_slots: id 2 | task 23925 | n_tokens = 46811, memory_seq_rm [46811, end)
slot update_slots: id 2 | task 23925 | prompt processing progress, n_tokens = 47051, batch.n_tokens = 240, progress = 1.000000
slot update_slots: id 2 | task 23925 | prompt done, n_tokens = 47051, batch.n_tokens = 240
/llama.cpp/ggml/src/ggml-cuda/ggml-cuda.cu:94: CUDA error
CUDA error: unspecified launch failure
current device: 2, in function ggml_cuda_mul_mat_id at /llama.cpp/ggml/src/ggml-cuda/ggml-cuda.cu:2327
cudaMemcpyAsync(ids_host.data(), ids->data, ggml_nbytes(ids), cudaMemcpyDeviceToHost, stream)
llama-server(+0x557ccf)[0x5878fd36accf]
llama-server(+0x6481d7)[0x5878fd45b1d7]
llama-server(+0x65c7fb)[0x5878fd46f7fb]
llama-server(+0x65d230)[0x5878fd470230]
llama-server(+0x661543)[0x5878fd474543]
llama-server(+0x663f6a)[0x5878fd476f6a]
llama-server(+0x5649b7)[0x5878fd3779b7]
llama-server(+0x3aeed3)[0x5878fd1c1ed3]
llama-server(+0x3a2d07)[0x5878fd1b5d07]
llama-server(+0x3b14e2)[0x5878fd1c44e2]
llama-server(+0x1c377f)[0x5878fcfd677f]
llama-server(+0x19a632)[0x5878fcfad632]
llama-server(+0xe07f0)[0x5878fcef37f0]
/usr/lib/x86_64-linux-gnu/libc.so.6(+0x2a1ca)[0x7d031f79a1ca]
/usr/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0x8b)[0x7d031f79a28b]
llama-server(+0x138585)[0x5878fcf4b585]
/mnt/data/models/GLM-4.7-UD-Q5_K_XL/start-llama: line 21: 139 Aborted (core dumped) llama-server --model /mnt/data/models/GLM-4.7-UD-Q5_K_XL/GLM-4.7-UD-Q5_K_XL-00001-of-00006.gguf --host 0.0.0.0 --alias glm-4.7 --n-gpu-layers -1 --ctx-size 131072 --cache-ram 4096 --threads 8 --tensor-split 32,34,34 --temp 1.0 --top-p 0.95 --flash-attn on --cache-type-k q8_0 --cache-type-v q8_0 --batch-size 4096 --ubatch-size 2048 --cont-batching --prio 3 --jinja
@Nimbz Sorry about that, please definitely re-download the model! For example, use:
# !pip install huggingface_hub hf_transfer
import os
os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "0"  # Can sometimes rate limit, so set to 0 to disable
from huggingface_hub import snapshot_download
snapshot_download(
    repo_id = "unsloth/GLM-4.7-GGUF",
    local_dir = "unsloth/GLM-4.7-GGUF",
    allow_patterns = ["*UD-Q2_K_XL*"],  # Dynamic 2-bit; use "*UD-TQ1_0*" for Dynamic 1-bit, or "*UD-Q5_K_XL*" for the quant in this thread
)
@aaron-newsome If you have a stripped-down reproducible example, I can forward it to the llama.cpp team - apologies for the issue!
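Something along the lines of the sketch below might be enough to trigger it outside of opencode. It's only a rough sketch, assuming the server is reachable on llama-server's default port 8080 with the glm-4.7 alias from your start-llama script, and that the crash is tied to long prompts around the ~47k-token mark seen in the log; adjust as needed:

# Rough repro sketch (assumptions: localhost:8080, alias "glm-4.7"; adjust to your setup).
# The crash above happened while a slot was being reused at ~47k prompt tokens, so the
# prompt is padded to roughly that size and the request is sent twice to force slot reuse.
import requests

URL = "http://localhost:8080/v1/chat/completions"
filler = "lorem ipsum dolor sit amet " * 8000  # crude padding, tens of thousands of tokens

for i in range(2):
    resp = requests.post(
        URL,
        json={
            "model": "glm-4.7",
            "messages": [
                {"role": "system", "content": filler},
                {"role": "user", "content": f"Request {i}: summarize the text above in one sentence."},
            ],
            "max_tokens": 256,
        },
        timeout=600,
    )
    print(resp.status_code, resp.text[:200])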
I've rebuilt (was on b7522, now running b7524). Cooking again, so we'll see if it holds up. These are the same issues I saw with GLM-4.5-Air with flash attention on; with Air I was able to run with flash attention off because it still fit with full context. I'm using flash attention with this Q5 because it won't fit without it. If it crashes the server again (a real pain, because I have to reboot the entire system), then I'll either move down to a smaller quant that fits with FA off or just go back to MiniMax-M2, which has been rock solid.
The ggml_cuda_mul_mat_id crashes may be related to: https://github.com/ggml-org/llama.cpp/issues/18331
Try adding --defrag-thold 0 to the llama-server arguments (e.g. right after --jinja in the start-llama line above) and see if that helps, but note it's still under investigation.

