Qwen3-Coder-Next-IQ4_NL.gguf tool calling issue. - Update: SOLVED
Hi
Qwen3-Coder-Next-IQ4_NL.gguf has a tool calling issue.
IQ4_NL was loaded exactly the same way as I used Qwen3-Coder-Next-IQ4_XS.gguf:
/adat/ai/llama.cpp/llama-server -m /adat/ai/models/unsloth/Qwen3-Coder-Next-GGUF/Qwen3-Coder-Next-IQ4_NL.gguf --host 0.0.0.0 --port 1234 -ngl 99 -c 65536 -fa on --ctx-size 96500 --temp 1.0 --top-p 0.95 --min-p 0.01 --top-k 40 --fit on --parallel 1 --chat-template-file /adat/ai/llama.cpp/qwen3.jinja
But OpenCode throws lots of errors and does not work with IQ4_NL:
invalid [tool=write, error=Invalid input for tool write: JSON parsing failed: Text: {"
IQ4_XS still works with OpenCode with the same llama.cpp server settings.
I'm getting this error with all quants of this model:
invalid [tool=write, error=Invalid input for tool write: JSON parsing failed: Text: {"content":"...","filePath":"/home/myfile.py","filePath"/home/myfile.py"}.
Error message: JSON Parse error: Unrecognized token '/']
You can see "filePath" is specified twice, and the second time incorrectly. Something is probably broken in the embedded template.
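For reference, the failure can be reproduced outside OpenCode with any strict JSON parser. A minimal Python sketch (using the malformed payload from the error above, with the `content` value shortened) shows both problems: the missing colon makes the text unparseable, and even a colon-fixed version would silently collapse the duplicate key:

```python
import json

# The write tool's arguments as emitted: the second "filePath" is missing its colon
broken = '{"content":"...","filePath":"/home/myfile.py","filePath"/home/myfile.py"}'

try:
    json.loads(broken)
except json.JSONDecodeError as e:
    print(f"parse failed: {e.msg}")  # strict parsers reject the payload outright

# Even with the colon restored, duplicate keys silently collapse (last one wins)
fixed = '{"content":"...","filePath":"/home/a.py","filePath":"/home/b.py"}'
print(json.loads(fixed)["filePath"])  # → /home/b.py
```

So the client-side parse error is expected behavior; the real question is why the model/template emits the key twice in the first place.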
I'm still having the same problem with the current llama.cpp version and the recommended settings, on Q4_K_M. It's the same with the GGUF from Qwen themselves, though.
In OpenCode it is always the write tool with the double "filePath" and the messed up JSON at the end like in your example.
It doesn't seem to be an Unsloth-specific problem, but I wish there were a solution to this.
llama.cpp is so much faster than Ollama for me, but Ollama does not cause this problem with the model from their registry.
Hi
This solved the tool call issue (the save-file tool call) for me, and also the random server segfault issues:
The PR for the fix has been tested, but it hasn't been merged into the main branch yet.
https://github.com/pwilkin/llama.cpp/tree/autoparser
Many thanks for it.
Hello, I was very frustrated that tool calling didn't work for any model until I solved it like this:
https://github.com/ladislav-danis/systemd-llm-switch
The malformed second filePath occurs when Qwen3-Coder-Next emits two filePath parameters for the same tool call. It doesn't happen on every write call, but I can trigger it in longer sessions and/or with long writes. This is an unexpected quirk, so llama.cpp's XML parser and streaming diff logic weren't designed to handle duplicate keys.
I made a minimal PR that correctly translates duplicate XML parameter keys into JSON output, which fixes the issue for me: https://github.com/ggml-org/llama.cpp/pull/19753
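The idea behind such a fix can be sketched as follows (a hypothetical simplified illustration in Python, not the actual llama.cpp C++ code; the `<parameter=NAME>` tag format is assumed from the model's XML-style tool call syntax): when collecting parameter tags into a JSON object, collapse duplicate names before serializing, so the emitted arguments are always valid JSON.

```python
import json
import re

def params_to_json(xml_args: str) -> str:
    """Collect <parameter=NAME>VALUE</parameter> pairs and emit valid JSON.

    Duplicate parameter names are collapsed last-wins (later tags overwrite
    earlier ones in the dict), so the output object never repeats a key.
    """
    params = {}
    for name, value in re.findall(
        r"<parameter=([^>]+)>(.*?)</parameter>", xml_args, flags=re.DOTALL
    ):
        params[name] = value.strip()  # a duplicate name simply overwrites
    return json.dumps(params)

# The model sometimes emits filePath twice in one tool call:
raw = (
    "<parameter=content>print('hi')</parameter>"
    "<parameter=filePath>/home/myfile.py</parameter>"
    "<parameter=filePath>/home/myfile.py</parameter>"
)
print(params_to_json(raw))  # a single filePath key, valid JSON
```

Whether last-wins or first-wins is the right policy is a judgment call; last-wins matches what most JSON parsers do with duplicate keys anyway, so the tool sees the same value either way here.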