Jinja template error: LM Studio + opencode / Qwen Code / Kilo Code

#2
by RGMC98 - opened

I have this error with the model: LM Studio API + coding agent.
(screenshot of the error attached)

Using opencode, I see a different kind of error, not related to the "safe" thing:

invalid [tool=write, error=Invalid input for tool write: JSON parsing failed: Text: {"content":"use ...

the write tool in opencode with this model gives an error:
invalid [tool=write, error=Invalid input for tool write: JSON parsing failed: Text: { ... }.
Error message: JSON Parse error: Unrecognized token '/']

I checked today, with latest build of llama.cpp ghcr.io/ggml-org/llama.cpp:server-cuda12-b7941
and everything appears to be working. Tools are working.
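For reference, here is a minimal sketch of how that container image can be run. The model path, port, mount point, and context flags are placeholders and not from this thread; `--jinja` enables the chat-template handling the thread title refers to.

```shell
# Sketch: run the llama.cpp CUDA server image mentioned above.
# Paths and port are assumptions, substitute your own.
docker run --rm --gpus all -p 8080:8080 \
  -v /path/to/models:/models \
  ghcr.io/ggml-org/llama.cpp:server-cuda12-b7941 \
  -m /models/Qwen3-Coder-Next-UD-Q5_K_XL-00001-of-00003.gguf \
  --host 0.0.0.0 --port 8080 --jinja
```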

I think this PR fixed it:
https://github.com/ggml-org/llama.cpp/pull/19239

Tested with RooCode and opencode.

That fix is 3 days old so it would already have been in all my testing. It's definitely still broken with opencode.

The write tool is still broken for me in opencode with llama.cpp version 7948 (b828e18c7) and Qwen3-Coder-Next-UD-Q5_K_XL-00001-of-00003.gguf, downloaded 5 Feb.

Example prompt:

use write tool to write "{
"first_name": "Sammy",
"last_name": "Shark",
"location": "Ocean",
"online": true,
"followers": 987
}" into test.txt

Result:
← Write test.txt
Error: The write tool was called with invalid arguments: [
{
"expected": "string",
"code": "invalid_type",
"path": [
"content"
],
"message": "Invalid input: expected string, received object"
}
].
Please rewrite the input so it satisfies the expected schema.
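That error means the model emitted the "content" argument as a nested JSON object instead of a JSON string. A minimal sketch of what the schema check is rejecting (the validator function and argument names here are a hypothetical stand-in for opencode's actual schema, not its real code):

```python
import json

# Hypothetical reconstruction of the failing call: the model serialized the
# requested payload as a nested object instead of a string.
bad_args = json.loads('{"filePath": "test.txt", "content": {"first_name": "Sammy"}}')

# What the tool expects: "content" is one plain string (the JSON text, escaped).
good_args = json.loads('{"filePath": "test.txt", "content": "{\\"first_name\\": \\"Sammy\\"}"}')

def validate_write_args(args):
    """Minimal stand-in for the write tool's schema check."""
    errors = []
    if not isinstance(args.get("content"), str):
        errors.append({
            "expected": "string",
            "code": "invalid_type",
            "path": ["content"],
            "message": "Invalid input: expected string, received object",
        })
    return errors

print(validate_write_args(bad_args))   # one invalid_type error, as in the thread
print(validate_write_args(good_args))  # []
```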

Interesting. Below is a screenshot of an example where it works in Visual Studio Code + RooCode. Not sure why it works in some cases but not others:

(Screenshot 2026-02-05 at 14.27.27)

I am also having this problem. It consistently fails on write tool calls in opencode, although other tool calls such as edit seem to work. It was failing with this error when I checked:
Invalid input for tool write: JSON parsing failed: Text: {"content":"valid code","filePath":"/path/to/file","filePath"/path/to/file"}.
Error message: JSON Parse error: Unrecognized token '/'
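The argument text in that error is itself malformed JSON: the second, duplicated "filePath" key is missing its ':' separator, so the parser chokes on the '/' that follows. This can be reproduced directly:

```python
import json

# The raw argument text from the error message above: note the duplicated
# "filePath" key with no ':' before the second "/path/to/file".
raw = '{"content":"valid code","filePath":"/path/to/file","filePath"/path/to/file"}'

try:
    json.loads(raw)
except json.JSONDecodeError as e:
    # Python's parser stops where it expected the ':' and found '/' instead.
    print(e.msg, "at column", e.colno)
```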

I got it working with this reverse proxy, which I wrote some time ago to connect a streaming client to llama-server when it wasn't able to stream while tool calling was used, so it's unlikely to be a model issue.
https://github.com/crashr/llama-stream

The PR for the fix has been tested, but it hasn’t been merged into the main branch yet.
https://github.com/pwilkin/llama.cpp/tree/autoparser

Man, thanks for the hint, this autoparser branch works like a charm. No tool errors anymore. Kudos to the developer.

Many thanks for this! :)
It fully solved the opencode tool call issue!

I see now that others also hit random segfaults on the llama.cpp server, and the autoparser solved those too, so I don't suggest llama-stream as a full solution. It solved the save-file tool call issue I had earlier, but it does not fix the segfaults.
I'm trying the autoparser now.

UPDATE (2026-02-17)
The autoparser solves not only the tool call issue but also the random llama.cpp server segfault issues:
https://github.com/pwilkin/llama.cpp/tree/autoparser

Many thanks.

Holy crap. After two straight days of trying to figure out why llama.cpp was segfaulting when using opencode and Qwen3-Coder-Next, and a full rebuild of the Fedora 43 machine I dev on, pwilkin's autoparser branch solved all of my problems. Now up and running on llama.cpp with ROCm 7.2, and opencode is rock solid. THANK YOU!
