Hallucination when parsing JSON responses containing numerical data such as 1234.00
While testing the model, I've noticed that it has trouble parsing JSON responses containing numerical data such as 1234.00, but only when that data is in a `tool_response`.
Hey @jeisonvendetta ,
Thank you for reporting this issue. Floating-point numbers with trailing zeros can sometimes behave unpredictably, but it's unusual for this to occur strictly within a `tool_response` block. This might be related either to tokenisation quirks or to how the reasoning parser handles JSON data.
To help us reproduce and diagnose the behavior, could you please provide a bit more detail about your setup? Specifically, it would be helpful to know:
- The exact system prompt and tool schema you are using.
- The raw `tool_response` string being injected back into the context.
- The inference framework you're using (Hugging Face Transformers, LiteRT, vLLM).
- Your generation parameters, such as temperature, top-p, and whether `skip_special_tokens` is enabled.
With these details, we can run a reproduction on our side and identify whether this is a model-side issue or an artifact of JSON parsing.
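For context, here is a minimal sketch (independent of any inference framework, with a hypothetical payload) of one way trailing zeros can be silently lost: a standard JSON round-trip normalizes `1234.00` to `1234.0`, so any downstream logic that compares against the raw literal will not match.

```python
import json

# Hypothetical tool_response payload containing a float with trailing
# zeros, similar to the value described in the report.
raw_tool_response = '{"price": 1234.00}'

parsed = json.loads(raw_tool_response)
# json.loads parses 1234.00 as the float 1234.0, dropping the trailing zero.
print(parsed["price"])

# Re-serializing does not restore the original literal, so the string
# injected back into the context differs from the original tool output.
print(json.dumps(parsed))
```

If the model was trained on tool outputs that preserve the exact literal, this normalization step alone could explain a mismatch, which is why seeing the raw string you inject is so useful.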