Fixed Chat Template to Fix Tool Calls
Hey @LiquidAI Team!
I am LOVING this model, primarily as my HomeAssistant agent model since it is small and has great tool calling capabilities. I did notice a bug in the chat template, though, and I'm presenting a fix that makes multi-turn tool calling work!
LFM2.5 Chat Template Bug Fixes
TL;DR: The upstream chat template for `LiquidAI/LFM2.5-1.2B-Instruct` breaks multi-turn tool calling. Replace it with the fixed template below. It's a drop-in replacement that handles `content=None`, reconstructs `tool_calls` into the model's native format, and works with any Jinja2-based framework (HuggingFace transformers, vLLM, TGI, etc.).
Fixed Template
{{- bos_token -}}
{#-
Fixed chat template for LiquidAI/LFM2.5-1.2B-Instruct.
Framework-agnostic - works with HuggingFace transformers apply_chat_template(),
vLLM, TGI, and any Jinja2-based rendering pipeline.
Bugs fixed in the upstream HF template:
1. content=None on assistant messages renders "null" (via tojson) instead of empty
2. tool_calls field on assistant messages is completely ignored - multi-turn tool
conversations lose all tool call context
3. content=None on tool messages renders "null" instead of empty
Tool call reconstruction: when assistant messages carry a tool_calls field (OpenAI
format), this template reconstructs them into LFM's native format:
<|tool_call_start|>[func_name(param="value")]<|tool_call_end|>
Handles function.arguments as either a Python dict (pre-parsed) or a JSON string
(OpenAI wire format). Dict arguments get full pythonic formatting; JSON strings
are embedded as-is (best effort - no from_json filter in standard Jinja2).
Options:
keep_past_thinking (default: false) - preserve <think> blocks in non-final assistant turns
Ref: https://docs.liquid.ai/lfm/key-concepts/tool-use
Ref: https://huggingface.co/LiquidAI/LFM2.5-1.2B-Instruct/blob/main/chat_template.jinja
-#}
{%- set keep_past_thinking = keep_past_thinking | default(false) -%}
{#- Extract system prompt from first message if present -#}
{%- set ns = namespace(system_prompt="") -%}
{%- if messages[0]["role"] == "system" -%}
{%- set ns.system_prompt = messages[0]["content"] -%}
{%- set messages = messages[1:] -%}
{%- endif -%}
{#- Append tool definitions to system prompt -#}
{%- if tools -%}
{%- set ns.system_prompt = ns.system_prompt + ("\n" if ns.system_prompt else "") + "List of tools: [" -%}
{%- for tool in tools -%}
{%- if tool is not string -%}
{%- set tool = tool | tojson -%}
{%- endif -%}
{%- set ns.system_prompt = ns.system_prompt + tool -%}
{%- if not loop.last -%}
{%- set ns.system_prompt = ns.system_prompt + ", " -%}
{%- endif -%}
{%- endfor -%}
{%- set ns.system_prompt = ns.system_prompt + "]" -%}
{%- endif -%}
{%- if ns.system_prompt -%}
{{- "<|im_start|>system\n" + ns.system_prompt + "<|im_end|>\n" -}}
{%- endif -%}
{#- Find last assistant index for think-stripping logic -#}
{%- set ns.last_assistant_index = -1 -%}
{%- for message in messages -%}
{%- if message["role"] == "assistant" -%}
{%- set ns.last_assistant_index = loop.index0 -%}
{%- endif -%}
{%- endfor -%}
{#- Macro: format a Python value in the style LFM was trained on -#}
{#- Strings -> double quotes (via tojson), bools -> True/False, None -> None -#}
{%- macro pyval(v) -%}
{%- if v is none -%}None
{%- elif v is boolean and v -%}True
{%- elif v is boolean and not v -%}False
{%- elif v is string -%}{{ v | tojson }}
{%- elif v is mapping -%}{
{%- for mk, mv in v.items() -%}
{{ mk | tojson }}: {{ pyval(mv) }}
{%- if not loop.last -%}, {% endif -%}
{%- endfor -%}}
{%- elif v is iterable -%}[
{%- for item in v -%}
{{ pyval(item) }}
{%- if not loop.last -%}, {% endif -%}
{%- endfor -%}]
{%- else -%}{{ v }}
{%- endif -%}
{%- endmacro -%}
{#- Render each message -#}
{%- for message in messages -%}
{{- "<|im_start|>" + message["role"] + "\n" -}}
{%- if message["role"] == "assistant" -%}
{#- --- ASSISTANT MESSAGE --- -#}
{#- Get text content, treating None as empty -#}
{%- set text_content = message["content"] if "content" in message and message["content"] is string else "" -%}
{#- Reconstruct tool_calls into native format if present -#}
{%- set tc = message.get("tool_calls", none) if message.get is defined else message["tool_calls"] if "tool_calls" in message else none -%}
{%- if tc -%}
{%- set ns.tc_parts = [] -%}
{%- for call in tc -%}
{%- if call is mapping -%}
{%- set func = call["function"] -%}
{%- else -%}
{%- set func = call.function -%}
{%- endif -%}
{%- if func is mapping -%}
{%- set fname = func["name"] -%}
{%- set fargs = func.get("arguments", {}) if func.get is defined else func["arguments"] if "arguments" in func else {} -%}
{%- else -%}
{%- set fname = func.name -%}
{%- set fargs = func.arguments if func.arguments is defined else {} -%}
{%- endif -%}
{#- Arguments can be a dict (pre-parsed) or a JSON string (wire format) -#}
{%- if fargs is mapping -%}
{#- Dict: format as pythonic key=value pairs -#}
{%- set ns.kv_parts = [] -%}
{%- for k, v in fargs.items() -%}
{%- set ns.kv_parts = ns.kv_parts + [k + "=" + pyval(v)] -%}
{%- endfor -%}
{%- set ns.tc_parts = ns.tc_parts + [fname + "(" + ns.kv_parts | join(", ") + ")"] -%}
{%- elif fargs is string -%}
{#- JSON string: embed as-is (no from_json in standard Jinja2) -#}
{%- set ns.tc_parts = ns.tc_parts + [fname + "(" + fargs + ")"] -%}
{%- else -%}
{%- set ns.tc_parts = ns.tc_parts + [fname + "()"] -%}
{%- endif -%}
{%- endfor -%}
{%- set tool_call_str = "<|tool_call_start|>[" + ns.tc_parts | join(", ") + "]<|tool_call_end|>" -%}
{%- else -%}
{%- set tool_call_str = "" -%}
{%- endif -%}
{#- Combine: text content BEFORE tool call markers -#}
{#- This ordering is safe for think-stripping: -#}
{#- <think>...</think>text<|tool_call_start|>...<|tool_call_end|> -#}
{#- split("</think>")[-1] β text + markers preserved -#}
{%- if text_content and tool_call_str -%}
{%- set content = text_content + tool_call_str -%}
{%- elif tool_call_str -%}
{%- set content = tool_call_str -%}
{%- else -%}
{%- set content = text_content -%}
{%- endif -%}
{#- Strip thinking from non-final assistant messages -#}
{%- if not keep_past_thinking and loop.index0 != ns.last_assistant_index -%}
{%- if "</think>" in content -%}
{%- set content = content.split("</think>")[-1] | trim -%}
{%- endif -%}
{%- endif -%}
{{- content -}}
{%- elif message["role"] == "tool" -%}
{#- --- TOOL RESULT MESSAGE --- -#}
{#- Handle content=None gracefully (render empty instead of "null") -#}
{%- set content = message["content"] if "content" in message else "" -%}
{%- if content is none -%}
{%- set content = "" -%}
{%- endif -%}
{%- if content is not string -%}
{%- set content = content | tojson -%}
{%- endif -%}
{{- content -}}
{%- else -%}
{#- --- USER / SYSTEM / OTHER MESSAGES --- -#}
{%- set content = message["content"] -%}
{%- if content is not string -%}
{%- set content = content | tojson -%}
{%- endif -%}
{{- content -}}
{%- endif -%}
{{- "<|im_end|>\n" -}}
{%- endfor -%}
{%- if add_generation_prompt -%}
{{- "<|im_start|>assistant\n" -}}
{%- endif -%}
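For reference, the pyval macro above maps values the same way as this stdlib-only Python sketch (illustrative mirror, not part of the template; the function name is mine):

```python
import json

def pyval(v):
    # Python-side mirror of the template's pyval macro:
    # strings -> double-quoted (via json.dumps), bools -> True/False,
    # None -> None, dicts/lists recurse, everything else -> str(v)
    if v is None:
        return "None"
    if isinstance(v, bool):
        return "True" if v else "False"
    if isinstance(v, str):
        return json.dumps(v)
    if isinstance(v, dict):
        return "{" + ", ".join(f"{json.dumps(k)}: {pyval(x)}" for k, x in v.items()) + "}"
    if isinstance(v, (list, tuple)):
        return "[" + ", ".join(pyval(x) for x in v) + "]"
    return str(v)

print(pyval({"on": True, "ids": ["a", 1], "note": None}))
# {"on": True, "ids": ["a", 1], "note": None}
```

This is the "pythonic" argument style the model was trained on, as opposed to raw JSON (true/false/null).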
Bug 1: content=None on assistant messages renders as "null"
The original template applies tojson to any non-string content:
{%- set content = message["content"] -%}
{%- if content is not string -%}
{%- set content = content | tojson -%}
{%- endif -%}
When a client sends an assistant message with content=None (standard for tool-call-only turns), None | tojson produces the literal string null.
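The root cause is easy to confirm outside of Jinja2: the tojson filter serializes via json.dumps, so None round-trips to the four-character string null.

```python
import json

# Jinja2's tojson filter delegates to json.dumps, so a None
# content field becomes the literal string "null" in the prompt.
rendered = json.dumps(None)
print(rendered)  # null
```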
Reproduce
messages = [
{"role": "user", "content": "What's the weather?"},
{"role": "assistant", "content": None, "tool_calls": [
{"function": {"name": "get_weather", "arguments": {"location": "SF"}}}
]},
{"role": "tool", "content": '{"temp": 65}'},
]
Broken output
<|im_start|>assistant
null<|im_end|>
Fix
Treat None as empty string instead of passing it through tojson:
{%- set text_content = message["content"]
if "content" in message and message["content"] is string else "" -%}
Bug 2: tool_calls field is completely ignored
The original template only reads message["content"]. The structured tool_calls field on assistant messages is never accessed, so the model's tool call history is silently lost in multi-turn conversations.
Reproduce
messages = [
{"role": "user", "content": "What's the weather in SF?"},
{"role": "assistant", "content": None, "tool_calls": [
{"function": {"name": "get_weather", "arguments": {"location": "SF"}}}
]},
{"role": "tool", "content": '{"temp": 65}'},
{"role": "user", "content": "And in NYC?"},
]
Broken output (combined with Bug 1)
<|im_start|>assistant
null<|im_end|>
<|im_start|>tool
{"temp": 65}<|im_end|>
The model sees null where its tool call should be; it has no context for the tool result or the calling pattern to follow.
Fix
Reconstruct tool_calls into LFM's native format:
{%- set tc = message["tool_calls"] if "tool_calls" in message else none -%}
{%- if tc -%}
{#- Build: <|tool_call_start|>[func(key=val)]<|tool_call_end|> -#}
...
{%- endif -%}
Fixed output
<|im_start|>assistant
<|tool_call_start|>[get_weather(location="SF")]<|tool_call_end|><|im_end|>
<|im_start|>tool
{"temp": 65}<|im_end|>
Bug 3: content=None on tool messages renders as "null"
Same root cause as Bug 1, but on the tool role. Some clients send tool results with content=None (e.g., for fire-and-forget actions with no return value).
Reproduce
messages = [
{"role": "user", "content": "Turn on the lights"},
{"role": "assistant", "content": None, "tool_calls": [
{"function": {"name": "light.turn_on", "arguments": {"entity_id": "light.living_room"}}}
]},
{"role": "tool", "content": None},
]
Broken output
<|im_start|>tool
null<|im_end|>
Fix
Explicit None check before tojson:
{%- set content = message["content"] if "content" in message else "" -%}
{%- if content is none -%}
{%- set content = "" -%}
{%- endif -%}
Fixed output
<|im_start|>tool
<|im_end|>
Quick Start
Save the template above as lfm-25-chat-template.jinja, then:
vLLM
python -m vllm.entrypoints.openai.api_server \
--model LiquidAI/LFM2.5-1.2B-Instruct \
--chat-template lfm-25-chat-template.jinja \
--enable-auto-tool-choice --tool-call-parser lfm \
--trust-remote-code
llama-server (llama.cpp)
llama-server \
-hf LiquidAI/LFM2.5-1.2B-Instruct-GGUF:Q8_0 \
--chat-template-file lfm-25-chat-template.jinja \
--jinja -ngl 99
Test: plain chat
curl http://localhost:8000/v1/chat/completions -H "Content-Type: application/json" -d '{
"model": "LFM2.5-1.2B-Instruct",
"messages": [{"role": "user", "content": "What is 2+2?"}]
}'
Test: tool call
curl http://localhost:8000/v1/chat/completions -H "Content-Type: application/json" -d '{
"model": "LFM2.5-1.2B-Instruct",
"messages": [{"role": "user", "content": "What is the weather in San Francisco?"}],
"tools": [{"type": "function", "function": {
"name": "get_weather", "description": "Get weather",
"parameters": {"type": "object", "properties": {"location": {"type": "string"}}, "required": ["location"]}
}}]
}'
Test: multi-turn (the bug this template fixes)
curl http://localhost:8000/v1/chat/completions -H "Content-Type: application/json" -d '{
"model": "LFM2.5-1.2B-Instruct",
"messages": [
{"role": "user", "content": "What is the weather in SF?"},
{"role": "assistant", "content": null, "tool_calls": [
{"id": "call_0", "type": "function", "function": {"name": "get_weather", "arguments": "{\"location\": \"SF\"}"}}
]},
{"role": "tool", "tool_call_id": "call_0", "content": "{\"temp\": 65}"},
{"role": "user", "content": "And in NYC?"}
],
"tools": [{"type": "function", "function": {
"name": "get_weather", "description": "Get weather",
"parameters": {"type": "object", "properties": {"location": {"type": "string"}}, "required": ["location"]}
}}]
}'
Template Options
| Option | Default | Description |
|---|---|---|
| `keep_past_thinking` | `false` | Preserve `<think>` blocks in non-final assistant turns |
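When keep_past_thinking is false (the default), the template strips reasoning from non-final assistant turns with the equivalent of this one-liner (a sketch; the function name is mine):

```python
def strip_thinking(content: str) -> str:
    # Drop everything up to and including the last </think>,
    # keeping trailing text and any tool-call markers intact.
    if "</think>" in content:
        return content.split("</think>")[-1].strip()
    return content

print(strip_thinking("<think>reasoning...</think>The answer is 4."))
# The answer is 4.
```

Because the template places text content and tool-call markers after the closing `</think>`, stripping never destroys tool call context.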
Impact
These bugs compound in multi-turn tool calling (e.g., Home Assistant, agentic workflows). After one round of tool use, the model sees null instead of its previous tool calls, loses the calling pattern, and degrades rapidly, often refusing to call tools or hallucinating responses instead.