phoebdroid / Qwen3-30B-A3B-Hybrid-Auto
nvm...
@phoebdroid You need to YAML-string-encode your chat template the same way it was done for the original, unmodified chat template. Mainly, escape all problematic characters like quotes and newlines.
nvm...
@phoebdroid
While it already looks much better now, you still have not escaped the " characters using \". If you are unsure how to escape a double-quote-enclosed YAML string, please take a look at https://www.geeksforgeeks.org/devops/how-to-escape-the-special-character-in-yaml-with-examples/
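One reliable way to get the escaping right is to let a serializer produce the double-quoted scalar for you instead of escaping by hand. A minimal Python sketch (the short sample string is just a stand-in for the real chat template): since YAML is a superset of JSON, the output of `json.dumps` is also a valid double-quoted YAML scalar.

```python
import json

# Stand-in for the real chat template; quotes and newlines are exactly
# the characters that need escaping in a double-quoted YAML scalar.
template = 'You are Qwen3 30B Hybrid Auto\n{%- if tools %}"Tools"{%- endif %}'

# json.dumps escapes " as \" and newlines as \n; because YAML is a
# superset of JSON, the result is also a valid double-quoted YAML scalar.
encoded = json.dumps(template)
print(encoded)
```

Pasting the printed value into the config avoids hand-escaping mistakes entirely.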
nvm...
nvm...
Why did you remove your messages and close this before I even had time to respond? Do you no longer want us to do your model and respond to your message? I was simply busy with other things, so my delayed response was in no way meant to communicate a refusal to queue your model or answer your request. Sorry that I didn't see your message on Reddit; I'm not actively checking Reddit as it's a platform I barely use.
It's queued! :D
No idea if that was still your intention, but I decided to queue it anyway as your model is still up.
Regarding the offer to collaborate: I already work together with some model creators like Guilherme34, RichardErkhov and drwlf. You could probably join the MedraAI group, where we all help each other with developing AI models and the platforms around them (like the amazing web interface you are currently developing). Unfortunately this all happens on Discord, so with you being unable to use Discord it might not work out, as without everyone helping each other, collaboration seems unfeasible from a time-investment perspective given how busy I usually am.
You can check for progress at http://hf.tst.eu/status.html or regularly check the model summary page at https://hf.tst.eu/model#Qwen3-30B-A3B-Hybrid-Auto-GGUF
nvm
The model is working, as otherwise the imatrix computation would almost certainly fail! :D
-2000 61 Qwen3-30B-A3B-Hybrid-Auto run/imatrix (GPU-18) 16/48 5.40s/c 1.5/28.6m(9.7-22.7) [21/318] 6.2585
nvm
nvm
nvm
I pulled the model exactly when I wrote you that I queued it, a few hours ago, so your current version must include the wrong template. No problem; just update the model and we will nuke and requantize it with the correct chat template.
I stopped marco from continuing to generate further quants for Qwen3-30B-A3B-Hybrid-Auto by setting the override and interrupt flags, so no more resources are wasted on quantizing the faulty version. Please fix the chat template and let me know once it is fixed.
nvm
nvm
If you want, you can make the quants private so people don't download the faulty version.
As soon as you fix the model, I will nuke all our current quants as part of the requantization process, but I can also nuke them now if you prefer.
nvm
nvm
nvm
OK, great. I will now start to requant this model.
nvm
nvm
You have 39 < but 40 > characters in your chat template, so something must be wrong, as they always need to match in Jinja unless I'm missing something.
Edit1: No, that is fine because of if loop.index0 > ns.last_query_index
Edit2: { and } both occur 135 times, so they are fine.
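The delimiter-count sanity check above is easy to script. An illustrative Python sketch (not part of any tool mentioned here):

```python
def delimiter_counts(template: str) -> dict:
    # Count the characters that normally come in matched pairs in a Jinja
    # template. A mismatch is only a hint, not proof of a bug: comparison
    # operators like > legitimately unbalance < and >.
    return {ch: template.count(ch) for ch in "<>{}"}

counts = delimiter_counts("{%- if loop.index0 > ns.last_query_index %}{{ x }}{%- endif %}")
print(counts)  # {'<': 0, '>': 1, '{': 4, '}': 4} -- valid despite the < / > mismatch
```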
nvm
With https://j2live.ttl255.com/ I'm getting:
Rendering error:
expected token 'end of print statement', got 'name'
Using:
{%- if tools %} {{- '<|im_start|>system
' }} {%- if messages[0].role == 'system' %} {{- 'You are Qwen3 30B Hybrid Auto
' + messages[0].content }} {%- endif %} {{- "
# Important Guidelines:" }} {{- "
* You have the ability to enable step-by-step thinking, for each response, depending on the complexity of the query. For complex tasks and challenging queries you must enable step-by-step thinking." }} {{- "
* To enable step-by-step thinking, you must use <think> XML tag at the beginning of generation. This will enable thinking. At the end of your thinking you must use </think> XML tag, and after that deliver your final answer to the user. " }} {{- "If you use any spaces inside these tags, thinking will fail. Make sure to never use spaces inside the tags." }} {{- "
# Tools
Upon user request you may call one or more of the following tools.
You are provided with the tool signatures within <tools></tools> XML tags:
<tools>" }} {%- for tool in tools %} {{- "
" }} {{- tool | tojson }} {%- endfor %} {{- "
</tools>
For each tool call, return a json object with tool name and arguments within <tool_call></tool_call> XML tags:
<tool_call>
{"name": <tool-name>, "arguments": <args-json-object>}
</tool_call>" }} {{- "<|im_end|>
" }} {%- else %} {%- if messages[0].role == 'system' %} {{- '<|im_start|>system
' }} {{- 'You are Qwen3 30B Hybrid Auto
' + messages[0].content }} {{- "
# Important Guidelines:" }} {{- "
* You have the ability to enable step-by-step thinking for each response, depending on the complexity of the query. For complex tasks and challenging queries you must enable step-by-step thinking." }} {{- "
* To enable step-by-step thinking you must use <think> XML tag at the beginning of generation. This will enable thinking. At the end of your thinking you must use </think> XML tag, and after that deliver your final answer to the user. " }} {{- "If you use any spaces inside these tags, thinking will fail. Make sure to never use spaces inside the tags." }} {{- '<|im_end|>
' }} {%- endif %} {%- endif %} {%- set ns = namespace(multi_step_tool=true, last_query_index=messages|length - 1) %} {%- for message in messages[::-1] %} {%- set index = (messages|length - 1) - loop.index0 %} {%- if ns.multi_step_tool and message.role == "user" and message.content is string and not(message.content.startswith('<tool_response>') and message.content.endswith('</tool_response>')) %} {%- set ns.multi_step_tool = false %} {%- set ns.last_query_index = index %} {%- endif %} {%- endfor %} {%- for message in messages %} {%- if message.content is string %} {%- set content = message.content %} {%- else %} {%- set content = '' %} {%- endif %} {%- if (message.role == "user") or (message.role == "system" and not loop.first) %} {{- '<|im_start|>' + message.role + '
' + content + '<|im_end|>' + '
' }} {%- elif message.role == "assistant" %} {%- set reasoning_content = '' %} {%- if message.reasoning_content is string %} {%- set reasoning_content = message.reasoning_content %} {%- else %} {%- if '</think>' in content %} {%- set reasoning_content = content.split('</think>')[0].rstrip('
').split('<think>')[-1].lstrip('
') %} {%- set content = content.split('</think>')[-1].lstrip('
') %} {%- endif %} {%- endif %} {%- if loop.index0 > ns.last_query_index %} {%- if loop.last or (not loop.last and reasoning_content) %} {{- '<|im_start|>' + message.role + '
<think>
' + reasoning_content.strip('
') + '
</think>
' + content.lstrip('
') }} {%- else %} {{- '<|im_start|>' + message.role + '
' + content }} {%- endif %} {%- else %} {{- '<|im_start|>' + message.role + '
' + content }} {%- endif %} {%- if message.tool_calls %} {%- for tool_call in message.tool_calls %} {%- if (loop.first and content) or (not loop.first) %} {{- '
' }} {%- endif %} {%- if tool_call.function %} {%- set tool_call = tool_call.function %} {%- endif %} {{- '<tool_call>
{"name": "' }} {{- tool_call.name }} {{- '", "arguments": ' }} {%- if tool_call.arguments is string %} {{- tool_call.arguments }} {%- else %} {{- tool_call.arguments | tojson }} {%- endif %} {{- '}
</tool_call>' }} {%- endfor %} {%- endif %} {{- '<|im_end|>
' }} {%- elif message.role == "tool" %} {%- if loop.first or (messages[loop.index0 - 1].role != "tool") %} {{- '<|im_start|>tool' }} {%- endif %} {{- '
<tool_response>
' }} {{- content }} {{- '
</tool_response>' }} {%- if loop.last or (messages[loop.index0 + 1].role != "tool") %} {{- '<|im_end|>
' }} {%- endif %} {%- endif %} {%- endfor %} {%- if add_generation_prompt %} {{- '<|im_start|>assistant
' }} {%- endif %}
nvm
nvm
nvm
This will probably be what llama.cpp sees:
{%- if tools %} {{- '<|im_start|>system\n' }} {%- if messages[0].role == 'system' %} {{- 'You are Qwen3 30B Hybrid Auto\n' + messages[0].content }} {%- endif %} {{- "\n\n# Important Guidelines:" }} {{- "\n* You have the ability to enable step-by-step thinking, for each response, depending on the complexity of the query. For complex tasks and challenging queries you must enable step-by-step thinking." }} {{- "\n* To enable step-by-step thinking, you must use <think> XML tag at the beginning of generation. This will enable thinking. At the end of your thinking you must use </think> XML tag, and after that deliver your final answer to the user. " }} {{- "If you use any spaces inside these tags, thinking will fail. Make sure to never use spaces inside the tags." }} {{- "\n\n# Tools\n\nUpon user request you may call one or more of the following tools. \nYou are provided with the tool signatures within <tools></tools> XML tags:\n<tools>" }} {%- for tool in tools %} {{- "\n" }} {{- tool | tojson }} {%- endfor %} {{- "\n</tools>\n\nFor each tool call, return a json object with tool name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{"name": <tool-name>, "arguments": <args-json-object>}\n</tool_call>" }} {{- "<|im_end|>\n" }} {%- else %} {%- if messages[0].role == 'system' %} {{- '<|im_start|>system\n' }} {{- 'You are Qwen3 30B Hybrid Auto\n' + messages[0].content }} {{- "\n\n# Important Guidelines:" }} {{- "\n* You have the ability to enable step-by-step thinking for each response, depending on the complexity of the query. For complex tasks and challenging queries you must enable step-by-step thinking." }} {{- "\n* To enable step-by-step thinking you must use <think> XML tag at the beginning of generation. This will enable thinking. At the end of your thinking you must use </think> XML tag, and after that deliver your final answer to the user. " }} {{- "If you use any spaces inside these tags, thinking will fail. 
Make sure to never use spaces inside the tags." }} {{- '<|im_end|>\n' }} {%- endif %} {%- endif %} {%- set ns = namespace(multi_step_tool=true, last_query_index=messages|length - 1) %} {%- for message in messages[::-1] %} {%- set index = (messages|length - 1) - loop.index0 %} {%- if ns.multi_step_tool and message.role == "user" and message.content is string and not(message.content.startswith('<tool_response>') and message.content.endswith('</tool_response>')) %} {%- set ns.multi_step_tool = false %} {%- set ns.last_query_index = index %} {%- endif %} {%- endfor %} {%- for message in messages %} {%- if message.content is string %} {%- set content = message.content %} {%- else %} {%- set content = '' %} {%- endif %} {%- if (message.role == "user") or (message.role == "system" and not loop.first) %} {{- '<|im_start|>' + message.role + '\n' + content + '<|im_end|>' + '\n' }} {%- elif message.role == "assistant" %} {%- set reasoning_content = '' %} {%- if message.reasoning_content is string %} {%- set reasoning_content = message.reasoning_content %} {%- else %} {%- if '</think>' in content %} {%- set reasoning_content = content.split('</think>')[0].rstrip('\n').split('<think>')[-1].lstrip('\n') %} {%- set content = content.split('</think>')[-1].lstrip('\n') %} {%- endif %} {%- endif %} {%- if loop.index0 > ns.last_query_index %} {%- if loop.last or (not loop.last and reasoning_content) %} {{- '<|im_start|>' + message.role + '\n<think>\n' + reasoning_content.strip('\n') + '\n</think>\n\n' + content.lstrip('\n') }} {%- else %} {{- '<|im_start|>' + message.role + '\n' + content }} {%- endif %} {%- else %} {{- '<|im_start|>' + message.role + '\n' + content }} {%- endif %} {%- if message.tool_calls %} {%- for tool_call in message.tool_calls %} {%- if (loop.first and content) or (not loop.first) %} {{- '\n' }} {%- endif %} {%- if tool_call.function %} {%- set tool_call = tool_call.function %} {%- endif %} {{- '<tool_call>\n{"name": "' }} {{- tool_call.name }} {{- '", 
"arguments": ' }} {%- if tool_call.arguments is string %} {{- tool_call.arguments }} {%- else %} {{- tool_call.arguments | tojson }} {%- endif %} {{- '}\n</tool_call>' }} {%- endfor %} {%- endif %} {{- '<|im_end|>\n' }} {%- elif message.role == "tool" %} {%- if loop.first or (messages[loop.index0 - 1].role != "tool") %} {{- '<|im_start|>tool' }} {%- endif %} {{- '\n<tool_response>\n' }} {{- content }} {{- '\n</tool_response>' }} {%- if loop.last or (messages[loop.index0 + 1].role != "tool") %} {{- '<|im_end|>\n' }} {%- endif %} {%- endif %} {%- endfor %} {%- if add_generation_prompt %} {{- '<|im_start|>assistant\n' }} {%- endif %}
https://j2live.ttl255.com/ states:
Rendering error:
expected token 'end of print statement', got 'name'
You must have made some mistake when YAML-string-encoding your chat template, as your original template turns out to be valid on https://j2live.ttl255.com/
nvm
I think I see what went wrong. Check this line in your original template:
{{- "\n</tools>\n\nFor each tool call, return a json object with tool name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <tool-name>, \"arguments\": <args-json-object>}\n</tool_call>" }}
Mainly, take note of {\"name\": <tool-name>, \"arguments\"
When you YAML-string-decode your template, that results in:
{"name": <tool-name>, "arguments"
Note how the backslashes are suddenly missing!
When you YAML-encode your chat template, \" should translate to \\\" and not \".
- Replace \" with \\\"
- Replace " with \"
- Replace CRLF (or LF if you are on anything other than Windows) with nothing, to remove all newlines.
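To see why the extra level of escaping matters, here is a tiny illustrative Python sketch of what one round of double-quote unescaping does to each variant (simplified: it only handles backslash escapes, while real YAML supports many more):

```python
def unescape_once(s: str) -> str:
    # Minimal model of double-quoted YAML unescaping: every backslash
    # consumes the character that follows it.
    out, i = [], 0
    while i < len(s):
        if s[i] == "\\" and i + 1 < len(s):
            out.append(s[i + 1])
            i += 2
        else:
            out.append(s[i])
            i += 1
    return "".join(out)

broken = r'{\"name\": x}'     # what the faulty YAML contained
fixed = r'{\\\"name\\\": x}'  # \" escaped as \\\" so it survives decoding

print(unescape_once(broken))  # {"name": x}   backslashes lost, Jinja breaks
print(unescape_once(fixed))   # {\"name\": x} the escaped quote survives
```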
nvm I give up. I won't be doing this. I have everything working perfectly here and that's where I stop.
Can you please stop messing with your messages? I find it quite disrespectful that I spent 40 minutes helping you, just for you to then edit all your messages.
nvm I give up. I won't be doing this. I have everything working perfectly here and that's where I stop.
All that you missed were 4 backslash characters to escape the backslashes before the quotation marks.
thank you. sorry for your time. I've deleted the model already. I won't be doing this. I apologize for taking your time
Why can't you just add the missing 4 characters now that we spent almost an hour debugging this issue?
@phoebdroid
Sorry, if you make me spend an hour debugging your chat template, it's too late. I'm not letting my valuable time go to waste, so screw you; I just recreated your entire model from scratch, but actually working, under: https://huggingface.co/nicoboss/Qwen3-30B-A3B-Hybrid-Auto
I'm now nuking the broken quants from your model and replacing them with quants from my working one.
That's OK. If you got it working, this is very good news for the community. Thank you. Sorry for taking your time.
btw, just looked at "your" model, I've noticed you've used my words, my screenshot, my chat template, without a single mention of me :) Not that I care about myself to be honest, I am Not4Fame, I'm a pretty selfless person, but now I know you. I feel validated even more for disappearing now. I'll sleep good tonight, I wish the same for you.
I mentioned you inside the commit message where I added your chat template, and there I also linked to this very discussion, from which I copied your model description. I will obviously mention you on the model card as well; I just wanted to make sure it actually works before spending any time improving the model card. It would have been much better for everyone if you had just kept your model under your account, so you would get proper credit for it and have the ability to maintain it yourself. Me having to recreate your model was totally avoidable effort. Really, the only thing missing in your model were 4 backslash characters. I find it quite crazy that you decided to go nuclear and delete everything instead of spending the probably 10 seconds it would have taken you to add them, after having me spend almost an hour debugging it. Feel free to duplicate it to your account using https://huggingface.co/spaces/huggingface-projects/repo_duplicator and I will link to it, but I will likely still keep a copy under my account, as I don't feel you can take proper care of it after deleting the previous one for no reason.
@phoebdroid I now added this section to the top of the model card of https://huggingface.co/nicoboss/Qwen3-30B-A3B-Hybrid-Auto. I hope you are happy now:
Model card and chat template from @phoebdroid . You can read about the unfortunate circumstances that forced me to recreate this model under https://huggingface.co/mradermacher/model_requests/discussions/1362. To get the full picture you will need to look at the edit history of his messages.
I have no interest; all I wanted from the beginning was for the community to have it, that's all. What I've achieved is something that doesn't exist yet: an autonomously reasoning model. And before you blabber about this being YOYO-AI's model, with all respect towards YOYO-AI for the NuSlerp merge: without my chat template, the model they've released is borderline unusable. It's a confused Frankenstein model which doesn't know what to do.
You forced me into creating the model; I never wanted to. I just made all this possible by creating the chat template hack. And all I asked from you was HELP. Because I am someone who knows what they know and don't, and I'm never shy to ask for help. All of this could have been much, MUCH easier and faster if you had just listened to me and embedded the chat template into YOYO-AI's model and done the quant. I told you I've never done any of this and that I don't know about any of this. But you wouldn't do that, probably due to your valuable-time concerns. The result? A sad and heartbroken man walks away from his creation. I don't allow anything or anyone to make me sad. So I want nothing to do with any of this. If the model works and the community gets an auto-reasoning Qwen3, I'll call it a nice day.
P.S. You were not able to copy my chat template correctly. If you use it the way it is in "your" model right now, it will not work.
Also. I do NOT give my consent for you to use "my" words, the lines "I" wrote when I was having fun, "my" screenshot from "my" custom user interface etc.
The fact that you went ahead and made them all yours all of a sudden speaks volumes about you, Nico.
I, however, do give my consent for my chat template to be used, because that's what I wanted for the community in the first place.
I also give my consent for you to keep using the name "I" gave the model. (Not that you seem to respect any of this or have asked, but anyway, for the community.)
You MUST remove the two remnant lines in the chat template which you copied from an old version, though; they come from my personal chat template, the ones about my turn_enabler tool. If you don't remove them, they will end up just confusing the model.
And no Nico, screw "you".
I have no interest; all I wanted from the beginning was for the community to have it, that's all. What I've achieved is something that doesn't exist yet: an autonomously reasoning model.
Which is my exact goal as well.
You forced me into creating the model; I never wanted to. I just made all this possible by creating the chat template hack.
I did not at all force you to create it. I just recommended that you do, because without the chat template embedded inside the model, nobody will be able to use it. You could have simply said from the very beginning that this is not something you want to do.
And all I asked from you was HELP. Because I am someone who knows what they know and don't, and I'm never shy to ask for help. All of this could have been much, MUCH easier and faster if you had just listened to me and embedded the chat template into YOYO-AI's model and done the quant. I told you I've never done any of this and that I don't know about any of this. But you wouldn't do that, probably due to your valuable-time concerns.
I spent hours helping you on HuggingFace and Reddit. I'm perfectly fine with helping you and will keep helping you in the future if you ask me. I want you and everyone else to improve and learn how to create great LLMs. You could have said at any point that you just wanted me to do it instead of working together, and I would have done so. I also want to mention that I genuinely had to debug your chat template for 40 minutes to spot the mistake, so it's not like I had the solution all along and withheld it for no reason. I was about to post the fixed chat template the moment you nuked everything.
The result? A sad and heartbroken man walks away from his creation. I don't allow anything or anyone to make me sad. So I want nothing to do with any of this. If the model works and the community gets an auto-reasoning Qwen3, I'll call it a nice day.
I'm honestly also really sad about how it all turned out, but I can't see how I caused this terrible situation. I gave my very best helping you to succeed. Only once you decided to go nuclear did I decide to upload a fork of it, so our work wasn't all for nothing.
I'm actually even more sad about how you decided to just abandon your amazing creation. You put so much work into it and now just walk away, leaving it all behind. This is a personal choice which I will respect, but at least for me it's likely the saddest part.
P.S. You were not able to copy my chat template correctly. If you use it the way it is in "your" model right now, it will not work.
I'm using exactly the one we worked on fixing. Keep in mind that the one in the model card is not the one inside the actual model.
The fact that you went ahead and made them all yours all of a sudden speaks volumes about you, Nico.
I gave you proper credit. The first sentence on the model card is "Model card and chat template from @phoebdroid". You released the model, including your chat template, under the Apache 2.0 license, so I have the right to fork it. Legally I don't even need to give credit, but morally it is for sure the right thing to do, given how much time and effort went into coming up with this awesome idea and implementing it.
You MUST remove the two remnant lines in the chat template which you copied from an old version, though; they come from my personal chat template, the ones about my turn_enabler tool. If you don't remove them, they will end up just confusing the model.
I will remove them from the model card, not because I legally have to (they were part of the model you released) but because they make the model worse if used.
And no Nico, screw "you".
I regret how my first message came over as quite rude and unprofessional, but you have to see things from my perspective. I have a massive backlog of relatively urgent things to do, yet I made time to help a stranger on the internet because I felt they were creating something awesome and needed my help, just to then be told that it was all for nothing the moment I was finally done, so obviously I was quite emotional at that moment.
Man, I have no idea what I'm doing. I'm trying to make it happen but I've never done this before; can you please help me if you have time? I don't know this YAML stuff. Here is the working version of the chat template from my system:
{%- if tools %}
{{- '<|im_start|>system\n' }}
{%- if messages[0].role == 'system' %}
{{- 'You are Qwen3 30B Hybrid Auto\n' + messages[0].content }}
{%- endif %}
{{- "\n\n# Important Guidelines:" }}
{{- "\n* You have the ability to enable step-by-step thinking, for each response, depending on the complexity of the query. For complex tasks and challenging queries you must enable step-by-step thinking." }}
{{- "\n* To enable step-by-step thinking, you must use <think> XML tag at the beginning of generation. This will enable thinking. At the end of your thinking you must use </think> XML tag, and after that deliver your final answer to the user. " }}
{{- "\n* If you use any spaces inside these tags, thinking will fail. Make sure to never use spaces inside the tags. " }}
{{- "\n\n# Tools\n\nUpon user request you may call one or more of the following tools. \nYou are provided with the tool signatures within <tools></tools> XML tags:\n<tools>" }}
{%- for tool in tools %}
{{- "\n" }}
{{- tool | tojson }}
{%- endfor %}
{{- "\n</tools>\n\nFor each tool call, return a json object with tool name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <tool-name>, \"arguments\": <args-json-object>}\n</tool_call>" }}
{{- "\nBy default you have 5 tool-turns within a single assistant turn, if you ever need to chain more than 5 tools back-to-back, in a single turn, you need to first use turn_enabler to add more turns." }}
{{- "\nRemember, tool_enabler itself will consume one tool-turn (it can enable multiple tools at once if needed). This means by default, you can call tool_enabler + 4 more tools in a single assistant turn." }}
{{- "<|im_end|>\n" }}
{%- else %}
{%- if messages[0].role == 'system' %}
{{- '<|im_start|>system\n' }}
{{- 'You are Qwen3 30B Hybrid Auto\n' + messages[0].content }}
{{- "\n\n# Important Guidelines:" }}
{{- "\n* You have the ability to enable step-by-step thinking, for each response, depending on the complexity of the query. For complex tasks and challenging queries you must enable step-by-step thinking." }}
{{- "\n* To enable step-by-step thinking, you must use <think> XML tag at the beginning of generation. This will enable thinking. At the end of your thinking you must use </think> XML tag, and after that deliver your final answer to the user. " }}
{{- "\n* If you use any spaces inside these tags, thinking will fail. Make sure to never use spaces inside the tags. " }}
{{- '<|im_end|>\n' }}
{%- endif %}
{%- endif %}
{%- set ns = namespace(multi_step_tool=true, last_query_index=messages|length - 1) %}
{%- for message in messages[::-1] %}
{%- set index = (messages|length - 1) - loop.index0 %}
{%- if ns.multi_step_tool and message.role == "user" and message.content is string and not(message.content.startswith('<tool_response>') and message.content.endswith('</tool_response>')) %}
{%- set ns.multi_step_tool = false %}
{%- set ns.last_query_index = index %}
{%- endif %}
{%- endfor %}
{%- for message in messages %}
{%- if message.content is string %}
{%- set content = message.content %}
{%- else %}
{%- set content = '' %}
{%- endif %}
{%- if (message.role == "user") or (message.role == "system" and not loop.first) %}
{{- '<|im_start|>' + message.role + '\n' + content + '<|im_end|>' + '\n' }}
{%- elif message.role == "assistant" %}
{%- set reasoning_content = '' %}
{%- if message.reasoning_content is string %}
{%- set reasoning_content = message.reasoning_content %}
{%- else %}
{%- if '</think>' in content %}
{%- set reasoning_content = content.split('</think>')[0].rstrip('\n').split('<think>')[-1].lstrip('\n') %}
{%- set content = content.split('</think>')[-1].lstrip('\n') %}
{%- endif %}
{%- endif %}
{%- if loop.index0 > ns.last_query_index %}
{%- if loop.last or (not loop.last and reasoning_content) %}
{{- '<|im_start|>' + message.role + '\n<think>\n' + reasoning_content.strip('\n') + '\n</think>\n\n' + content.lstrip('\n') }}
{%- else %}
{{- '<|im_start|>' + message.role + '\n' + content }}
{%- endif %}
{%- else %}
{{- '<|im_start|>' + message.role + '\n' + content }}
{%- endif %}
{%- if message.tool_calls %}
{%- for tool_call in message.tool_calls %}
{%- if (loop.first and content) or (not loop.first) %}
{{- '\n' }}
{%- endif %}
{%- if tool_call.function %}
{%- set tool_call = tool_call.function %}
{%- endif %}
{{- '<tool_call>\n{"name": "' }}
{{- tool_call.name }}
{{- '", "arguments": ' }}
{%- if tool_call.arguments is string %}
{{- tool_call.arguments }}
{%- else %}
{{- tool_call.arguments | tojson }}
{%- endif %}
{{- '}\n</tool_call>' }}
{%- endfor %}
{%- endif %}
{{- '<|im_end|>\n' }}
{%- elif message.role == "tool" %}
{%- if loop.first or (messages[loop.index0 - 1].role != "tool") %}
{{- '<|im_start|>tool' }}
{%- endif %}
{{- '\n<tool_response>\n' }}
{{- content }}
{{- '\n</tool_response>' }}
{%- if loop.last or (messages[loop.index0 + 1].role != "tool") %}
{{- '<|im_end|>\n' }}
{%- endif %}
{%- endif %}
{%- endfor %}
{%- if add_generation_prompt %}
{{- '<|im_start|>assistant\n' }}
{%- endif %}
Wow, so the chat template you gave me to fix contains the turn_enabler lines?!
Can you please tell me which 2 lines you want me to remove? Otherwise I'm unable to remove them.
The only line containing turn_enabler I can find is this one:
{{- "\nBy default you have 5 tool-turns within a single assistant turn, if you ever need to chain more than 5 tools back-to-back, in a single turn, you need to first use turn_enabler to add more turns." }}