Tags: Image-Text-to-Text · Transformers · Safetensors · English · gemma4 · text-generation-inference · unsloth · conversational
Instructions for using PS4Research/gemma-4-test-delete with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use PS4Research/gemma-4-test-delete with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("image-text-to-text", model="PS4Research/gemma-4-test-delete")
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/p-blog/candy.JPG"},
            {"type": "text", "text": "What animal is on the candy?"},
        ],
    },
]
pipe(text=messages)
```

```python
# Load model directly
from transformers import AutoProcessor, AutoModelForImageTextToText

processor = AutoProcessor.from_pretrained("PS4Research/gemma-4-test-delete")
model = AutoModelForImageTextToText.from_pretrained("PS4Research/gemma-4-test-delete")
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/p-blog/candy.JPG"},
            {"type": "text", "text": "What animal is on the candy?"},
        ],
    },
]
inputs = processor.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40)
print(processor.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```

- Notebooks
- Google Colab
- Kaggle
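In the Transformers example above, the final `processor.decode(outputs[0][inputs["input_ids"].shape[-1]:])` prints only the newly generated text: `generate` returns the prompt tokens followed by the completion, so slicing at the prompt length drops the echoed prompt. A minimal sketch of that slice with invented dummy token IDs (no model required):

```python
# generate() returns the prompt ids followed by the new tokens,
# so slicing at the prompt length keeps only the completion.
prompt_ids = [2, 10, 11, 12]               # stand-in for inputs["input_ids"][0]
full_output = prompt_ids + [20, 21, 1]     # stand-in for outputs[0]

new_tokens = full_output[len(prompt_ids):]  # same idea as the shape[-1] slice
print(new_tokens)  # → [20, 21, 1]
```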
- Local Apps
- vLLM
How to use PS4Research/gemma-4-test-delete with vLLM:
Install from pip and serve the model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "PS4Research/gemma-4-test-delete"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "PS4Research/gemma-4-test-delete",
    "messages": [
      {
        "role": "user",
        "content": [
          {
            "type": "text",
            "text": "Describe this image in one sentence."
          },
          {
            "type": "image_url",
            "image_url": {
              "url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"
            }
          }
        ]
      }
    ]
  }'
```

Use Docker

```shell
docker model run hf.co/PS4Research/gemma-4-test-delete
```
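The curl examples above post a JSON body to an OpenAI-compatible `chat/completions` endpoint and get back a JSON completion. As a sketch, the same request body can be built in Python and the assistant text read out of a response in the standard `choices[0].message.content` shape (the sample response content here is invented; the endpoint URL must match your running server):

```python
import json

# Same payload as the curl example, built as a Python dict.
payload = {
    "model": "PS4Research/gemma-4-test-delete",
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image in one sentence."},
                {
                    "type": "image_url",
                    "image_url": {
                        "url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"
                    },
                },
            ],
        }
    ],
}
body = json.dumps(payload)  # POST this to http://localhost:8000/v1/chat/completions

# OpenAI-compatible servers answer in the chat/completions shape;
# the reply text lives under choices[0].message.content.
sample_response = {
    "choices": [
        {"message": {"role": "assistant", "content": "A statue stands in the bay."}}
    ]
}
answer = sample_response["choices"][0]["message"]["content"]
print(answer)  # → A statue stands in the bay.
```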
- SGLang
How to use PS4Research/gemma-4-test-delete with SGLang:
Install from pip and serve the model
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "PS4Research/gemma-4-test-delete" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "PS4Research/gemma-4-test-delete",
    "messages": [
      {
        "role": "user",
        "content": [
          {
            "type": "text",
            "text": "Describe this image in one sentence."
          },
          {
            "type": "image_url",
            "image_url": {
              "url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"
            }
          }
        ]
      }
    ]
  }'
```

Use Docker images

```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
  --model-path "PS4Research/gemma-4-test-delete" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "PS4Research/gemma-4-test-delete",
    "messages": [
      {
        "role": "user",
        "content": [
          {
            "type": "text",
            "text": "Describe this image in one sentence."
          },
          {
            "type": "image_url",
            "image_url": {
              "url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"
            }
          }
        ]
      }
    ]
  }'
```

- Unsloth Studio
How to use PS4Research/gemma-4-test-delete with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
```shell
curl -fsSL https://unsloth.ai/install.sh | sh

# Run unsloth studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for PS4Research/gemma-4-test-delete to start chatting
```
Install Unsloth Studio (Windows)
```shell
irm https://unsloth.ai/install.ps1 | iex

# Run unsloth studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for PS4Research/gemma-4-test-delete to start chatting
```
Using HuggingFace Spaces for Unsloth
```shell
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for PS4Research/gemma-4-test-delete to start chatting
```
Load model with FastModel
```shell
pip install unsloth
```

```python
from unsloth import FastModel

model, tokenizer = FastModel.from_pretrained(
    model_name="PS4Research/gemma-4-test-delete",
    max_seq_length=2048,
)
```

- Docker Model Runner
How to use PS4Research/gemma-4-test-delete with Docker Model Runner:
```shell
docker model run hf.co/PS4Research/gemma-4-test-delete
```
The tokenizer configuration shipped with the model (special tokens, response schema, and chat template):

```json
{
  "audio_token": "<|audio|>",
  "backend": "tokenizers",
  "boa_token": "<|audio>",
  "boi_token": "<|image>",
  "bos_token": "<bos>",
  "eoa_token": "<audio|>",
  "eoc_token": "<channel|>",
  "eoi_token": "<image|>",
  "eos_token": "<eos>",
  "eot_token": "<turn|>",
  "escape_token": "<|\"|>",
  "etc_token": "<tool_call|>",
  "etd_token": "<tool|>",
  "etr_token": "<tool_response|>",
  "extra_special_tokens": [
    "<|video|>"
  ],
  "image_token": "<|image|>",
  "is_local": false,
  "local_files_only": false,
  "mask_token": "<mask>",
  "model_max_length": 262144,
  "model_specific_special_tokens": {
    "audio_token": "<|audio|>",
    "boa_token": "<|audio>",
    "boi_token": "<|image>",
    "eoa_token": "<audio|>",
    "eoc_token": "<channel|>",
    "eoi_token": "<image|>",
    "eot_token": "<turn|>",
    "escape_token": "<|\"|>",
    "etc_token": "<tool_call|>",
    "etd_token": "<tool|>",
    "etr_token": "<tool_response|>",
    "image_token": "<|image|>",
    "soc_token": "<|channel>",
    "sot_token": "<|turn>",
    "stc_token": "<|tool_call>",
    "std_token": "<|tool>",
    "str_token": "<|tool_response>",
    "think_token": "<|think|>"
  },
  "pad_token": "<pad>",
  "padding_side": "right",
  "processor_class": "Gemma4Processor",
  "response_schema": {
    "properties": {
      "content": {
        "type": "string"
      },
      "role": {
        "const": "assistant"
      },
      "thinking": {
        "type": "string"
      },
      "tool_calls": {
        "items": {
          "properties": {
            "function": {
              "properties": {
                "arguments": {
                  "additionalProperties": {},
                  "type": "object",
                  "x-parser": "gemma4-tool-call"
                },
                "name": {
                  "type": "string"
                }
              },
              "type": "object",
              "x-regex": "call\\:(?P<name>\\w+)(?P<arguments>\\{.*\\})"
            },
            "type": {
              "const": "function"
            }
          },
          "type": "object"
        },
        "type": "array",
        "x-regex-iterator": "<\\|tool_call>(.*?)<tool_call\\|>"
      }
    },
    "type": "object",
    "x-regex": "(\\<\\|channel\\>thought\\n(?P<thinking>.*?)\\<channel\\|\\>)?(?P<tool_calls>\\<\\|tool_call\\>.*\\<tool_call\\|\\>)?(?P<content>(?:(?!\\<turn\\|\\>)(?!\\<\\|tool_response\\>).)+)?(?:\\<turn\\|\\>|\\<\\|tool_response\\>)?"
  },
  "soc_token": "<|channel>",
  "sot_token": "<|turn>",
  "stc_token": "<|tool_call>",
  "std_token": "<|tool>",
  "str_token": "<|tool_response>",
  "think_token": "<|think|>",
  "tokenizer_class": "GemmaTokenizer",
  "unk_token": "<unk>",
  "added_tokens_decoder": {
    "0": { "content": "<pad>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": false, "special": true },
    "1": { "content": "<eos>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": false, "special": true },
    "2": { "content": "<bos>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": false, "special": true },
    "3": { "content": "<unk>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": false, "special": true },
    "4": { "content": "<mask>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": false, "special": true },
    "46": { "content": "<|tool>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": false, "special": true },
    "47": { "content": "<tool|>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": false, "special": true },
    "48": { "content": "<|tool_call>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": false, "special": true },
    "49": { "content": "<tool_call|>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": false, "special": true },
    "50": { "content": "<|tool_response>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": false, "special": true },
    "51": { "content": "<tool_response|>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": false, "special": true },
    "52": { "content": "<|\"|>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": false, "special": true },
    "98": { "content": "<|think|>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": false, "special": true },
    "100": { "content": "<|channel>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": false, "special": true },
    "101": { "content": "<channel|>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": false, "special": true },
    "105": { "content": "<|turn>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": false, "special": true },
    "106": { "content": "<turn|>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": false, "special": true },
    "255999": { "content": "<|image>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": false, "special": true },
    "256000": { "content": "<|audio>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": false, "special": true },
    "258880": { "content": "<|image|>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": false, "special": true },
    "258881": { "content": "<|audio|>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": false, "special": true },
    "258882": { "content": "<image|>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": false, "special": true },
    "258883": { "content": "<audio|>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": false, "special": true },
    "258884": { "content": "<|video|>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": false, "special": true }
  },
  "chat_template": "{%- macro format_parameters(properties, required, filter_keys=false) -%}\n {%- set standard_keys = ['description', 'type', 'properties', 'required', 'nullable'] -%}\n {%- set ns = namespace(found_first=false) -%}\n {%- for key, value in properties | dictsort -%}\n {%- set add_comma = false -%}\n {%- if not filter_keys or key not in standard_keys -%}\n {%- if ns.found_first %},{% endif -%}\n {%- set ns.found_first = true -%}\n {{ key }}:{\n {%- if value['description'] -%}\n description:<|\"|>{{ value['description'] }}<|\"|>\n {%- set add_comma = true -%}\n {%- endif -%}\n {%- if value['type'] | upper == 'STRING' -%}\n {%- if value['enum'] -%}\n {%- if add_comma %},{%- else -%} {%- set add_comma = true -%} {% endif -%}\n enum:{{ format_argument(value['enum']) }}\n {%- endif -%}\n {%- elif value['type'] | upper == 'ARRAY' -%}\n {%- if value['items'] is mapping and value['items'] -%}\n {%- if add_comma %},{%- else -%} {%- set add_comma = true -%} {% endif -%}\n items:{\n {%- set ns_items = namespace(found_first=false) -%}\n {%- for item_key, item_value in value['items'] | dictsort -%}\n {%- if item_value is not none -%}\n {%- if ns_items.found_first %},{% endif -%}\n {%- set ns_items.found_first = true -%}\n {%- if item_key == 'properties' -%}\n properties:{\n {%- if item_value is mapping -%}\n {{- format_parameters(item_value, value['items']['required'] | default([])) -}}\n {%- endif -%}\n }\n {%- elif item_key == 'required' -%}\n required:[\n {%- for req_item in item_value -%}\n <|\"|>{{- req_item -}}<|\"|>\n {%- if not loop.last %},{% endif -%}\n {%- endfor -%}\n ]\n {%- elif item_key == 'type' -%}\n {%- if item_value is string -%}\n type:{{ format_argument(item_value | upper) }}\n {%- else -%}\n type:{{ format_argument(item_value | map('upper') | list) }}\n {%- endif -%}\n {%- else -%}\n {{ item_key }}:{{ format_argument(item_value) }}\n {%- endif -%}\n {%- endif -%}\n {%- endfor -%}\n }\n {%- endif -%}\n {%- endif -%}\n {%- if value['nullable'] %}\n {%- if add_comma %},{%- else -%} {%- set add_comma = true -%} {% endif -%}\n nullable:true\n {%- endif -%}\n {%- if value['type'] | upper == 'OBJECT' -%}\n {%- if value['properties'] is defined and value['properties'] is mapping -%}\n {%- if add_comma %},{%- else -%} {%- set add_comma = true -%} {% endif -%}\n properties:{\n {{- format_parameters(value['properties'], value['required'] | default([])) -}}\n }\n {%- elif value is mapping -%}\n {%- if add_comma %},{%- else -%} {%- set add_comma = true -%} {% endif -%}\n properties:{\n {{- format_parameters(value, value['required'] | default([]), filter_keys=true) -}}\n }\n {%- endif -%}\n {%- if value['required'] -%}\n {%- if add_comma %},{%- else -%} {%- set add_comma = true -%} {% endif -%}\n required:[\n {%- for item in value['required'] | default([]) -%}\n <|\"|>{{- item -}}<|\"|>\n {%- if not loop.last %},{% endif -%}\n {%- endfor -%}\n ]\n {%- endif -%}\n {%- endif -%}\n {%- if add_comma %},{%- else -%} {%- set add_comma = true -%} {% endif -%}\n type:<|\"|>{{ value['type'] | upper }}<|\"|>}\n {%- endif -%}\n {%- endfor -%}\n{%- endmacro -%}\n{%- macro format_function_declaration(tool_data) -%}\n declaration:{{- tool_data['function']['name'] -}}{description:<|\"|>{{- tool_data['function']['description'] -}}<|\"|>\n {%- set params = tool_data['function']['parameters'] -%}\n {%- if params -%}\n ,parameters:{\n {%- if params['properties'] -%}\n properties:{ {{- format_parameters(params['properties'], params['required']) -}} },\n {%- endif -%}\n {%- if params['required'] -%}\n required:[\n {%- for item in params['required'] -%}\n <|\"|>{{- item -}}<|\"|>\n {{- ',' if not loop.last -}}\n {%- endfor -%}\n ],\n {%- endif -%}\n {%- if params['type'] -%}\n type:<|\"|>{{- params['type'] | upper -}}<|\"|>}\n {%- endif -%}\n {%- endif -%}\n {%- if 'response' in tool_data['function'] -%}\n {%- set response_declaration = tool_data['function']['response'] -%}\n ,response:{\n {%- if response_declaration['description'] -%}\n description:<|\"|>{{- response_declaration['description'] -}}<|\"|>,\n {%- endif -%}\n {%- if response_declaration['type'] | upper == 'OBJECT' -%}\n type:<|\"|>{{- response_declaration['type'] | upper -}}<|\"|>}\n {%- endif -%}\n {%- endif -%}\n }\n{%- endmacro -%}\n{%- macro format_argument(argument, escape_keys=True) -%}\n {%- if argument is string -%}\n {{- '<|\"|>' + argument + '<|\"|>' -}}\n {%- elif argument is boolean -%}\n {{- 'true' if argument else 'false' -}}\n {%- elif argument is mapping -%}\n {{- '{' -}}\n {%- set ns = namespace(found_first=false) -%}\n {%- for key, value in argument | dictsort -%}\n {%- if ns.found_first %},{% endif -%}\n {%- set ns.found_first = true -%}\n {%- if escape_keys -%}\n {{- '<|\"|>' + key + '<|\"|>' -}}\n {%- else -%}\n {{- key -}}\n {%- endif -%}\n :{{- format_argument(value, escape_keys=escape_keys) -}}\n {%- endfor -%}\n {{- '}' -}}\n {%- elif argument is sequence -%}\n {{- '[' -}}\n {%- for item in argument -%}\n {{- format_argument(item, escape_keys=escape_keys) -}}\n {%- if not loop.last %},{% endif -%}\n {%- endfor -%}\n {{- ']' -}}\n {%- else -%}\n {{- argument -}}\n {%- endif -%}\n{%- endmacro -%}\n{%- macro strip_thinking(text) -%}\n {%- set ns = namespace(result='') -%}\n {%- for part in text.split('<channel|>') -%}\n {%- if '<|channel>' in part -%}\n {%- set ns.result = ns.result + part.split('<|channel>')[0] -%}\n {%- else -%}\n {%- set ns.result = ns.result + part -%}\n {%- endif -%}\n {%- endfor -%}\n {{- ns.result | trim -}}\n{%- endmacro -%}\n\n{%- macro format_tool_response_block(tool_name, response) -%}\n {{- '<|tool_response>' -}}\n {%- if response is mapping -%}\n {{- 'response:' + tool_name + '{' -}}\n {%- for key, value in response | dictsort -%}\n {{- key -}}:{{- format_argument(value, escape_keys=False) -}}\n {%- if not loop.last %},{% endif -%}\n {%- endfor -%}\n {{- '}' -}}\n {%- else -%}\n {{- 'response:' + tool_name + '{value:' + format_argument(response, escape_keys=False) + '}' -}}\n {%- endif -%}\n {{- '<tool_response|>' -}}\n{%- endmacro -%}\n\n{%- set ns = namespace(prev_message_type=None) -%}\n{%- set loop_messages = messages -%}\n{{- bos_token -}}\n{#- Handle System/Tool Definitions Block -#}\n{%- if (enable_thinking is defined and enable_thinking) or tools or messages[0]['role'] in ['system', 'developer'] -%}\n {{- '<|turn>system\\n' -}}\n {#- Inject Thinking token at the very top of the FIRST system turn -#}\n {%- if enable_thinking is defined and enable_thinking -%}\n {{- '<|think|>\\n' -}}\n {%- set ns.prev_message_type = 'think' -%}\n {%- endif -%}\n {%- if messages[0]['role'] in ['system', 'developer'] -%}\n {%- if messages[0]['content'] is string -%}\n {{- messages[0]['content'] | trim -}}\n {%- elif messages[0]['content'] is sequence -%}\n {%- for item in messages[0]['content'] -%}\n {{- item['text'] | trim + ' '-}}\n {%- endfor -%}\n {%- endif -%}\n {%- set loop_messages = messages[1:] -%}\n {%- endif -%}\n {%- if tools -%}\n {%- for tool in tools %}\n {{- '<|tool>' -}}\n {{- format_function_declaration(tool) | trim -}}\n {{- '<tool|>' -}}\n {%- endfor %}\n {%- set ns.prev_message_type = 'tool' -%}\n {%- endif -%}\n {{- '<turn|>\\n' -}}\n{%- endif %}\n\n{#- Pre-scan: find last user message index for reasoning guard -#}\n{%- set ns_turn = namespace(last_user_idx=-1) -%}\n{%- for i in range(loop_messages | length) -%}\n {%- if loop_messages[i]['role'] == 'user' -%}\n {%- set ns_turn.last_user_idx = i -%}\n {%- endif -%}\n{%- endfor -%}\n\n{#- Loop through messages -#}\n{%- for message in loop_messages -%}\n {%- if message['role'] != 'tool' -%}\n {%- set ns.prev_message_type = None -%}\n {%- set role = 'model' if message['role'] == 'assistant' else message['role'] -%}\n {#- Detect continuation: suppress duplicate <|turn>model when previous non-tool message was also assistant -#}\n {%- set prev_nt = namespace(role=None, found=false) -%}\n {%- if loop.index0 > 0 -%}\n {%- for j in range(loop.index0 - 1, -1, -1) -%}\n {%- if not prev_nt.found -%}\n {%- if loop_messages[j]['role'] != 'tool' -%}\n {%- set prev_nt.role = loop_messages[j]['role'] -%}\n {%- set prev_nt.found = true -%}\n {%- endif -%}\n {%- endif -%}\n {%- endfor -%}\n {%- endif -%}\n {%- set continue_same_model_turn = (role == 'model' and prev_nt.role == 'assistant') -%}\n {%- if not continue_same_model_turn -%}\n {{- '<|turn>' + role + '\\n' }}\n {%- endif -%}\n\n {#- Render reasoning/reasoning_content as thinking channel -#}\n {%- set thinking_text = message.get('reasoning') or message.get('reasoning_content') -%}\n {%- if thinking_text and loop.index0 > ns_turn.last_user_idx and message.get('tool_calls') -%}\n {{- '<|channel>thought\\n' + thinking_text + '\\n<channel|>' -}}\n {%- endif -%}\n\n {%- if message['tool_calls'] -%}\n {%- for tool_call in message['tool_calls'] -%}\n {%- set function = tool_call['function'] -%}\n {{- '<|tool_call>call:' + function['name'] + '{' -}}\n {%- if function['arguments'] is mapping -%}\n {%- set ns_args = namespace(found_first=false) -%}\n {%- for key, value in function['arguments'] | dictsort -%}\n {%- if ns_args.found_first %},{% endif -%}\n {%- set ns_args.found_first = true -%}\n {{- key -}}:{{- format_argument(value, escape_keys=False) -}}\n {%- endfor -%}\n {%- elif function['arguments'] is string -%}\n {{- function['arguments'] -}}\n {%- endif -%}\n {{- '}<tool_call|>' -}}\n {%- endfor -%}\n {%- set ns.prev_message_type = 'tool_call' -%}\n {%- endif -%}\n\n {%- set ns_tr_out = namespace(flag=false) -%}\n {%- if message.get('tool_responses') -%}\n {#- Legacy: tool_responses embedded on the assistant message (Google/Gemma native) -#}\n {%- for tool_response in message['tool_responses'] -%}\n {{- format_tool_response_block(tool_response['name'] | default('unknown'), tool_response['response']) -}}\n {%- set ns_tr_out.flag = true -%}\n {%- set ns.prev_message_type = 'tool_response' -%}\n {%- endfor -%}\n {%- elif message.get('tool_calls') -%}\n {#- OpenAI Chat Completions: forward-scan consecutive role:tool messages -#}\n {%- set ns_tool_scan = namespace(stopped=false) -%}\n {%- for k in range(loop.index0 + 1, loop_messages | length) -%}\n {%- if ns_tool_scan.stopped -%}\n {%- elif loop_messages[k]['role'] != 'tool' -%}\n {%- set ns_tool_scan.stopped = true -%}\n {%- else -%}\n {%- set follow = loop_messages[k] -%}\n {#- Resolve tool_call_id to function name -#}\n {%- set ns_tname = namespace(name=follow.get('name') | default('unknown')) -%}\n {%- for tc in message['tool_calls'] -%}\n {%- if tc.get('id') == follow.get('tool_call_id') -%}\n {%- set ns_tname.name = tc['function']['name'] -%}\n {%- endif -%}\n {%- endfor -%}\n {#- Handle content as string or content-parts array -#}\n {%- set tool_body = follow.get('content') -%}\n {%- if tool_body is string -%}\n {{- format_tool_response_block(ns_tname.name, tool_body) -}}\n {%- elif tool_body is sequence and tool_body is not string -%}\n {%- set ns_txt = namespace(s='') -%}\n {%- for part in tool_body -%}\n {%- if part.get('type') == 'text' -%}\n {%- set ns_txt.s = ns_txt.s + (part.get('text') | default('')) -%}\n {%- endif -%}\n {%- endfor -%}\n {{- format_tool_response_block(ns_tname.name, ns_txt.s) -}}\n {%- else -%}\n {{- format_tool_response_block(ns_tname.name, tool_body) -}}\n {%- endif -%}\n {%- set ns_tr_out.flag = true -%}\n {%- set ns.prev_message_type = 'tool_response' -%}\n {%- endif -%}\n {%- endfor -%}\n {%- endif -%}\n\n {%- set captured_content -%}\n {%- if message['content'] is string -%}\n {%- if role == 'model' -%}\n {{- strip_thinking(message['content']) -}}\n {%- else -%}\n {{- message['content'] | trim -}}\n {%- endif -%}\n {%- elif message['content'] is sequence -%}\n {%- for item in message['content'] -%}\n {%- if item['type'] == 'text' -%}\n {%- if role == 'model' -%}\n {{- strip_thinking(item['text']) -}}\n {%- else -%}\n {{- item['text'] | trim -}}\n {%- endif -%}\n {%- elif item['type'] == 'image' -%}\n {{- '<|image|>' -}}\n {%- set ns.prev_message_type = 'image' -%}\n {%- elif item['type'] == 'audio' -%}\n {{- '<|audio|>' -}}\n {%- set ns.prev_message_type = 'audio' -%}\n {%- elif item['type'] == 'video' -%}\n {{- '<|video|>' -}}\n {%- set ns.prev_message_type = 'video' -%}\n {%- endif -%}\n {%- endfor -%}\n {%- endif -%}\n {%- endset -%}\n\n {{- captured_content -}}\n {%- set has_content = captured_content | trim | length > 0 -%}\n\n {%- if ns.prev_message_type == 'tool_call' and not ns_tr_out.flag -%}\n {{- '<|tool_response>' -}}\n {%- elif not (ns_tr_out.flag and not has_content) -%}\n {{- '<turn|>\\n' -}}\n {%- endif -%}\n {%- endif -%}\n{%- endfor -%}\n\n{%- if add_generation_prompt -%}\n {%- if ns.prev_message_type != 'tool_response' and ns.prev_message_type != 'tool_call' -%}\n {{- '<|turn>model\\n' -}}\n {%- if not enable_thinking | default(false) -%}\n {{- '<|channel>thought\\n<channel|>' -}}\n {%- endif -%}\n {%- endif -%}\n{%- endif -%}"
}
```
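The `response_schema` above tells a parser how to recover structured output from raw generated text: the `x-regex-iterator` pattern isolates each `<|tool_call>…<tool_call|>` span, and the function-level `x-regex` captures the call name and argument blob from each span. A sketch of that two-stage extraction with Python's `re` module (the raw output string is invented for illustration):

```python
import re

# Invented raw model output using the special tokens from the config above.
raw = '<|tool_call>call:get_weather{city:<|"|>Paris<|"|>}<tool_call|>'

# Stage 1: the schema's x-regex-iterator isolates each tool-call span.
spans = re.findall(r"<\|tool_call>(.*?)<tool_call\|>", raw)

# Stage 2: the function-level x-regex captures name and raw arguments.
calls = []
for span in spans:
    m = re.match(r"call\:(?P<name>\w+)(?P<arguments>\{.*\})", span)
    if m:
        calls.append((m.group("name"), m.group("arguments")))

print(calls)  # → [('get_weather', '{city:<|"|>Paris<|"|>}')]
```

The argument blob is not JSON (keys are unquoted, strings use the `<|"|>` escape token), which is presumably why the schema points at a dedicated `gemma4-tool-call` parser via `x-parser` rather than a plain JSON decode.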