Chat template utilities

clone_chat_template[[trl.clone_chat_template]]

Clones a chat template from a source tokenizer to the target tokenizer and updates the model accordingly.

This function:

  • Copies the chat template from a source tokenizer to the target tokenizer.
  • Adds any new tokens from the source tokenizer to the target tokenizer.
  • Sets and synchronizes the EOS token across the tokenizer and model.
  • Resizes the model's token embeddings to match the new vocabulary size, optionally rounding it up to a multiple of a specified value. In such cases, dummy tokens are added to the tokenizer to ensure the vocabulary size matches the embedding dimensions.

Example:

from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import clone_chat_template

model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.2-1B")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.2-1B")
model, tokenizer, added_tokens = clone_chat_template(model, tokenizer, "Qwen/Qwen3-0.6B")

Parameters:

model (PreTrainedModel) : Model to update.

tokenizer (PreTrainedTokenizer) : Tokenizer to update.

source_tokenizer_path (str) : Path or identifier of the pretrained tokenizer to clone from.

resize_to_multiple_of (int or None, optional, defaults to 64) : The model's embedding layer is resized to the new vocabulary size. If this is not None, the new vocabulary size is rounded up to the nearest multiple of this value.

Returns:

model (PreTrainedModel) : Updated model with resized token embeddings and EOS token configured.

tokenizer (PreTrainedTokenizer) : Updated tokenizer with the chat template and special tokens applied.

added_tokens (list[int]) : List of tokens that were added to the tokenizer from the source tokenizer.

is_chat_template_prefix_preserving[[trl.chat_template_utils.is_chat_template_prefix_preserving]]

Check whether the chat template preserves prefixes when applied.

A prefix-preserving chat template renders earlier messages identically regardless of what messages follow. This property is required by _get_tool_suffix_ids, which extracts tool response formatting tokens by comparing tokenizations with and without tool messages appended.
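The property can be illustrated with a small sketch. `renders_as_prefix` is a hypothetical helper, not TRL's actual implementation, and it checks only one conversation against its truncation rather than proving the property in general:

```python
# A template is prefix-preserving if rendering the leading messages alone
# produces a string prefix of rendering the longer conversation.
def renders_as_prefix(apply_template, messages) -> bool:
    """Check prefix preservation for one conversation and its truncation."""
    full = apply_template(messages)
    partial = apply_template(messages[:-1])
    return full.startswith(partial)
```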

Parameters:

processing_class (PreTrainedTokenizer or ProcessorMixin) : Tokenizer or processor instance to check.

Returns:

bool

True if the chat template preserves prefixes, False otherwise.

get_training_chat_template[[trl.get_training_chat_template]]

Get a training-compatible chat template, if needed.

Returns a patched chat template that is prefix-preserving and includes {% generation %} / {% endgeneration %} markers for assistant-only loss masking. Returns None if the tokenizer's template already satisfies both requirements. Currently, DeepSeek-V3, GPT-OSS, Llama 3, Qwen2.5, and Qwen3 templates are supported.
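The generation markers let the tokenizer return a per-token assistant mask, which can then drive assistant-only loss masking. A minimal sketch of that masking step, where `mask_labels` is a hypothetical helper and the token ids are made up:

```python
# Hypothetical helper: turn an assistant-token mask (1 for tokens rendered
# inside {% generation %} ... {% endgeneration %}, 0 elsewhere) into labels
# where non-assistant tokens are ignored by the loss.
def mask_labels(input_ids, assistant_mask, ignore_index=-100):
    return [tok if keep else ignore_index for tok, keep in zip(input_ids, assistant_mask)]

labels = mask_labels([101, 7, 8, 9], [0, 0, 1, 1])
```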

Example:

>>> from trl.chat_template_utils import get_training_chat_template
>>> from transformers import AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-0.6B")
>>> messages1 = [
...     {"role": "user", "content": "What is 2 * 3?"},
...     {
...         "role": "assistant",
...         "content": "",
...         "tool_calls": [{"type": "function", "function": {"name": "multiply", "arguments": {"a": 2, "b": 3}}}],
...     },
... ]
>>> messages2 = messages1 + [
...     {"role": "tool", "name": "multiply", "content": "6"},
... ]
>>> tokenizer.apply_chat_template(messages1, tokenize=False)
'<|im_start|>user\nWhat is 2 * 3?<|im_end|>\n<|im_start|>assistant\n<think>\n\n</think>\n\n<tool_call>\n{"name": "multiply", "arguments": {"a": 2, "b": 3}}\n</tool_call><|im_end|>\n'

>>> tokenizer.apply_chat_template(messages2, tokenize=False, add_generation_prompt=True)
'<|im_start|>user\nWhat is 2 * 3?<|im_end|>\n<|im_start|>assistant\n<tool_call>\n{"name": "multiply", "arguments": {"a": 2, "b": 3}}\n</tool_call><|im_end|>\n<|im_start|>user\n<tool_response>\n6\n</tool_response><|im_end|>\n<|im_start|>assistant\n'

>>> # note: the <think> tags are missing from the first assistant turn
>>> chat_template = get_training_chat_template(tokenizer)
>>> tokenizer.apply_chat_template(messages1, tokenize=False, chat_template=chat_template)
'<|im_start|>user\nWhat is 2 * 3?<|im_end|>\n<|im_start|>assistant\n<think>\n\n</think>\n\n<tool_call>\n{"name": "multiply", "arguments": {"a": 2, "b": 3}}\n</tool_call><|im_end|>\n'

>>> tokenizer.apply_chat_template(
...     messages2, tokenize=False, add_generation_prompt=True, chat_template=chat_template
... )
'<|im_start|>user\nWhat is 2 * 3?<|im_end|>\n<|im_start|>assistant\n<think>\n\n</think>\n\n<tool_call>\n{"name": "multiply", "arguments": {"a": 2, "b": 3}}\n</tool_call><|im_end|>\n<|im_start|>user\n<tool_response>\n6\n</tool_response><|im_end|>\n<|im_start|>assistant\n'

Parameters:

tokenizer (PreTrainedTokenizer) : Tokenizer instance to check.

Returns:

str or None

Training-compatible chat template, or None if no patching is needed.
