Data Utilities
prepare_multimodal_messages[[trl.prepare_multimodal_messages]]

Convert messages into a structured multimodal format and inject the provided images into the message contents.

Parameters:
- messages (list[dict[str, Any]]) -- List of messages, each a dictionary with a "role" key ("system", "user", or "assistant") and a "content" key containing either a string or, if already prepared, a list of structured content blocks.
- images (list) -- List of image objects to insert.

Returns (list[dict[str, Any]]): A deep-copied list of messages where every "content" value is a list of structured content blocks, and all "image" placeholders are populated with the corresponding image objects.
Notes:
- When the input messages isn't already in the structured format (i.e., all "content" values are strings), the function transforms it into the structured format by wrapping text in {"type": "text", "text": ...} and inserting {"type": "image"} placeholders for the images before the first user message.
- When the input messages is already in the structured format (i.e., all "content" values are lists of structured blocks), the function only fills the actual images into the existing {"type": "image"} placeholders. If the number of placeholders does not match the number of provided images, an error is raised.
Example:
# Input
[
{"role": "user", "content": "What's in this image?"},
{"role": "assistant", "content": "It looks like a cat."},
]
# Output, one image provided
[
{"role": "user", "content": [{"type": "image", "image": <PIL.Image.Image>}, {"type": "text", "text": "What's in this image?"}]},
{"role": "assistant", "content": [{"type": "text", "text": "It looks like a cat."}]},
]
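The two behaviors described in the notes can be sketched with plain Python. This is a minimal illustration of the documented logic, not the actual trl implementation:

```python
import copy
from typing import Any


def prepare_multimodal_messages_sketch(
    messages: list[dict[str, Any]], images: list[Any]
) -> list[dict[str, Any]]:
    """Illustrative sketch of the behavior described above, not the real trl code."""
    messages = copy.deepcopy(messages)
    if all(isinstance(m["content"], str) for m in messages):
        # Unstructured input: wrap every string in a text block, then insert one
        # {"type": "image"} placeholder per image before the first user message.
        for m in messages:
            m["content"] = [{"type": "text", "text": m["content"]}]
        first_user = next(i for i, m in enumerate(messages) if m["role"] == "user")
        placeholders = [{"type": "image"} for _ in images]
        messages[first_user]["content"] = placeholders + messages[first_user]["content"]
    # Structured input (or freshly structured): fill the placeholders in order.
    slots = [
        block
        for m in messages
        for block in m["content"]
        if block.get("type") == "image" and "image" not in block
    ]
    if len(slots) != len(images):
        raise ValueError(f"{len(slots)} image placeholders but {len(images)} images")
    for slot, image in zip(slots, images):
        slot["image"] = image
    return messages
```

Because the input is deep-copied first, the original messages list is left untouched.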
prepare_multimodal_messages_vllm[[trl.prepare_multimodal_messages_vllm]]

Convert structured multimodal messages into a format compatible with vLLM: replaces "type": "image" blocks with "type": "image_pil" blocks, and "image": Image entries with "image_pil": Image.

Parameters:
- messages (list[dict[str, Any]]) -- Messages with "role" and "content". Content is expected to be a list of structured blocks.

Returns (list[dict[str, Any]]): A deep-copied list of messages compatible with vLLM's expected input format.
Example:
# Input
[{"role": "user", "content": [{"type": "image", "image": <PIL.Image.Image>}, {"type": "text", "text": "What's in this image?"}]}]
# Output
[{"role": "user", "content": [{"type": "image_pil", "image_pil": <PIL.Image.Image>}, {"type": "text", "text": "What's in this image?"}]}]
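The renaming can be sketched in a few lines (illustrative only, not the actual trl implementation):

```python
import copy
from typing import Any


def to_vllm_blocks_sketch(messages: list[dict[str, Any]]) -> list[dict[str, Any]]:
    """Illustrative sketch: rename image blocks to vLLM's image_pil convention."""
    messages = copy.deepcopy(messages)
    for message in messages:
        for block in message["content"]:
            if block.get("type") == "image":
                block["type"] = "image_pil"
                block["image_pil"] = block.pop("image")
    return messages
```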
is_conversational[[trl.is_conversational]]

Check if the example is in a conversational format.

Parameters:
- example (dict[str, Any]) -- A single data entry of a dataset. The example can have different keys depending on the dataset type.

Returns (bool): True if the data is in a conversational format, False otherwise.
Examples:
>>> example = {"prompt": [{"role": "user", "content": "What color is the sky?"}]}
>>> is_conversational(example)
True
>>> example = {"prompt": "The sky is"}
>>> is_conversational(example)
False
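A minimal version of this check can be sketched as follows. The key names come from the dataset types listed under maybe_apply_chat_template(); this is an illustration, not the actual trl implementation:

```python
from typing import Any


def is_conversational_sketch(example: dict[str, Any]) -> bool:
    """Illustrative check: a known key holding a list of {"role", "content"}
    dicts counts as conversational; a plain string does not."""
    known_keys = {"messages", "prompt", "chosen", "rejected", "completion"}
    for key in known_keys & example.keys():
        value = example[key]
        if isinstance(value, list) and value and isinstance(value[0], dict):
            return {"role", "content"} <= value[0].keys()
        return False  # e.g. a plain string -> standard format
    return False
```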
is_conversational_from_value[[trl.is_conversational_from_value]]

Check if the example is in a conversational format that uses "from" and "value" keys. Note that this format isn't recommended; prefer the ChatML format ("role"/"content").

Parameters:
- example (dict[str, Any]) -- A single data entry of a dataset. The example can have different keys depending on the dataset type.

Returns (bool): True if the data is in the conversational "from"/"value" format, False otherwise.
Examples:
>>> example = {"conversations": [{"from": "user", "value": "What color is the sky?"}]}
>>> is_conversational_from_value(example)
True
>>> example = {"conversations": [{"role": "user", "content": "What color is the sky?"}]}
>>> is_conversational_from_value(example)
False
>>> example = {"conversations": "The sky is"}
>>> is_conversational_from_value(example)
False
apply_chat_template[[trl.apply_chat_template]]
Apply a chat template to a conversational example along with the schema for a list of functions in tools.
For more details, see maybe_apply_chat_template().
maybe_apply_chat_template[[trl.maybe_apply_chat_template]]

If the example is in a conversational format, apply a chat template to it.

Parameters:
- example (dict[str, list[dict[str, str]]]) -- Dictionary representing a single data entry of a conversational dataset. Each data entry can have different keys depending on the dataset type. The supported dataset types are:
  - Language modeling dataset: "messages".
  - Prompt-only dataset: "prompt".
  - Prompt-completion dataset: "prompt" and "completion".
  - Preference dataset: "prompt", "chosen", and "rejected".
  - Preference dataset with implicit prompt: "chosen" and "rejected".
  - Unpaired preference dataset: "prompt", "completion", and "label".
  For keys "messages", "prompt", "chosen", "rejected", and "completion", the values are lists of messages, where each message is a dictionary with keys "role" and "content". Additionally, the example may contain a "chat_template_kwargs" key, which is a dictionary of additional keyword arguments to pass to the chat template renderer.
- tokenizer (PreTrainedTokenizerBase) -- Tokenizer to apply the chat template with.
- tools (list[dict | Callable], optional) -- A list of tools (callable functions) that will be accessible to the model. If the template does not support function calling, this argument will have no effect.
- **template_kwargs (Any, optional) -- Additional kwargs to pass to the template renderer. Will be accessible by the chat template.

Returns (dict[str, str]): Formatted example with the chat template applied.
Notes:
- This function does not alter the keys, except for the language modeling dataset, where "messages" is replaced by "text".
- In the case of prompt-only data, if the last role is "user", the generation prompt is added to the prompt. Else, if the last role is "assistant", the final message is continued.
Example:
>>> from transformers import AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3-mini-128k-instruct")
>>> example = {
... "prompt": [{"role": "user", "content": "What color is the sky?"}],
... "completion": [{"role": "assistant", "content": "It is blue."}],
... }
>>> apply_chat_template(example, tokenizer)
{'prompt': '<|user|>\nWhat color is the sky?<|end|>\n<|assistant|>\n', 'completion': 'It is blue.<|end|>\n'}
maybe_convert_to_chatml[[trl.maybe_convert_to_chatml]]

Convert a conversational dataset with fields "from" and "value" to ChatML format.

This function modifies conversational data to align with OpenAI's ChatML format:
- Replaces the key "from" with "role" in message dictionaries.
- Replaces the key "value" with "content" in message dictionaries.
- Renames "conversations" to "messages" for consistency with ChatML.

Parameters:
- example (dict[str, list]) -- A single data entry containing a list of messages.

Returns (dict[str, list]): Example reformatted to ChatML style.
Example:
>>> from trl import maybe_convert_to_chatml
>>> example = {
... "conversations": [
... {"from": "user", "value": "What color is the sky?"},
... {"from": "assistant", "value": "It is blue."},
... ]
... }
>>> maybe_convert_to_chatml(example)
{'messages': [{'role': 'user', 'content': 'What color is the sky?'},
{'role': 'assistant', 'content': 'It is blue.'}]}
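The key renaming above can be sketched in a few lines (illustrative only; per its maybe_ prefix, the real function leaves examples that are not in "from"/"value" format untouched):

```python
from typing import Any


def convert_to_chatml_sketch(example: dict[str, Any]) -> dict[str, Any]:
    """Illustrative sketch of the from/value -> role/content renaming."""
    return {
        "messages": [
            {"role": message["from"], "content": message["value"]}
            for message in example["conversations"]
        ]
    }
```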
extract_prompt[[trl.extract_prompt]]
Extracts the shared prompt from a preference data example, where the prompt is implicit within both the chosen and rejected completions.
For more details, see maybe_extract_prompt().
maybe_extract_prompt[[trl.maybe_extract_prompt]]

Extracts the shared prompt from a preference data example, where the prompt is implicit within both the chosen and rejected completions.

If the example already contains a "prompt" key, the function returns the example as is. Else, the function identifies the longest common sequence (prefix) of conversation turns between the "chosen" and "rejected" completions and extracts this as the prompt. It then removes this prompt from the respective "chosen" and "rejected" completions.

Parameters:
- example (dict[str, list]) -- A dictionary representing a single data entry in the preference dataset. It must contain the keys "chosen" and "rejected", where each value is either conversational or standard (str).

Returns (dict[str, list]): A dictionary containing:
- "prompt": The longest common prefix between the "chosen" and "rejected" completions.
- "chosen": The remainder of the "chosen" completion, with the prompt removed.
- "rejected": The remainder of the "rejected" completion, with the prompt removed.
Examples:
>>> example = {
... "chosen": [
... {"role": "user", "content": "What color is the sky?"},
... {"role": "assistant", "content": "It is blue."},
... ],
... "rejected": [
... {"role": "user", "content": "What color is the sky?"},
... {"role": "assistant", "content": "It is green."},
... ],
... }
>>> extract_prompt(example)
{'prompt': [{'role': 'user', 'content': 'What color is the sky?'}],
'chosen': [{'role': 'assistant', 'content': 'It is blue.'}],
'rejected': [{'role': 'assistant', 'content': 'It is green.'}]}
Or, with the map method of Dataset:
>>> from trl import extract_prompt
>>> from datasets import Dataset
>>> dataset_dict = {
... "chosen": [
... [
... {"role": "user", "content": "What color is the sky?"},
... {"role": "assistant", "content": "It is blue."},
... ],
... [
... {"role": "user", "content": "Where is the sun?"},
... {"role": "assistant", "content": "In the sky."},
... ],
... ],
... "rejected": [
... [
... {"role": "user", "content": "What color is the sky?"},
... {"role": "assistant", "content": "It is green."},
... ],
... [
... {"role": "user", "content": "Where is the sun?"},
... {"role": "assistant", "content": "In the sea."},
... ],
... ],
... }
>>> dataset = Dataset.from_dict(dataset_dict)
>>> dataset = dataset.map(extract_prompt)
>>> dataset[0]
{'prompt': [{'role': 'user', 'content': 'What color is the sky?'}],
'chosen': [{'role': 'assistant', 'content': 'It is blue.'}],
'rejected': [{'role': 'assistant', 'content': 'It is green.'}]}
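For conversational data, the prefix extraction can be sketched as follows (a minimal illustration of the logic described above, not the actual trl implementation; the standard-string case is omitted):

```python
def extract_prompt_sketch(example: dict[str, list]) -> dict[str, list]:
    """Illustrative sketch: split chosen/rejected at their longest common
    prefix of conversation turns."""
    chosen, rejected = example["chosen"], example["rejected"]
    idx = 0
    # Advance while the turns agree; everything before idx is the shared prompt.
    while idx < min(len(chosen), len(rejected)) and chosen[idx] == rejected[idx]:
        idx += 1
    return {
        "prompt": chosen[:idx],
        "chosen": chosen[idx:],
        "rejected": rejected[idx:],
    }
```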
unpair_preference_dataset[[trl.unpair_preference_dataset]]

Unpair a preference dataset.

Parameters:
- dataset (Dataset) -- Preference dataset to unpair. The dataset must have columns "chosen", "rejected" and optionally "prompt".
- num_proc (int, optional) -- Number of processes to use for processing the dataset.
- desc (str, optional) -- Meaningful description to be displayed alongside the progress bar while mapping examples.

Returns (Dataset): The unpaired preference dataset.
Example:
>>> from datasets import Dataset
>>> dataset_dict = {
... "prompt": ["The sky is", "The sun is"],
...     "chosen": [" blue.", " in the sky."],
... "rejected": [" green.", " in the sea."],
... }
>>> dataset = Dataset.from_dict(dataset_dict)
>>> dataset = unpair_preference_dataset(dataset)
>>> dataset
Dataset({
features: ['prompt', 'completion', 'label'],
num_rows: 4
})
>>> dataset[0]
{'prompt': 'The sky is', 'completion': ' blue.', 'label': True}
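The unpairing transform can be sketched as a batched mapping function (illustrative only; row ordering in the real implementation may differ):

```python
def unpair_sketch(batch: dict[str, list]) -> dict[str, list]:
    """Illustrative sketch: each chosen/rejected pair becomes two
    prompt/completion rows with a boolean preference label."""
    return {
        "prompt": batch["prompt"] + batch["prompt"],
        "completion": batch["chosen"] + batch["rejected"],
        "label": [True] * len(batch["chosen"]) + [False] * len(batch["rejected"]),
    }
```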
maybe_unpair_preference_dataset[[trl.maybe_unpair_preference_dataset]]

Unpair a preference dataset if it is paired.

Parameters:
- dataset (Dataset or DatasetDict) -- Preference dataset to unpair. The dataset must have columns "chosen", "rejected" and optionally "prompt".
- num_proc (int, optional) -- Number of processes to use for processing the dataset.
- desc (str, optional) -- Meaningful description to be displayed alongside the progress bar while mapping examples.

Returns (Dataset or DatasetDict): The unpaired preference dataset if it was paired, otherwise the original dataset.
Example:
>>> from datasets import Dataset
>>> dataset_dict = {
... "prompt": ["The sky is", "The sun is"],
...     "chosen": [" blue.", " in the sky."],
... "rejected": [" green.", " in the sea."],
... }
>>> dataset = Dataset.from_dict(dataset_dict)
>>> dataset = unpair_preference_dataset(dataset)
>>> dataset
Dataset({
features: ['prompt', 'completion', 'label'],
num_rows: 4
})
>>> dataset[0]
{'prompt': 'The sky is', 'completion': ' blue.', 'label': True}
pack_dataset[[trl.pack_dataset]]

Pack sequences in a dataset into chunks of size seq_length.

Parameters:
- dataset (Dataset or DatasetDict) -- Dataset to pack.
- seq_length (int) -- Target sequence length to pack to.
- strategy (str, optional, defaults to "bfd") -- Packing strategy to use. Can be either:
  - "bfd" (Best Fit Decreasing): Slower but preserves sequence boundaries. Sequences are never cut in the middle.
  - "wrapped": Faster but more aggressive. Ignores sequence boundaries and will cut sequences in the middle to completely fill each packed sequence with data.
- map_kwargs (dict, optional) -- Additional keyword arguments to pass to the dataset's map method when packing examples.

Returns (Dataset or DatasetDict): The dataset with packed sequences. The number of examples may decrease as sequences are combined.
Example:
>>> from datasets import Dataset
>>> from trl import pack_dataset
>>> examples = {
... "input_ids": [[1, 2, 3], [4, 5], [6, 7, 8], [9]],
... "attention_mask": [[1, 1, 0], [1, 0], [1, 0, 0], [1]],
... }
>>> dataset = Dataset.from_dict(examples)
>>> packed_dataset = pack_dataset(dataset, seq_length=4, strategy="bfd")
>>> packed_dataset[:]
{'input_ids': [[1, 2, 3, 9], [6, 7, 8], [4, 5]],
'attention_mask': [[1, 1, 0, 1], [1, 0, 0], [1, 0]],
'seq_lengths': [[3, 1], [3], [2]]}
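The "bfd" strategy is standard Best Fit Decreasing bin packing applied to sequence lengths. A minimal sketch (illustrative only, not the actual trl implementation) reproduces the grouping seen in the seq_lengths of the example above:

```python
def bfd_pack(lengths: list[int], seq_length: int) -> list[list[int]]:
    """Illustrative sketch of Best Fit Decreasing packing on sequence lengths."""
    bins: list[list[int]] = []  # sequence lengths grouped per packed example
    space: list[int] = []       # remaining capacity of each bin
    # Place each sequence, longest first, into the fullest bin it still fits in;
    # open a new bin when none fits.
    for length in sorted(lengths, reverse=True):
        candidates = [i for i, s in enumerate(space) if s >= length]
        if candidates:
            best = min(candidates, key=lambda i: space[i])
            bins[best].append(length)
            space[best] -= length
        else:
            bins.append([length])
            space.append(seq_length - length)
    return bins
```

Because sequences are only grouped, never split, a bin may end up shorter than seq_length, which is why the packed examples above have unequal lengths.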
truncate_dataset[[trl.truncate_dataset]]

Truncate sequences in a dataset to a specified max_length.

Parameters:
- dataset (Dataset or DatasetDict) -- Dataset to truncate.
- max_length (int) -- Maximum sequence length to truncate to.
- map_kwargs (dict, optional) -- Additional keyword arguments to pass to the dataset's map method when truncating examples.

Returns (Dataset or DatasetDict): The dataset with truncated sequences.
Example:
>>> from datasets import Dataset
>>> examples = {
... "input_ids": [[1, 2, 3], [4, 5, 6, 7], [8]],
... "attention_mask": [[0, 1, 1], [0, 0, 1, 1], [1]],
... }
>>> dataset = Dataset.from_dict(examples)
>>> truncated_dataset = truncate_dataset(dataset, max_length=2)
>>> truncated_dataset[:]
{'input_ids': [[1, 2], [4, 5], [8]],
'attention_mask': [[0, 1], [0, 0], [1]]}
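Per batch, the operation amounts to slicing every sequence column; a minimal sketch of such a batched mapping function (illustrative only, not the actual trl implementation):

```python
def truncate_sketch(batch: dict[str, list], max_length: int) -> dict[str, list]:
    """Illustrative sketch: clip every sequence column to max_length tokens."""
    return {key: [seq[:max_length] for seq in column] for key, column in batch.items()}
```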