tokenizers
Tokenization utilities
- tokenizers
  - static
    - .PreTrainedTokenizer
      - new PreTrainedTokenizer(tokenizerJSON, tokenizerConfig)
      - instance
        - .convert_tokens_to_ids(tokens) ⇒ any
        - ._call(text, options) ⇒ BatchEncoding
        - ._encode_text(text) ⇒ Array | null
        - .tokenize(text, options) ⇒ Array
        - .encode(text, options) ⇒ Array
        - .batch_decode(batch, decode_args) ⇒ Array
        - .decode(token_ids, [decode_args]) ⇒ string
        - .decode_single(token_ids, decode_args) ⇒ string
        - .get_chat_template(options) ⇒ string
        - .apply_chat_template(conversation, options) ⇒ string | Tensor | Array | Array | BatchEncoding
        - .parse_response(response, [options]) ⇒ Record.<string, any> | Array.<Record>
      - static
        - .from_pretrained(pretrained_model_name_or_path, options) ⇒ Promise.<PreTrainedTokenizer>
    - .loadTokenizer(pretrained_model_name_or_path, options) ⇒ Promise.<Array>
    - .prepareTensorForDecode(tensor) ⇒ Array
    - ._build_translation_inputs(self, raw_inputs, tokenizer_options, generate_kwargs) ⇒ Object
  - inner
    - ~PretrainedTokenizerOptions : PretrainedOptions
    - ~TextContent : Object
    - ~ImageContent : Object
    - ~MessageContent : TextContent | ImageContent | Object
    - ~Message : Object
    - ~BatchEncoding : Array | Array | Tensor
tokenizers.PreTrainedTokenizer
Kind: static class of tokenizers
- .PreTrainedTokenizer
  - new PreTrainedTokenizer(tokenizerJSON, tokenizerConfig)
  - instance
    - .convert_tokens_to_ids(tokens) ⇒ any
    - ._call(text, options) ⇒ BatchEncoding
    - ._encode_text(text) ⇒ Array | null
    - .tokenize(text, options) ⇒ Array
    - .encode(text, options) ⇒ Array
    - .batch_decode(batch, decode_args) ⇒ Array
    - .decode(token_ids, [decode_args]) ⇒ string
    - .decode_single(token_ids, decode_args) ⇒ string
    - .get_chat_template(options) ⇒ string
    - .apply_chat_template(conversation, options) ⇒ string | Tensor | Array | Array | BatchEncoding
    - .parse_response(response, [options]) ⇒ Record.<string, any> | Array.<Record>
  - static
    - .from_pretrained(pretrained_model_name_or_path, options) ⇒ Promise.<PreTrainedTokenizer>
new PreTrainedTokenizer(tokenizerJSON, tokenizerConfig)
Create a new PreTrainedTokenizer instance.
| Param | Type | Description |
| --- | --- | --- |
| tokenizerJSON | Object | The JSON of the tokenizer. |
| tokenizerConfig | Object | The config of the tokenizer. |
preTrainedTokenizer.convert_tokens_to_ids(tokens) ⇒ any
Converts a token string (or a sequence of tokens) into a single integer id (or a sequence of ids), using the vocabulary.
Kind: instance method of PreTrainedTokenizer
Returns: any - The token id or list of token ids.
| Param | Type | Description |
| --- | --- | --- |
| tokens | T | One or several token(s) to convert to token id(s). |
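Example (illustrative sketch; Xenova/bert-base-uncased is an assumed example checkpoint, and the ids shown depend on its vocabulary):

```js
import { AutoTokenizer } from "@huggingface/transformers";

const tokenizer = await AutoTokenizer.from_pretrained("Xenova/bert-base-uncased");

// Tokens must already be vocabulary entries, e.g. as produced by tokenize().
const tokens = tokenizer.tokenize("hello world"); // e.g. ['hello', 'world']
const ids = tokenizer.convert_tokens_to_ids(tokens);
console.log(ids); // e.g. [7592, 2088]
```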
preTrainedTokenizer._call(text, options) ⇒ BatchEncoding
Encode/tokenize the given text(s).
Kind: instance method of PreTrainedTokenizer
Returns: BatchEncoding - Object to be passed to the model.
| Param | Type | Default | Description |
| --- | --- | --- | --- |
| text | string \| Array |  | The text to tokenize. |
| options | Object |  | An optional object containing the following properties: |
| [options.text_pair] | string \| Array | null | Optional second sequence to be encoded. If set, must be the same type as text. |
| [options.padding] | boolean \| 'max_length' | false | Whether to pad the input sequences. |
| [options.add_special_tokens] | boolean | true | Whether or not to add the special tokens associated with the corresponding model. |
| [options.truncation] | boolean \| null |  | Whether to truncate the input sequences. |
| [options.max_length] | number \| null |  | Maximum length of the returned list and optionally padding length. |
| [options.return_tensor] | boolean | true | Whether to return the results as Tensors or arrays. |
| [options.return_token_type_ids] | boolean \| null |  | Whether to return the token type ids. |
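Example (illustrative sketch of calling the tokenizer directly; the checkpoint name is an assumed example):

```js
import { AutoTokenizer } from "@huggingface/transformers";

const tokenizer = await AutoTokenizer.from_pretrained("Xenova/bert-base-uncased");

// Tokenizer instances are callable; passing an array encodes a whole batch,
// and padding/truncation make the sequences rectangular.
const { input_ids, attention_mask } = tokenizer(
    ["Hello world", "A somewhat longer second sentence"],
    { padding: true, truncation: true, return_tensor: false },
);
console.log(input_ids);      // nested arrays of token ids (Tensors when return_tensor is true)
console.log(attention_mask); // 1 for real tokens, 0 for padding
```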
preTrainedTokenizer._encode_text(text) ⇒ Array | null
Encodes a single text using the preprocessor pipeline of the tokenizer.
Kind: instance method of PreTrainedTokenizer
Returns: Array | null - The encoded tokens.
| Param | Type | Description |
| --- | --- | --- |
| text | string \| null | The text to encode. |
preTrainedTokenizer.tokenize(text, options) ⇒ Array
Converts a string into a sequence of tokens.
Kind: instance method of PreTrainedTokenizer
Returns: Array - The list of tokens.
| Param | Type | Default | Description |
| --- | --- | --- | --- |
| text | string |  | The sequence to be encoded. |
| options | Object |  | An optional object containing the following properties: |
| [options.pair] | string \| null |  | A second sequence to be encoded with the first. |
| [options.add_special_tokens] | boolean | false | Whether or not to add the special tokens associated with the corresponding model. |
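Example (illustrative; the checkpoint and the exact subword split are assumptions that depend on the vocabulary):

```js
import { AutoTokenizer } from "@huggingface/transformers";

const tokenizer = await AutoTokenizer.from_pretrained("Xenova/bert-base-uncased");

// tokenize() only splits into subword strings; no ids are assigned yet.
const tokens = tokenizer.tokenize("Tokenization is fun!");
console.log(tokens); // e.g. ['token', '##ization', 'is', 'fun', '!']
```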
preTrainedTokenizer.encode(text, options) ⇒ Array
Encodes a single text or a pair of texts using the model's tokenizer.
Kind: instance method of PreTrainedTokenizer
Returns: Array - An array of token IDs representing the encoded text(s).
| Param | Type | Default | Description |
| --- | --- | --- | --- |
| text | string |  | The text to encode. |
| options | Object |  | An optional object containing the following properties: |
| [options.text_pair] | string \| null | null | The optional second text to encode. |
| [options.add_special_tokens] | boolean | true | Whether or not to add the special tokens associated with the corresponding model. |
| [options.return_token_type_ids] | boolean \| null |  | Whether to return token_type_ids. |
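Example (illustrative; the ids shown are what a BERT-style vocabulary might produce):

```js
import { AutoTokenizer } from "@huggingface/transformers";

const tokenizer = await AutoTokenizer.from_pretrained("Xenova/bert-base-uncased");

// encode() returns a plain array of token ids, including special tokens by default.
const ids = tokenizer.encode("Hello world");
console.log(ids); // e.g. [101, 7592, 2088, 102] ([CLS] ... [SEP] for BERT-style models)
```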
preTrainedTokenizer.batch_decode(batch, decode_args) ⇒ Array
Decode a batch of tokenized sequences.
Kind: instance method of PreTrainedTokenizer
Returns: Array - List of decoded sequences.
| Param | Type | Description |
| --- | --- | --- |
| batch | Array \| Tensor | List/Tensor of tokenized input sequences. |
| decode_args | Object | (Optional) Object with decoding arguments. |
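Example (illustrative sketch; the checkpoint name is an assumed example):

```js
import { AutoTokenizer } from "@huggingface/transformers";

const tokenizer = await AutoTokenizer.from_pretrained("Xenova/bert-base-uncased");

const { input_ids } = tokenizer(["Hello world", "How are you?"], {
    padding: true,
    return_tensor: false,
});

// Decode every sequence in one call; skipping special tokens removes
// padding and [CLS]/[SEP]-style markers from the output strings.
const texts = tokenizer.batch_decode(input_ids, { skip_special_tokens: true });
console.log(texts); // e.g. ['hello world', 'how are you?']
```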
preTrainedTokenizer.decode(token_ids, [decode_args]) ⇒ string
Decodes a sequence of token IDs back to a string.
Kind: instance method of PreTrainedTokenizer
Returns: string - The decoded string.
Throws:
- Error if token_ids is not a non-empty array of integers.

| Param | Type | Default | Description |
| --- | --- | --- | --- |
| token_ids | Array \| Array \| Tensor |  | List/Tensor of token IDs to decode. |
| [decode_args] | Object | {} |  |
| [decode_args.skip_special_tokens] | boolean | false | If true, special tokens are removed from the output string. |
| [decode_args.clean_up_tokenization_spaces] | boolean | true | If true, spaces before punctuation and abbreviated forms are removed. |
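Example (illustrative round-trip; the checkpoint name and ids are assumed examples):

```js
import { AutoTokenizer } from "@huggingface/transformers";

const tokenizer = await AutoTokenizer.from_pretrained("Xenova/bert-base-uncased");

const ids = tokenizer.encode("Hello world"); // e.g. [101, 7592, 2088, 102]

// Round-trip back to text, dropping the special tokens added by encode().
const text = tokenizer.decode(ids, { skip_special_tokens: true });
console.log(text); // e.g. 'hello world' (an uncased vocabulary lowercases input)
```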
preTrainedTokenizer.decode_single(token_ids, decode_args) ⇒ string
Decode a single list of token ids to a string.
Kind: instance method of PreTrainedTokenizer
Returns: string - The decoded string
| Param | Type | Default | Description |
| --- | --- | --- | --- |
| token_ids | Array \| Array |  | List of token ids to decode. |
| decode_args | Object |  | Optional arguments for decoding. |
| [decode_args.skip_special_tokens] | boolean | false | Whether to skip special tokens during decoding. |
| [decode_args.clean_up_tokenization_spaces] | boolean \| null |  | Whether to clean up tokenization spaces during decoding. If null, the value is set to this.decoder.cleanup if it exists, falling back to this.clean_up_tokenization_spaces if it exists, falling back to true. |
preTrainedTokenizer.get_chat_template(options) ⇒ string
Retrieve the chat template string used for tokenizing chat messages. This template is used
internally by the apply_chat_template method and can also be used externally to retrieve the model's chat
template for better generation tracking.
Kind: instance method of PreTrainedTokenizer
Returns: string - The chat template string.
| Param | Type | Default | Description |
| --- | --- | --- | --- |
| options | Object |  | An optional object containing the following properties: |
| [options.chat_template] | string \| null | null | A Jinja template or the name of a template to use for this conversion. It is usually not necessary to pass anything to this argument, as the model's template will be used by default. |
| [options.tools] | Array |  | A list of tools (callable functions) that will be accessible to the model. If the template does not support function calling, this argument will have no effect. Each tool should be passed as a JSON Schema, giving the name, description and argument types for the tool. See our chat templating guide for more information. |
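Example (illustrative; reuses the checkpoint from the apply_chat_template example below):

```js
import { AutoTokenizer } from "@huggingface/transformers";

const tokenizer = await AutoTokenizer.from_pretrained("Xenova/mistral-tokenizer-v1");

// The raw Jinja template string that apply_chat_template() will render.
const template = tokenizer.get_chat_template();
console.log(template); // e.g. "{{ bos_token }}{% for message in messages %}..."
```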
preTrainedTokenizer.apply_chat_template(conversation, options) ⇒ string | Tensor | Array | Array | BatchEncoding
Converts a list of message objects with "role" and "content" keys to a list of token
ids. This method is intended for use with chat models, and will read the tokenizer's chat_template attribute to
determine the format and control tokens to use when converting.
See here for more information.
Example: Applying a chat template to a conversation.

```js
import { AutoTokenizer } from "@huggingface/transformers";

const tokenizer = await AutoTokenizer.from_pretrained("Xenova/mistral-tokenizer-v1");

const chat = [
  { "role": "user", "content": "Hello, how are you?" },
  { "role": "assistant", "content": "I'm doing great. How can I help you today?" },
  { "role": "user", "content": "I'd like to show off how chat templating works!" },
];

const text = tokenizer.apply_chat_template(chat, { tokenize: false });
// "<s>[INST] Hello, how are you? [/INST]I'm doing great. How can I help you today?</s> [INST] I'd like to show off how chat templating works! [/INST]"

const input_ids = tokenizer.apply_chat_template(chat, { tokenize: true, return_tensor: false });
// [1, 733, 16289, 28793, 22557, 28725, 910, 460, 368, 28804, 733, 28748, 16289, 28793, 28737, 28742, 28719, 2548, 1598, 28723, 1602, 541, 315, 1316, 368, 3154, 28804, 2, 28705, 733, 16289, 28793, 315, 28742, 28715, 737, 298, 1347, 805, 910, 10706, 5752, 1077, 3791, 28808, 733, 28748, 16289, 28793]
```
Kind: instance method of PreTrainedTokenizer
Returns: string | Tensor | Array | Array | BatchEncoding - The tokenized output.
| Param | Type | Default | Description |
| --- | --- | --- | --- |
| conversation | Array |  | A list of message objects with "role" and "content" keys, representing the chat history so far. |
| options | Object |  | An optional object containing the following properties: |
| [options.chat_template] | string \| null | null | A Jinja template to use for this conversion. If this is not passed, the model's chat template will be used instead. |
| [options.tools] | Array |  | A list of tools (callable functions) that will be accessible to the model. If the template does not support function calling, this argument will have no effect. Each tool should be passed as a JSON Schema, giving the name, description and argument types for the tool. See our chat templating guide for more information. |
| [options.documents] | Array.<Record> |  | A list of dicts representing documents that will be accessible to the model if it is performing RAG (retrieval-augmented generation). If the template does not support RAG, this argument will have no effect. We recommend that each document should be a dict containing "title" and "text" keys. Please see the RAG section of the chat templating guide for examples of passing documents with chat templates. |
| [options.add_generation_prompt] | boolean | false | Whether to end the prompt with the token(s) that indicate the start of an assistant message. This is useful when you want to generate a response from the model. Note that this argument will be passed to the chat template, and so it must be supported in the template for this argument to have any effect. |
| [options.tokenize] | boolean | true | Whether to tokenize the output. If false, the output will be a string. |
| [options.padding] | boolean | false | Whether to pad sequences to the maximum length. Has no effect if tokenize is false. |
| [options.truncation] | boolean | false | Whether to truncate sequences to the maximum length. Has no effect if tokenize is false. |
| [options.max_length] | number \| null |  | Maximum length (in tokens) to use for padding or truncation. Has no effect if tokenize is false. If not specified, the tokenizer's max_length attribute will be used as a default. |
| [options.return_tensor] | boolean | true | Whether to return the output as a Tensor or an Array. Has no effect if tokenize is false. |
| [options.return_dict] | boolean | true | Whether to return a dictionary with named outputs. Has no effect if tokenize is false. |
| [options.tokenizer_kwargs] | Object | {} | Additional options to pass to the tokenizer. |
preTrainedTokenizer.parse_response(response, [options]) ⇒ Record.<string, any> | Array.<Record>
Converts a raw model output string into a parsed message dictionary using the tokenizer's
response_schema (or a user-provided schema) to control parsing.
Kind: instance method of PreTrainedTokenizer
Returns: Record.<string, any> | Array.<Record> - The parsed message dict(s).
| Param | Type | Default | Description |
| --- | --- | --- | --- |
| response | string \| Array |  | The decoded output string(s) from the model. |
| [options] | Object |  | Options for parsing. |
| [options.schema] | Record.<string, any> \| null |  | A response schema to use. If not provided, the tokenizer's response_schema from its config will be used. |
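Example (a hedged sketch: it assumes the loaded tokenizer's config provides a response_schema, which only some newer checkpoints do, and output_ids is a hypothetical placeholder for generated ids):

```js
// Assumption: `tokenizer` was loaded via AutoTokenizer.from_pretrained() and its
// config includes a response_schema; otherwise pass { schema: ... } explicitly.
// `output_ids` is a hypothetical placeholder for ids produced by generation.
const raw = tokenizer.decode(output_ids, { skip_special_tokens: false });
const message = tokenizer.parse_response(raw);
console.log(message); // e.g. { role: 'assistant', content: '...' }
```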
PreTrainedTokenizer.from_pretrained(pretrained_model_name_or_path, options) ⇒ Promise.<PreTrainedTokenizer>
Loads a pre-trained tokenizer from the given pretrained_model_name_or_path.
Kind: static method of PreTrainedTokenizer
Returns: Promise.<PreTrainedTokenizer> - A new instance of the PreTrainedTokenizer class.
Throws:
- Error if the tokenizer.json or tokenizer_config.json files are not found in the pretrained_model_name_or_path.

| Param | Type | Description |
| --- | --- | --- |
| pretrained_model_name_or_path | string | The path to the pre-trained tokenizer. |
| options | PretrainedTokenizerOptions | Additional options for loading the tokenizer. |
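Example (illustrative; the repo name is an assumed example and can be any Hub repo or local path containing tokenizer.json and tokenizer_config.json):

```js
import { AutoTokenizer, PreTrainedTokenizer } from "@huggingface/transformers";

// AutoTokenizer resolves the correct tokenizer subclass from the config;
// PreTrainedTokenizer.from_pretrained() can also be called directly.
const tokenizer = await AutoTokenizer.from_pretrained("Xenova/bert-base-uncased");
console.log(tokenizer instanceof PreTrainedTokenizer); // true
```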
tokenizers.loadTokenizer(pretrained_model_name_or_path, options) ⇒ Promise.<Array>
Loads a tokenizer from the specified path.
Kind: static method of tokenizers
Returns: Promise.<Array> - A promise that resolves with information about the loaded tokenizer.
| Param | Type | Description |
| --- | --- | --- |
| pretrained_model_name_or_path | string | The path to the tokenizer directory. |
| options | PretrainedTokenizerOptions | Additional options for loading the tokenizer. |
tokenizers.prepareTensorForDecode(tensor) ⇒ Array
Helper function to convert a tensor to a list before decoding.
Kind: static method of tokenizers
Returns: Array - The tensor as a list.
| Param | Type | Description |
| --- | --- | --- |
| tensor | Tensor | The tensor to convert. |
tokenizers._build_translation_inputs(self, raw_inputs, tokenizer_options, generate_kwargs) ⇒ Object
Helper function to build translation inputs for an NllbTokenizer or M2M100Tokenizer.
Kind: static method of tokenizers
Returns: Object - Object to be passed to the model.
| Param | Type | Description |
| --- | --- | --- |
| self | PreTrainedTokenizer | The tokenizer instance. |
| raw_inputs | string \| Array | The text to tokenize. |
| tokenizer_options | Object | Options to be sent to the tokenizer. |
| generate_kwargs | Object | Generation options. |
tokenizers~PretrainedTokenizerOptions : PretrainedOptions
Kind: inner typedef of tokenizers
tokenizers~TextContent : Object
Kind: inner typedef of tokenizers
Properties
| Name | Type | Description |
| --- | --- | --- |
| type | 'text' | The type of content (must be 'text'). |
| text | string | The text content. |
tokenizers~ImageContent : Object
Kind: inner typedef of tokenizers
Properties
| Name | Type | Description |
| --- | --- | --- |
| type | 'image' | The type of content (must be 'image'). |
| [image] | string \| RawImage | Optional URL or instance of the image. Note: this works for SmolVLM; Qwen2VL and Idefics3 have different implementations. |
tokenizers~MessageContent : TextContent | ImageContent | Object
Base type for message content. This is a discriminated union that can be extended with additional content types.
Example: `@typedef {TextContent | ImageContent | AudioContent} MessageContent`
Kind: inner typedef of tokenizers
tokenizers~Message : Object
Kind: inner typedef of tokenizers
Properties
| Name | Type | Description |
| --- | --- | --- |
| role | 'user' \| 'assistant' \| 'system' \| string | The role of the message. |
| content | string \| Array | The content of the message. Can be a simple string or an array of content objects. |
tokenizers~BatchEncoding : Array | Array | Tensor
Holds the output of the tokenizer's call function.
Kind: inner typedef of tokenizers
Properties
| Name | Type | Description |
| --- | --- | --- |
| input_ids | BatchEncodingItem | List of token ids to be fed to a model. |
| attention_mask | BatchEncodingItem | List of indices specifying which tokens should be attended to by the model. |
| [token_type_ids] | BatchEncodingItem | List of token type ids to be fed to a model. |
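Example (illustrative sketch of reading the named outputs; the checkpoint is an assumed example):

```js
import { AutoTokenizer } from "@huggingface/transformers";

const tokenizer = await AutoTokenizer.from_pretrained("Xenova/bert-base-uncased");

// Calling the tokenizer returns a BatchEncoding with named outputs.
const encoding = tokenizer("Hello world");
console.log(encoding.input_ids);      // Tensor of token ids
console.log(encoding.attention_mask); // Tensor of 1s (no padding for a single input)
// encoding.token_type_ids is also present for models that use segment ids.
```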