Upstage solar-pro3 tokenizer

  • Vocab size: 196,608
  • Language support: English, Korean, Japanese, and more

Use this tokenizer to tokenize inputs for the Upstage solar-pro3 model.

You can load it with the transformers library like this:

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("upstage/solar-pro3-tokenizer")

text = "Hi, how are you?"
enc = tokenizer.encode(text)  # list of token ids
print("Encoded input:")
print(enc)

# Map each token id back to its token string.
tokens = tokenizer.convert_ids_to_tokens(enc)
print("Tokens:")
print(tokens)

print("Number of tokens:", len(enc))
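Under the hood, the id-to-token step is just a lookup in the inverted vocabulary dictionary. A self-contained sketch of that lookup, using a small hypothetical vocabulary rather than solar-pro3's real 196,608-entry one:

```python
# Hypothetical toy vocabulary (token -> id); NOT the real solar-pro3 vocab.
toy_vocab = {"Hi": 0, ",": 1, "Ġhow": 2, "Ġare": 3, "Ġyou": 4, "?": 5}

# Invert the token -> id mapping into id -> token.
inv_vocab = {v: k for k, v in toy_vocab.items()}

# A hypothetical encoded input; look each id back up.
enc = [0, 1, 2, 3, 4, 5]
tokens = [inv_vocab[token_id] for token_id in enc]
print(tokens)  # -> ['Hi', ',', 'Ġhow', 'Ġare', 'Ġyou', '?']
```

With a real tokenizer, `tokenizer.convert_ids_to_tokens(enc)` performs this same lookup for you.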