---
license: other
license_name: upstage-solar-license
---

# Upstage solar-pro2 tokenizer

- Vocab size: 196,608
- Language support: English, Korean, Japanese, and more

Use this tokenizer to tokenize inputs for the Upstage solar-pro2 model.

You can load it with the `transformers` library like this:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("upstage/solar-pro2-tokenizer")

text = "Hi, how are you?"
enc = tokenizer.encode(text)
print("Encoded input:")
print(enc)

# Map token ids back to token strings via the inverse vocabulary.
# (tokenizer.convert_ids_to_tokens(enc) is an equivalent shortcut.)
inv_vocab = {v: k for k, v in tokenizer.get_vocab().items()}
tokens = [inv_vocab[token_id] for token_id in enc]
print("Tokens:")
print(tokens)

number_of_tokens = len(enc)
print("Number of tokens:", number_of_tokens)
```