North Tokenizer

Shared SentencePiece tokenizer for the North Star model family:

- Vocabulary size: 32,000
- Format: SentencePiece (`.model` file)

```python
import sentencepiece as spm

# Load the tokenizer and encode a string into token ids
sp = spm.SentencePieceProcessor(model_file="tokenizer.model")
ids = sp.encode("Hello world", out_type=int)
```