Llama3 Instruct Tokenizers.Encoding.offsets is wrong
#180
by AlignLearner - opened
from transformers import AutoTokenizer
t = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")
print(t("今天天气好", add_special_tokens=False)[0].offsets)
[(0, 2), (2, 3), (3, 4), (4, 5)]
When it encodes Chinese characters, its output is actually correct.
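A quick way to sanity-check the offsets reported above, without downloading the gated checkpoint, is to slice the original string with them; the spans below are the values printed in this thread:

```python
# Verify that the reported character offsets tile the input string.
text = "今天天气好"
offsets = [(0, 2), (2, 3), (3, 4), (4, 5)]  # offsets printed above
pieces = [text[s:e] for s, e in offsets]
print(pieces)  # → ['今天', '天', '气', '好']
assert "".join(pieces) == text  # spans are contiguous and cover the text
```

The spans are contiguous and reconstruct the input exactly, which is consistent with closing the discussion: the offsets are character indices into the decoded string, not byte indices, so they are correct for Chinese input.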
AlignLearner changed discussion status to closed