What BPE pre-tokenizer did you use?

#1
by sapbot - opened

I assumed it was qwen3, but it seems to be something else, because there is no qwen3 entry among llama.cpp's tokenizers.

PreTrainedTokenizerFast

That didn't work. I found out myself that it's qwen2.
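
For anyone else hitting this, here is a minimal sketch for verifying which pre-tokenizer tag a converted GGUF actually carries. It assumes the `gguf` Python package that ships with llama.cpp (`pip install gguf`); "model.gguf" is a placeholder path:

```python
# Minimal sketch: read the BPE pre-tokenizer tag from a GGUF file,
# assuming the gguf Python package from the llama.cpp repo.
from gguf import GGUFReader

reader = GGUFReader("model.gguf")  # placeholder path

# "tokenizer.ggml.pre" is the metadata key llama.cpp reads to pick
# the BPE pre-tokenizer at load time.
field = reader.get_field("tokenizer.ggml.pre")
if field is None:
    print("no pre-tokenizer tag found (possibly an older conversion)")
else:
    # String fields keep their raw bytes in the last referenced part.
    value = bytes(field.parts[field.data[-1]]).decode("utf-8")
    print(f"pre-tokenizer: {value}")  # expected: qwen2 for this model
```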

sapbot changed discussion status to closed
