tokenizer.json missing

#10
by s3dev-ai - opened

How did you convert this model to GGUF?

I have tried using llama_cpp, but I get an error saying the tokenizer.json file is missing. And indeed, when I re-examine the Mistral repo, the file is missing. What do I need to do differently to convert this lovely model to GGUF?

Can I just use the tokenizer file(s) from the base model?
