This model is crap

#3
by Dimon321 - opened


I wouldn't judge until the architecture is fully confirmed and correct, along with the chat template and the tokenizer.

Now you can try again; it seems to be working: https://github.com/ggml-org/llama.cpp/pull/18936

The template should be handled by the `--jinja` option, and the tokenizer is embedded in the GGUF, AFAIK. Yep, I already tried the latest main from llama.cpp and another GGUF: same problems. It is something else.
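For reference, a minimal sketch of what "latest main with `--jinja`" means in practice; the model path and prompt here are placeholders, not the actual files from this thread:

```shell
# Build llama.cpp from the current main branch
git clone https://github.com/ggml-org/llama.cpp
cd llama.cpp
cmake -B build && cmake --build build --config Release

# Run with the chat template embedded in the GGUF applied via --jinja
# (model.gguf is a placeholder path)
./build/bin/llama-cli -m model.gguf --jinja -p "Hello"
```

If the output is still broken with `--jinja` and a freshly converted GGUF, the template and tokenizer are likely not the culprit, which matches the conclusion above.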

