llama cpp
#8
by
moon005
- opened
Now if only someone would convert this to GGUF. It would be so awesome to run it with llama.cpp.
That can only happen once the architecture is supported by llama.cpp. Judging by the size, it could fit on a GPU: roughly Q8 quality in 16 GB of VRAM, and nearly the original BF16 on 24 GB cards.
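For reference, once llama.cpp supports the architecture, the usual conversion and quantization flow looks roughly like this (paths and filenames below are placeholders, not from this repo):

```shell
# Sketch of the standard llama.cpp workflow; assumes the architecture is
# already supported and the HF checkpoint is downloaded locally.
python convert_hf_to_gguf.py ./path/to/hf-model \
    --outfile model-bf16.gguf --outtype bf16

# Quantize the BF16 GGUF down to Q8_0 to fit in ~16 GB of VRAM
./llama-quantize model-bf16.gguf model-q8_0.gguf Q8_0
```

The BF16 GGUF itself would be the ~24 GB option; Q8_0 trades a little quality for roughly half the memory.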
It could be used as a web-browser companion; as far as I know, only Brave supports local models (served via Ollama, oobabooga, etc.). I don't see a way to get the same functionality with the original non-GGUF format.
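Once a GGUF exists, serving it locally for a browser to talk to is straightforward. A minimal sketch with Ollama, assuming a hypothetical `model-q8_0.gguf` file and a model name of your choosing:

```shell
# Register a local GGUF with Ollama via a Modelfile
# (filename and model name below are hypothetical)
cat > Modelfile <<'EOF'
FROM ./model-q8_0.gguf
EOF
ollama create my-local-model -f Modelfile

# The Ollama daemon then exposes a local HTTP API (default port 11434)
# that Brave's "bring your own model" setting can point at.
ollama run my-local-model "Hello"
```

This is exactly the step that isn't possible with the raw HF checkpoint, since these tools expect GGUF.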