I've created a Space for chatting with Gemma 2 using llama.cpp.
- Choose between the 27B IT and 9B IT models
- Fast inference using llama.cpp
- gokaygokay/Gemma-2-llamacpp