How to use from the llama-cpp-python library
# !pip install llama-cpp-python

from llama_cpp import Llama

# The filename below is an example quantization; pick any .gguf file that
# actually exists in the repo (see the listing snippet further down).
llm = Llama.from_pretrained(
	repo_id="AI-Engine/gemma-2-9b-it-GGUF",
	filename="*Q4_K_M.gguf",  # glob pattern matched against the repo's files
)

response = llm.create_chat_completion(
	messages=[
		{"role": "user", "content": "Explain GGUF in one sentence."}
	]
)
print(response["choices"][0]["message"]["content"])
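
If you are not sure which filename to pass, you can list the GGUF files in the repo first. A minimal sketch, assuming huggingface_hub is installed (llama-cpp-python's from_pretrained needs it anyway):

from huggingface_hub import list_repo_files

# Print every GGUF file in the repo so you can choose a quantization.
for f in list_repo_files("AI-Engine/gemma-2-9b-it-GGUF"):
	if f.endswith(".gguf"):
		print(f)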

GGUF llama.cpp quantized version of google/gemma-2-9b-it.

Recommended Prompt Format (Gemma)

Gemma has no separate system role, so any context or instructions go into the user turn:

<start_of_turn>user
Context, instructions, and the user's message go here<end_of_turn>
<start_of_turn>model
AI message goes here<end_of_turn>
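
create_chat_completion applies this template for you; if you drive the model through the raw completion API instead, you format the prompt yourself and stop on the end-of-turn tag. A minimal sketch (the filename is an assumed quantization, as above):

from llama_cpp import Llama

llm = Llama.from_pretrained(
	repo_id="AI-Engine/gemma-2-9b-it-GGUF",
	filename="*Q4_K_M.gguf",  # assumed quant; substitute a real file from the repo
)

# Hand-built Gemma prompt; generation stops at the end-of-turn marker.
prompt = (
	"<start_of_turn>user\n"
	"Write a haiku about quantization.<end_of_turn>\n"
	"<start_of_turn>model\n"
)
out = llm(prompt, max_tokens=128, stop=["<end_of_turn>"])
print(out["choices"][0]["text"])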

Quant Version: llama.cpp build b3405, quantized with an importance matrix (imatrix).

Format: GGUF
Model size: 9B params
Architecture: gemma2
Available quantizations: 2-bit, 5-bit, 8-bit, 16-bit
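
As a rough rule of thumb (ignoring per-tensor overhead and mixed-precision layers), a quantized GGUF file weighs in at about params × bits / 8 bytes:

# Rough file-size estimate for a 9B-parameter model at the listed bit widths.
params = 9e9
for bits in (2, 5, 8, 16):
	print(f"{bits}-bit: ~{params * bits / 8 / 1e9:.1f} GB")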

