gemma-2-27b-gguf
gemma-2-27b-gguf is a GGUF Q4_K_M (int4) quantized version of Google's Gemma-2-27B with Instruct Training (IT), packaged for inference and optimized for AI PCs.
The parent Gemma-2-27B is a leading open source foundation model from Google.
Model Description
- Developed by: Google
- Model type: gemma-2-27b
- Parameters: 27 billion
- Model Parent: google/gemma-2-27b-it
- Language(s) (NLP): English
- License: Gemma Terms of Use
- Uses: General purpose chat
- RAG Benchmark Accuracy Score: NA
- Quantization: int4
Model Card Contact
```python
# !pip install llama-cpp-python
from llama_cpp import Llama

# Download and load the Q4_K_M GGUF build of the model
llm = Llama.from_pretrained(
    repo_id="llmware/gemma-2-27b-instruct-gguf",
    filename="gemma-2-27b-it-Q4_K_M.gguf",
)
```
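Once the model is loaded, chat-style inference runs through llama-cpp-python's `create_chat_completion` API. Below is a minimal sketch assuming the `llm` object created above; the user prompt and `max_tokens` value are illustrative placeholders, not an official sample.

```python
# Minimal chat example (illustrative prompt, not an official sample)
response = llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "Explain what GGUF quantization does in two sentences."}
    ],
    max_tokens=256,
)

# The response follows the OpenAI-style chat completion schema
print(response["choices"][0]["message"]["content"])
```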