Model Summary

This repository hosts quantized versions of the Gemma 3 27B instruct model.

Format: GGUF
Converter: llama.cpp 7841fc723e059d1fd9640e5c0ef19050fcc7c698
Quantizer: LM-Kit.NET 2025.3.4

For more detailed information on the base model, please refer to its model page.

Model size: 27B params
Architecture: gemma3

Quantization: 4-bit
