GGUF
llama.cpp

Quantized Models?

#4
by PFnove - opened

Almost none of the people interested in running a 7B model want or need the f32 weights (VRAM isn't infinite on consumer GPUs).
It would be nice to have some models with quantized weights in GGUF format; most people don't want to download a file over 30 GB just to quantize it down to 8 or 4 bits.
Float32 weights only make sense for the gemma-2b and gemma-7b base models (the ones that aren't already instruction-tuned or in GGUF, a format that doesn't allow finetuning).
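To put the download concern in numbers, here is a back-of-the-envelope sketch of the weight storage at different precisions. The 7B parameter count is a round figure, and real GGUF files add metadata and keep some tensors at higher precision, so actual file sizes differ somewhat:

```python
# Rough memory footprint of the weights of a 7B-parameter model at
# different precisions. Real GGUF files add metadata overhead and some
# quantized types (e.g. q4_0) store extra per-block scales, so these
# numbers are lower bounds rather than exact file sizes.
PARAMS = 7_000_000_000  # approximate parameter count

def size_gb(bits_per_weight: float) -> float:
    """Bytes needed for the weights alone, in GB (1 GB = 1e9 bytes)."""
    return PARAMS * bits_per_weight / 8 / 1e9

for label, bits in [("f32", 32), ("f16", 16), ("8-bit", 8), ("4-bit", 4)]:
    print(f"{label}: ~{size_gb(bits):.1f} GB")
```

This is why the f32 release lands around 28 GB while a 4-bit quantization of the same model fits in a few GB of VRAM.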

Hi @PFnove, sorry for the late response.

You're right, the float32 versions are not the workhorses of the consumer LLM space. They are the foundational checkpoints that serve as the source material for the more accessible, quantized versions. This is why you often see models released in full precision first: it lets the community create a variety of quantized versions optimized for different hardware and use cases.
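For intuition, the basic idea behind quantizing a full-precision checkpoint can be sketched in a few lines. This is a minimal symmetric 8-bit scheme for illustration only; GGUF's actual quantized types use per-block scales and more elaborate bit packing:

```python
# Minimal sketch of symmetric integer quantization: map floats to small
# signed integers plus one scale factor. Illustrative only -- real GGUF
# quantization works on blocks of weights with their own scales.

def quantize(weights, bits=8):
    """Quantize a list of floats to signed ints with a single scale."""
    qmax = 2 ** (bits - 1) - 1            # 127 for 8-bit
    scale = max(abs(w) for w in weights) / qmax
    return [round(w / scale) for w in weights], scale

def dequantize(q, scale):
    """Recover approximate floats from the quantized ints."""
    return [x * scale for x in q]

weights = [0.12, -0.53, 0.97, -1.24, 0.05]
q, scale = quantize(weights)
restored = dequantize(q, scale)
# Each restored value is within one quantization step of the original.
assert all(abs(a - b) <= scale for a, b in zip(weights, restored))
```

The float32 checkpoint is what makes this step possible: you can always derive an 8-bit or 4-bit version from full precision, but not the other way around.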

The Hugging Face Hub is the central repository for pre-quantized models. You'll find that many community members, particularly those from projects like llama.cpp and creators like TheBloke, have already done the work of converting and quantizing popular models. You can filter the models by the GGUF tag to find a comprehensive list.

Thank you.
