Llama-3.2-3B-GGUF-4bit

This model was fine-tuned using QuantLLM.

Hyperparameters

The following settings were used during fine-tuning:

  • format: gguf
  • base_model: Llama-3.2-3B
Model details

  • format: GGUF
  • quantization: 4-bit
  • model size: 3B params
  • architecture: llama
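To illustrate what 4-bit quantization means for a model like this one, the sketch below performs a Q4_0-style round trip on a single block of weights: one floating-point scale per block, and each weight stored as a 4-bit code. This is a simplified illustration, not the exact GGUF on-disk layout (real GGUF packs two codes per byte and uses a half-precision scale); the function names are hypothetical.

```python
BLOCK_SIZE = 32  # GGUF Q4_0 groups weights into blocks of 32


def quantize_block(xs):
    """Quantize one block of floats to 4-bit codes (0..15) plus a scale.

    Q4_0-style: the scale is derived from the value with the largest
    magnitude, so that value maps to code 0 (i.e. -8 before the offset).
    """
    amax = max(xs, key=abs, default=0.0)
    if amax == 0.0:
        return 0.0, [8] * len(xs)  # all-zero block: code 8 decodes to 0.0
    d = amax / -8.0
    qs = [max(0, min(15, round(x / d) + 8)) for x in xs]
    return d, qs


def dequantize_block(d, qs):
    """Recover approximate floats from the scale and 4-bit codes."""
    return [(q - 8) * d for q in qs]


# Round trip: multiples of the scale reconstruct exactly.
weights = [0.5 * i for i in range(-8, 8)]
d, qs = quantize_block(weights)
restored = dequantize_block(d, qs)
```

Each weight costs 4 bits plus a share of the per-block scale, which is roughly a 4x size reduction versus fp16, at the cost of the rounding error visible in the round trip above.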
