How to use with llama.cpp

Install from WinGet (Windows)

winget install llama.cpp

Install from brew (macOS/Linux)

brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf theprint/Pythonified-Llama-3.2-3B-Instruct-GGUF

# Run inference directly in the terminal:
llama-cli -hf theprint/Pythonified-Llama-3.2-3B-Instruct-GGUF

Use a pre-built binary

# Download a pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf theprint/Pythonified-Llama-3.2-3B-Instruct-GGUF

# Run inference directly in the terminal:
./llama-cli -hf theprint/Pythonified-Llama-3.2-3B-Instruct-GGUF

Build from source

git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf theprint/Pythonified-Llama-3.2-3B-Instruct-GGUF

# Run inference directly in the terminal:
./build/bin/llama-cli -hf theprint/Pythonified-Llama-3.2-3B-Instruct-GGUF

Use Docker

docker model run hf.co/theprint/Pythonified-Llama-3.2-3B-Instruct-GGUF
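Once llama-server is up, it exposes an OpenAI-compatible API alongside the web UI. A minimal query sketch, assuming the default host and port (localhost:8080):

# Send a chat completion request to the local server
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Write a Python function that reverses a string."}]}'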
Pythonified-Llama-3.2-3B-Instruct - GGUF Quantized
Quantized GGUF versions of Pythonified-Llama-3.2-3B-Instruct for use with llama.cpp and other GGUF-compatible inference engines.
Original Model
- Base model: meta-llama/Llama-3.2-3B-Instruct
- Fine-tuned model: theprint/Pythonified-Llama-3.2-3B-Instruct
- Quantized by: theprint
Available Quantizations
- Pythonified-Llama-3.2-3B-Instruct-f16.gguf (6135.6 MB) - 16-bit float (original precision, largest file)
- Pythonified-Llama-3.2-3B-Instruct-q3_k_m.gguf (1609.0 MB) - 3-bit quantization (medium quality)
- Pythonified-Llama-3.2-3B-Instruct-q4_k_m.gguf (1925.8 MB) - 4-bit quantization (medium, recommended for most use cases)
- Pythonified-Llama-3.2-3B-Instruct-q5_k_m.gguf (2214.6 MB) - 5-bit quantization (medium, good quality)
- Pythonified-Llama-3.2-3B-Instruct-q6_k.gguf (2521.4 MB) - 6-bit quantization (high quality)
- Pythonified-Llama-3.2-3B-Instruct-q8_0.gguf (3263.4 MB) - 8-bit quantization (very high quality)
Usage
With llama.cpp
# Download recommended quantization
wget https://huggingface.co/theprint/Pythonified-Llama-3.2-3B-Instruct-GGUF/resolve/main/Pythonified-Llama-3.2-3B-Instruct-q4_k_m.gguf
# Run inference
llama-cli -m Pythonified-Llama-3.2-3B-Instruct-q4_k_m.gguf \
-p "Your prompt here" \
-n 256 \
--temp 0.7 \
--top-p 0.9
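If you prefer the Hugging Face CLI to wget, the same file can be fetched with huggingface-cli. A sketch, assuming huggingface_hub is installed (pip install huggingface_hub):

# Download a single GGUF file into the current directory
huggingface-cli download theprint/Pythonified-Llama-3.2-3B-Instruct-GGUF \
  Pythonified-Llama-3.2-3B-Instruct-q4_k_m.gguf --local-dir .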
With other GGUF tools
These files are compatible with:
- llama.cpp
- Ollama (import as a custom model; see the sketch after this list)
- KoboldCpp
- text-generation-webui
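For Ollama, the import works by pointing a Modelfile at a local GGUF file. A minimal sketch, assuming the q4_k_m file from the Usage section above sits in the current directory (the model name pythonified-llama is arbitrary):

# Create a Modelfile referencing the local GGUF
cat > Modelfile <<'EOF'
FROM ./Pythonified-Llama-3.2-3B-Instruct-q4_k_m.gguf
EOF

# Register and run the custom model
ollama create pythonified-llama -f Modelfile
ollama run pythonified-llama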
Quantization Info
Recommended: q4_k_m provides the best balance of size, speed, and quality for most use cases.
For maximum quality: Use q8_0 or f16
For maximum speed/smallest size: Use q3_k_m (the smallest quantization provided in this repository)
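With llama.cpp's -hf flag you can also pull a specific quantization directly by appending a tag after the repository name. A sketch, assuming the tag resolution matches the quant labels in these filenames:

# Maximum quality over the network (large download)
llama-cli -hf theprint/Pythonified-Llama-3.2-3B-Instruct-GGUF:Q8_0

# Smallest, fastest option
llama-cli -hf theprint/Pythonified-Llama-3.2-3B-Instruct-GGUF:Q3_K_M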
License
apache-2.0
Citation
@misc{pythonified_llama_3.2_3b_instruct_gguf,
title={Pythonified-Llama-3.2-3B-Instruct GGUF Quantized Models},
author={theprint},
year={2025},
publisher={Hugging Face},
url={https://huggingface.co/theprint/Pythonified-Llama-3.2-3B-Instruct-GGUF}
}