Model Summary

This repository hosts quantized versions of the Phi-4-mini-instruct model.

Format: GGUF
Converter: llama.cpp 06c2b1561d8b882bc018554591f8c35eb04ad30e
Quantizer: LM-Kit.NET 2025.3.1

For more detailed information on the base model, please refer to the following link.

Model size: 4B params
Architecture: phi3

Available quantizations:

2-bit
3-bit
4-bit
5-bit
6-bit
8-bit
16-bit
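As a rough usage sketch, any of these GGUF files can be run locally with llama.cpp's `llama-cli` tool. The `.gguf` filename below is a placeholder assumption; substitute the actual file you download from this repository for your chosen quantization level.

```shell
# Hypothetical invocation; the model filename is a placeholder for
# whichever quantized .gguf file you downloaded from this repository.
./llama-cli -m ./Phi-4-mini-instruct-Q4_K_M.gguf \
    -p "Explain GGUF quantization in one sentence." \
    -n 128
```

Lower-bit quantizations (2-bit, 3-bit) trade output quality for a smaller memory footprint, while 8-bit and 16-bit files stay closer to the base model at the cost of size.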
