How to use from the llama-cpp-python library
# !pip install llama-cpp-python

from llama_cpp import Llama

# Download the fp16 GGUF file from the Hugging Face Hub and load it
llm = Llama.from_pretrained(
	repo_id="ddh0/phi-2-GGUF-fp16",
	filename="phi-2-fp16.gguf",
)

# Generate a completion; echo=True includes the prompt in the returned text
output = llm(
	"Once upon a time,",
	max_tokens=512,
	echo=True
)
print(output)
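
The call returns an OpenAI-style completion dictionary; with echo=True the prompt is included in the completion, and the generated text itself should be available under output["choices"][0]["text"].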

This is Microsoft's Phi-2, converted to GGUF without quantization. No other changes were made.

The model was converted using convert-hf-to-gguf.py from Georgi Gerganov's llama.cpp repo, release b1671.
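
For reference, a conversion along those lines would look roughly like the command below. The path is a placeholder and the exact flags are assumptions that may differ between llama.cpp releases; check the script's --help output.

python convert-hf-to-gguf.py /path/to/phi-2 --outtype f16 --outfile phi-2-fp16.gguf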

All credit belongs to Microsoft for training and releasing this model. Thank you!

Model details: GGUF format, 3B params, phi2 architecture.