This is Microsoft's Phi-2, converted to GGUF without quantization. No other changes were made.
The model was converted using the `convert-hf-to-gguf.py` script from Georgi Gerganov's llama.cpp repository, release `b1671`.
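For reference, a conversion of this kind can be reproduced roughly as follows. This is a sketch, not the exact command used for this upload: the checkpoint path and output filename are illustrative, and `--outtype f16` is an assumption matching an unquantized conversion.

```shell
# Clone llama.cpp and check out the release tag used for this conversion (b1671).
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
git checkout b1671

# Install the Python dependencies the conversion script needs.
pip install -r requirements.txt

# Convert the Hugging Face checkpoint to GGUF without quantizing;
# f16 keeps the weights at half precision rather than an integer quant.
python convert-hf-to-gguf.py /path/to/phi-2 \
    --outfile phi-2.gguf \
    --outtype f16
```

The resulting `.gguf` file can then be loaded directly by llama.cpp's `main` or `server` binaries, or quantized further with the `quantize` tool.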
All credit belongs to Microsoft for training and releasing this model. Thank you!