---
license: mit
language:
- en
base_model:
- microsoft/Phi-3-mini-128k-instruct
pipeline_tag: text-generation
---

## GGUF quantized version of Phi-3 Model (128k-instruct mini)

Original project [source](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct) (base model)

- Q_2 (poor quality; not recommended)
- Q_3 (acceptable)
- Q_4 family is recommended (also good for running on CPU)
- Q_5 (good in general)
- Q_6 is also good; if you want a better result, take this one instead of Q_5
- Q_8 is very good, but it needs a reasonable amount of RAM; otherwise you might expect a long wait
- 16-bit and 32-bit versions are also provided here for research purposes; since the 16-bit file size is similar to the original safetensors, if you have a GPU, go ahead with the safetensors instead; the results are pretty much the same
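As a rough rule of thumb, on-disk size is roughly parameters × bits-per-weight / 8, which is why the heavier quants above need more RAM. A minimal sketch, assuming Phi-3-mini's ~3.8B parameters and approximate average bits-per-weight for common GGUF quant types (the exact figures vary per file):

```python
# Rough GGUF file-size estimate: parameters * bits-per-weight / 8.
# Phi-3-mini has ~3.8B parameters; the bits-per-weight values below are
# approximate averages for common quant types, not exact measurements.
PARAMS = 3.8e9

BITS_PER_WEIGHT = {
    "Q2_K": 2.6,
    "Q4_K_M": 4.8,
    "Q5_K_M": 5.5,
    "Q6_K": 6.6,
    "Q8_0": 8.5,
    "F16": 16.0,
}

def estimated_size_gb(quant: str) -> float:
    """Approximate on-disk size in GB for a given quant type."""
    return PARAMS * BITS_PER_WEIGHT[quant] / 8 / 1e9

for q in BITS_PER_WEIGHT:
    print(f"{q:7s} ~ {estimated_size_gb(q):.1f} GB")
```

This also shows why the 16-bit GGUF is about the same size as the original safetensors: both store roughly 2 bytes per weight.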

### how to run it

Use any connector for interacting with GGUF files, e.g. [gguf-connector](https://pypi.org/project/gguf-connector/).
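Whichever runtime you pick, it can help to sanity-check that a download completed correctly: per the GGUF specification, every file begins with the 4-byte ASCII magic `GGUF`. A minimal sketch (the filename in the comment is hypothetical):

```python
# Sanity-check that a downloaded file is a GGUF container.
# Per the GGUF spec, every file starts with the 4-byte ASCII magic "GGUF".
GGUF_MAGIC = b"GGUF"

def looks_like_gguf(path: str) -> bool:
    """Return True if the file begins with the GGUF magic bytes."""
    with open(path, "rb") as f:
        return f.read(4) == GGUF_MAGIC

# Example usage (hypothetical filename):
# looks_like_gguf("Phi-3-mini-128k-instruct.Q4_K_M.gguf")
```

A truncated or mislabeled download will fail this check before you waste time loading it into a runtime.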

Welcome to the AI era.