---
license: mit
license_link: https://huggingface.co/microsoft/phi-2/resolve/main/LICENSE
language:
- en
widget:
- text: Hello who are you?
  example_title: Identity
- text: What can you do?
  example_title: Capabilities
- text: Create a fastapi endpoint to retrieve the weather given a zip code.
  example_title: Coding
tags:
- convAI
- conversational
- TensorBlock
- GGUF
pipeline_tag: text-generation
base_model: abacaj/phi-2-super
model-index:
- name: phi-2-super
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Instruction Following Eval
      type: wis-k/instruction-following-eval
    metrics:
    - type: acc
      value: 0.2717
      name: prompt_level_loose_acc
    source:
      url: https://github.com/huggingface/lighteval
      name: LightEval
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[Website](https://tensorblock.co)
[Twitter](https://twitter.com/tensorblock_aoi)
[Discord](https://discord.gg/Ej5NmeHFf2)
[GitHub](https://github.com/TensorBlock)
[Telegram](https://t.me/TensorBlock)
## abacaj/phi-2-super - GGUF
This repo contains GGUF format model files for [abacaj/phi-2-super](https://huggingface.co/abacaj/phi-2-super).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th colspan="2" style="font-size: 25px;">Forge</th>
</tr>
<tr>
<th colspan="2">
<img src="https://imgur.com/faI5UKh.jpeg" alt="Forge Project" width="900"/>
</th>
</tr>
<tr>
<th colspan="2">An OpenAI-compatible multi-provider routing layer.</th>
</tr>
<tr>
<th colspan="2">
<a href="https://github.com/TensorBlock/forge" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π Try it now! π</a>
</th>
</tr>
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="MCP Servers" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Studio" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
```
<|endoftext|>[INST] {prompt} [/INST]
```
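To apply this template at inference time, the GGUF files below can be loaded with the llama-cpp-python bindings. This is a minimal sketch, assuming `pip install llama-cpp-python` and a locally downloaded Q4_K_M file; the path is illustrative:

```python
from llama_cpp import Llama

# Path is illustrative -- point it at whichever quant you downloaded
llm = Llama(model_path="phi-2-super-Q4_K_M.gguf", n_ctx=2048)

# Wrap the user message in the template shown above
prompt = "<|endoftext|>[INST] What can you do? [/INST]"

# Generate up to 256 tokens, stopping if the model opens a new turn
output = llm(prompt, max_tokens=256, stop=["[INST]"])
print(output["choices"][0]["text"])
```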
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [phi-2-super-Q2_K.gguf](https://huggingface.co/tensorblock/phi-2-super-GGUF/blob/main/phi-2-super-Q2_K.gguf) | Q2_K | 1.110 GB | smallest, significant quality loss - not recommended for most purposes |
| [phi-2-super-Q3_K_S.gguf](https://huggingface.co/tensorblock/phi-2-super-GGUF/blob/main/phi-2-super-Q3_K_S.gguf) | Q3_K_S | 1.251 GB | very small, high quality loss |
| [phi-2-super-Q3_K_M.gguf](https://huggingface.co/tensorblock/phi-2-super-GGUF/blob/main/phi-2-super-Q3_K_M.gguf) | Q3_K_M | 1.426 GB | very small, high quality loss |
| [phi-2-super-Q3_K_L.gguf](https://huggingface.co/tensorblock/phi-2-super-GGUF/blob/main/phi-2-super-Q3_K_L.gguf) | Q3_K_L | 1.575 GB | small, substantial quality loss |
| [phi-2-super-Q4_0.gguf](https://huggingface.co/tensorblock/phi-2-super-GGUF/blob/main/phi-2-super-Q4_0.gguf) | Q4_0 | 1.602 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [phi-2-super-Q4_K_S.gguf](https://huggingface.co/tensorblock/phi-2-super-GGUF/blob/main/phi-2-super-Q4_K_S.gguf) | Q4_K_S | 1.619 GB | small, greater quality loss |
| [phi-2-super-Q4_K_M.gguf](https://huggingface.co/tensorblock/phi-2-super-GGUF/blob/main/phi-2-super-Q4_K_M.gguf) | Q4_K_M | 1.738 GB | medium, balanced quality - recommended |
| [phi-2-super-Q5_0.gguf](https://huggingface.co/tensorblock/phi-2-super-GGUF/blob/main/phi-2-super-Q5_0.gguf) | Q5_0 | 1.933 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [phi-2-super-Q5_K_S.gguf](https://huggingface.co/tensorblock/phi-2-super-GGUF/blob/main/phi-2-super-Q5_K_S.gguf) | Q5_K_S | 1.933 GB | large, low quality loss - recommended |
| [phi-2-super-Q5_K_M.gguf](https://huggingface.co/tensorblock/phi-2-super-GGUF/blob/main/phi-2-super-Q5_K_M.gguf) | Q5_K_M | 2.003 GB | large, very low quality loss - recommended |
| [phi-2-super-Q6_K.gguf](https://huggingface.co/tensorblock/phi-2-super-GGUF/blob/main/phi-2-super-Q6_K.gguf) | Q6_K | 2.285 GB | very large, extremely low quality loss |
| [phi-2-super-Q8_0.gguf](https://huggingface.co/tensorblock/phi-2-super-GGUF/blob/main/phi-2-super-Q8_0.gguf) | Q8_0 | 2.958 GB | very large, extremely low quality loss - not recommended |
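To confirm which quantization and metadata a local file actually contains, the `gguf` Python package (maintained alongside llama.cpp) can read the GGUF header. A small sketch, assuming `pip install gguf` and an already-downloaded file (the filename is illustrative):

```python
from gguf import GGUFReader

# Path is illustrative -- any of the files in the table above works
reader = GGUFReader("phi-2-super-Q4_K_M.gguf")

# List the metadata keys stored in the GGUF header
for key in reader.fields:
    print(key)

# Tensor count is a quick sanity check that the file parsed fully
print(f"{len(reader.tensors)} tensors in file")
```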
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/phi-2-super-GGUF --include "phi-2-super-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
To download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/phi-2-super-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
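As a programmatic alternative to the CLI, the same downloads can be scripted with the `huggingface_hub` Python API; a short sketch, where `MY_LOCAL_DIR` stands in for your target directory as in the commands above:

```python
from huggingface_hub import hf_hub_download, snapshot_download

# Fetch a single quant file from this repo
hf_hub_download(
    repo_id="tensorblock/phi-2-super-GGUF",
    filename="phi-2-super-Q4_K_M.gguf",
    local_dir="MY_LOCAL_DIR",
)

# Or fetch every file matching a pattern
snapshot_download(
    repo_id="tensorblock/phi-2-super-GGUF",
    allow_patterns=["*Q4_K*gguf"],
    local_dir="MY_LOCAL_DIR",
)
```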