---
language:
- en
license: apache-2.0
library_name: transformers
datasets:
- cerebras/SlimPajama-627B
metrics:
- accuracy
base_model: keeeeenw/MicroLlama
tags:
- TensorBlock
- GGUF
model-index:
- name: MicroLlama
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: IFEval (0-Shot)
      type: HuggingFaceH4/ifeval
      args:
        num_few_shot: 0
    metrics:
    - type: inst_level_strict_acc and prompt_level_strict_acc
      value: 19.85
      name: strict accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=keeeeenw/MicroLlama
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: BBH (3-Shot)
      type: BBH
      args:
        num_few_shot: 3
    metrics:
    - type: acc_norm
      value: 2.83
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=keeeeenw/MicroLlama
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MATH Lvl 5 (4-Shot)
      type: hendrycks/competition_math
      args:
        num_few_shot: 4
    metrics:
    - type: exact_match
      value: 0.0
      name: exact match
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=keeeeenw/MicroLlama
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GPQA (0-shot)
      type: Idavidrein/gpqa
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 1.45
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=keeeeenw/MicroLlama
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MuSR (0-shot)
      type: TAUR-Lab/MuSR
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 4.79
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=keeeeenw/MicroLlama
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU-PRO (5-shot)
      type: TIGER-Lab/MMLU-Pro
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 1.53
      name: accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=keeeeenw/MicroLlama
      name: Open LLM Leaderboard
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
[Website](https://tensorblock.co)
[Twitter](https://twitter.com/tensorblock_aoi)
[Discord](https://discord.gg/Ej5NmeHFf2)
[GitHub](https://github.com/TensorBlock)
[Telegram](https://t.me/TensorBlock)
## keeeeenw/MicroLlama - GGUF
This repo contains GGUF format model files for [keeeeenw/MicroLlama](https://huggingface.co/keeeeenw/MicroLlama).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4011](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
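As a quick smoke test, the sketch below shows one way to run a downloaded quant with the `llama-cli` binary from a llama.cpp build at or after the commit above; the file name, prompt, and generation length are illustrative placeholders, not part of this repo.
```shell
# Minimal sketch: run a downloaded quant interactively with llama.cpp.
# Assumes llama.cpp is built and MicroLlama-Q4_K_M.gguf sits in the current directory.
./llama-cli -m MicroLlama-Q4_K_M.gguf \
  -p "The history of the Roman Empire begins" \
  -n 128 \
  --temp 0.7
```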
## Our projects
<table border="1" cellspacing="0" cellpadding="10">
<tr>
<th colspan="2" style="font-size: 25px;">Forge</th>
</tr>
<tr>
<th colspan="2">
<img src="https://imgur.com/faI5UKh.jpeg" alt="Forge Project" width="900"/>
</th>
</tr>
<tr>
<th colspan="2">An OpenAI-compatible multi-provider routing layer.</th>
</tr>
<tr>
<th colspan="2">
<a href="https://github.com/TensorBlock/forge" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π Try it now! π</a>
</th>
</tr>
<tr>
<th style="font-size: 25px;">Awesome MCP Servers</th>
<th style="font-size: 25px;">TensorBlock Studio</th>
</tr>
<tr>
<th><img src="https://imgur.com/2Xov7B7.jpeg" alt="MCP Servers" width="450"/></th>
<th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Studio" width="450"/></th>
</tr>
<tr>
<th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
<th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
</tr>
<tr>
<th>
<a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
<th>
<a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
display: inline-block;
padding: 8px 16px;
background-color: #FF7F50;
color: white;
text-decoration: none;
border-radius: 6px;
font-weight: bold;
font-family: sans-serif;
">π See what we built π</a>
</th>
</tr>
</table>
## Prompt template
No chat prompt template is defined for this model; MicroLlama is a base (pretrained) model, so prompt it with plain text to be completed.
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [MicroLlama-Q2_K.gguf](https://huggingface.co/tensorblock/MicroLlama-GGUF/blob/main/MicroLlama-Q2_K.gguf) | Q2_K | 0.117 GB | smallest, significant quality loss - not recommended for most purposes |
| [MicroLlama-Q3_K_S.gguf](https://huggingface.co/tensorblock/MicroLlama-GGUF/blob/main/MicroLlama-Q3_K_S.gguf) | Q3_K_S | 0.135 GB | very small, high quality loss |
| [MicroLlama-Q3_K_M.gguf](https://huggingface.co/tensorblock/MicroLlama-GGUF/blob/main/MicroLlama-Q3_K_M.gguf) | Q3_K_M | 0.145 GB | very small, high quality loss |
| [MicroLlama-Q3_K_L.gguf](https://huggingface.co/tensorblock/MicroLlama-GGUF/blob/main/MicroLlama-Q3_K_L.gguf) | Q3_K_L | 0.155 GB | small, substantial quality loss |
| [MicroLlama-Q4_0.gguf](https://huggingface.co/tensorblock/MicroLlama-GGUF/blob/main/MicroLlama-Q4_0.gguf) | Q4_0 | 0.168 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [MicroLlama-Q4_K_S.gguf](https://huggingface.co/tensorblock/MicroLlama-GGUF/blob/main/MicroLlama-Q4_K_S.gguf) | Q4_K_S | 0.169 GB | small, greater quality loss |
| [MicroLlama-Q4_K_M.gguf](https://huggingface.co/tensorblock/MicroLlama-GGUF/blob/main/MicroLlama-Q4_K_M.gguf) | Q4_K_M | 0.177 GB | medium, balanced quality - recommended |
| [MicroLlama-Q5_0.gguf](https://huggingface.co/tensorblock/MicroLlama-GGUF/blob/main/MicroLlama-Q5_0.gguf) | Q5_0 | 0.200 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [MicroLlama-Q5_K_S.gguf](https://huggingface.co/tensorblock/MicroLlama-GGUF/blob/main/MicroLlama-Q5_K_S.gguf) | Q5_K_S | 0.200 GB | large, low quality loss - recommended |
| [MicroLlama-Q5_K_M.gguf](https://huggingface.co/tensorblock/MicroLlama-GGUF/blob/main/MicroLlama-Q5_K_M.gguf) | Q5_K_M | 0.204 GB | large, very low quality loss - recommended |
| [MicroLlama-Q6_K.gguf](https://huggingface.co/tensorblock/MicroLlama-GGUF/blob/main/MicroLlama-Q6_K.gguf) | Q6_K | 0.233 GB | very large, extremely low quality loss |
| [MicroLlama-Q8_0.gguf](https://huggingface.co/tensorblock/MicroLlama-GGUF/blob/main/MicroLlama-Q8_0.gguf) | Q8_0 | 0.302 GB | very large, extremely low quality loss - not recommended |
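To expose one of the files above over HTTP instead of running it interactively, a minimal sketch using llama.cpp's `llama-server` could look like the following; the chosen quant, context size, host, and port are illustrative, not recommendations.
```shell
# Minimal sketch: serve a quant with llama.cpp's built-in HTTP server.
# Q4_K_M is used only as an example; any file from the table above works.
./llama-server -m MicroLlama-Q4_K_M.gguf \
  -c 2048 \
  --host 127.0.0.1 \
  --port 8080
```
Once the server is up, it exposes llama.cpp's OpenAI-compatible endpoints on the chosen port.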
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/MicroLlama-GGUF --include "MicroLlama-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
To download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can run:
```shell
huggingface-cli download tensorblock/MicroLlama-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
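If you would rather mirror every quant in the repo at once, the same CLI accepts a broader pattern; `MY_LOCAL_DIR` is a placeholder as above:
```shell
huggingface-cli download tensorblock/MicroLlama-GGUF --include "*.gguf" --local-dir MY_LOCAL_DIR
```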