Upload folder using huggingface_hub
- Meta-llama-3.1-8b-instruct-Q2_K.gguf +3 -0
- Meta-llama-3.1-8b-instruct-Q3_K_L.gguf +3 -0
- Meta-llama-3.1-8b-instruct-Q3_K_M.gguf +3 -0
- Meta-llama-3.1-8b-instruct-Q3_K_S.gguf +3 -0
- Meta-llama-3.1-8b-instruct-Q4_0.gguf +3 -0
- Meta-llama-3.1-8b-instruct-Q4_K_M.gguf +3 -0
- Meta-llama-3.1-8b-instruct-Q4_K_S.gguf +3 -0
- Meta-llama-3.1-8b-instruct-Q5_0.gguf +3 -0
- Meta-llama-3.1-8b-instruct-Q5_K_M.gguf +3 -0
- Meta-llama-3.1-8b-instruct-Q5_K_S.gguf +3 -0
- Meta-llama-3.1-8b-instruct-Q6_K.gguf +3 -0
- Meta-llama-3.1-8b-instruct-Q8_0.gguf +3 -0
- README.md +17 -23
Meta-llama-3.1-8b-instruct-Q2_K.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:28161fda40663a94a65f75834f10a43a15ee0c0f7e9201bed4ca6428ec8fbd40
+size 3179131872
Meta-llama-3.1-8b-instruct-Q3_K_L.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1f5a0b52a83bc90642329a1890069af9f733cde2e4f2fe75a22eaa0905980fc6
+size 4321956832
Meta-llama-3.1-8b-instruct-Q3_K_M.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:280f33a1e2cba9c742c6e905eb846a22d03904559c024871615122ff660dc17a
+size 4018918368
Meta-llama-3.1-8b-instruct-Q3_K_S.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:13ddf99cd59601b21fd0d9579470f7a89550455d596f93d8dfccbdecf6ef8365
+size 3664499680
Meta-llama-3.1-8b-instruct-Q4_0.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c6260f4ce9dfe18d5c6decc67d0a20ab99602c6c0bf2a5e21febc9494b4de92b
+size 4661212128
Meta-llama-3.1-8b-instruct-Q4_K_M.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:46bb560b83fc79bf9ceb4c98d508cde09c8a2afc46d2b0283d3b9d509a7fc52d
+size 4920734688
Meta-llama-3.1-8b-instruct-Q4_K_S.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:30e1fcef327047b248590865bced30b83a468b310e6017a0e9f01bc392454914
+size 4692669408
Meta-llama-3.1-8b-instruct-Q5_0.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f8f6a1a9197ab0248d9302958c1eceb2584bc9c9610eca1aad9710faea8c41af
+size 5599294432
Meta-llama-3.1-8b-instruct-Q5_K_M.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0eda468798b687267263484f087e7f47732c225e9d4a7c13ad5559ea96174a3a
+size 5732987872
Meta-llama-3.1-8b-instruct-Q5_K_S.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b742fc46c13e4a58971a30f5f374590f3d2d007afb9a2997cbc3909ba0b3a99c
+size 5599294432
Meta-llama-3.1-8b-instruct-Q6_K.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:94558b798fc38c0e047de2723f201ddef996e52857e6b6ebb2be7759f1c58432
+size 6596006880
Meta-llama-3.1-8b-instruct-Q8_0.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a608aa4c21f3e93495040f0832c475997da0e685f0edc16c0cd9a46787f33c39
+size 8540771296
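Each of the `.gguf` entries above is stored in the repository as a Git LFS pointer file: three `key value` lines giving the spec version, the SHA-256 object id, and the size in bytes. As a minimal sketch (the `parse_lfs_pointer` helper is illustrative, not part of this repo or of huggingface_hub), such a pointer can be parsed like this:

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a Git LFS pointer file into a dict of its fields.

    A pointer file holds one `key value` pair per line, e.g.:
        version https://git-lfs.github.com/spec/v1
        oid sha256:28161fda...
        size 3179131872
    """
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    # Normalise the byte count to an int for convenience.
    fields["size"] = int(fields["size"])
    return fields


# The Q2_K pointer from this commit, verbatim.
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:28161fda40663a94a65f75834f10a43a15ee0c0f7e9201bed4ca6428ec8fbd40
size 3179131872
"""
info = parse_lfs_pointer(pointer)
```

The `size` field is how the Hub can report file sizes (e.g. ~3.18 GB for Q2_K) without downloading the object itself.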
README.md
CHANGED
@@ -8,9 +8,6 @@ language:
 - hi
 - es
 - th
-- zh
-- ko
-- ja
 pipeline_tag: text-generation
 tags:
 - facebook
@@ -191,7 +188,7 @@ extra_gated_fields:
 extra_gated_description: The information you provide will be collected, stored, processed
 and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/).
 extra_gated_button_content: Submit
-base_model:
+base_model: Crystalcareai/Meta-llama-3.1-8b-instruct
 ---
 
 <div style="width: auto; margin-left: auto; margin-right: auto">
@@ -205,9 +202,9 @@ base_model: aifeifei798/Meta-Llama-3.1-8B-Instruct
 </div>
 </div>
 
-##
+## Crystalcareai/Meta-llama-3.1-8b-instruct - GGUF
 
-This repo contains GGUF format model files for [
+This repo contains GGUF format model files for [Crystalcareai/Meta-llama-3.1-8b-instruct](https://huggingface.co/Crystalcareai/Meta-llama-3.1-8b-instruct).
 
 The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4011](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
 
@@ -222,9 +219,6 @@ The files were quantized using machines provided by [TensorBlock](https://tensor
 ```
 <|begin_of_text|><|start_header_id|>system<|end_header_id|>
 
-Cutting Knowledge Date: December 2023
-Today Date: 26 Jul 2024
-
 {system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>
 
 {prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
@@ -234,18 +228,18 @@ Today Date: 26 Jul 2024
 
 | Filename | Quant type | File Size | Description |
 | -------- | ---------- | --------- | ----------- |
-| [Meta-
-| [Meta-
-| [Meta-
-| [Meta-
-| [Meta-
-| [Meta-
-| [Meta-
-| [Meta-
-| [Meta-
-| [Meta-
-| [Meta-
-| [Meta-
+| [Meta-llama-3.1-8b-instruct-Q2_K.gguf](https://huggingface.co/tensorblock/Meta-llama-3.1-8b-instruct-GGUF/blob/main/Meta-llama-3.1-8b-instruct-Q2_K.gguf) | Q2_K | 3.179 GB | smallest, significant quality loss - not recommended for most purposes |
+| [Meta-llama-3.1-8b-instruct-Q3_K_S.gguf](https://huggingface.co/tensorblock/Meta-llama-3.1-8b-instruct-GGUF/blob/main/Meta-llama-3.1-8b-instruct-Q3_K_S.gguf) | Q3_K_S | 3.664 GB | very small, high quality loss |
+| [Meta-llama-3.1-8b-instruct-Q3_K_M.gguf](https://huggingface.co/tensorblock/Meta-llama-3.1-8b-instruct-GGUF/blob/main/Meta-llama-3.1-8b-instruct-Q3_K_M.gguf) | Q3_K_M | 4.019 GB | very small, high quality loss |
+| [Meta-llama-3.1-8b-instruct-Q3_K_L.gguf](https://huggingface.co/tensorblock/Meta-llama-3.1-8b-instruct-GGUF/blob/main/Meta-llama-3.1-8b-instruct-Q3_K_L.gguf) | Q3_K_L | 4.322 GB | small, substantial quality loss |
+| [Meta-llama-3.1-8b-instruct-Q4_0.gguf](https://huggingface.co/tensorblock/Meta-llama-3.1-8b-instruct-GGUF/blob/main/Meta-llama-3.1-8b-instruct-Q4_0.gguf) | Q4_0 | 4.661 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
+| [Meta-llama-3.1-8b-instruct-Q4_K_S.gguf](https://huggingface.co/tensorblock/Meta-llama-3.1-8b-instruct-GGUF/blob/main/Meta-llama-3.1-8b-instruct-Q4_K_S.gguf) | Q4_K_S | 4.693 GB | small, greater quality loss |
+| [Meta-llama-3.1-8b-instruct-Q4_K_M.gguf](https://huggingface.co/tensorblock/Meta-llama-3.1-8b-instruct-GGUF/blob/main/Meta-llama-3.1-8b-instruct-Q4_K_M.gguf) | Q4_K_M | 4.921 GB | medium, balanced quality - recommended |
+| [Meta-llama-3.1-8b-instruct-Q5_0.gguf](https://huggingface.co/tensorblock/Meta-llama-3.1-8b-instruct-GGUF/blob/main/Meta-llama-3.1-8b-instruct-Q5_0.gguf) | Q5_0 | 5.599 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
+| [Meta-llama-3.1-8b-instruct-Q5_K_S.gguf](https://huggingface.co/tensorblock/Meta-llama-3.1-8b-instruct-GGUF/blob/main/Meta-llama-3.1-8b-instruct-Q5_K_S.gguf) | Q5_K_S | 5.599 GB | large, low quality loss - recommended |
+| [Meta-llama-3.1-8b-instruct-Q5_K_M.gguf](https://huggingface.co/tensorblock/Meta-llama-3.1-8b-instruct-GGUF/blob/main/Meta-llama-3.1-8b-instruct-Q5_K_M.gguf) | Q5_K_M | 5.733 GB | large, very low quality loss - recommended |
+| [Meta-llama-3.1-8b-instruct-Q6_K.gguf](https://huggingface.co/tensorblock/Meta-llama-3.1-8b-instruct-GGUF/blob/main/Meta-llama-3.1-8b-instruct-Q6_K.gguf) | Q6_K | 6.596 GB | very large, extremely low quality loss |
+| [Meta-llama-3.1-8b-instruct-Q8_0.gguf](https://huggingface.co/tensorblock/Meta-llama-3.1-8b-instruct-GGUF/blob/main/Meta-llama-3.1-8b-instruct-Q8_0.gguf) | Q8_0 | 8.541 GB | very large, extremely low quality loss - not recommended |
 
 
 ## Downloading instruction
@@ -261,11 +255,11 @@ pip install -U "huggingface_hub[cli]"
 Then, download the individual model file to a local directory
 
 ```shell
-huggingface-cli download tensorblock/Meta-
+huggingface-cli download tensorblock/Meta-llama-3.1-8b-instruct-GGUF --include "Meta-llama-3.1-8b-instruct-Q2_K.gguf" --local-dir MY_LOCAL_DIR
 ```
 
 If you want to download multiple model files with a pattern (e.g., `*Q4_K*gguf`), you can try:
 
 ```shell
-huggingface-cli download tensorblock/Meta-
+huggingface-cli download tensorblock/Meta-llama-3.1-8b-instruct-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
 ```