Upload folder using huggingface_hub
- README.md +19 -26
- ko-gemma-2-9b-it-Q2_K.gguf +2 -2
- ko-gemma-2-9b-it-Q3_K_L.gguf +2 -2
- ko-gemma-2-9b-it-Q3_K_M.gguf +2 -2
- ko-gemma-2-9b-it-Q3_K_S.gguf +2 -2
- ko-gemma-2-9b-it-Q4_0.gguf +2 -2
- ko-gemma-2-9b-it-Q4_K_M.gguf +2 -2
- ko-gemma-2-9b-it-Q4_K_S.gguf +2 -2
- ko-gemma-2-9b-it-Q5_0.gguf +2 -2
- ko-gemma-2-9b-it-Q5_K_M.gguf +2 -2
- ko-gemma-2-9b-it-Q5_K_S.gguf +2 -2
- ko-gemma-2-9b-it-Q6_K.gguf +2 -2
- ko-gemma-2-9b-it-Q8_0.gguf +2 -2
README.md
CHANGED
@@ -1,19 +1,14 @@
 ---
-license: gemma
 library_name: transformers
 pipeline_tag: text-generation
-
-extra_gated_prompt: To access Gemma on Hugging Face, you’re required to review and
-  agree to Google’s usage license. To do this, please ensure you’re logged in to Hugging
-  Face and click below. Requests are processed immediately.
-extra_gated_button_content: Acknowledge license
 tags:
-- conversational
 - TensorBlock
 - GGUF
-base_model: rtzr/ko-gemma-2-9b-it
-language:
-- ko
 ---
 
 <div style="width: auto; margin-left: auto; margin-right: auto">
@@ -27,13 +22,12 @@ language:
 </div>
 </div>
 
-##
 
-This repo contains GGUF format model files for [
 
 The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4011](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
 
-
 <div style="text-align: left; margin: 20px 0;">
 <a href="https://tensorblock.co/waitlist/client" style="display: inline-block; padding: 10px 20px; background-color: #007bff; color: white; text-decoration: none; border-radius: 5px; font-weight: bold;">
 Run them on the TensorBlock client using your local machine ↗
@@ -42,7 +36,6 @@ The files were quantized using machines provided by [TensorBlock](https://tensor
 
 ## Prompt template
 
-
 ```
 <bos>{system_prompt}<start_of_turn>user
 {prompt}<end_of_turn>
@@ -53,18 +46,18 @@ The files were quantized using machines provided by [TensorBlock](https://tensor
 
 | Filename | Quant type | File Size | Description |
 | -------- | ---------- | --------- | ----------- |
-| [ko-gemma-2-9b-it-Q2_K.gguf](https://huggingface.co/tensorblock/ko-gemma-2-9b-it-GGUF/blob/main/ko-gemma-2-9b-it-Q2_K.gguf) | Q2_K | 3.
-| [ko-gemma-2-9b-it-Q3_K_S.gguf](https://huggingface.co/tensorblock/ko-gemma-2-9b-it-GGUF/blob/main/ko-gemma-2-9b-it-Q3_K_S.gguf) | Q3_K_S | 4.
-| [ko-gemma-2-9b-it-Q3_K_M.gguf](https://huggingface.co/tensorblock/ko-gemma-2-9b-it-GGUF/blob/main/ko-gemma-2-9b-it-Q3_K_M.gguf) | Q3_K_M | 4.
-| [ko-gemma-2-9b-it-Q3_K_L.gguf](https://huggingface.co/tensorblock/ko-gemma-2-9b-it-GGUF/blob/main/ko-gemma-2-9b-it-Q3_K_L.gguf) | Q3_K_L |
-| [ko-gemma-2-9b-it-Q4_0.gguf](https://huggingface.co/tensorblock/ko-gemma-2-9b-it-GGUF/blob/main/ko-gemma-2-9b-it-Q4_0.gguf) | Q4_0 | 5.
-| [ko-gemma-2-9b-it-Q4_K_S.gguf](https://huggingface.co/tensorblock/ko-gemma-2-9b-it-GGUF/blob/main/ko-gemma-2-9b-it-Q4_K_S.gguf) | Q4_K_S | 5.
-| [ko-gemma-2-9b-it-Q4_K_M.gguf](https://huggingface.co/tensorblock/ko-gemma-2-9b-it-GGUF/blob/main/ko-gemma-2-9b-it-Q4_K_M.gguf) | Q4_K_M | 5.
-| [ko-gemma-2-9b-it-Q5_0.gguf](https://huggingface.co/tensorblock/ko-gemma-2-9b-it-GGUF/blob/main/ko-gemma-2-9b-it-Q5_0.gguf) | Q5_0 | 6.
-| [ko-gemma-2-9b-it-Q5_K_S.gguf](https://huggingface.co/tensorblock/ko-gemma-2-9b-it-GGUF/blob/main/ko-gemma-2-9b-it-Q5_K_S.gguf) | Q5_K_S | 6.
-| [ko-gemma-2-9b-it-Q5_K_M.gguf](https://huggingface.co/tensorblock/ko-gemma-2-9b-it-GGUF/blob/main/ko-gemma-2-9b-it-Q5_K_M.gguf) | Q5_K_M | 6.
-| [ko-gemma-2-9b-it-Q6_K.gguf](https://huggingface.co/tensorblock/ko-gemma-2-9b-it-GGUF/blob/main/ko-gemma-2-9b-it-Q6_K.gguf) | Q6_K | 7.
-| [ko-gemma-2-9b-it-Q8_0.gguf](https://huggingface.co/tensorblock/ko-gemma-2-9b-it-GGUF/blob/main/ko-gemma-2-9b-it-Q8_0.gguf) | Q8_0 | 9.
 
 
 ## Downloading instruction
 ---
 library_name: transformers
+license: llama3
+language:
+- ko
+- en
 pipeline_tag: text-generation
+base_model: davidkim205/ko-gemma-2-9b-it
 tags:
 - TensorBlock
 - GGUF
 ---
 
 <div style="width: auto; margin-left: auto; margin-right: auto">
 
 </div>
 </div>
 
+## davidkim205/ko-gemma-2-9b-it - GGUF
 
+This repo contains GGUF format model files for [davidkim205/ko-gemma-2-9b-it](https://huggingface.co/davidkim205/ko-gemma-2-9b-it).
 
 The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4011](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
 <div style="text-align: left; margin: 20px 0;">
 <a href="https://tensorblock.co/waitlist/client" style="display: inline-block; padding: 10px 20px; background-color: #007bff; color: white; text-decoration: none; border-radius: 5px; font-weight: bold;">
 Run them on the TensorBlock client using your local machine ↗
 
 ## Prompt template
 
 ```
 <bos>{system_prompt}<start_of_turn>user
 {prompt}<end_of_turn>
 
 | Filename | Quant type | File Size | Description |
 | -------- | ---------- | --------- | ----------- |
+| [ko-gemma-2-9b-it-Q2_K.gguf](https://huggingface.co/tensorblock/ko-gemma-2-9b-it-GGUF/blob/main/ko-gemma-2-9b-it-Q2_K.gguf) | Q2_K | 3.805 GB | smallest, significant quality loss - not recommended for most purposes |
+| [ko-gemma-2-9b-it-Q3_K_S.gguf](https://huggingface.co/tensorblock/ko-gemma-2-9b-it-GGUF/blob/main/ko-gemma-2-9b-it-Q3_K_S.gguf) | Q3_K_S | 4.338 GB | very small, high quality loss |
+| [ko-gemma-2-9b-it-Q3_K_M.gguf](https://huggingface.co/tensorblock/ko-gemma-2-9b-it-GGUF/blob/main/ko-gemma-2-9b-it-Q3_K_M.gguf) | Q3_K_M | 4.762 GB | very small, high quality loss |
+| [ko-gemma-2-9b-it-Q3_K_L.gguf](https://huggingface.co/tensorblock/ko-gemma-2-9b-it-GGUF/blob/main/ko-gemma-2-9b-it-Q3_K_L.gguf) | Q3_K_L | 5.132 GB | small, substantial quality loss |
+| [ko-gemma-2-9b-it-Q4_0.gguf](https://huggingface.co/tensorblock/ko-gemma-2-9b-it-GGUF/blob/main/ko-gemma-2-9b-it-Q4_0.gguf) | Q4_0 | 5.443 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
+| [ko-gemma-2-9b-it-Q4_K_S.gguf](https://huggingface.co/tensorblock/ko-gemma-2-9b-it-GGUF/blob/main/ko-gemma-2-9b-it-Q4_K_S.gguf) | Q4_K_S | 5.479 GB | small, greater quality loss |
+| [ko-gemma-2-9b-it-Q4_K_M.gguf](https://huggingface.co/tensorblock/ko-gemma-2-9b-it-GGUF/blob/main/ko-gemma-2-9b-it-Q4_K_M.gguf) | Q4_K_M | 5.761 GB | medium, balanced quality - recommended |
+| [ko-gemma-2-9b-it-Q5_0.gguf](https://huggingface.co/tensorblock/ko-gemma-2-9b-it-GGUF/blob/main/ko-gemma-2-9b-it-Q5_0.gguf) | Q5_0 | 6.484 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
+| [ko-gemma-2-9b-it-Q5_K_S.gguf](https://huggingface.co/tensorblock/ko-gemma-2-9b-it-GGUF/blob/main/ko-gemma-2-9b-it-Q5_K_S.gguf) | Q5_K_S | 6.484 GB | large, low quality loss - recommended |
+| [ko-gemma-2-9b-it-Q5_K_M.gguf](https://huggingface.co/tensorblock/ko-gemma-2-9b-it-GGUF/blob/main/ko-gemma-2-9b-it-Q5_K_M.gguf) | Q5_K_M | 6.647 GB | large, very low quality loss - recommended |
+| [ko-gemma-2-9b-it-Q6_K.gguf](https://huggingface.co/tensorblock/ko-gemma-2-9b-it-GGUF/blob/main/ko-gemma-2-9b-it-Q6_K.gguf) | Q6_K | 7.589 GB | very large, extremely low quality loss |
+| [ko-gemma-2-9b-it-Q8_0.gguf](https://huggingface.co/tensorblock/ko-gemma-2-9b-it-GGUF/blob/main/ko-gemma-2-9b-it-Q8_0.gguf) | Q8_0 | 9.827 GB | very large, extremely low quality loss - not recommended |
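The File Size column appears to be the byte counts recorded in the Git LFS pointers elsewhere in this commit, expressed as decimal gigabytes (bytes / 10⁹) rounded to three places. A quick sketch checking that assumption against three of the pointer sizes:

```python
# Byte counts copied from the Git LFS pointer files in this commit,
# paired with the quant type shown in the table above.
sizes_bytes = {
    "Q2_K": 3805397792,
    "Q4_K_M": 5761057568,
    "Q8_0": 9827148576,
}

def to_gb(n_bytes: int) -> float:
    # Decimal gigabytes (10**9 bytes), matching the table's "GB" figures.
    return round(n_bytes / 1e9, 3)

for quant, n in sizes_bytes.items():
    print(quant, to_gb(n))
# -> Q2_K 3.805
# -> Q4_K_M 5.761
# -> Q8_0 9.827
```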
## Downloading instruction
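The body of this section is cut off in the diff view. As a sketch (an assumption, not the section's original text): a direct download link for any of the quant files follows the standard Hugging Face `resolve` URL pattern, i.e. the table's `/blob/` viewer links with `/resolve/` substituted:

```python
# Build a direct download URL for one of the quantized files listed above.
# The table links use /blob/ (the web viewer); /resolve/ serves the raw file.
REPO_ID = "tensorblock/ko-gemma-2-9b-it-GGUF"

def download_url(filename: str, revision: str = "main") -> str:
    return f"https://huggingface.co/{REPO_ID}/resolve/{revision}/{filename}"

print(download_url("ko-gemma-2-9b-it-Q4_K_M.gguf"))
# -> https://huggingface.co/tensorblock/ko-gemma-2-9b-it-GGUF/resolve/main/ko-gemma-2-9b-it-Q4_K_M.gguf
```

With the `huggingface_hub` package installed, `hf_hub_download(repo_id=REPO_ID, filename=...)` fetches and caches the same file.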
ko-gemma-2-9b-it-Q2_K.gguf
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:7b692a076416f046713e123c58257bdbda3838dd171cf0a1e0a7b70ff641a978
+size 3805397792
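Each `.gguf` change in this commit is a Git LFS pointer file, not the binary itself: three lines recording the spec version, the SHA-256 of the real payload, and its size in bytes. A minimal parsing sketch (`parse_lfs_pointer` is a hypothetical helper written for illustration, not part of any library):

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a Git LFS pointer file (version / oid / size lines)."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    algo, _, digest = fields["oid"].partition(":")  # e.g. "sha256:7b69..."
    return {
        "version": fields["version"],
        "oid_algo": algo,
        "oid": digest,
        "size": int(fields["size"]),
    }

# Pointer contents copied verbatim from the Q2_K entry in this commit.
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:7b692a076416f046713e123c58257bdbda3838dd171cf0a1e0a7b70ff641a978
size 3805397792
"""
info = parse_lfs_pointer(pointer)
print(info["oid_algo"], info["size"])  # -> sha256 3805397792
```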
ko-gemma-2-9b-it-Q3_K_L.gguf
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:72f527f55794124a1e16374e76fb88c416becf0a39f7e7dff7d5630711c2d7a2
+size 5132452640
ko-gemma-2-9b-it-Q3_K_M.gguf
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:aa935b4a776ea8b3fdde758c538b177f185e3eb7afbbb7aad4f66a42f845f0b0
+size 4761781024
ko-gemma-2-9b-it-Q3_K_S.gguf
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:a758b9983a4dd9435fdd97aa970476aa241cfe140c2d921871aad6e7d08bd131
+size 4337664800
ko-gemma-2-9b-it-Q4_0.gguf
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:1972b2c9e699ad02bab2c33bf96d6014a1f10d5bf2909d48562dd82fa0905185
+size 5443142432
ko-gemma-2-9b-it-Q4_K_M.gguf
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:b7b5d6c76494e960eb5ffa379748ff9eaea6729346147533fbe09c228cf7865d
+size 5761057568
ko-gemma-2-9b-it-Q4_K_S.gguf
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:0122db2ab6186c5cd6059f61cbcd3485c8e400142642049d32e7f4a4c124f7c3
+size 5478925088
ko-gemma-2-9b-it-Q5_0.gguf
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:5e716b8f5485c7a0d172d021131b308063db70f1492f36a9dd4dbbcb6aa8b90d
+size 6483591968
ko-gemma-2-9b-it-Q5_K_M.gguf
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:ce6fca7abf72cec5858e146c8ff5ee3818778c2790c892624832765ef7555bed
+size 6647366432
ko-gemma-2-9b-it-Q5_K_S.gguf
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:1bd3d9c621eaa1066cde78140a55f057b652eba687fc5a01d9a60cde8ce60587
+size 6483591968
ko-gemma-2-9b-it-Q6_K.gguf
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:9aaf00fbabac6e047be5c1027c026a1cb012ef06e9144976479d68d412305b0b
+size 7589069600
ko-gemma-2-9b-it-Q8_0.gguf
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:a4845f8deb76aa5b838cd7dc0abdf6f6fae0d806869e0b600c1756c793a971ef
+size 9827148576