Upload folder using huggingface_hub
- README.md +12 -12
- gemma-3-1b-it-Q2_K.gguf +2 -2
- gemma-3-1b-it-Q3_K_L.gguf +2 -2
- gemma-3-1b-it-Q3_K_M.gguf +2 -2
- gemma-3-1b-it-Q3_K_S.gguf +2 -2
- gemma-3-1b-it-Q4_0.gguf +2 -2
- gemma-3-1b-it-Q4_K_M.gguf +2 -2
- gemma-3-1b-it-Q4_K_S.gguf +2 -2
- gemma-3-1b-it-Q5_0.gguf +2 -2
- gemma-3-1b-it-Q5_K_M.gguf +2 -2
- gemma-3-1b-it-Q5_K_S.gguf +2 -2
- gemma-3-1b-it-Q6_K.gguf +2 -2
- gemma-3-1b-it-Q8_0.gguf +2 -2
README.md
CHANGED

@@ -86,18 +86,18 @@ The files were quantized using machines provided by [TensorBlock](https://tensor
 
 | Filename | Quant type | File Size | Description |
 | -------- | ---------- | --------- | ----------- |
-| [gemma-3-1b-it-Q2_K.gguf](https://huggingface.co/tensorblock/google_gemma-3-1b-it-GGUF/blob/main/gemma-3-1b-it-Q2_K.gguf) | Q2_K | 0.
-| [gemma-3-1b-it-Q3_K_S.gguf](https://huggingface.co/tensorblock/google_gemma-3-1b-it-GGUF/blob/main/gemma-3-1b-it-Q3_K_S.gguf) | Q3_K_S | 0.
-| [gemma-3-1b-it-Q3_K_M.gguf](https://huggingface.co/tensorblock/google_gemma-3-1b-it-GGUF/blob/main/gemma-3-1b-it-Q3_K_M.gguf) | Q3_K_M | 0.
-| [gemma-3-1b-it-Q3_K_L.gguf](https://huggingface.co/tensorblock/google_gemma-3-1b-it-GGUF/blob/main/gemma-3-1b-it-Q3_K_L.gguf) | Q3_K_L | 0.
-| [gemma-3-1b-it-Q4_0.gguf](https://huggingface.co/tensorblock/google_gemma-3-1b-it-GGUF/blob/main/gemma-3-1b-it-Q4_0.gguf) | Q4_0 | 0.
-| [gemma-3-1b-it-Q4_K_S.gguf](https://huggingface.co/tensorblock/google_gemma-3-1b-it-GGUF/blob/main/gemma-3-1b-it-Q4_K_S.gguf) | Q4_K_S | 0.
-| [gemma-3-1b-it-Q4_K_M.gguf](https://huggingface.co/tensorblock/google_gemma-3-1b-it-GGUF/blob/main/gemma-3-1b-it-Q4_K_M.gguf) | Q4_K_M | 0.
-| [gemma-3-1b-it-Q5_0.gguf](https://huggingface.co/tensorblock/google_gemma-3-1b-it-GGUF/blob/main/gemma-3-1b-it-Q5_0.gguf) | Q5_0 | 0.
-| [gemma-3-1b-it-Q5_K_S.gguf](https://huggingface.co/tensorblock/google_gemma-3-1b-it-GGUF/blob/main/gemma-3-1b-it-Q5_K_S.gguf) | Q5_K_S | 0.
-| [gemma-3-1b-it-Q5_K_M.gguf](https://huggingface.co/tensorblock/google_gemma-3-1b-it-GGUF/blob/main/gemma-3-1b-it-Q5_K_M.gguf) | Q5_K_M | 0.
-| [gemma-3-1b-it-Q6_K.gguf](https://huggingface.co/tensorblock/google_gemma-3-1b-it-GGUF/blob/main/gemma-3-1b-it-Q6_K.gguf) | Q6_K |
-| [gemma-3-1b-it-Q8_0.gguf](https://huggingface.co/tensorblock/google_gemma-3-1b-it-GGUF/blob/main/gemma-3-1b-it-Q8_0.gguf) | Q8_0 |
+| [gemma-3-1b-it-Q2_K.gguf](https://huggingface.co/tensorblock/google_gemma-3-1b-it-GGUF/blob/main/gemma-3-1b-it-Q2_K.gguf) | Q2_K | 0.690 GB | smallest, significant quality loss - not recommended for most purposes |
+| [gemma-3-1b-it-Q3_K_S.gguf](https://huggingface.co/tensorblock/google_gemma-3-1b-it-GGUF/blob/main/gemma-3-1b-it-Q3_K_S.gguf) | Q3_K_S | 0.689 GB | very small, high quality loss |
+| [gemma-3-1b-it-Q3_K_M.gguf](https://huggingface.co/tensorblock/google_gemma-3-1b-it-GGUF/blob/main/gemma-3-1b-it-Q3_K_M.gguf) | Q3_K_M | 0.722 GB | very small, high quality loss |
+| [gemma-3-1b-it-Q3_K_L.gguf](https://huggingface.co/tensorblock/google_gemma-3-1b-it-GGUF/blob/main/gemma-3-1b-it-Q3_K_L.gguf) | Q3_K_L | 0.752 GB | small, substantial quality loss |
+| [gemma-3-1b-it-Q4_0.gguf](https://huggingface.co/tensorblock/google_gemma-3-1b-it-GGUF/blob/main/gemma-3-1b-it-Q4_0.gguf) | Q4_0 | 0.720 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
+| [gemma-3-1b-it-Q4_K_S.gguf](https://huggingface.co/tensorblock/google_gemma-3-1b-it-GGUF/blob/main/gemma-3-1b-it-Q4_K_S.gguf) | Q4_K_S | 0.781 GB | small, greater quality loss |
+| [gemma-3-1b-it-Q4_K_M.gguf](https://huggingface.co/tensorblock/google_gemma-3-1b-it-GGUF/blob/main/gemma-3-1b-it-Q4_K_M.gguf) | Q4_K_M | 0.806 GB | medium, balanced quality - recommended |
+| [gemma-3-1b-it-Q5_0.gguf](https://huggingface.co/tensorblock/google_gemma-3-1b-it-GGUF/blob/main/gemma-3-1b-it-Q5_0.gguf) | Q5_0 | 0.808 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
+| [gemma-3-1b-it-Q5_K_S.gguf](https://huggingface.co/tensorblock/google_gemma-3-1b-it-GGUF/blob/main/gemma-3-1b-it-Q5_K_S.gguf) | Q5_K_S | 0.836 GB | large, low quality loss - recommended |
+| [gemma-3-1b-it-Q5_K_M.gguf](https://huggingface.co/tensorblock/google_gemma-3-1b-it-GGUF/blob/main/gemma-3-1b-it-Q5_K_M.gguf) | Q5_K_M | 0.851 GB | large, very low quality loss - recommended |
+| [gemma-3-1b-it-Q6_K.gguf](https://huggingface.co/tensorblock/google_gemma-3-1b-it-GGUF/blob/main/gemma-3-1b-it-Q6_K.gguf) | Q6_K | 1.012 GB | very large, extremely low quality loss |
+| [gemma-3-1b-it-Q8_0.gguf](https://huggingface.co/tensorblock/google_gemma-3-1b-it-GGUF/blob/main/gemma-3-1b-it-Q8_0.gguf) | Q8_0 | 1.069 GB | very large, extremely low quality loss - not recommended |
 
 
 ## Downloading instruction
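The README table above links each quant to its human-readable `blob` page on the Hub. As a minimal sketch (assuming the Hub's conventional `/resolve/` endpoint for raw file content and the `main` branch used throughout this commit), a direct-download URL for any file in the table can be built like this; in practice, `huggingface_hub.hf_hub_download(repo_id=..., filename=...)` fetches the same file with caching:

```python
def gguf_download_url(repo_id: str, filename: str, revision: str = "main") -> str:
    """Build a direct-download URL for a file hosted in a Hugging Face repo.

    The table links use /blob/ (HTML preview pages); raw bytes are served
    from the /resolve/ endpoint instead. `revision` is assumed to be a
    branch name, tag, or commit hash.
    """
    return f"https://huggingface.co/{repo_id}/resolve/{revision}/{filename}"


# Example: the recommended Q4_K_M quant from the table above.
print(gguf_download_url("tensorblock/google_gemma-3-1b-it-GGUF",
                        "gemma-3-1b-it-Q4_K_M.gguf"))
```

This is only URL construction; it performs no network I/O, so it works the same whether the file is later fetched with `curl`, `wget`, or the `huggingface_hub` library.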
gemma-3-1b-it-Q2_K.gguf
CHANGED

@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:cb1965a0f31936821a3439c1fe956bcefdbeb4e2400f6b1820c84aa710fe8b22
+size 689814528
gemma-3-1b-it-Q3_K_L.gguf
CHANGED

@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:1350d10b430d34483dfa87e917a476a3d530d452cbd64a4636146558a4afe4e2
+size 751575552
gemma-3-1b-it-Q3_K_M.gguf
CHANGED

@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:590af47b0f4d0a1e9211485a29b63497237e60e0b271c4fb4a158bddb19c9892
+size 722416128
gemma-3-1b-it-Q3_K_S.gguf
CHANGED

@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:8082eac641bf7c627cbe547efec192cf541347f9ddf001487e1616cdea498e7e
+size 688856064
gemma-3-1b-it-Q4_0.gguf
CHANGED

@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:02391f14b077692104ee78fc017d9fdc2773798964ee44cbc6f9d4e7be0f4add
+size 720425472
gemma-3-1b-it-Q4_K_M.gguf
CHANGED

@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:10bb04674cce0bc67c26701c66c3fdbd4c27ac2cede780646d89dd354b2bdbd8
+size 806058240
gemma-3-1b-it-Q4_K_S.gguf
CHANGED

@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:4c0aed52830c098836b2507330e44b6db7abe50ff60547c6295db3825eb8cfa9
+size 780993024
gemma-3-1b-it-Q5_0.gguf
CHANGED

@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:b5d6056106b6845c0ca6b5ca304be7eb72d2548e72914195f43ad9fbd2f48772
+size 807645696
gemma-3-1b-it-Q5_K_M.gguf
CHANGED

@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:4c4f5a156e330b4596e09307983dbcfc5e6f57dfd323f7e561b438722c693a56
+size 851345664
gemma-3-1b-it-Q5_K_S.gguf
CHANGED

@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:35a55569f0223e86fb1c659ef4a361981aa84fe94da2d66cc379093ae221c011
+size 836399616
gemma-3-1b-it-Q6_K.gguf
CHANGED

@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:99eb286083dc55ed6f09807a9aae4094d37226de23f8342dbafb0c4df0d8bd6b
+size 1011738624
gemma-3-1b-it-Q8_0.gguf
CHANGED

@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:1d42f7df50680d899c25adc8d198c6eecdb0c787fe80306f7eabb2a751526004
+size 1069306368
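The `.gguf` entries in this commit are git-lfs pointer files, not the model weights themselves: each is three lines — a spec version, an `oid sha256:<hex>` digest, and the tracked file's size in bytes. A minimal sketch of parsing that format (the `parse_lfs_pointer` helper is hypothetical, not part of git-lfs or huggingface_hub):

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a git-lfs pointer file into its version, sha256 digest, and byte size.

    Assumes the simple key/value layout seen in the diffs above:
        version https://git-lfs.github.com/spec/v1
        oid sha256:<64 hex chars>
        size <bytes>
    """
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return {
        "version": fields["version"],
        "sha256": fields["oid"].removeprefix("sha256:"),
        "size_bytes": int(fields["size"]),
    }


# The Q2_K pointer from this commit; its size corresponds to the
# 0.690 GB figure in the README table.
pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:cb1965a0f31936821a3439c1fe956bcefdbeb4e2400f6b1820c84aa710fe8b22
size 689814528
"""
info = parse_lfs_pointer(pointer)
print(info["sha256"], info["size_bytes"])
```

Reading the pointer is a cheap way to check a quant's download size (or verify an already-downloaded file's sha256) without fetching the full weights.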