morriszms committed · Commit 5a2ee70 · verified · 1 Parent(s): 11e3031

Upload folder using huggingface_hub
README.md CHANGED
@@ -86,18 +86,18 @@ The files were quantized using machines provided by [TensorBlock](https://tensor
 
 | Filename | Quant type | File Size | Description |
 | -------- | ---------- | --------- | ----------- |
-| [gemma-3-1b-it-Q2_K.gguf](https://huggingface.co/tensorblock/google_gemma-3-1b-it-GGUF/blob/main/gemma-3-1b-it-Q2_K.gguf) | Q2_K | 0.007 GB | smallest, significant quality loss - not recommended for most purposes |
-| [gemma-3-1b-it-Q3_K_S.gguf](https://huggingface.co/tensorblock/google_gemma-3-1b-it-GGUF/blob/main/gemma-3-1b-it-Q3_K_S.gguf) | Q3_K_S | 0.007 GB | very small, high quality loss |
-| [gemma-3-1b-it-Q3_K_M.gguf](https://huggingface.co/tensorblock/google_gemma-3-1b-it-GGUF/blob/main/gemma-3-1b-it-Q3_K_M.gguf) | Q3_K_M | 0.007 GB | very small, high quality loss |
-| [gemma-3-1b-it-Q3_K_L.gguf](https://huggingface.co/tensorblock/google_gemma-3-1b-it-GGUF/blob/main/gemma-3-1b-it-Q3_K_L.gguf) | Q3_K_L | 0.007 GB | small, substantial quality loss |
-| [gemma-3-1b-it-Q4_0.gguf](https://huggingface.co/tensorblock/google_gemma-3-1b-it-GGUF/blob/main/gemma-3-1b-it-Q4_0.gguf) | Q4_0 | 0.007 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
-| [gemma-3-1b-it-Q4_K_S.gguf](https://huggingface.co/tensorblock/google_gemma-3-1b-it-GGUF/blob/main/gemma-3-1b-it-Q4_K_S.gguf) | Q4_K_S | 0.007 GB | small, greater quality loss |
-| [gemma-3-1b-it-Q4_K_M.gguf](https://huggingface.co/tensorblock/google_gemma-3-1b-it-GGUF/blob/main/gemma-3-1b-it-Q4_K_M.gguf) | Q4_K_M | 0.007 GB | medium, balanced quality - recommended |
-| [gemma-3-1b-it-Q5_0.gguf](https://huggingface.co/tensorblock/google_gemma-3-1b-it-GGUF/blob/main/gemma-3-1b-it-Q5_0.gguf) | Q5_0 | 0.007 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
-| [gemma-3-1b-it-Q5_K_S.gguf](https://huggingface.co/tensorblock/google_gemma-3-1b-it-GGUF/blob/main/gemma-3-1b-it-Q5_K_S.gguf) | Q5_K_S | 0.007 GB | large, low quality loss - recommended |
-| [gemma-3-1b-it-Q5_K_M.gguf](https://huggingface.co/tensorblock/google_gemma-3-1b-it-GGUF/blob/main/gemma-3-1b-it-Q5_K_M.gguf) | Q5_K_M | 0.007 GB | large, very low quality loss - recommended |
-| [gemma-3-1b-it-Q6_K.gguf](https://huggingface.co/tensorblock/google_gemma-3-1b-it-GGUF/blob/main/gemma-3-1b-it-Q6_K.gguf) | Q6_K | 0.007 GB | very large, extremely low quality loss |
-| [gemma-3-1b-it-Q8_0.gguf](https://huggingface.co/tensorblock/google_gemma-3-1b-it-GGUF/blob/main/gemma-3-1b-it-Q8_0.gguf) | Q8_0 | 0.007 GB | very large, extremely low quality loss - not recommended |
+| [gemma-3-1b-it-Q2_K.gguf](https://huggingface.co/tensorblock/google_gemma-3-1b-it-GGUF/blob/main/gemma-3-1b-it-Q2_K.gguf) | Q2_K | 0.690 GB | smallest, significant quality loss - not recommended for most purposes |
+| [gemma-3-1b-it-Q3_K_S.gguf](https://huggingface.co/tensorblock/google_gemma-3-1b-it-GGUF/blob/main/gemma-3-1b-it-Q3_K_S.gguf) | Q3_K_S | 0.689 GB | very small, high quality loss |
+| [gemma-3-1b-it-Q3_K_M.gguf](https://huggingface.co/tensorblock/google_gemma-3-1b-it-GGUF/blob/main/gemma-3-1b-it-Q3_K_M.gguf) | Q3_K_M | 0.722 GB | very small, high quality loss |
+| [gemma-3-1b-it-Q3_K_L.gguf](https://huggingface.co/tensorblock/google_gemma-3-1b-it-GGUF/blob/main/gemma-3-1b-it-Q3_K_L.gguf) | Q3_K_L | 0.752 GB | small, substantial quality loss |
+| [gemma-3-1b-it-Q4_0.gguf](https://huggingface.co/tensorblock/google_gemma-3-1b-it-GGUF/blob/main/gemma-3-1b-it-Q4_0.gguf) | Q4_0 | 0.720 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
+| [gemma-3-1b-it-Q4_K_S.gguf](https://huggingface.co/tensorblock/google_gemma-3-1b-it-GGUF/blob/main/gemma-3-1b-it-Q4_K_S.gguf) | Q4_K_S | 0.781 GB | small, greater quality loss |
+| [gemma-3-1b-it-Q4_K_M.gguf](https://huggingface.co/tensorblock/google_gemma-3-1b-it-GGUF/blob/main/gemma-3-1b-it-Q4_K_M.gguf) | Q4_K_M | 0.806 GB | medium, balanced quality - recommended |
+| [gemma-3-1b-it-Q5_0.gguf](https://huggingface.co/tensorblock/google_gemma-3-1b-it-GGUF/blob/main/gemma-3-1b-it-Q5_0.gguf) | Q5_0 | 0.808 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
+| [gemma-3-1b-it-Q5_K_S.gguf](https://huggingface.co/tensorblock/google_gemma-3-1b-it-GGUF/blob/main/gemma-3-1b-it-Q5_K_S.gguf) | Q5_K_S | 0.836 GB | large, low quality loss - recommended |
+| [gemma-3-1b-it-Q5_K_M.gguf](https://huggingface.co/tensorblock/google_gemma-3-1b-it-GGUF/blob/main/gemma-3-1b-it-Q5_K_M.gguf) | Q5_K_M | 0.851 GB | large, very low quality loss - recommended |
+| [gemma-3-1b-it-Q6_K.gguf](https://huggingface.co/tensorblock/google_gemma-3-1b-it-GGUF/blob/main/gemma-3-1b-it-Q6_K.gguf) | Q6_K | 1.012 GB | very large, extremely low quality loss |
+| [gemma-3-1b-it-Q8_0.gguf](https://huggingface.co/tensorblock/google_gemma-3-1b-it-GGUF/blob/main/gemma-3-1b-it-Q8_0.gguf) | Q8_0 | 1.069 GB | very large, extremely low quality loss - not recommended |
 
 
 ## Downloading instruction
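The README change replaces the placeholder 0.007 GB figures with real file sizes. Those figures appear to match the updated LFS pointer `size` values (in bytes) expressed as decimal gigabytes to three places. A minimal sketch of that conversion, assuming decimal (SI) gigabytes; the `gb` helper is hypothetical and the byte counts are taken from this commit's pointers:

```python
def gb(num_bytes: int) -> str:
    """Format a byte count as decimal gigabytes, three decimal places."""
    return f"{num_bytes / 1_000_000_000:.3f} GB"

# Pointer sizes from this commit -> table's "File Size" column.
pointer_sizes = {
    "gemma-3-1b-it-Q2_K.gguf": 689_814_528,    # 0.690 GB
    "gemma-3-1b-it-Q4_K_M.gguf": 806_058_240,  # 0.806 GB
    "gemma-3-1b-it-Q8_0.gguf": 1_069_306_368,  # 1.069 GB
}

for name, size in pointer_sizes.items():
    print(f"{name}: {gb(size)}")
```

Using binary gibibytes (divide by 2**30) would give slightly smaller numbers, so the table evidently uses decimal units.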
gemma-3-1b-it-Q2_K.gguf CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:3b4431b9861ea8cd7adabc16e21e1d56923b7ebc54a18217652f42d2fcc34b8d
-size 6512960
+oid sha256:cb1965a0f31936821a3439c1fe956bcefdbeb4e2400f6b1820c84aa710fe8b22
+size 689814528
gemma-3-1b-it-Q3_K_L.gguf CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:61402c4bfae82abdf5fe5426fe7641114ff922cd8e2d6a453563124e71f53bb4
-size 6512960
+oid sha256:1350d10b430d34483dfa87e917a476a3d530d452cbd64a4636146558a4afe4e2
+size 751575552
gemma-3-1b-it-Q3_K_M.gguf CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:2a7c576e9ff8748c05ed638ef838f56122f89acb16851d1c226d2dfe4aa4b091
-size 6512960
+oid sha256:590af47b0f4d0a1e9211485a29b63497237e60e0b271c4fb4a158bddb19c9892
+size 722416128
gemma-3-1b-it-Q3_K_S.gguf CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:bcd1bfc5003c182a9e66538b254c89aa0401560feee367bbb65cc22ae41f1ee2
-size 6512960
+oid sha256:8082eac641bf7c627cbe547efec192cf541347f9ddf001487e1616cdea498e7e
+size 688856064
gemma-3-1b-it-Q4_0.gguf CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:ec97c0ceb4dcb2d281fab0bb9c636403708480deea0890d95fe40f97b6c3c791
-size 6512960
+oid sha256:02391f14b077692104ee78fc017d9fdc2773798964ee44cbc6f9d4e7be0f4add
+size 720425472
gemma-3-1b-it-Q4_K_M.gguf CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:9859272cea878b82962865d990f8a9c8fcac5c7b6efb042aa917044dcf3ddfd1
-size 6512960
+oid sha256:10bb04674cce0bc67c26701c66c3fdbd4c27ac2cede780646d89dd354b2bdbd8
+size 806058240
gemma-3-1b-it-Q4_K_S.gguf CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:4d17fdfa2be97d5c2cebbd5d55ffbc778edf394db9b5f8d191284b168a171715
-size 6512960
+oid sha256:4c0aed52830c098836b2507330e44b6db7abe50ff60547c6295db3825eb8cfa9
+size 780993024
gemma-3-1b-it-Q5_0.gguf CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:6f3907f3e77f13e1a8269a0e3ddcb93e9062c5280131b6ef86ef9e2ffedffc3d
-size 6512960
+oid sha256:b5d6056106b6845c0ca6b5ca304be7eb72d2548e72914195f43ad9fbd2f48772
+size 807645696
gemma-3-1b-it-Q5_K_M.gguf CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:2588ca775396d979b8d0070f62f7a26232a74b3b7bf5e168234b6334bf868401
-size 6512960
+oid sha256:4c4f5a156e330b4596e09307983dbcfc5e6f57dfd323f7e561b438722c693a56
+size 851345664
gemma-3-1b-it-Q5_K_S.gguf CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:3aa5ca5890ccc411878c51681c2403df5bef55a934408404f798d8e52348262f
-size 6512960
+oid sha256:35a55569f0223e86fb1c659ef4a361981aa84fe94da2d66cc379093ae221c011
+size 836399616
gemma-3-1b-it-Q6_K.gguf CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:79d4ac30aca411389fa6bd9d7d39f39c234d002c2142f82f703bfa222b4469a6
-size 6512960
+oid sha256:99eb286083dc55ed6f09807a9aae4094d37226de23f8342dbafb0c4df0d8bd6b
+size 1011738624
gemma-3-1b-it-Q8_0.gguf CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:e9f1e847b066cc2ad481b5fea7f5963a5c0de536b9d35af96025c86c3c740fce
-size 6512960
+oid sha256:1d42f7df50680d899c25adc8d198c6eecdb0c787fe80306f7eabb2a751526004
+size 1069306368
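Every file in this commit is a Git LFS pointer: three `key value` lines (`version`, `oid`, `size`), where `size` is the byte count of the actual blob. A minimal parsing sketch — `parse_lfs_pointer` is a hypothetical helper, not part of any library; the sample text is the post-commit Q8_0 pointer from this diff:

```python
def parse_lfs_pointer(text: str) -> dict:
    """Split a Git LFS pointer file into a dict of its key/value lines."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

# Post-commit pointer for gemma-3-1b-it-Q8_0.gguf (from this diff).
pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:1d42f7df50680d899c25adc8d198c6eecdb0c787fe80306f7eabb2a751526004
size 1069306368
"""

info = parse_lfs_pointer(pointer)
print(info["oid"])        # the blob's SHA-256, prefixed "sha256:"
print(int(info["size"]))  # blob size in bytes
```

The uniform pre-commit `size 6512960` across all twelve pointers suggests the earlier upload stored the same placeholder blob for every quant; this commit replaces them with the real model blobs.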