morriszms committed
Commit 8fb5c98 · verified · 1 Parent(s): 5e80e4c

Upload folder using huggingface_hub
README.md CHANGED
@@ -1,16 +1,16 @@
  ---
- language:
- - en
- library_name: transformers
  license: gemma
  tags:
- - unsloth
- - transformers
- - gemma2
- - gemma
  - TensorBlock
  - GGUF
- base_model: unsloth/gemma-2-2b
  ---

  <div style="width: auto; margin-left: auto; margin-right: auto">
@@ -24,13 +24,12 @@ base_model: unsloth/gemma-2-2b
  </div>
  </div>

- ## unsloth/gemma-2-2b - GGUF

- This repo contains GGUF format model files for [unsloth/gemma-2-2b](https://huggingface.co/unsloth/gemma-2-2b).

  The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4011](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).

-
  <div style="text-align: left; margin: 20px 0;">
  <a href="https://tensorblock.co/waitlist/client" style="display: inline-block; padding: 10px 20px; background-color: #007bff; color: white; text-decoration: none; border-radius: 5px; font-weight: bold;">
  Run them on the TensorBlock client using your local machine ↗
@@ -39,7 +38,6 @@ The files were quantized using machines provided by [TensorBlock](https://tensor

  ## Prompt template

-
  ```

  ```
@@ -48,18 +46,18 @@ The files were quantized using machines provided by [TensorBlock](https://tensor

  | Filename | Quant type | File Size | Description |
  | -------- | ---------- | --------- | ----------- |
- | [gemma-2-2b-Q2_K.gguf](https://huggingface.co/tensorblock/gemma-2-2b-GGUF/blob/main/gemma-2-2b-Q2_K.gguf) | Q2_K | 1.145 GB | smallest, significant quality loss - not recommended for most purposes |
- | [gemma-2-2b-Q3_K_S.gguf](https://huggingface.co/tensorblock/gemma-2-2b-GGUF/blob/main/gemma-2-2b-Q3_K_S.gguf) | Q3_K_S | 1.267 GB | very small, high quality loss |
- | [gemma-2-2b-Q3_K_M.gguf](https://huggingface.co/tensorblock/gemma-2-2b-GGUF/blob/main/gemma-2-2b-Q3_K_M.gguf) | Q3_K_M | 1.361 GB | very small, high quality loss |
- | [gemma-2-2b-Q3_K_L.gguf](https://huggingface.co/tensorblock/gemma-2-2b-GGUF/blob/main/gemma-2-2b-Q3_K_L.gguf) | Q3_K_L | 1.444 GB | small, substantial quality loss |
- | [gemma-2-2b-Q4_0.gguf](https://huggingface.co/tensorblock/gemma-2-2b-GGUF/blob/main/gemma-2-2b-Q4_0.gguf) | Q4_0 | 1.518 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
- | [gemma-2-2b-Q4_K_S.gguf](https://huggingface.co/tensorblock/gemma-2-2b-GGUF/blob/main/gemma-2-2b-Q4_K_S.gguf) | Q4_K_S | 1.526 GB | small, greater quality loss |
- | [gemma-2-2b-Q4_K_M.gguf](https://huggingface.co/tensorblock/gemma-2-2b-GGUF/blob/main/gemma-2-2b-Q4_K_M.gguf) | Q4_K_M | 1.591 GB | medium, balanced quality - recommended |
- | [gemma-2-2b-Q5_0.gguf](https://huggingface.co/tensorblock/gemma-2-2b-GGUF/blob/main/gemma-2-2b-Q5_0.gguf) | Q5_0 | 1.753 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
- | [gemma-2-2b-Q5_K_S.gguf](https://huggingface.co/tensorblock/gemma-2-2b-GGUF/blob/main/gemma-2-2b-Q5_K_S.gguf) | Q5_K_S | 1.753 GB | large, low quality loss - recommended |
- | [gemma-2-2b-Q5_K_M.gguf](https://huggingface.co/tensorblock/gemma-2-2b-GGUF/blob/main/gemma-2-2b-Q5_K_M.gguf) | Q5_K_M | 1.791 GB | large, very low quality loss - recommended |
- | [gemma-2-2b-Q6_K.gguf](https://huggingface.co/tensorblock/gemma-2-2b-GGUF/blob/main/gemma-2-2b-Q6_K.gguf) | Q6_K | 2.004 GB | very large, extremely low quality loss |
- | [gemma-2-2b-Q8_0.gguf](https://huggingface.co/tensorblock/gemma-2-2b-GGUF/blob/main/gemma-2-2b-Q8_0.gguf) | Q8_0 | 2.593 GB | very large, extremely low quality loss - not recommended |


  ## Downloading instruction
 
  ---
  license: gemma
+ library_name: transformers
+ pipeline_tag: text-generation
+ extra_gated_heading: Access Gemma on Hugging Face
+ extra_gated_prompt: To access Gemma on Hugging Face, you’re required to review and
+ agree to Google’s usage license. To do this, please ensure you’re logged in to Hugging
+ Face and click below. Requests are processed immediately.
+ extra_gated_button_content: Acknowledge license
+ base_model: google/gemma-2-2b
  tags:
  - TensorBlock
  - GGUF
  ---

  <div style="width: auto; margin-left: auto; margin-right: auto">

  </div>
  </div>

+ ## google/gemma-2-2b - GGUF

+ This repo contains GGUF format model files for [google/gemma-2-2b](https://huggingface.co/google/gemma-2-2b).

  The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4011](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).

  <div style="text-align: left; margin: 20px 0;">
  <a href="https://tensorblock.co/waitlist/client" style="display: inline-block; padding: 10px 20px; background-color: #007bff; color: white; text-decoration: none; border-radius: 5px; font-weight: bold;">
  Run them on the TensorBlock client using your local machine ↗

  ## Prompt template

  ```

  ```
 

  | Filename | Quant type | File Size | Description |
  | -------- | ---------- | --------- | ----------- |
+ | [gemma-2-2b-Q2_K.gguf](https://huggingface.co/tensorblock/gemma-2-2b-GGUF/blob/main/gemma-2-2b-Q2_K.gguf) | Q2_K | 1.230 GB | smallest, significant quality loss - not recommended for most purposes |
+ | [gemma-2-2b-Q3_K_S.gguf](https://huggingface.co/tensorblock/gemma-2-2b-GGUF/blob/main/gemma-2-2b-Q3_K_S.gguf) | Q3_K_S | 1.361 GB | very small, high quality loss |
+ | [gemma-2-2b-Q3_K_M.gguf](https://huggingface.co/tensorblock/gemma-2-2b-GGUF/blob/main/gemma-2-2b-Q3_K_M.gguf) | Q3_K_M | 1.462 GB | very small, high quality loss |
+ | [gemma-2-2b-Q3_K_L.gguf](https://huggingface.co/tensorblock/gemma-2-2b-GGUF/blob/main/gemma-2-2b-Q3_K_L.gguf) | Q3_K_L | 1.550 GB | small, substantial quality loss |
+ | [gemma-2-2b-Q4_0.gguf](https://huggingface.co/tensorblock/gemma-2-2b-GGUF/blob/main/gemma-2-2b-Q4_0.gguf) | Q4_0 | 1.630 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
+ | [gemma-2-2b-Q4_K_S.gguf](https://huggingface.co/tensorblock/gemma-2-2b-GGUF/blob/main/gemma-2-2b-Q4_K_S.gguf) | Q4_K_S | 1.639 GB | small, greater quality loss |
+ | [gemma-2-2b-Q4_K_M.gguf](https://huggingface.co/tensorblock/gemma-2-2b-GGUF/blob/main/gemma-2-2b-Q4_K_M.gguf) | Q4_K_M | 1.709 GB | medium, balanced quality - recommended |
+ | [gemma-2-2b-Q5_0.gguf](https://huggingface.co/tensorblock/gemma-2-2b-GGUF/blob/main/gemma-2-2b-Q5_0.gguf) | Q5_0 | 1.883 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
+ | [gemma-2-2b-Q5_K_S.gguf](https://huggingface.co/tensorblock/gemma-2-2b-GGUF/blob/main/gemma-2-2b-Q5_K_S.gguf) | Q5_K_S | 1.883 GB | large, low quality loss - recommended |
+ | [gemma-2-2b-Q5_K_M.gguf](https://huggingface.co/tensorblock/gemma-2-2b-GGUF/blob/main/gemma-2-2b-Q5_K_M.gguf) | Q5_K_M | 1.923 GB | large, very low quality loss - recommended |
+ | [gemma-2-2b-Q6_K.gguf](https://huggingface.co/tensorblock/gemma-2-2b-GGUF/blob/main/gemma-2-2b-Q6_K.gguf) | Q6_K | 2.151 GB | very large, extremely low quality loss |
+ | [gemma-2-2b-Q8_0.gguf](https://huggingface.co/tensorblock/gemma-2-2b-GGUF/blob/main/gemma-2-2b-Q8_0.gguf) | Q8_0 | 2.784 GB | very large, extremely low quality loss - not recommended |


  ## Downloading instruction
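
The table above links each quant file through the web UI. As an aside, a direct-download URL can be built from the repo id and the filename pattern used in the table (a minimal sketch, assuming the standard Hugging Face `resolve/main` download-URL layout; the `gguf_url` helper is illustrative, not part of the repo):

```python
# Build the direct-download URL for one of the quant files listed in the table.
# Repo id and filename pattern come from the README above; "resolve/main" is
# the conventional Hugging Face direct-download path (the table links use
# "blob/main", the web-view path).
REPO_ID = "tensorblock/gemma-2-2b-GGUF"

def gguf_url(quant: str, base: str = "gemma-2-2b") -> str:
    """Return the direct-download URL for a quant type such as "Q4_K_M"."""
    return f"https://huggingface.co/{REPO_ID}/resolve/main/{base}-{quant}.gguf"

print(gguf_url("Q4_K_M"))
# https://huggingface.co/tensorblock/gemma-2-2b-GGUF/resolve/main/gemma-2-2b-Q4_K_M.gguf
```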
gemma-2-2b-Q2_K.gguf CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:c1d2d326a65e2b9d76bad8f742b807c793236b088ce9d5de624c139d5cfa8a9c
- size 1229829152
+ oid sha256:4d57f2c8ef03a95b8e86f7e839012e141915e12ca859b5992c6139e4391cbfb6
+ size 1229829056
gemma-2-2b-Q3_K_L.gguf CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:42f426f88a12ab36b4a23bb40bb31f4761458b6f04f871dcbf649c5cf8c55c3c
- size 1550435360
+ oid sha256:333733df9d8d7d78aaa4af7e05f7db1b55b57e4dd82a21b4a3d47c9de912cbd0
+ size 1550435264
gemma-2-2b-Q3_K_M.gguf CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:278c2dde9f620920b72c3bccc19d37afbf8a58086def05a6656f0914ae7b7e52
- size 1461666848
+ oid sha256:74bf7b691e0f9d7cc35c8922abbfe92b7a282bf98af99809652246bc80e54d2f
+ size 1461666752
gemma-2-2b-Q3_K_S.gguf CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:ba5602ec975eab385a2d410ddd76cadba36716fcdc0dc1cd8522378c97a9e894
- size 1360659488
+ oid sha256:8658c195cdb83e68c0f3fccede29532116b84c85b1d8ea41b77e8624c4585d21
+ size 1360659392
gemma-2-2b-Q4_0.gguf CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:61734ed61135a46bc779737f3b20f3351009d4807614eea3e1fa4bdf36e4cceb
- size 1629508640
+ oid sha256:75e4705213920f8b893f9ab83c278fd15a9a833fe8d12b644180d0478f79ff68
+ size 1629508544
gemma-2-2b-Q4_K_M.gguf CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:97cc411f0280a44ea8e275aa440b7a3b13dd8dee341ce6dcb328741ab9c8c804
- size 1708581920
+ oid sha256:9854dbb88aaff799a3e306cae0c01543ed3ba4be98bac9e2c8a65d20aead3201
+ size 1708581824
gemma-2-2b-Q4_K_S.gguf CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:e7b50148edd2996d728a1cd18c630f00c069cf8ea7786c5b8f7aff6f1e23d68a
- size 1638650912
+ oid sha256:692579218ebfe8a576c16f9d23c75e1904e73e8cadf6e3bd998efcec554bc22e
+ size 1638650816
gemma-2-2b-Q5_0.gguf CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:8e22222e2fb7b034b9e103f4867cd002d75a7bbc97d954b1840ca312813ed9be
- size 1882543136
+ oid sha256:2f9e00c51594b8633540711c06e35616d9db103f97c3de361f03c22584d88e71
+ size 1882543040
gemma-2-2b-Q5_K_M.gguf CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:3f3cb9076931b2c887a1d336f3046513c8a6ee6c0a363203c412031f317c44df
- size 1923277856
+ oid sha256:cabf95abbe2759fbac76466e8c56de516a7c7d7711aacfc4c6b5866e73f52e8f
+ size 1923277760
gemma-2-2b-Q5_K_S.gguf CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:547146a53a5eaa9c6e5e0548536a46e97cf0153e959d70e5479adcf91782674a
- size 1882543136
+ oid sha256:9e6c1681d487dcd8a4453c2f6d2cdfc9295e9e7ae73d9926ace4966a5ffbb74d
+ size 1882543040
gemma-2-2b-Q6_K.gguf CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:49bb3d17a904250ce9c7922e7b9df8aa33dc92973845e504932a36d05fb17d3c
- size 2151392288
+ oid sha256:1d6c70012e370cc7440febdffcafd093960e639f0a00a6f74064b75afe6d054a
+ size 2151392192
gemma-2-2b-Q8_0.gguf CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:01e933425f8ca55608a8ce5af163399a687413d900a05ce9250ff24b2844768d
- size 2784494624
+ oid sha256:7f091f9022e89e7ff77d67cd1870571e6c7f7fe243c76285a7aedbef9922a083
+ size 2784494528
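
Each `.gguf` entry above is a Git LFS pointer file with the three-line `version` / `oid` / `size` format shown in the diffs. A minimal parser for that format (a sketch; `parse_lfs_pointer` is an illustrative helper, using the Q8_0 pointer from this commit as input):

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a Git LFS pointer file: one "key value" pair per line."""
    fields = dict(line.split(" ", 1) for line in text.strip().splitlines())
    return {
        "version": fields["version"],
        "sha256": fields["oid"].removeprefix("sha256:"),
        "size": int(fields["size"]),  # size of the real object, in bytes
    }

# The new Q8_0 pointer from this commit:
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:7f091f9022e89e7ff77d67cd1870571e6c7f7fe243c76285a7aedbef9922a083
size 2784494528"""

info = parse_lfs_pointer(pointer)
print(info["size"])  # 2784494528
```

The parsed `size` and `sha256` can then be checked against a downloaded file to verify its integrity.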