bartowski committed · Commit 479752a · verified · 1 Parent(s): 96a73b4

Llamacpp quants

.gitattributes CHANGED
@@ -33,3 +33,19 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+mamba-2.8b-hf-IQ3_M.gguf filter=lfs diff=lfs merge=lfs -text
+mamba-2.8b-hf-IQ3_S.gguf filter=lfs diff=lfs merge=lfs -text
+mamba-2.8b-hf-IQ4_NL.gguf filter=lfs diff=lfs merge=lfs -text
+mamba-2.8b-hf-IQ4_XS.gguf filter=lfs diff=lfs merge=lfs -text
+mamba-2.8b-hf-Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
+mamba-2.8b-hf-Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
+mamba-2.8b-hf-Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+mamba-2.8b-hf-Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+mamba-2.8b-hf-Q4_0.gguf filter=lfs diff=lfs merge=lfs -text
+mamba-2.8b-hf-Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+mamba-2.8b-hf-Q4_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+mamba-2.8b-hf-Q5_0.gguf filter=lfs diff=lfs merge=lfs -text
+mamba-2.8b-hf-Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+mamba-2.8b-hf-Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+mamba-2.8b-hf-Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
+mamba-2.8b-hf-Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,35 @@
+---
+library_name: transformers
+tags: []
+quantized_by: bartowski
+pipeline_tag: text-generation
+---
+
+## Llamacpp Quantizations of mamba-2.8b-hf
+
+Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b2536">b2536</a> for quantization.
+
+Original model: https://huggingface.co/state-spaces/mamba-2.8b-hf
+
+Download a file (not the whole branch) from below:
+
+| Filename | Quant type | File Size | Description |
+| -------- | ---------- | --------- | ----------- |
+| [mamba-2.8b-hf-Q8_0.gguf](https://huggingface.co/bartowski/mamba-2.8b-hf-GGUF/blob/main/mamba-2.8b-hf-Q8_0.gguf) | Q8_0 | 3.30GB | Extremely high quality, generally unneeded but max available quant. |
+| [mamba-2.8b-hf-Q6_K.gguf](https://huggingface.co/bartowski/mamba-2.8b-hf-GGUF/blob/main/mamba-2.8b-hf-Q6_K.gguf) | Q6_K | 2.66GB | Very high quality, near perfect, *recommended*. |
+| [mamba-2.8b-hf-Q5_K_M.gguf](https://huggingface.co/bartowski/mamba-2.8b-hf-GGUF/blob/main/mamba-2.8b-hf-Q5_K_M.gguf) | Q5_K_M | 2.32GB | High quality, very usable. |
+| [mamba-2.8b-hf-Q5_K_S.gguf](https://huggingface.co/bartowski/mamba-2.8b-hf-GGUF/blob/main/mamba-2.8b-hf-Q5_K_S.gguf) | Q5_K_S | 2.32GB | High quality, very usable. |
+| [mamba-2.8b-hf-Q5_0.gguf](https://huggingface.co/bartowski/mamba-2.8b-hf-GGUF/blob/main/mamba-2.8b-hf-Q5_0.gguf) | Q5_0 | 2.32GB | High quality, older format, generally not recommended. |
+| [mamba-2.8b-hf-Q4_K_M.gguf](https://huggingface.co/bartowski/mamba-2.8b-hf-GGUF/blob/main/mamba-2.8b-hf-Q4_K_M.gguf) | Q4_K_M | 2.01GB | Good quality, uses about 4.83 bits per weight. |
+| [mamba-2.8b-hf-Q4_K_S.gguf](https://huggingface.co/bartowski/mamba-2.8b-hf-GGUF/blob/main/mamba-2.8b-hf-Q4_K_S.gguf) | Q4_K_S | 2.01GB | Slightly lower quality with small space savings. |
+| [mamba-2.8b-hf-IQ4_NL.gguf](https://huggingface.co/bartowski/mamba-2.8b-hf-GGUF/blob/main/mamba-2.8b-hf-IQ4_NL.gguf) | IQ4_NL | 2.01GB | Decent quality, similar to Q4_K_S, new method of quantization. |
+| [mamba-2.8b-hf-IQ4_XS.gguf](https://huggingface.co/bartowski/mamba-2.8b-hf-GGUF/blob/main/mamba-2.8b-hf-IQ4_XS.gguf) | IQ4_XS | 1.93GB | Decent quality, new method with similar performance to Q4. |
+| [mamba-2.8b-hf-Q4_0.gguf](https://huggingface.co/bartowski/mamba-2.8b-hf-GGUF/blob/main/mamba-2.8b-hf-Q4_0.gguf) | Q4_0 | 2.01GB | Decent quality, older format, generally not recommended. |
+| [mamba-2.8b-hf-Q3_K_L.gguf](https://huggingface.co/bartowski/mamba-2.8b-hf-GGUF/blob/main/mamba-2.8b-hf-Q3_K_L.gguf) | Q3_K_L | 1.68GB | Lower quality but usable, good for low RAM availability. |
+| [mamba-2.8b-hf-Q3_K_M.gguf](https://huggingface.co/bartowski/mamba-2.8b-hf-GGUF/blob/main/mamba-2.8b-hf-Q3_K_M.gguf) | Q3_K_M | 1.68GB | Even lower quality. |
+| [mamba-2.8b-hf-IQ3_M.gguf](https://huggingface.co/bartowski/mamba-2.8b-hf-GGUF/blob/main/mamba-2.8b-hf-IQ3_M.gguf) | IQ3_M | 1.68GB | Medium-low quality, new method with decent performance. |
+| [mamba-2.8b-hf-IQ3_S.gguf](https://huggingface.co/bartowski/mamba-2.8b-hf-GGUF/blob/main/mamba-2.8b-hf-IQ3_S.gguf) | IQ3_S | 1.68GB | Lower quality, new method with decent performance, recommended over Q3 quants. |
+| [mamba-2.8b-hf-Q3_K_S.gguf](https://huggingface.co/bartowski/mamba-2.8b-hf-GGUF/blob/main/mamba-2.8b-hf-Q3_K_S.gguf) | Q3_K_S | 1.68GB | Low quality, not recommended. |
+| [mamba-2.8b-hf-Q2_K.gguf](https://huggingface.co/bartowski/mamba-2.8b-hf-GGUF/blob/main/mamba-2.8b-hf-Q2_K.gguf) | Q2_K | 1.42GB | Extremely low quality, *not* recommended. |
+
+Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
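The README's advice to download a single file rather than the whole branch can be sketched in Python. This relies on the standard Hugging Face direct-download URL scheme (`/resolve/main/<filename>`); `quant_url` is a hypothetical helper introduced here for illustration, and `hf_hub_download` from the `huggingface_hub` package is the usual higher-level alternative.

```python
from urllib.request import urlretrieve  # stdlib; actual download needs network access

REPO_ID = "bartowski/mamba-2.8b-hf-GGUF"

def quant_url(filename: str, repo_id: str = REPO_ID) -> str:
    """Build the direct-download URL for one quant file using the
    Hugging Face `resolve/main` URL scheme."""
    return f"https://huggingface.co/{repo_id}/resolve/main/{filename}"

url = quant_url("mamba-2.8b-hf-Q6_K.gguf")
print(url)
# Uncomment to actually fetch the file (~2.7GB):
# urlretrieve(url, "mamba-2.8b-hf-Q6_K.gguf")
```

Note that the table above links to `blob/main` pages (human-readable file views); swapping `blob` for `resolve` yields the raw-file URL that download tools expect.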
mamba-2.8b-hf-IQ3_M.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b2f21079ca31d36ae531d00e8e2dbe1b021c54a823035507dea98eb04603ea79
+size 1680921664
mamba-2.8b-hf-IQ3_S.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ec9ee02ef5b1c461cb0c742c81e4d77c8d07241b25f6758aae92619c52849143
+size 1680921664
mamba-2.8b-hf-IQ4_NL.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a8218d4ca9e9a3f26ef5029eaee5491ee4258e8f154113423ede593854ad6e8e
+size 2015155264
mamba-2.8b-hf-IQ4_XS.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:82c35ff73bc3474c52f065f9ea09eb1065afaf29bed20e011bfa655c1a372d63
+size 1936512064
mamba-2.8b-hf-Q2_K.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a8353dce97978d749de32bce8f42d94dae2879ee98f1dfb9dc6d78f2c10588f6
+size 1425331264
mamba-2.8b-hf-Q3_K_L.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c807986a8e3de2b0cbe2953c5bab9b3c31844e47d5bc012eedc082c5436ca45a
+size 1680921664
mamba-2.8b-hf-Q3_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:339f89e87653036d99b20e343a70db0afb8cbbd2383f7e70c2668abede1ea72c
+size 1680921664
mamba-2.8b-hf-Q3_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0d29cca5d426204f7b70f543e5616d2f8f058286689ff7b25aabb01f97782d95
+size 1680921664
mamba-2.8b-hf-Q4_0.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b7e86b6419aba06c84223c01ca83d03d3abb8bc47fde220324e3b20990cd9f76
+size 2015155264
mamba-2.8b-hf-Q4_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0fb59e6e6c46503dda44e983a17fb84f7707d9b32e9f704c38a0696cc516324c
+size 2015155264
mamba-2.8b-hf-Q4_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:420ced62d13e123e80ce59f534ccc6447bf294cd247a9785513460586e8ebfda
+size 2015155264
mamba-2.8b-hf-Q5_0.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:71bd0e1fe115eb78a63115b636eb1fbd877bbb2b9b1d4319c71bae2680908843
+size 2329728064
mamba-2.8b-hf-Q5_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5b39d56e9ba57aa6dd1c20415dd441575726436025945e7e250ec3bd4911f7be
+size 2329728064
mamba-2.8b-hf-Q5_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3faf566ebb2977eea0c1dc2473f3bb358b957d4483e5cd2bccc8620564854eba
+size 2329728064
mamba-2.8b-hf-Q6_K.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:cb7efee5f1888190418118cc8c7ed7d34ac12117e76250ce348e507ec63f0731
+size 2663961664
mamba-2.8b-hf-Q8_0.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9ac31699c0289221d83292e87878416452cf13c0be7383bc0310203fd3b8c998
+size 3304620064