Upload folder using huggingface_hub
- .gitattributes +12 -0
- MergeBench-gemma-2-2b_coding-IQ4_XS.gguf +3 -0
- MergeBench-gemma-2-2b_coding-Q2_K.gguf +3 -0
- MergeBench-gemma-2-2b_coding-Q3_K_L.gguf +3 -0
- MergeBench-gemma-2-2b_coding-Q3_K_M.gguf +3 -0
- MergeBench-gemma-2-2b_coding-Q3_K_S.gguf +3 -0
- MergeBench-gemma-2-2b_coding-Q4_K_M.gguf +3 -0
- MergeBench-gemma-2-2b_coding-Q4_K_S.gguf +3 -0
- MergeBench-gemma-2-2b_coding-Q5_K_M.gguf +3 -0
- MergeBench-gemma-2-2b_coding-Q5_K_S.gguf +3 -0
- MergeBench-gemma-2-2b_coding-Q6_K.gguf +3 -0
- MergeBench-gemma-2-2b_coding-Q8_0.gguf +3 -0
- README.md +47 -0
- featherless-quants.png +3 -0
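The commit message above names huggingface_hub's `upload_folder`. A minimal sketch of how a folder of quants like this one could be pushed (the repo id is taken from the file URLs in the README below; the local directory name and token handling are assumptions):

```python
# Hypothetical sketch of the upload named in the commit message, using
# huggingface_hub's HfApi.upload_folder. "./quants" is an assumed local path.
REPO_ID = "featherless-ai-quants/MergeBench-gemma-2-2b_coding-GGUF"

def upload_quants(local_dir: str = "./quants") -> None:
    # Imported lazily so the sketch can be read without the library installed.
    from huggingface_hub import HfApi

    api = HfApi()  # picks up the HF token from the env or the local login cache
    api.upload_folder(
        folder_path=local_dir,  # directory containing the .gguf files
        repo_id=REPO_ID,
        repo_type="model",
        commit_message="Upload folder using huggingface_hub",
    )
```

Calling `upload_quants()` with write access would produce exactly the kind of file additions recorded in this diff.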
.gitattributes
CHANGED
@@ -33,3 +33,15 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+MergeBench-gemma-2-2b_coding-IQ4_XS.gguf filter=lfs diff=lfs merge=lfs -text
+MergeBench-gemma-2-2b_coding-Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
+MergeBench-gemma-2-2b_coding-Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
+MergeBench-gemma-2-2b_coding-Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+MergeBench-gemma-2-2b_coding-Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+MergeBench-gemma-2-2b_coding-Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+MergeBench-gemma-2-2b_coding-Q4_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+MergeBench-gemma-2-2b_coding-Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+MergeBench-gemma-2-2b_coding-Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+MergeBench-gemma-2-2b_coding-Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
+MergeBench-gemma-2-2b_coding-Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
+featherless-quants.png filter=lfs diff=lfs merge=lfs -text
MergeBench-gemma-2-2b_coding-IQ4_XS.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f89ffe68a1c91fed3c46ab5e727d616bbba909b01a5463a43615edc0a06120ba
+size 1576203232

MergeBench-gemma-2-2b_coding-Q2_K.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:67c8f1a5028ff21fdee5a04bbd0ce37424b3e124a14156d6132efdb766c3c9cb
+size 1229829088

MergeBench-gemma-2-2b_coding-Q3_K_L.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5c8ad38c4bcf7d275c5ed884d95a856ac21685f9b69a700d72005639c7035b7e
+size 1550435296

MergeBench-gemma-2-2b_coding-Q3_K_M.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7ccfa9353392f6031c51364072aa23e9fbe1c2ce4842b64a4a377f8dbe3fe220
+size 1461666784

MergeBench-gemma-2-2b_coding-Q3_K_S.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:955b64bc24a95647344f006ba6ae4fdd75e9d382ac21611bec2269247481b414
+size 1360659424

MergeBench-gemma-2-2b_coding-Q4_K_M.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0f3670eb98c648671023b84687588efacd660ad077eab06eddd286d8c1e177e6
+size 1708581856

MergeBench-gemma-2-2b_coding-Q4_K_S.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:bca080daad34203f921099b14552cc354c313d423862308f1371f26abe88e43f
+size 1638650848

MergeBench-gemma-2-2b_coding-Q5_K_M.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:06cc635013f8268781a65784f7bcaa8c2b8af4b7fa6958833f54750aa05a7f86
+size 1923277792

MergeBench-gemma-2-2b_coding-Q5_K_S.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d78bd8dc1a95d14a0486895a2b8d9d24ad354a48f058139014a6a6be89ce5042
+size 1882543072

MergeBench-gemma-2-2b_coding-Q6_K.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:bd13e2b7ceef7a89eac32b449a7d79998d0fc7366a96cee46b892d19b7510633
+size 2151392224

MergeBench-gemma-2-2b_coding-Q8_0.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d3b269a31d317bcbbbfb267268b7b98e43f6784017d2afc370889a1b1e97befc
+size 2784494560
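Each `.gguf` entry above is not the model itself but a git-lfs pointer file: three lines giving the spec version, the sha256 oid, and the size in bytes. A minimal sketch of parsing one (sample data is the IQ4_XS pointer from this commit):

```python
# Parse the three-line git-lfs pointer format (version / oid / size).
# SAMPLE is the IQ4_XS pointer recorded in this commit.
SAMPLE = """\
version https://git-lfs.github.com/spec/v1
oid sha256:f89ffe68a1c91fed3c46ab5e727d616bbba909b01a5463a43615edc0a06120ba
size 1576203232
"""

def parse_lfs_pointer(text: str) -> dict:
    # Each line is "<key> <value>"; split once so the oid keeps its colon.
    fields = dict(line.split(" ", 1) for line in text.strip().splitlines())
    fields["size"] = int(fields["size"])  # byte count of the real file
    return fields

ptr = parse_lfs_pointer(SAMPLE)
print(ptr["size"])  # 1576203232
```

The `size` field here (1,576,203,232 bytes) is what the README table below reports as 1503.18 MB.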
README.md
ADDED
@@ -0,0 +1,47 @@
+---
+base_model: MergeBench/gemma-2-2b_coding
+pipeline_tag: text-generation
+quantized_by: featherless-ai-quants
+---
+
+# MergeBench/gemma-2-2b_coding GGUF Quantizations
+
+
+
+*Optimized GGUF quantization files for enhanced model performance*
+
+> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple, small fee.
+---
+
+## Available Quantizations
+
+| Quantization Type | File | Size |
+|-------------------|------|------|
+| IQ4_XS | [MergeBench-gemma-2-2b_coding-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/MergeBench-gemma-2-2b_coding-GGUF/blob/main/MergeBench-gemma-2-2b_coding-IQ4_XS.gguf) | 1503.18 MB |
+| Q2_K | [MergeBench-gemma-2-2b_coding-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/MergeBench-gemma-2-2b_coding-GGUF/blob/main/MergeBench-gemma-2-2b_coding-Q2_K.gguf) | 1172.86 MB |
+| Q3_K_L | [MergeBench-gemma-2-2b_coding-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/MergeBench-gemma-2-2b_coding-GGUF/blob/main/MergeBench-gemma-2-2b_coding-Q3_K_L.gguf) | 1478.61 MB |
+| Q3_K_M | [MergeBench-gemma-2-2b_coding-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/MergeBench-gemma-2-2b_coding-GGUF/blob/main/MergeBench-gemma-2-2b_coding-Q3_K_M.gguf) | 1393.95 MB |
+| Q3_K_S | [MergeBench-gemma-2-2b_coding-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/MergeBench-gemma-2-2b_coding-GGUF/blob/main/MergeBench-gemma-2-2b_coding-Q3_K_S.gguf) | 1297.63 MB |
+| Q4_K_M | [MergeBench-gemma-2-2b_coding-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/MergeBench-gemma-2-2b_coding-GGUF/blob/main/MergeBench-gemma-2-2b_coding-Q4_K_M.gguf) | 1629.43 MB |
+| Q4_K_S | [MergeBench-gemma-2-2b_coding-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/MergeBench-gemma-2-2b_coding-GGUF/blob/main/MergeBench-gemma-2-2b_coding-Q4_K_S.gguf) | 1562.74 MB |
+| Q5_K_M | [MergeBench-gemma-2-2b_coding-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/MergeBench-gemma-2-2b_coding-GGUF/blob/main/MergeBench-gemma-2-2b_coding-Q5_K_M.gguf) | 1834.18 MB |
+| Q5_K_S | [MergeBench-gemma-2-2b_coding-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/MergeBench-gemma-2-2b_coding-GGUF/blob/main/MergeBench-gemma-2-2b_coding-Q5_K_S.gguf) | 1795.33 MB |
+| Q6_K | [MergeBench-gemma-2-2b_coding-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/MergeBench-gemma-2-2b_coding-GGUF/blob/main/MergeBench-gemma-2-2b_coding-Q6_K.gguf) | 2051.73 MB |
+| Q8_0 | [MergeBench-gemma-2-2b_coding-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/MergeBench-gemma-2-2b_coding-GGUF/blob/main/MergeBench-gemma-2-2b_coding-Q8_0.gguf) | 2655.50 MB |
+
+
+---
+
+## ⚡ Powered by [Featherless AI](https://featherless.ai)
+
+### Key Features
+
+- 🔥 **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
+- 🛠️ **Zero Infrastructure** - No server setup or maintenance required
+- **Vast Compatibility** - Support for 2400+ models and counting
+- **Affordable Pricing** - Starting at just $10/month
+
+---
+
+**Links:**
+[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models)
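Any file from the quantization table can be fetched individually with huggingface_hub's `hf_hub_download`; a minimal sketch (the helper `quant_filename` is ours, not part of the repo):

```python
# Sketch: fetch one quant from the table with huggingface_hub.
# hf_hub_download caches the file locally and returns its path.
REPO_ID = "featherless-ai-quants/MergeBench-gemma-2-2b_coding-GGUF"

def quant_filename(quant: str) -> str:
    # Hypothetical helper; file names follow the pattern used in this commit.
    return f"MergeBench-gemma-2-2b_coding-{quant}.gguf"

def download_quant(quant: str = "Q4_K_M") -> str:
    # Imported lazily so the sketch can be read without the library installed.
    from huggingface_hub import hf_hub_download
    return hf_hub_download(repo_id=REPO_ID, filename=quant_filename(quant))
```

The returned path can be passed straight to a GGUF runtime such as llama.cpp.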
featherless-quants.png
ADDED
Git LFS Details