Upload folder using huggingface_hub
- CodeGemma-7b-Q2_K.gguf +3 -0
- CodeGemma-7b-Q3_K_L.gguf +3 -0
- CodeGemma-7b-Q3_K_M.gguf +3 -0
- CodeGemma-7b-Q3_K_S.gguf +3 -0
- CodeGemma-7b-Q4_0.gguf +3 -0
- CodeGemma-7b-Q4_K_M.gguf +3 -0
- CodeGemma-7b-Q4_K_S.gguf +3 -0
- CodeGemma-7b-Q5_0.gguf +3 -0
- CodeGemma-7b-Q5_K_M.gguf +3 -0
- CodeGemma-7b-Q5_K_S.gguf +3 -0
- CodeGemma-7b-Q6_K.gguf +3 -0
- CodeGemma-7b-Q8_0.gguf +3 -0
- README.md +25 -26
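The commit message above indicates the folder was pushed with the `huggingface_hub` Python library. As a hedged illustration only (the local folder path is an assumption; the repo id is taken from the README below), a minimal sketch of such an upload looks like this; large `.gguf` files are tracked through Git LFS automatically:

```python
# Hedged sketch: uploading a folder of GGUF files with huggingface_hub.
# The local folder path below is an assumption for illustration.
from huggingface_hub import HfApi

api = HfApi()  # uses the token from `huggingface-cli login` or HF_TOKEN
api.upload_folder(
    folder_path="./CodeGemma-7b-GGUF",            # assumed local directory holding the .gguf files
    repo_id="tensorblock/CodeGemma-7b-GGUF",      # target model repo
    repo_type="model",
    commit_message="Upload folder using huggingface_hub",
    allow_patterns=["*.gguf", "README.md"],       # push only the quantized files and the model card
)
```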
CodeGemma-7b-Q2_K.gguf
ADDED

@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:651930b98af30c5bbe1f6ef9ab485ced0deaf849baee12dba87f89cd576a5b4e
+size 3481447328
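Each of these entries is a Git LFS pointer: the weights live in LFS storage, and the committed file records only the spec version, the blob's SHA-256, and its size in bytes. A small sketch of checking a downloaded copy against the pointer above (the file is assumed to sit in the current directory):

```python
# Hedged sketch: verify a downloaded GGUF file against the LFS pointer above.
# Expected digest and size are copied from this commit's pointer file.
import hashlib
import os

path = "CodeGemma-7b-Q2_K.gguf"  # assumed to be in the current directory
expected_sha256 = "651930b98af30c5bbe1f6ef9ab485ced0deaf849baee12dba87f89cd576a5b4e"
expected_size = 3481447328

assert os.path.getsize(path) == expected_size, "size mismatch"

h = hashlib.sha256()
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):  # hash in 1 MiB chunks
        h.update(chunk)

assert h.hexdigest() == expected_sha256, "sha256 mismatch"
print("OK: file matches the LFS pointer")
```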
CodeGemma-7b-Q3_K_L.gguf
ADDED

@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5ef3ff8b0d9487114cc14bfbbe6d195250d77dfcb0031c5ad769283a594ba1f4
+size 4709067680
CodeGemma-7b-Q3_K_M.gguf
ADDED

@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:88f8342f91de7a6ae177b9ec89e9f567d023cd0fef0e2159ad39c5b3d0f20214
+size 4369329056
CodeGemma-7b-Q3_K_S.gguf
ADDED

@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8d4eaf3554028da512283c137ec0a5a076cfeff541a2579dad533c712e41c134
+size 3982404512
CodeGemma-7b-Q4_0.gguf
ADDED

@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:89f6d84773349f84d1666a891a487ec6019857d6689d5272a911d0fc4118076e
+size 5011844000
CodeGemma-7b-Q4_K_M.gguf
ADDED

@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d5d5bbf040bbb804368cccb32082ce5ff18b445ea9321309cb174b31ebe283a9
+size 5329759136
CodeGemma-7b-Q4_K_S.gguf
ADDED

@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b3df62b00992341b68e8fb1efd35fb34c7a4e28c73dd74feb1aeea36111ce27c
+size 5046447008
CodeGemma-7b-Q5_0.gguf
ADDED

@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f1a6439b43fe26d62ec79493ca8eefa946ec8d8d35e3260bc6f3d6cb3d0d20ec
+size 5980728224
CodeGemma-7b-Q5_K_M.gguf
ADDED

@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c90b8e3715fd450537d479c5db3fd7aa04d3973aa73be9570e55f26f086630bc
+size 6144502688
CodeGemma-7b-Q5_K_S.gguf
ADDED

@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0ec6d718ab16f6b11bcfc35f8ee8eb7b4348dd7d76e700623ff26bde56542199
+size 5980728224
CodeGemma-7b-Q6_K.gguf
ADDED

@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0f76577857c2ca2c2912382832b6be647a8ec2bc48a6a2989d7e2c1f744ee20b
+size 7010167712
CodeGemma-7b-Q8_0.gguf
ADDED

@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:596348c1cdfa437e6ba9c4e75af57f1dbc92c2cfaf4958a2aa97203f1878b815
+size 9077844896
README.md
CHANGED

@@ -1,16 +1,15 @@
 ---
-library_name: transformers
-license: gemma
-license_link: https://ai.google.dev/gemma/terms
-extra_gated_heading: Access CodeGemma on Hugging Face
-extra_gated_prompt: To access CodeGemma on Hugging Face, you’re required to review
-  and agree to Google’s usage license. To do this, please ensure you’re logged-in
-  to Hugging Face and click below. Requests are processed immediately.
-extra_gated_button_content: Acknowledge license
-base_model: google/codegemma-7b
 tags:
+- code
+- gemma
 - TensorBlock
 - GGUF
+library_name: transformers
+pipeline_tag: text-generation
+license: other
+license_name: gemma-terms-of-use
+license_link: https://ai.google.dev/gemma/terms
+base_model: TechxGenus/CodeGemma-7b
 ---
 
 <div style="width: auto; margin-left: auto; margin-right: auto">
@@ -24,11 +23,11 @@ tags:
 </div>
 </div>
 
-##
+## TechxGenus/CodeGemma-7b - GGUF
 
-This repo contains GGUF format model files for [
+This repo contains GGUF format model files for [TechxGenus/CodeGemma-7b](https://huggingface.co/TechxGenus/CodeGemma-7b).
 
-The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit
+The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
 
 <div style="text-align: left; margin: 20px 0;">
   <a href="https://tensorblock.co/waitlist/client" style="display: inline-block; padding: 10px 20px; background-color: #007bff; color: white; text-decoration: none; border-radius: 5px; font-weight: bold;">
@@ -46,18 +45,18 @@ The files were quantized using machines provided by [TensorBlock](https://tensor
 
 | Filename | Quant type | File Size | Description |
 | -------- | ---------- | --------- | ----------- |
-| [
-| [
-| [
-| [
-| [
-| [
-| [
-| [
-| [
-| [
-| [
-| [
+| [CodeGemma-7b-Q2_K.gguf](https://huggingface.co/tensorblock/CodeGemma-7b-GGUF/blob/main/CodeGemma-7b-Q2_K.gguf) | Q2_K | 3.481 GB | smallest, significant quality loss - not recommended for most purposes |
+| [CodeGemma-7b-Q3_K_S.gguf](https://huggingface.co/tensorblock/CodeGemma-7b-GGUF/blob/main/CodeGemma-7b-Q3_K_S.gguf) | Q3_K_S | 3.982 GB | very small, high quality loss |
+| [CodeGemma-7b-Q3_K_M.gguf](https://huggingface.co/tensorblock/CodeGemma-7b-GGUF/blob/main/CodeGemma-7b-Q3_K_M.gguf) | Q3_K_M | 4.369 GB | very small, high quality loss |
+| [CodeGemma-7b-Q3_K_L.gguf](https://huggingface.co/tensorblock/CodeGemma-7b-GGUF/blob/main/CodeGemma-7b-Q3_K_L.gguf) | Q3_K_L | 4.709 GB | small, substantial quality loss |
+| [CodeGemma-7b-Q4_0.gguf](https://huggingface.co/tensorblock/CodeGemma-7b-GGUF/blob/main/CodeGemma-7b-Q4_0.gguf) | Q4_0 | 5.012 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
+| [CodeGemma-7b-Q4_K_S.gguf](https://huggingface.co/tensorblock/CodeGemma-7b-GGUF/blob/main/CodeGemma-7b-Q4_K_S.gguf) | Q4_K_S | 5.046 GB | small, greater quality loss |
+| [CodeGemma-7b-Q4_K_M.gguf](https://huggingface.co/tensorblock/CodeGemma-7b-GGUF/blob/main/CodeGemma-7b-Q4_K_M.gguf) | Q4_K_M | 5.330 GB | medium, balanced quality - recommended |
+| [CodeGemma-7b-Q5_0.gguf](https://huggingface.co/tensorblock/CodeGemma-7b-GGUF/blob/main/CodeGemma-7b-Q5_0.gguf) | Q5_0 | 5.981 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
+| [CodeGemma-7b-Q5_K_S.gguf](https://huggingface.co/tensorblock/CodeGemma-7b-GGUF/blob/main/CodeGemma-7b-Q5_K_S.gguf) | Q5_K_S | 5.981 GB | large, low quality loss - recommended |
+| [CodeGemma-7b-Q5_K_M.gguf](https://huggingface.co/tensorblock/CodeGemma-7b-GGUF/blob/main/CodeGemma-7b-Q5_K_M.gguf) | Q5_K_M | 6.145 GB | large, very low quality loss - recommended |
+| [CodeGemma-7b-Q6_K.gguf](https://huggingface.co/tensorblock/CodeGemma-7b-GGUF/blob/main/CodeGemma-7b-Q6_K.gguf) | Q6_K | 7.010 GB | very large, extremely low quality loss |
+| [CodeGemma-7b-Q8_0.gguf](https://huggingface.co/tensorblock/CodeGemma-7b-GGUF/blob/main/CodeGemma-7b-Q8_0.gguf) | Q8_0 | 9.078 GB | very large, extremely low quality loss - not recommended |
 
 
 ## Downloading instruction
@@ -73,11 +72,11 @@ pip install -U "huggingface_hub[cli]"
 Then, download the individual model file to a local directory
 
 ```shell
-huggingface-cli download tensorblock/
+huggingface-cli download tensorblock/CodeGemma-7b-GGUF --include "CodeGemma-7b-Q2_K.gguf" --local-dir MY_LOCAL_DIR
 ```
 
 If you want to download multiple model files with a pattern (e.g., `*Q4_K*gguf`), you can try:
 
 ```shell
-huggingface-cli download tensorblock/
+huggingface-cli download tensorblock/CodeGemma-7b-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
 ```
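The CLI commands in the hunk above have a direct Python equivalent. A hedged sketch using the `huggingface_hub` API (the `MY_LOCAL_DIR` placeholder is kept from the README):

```python
# Hedged sketch: the same downloads via the huggingface_hub Python API.
from huggingface_hub import hf_hub_download, snapshot_download

# Single file, mirroring the first CLI command.
hf_hub_download(
    repo_id="tensorblock/CodeGemma-7b-GGUF",
    filename="CodeGemma-7b-Q2_K.gguf",
    local_dir="MY_LOCAL_DIR",  # placeholder directory, as in the README
)

# Pattern-based download, mirroring the second CLI command.
snapshot_download(
    repo_id="tensorblock/CodeGemma-7b-GGUF",
    allow_patterns=["*Q4_K*gguf"],
    local_dir="MY_LOCAL_DIR",
)
```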
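The README states the quantized files are compatible with llama.cpp as of commit b4242. As a usage illustration only (not part of this commit), a sketch of running one of the downloaded files with the `llama-cpp-python` bindings; the model path, context size, and prompt are assumptions:

```python
# Hedged sketch: run a downloaded GGUF file with llama-cpp-python.
# Assumes `pip install llama-cpp-python` and a local copy of the Q4_K_M file.
from llama_cpp import Llama

llm = Llama(
    model_path="MY_LOCAL_DIR/CodeGemma-7b-Q4_K_M.gguf",  # assumed download location
    n_ctx=4096,        # context window; adjust to available memory
    n_gpu_layers=0,    # set > 0 to offload layers if built with GPU support
)

out = llm(
    "Write a Python function that reverses a string.",
    max_tokens=128,
    temperature=0.2,
)
print(out["choices"][0]["text"])
```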