---
license: gemma
tags:
- gguf
- text-generation
- gemma
- quantized
model_type: llama
quantized_by: c516a
base_model: google/gemma-3-12b-it
---

## License and Usage

This repository contains quantized variants of the Gemma language model developed by Google.

* **Model source:** [Google / Gemma](https://ai.google.dev/gemma/terms)
* **Quantized by:** c516a

### Terms of Use

These quantized models are:

* Provided under the same terms as the original Google Gemma models.
* Intended only for **non-commercial use**, **research**, and **experimentation**.
* Redistributed without modification to the underlying model weights, except for **format (GGUF)** and **quantization level**.

By using this repository or its contents, you agree to:

* Comply with the [Gemma License Terms](https://ai.google.dev/gemma/terms),
* Not use the model or its derivatives for any **commercial purposes** without a separate license from Google,
* Acknowledge Google as the original model creator.

> **Disclaimer:** This repository is not affiliated with Google.

---

## Model Downloads

All quantized model files are hosted externally for convenience. You can download them from:

**[https://modelbakery.nincs.net/c516a/quantized-codegemma-7b-it](https://modelbakery.nincs.net/c516a/quantized-codegemma-7b-it)**

To clone the repository: `git clone https://modelbakery.nincs.net/c516a/quantized-codegemma-7b-it.git`

### File list

Each `.gguf` file has a corresponding `.txt` file containing its download URL.

Example:

* `codegemma-7b-it.Q4_K_M.gguf` (binary file)
* `codegemma-7b-it.Q4_K_M.gguf.txt` contains:

```
Download: https://modelbakery.nincs.net/users/c516a/projects/quantized-codegemma-7b-it/codegemma-7b-it.Q4_K_M.gguf
```
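
Since the weights live off-repo, the `.txt` pointer files can drive a bulk download. A minimal sketch, assuming the pointer files sit in a `models/` folder and each holds a single `Download: <URL>` line as in the example above:

```shell
#!/bin/sh
# Fetch every .gguf whose URL is recorded in a models/*.gguf.txt pointer file.
# Assumes each .txt holds exactly one "Download: <URL>" line.
for txt in models/*.gguf.txt; do
  [ -e "$txt" ] || continue                 # no pointer files: nothing to do
  url=$(sed -n 's/^Download: //p' "$txt")   # extract the URL after the prefix
  curl -fL -o "${txt%.txt}" "$url"          # save beside the pointer, minus .txt
done
```

`${txt%.txt}` strips the trailing `.txt`, so each model lands next to its pointer file under the expected `.gguf` name.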

---

## Notes

These models were quantized locally with `llama.cpp` and tested on an RTX 3050 / Ryzen 9 5950X / 64 GB RAM setup.
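
For reproducibility, the build workflow can be sketched roughly as follows. This is an illustrative outline, not the exact commands used here: it assumes a compiled `llama.cpp` checkout and the original Hugging Face weights on disk, and all paths and file names are examples.

```shell
# Illustrative llama.cpp quantization workflow (paths/names are examples).
SRC=./codegemma-7b-it                    # original HF checkpoint directory
F16=codegemma-7b-it.F16.gguf             # intermediate full-precision GGUF
QTYPE=Q4_K_M                             # quantization preset
OUT="${F16%.F16.gguf}.${QTYPE}.gguf"     # derived output name

python convert_hf_to_gguf.py "$SRC" --outfile "$F16"  # HF weights -> GGUF (F16)
./llama-quantize "$F16" "$OUT" "$QTYPE"               # F16 -> 4-bit K-quant
```

The same two-step pattern (convert to a full-precision GGUF, then quantize) applies to the other quantization levels by changing `QTYPE`.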

If you find them useful, feel free to star the project or fork it to share improvements!

## Model Files

Model weights are not stored directly in this repository due to size constraints.

Instead, each `.txt` file in the `models/` folder contains a direct download link to the corresponding `.gguf` model file hosted at:

https://modelbakery.nincs.net/c516a/projects/quantized-codegemma-7b-it