Upload README.md with huggingface_hub
README.md (ADDED)
---
datasets:
- GetSoloTech/Code-Reasoning
base_model: cublya/Gemma3-Code-Reasoning-4B
pipeline_tag: text-generation
library_name: transformers
tags:
- code-generation
- competitive-programming
- code-reasoning
- programming
- algorithms
- problem-solving
- llama-cpp
- gguf-my-repo
---

# lainlives/Gemma3-Code-Reasoning-4B
This repository contains GGUF format model files for [`cublya/Gemma3-Code-Reasoning-4B`](https://huggingface.co/cublya/Gemma3-Code-Reasoning-4B).

### Available Quants
The following files were generated and uploaded to this repo:
`Q4_0`, `Q4_K_S`, `Q4_K_M`, `Q5_0`, `Q5_K_S`, `Q5_K_M`, `Q6_K`, `Q8_0`, `f16`, `bf16`

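If you only need a single quant locally (for example to point another tool at a file path), the `huggingface_hub` CLI can fetch it directly. This is a minimal sketch; it assumes the files follow the `Gemma3-Code-Reasoning-4B-<QUANT>.gguf` naming used in the llama.cpp examples below.

```bash
# Download just the Q4_K_M quant into the current directory
huggingface-cli download lainlives/Gemma3-Code-Reasoning-4B \
  Gemma3-Code-Reasoning-4B-Q4_K_M.gguf --local-dir .
```
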
### Use with llama.cpp

CLI:
```bash
llama-cli --hf-repo lainlives/Gemma3-Code-Reasoning-4B --hf-file Gemma3-Code-Reasoning-4B-Q4_K_M.gguf -p "The meaning to life and the universe is"
```

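If you have already pulled a quant locally (for example with the `huggingface-cli` sketch above), you can point `llama-cli` at the file directly instead of using the `--hf-repo`/`--hf-file` pair; `-m`, `-c`, and `-n` are standard llama.cpp options.

```bash
# Run against a local GGUF file with a 4K context, capping output at 256 tokens
llama-cli -m ./Gemma3-Code-Reasoning-4B-Q4_K_M.gguf -c 4096 -n 256 \
  -p "Write a function that checks whether a string is a palindrome."
```
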
Server:
```bash
llama-server --hf-repo lainlives/Gemma3-Code-Reasoning-4B --hf-file Gemma3-Code-Reasoning-4B-Q4_K_M.gguf -c 2048
```

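Once `llama-server` is running it exposes an OpenAI-compatible HTTP API, so a quick smoke test from another shell can look like the sketch below. It assumes the default `127.0.0.1:8080` bind; adjust if you pass `--host`/`--port`.

```bash
# Request a short completion from the running llama-server via the OpenAI-style endpoint
curl http://127.0.0.1:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [{"role": "user", "content": "Write a Python function that reverses a string."}],
    "max_tokens": 128
  }'
```
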
### Use with Ollama

CLI:
```bash
ollama run https://hf.co/lainlives/Gemma3-Code-Reasoning-4B:Q4_K_M
```
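
Ollama also serves a local HTTP API (default port `11434`) once the model has been pulled. The sketch below assumes the model is registered under the same `hf.co/...` name used in the `ollama run` command above; check `ollama list` if the name differs on your machine.

```bash
# Ask the local Ollama daemon for a single, non-streamed completion
curl http://localhost:11434/api/generate -d '{
  "model": "hf.co/lainlives/Gemma3-Code-Reasoning-4B:Q4_K_M",
  "prompt": "Explain binary search in one paragraph.",
  "stream": false
}'
```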