|
base_model:
- uaytug/ucoder-mini
---

# uCoder-8b-base-GGUF

Quantized GGUF models converted from [uaytug/ucoder-mini](https://huggingface.co/uaytug/ucoder-mini) using the latest llama.cpp (CUDA-accelerated quantization).
### Available Files

**16-bit**
- `ucoder-mini-BF16.gguf` → **Highest-precision float (closest to the original weights, ~16 GB)**

**8-bit**
- `ucoder-mini-Q8_0.gguf` → **Near-lossless**

**6-bit**
- `ucoder-mini-Q6_K.gguf`

**5-bit**
- `ucoder-mini-Q5_K_S.gguf`
- `ucoder-mini-Q5_K_M.gguf` → **Great quality**

**4-bit** (most popular range)
- `ucoder-mini-Q4_K_M.gguf` → **Recommended balance**
- `ucoder-mini-Q4_K_S.gguf`
- `ucoder-mini-Q4_1.gguf`
- `ucoder-mini-IQ4_XS.gguf`
- `ucoder-mini-IQ4_NL.gguf`

**3-bit**
- `ucoder-mini-Q3_K_S.gguf`
- `ucoder-mini-Q3_K_M.gguf`
- `ucoder-mini-IQ3_XXS.gguf`

**2-bit**
- `ucoder-mini-Q2_K.gguf`
- `ucoder-mini-IQ2_M.gguf`
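The file sizes above can be sanity-checked with a back-of-the-envelope estimate: a GGUF file weighs roughly parameter count × bits-per-weight ÷ 8 bytes. The bits-per-weight figures below are approximate, commonly cited values for llama.cpp quant types (block scales and mixed-precision tensors shift them slightly) and are assumptions, not numbers taken from this repo.

```python
# Rough GGUF size estimate: params * bits-per-weight / 8 bytes.
# The bits-per-weight values are approximate community figures for
# llama.cpp quant types, not exact per-file numbers for this model.

APPROX_BPW = {
    "BF16": 16.0,
    "Q8_0": 8.5,     # 8-bit weights + one fp16 scale per 32-weight block
    "Q6_K": 6.56,
    "Q5_K_M": 5.69,
    "Q4_K_M": 4.85,
    "Q3_K_M": 3.91,
    "Q2_K": 2.63,
}

def approx_size_gb(n_params: float, quant: str) -> float:
    """Approximate on-disk size in decimal GB for a given quant type."""
    return n_params * APPROX_BPW[quant] / 8 / 1e9

if __name__ == "__main__":
    n = 8e9  # the "8b" in uCoder-8b-base
    for q in APPROX_BPW:
        print(f"{q:8s} ~{approx_size_gb(n, q):5.1f} GB")
```

For an 8B-parameter model this reproduces the ~16 GB BF16 figure above and puts the 4-bit files at roughly a third of that.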
## Original Model Information

# uCoder Mini

> **Important:** This model was trained specifically for **coding and mathematical reasoning tasks** (competitive programming, LeetCode, algorithm problems, etc.). It may give unsatisfactory answers to general-knowledge, creative-writing, and other non-coding questions, as well as to questions asked in languages other than English.
| Setup | VRAM Required | Notes |
|-------|---------------|-------|
| FP16/BF16 | ~3 GB | Full precision inference |
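One way to use a table like this: pick the largest quant whose file, plus some headroom for the KV cache and activations, fits in your GPU memory. A minimal sketch, where the per-quant file sizes and the 1.2× overhead factor are illustrative assumptions rather than measurements for this model:

```python
# Illustrative quant picker: choose the largest file that still fits in
# VRAM after reserving headroom for the KV cache and activations.
# File sizes (GB) and the 1.2x overhead factor are rough assumptions,
# not measured values for this model.

QUANT_SIZES_GB = [          # largest to smallest
    ("BF16", 16.0),
    ("Q8_0", 8.5),
    ("Q5_K_M", 5.7),
    ("Q4_K_M", 4.9),
    ("Q3_K_M", 3.9),
    ("Q2_K", 2.6),
]

def pick_quant(vram_gb: float, overhead: float = 1.2):
    """Return the largest quant whose size * overhead fits in vram_gb."""
    for name, size in QUANT_SIZES_GB:
        if size * overhead <= vram_gb:
            return name
    return None  # nothing fits; consider CPU offload

print(pick_quant(8.0))   # a typical 8 GB consumer GPU -> Q5_K_M
print(pick_quant(24.0))  # a 24 GB card -> BF16
```

Adjust the overhead factor upward for long contexts, since the KV cache grows with context length.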
## Citation