Update README.md
README.md CHANGED

@@ -24,6 +24,8 @@ language:
 
 
 
+[](https://huggingface.co/mradermacher/LFM2.5-1.2B-Nova-Function-Calling-GGUF)
+[-orange?style=for-the-badge&logo=huggingface&logoColor=white)](https://huggingface.co/mradermacher/LFM2.5-1.2B-Nova-Function-Calling-i1-GGUF)
 
 </div>
 
@@ -94,11 +96,14 @@ Expected Output:
 </tool_call>
 ```
 
-
-
-
-
-
+## 📥 Download GGUF (Quantized)
+Thanks to **[mradermacher](https://huggingface.co/mradermacher)**, this model is available in high-performance GGUF formats for local inference (llama.cpp, Ollama, LM Studio).
+
+| Version | Description | Recommended For | Link |
+| :--- | :--- | :--- | :--- |
+| **Standard GGUF** | Traditional static quantization. | General testing & broad compatibility. | [**Download**](https://huggingface.co/mradermacher/LFM2.5-1.2B-Nova-Function-Calling-GGUF) |
+| **Imatrix GGUF** | **(Best Quality)** Importance Matrix tuned. Higher accuracy at small sizes. | **Low VRAM** devices (Android/Pi) or max quality needs. | [**Download**](https://huggingface.co/mradermacher/LFM2.5-1.2B-Nova-Function-Calling-i1-GGUF) |
+
 ### ⚙️ Training Details
 | Parameter | Value |
 |---|---|
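The download section this commit adds points at mradermacher's GGUF repos for use with llama.cpp and similar runtimes. A minimal local-inference sketch, assuming llama.cpp is installed and that the repo publishes a Q4_K_M quant (the exact filename and quant pattern below are assumptions — browse the repo to see what is actually published):

```shell
# Fetch one quantized file from the standard GGUF repo.
# The "*Q4_K_M*" pattern is an assumption about the published filenames.
huggingface-cli download mradermacher/LFM2.5-1.2B-Nova-Function-Calling-GGUF \
  --include "*Q4_K_M*" --local-dir ./models

# Run it with llama.cpp's CLI (the model path below is hypothetical;
# use whatever filename the download step produced).
llama-cli -m ./models/LFM2.5-1.2B-Nova-Function-Calling.Q4_K_M.gguf \
  -p "What is the weather in Paris?" -n 128
```

The Imatrix (i1) repo can be substituted for the standard one in the same commands when running on low-VRAM devices, as the new table recommends.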