Novachrono93 committed · verified
Commit 707c010 · 1 Parent(s): 4c7d2ca

Update README.md

Files changed (1)
  1. README.md +10 -5
README.md CHANGED
@@ -24,6 +24,8 @@ language:
 ![Liquid AI](https://img.shields.io/badge/Architecture-Liquid%20Neural%20Network-cyan?style=for-the-badge)
 ![Function Calling](https://img.shields.io/badge/Task-SOTA%20Function%20Calling-orange?style=for-the-badge)
 ![Size](https://img.shields.io/badge/Params-1.2B-green?style=for-the-badge)
+[![GGUF Available](https://img.shields.io/badge/GGUF-Standard-yellow?style=for-the-badge&logo=huggingface&logoColor=white)](https://huggingface.co/mradermacher/LFM2.5-1.2B-Nova-Function-Calling-GGUF)
+[![Imatrix GGUF Available](https://img.shields.io/badge/GGUF-Imatrix_(High_Quality)-orange?style=for-the-badge&logo=huggingface&logoColor=white)](https://huggingface.co/mradermacher/LFM2.5-1.2B-Nova-Function-Calling-i1-GGUF)
 
 </div>
 
@@ -94,11 +96,14 @@ Expected Output:
 </tool_call>
 ```
 
-### 2. Using GGUF (llama.cpp / Ollama)
-This model is available in GGUF format in the companion repository: NovachronoAI/LFM2.5-1.2B-Nova-Function-Calling-GGUF
-* Recommended: q4_k_m.gguf (Balanced Speed/Quality - ~800MB)
-* Max Quality: f16.gguf (Lossless - ~2.5GB)
-* Max Speed: q2_k.gguf (Extreme Speed - ~400MB)
+## 📥 Download GGUF (Quantized)
+Thanks to **[mradermacher](https://huggingface.co/mradermacher)**, this model is available in high-performance GGUF formats for local inference (llama.cpp, Ollama, LM Studio).
+
+| Version | Description | Recommended For | Link |
+| :--- | :--- | :--- | :--- |
+| **Standard GGUF** | Traditional static quantization. | General testing & broad compatibility. | [**Download**](https://huggingface.co/mradermacher/LFM2.5-1.2B-Nova-Function-Calling-GGUF) |
+| **Imatrix GGUF** | **(Best Quality)** Importance Matrix tuned. Higher accuracy at small sizes. | **Low VRAM** devices (Android/Pi) or max quality needs. | [**Download**](https://huggingface.co/mradermacher/LFM2.5-1.2B-Nova-Function-Calling-i1-GGUF) |
+
 ### ⚙️ Training Details
 | Parameter | Value |
 |---|---|
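For readers consuming the function-calling output this README documents, here is a minimal sketch of extracting a tool call from a model response. The `</tool_call>` close tag appears in the diff's expected-output context; the JSON payload shape (`name`/`arguments`) and the sample response below are assumptions for illustration, not confirmed by this commit.

```python
import json
import re

def extract_tool_calls(text: str) -> list[dict]:
    """Extract JSON payloads wrapped in <tool_call>...</tool_call> tags.

    The tag format matches the expected-output snippet in the README;
    the JSON body shape is a common convention, assumed here.
    """
    pattern = re.compile(r"<tool_call>\s*(.*?)\s*</tool_call>", re.DOTALL)
    return [json.loads(payload) for payload in pattern.findall(text)]

# Hypothetical model response for demonstration:
response = (
    "<tool_call>\n"
    '{"name": "get_weather", "arguments": {"city": "Paris"}}\n'
    "</tool_call>"
)
calls = extract_tool_calls(response)
print(calls[0]["name"])  # → get_weather
```

A regex with `re.DOTALL` keeps the extraction tolerant of multi-line JSON, and parsing with `json.loads` fails loudly on malformed payloads rather than passing them through.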