Update README.md

This model has been **specifically engineered for robust Function Calling**, all…
* **ChatML Native:** Uses the standard `<|im_start|>` format for easy integration.
* **GGUF Ready:** Available in all quantization levels (from 16-bit down to 2-bit).
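The ChatML turn structure mentioned above can be sketched in a few lines. This is an illustrative snippet, not part of the model's tooling; `to_chatml` is a hypothetical helper name:

```python
def to_chatml(messages):
    """Render a list of {role, content} dicts as a ChatML prompt string."""
    prompt = ""
    for m in messages:
        # Each turn is wrapped in <|im_start|>role ... <|im_end|> markers.
        prompt += f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n"
    # Leave an open assistant turn for the model to complete.
    return prompt + "<|im_start|>assistant\n"

demo = to_chatml([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is the area of a circle with radius 5?"},
])
print(demo)
```

In practice the chat template shipped with the tokenizer (or baked into the GGUF) applies this formatting for you; the sketch is only meant to show what the model sees.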

# 📊 Performance Benchmark

<div align="center">
<img src="./nova_benchmark.jpg" alt="Nova-LFM Benchmark Chart" width="100%" />
</div>

> **Note:** The "Blind Test" metric (58%) represents the model's raw semantic accuracy without any tool definitions provided (Zero-Shot). The "Syntax Reliability" (97%) measures the model's ability to generate valid, crash-free JSON structure, which matches GPT-4o class performance.

---
Expected Output:

```
<tool_call>
{"name": "calculate_circle_area", "arguments": {"radius": 5}}
</tool_call>
```
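On the application side, these `<tool_call>` blocks can be pulled out of the model's output with a small parser. The `extract_tool_calls` helper below is illustrative only, not something shipped with the model:

```python
import json
import re

def extract_tool_calls(text):
    """Extract JSON payloads from <tool_call>...</tool_call> blocks in model output."""
    calls = []
    for payload in re.findall(r"<tool_call>\s*(.*?)\s*</tool_call>", text, re.DOTALL):
        try:
            calls.append(json.loads(payload))
        except json.JSONDecodeError:
            pass  # skip malformed blocks instead of crashing
    return calls

output = '<tool_call>\n{"name": "calculate_circle_area", "arguments": {"radius": 5}}\n</tool_call>'
calls = extract_tool_calls(output)
print(calls[0]["name"])  # calculate_circle_area
```

Swallowing malformed blocks (rather than raising) keeps the application alive even in the rare case where the model emits invalid JSON.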
### 2. Using GGUF (llama.cpp / Ollama)

This model is available in GGUF format in the companion repository: `NovachronoAI/LFM2.5-1.2B-Nova-Function-Calling-GGUF`

* Recommended: `q4_k_m.gguf` (balanced speed/quality, ~800 MB)
* Max Quality: `f16.gguf` (lossless, ~2.5 GB)
* Max Speed: `q2_k.gguf` (extreme speed, ~400 MB)
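Assuming the quant filenames above match the files in the companion repository (check its file listing for the exact names), a typical local workflow looks like this sketch:

```shell
# Fetch a quant from the companion repo
huggingface-cli download NovachronoAI/LFM2.5-1.2B-Nova-Function-Calling-GGUF \
  q4_k_m.gguf --local-dir .

# Run interactively with llama.cpp
llama-cli -m q4_k_m.gguf -p "What is the area of a circle with radius 5?"

# Or register it with Ollama via a minimal Modelfile
echo 'FROM ./q4_k_m.gguf' > Modelfile
ollama create nova-fc -f Modelfile
ollama run nova-fc
```

The `nova-fc` model name is arbitrary; pick whatever tag you like when creating the Ollama model.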
### ⚙️ Training Details

| Parameter | Value |
|---|---|
| Base Model | LiquidAI/LFM2.5-1.2B-Instruct |