---
license: apache-2.0
base_model: LiquidAI/LFM2.5-1.2B-Instruct
tags:
- function-calling
- liquid-neural-network
- unsloth
- tool-use
- gguf
- conversational
datasets:
- NovachronoAI/Nova-Synapse-Function-Calling
library_name: transformers
pipeline_tag: text-generation
language:
- en
---

# 🌊 LFM 2.5 1.2B - Nova Synapse (Function Calling)

<div align="center">

![Unsloth Fine-tuning](https://img.shields.io/badge/Fine--Tuned%20with-Unsloth-blue?style=for-the-badge)
![Liquid AI](https://img.shields.io/badge/Architecture-Liquid%20Neural%20Network-cyan?style=for-the-badge)
![Function Calling](https://img.shields.io/badge/Task-SOTA%20Function%20Calling-orange?style=for-the-badge)
![Size](https://img.shields.io/badge/Params-1.2B-green?style=for-the-badge)
[![GGUF Available](https://img.shields.io/badge/GGUF-Standard-yellow?style=for-the-badge&logo=huggingface&logoColor=white)](https://huggingface.co/mradermacher/LFM2.5-1.2B-Nova-Function-Calling-GGUF)
[![Imatrix GGUF Available](https://img.shields.io/badge/GGUF-Imatrix_(High_Quality)-orange?style=for-the-badge&logo=huggingface&logoColor=white)](https://huggingface.co/mradermacher/LFM2.5-1.2B-Nova-Function-Calling-i1-GGUF)

</div>

## 🚀 Model Overview
**LFM2.5-1.2B-Nova-Function-Calling** is a specialized fine-tune of Liquid AI's **Liquid Neural Network (LFM 2.5)**. Despite its small size (1.2B parameters), its hybrid architecture lets it rival 7B+ class models on targeted tasks such as function calling.

This model has been **specifically engineered for robust Function Calling**, allowing it to seamlessly convert natural language user queries into structured JSON inputs for tools, APIs, and software agents.

### 🌟 Key Features
* **Hyper-Efficient:** Runs on extremely low-resource hardware (phones, Raspberry Pi, older laptops) thanks to the 1.2B Liquid architecture.
* **Precision Tuned:** Reached a final training loss of **2.63**, learning structured JSON syntax without overfitting.
* **ChatML Native:** Uses the standard `<|im_start|>` format for easy integration (see the prompt sketch below).
* **GGUF Ready:** Available in all quantization levels (from 16-bit down to 2-bit).
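
A minimal sketch of a full ChatML prompt with a tool schema in the system turn; the `get_weather` tool and the system-prompt wording are illustrative assumptions, not a convention documented in this card:

```python
# Hypothetical tool schema passed to the model; the exact system-prompt
# convention is an assumption, so adapt it to your deployment.
tools = (
    '[{"name": "get_weather", '
    '"description": "Get the current weather for a city", '
    '"parameters": {"type": "object", '
    '"properties": {"city": {"type": "string"}}, '
    '"required": ["city"]}}]'
)

prompt = (
    "<|im_start|>system\n"
    f"You are a function-calling assistant. Available tools:\n{tools}\n"
    "<|im_end|>\n"
    "<|im_start|>user\n"
    "What's the weather in Tokyo right now?\n"
    "<|im_end|>\n"
    "<|im_start|>assistant\n"
)
```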

## 📊 Performance Benchmark

<div align="center">
  <img src="./nova_benchmark.jpg" alt="Nova-LFM Benchmark Chart" width="100%" />
</div>

> **Note:** The "Blind Test" metric (58%) represents the model's raw semantic accuracy without any tool definitions provided (Zero-Shot). The "Syntax Reliability" (97%) measures the model's ability to generate valid, crash-free JSON structure, which matches GPT-4o class performance.

---

## 📚 Dataset
This model was trained on **[NovachronoAI/Nova-Synapse-Function-Calling](https://huggingface.co/datasets/NovachronoAI/Nova-Synapse-Function-Calling)**.

* **Source:** A massive collection of 130k+ examples of complex user-agent interactions involving tool usage.
* **Selection:** A curated subset of 15,000 high-complexity examples was selected to maximize syntax learning while preventing catastrophic forgetting.
* **Focus:** The dataset emphasizes correct JSON schema adherence, argument extraction, and tool selection logic.
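
For intuition, here is a hypothetical record in the spirit described above; the field names and `<tool_call>` wrapper are illustrative, not copied from the dataset:

```python
# Illustrative only: the real dataset's schema and formatting may differ.
example = {
    "messages": [
        {"role": "user",
         "content": "Book a table for two at 7pm tonight."},
        {"role": "assistant",
         "content": '<tool_call>\n'
                    '{"name": "book_table", "arguments": '
                    '{"party_size": 2, "time": "19:00"}}\n'
                    '</tool_call>'},
    ]
}
```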

---

## 💻 Quick Start (Inference)

### 1. Using Transformers (via Unsloth)
You need recent versions of the `transformers` and `unsloth` libraries to run Liquid architectures.

```python
from unsloth import FastLanguageModel
import torch

# Load the model
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "NovachronoAI/LFM2.5-1.2B-Nova-Function-Calling-Full", # or use the GGUF repo
    max_seq_length = 4096,
    dtype = None,
    load_in_4bit = True,
)
FastLanguageModel.for_inference(model)

# Define the Prompt (ChatML Format)
prompt = """<|im_start|>user
I need to calculate the area of a circle with a radius of 5.
<|im_end|>
<|im_start|>assistant
"""

# Generate
inputs = tokenizer([prompt], return_tensors = "pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens = 128, use_cache = True)
print(tokenizer.batch_decode(outputs)[0].split("<|im_start|>assistant")[-1])

```

Expected output:

```text
<tool_call>
{"name": "calculate_circle_area", "arguments": {"radius": 5}}
</tool_call>
```
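
To act on the model's answer, you can parse the `<tool_call>` block out of the generation and dispatch it to a local function. A minimal sketch, assuming the wrapper tags shown above (the `TOOLS` registry is a hypothetical example):

```python
import json
import math
import re

# Hypothetical registry mapping tool names to Python callables.
TOOLS = {
    "calculate_circle_area": lambda radius: math.pi * radius ** 2,
}

def dispatch(generation: str):
    """Extract the first <tool_call> payload and invoke the matching tool."""
    match = re.search(r"<tool_call>\s*(\{.*?\})\s*</tool_call>", generation, re.DOTALL)
    if match is None:
        return None  # the model answered in plain text
    call = json.loads(match.group(1))
    return TOOLS[call["name"]](**call["arguments"])

print(dispatch('<tool_call>\n{"name": "calculate_circle_area", "arguments": {"radius": 5}}\n</tool_call>'))
# 78.53981633974483
```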

## 📥 Download GGUF (Quantized)
Thanks to **[mradermacher](https://huggingface.co/mradermacher)**, this model is available in high-performance GGUF formats for local inference with llama.cpp, Ollama, or LM Studio; a loading sketch follows the table below.

| Version | Description | Recommended For | Link |
| :--- | :--- | :--- | :--- |
| **Standard GGUF** | Traditional static quantization. | General testing & broad compatibility. | [**Download**](https://huggingface.co/mradermacher/LFM2.5-1.2B-Nova-Function-Calling-GGUF) |
| **Imatrix GGUF** | **(Best Quality)** Importance Matrix tuned. Higher accuracy at small sizes. | **Low VRAM** devices (Android/Pi) or max quality needs. | [**Download**](https://huggingface.co/mradermacher/LFM2.5-1.2B-Nova-Function-Calling-i1-GGUF) |
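
As a quick start for the GGUF builds, here is a minimal llama-cpp-python sketch; the quant filename pattern is an assumption, so check the repo's file list for exact names:

```python
from llama_cpp import Llama

# Download a quant directly from the Hub (the filename glob is an assumption;
# pick an actual file from the GGUF repo's listing).
llm = Llama.from_pretrained(
    repo_id="mradermacher/LFM2.5-1.2B-Nova-Function-Calling-GGUF",
    filename="*Q4_K_M.gguf",
    n_ctx=4096,
)

out = llm(
    "<|im_start|>user\nCalculate the area of a circle with radius 5.\n<|im_end|>\n<|im_start|>assistant\n",
    max_tokens=128,
    stop=["<|im_end|>"],
)
print(out["choices"][0]["text"])
```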

### ⚙️ Training Details
| Parameter | Value |
|---|---|
| Base Model | LiquidAI/LFM2.5-1.2B-Instruct |
| Framework | Unsloth + Hugging Face TRL |
| Hardware | NVIDIA Tesla T4 (Kaggle) |
| Epochs | ~2 (600 Steps) |
| Learning Rate | 2e-4 |
| Scheduler | Linear |
| Quantization | 4-bit (QLoRA) |
### 📈 Training Trajectory
The model adapted rapidly to the JSON syntax, with training loss dropping from a near-random 11.68 to a final 2.63:

* **Start:** loss 11.68 (step 10)
* **Convergence:** loss ~3.0 (step 160)
* **Final:** loss 2.63 (step 600)
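
A hedged reconstruction of the training setup implied by the table above, using Unsloth with TRL's `SFTTrainer`; the batch sizes, LoRA rank, target modules, and dataset column layout are assumptions, and only the hyperparameters in the table come from this card:

```python
from unsloth import FastLanguageModel
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="LiquidAI/LFM2.5-1.2B-Instruct",
    max_seq_length=4096,
    load_in_4bit=True,  # 4-bit QLoRA, per the table
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,             # assumption: LoRA rank is not stated in the card
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],  # assumption; LFM2 module names may differ
)

dataset = load_dataset("NovachronoAI/Nova-Synapse-Function-Calling", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    args=SFTConfig(
        max_steps=600,                  # ~2 epochs on the 15k subset
        learning_rate=2e-4,
        lr_scheduler_type="linear",
        per_device_train_batch_size=2,  # assumption for a Tesla T4
        gradient_accumulation_steps=8,  # assumption
        output_dir="outputs",
    ),
)
trainer.train()
```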

## 📜 License
This model is fine-tuned from LiquidAI/LFM2.5-1.2B-Instruct. Please refer to the original Liquid AI license terms for commercial use. The fine-tuning dataset and adapters are released under Apache 2.0.
<div align="center">
Built with ❤️ by <b>NovachronoAI</b> using <a href="https://github.com/unslothai/unsloth">Unsloth</a>
</div>