---
library_name: transformers
license: other
license_name: lfm1.0
license_link: LICENSE
language:
- en
- ar
- zh
- fr
- de
- ja
- ko
- es
pipeline_tag: text-generation
tags:
- liquid
- lfm2.5
- edge
base_model: LiquidAI/LFM2.5-1.2B-Base
---

<div align="center">
<img
  src="https://cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/2b08LKpev0DNEk6DlnWkY.png"
  alt="Liquid AI"
  style="width: 100%; max-width: 100%; height: auto; display: inline-block; margin-bottom: 0.5em; margin-top: 0.5em;"
/>
<div style="display: flex; justify-content: center; gap: 0.5em; margin-bottom: 1em;">
  <a href="https://playground.liquid.ai/"><strong>Try LFM</strong></a> •
  <a href="https://docs.liquid.ai/lfm"><strong>Documentation</strong></a> •
  <a href="https://leap.liquid.ai/"><strong>LEAP</strong></a>
</div>
</div>

# LFM2.5-1.2B-Thinking

LFM2.5 is a new family of hybrid models designed for **on-device deployment**. It builds on the LFM2 architecture with extended pre-training and reinforcement learning.

- **Best-in-class performance**: A 1.2B model that rivals much larger models, bringing high-quality AI to your pocket.
- **Fast edge inference**: 239 tok/s decode on an AMD CPU and 82 tok/s on a mobile NPU. Runs in under 1 GB of memory with day-one support for llama.cpp, MLX, and vLLM.
- **Scaled training**: Extended pre-training from 10T to 28T tokens, plus large-scale multi-stage reinforcement learning.

![LFM2.5-1.2B - Benchmarks-Light](https://cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/KfNudLXnOZxAhlLp_1QVo.png)

Find more information about LFM2.5 in our [blog post](https://www.liquid.ai/blog/introducing-lfm2-5-the-next-generation-of-on-device-ai).

## 🗒️ Model Details

| Model | Parameters | Description |
|-------|------------|-------------|
| [LFM2.5-1.2B-Base](https://huggingface.co/LiquidAI/LFM2.5-1.2B-Base) | 1.2B | Pre-trained base model for fine-tuning |
| [LFM2.5-1.2B-Instruct](https://huggingface.co/LiquidAI/LFM2.5-1.2B-Instruct) | 1.2B | General-purpose instruction-tuned model |
| [**LFM2.5-1.2B-Thinking**](https://huggingface.co/LiquidAI/LFM2.5-1.2B-Thinking) | 1.2B | General-purpose reasoning model |
| [LFM2.5-1.2B-JP](https://huggingface.co/LiquidAI/LFM2.5-1.2B-JP) | 1.2B | Japanese-optimized chat model |
| [LFM2.5-VL-1.6B](https://huggingface.co/LiquidAI/LFM2.5-VL-1.6B) | 1.6B | Vision-language model with fast inference |
| [LFM2.5-Audio-1.5B](https://huggingface.co/LiquidAI/LFM2.5-Audio-1.5B) | 1.5B | Audio-language model for speech and text I/O |

LFM2.5-1.2B-Thinking is a general-purpose text-only model with the following features:

- **Number of parameters**: 1.17B
- **Number of layers**: 16 (10 double-gated LIV convolution blocks + 6 GQA blocks)
- **Training budget**: 28T tokens
- **Context length**: 32,768 tokens
- **Vocabulary size**: 65,536
- **Languages**: English, Arabic, Chinese, French, German, Japanese, Korean, Spanish
- **Generation parameters** (see the sketch after this list):
  - `temperature: 0.05`
  - `top_k: 50`
  - `repetition_penalty: 1.05`

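As a minimal sketch, these defaults can be expressed as a Transformers `GenerationConfig` (illustrative only; any inference stack that exposes these sampling knobs works the same way):

```python
from transformers import GenerationConfig

# Recommended sampling defaults for LFM2.5-1.2B-Thinking.
generation_config = GenerationConfig(
    do_sample=True,
    temperature=0.05,
    top_k=50,
    repetition_penalty=1.05,
)
```

You can pass this object directly as `model.generate(..., generation_config=generation_config)`.
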
| Model | Description |
|-------|-------------|
| [**LFM2.5-1.2B-Thinking**](https://huggingface.co/LiquidAI/LFM2.5-1.2B-Thinking) | Original model checkpoint in native format. Best for fine-tuning or inference with Transformers and vLLM. |
| [LFM2.5-1.2B-Thinking-GGUF](https://huggingface.co/LiquidAI/LFM2.5-1.2B-Thinking-GGUF) | Quantized format for llama.cpp and compatible tools. Optimized for CPU inference and local deployment with reduced memory usage. |
| [LFM2.5-1.2B-Thinking-ONNX](https://huggingface.co/LiquidAI/LFM2.5-1.2B-Thinking-ONNX) | ONNX Runtime format for cross-platform deployment. Enables hardware-accelerated inference across diverse environments (cloud, edge, mobile). |
| [LFM2.5-1.2B-Thinking-MLX](https://huggingface.co/LiquidAI/LFM2.5-1.2B-Thinking-MLX-8bit) | MLX format for Apple Silicon. Optimized for fast inference on Mac devices using the MLX framework. |

We recommend this model for agentic tasks, data extraction, and RAG. It is not recommended for knowledge-intensive tasks or programming.

### Chat Template

LFM2.5 uses a ChatML-like format. See the [Chat Template documentation](https://docs.liquid.ai/lfm/key-concepts/chat-template) for details. Example:

```
<|startoftext|><|im_start|>system
You are a helpful assistant trained by Liquid AI.<|im_end|>
<|im_start|>user
What is C. elegans?<|im_end|>
<|im_start|>assistant
```

You can use [`tokenizer.apply_chat_template()`](https://huggingface.co/docs/transformers/en/chat_templating#using-applychattemplate) to format your messages automatically.

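For instance, a minimal sketch of rendering the template above with Transformers (printing the prompt as a string rather than tokenizing it):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("LiquidAI/LFM2.5-1.2B-Thinking")

messages = [
    {"role": "system", "content": "You are a helpful assistant trained by Liquid AI."},
    {"role": "user", "content": "What is C. elegans?"},
]

# tokenize=False returns the rendered prompt string so you can inspect it;
# add_generation_prompt=True appends the opening assistant turn.
prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)
print(prompt)
```
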
### Tool Use

LFM2.5 supports function calling as follows:

1. **Function definition**: We recommend providing the list of tools as a JSON object in the system prompt. You can also use the [`tokenizer.apply_chat_template()`](https://huggingface.co/docs/transformers/en/chat_extras#passing-tools) function with tools.
2. **Function call**: By default, LFM2.5 writes Pythonic function calls (a Python list between the `<|tool_call_start|>` and `<|tool_call_end|>` special tokens) as the assistant's answer. You can override this behavior by asking the model to output JSON function calls in the system prompt.
3. **Function execution**: The function call is executed, and the result is returned in a message with the "tool" role.
4. **Final answer**: LFM2.5 interprets the outcome of the function call to address the original user prompt in plain text.

See the [Tool Use documentation](https://docs.liquid.ai/lfm/key-concepts/tool-use) for the full guide. Example:

```
<|startoftext|><|im_start|>system
List of tools: [{"name": "get_candidate_status", "description": "Retrieves the current status of a candidate in the recruitment process", "parameters": {"type": "object", "properties": {"candidate_id": {"type": "string", "description": "Unique identifier for the candidate"}}, "required": ["candidate_id"]}}]<|im_end|>
<|im_start|>user
What is the current status of candidate ID 12345?<|im_end|>
<|im_start|>assistant
<|tool_call_start|>[get_candidate_status(candidate_id="12345")]<|tool_call_end|>Checking the current status of candidate ID 12345.<|im_end|>
<|im_start|>tool
[{"candidate_id": "12345", "status": "Interview Scheduled", "position": "Clinical Research Associate", "date": "2023-11-20"}]<|im_end|>
<|im_start|>assistant
The candidate with ID 12345 is currently in the "Interview Scheduled" stage for the position of Clinical Research Associate, with an interview date set for 2023-11-20.<|im_end|>
```

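Equivalently, you can let Transformers build the system prompt from a Python function. A minimal sketch, assuming a recent Transformers version with tool-calling template support (the `get_candidate_status` stub below is a hypothetical tool; Transformers derives its JSON schema from the signature and Google-style docstring):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("LiquidAI/LFM2.5-1.2B-Thinking")

def get_candidate_status(candidate_id: str):
    """Retrieves the current status of a candidate in the recruitment process.

    Args:
        candidate_id: Unique identifier for the candidate
    """
    return {"candidate_id": candidate_id, "status": "Interview Scheduled"}

messages = [
    {"role": "user", "content": "What is the current status of candidate ID 12345?"},
]

# The tools argument injects the function definition into the system prompt.
prompt = tokenizer.apply_chat_template(
    messages,
    tools=[get_candidate_status],
    tokenize=False,
    add_generation_prompt=True,
)
print(prompt)
```
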
## 🏃 Inference

LFM2.5 is supported by many inference frameworks. See the [Inference documentation](https://docs.liquid.ai/lfm/inference/transformers) for the full list.

| Name | Description | Docs | Notebook |
|------|-------------|------|:--------:|
| [Transformers](https://github.com/huggingface/transformers) | Simple inference with direct access to model internals. | <a href="https://docs.liquid.ai/lfm/inference/transformers">Link</a> | <a href="https://colab.research.google.com/drive/1_q3jQ6LtyiuPzFZv7Vw8xSfPU5FwkKZY?usp=sharing"><img src="https://cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/vlOyMEjwHa_b_LXysEu2E.png" width="110" alt="Colab link"></a> |
| [vLLM](https://github.com/vllm-project/vllm) | High-throughput production deployments with GPU. | <a href="https://docs.liquid.ai/lfm/inference/vllm">Link</a> | <a href="https://colab.research.google.com/drive/1VfyscuHP8A3we_YpnzuabYJzr5ju0Mit?usp=sharing"><img src="https://cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/vlOyMEjwHa_b_LXysEu2E.png" width="110" alt="Colab link"></a> |
| [llama.cpp](https://github.com/ggml-org/llama.cpp) | Cross-platform inference with CPU offloading. | <a href="https://docs.liquid.ai/lfm/inference/llama-cpp">Link</a> | <a href="https://colab.research.google.com/drive/1ohLl3w47OQZA4ELo46i5E4Z6oGWBAyo8?usp=sharing"><img src="https://cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/vlOyMEjwHa_b_LXysEu2E.png" width="110" alt="Colab link"></a> |
| [MLX](https://github.com/ml-explore/mlx) | Apple's machine learning framework optimized for Apple Silicon. | <a href="https://docs.liquid.ai/lfm/inference/mlx">Link</a> | — |
| [LM Studio](https://lmstudio.ai/) | Desktop application for running LLMs locally. | <a href="https://docs.liquid.ai/lfm/inference/lm-studio">Link</a> | — |

Here's a quick-start example with Transformers:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

model_id = "LiquidAI/LFM2.5-1.2B-Thinking"
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    dtype="bfloat16",
    # attn_implementation="flash_attention_2",  # <- uncomment on compatible GPU
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

prompt = "What is C. elegans?"

input_ids = tokenizer.apply_chat_template(
    [{"role": "user", "content": prompt}],
    add_generation_prompt=True,
    return_tensors="pt",
    tokenize=True,
).to(model.device)

output = model.generate(
    input_ids,
    do_sample=True,
    temperature=0.1,
    top_k=50,
    top_p=0.1,
    repetition_penalty=1.05,
    max_new_tokens=512,
    streamer=streamer,
)
```

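For GPU serving, here is a minimal offline-inference sketch with vLLM (assuming a vLLM build that includes LFM2.5 support; the sampling values mirror the recommended defaults above):

```python
from vllm import LLM, SamplingParams

llm = LLM(model="LiquidAI/LFM2.5-1.2B-Thinking")
params = SamplingParams(
    temperature=0.05,
    top_k=50,
    repetition_penalty=1.05,
    max_tokens=512,
)

# llm.chat() applies the model's chat template automatically.
outputs = llm.chat(
    [{"role": "user", "content": "What is C. elegans?"}],
    params,
)
print(outputs[0].outputs[0].text)
```
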
## 🔧 Fine-Tuning

We recommend fine-tuning LFM2.5 for your specific use case to achieve the best results.

| Name | Description | Docs | Notebook |
|------|-------------|------|----------|
| CPT ([Unsloth](https://github.com/unslothai/unsloth)) | Continued Pre-Training using Unsloth for text completion. | <a href="https://docs.liquid.ai/lfm/fine-tuning/unsloth">Link</a> | <a href="https://colab.research.google.com/drive/10fm7eNMezs-DSn36mF7vAsNYlOsx9YZO?usp=sharing"><img src="https://cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/vlOyMEjwHa_b_LXysEu2E.png" width="110" alt="Colab link"></a> |
| CPT ([Unsloth](https://github.com/unslothai/unsloth)) | Continued Pre-Training using Unsloth for translation. | <a href="https://docs.liquid.ai/lfm/fine-tuning/unsloth">Link</a> | <a href="https://colab.research.google.com/drive/1gaP8yTle2_v35Um8Gpu9239fqbU7UgY8?usp=sharing"><img src="https://cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/vlOyMEjwHa_b_LXysEu2E.png" width="110" alt="Colab link"></a> |
| SFT ([Unsloth](https://github.com/unslothai/unsloth)) | Supervised Fine-Tuning with LoRA using Unsloth. | <a href="https://docs.liquid.ai/lfm/fine-tuning/unsloth">Link</a> | <a href="https://colab.research.google.com/drive/1vGRg4ksRj__6OLvXkHhvji_Pamv801Ss?usp=sharing"><img src="https://cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/vlOyMEjwHa_b_LXysEu2E.png" width="110" alt="Colab link"></a> |
| SFT ([TRL](https://github.com/huggingface/trl)) | Supervised Fine-Tuning with LoRA using TRL. | <a href="https://docs.liquid.ai/lfm/fine-tuning/trl">Link</a> | <a href="https://colab.research.google.com/drive/1j5Hk_SyBb2soUsuhU0eIEA9GwLNRnElF?usp=sharing"><img src="https://cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/vlOyMEjwHa_b_LXysEu2E.png" width="110" alt="Colab link"></a> |
| DPO ([TRL](https://github.com/huggingface/trl)) | Direct Preference Optimization with LoRA using TRL. | <a href="https://docs.liquid.ai/lfm/fine-tuning/trl">Link</a> | <a href="https://colab.research.google.com/drive/1MQdsPxFHeZweGsNx4RH7Ia8lG8PiGE1t?usp=sharing"><img src="https://cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/vlOyMEjwHa_b_LXysEu2E.png" width="110" alt="Colab link"></a> |
| GRPO ([Unsloth](https://github.com/unslothai/unsloth)) | GRPO with LoRA using Unsloth. | <a href="https://docs.liquid.ai/lfm/fine-tuning/unsloth">Link</a> | <a href="https://colab.research.google.com/drive/1mIikXFaGvcW4vXOZXLbVTxfBRw_XsXa5?usp=sharing"><img src="https://cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/vlOyMEjwHa_b_LXysEu2E.png" width="110" alt="Colab link"></a> |

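As a starting point, here is a minimal LoRA SFT sketch with TRL, assuming a recent TRL version (illustrative only: the dataset and LoRA hyperparameters below are assumptions, not Liquid AI's training recipe):

```python
from datasets import load_dataset
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

# Swap in your own chat-formatted dataset; this one is purely illustrative.
dataset = load_dataset("trl-lib/Capybara", split="train")

trainer = SFTTrainer(
    model="LiquidAI/LFM2.5-1.2B-Thinking",
    train_dataset=dataset,
    args=SFTConfig(output_dir="lfm2.5-1.2b-thinking-sft"),
    peft_config=LoraConfig(r=16, lora_alpha=32, target_modules="all-linear"),
)
trainer.train()
```
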
## 📊 Performance

### Benchmarks

We compared LFM2.5-1.2B-Thinking with relevant sub-2B models on a diverse suite of benchmarks.

| Model | GPQA Diamond | MMLU-Pro | IFEval | IFBench | Multi-IF | GSM8K | MATH-500 | AIME25 | BFCLv3 |
| -------------------------- | ----------------- | ----------------- | ----------------- | ----------------- | ----------------- | ----------------- | ----------------- | ----------------- | ----------------- |
| **LFM2.5-1.2B-Thinking** | 37.86<br>(± 0.83) | 49.65<br>(± 0.18) | 88.42<br>(± 0.35) | 44.85<br>(± 0.73) | 69.33<br>(± 0.09) | 85.60<br>(± 0.00) | 87.96<br>(± 0.72) | 31.73<br>(± 1.81) | 56.97<br>(± 0.30) |
| Qwen3-1.7B (thinking mode) | 36.93<br>(± 2.07) | 56.68<br>(± 1.29) | 71.65<br>(± 0.13) | 25.88<br>(± 0.30) | 60.33<br>(± 0.02) | 85.60<br>(± 1.13) | 81.92<br>(± 2.99) | 36.27<br>(± 1.24) | 55.41<br>(± 0.04) |
| LFM2.5-1.2B-Instruct | 38.89 | 44.35 | 86.23 | 47.33 | 60.98 | 64.52 | 63.20 | 14.00 | 49.12 |
| Qwen3-1.7B (instruct mode) | 34.85 | 42.91 | 73.68 | 21.33 | 56.48 | 33.66 | 70.40 | 9.33 | 46.30 |
| Granite-4.0-H-1B | 24.34 | 27.64 | 80.08 | 24.93 | 47.56 | 69.60 | 47.20 | 1.00 | 50.69 |
| Granite-4.0-1B | 24.24 | 33.53 | 79.61 | 21.00 | 43.65 | 73.42 | 44.80 | 3.33 | 52.43 |
| Gemma 3 1B IT | 24.24 | 14.04 | 63.25 | 20.47 | 44.31 | 42.15 | 45.20 | 1.00 | 16.64 |
| Llama 3.2 1B Instruct | 16.57 | 20.80 | 52.37 | 15.93 | 30.16 | 39.04 | 23.40 | 0.33 | 21.44 |

GPQA, MMLU-Pro, IFBench, and AIME25 follow [ArtificialAnalysis's methodology](https://artificialanalysis.ai/methodology/intelligence-benchmarking). For IFEval and Multi-IF, we report the average score across strict and loose prompt- and instruction-level accuracies. For BFCLv3, we report the final weighted average score with a custom Liquid handler to support our tool use template.

Based on the same methodology, we report the average score and standard deviation across five runs with `temperature=0.6` for thinking models. For instruct models, we report scores using greedy decoding.

### Response Length

Compared with Qwen3-1.7B (thinking mode), LFM2.5-1.2B-Thinking requires fewer output tokens while delivering higher overall performance.

![LFM2.5-1.2B - Average Response Length](https://cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/Gcq_HUYLVC779xOuut2EI.png)

### Inference Speed

LFM2.5-1.2B-Thinking delivers extremely fast CPU inference with a low memory profile compared to similarly sized models.

![LFM2.5-1.2B - Inference Performance](https://cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/4ODY8nGws22vICfcMTxNx.png)

In addition, we are partnering with AMD, Qualcomm, Nexa AI, and FastFlowLM to bring the LFM2.5 family to NPUs. These optimized models are available through our partners, enabling highly efficient on-device inference.

We report the following numbers with a 1K-token prefill and 100 decode tokens:

| Device | Inference | Framework | Model | Prefill (tok/s) | Decode (tok/s) | Memory |
| ---------------------------------------------------- | --------- | ---------------- | -------------------- | --------------- | -------------- | ------- |
| AMD Ryzen AI 395+ | NPU | FastFlowLM | LFM2.5-1.2B-Thinking | 1373 | 60 | 1700 MB |
| AMD Ryzen AI 9 HX 370 | NPU | FastFlowLM | LFM2.5-1.2B-Thinking | 1256 | 57 | 1700 MB |
| AMD Ryzen AI 9 HX 370 | CPU | llama.cpp (Q4_0) | LFM2.5-1.2B-Thinking | 2975 | 116 | 856 MB |
| Qualcomm Snapdragon® X Elite | NPU | NexaML | LFM2.5-1.2B-Thinking | 2591 | 63 | 0.9 GB |
| Qualcomm Snapdragon® Gen4 (ROG Phone 9 Pro) | NPU | NexaML | LFM2.5-1.2B-Thinking | 4391 | 82 | 0.9 GB |
| Qualcomm Dragonwing IQ9 (IQ-9075) (IoT) | NPU | NexaML | LFM2.5-1.2B-Thinking | 2143 | 53 | 0.9 GB |
| Qualcomm Snapdragon® Gen4 (Samsung Galaxy S25 Ultra) | CPU | llama.cpp (Q4_0) | LFM2.5-1.2B-Thinking | 335 | 70 | 719 MB |

These capabilities unlock new deployment scenarios across various devices, including vehicles, mobile devices, laptops, IoT devices, and embedded systems.

## Contact

For enterprise solutions and edge deployment, contact [sales@liquid.ai](mailto:sales@liquid.ai).

## Citation

```bibtex
@article{liquidAI2026thinking,
  title   = {LFM2.5-1.2B-Thinking: On-Device Reasoning Under 1GB},
  author  = {Liquid AI},
  journal = {Liquid AI Blog},
  year    = {2026},
  note    = {www.liquid.ai/blog/lfm2-5-1-2b-thinking-on-device-reasoning-under-1gb}
}
```

```bibtex
@article{liquidai2025lfm2,
  title   = {LFM2 Technical Report},
  author  = {Liquid AI},
  journal = {arXiv preprint arXiv:2511.23404},
  year    = {2025}
}
```