---
library_name: transformers
license: other
license_name: lfm1.0
license_link: LICENSE
language:
- en
- ar
- zh
- fr
- de
- ja
- ko
- es
- pt
pipeline_tag: text-generation
tags:
- liquid
- lfm2.5
- edge
---
# LFM2.5-350M-Base

LFM2.5 is a new family of hybrid models designed for **on-device deployment**. It builds on the LFM2 architecture with extended pre-training and reinforcement learning.

Find more information about LFM2.5-350M in our [blog post](https://www.liquid.ai/blog/lfm2-5-350m-no-size-left-behind).

## 🗒️ Model Details

| Model | Parameters | Description |
|-------|------------|-------------|
| [**LFM2.5-350M-Base**](https://huggingface.co/LiquidAI/LFM2.5-350M-Base) | 350M | Pre-trained base model for fine-tuning |
| [LFM2.5-350M](https://huggingface.co/LiquidAI/LFM2.5-350M) | 350M | General-purpose instruction-tuned model |

LFM2.5-350M is a general-purpose text-only model with the following features:

- **Number of parameters**: 350M
- **Number of layers**: 16 (10 double-gated LIV convolution blocks + 6 GQA blocks)
- **Training budget**: 28T tokens
- **Context length**: 32,768 tokens
- **Vocabulary size**: 65,536
- **Knowledge cutoff**: Mid-2024
- **Languages**: English, Arabic, Chinese, French, German, Japanese, Korean, Portuguese, Spanish

This pre-trained checkpoint is only recommended for tasks that require heavy fine-tuning, like language-specific (e.g., Japanese) or domain-specific (e.g., medical) assistants, training on proprietary data, or experimenting with novel post-training approaches.

## 🏃 Inference

LFM2.5 is supported by many inference frameworks. See the [Inference documentation](https://docs.liquid.ai/lfm/inference/transformers) for the full list.

| Name | Description | Docs | Notebook |
|------|-------------|------|:--------:|
| [Transformers](https://github.com/huggingface/transformers) | Simple inference with direct access to model internals. | Link | Link |
| [vLLM](https://github.com/vllm-project/vllm) | High-throughput production deployments with GPU. | Link | Link |
| [llama.cpp](https://github.com/ggml-org/llama.cpp) | Cross-platform inference with CPU offloading. | Link | Link |
| [MLX](https://github.com/ml-explore/mlx) | Apple's machine learning framework optimized for Apple Silicon. | Link | — |
| [LM Studio](https://lmstudio.ai/) | Desktop application for running LLMs locally. | Link | — |
Here's a quick start example with Transformers:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

# Load model and tokenizer
model_id = "LiquidAI/LFM2.5-350M-Base"
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    dtype="bfloat16",
    # attn_implementation="flash_attention_2" <- uncomment on compatible GPU
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

# Tokenize the prompt with the chat template
prompt = "What is C. elegans?"
input_ids = tokenizer.apply_chat_template(
    [{"role": "user", "content": prompt}],
    add_generation_prompt=True,
    return_tensors="pt",
    tokenize=True,
).to(model.device)

# Generate and stream the answer
output = model.generate(
    input_ids,
    do_sample=True,
    temperature=0.1,
    top_k=50,
    repetition_penalty=1.05,
    max_new_tokens=512,
    streamer=streamer,
)
```
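As a rough sizing guide for on-device use, the 350M parameter count above implies the following approximate weight-only footprints. This is a back-of-envelope sketch: the precision labels and byte widths are generic assumptions, not tied to any particular runtime, and real memory use adds activations and KV cache on top.

```python
# Back-of-envelope weight memory for a 350M-parameter model (1 GB = 1e9 bytes).
PARAMS = 350_000_000  # parameter count from the model details above

# Assumed byte widths per parameter for common precisions.
BYTES_PER_PARAM = {"fp32": 4.0, "bf16": 2.0, "int8": 1.0, "q4": 0.5}

def weight_memory_gb(params: int, precision: str) -> float:
    """Approximate storage needed for the weights alone, in gigabytes."""
    return params * BYTES_PER_PARAM[precision] / 1e9

for precision in BYTES_PER_PARAM:
    print(f"{precision}: ~{weight_memory_gb(PARAMS, precision):.2f} GB")
```

At bfloat16 (as in the quick start above) the weights alone come to roughly 0.7 GB, which is what makes this size practical on phones and laptops.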
## 🔧 Fine-Tuning
We recommend fine-tuning LFM2.5 for your specific use case to achieve the best results.
| Name | Description | Docs | Notebook |
|------|-------------|------|----------|
| CPT ([Unsloth](https://github.com/unslothai/unsloth)) | Continued Pre-Training using Unsloth for text completion. | Link | Link |
| CPT ([Unsloth](https://github.com/unslothai/unsloth)) | Continued Pre-Training using Unsloth for translation. | Link | Link |
| SFT ([Unsloth](https://github.com/unslothai/unsloth)) | Supervised Fine-Tuning with LoRA using Unsloth. | Link | Link |
| SFT ([TRL](https://github.com/huggingface/trl)) | Supervised Fine-Tuning with LoRA using TRL. | Link | Link |
| DPO ([TRL](https://github.com/huggingface/trl)) | Direct Preference Optimization with LoRA using TRL. | Link | Link |
| GRPO ([Unsloth](https://github.com/unslothai/unsloth)) | GRPO with LoRA using Unsloth. | Link | Link |
| GRPO ([TRL](https://github.com/huggingface/trl)) | GRPO with LoRA using TRL. | Link | Link |
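Most of the recipes above fine-tune with LoRA. As a library-agnostic, dependency-free sketch of the idea (toy sizes, plain Python, not Liquid AI or Unsloth/TRL code): instead of updating a full weight matrix `W` of shape n×m, LoRA trains two small matrices `A` (n×r) and `B` (r×m) and adds their scaled product to the frozen `W`, cutting trainable parameters from `n*m` to `r*(n+m)`.

```python
# Toy illustration of the LoRA parameterization:
#   W_eff = W + (alpha / r) * A @ B, with rank r much smaller than n and m.

def matmul(a, b):
    """Naive (n x k) @ (k x m) matrix product on nested lists."""
    k, m = len(b), len(b[0])
    return [[sum(row[t] * b[t][j] for t in range(k)) for j in range(m)] for row in a]

def lora_effective_weight(W, A, B, alpha=16, r=4):
    """Frozen base weight W plus the scaled low-rank update (alpha / r) * A @ B."""
    scale = alpha / r
    delta = matmul(A, B)
    return [[w + scale * d for w, d in zip(wr, dr)] for wr, dr in zip(W, delta)]

def trainable_params(n, m, r):
    """Parameters updated by LoRA vs. full fine-tuning for one n x m matrix."""
    return {"lora": r * (n + m), "full": n * m}

# For a hypothetical 1024 x 1024 projection at rank 8, LoRA trains
# 16,384 of 1,048,576 parameters (about 1.6%).
print(trainable_params(1024, 1024, 8))
```

The rank `r` and scale `alpha` here are illustrative defaults; the Unsloth and TRL notebooks linked above expose their own hyperparameters.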
## 📬 Contact
- Got questions or want to connect? [Join our Discord community](https://discord.com/invite/liquid-ai)
- If you are interested in custom solutions with edge deployment, please contact [our sales team](https://www.liquid.ai/contact).
## Citation
```bibtex
@article{liquidAI2026350M,
author = {Liquid AI},
title = {LFM2.5-350M: No Size Left Behind},
journal = {Liquid AI Blog},
year = {2026},
note = {www.liquid.ai/blog/lfm2-5-350m-no-size-left-behind},
}
```
```bibtex
@article{liquidai2025lfm2,
title={LFM2 Technical Report},
author={Liquid AI},
journal={arXiv preprint arXiv:2511.23404},
year={2025}
}
```