---
library_name: transformers
license: other
license_name: lfm1.0
license_link: LICENSE
language:
- en
- ja
pipeline_tag: text-generation
tags:
- liquid
- lfm2.5
- edge
base_model: LiquidAI/LFM2.5-1.2B-Base
---
| Library | Description | Link |
|---------|-------------|------|
| [vLLM](https://github.com/vllm-project/vllm) | High-throughput production deployments with GPU. | Link |
| [llama.cpp](https://github.com/ggml-org/llama.cpp) | Cross-platform inference with CPU offloading. | Link |
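As a rough illustration of the vLLM route, here is a minimal offline-inference sketch using vLLM's Python API. The sampling values mirror the `transformers` quick start below; they are a starting point, not an official vLLM recipe.

```python
# Minimal vLLM sketch (offline inference). Sampling values mirror the
# transformers quick start below; adjust them for your deployment.
from vllm import LLM, SamplingParams

llm = LLM(model="LiquidAI/LFM2.5-1.2B-JP", dtype="bfloat16")
params = SamplingParams(
    temperature=0.3,
    min_p=0.15,
    repetition_penalty=1.05,
    max_tokens=512,
)

outputs = llm.chat([{"role": "user", "content": "What is C. elegans?"}], params)
print(outputs[0].outputs[0].text)
```

For production serving, vLLM also exposes an OpenAI-compatible HTTP server via its CLI.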
Here's a quick start example with `transformers`:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer
model_id = "LiquidAI/LFM2.5-1.2B-JP"
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    dtype="bfloat16",
    # attn_implementation="flash_attention_2",  # uncomment on a compatible GPU
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

prompt = "What is C. elegans?"
input_ids = tokenizer.apply_chat_template(
    [{"role": "user", "content": prompt}],
    add_generation_prompt=True,
    return_tensors="pt",
    tokenize=True,
).to(model.device)

output = model.generate(
    input_ids,
    do_sample=True,
    temperature=0.3,
    min_p=0.15,
    repetition_penalty=1.05,
    max_new_tokens=512,
    streamer=streamer,
)
```
## Fine-Tuning
We recommend fine-tuning LFM2.5 for your specific use case to achieve the best results; a minimal LoRA sketch with TRL follows the table below.
| Name | Description | Link |
|------|-------------|------|
| SFT ([Unsloth](https://github.com/unslothai/unsloth)) | Supervised Fine-Tuning with LoRA using Unsloth. | Link |
| SFT ([TRL](https://github.com/huggingface/trl)) | Supervised Fine-Tuning with LoRA using TRL. | Link |
| DPO ([TRL](https://github.com/huggingface/trl)) | Direct Preference Optimization with LoRA using TRL. | Link |
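The sketch below shows the shape of the TRL route with a PEFT LoRA adapter. The dataset and hyperparameters are illustrative placeholders, not an official recipe; substitute your own chat-formatted data.

```python
# Minimal LoRA SFT sketch with TRL + PEFT. The dataset and hyperparameters
# below are placeholders; substitute your own chat-formatted dataset.
from datasets import load_dataset
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("trl-lib/Capybara", split="train")  # placeholder dataset

trainer = SFTTrainer(
    model="LiquidAI/LFM2.5-1.2B-JP",
    train_dataset=dataset,
    args=SFTConfig(output_dir="lfm2.5-1.2b-jp-sft"),
    peft_config=LoraConfig(r=16, lora_alpha=32, target_modules="all-linear"),
)
trainer.train()
```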
## Performance
| Model | [JMMLU](https://arxiv.org/pdf/2402.14531) | [M-IFEval (ja)](https://arxiv.org/pdf/2502.04688) | [GSM8K (ja)](https://huggingface.co/datasets/SakanaAI/gsm8k-ja-test_250-1319) |
|-------|------|----------|----------|
| **LFM2.5-1.2B-JP** | 50.7 | 58.1 | 56.0 |
| **[LFM2.5-1.2B-Instruct](https://huggingface.co/LiquidAI/LFM2.5-1.2B-Instruct)** | 47.7 | 41.8 | 46.8 |
| Qwen3-1.7B (Instruct mode) | 47.7 | 40.3 | 46.0 |
| Llama 3.2 1B Instruct | 34.0 | 24.1 | 25.2 |
| TinySwallow-1.5B-Instruct | 48.0 | 36.5 | 47.2 |
| Gemma-2-Llama-Swallow-2b-it-v0.1 | 48.1 | 33.4 | 34.4 |
| Gemma-3-1b-it | 34.5 | 26.3 | 33.6 |
| Granite-4.0-h-1b | 42.2 | 39.3 | 42.8 |
| Sarashina2.2-1b-instruct-v0.1 | 40.2 | 21.9 | 44.4 |
**Evaluation Notes**
- All results are **zero-shot** evaluations using **greedy decoding**.
- **M-IFEval (ja)** scores correspond to the **loose evaluation setting**.
- **JMMLU** was evaluated using a prompt format similar in style to the [ArtificialAnalysis methodology](https://artificialanalysis.ai/methodology/intelligence-benchmarking#multiple-choice-questions), with corresponding parsing logic. The Japanese prompt template used is shown below:
```
PROMPT_TEMPLATE = """与えられた選択問題に答えてください。回答の最後の行に「答え：{valid_options}」のように出力してください（例：「答え：X」）。
{question}
{options}"""
```
In English: "Answer the given multiple-choice question. Output your answer on the last line in the form 「答え：{valid_options}」 (e.g., 「答え：X」)."
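The parsing logic itself is not published here; a plausible sketch extracts the option letter following 「答え：」 on the final line of the completion:

```python
# Hypothetical answer parser for the template above: pull the option letter
# following 「答え：」 (or "答え:") from the last non-empty line of the output.
import re

def parse_answer(completion: str, valid_options: str = "ABCD") -> str | None:
    lines = [line for line in completion.strip().splitlines() if line.strip()]
    if not lines:
        return None
    match = re.search(rf"答え[：:]\s*([{valid_options}])", lines[-1])
    return match.group(1) if match else None

print(parse_answer("考え方を説明します。\n答え：B"))  # -> "B"
```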
## Contact
For enterprise solutions and edge deployment, contact [sales@liquid.ai](mailto:sales@liquid.ai).
## Citation
```bibtex
@article{liquidai2025lfm2,
  title={LFM2 Technical Report},
  author={Liquid AI},
  journal={arXiv preprint arXiv:2511.23404},
  year={2025}
}
```