|
|
--- |
|
|
library_name: transformers |
|
|
license: other |
|
|
license_name: lfm1.0 |
|
|
license_link: LICENSE |
|
|
language: |
|
|
- en |
|
|
- ja |
|
|
pipeline_tag: text-generation |
|
|
tags: |
|
|
- liquid |
|
|
- lfm2.5 |
|
|
- edge |
|
|
base_model: LiquidAI/LFM2.5-1.2B-Base |
|
|
--- |
|
|
|
|
|
<div align="center"> |
|
|
<img |
|
|
src="https://cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/2b08LKpev0DNEk6DlnWkY.png" |
|
|
alt="Liquid AI" |
|
|
style="width: 100%; max-width: 100%; height: auto; display: inline-block; margin-bottom: 0.5em; margin-top: 0.5em;" |
|
|
/> |
|
|
<div style="display: flex; justify-content: center; gap: 0.5em; margin-bottom: 1em;"> |
|
|
<a href="https://playground.liquid.ai/"><strong>Try LFM</strong></a> โข |
|
|
<a href="https://docs.liquid.ai/lfm"><strong>Documentation</strong></a> โข |
|
|
<a href="https://leap.liquid.ai/"><strong>LEAP</strong></a> |
|
|
</div> |
|
|
</div> |
|
|
|
|
|
# LFM2.5-1.2B-JP |
|
|
|
|
|
LFM2.5-1.2B-JP is a chat model specifically optimized for Japanese. While LFM2 already supported Japanese as one of eight languages, LFM2.5-JP pushes the state of the art in Japanese knowledge and instruction following at its scale. This model is ideal for developers building Japanese-language applications where cultural and linguistic nuance matters.
|
|
|
|
|
Find more information about LFM2.5 in our [blog post](https://www.liquid.ai/blog/introducing-lfm2-5-the-next-generation-of-on-device-ai). |
|
|
|
|
|
|
|
|
## 🏃 Inference
|
|
|
|
|
LFM2.5 is supported by many inference frameworks. See the [Inference documentation](https://docs.liquid.ai/lfm/inference/transformers) for the full list. |
|
|
|
|
|
| Name | Description | Docs | Notebook | |
|
|
|------|-------------|------|----------| |
|
|
| [Transformers](https://github.com/huggingface/transformers) | Simple inference with direct access to model internals. | <a href="https://docs.liquid.ai/lfm/inference/transformers">Link</a> | <a href="https://colab.research.google.com/drive/1_q3jQ6LtyiuPzFZv7Vw8xSfPU5FwkKZY?usp=sharing"><img src="https://cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/vlOyMEjwHa_b_LXysEu2E.png" width="110" alt="Colab link"></a> | |
|
|
| [vLLM](https://github.com/vllm-project/vllm) | High-throughput production deployments with GPU. | <a href="https://docs.liquid.ai/lfm/inference/vllm">Link</a> | <a href="https://colab.research.google.com/drive/1VfyscuHP8A3we_YpnzuabYJzr5ju0Mit?usp=sharing"><img src="https://cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/vlOyMEjwHa_b_LXysEu2E.png" width="110" alt="Colab link"></a> | |
|
|
| [llama.cpp](https://github.com/ggml-org/llama.cpp) | Cross-platform inference with CPU offloading. | <a href="https://docs.liquid.ai/lfm/inference/llama-cpp">Link</a> | <a href="https://colab.research.google.com/drive/1ohLl3w47OQZA4ELo46i5E4Z6oGWBAyo8?usp=sharing"><img src="https://cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/vlOyMEjwHa_b_LXysEu2E.png" width="110" alt="Colab link"></a> | |
|
|
|
|
|
Here's a quick start example with `transformers`: |
|
|
|
|
|
```python |
|
|
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer |
|
|
|
|
|
model_id = "LiquidAI/LFM2.5-1.2B-JP" |
|
|
model = AutoModelForCausalLM.from_pretrained( |
|
|
model_id, |
|
|
device_map="auto", |
|
|
dtype="bfloat16", |
|
|
# attn_implementation="flash_attention_2" <- uncomment on compatible GPU |
|
|
) |
|
|
tokenizer = AutoTokenizer.from_pretrained(model_id) |
|
|
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True) |
|
|
|
|
|
prompt = "What is C. elegans?" |
|
|
|
|
|
input_ids = tokenizer.apply_chat_template( |
|
|
[{"role": "user", "content": prompt}], |
|
|
add_generation_prompt=True, |
|
|
return_tensors="pt", |
|
|
tokenize=True, |
|
|
).to(model.device) |
|
|
|
|
|
output = model.generate( |
|
|
input_ids, |
|
|
do_sample=True, |
|
|
temperature=0.3, |
|
|
min_p=0.15, |
|
|
repetition_penalty=1.05, |
|
|
max_new_tokens=512, |
|
|
streamer=streamer, |
|
|
) |
|
|
``` |
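If you deploy with vLLM instead, a chat call looks roughly like the sketch below. This is a minimal example under assumptions rather than an official recipe: it reuses the sampling values from the quick start, and the Japanese prompt is only an illustration.

```python
# Minimal vLLM sketch (assumes vLLM supports this model; see the inference docs).
from vllm import LLM, SamplingParams

llm = LLM(model="LiquidAI/LFM2.5-1.2B-JP")

# Same sampling settings as the transformers quick start above.
params = SamplingParams(
    temperature=0.3,
    min_p=0.15,
    repetition_penalty=1.05,
    max_tokens=512,
)

# A Japanese prompt, since Japanese is the model's target language.
messages = [{"role": "user", "content": "線虫（C. elegans）とは何ですか？"}]
outputs = llm.chat(messages, sampling_params=params)
print(outputs[0].outputs[0].text)
```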
|
|
|
|
|
## 🔧 Fine-Tuning
|
|
|
|
|
We recommend fine-tuning LFM2.5 for your specific use case to achieve the best results. |
|
|
|
|
|
| Name | Description | Docs | Notebook | |
|
|
|------|-------------|------|----------| |
|
|
| SFT ([Unsloth](https://github.com/unslothai/unsloth)) | Supervised Fine-Tuning with LoRA using Unsloth. | <a href="https://docs.liquid.ai/lfm/fine-tuning/unsloth">Link</a> | <a href="https://colab.research.google.com/drive/1HROdGaPFt1tATniBcos11-doVaH7kOI3?usp=sharing"><img src="https://cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/vlOyMEjwHa_b_LXysEu2E.png" width="110" alt="Colab link"></a> | |
|
|
| SFT ([TRL](https://github.com/huggingface/trl)) | Supervised Fine-Tuning with LoRA using TRL. | <a href="https://docs.liquid.ai/lfm/fine-tuning/trl">Link</a> | <a href="https://colab.research.google.com/drive/1j5Hk_SyBb2soUsuhU0eIEA9GwLNRnElF?usp=sharing"><img src="https://cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/vlOyMEjwHa_b_LXysEu2E.png" width="110" alt="Colab link"></a> | |
|
|
| DPO ([TRL](https://github.com/huggingface/trl)) | Direct Preference Optimization with LoRA using TRL. | <a href="https://docs.liquid.ai/lfm/fine-tuning/trl">Link</a> | <a href="https://colab.research.google.com/drive/1MQdsPxFHeZweGsNx4RH7Ia8lG8PiGE1t?usp=sharing"><img src="https://cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/vlOyMEjwHa_b_LXysEu2E.png" width="110" alt="Colab link"></a> | |
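As a starting point, a LoRA SFT run with TRL might look like the hypothetical sketch below; the dataset name and hyperparameters are placeholders, not the tuned configuration from the linked notebooks.

```python
# Hypothetical LoRA SFT sketch with TRL + PEFT; the dataset and hyperparameters
# are placeholders. See the notebooks above for a tuned configuration.
from datasets import load_dataset
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

# Placeholder dataset: any chat-formatted Japanese dataset works here.
dataset = load_dataset("your-org/your-japanese-chat-dataset", split="train")

trainer = SFTTrainer(
    model="LiquidAI/LFM2.5-1.2B-JP",
    train_dataset=dataset,
    args=SFTConfig(
        output_dir="lfm2.5-1.2b-jp-sft",
        per_device_train_batch_size=2,
        num_train_epochs=1,
    ),
    peft_config=LoraConfig(r=16, lora_alpha=32, target_modules="all-linear"),
)
trainer.train()
```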
|
|
|
|
|
## 📈 Performance
|
|
|
|
|
| Model | [JMMLU](https://arxiv.org/pdf/2402.14531) | [M-IFEval (ja)](https://arxiv.org/pdf/2502.04688) | [GSM8K (ja)](https://huggingface.co/datasets/SakanaAI/gsm8k-ja-test_250-1319) | |
|
|
|-------|------|----------|----------| |
|
|
| **LFM2.5-1.2B-JP** | 50.7 | 58.1 | 56.0 | |
|
|
| **[LFM2.5-1.2B-Instruct](https://huggingface.co/LiquidAI/LFM2.5-1.2B-Instruct)** | 47.7 | 41.8 | 46.8 | |
|
|
| Qwen3-1.7B (Instruct mode) | 47.7 | 40.3 | 46.0 | |
|
|
| Llama 3.2 1B Instruct | 34.0 | 24.1 | 25.2 | |
|
|
| TinySwallow-1.5B-Instruct | 48.0 | 36.5 | 47.2 | |
|
|
| Gemma-2-Llama-Swallow-2b-it-v0.1 | 48.1 | 33.4 | 34.4 | |
|
|
| Gemma-3-1b-it | 34.5 | 26.3 | 33.6 | |
|
|
| Granite-4.0-h-1b | 42.2 | 39.3 | 42.8 | |
|
|
| Sarashina2.2-1b-instruct-v0.1 | 40.2 | 21.9 | 44.4 | |
|
|
|
|
|
**Evaluation Notes** |
|
|
|
|
|
- All results are **zero-shot** evaluations using **greedy decoding**. |
|
|
- **M-IFEval (ja)** scores correspond to the **loose evaluation setting**. |
|
|
- **JMMLU** was evaluated using a prompt format similar in style to the [ArtificialAnalysis methodology](https://artificialanalysis.ai/methodology/intelligence-benchmarking#multiple-choice-questions), with corresponding answer-parsing logic. The Japanese prompt template used is shown below:
|
|
``` |
|
|
PROMPT_TEMPLATE = """ไธใใใใ้ธๆๅ้กใซ็ญใใฆใใ ใใใๅ็ญใฎๆๅพใฎ่กใซใ็ญใ๏ผ{valid_options}ใใฎใใใซๅบๅใใฆใใ ใใ๏ผไพ๏ผใ็ญใ๏ผXใ๏ผใ |
|
|
|
|
|
{question} |
|
|
|
|
|
{options}""" |
|
|
``` |
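For reference, extracting the final 「答え：X」 line from a completion can be done with a short regex. The sketch below is illustrative only; it is not the exact parsing logic used in the evaluation.

```python
# Illustrative answer extraction for the template above (not the exact
# evaluation logic). Takes the last 答え：<letter> occurrence, with or
# without the surrounding 「」 brackets.
import re

def parse_answer(completion: str) -> str | None:
    matches = re.findall(r"「?答え：\s*([A-Z])\s*」?", completion)
    return matches[-1] if matches else None

print(parse_answer("線虫は小型の線形動物です。\n「答え：C」"))  # -> "C"
```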
|
|
|
|
|
## Contact |
|
|
|
|
|
For enterprise solutions and edge deployment, contact [sales@liquid.ai](mailto:sales@liquid.ai). |
|
|
|
|
|
## Citation |
|
|
|
|
|
```bibtex |
|
|
@article{liquidai2025lfm2, |
|
|
title={LFM2 Technical Report}, |
|
|
author={Liquid AI}, |
|
|
journal={arXiv preprint arXiv:2511.23404}, |
|
|
year={2025} |
|
|
} |
|
|
``` |