---
library_name: transformers
license: other
license_name: lfm1.0
license_link: LICENSE
language:
- en
- ja
pipeline_tag: text-generation
tags:
- liquid
- lfm2.5
- edge
base_model: LiquidAI/LFM2.5-1.2B-Base
---
# LFM2.5-1.2B-JP
LFM2.5-1.2B-JP is a chat model specifically optimized for Japanese. While LFM2 already supported Japanese as one of eight languages, LFM2.5-JP pushes the state of the art in Japanese knowledge and instruction following at its scale. This model is ideal for developers building Japanese-language applications where cultural and linguistic nuance matters.
Find more information about LFM2.5 in our blog post.
## 🚀 Inference
LFM2.5 is supported by many inference frameworks. See the Inference documentation for the full list.
| Name | Description | Docs | Notebook |
|---|---|---|---|
| Transformers | Simple inference with direct access to model internals. | Link | Link |
| vLLM | High-throughput production deployments with GPU. | Link | Link |
| llama.cpp | Cross-platform inference with CPU offloading. | Link | Link |
Here's a quick start example with transformers:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

model_id = "LiquidAI/LFM2.5-1.2B-JP"
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    dtype="bfloat16",
    # attn_implementation="flash_attention_2" <- uncomment on compatible GPU
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

prompt = "What is C. elegans?"
input_ids = tokenizer.apply_chat_template(
    [{"role": "user", "content": prompt}],
    add_generation_prompt=True,
    return_tensors="pt",
    tokenize=True,
).to(model.device)

output = model.generate(
    input_ids,
    do_sample=True,
    temperature=0.3,
    min_p=0.15,
    repetition_penalty=1.05,
    max_new_tokens=512,
    streamer=streamer,
)
```
## 🔧 Fine-Tuning
We recommend fine-tuning LFM2.5 for your specific use case to achieve the best results.
| Name | Description | Docs | Notebook |
|---|---|---|---|
| SFT (Unsloth) | Supervised Fine-Tuning with LoRA using Unsloth. | Link | Link |
| SFT (TRL) | Supervised Fine-Tuning with LoRA using TRL. | Link | Link |
| DPO (TRL) | Direct Preference Optimization with LoRA using TRL. | Link | Link |
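As a starting point, a minimal LoRA SFT sketch with TRL might look like the following. The dataset name is a placeholder for your own chat-formatted Japanese data, and the LoRA hyperparameters are illustrative, not tuned recommendations.

```python
from datasets import load_dataset
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

# Placeholder dataset: swap in your own chat-formatted Japanese data.
dataset = load_dataset("trl-lib/Capybara", split="train")

trainer = SFTTrainer(
    model="LiquidAI/LFM2.5-1.2B-JP",
    train_dataset=dataset,
    args=SFTConfig(output_dir="lfm2.5-1.2b-jp-sft"),
    # Illustrative LoRA settings; tune rank/alpha for your task.
    peft_config=LoraConfig(r=16, lora_alpha=32, target_modules="all-linear"),
)
trainer.train()
```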
## 📊 Performance
| Model | JMMLU | M-IFEval (ja) | GSM8K (ja) |
|---|---|---|---|
| LFM2.5-1.2B-JP | 50.7 | 58.1 | 56.0 |
| LFM2.5-1.2B-Instruct | 47.7 | 41.8 | 46.8 |
| Qwen3-1.7B (Instruct mode) | 47.7 | 40.3 | 46.0 |
| Llama 3.2 1B Instruct | 34.0 | 24.1 | 25.2 |
| TinySwallow-1.5B-Instruct | 48.0 | 36.5 | 47.2 |
| Gemma-2-Llama-Swallow-2b-it-v0.1 | 48.1 | 33.4 | 34.4 |
| Gemma-3-1b-it | 34.5 | 26.3 | 33.6 |
| Granite-4.0-h-1b | 42.2 | 39.3 | 42.8 |
| Sarashina2.2-1b-instruct-v0.1 | 40.2 | 21.9 | 44.4 |
### Evaluation Notes
- All results are zero-shot evaluations using greedy decoding.
- M-IFEval (ja) scores correspond to the loose evaluation setting.
- JMMLU was evaluated using a prompt format similar in style to the ArtificialAnalysis methodology, with corresponding parsing logic. The Japanese prompt template used is shown below:
```python
PROMPT_TEMPLATE = """与えられた選択問題に答えてください。回答の最後の行に「答え：{valid_options}」のように出力してください（例：「答え：X」）。
{question}
{options}"""
```
## Contact
For enterprise solutions and edge deployment, contact sales@liquid.ai.
## Citation
```bibtex
@article{liquidai2025lfm2,
  title={LFM2 Technical Report},
  author={Liquid AI},
  journal={arXiv preprint arXiv:2511.23404},
  year={2025}
}
```