---
library_name: transformers
license: other
license_name: lfm1.0
license_link: LICENSE
language:
- en
- ja
pipeline_tag: text-generation
tags:
- liquid
- lfm2.5
- edge
base_model: LiquidAI/LFM2.5-1.2B-Base
---
Liquid AI

Try LFM • Documentation • LEAP
# LFM2.5-1.2B-JP

LFM2.5-1.2B-JP is a chat model specifically optimized for Japanese. While LFM2 already supported Japanese as one of eight languages, LFM2.5-JP pushes the state of the art in Japanese knowledge and instruction following at its scale. This model is ideal for developers building Japanese-language applications where cultural and linguistic nuance matters.

Find more information about LFM2.5 in our [blog post](https://www.liquid.ai/blog/introducing-lfm2-5-the-next-generation-of-on-device-ai).

## 🏃 Inference

LFM2.5 is supported by many inference frameworks. See the [Inference documentation](https://docs.liquid.ai/lfm/inference/transformers) for the full list.

| Name | Description | Docs | Notebook |
|------|-------------|------|----------|
| [Transformers](https://github.com/huggingface/transformers) | Simple inference with direct access to model internals. | Link | Colab link |
| [vLLM](https://github.com/vllm-project/vllm) | High-throughput production deployments on GPU. | Link | Colab link |
| [llama.cpp](https://github.com/ggml-org/llama.cpp) | Cross-platform inference with CPU offloading. | Link | Colab link |

Here's a quick start example with `transformers`:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

model_id = "LiquidAI/LFM2.5-1.2B-JP"

# Load the model and tokenizer from the Hub.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    dtype="bfloat16",
    # attn_implementation="flash_attention_2" <- uncomment on compatible GPU
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

# Format the user message with the model's chat template.
prompt = "What is C. elegans?"
input_ids = tokenizer.apply_chat_template(
    [{"role": "user", "content": prompt}],
    add_generation_prompt=True,
    return_tensors="pt",
    tokenize=True,
).to(model.device)

# Generate with the recommended sampling settings, streaming tokens as they arrive.
output = model.generate(
    input_ids,
    do_sample=True,
    temperature=0.3,
    min_p=0.15,
    repetition_penalty=1.05,
    max_new_tokens=512,
    streamer=streamer,
)
```
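For vLLM, a minimal offline-inference sketch is shown below. It assumes your installed vLLM version supports the LFM2.5 architecture, and simply mirrors the sampling settings from the `transformers` quickstart:

```python
from vllm import LLM, SamplingParams

# Download the checkpoint from the Hub and load it into vLLM.
llm = LLM(model="LiquidAI/LFM2.5-1.2B-JP")

# Same sampling settings as the transformers quickstart above.
params = SamplingParams(
    temperature=0.3,
    min_p=0.15,
    repetition_penalty=1.05,
    max_tokens=512,
)

# llm.chat() applies the model's chat template before generation.
outputs = llm.chat(
    [{"role": "user", "content": "What is C. elegans?"}],
    params,
)
print(outputs[0].outputs[0].text)
```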
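For llama.cpp, one route is the llama-cpp-python bindings. The sketch below assumes a GGUF conversion is published on the Hub; the `LiquidAI/LFM2.5-1.2B-JP-GGUF` repository name and quantization filename are illustrative assumptions, not confirmed artifacts:

```python
from llama_cpp import Llama

# Hypothetical GGUF repo and quant file -- check the Hub for an official conversion.
llm = Llama.from_pretrained(
    repo_id="LiquidAI/LFM2.5-1.2B-JP-GGUF",
    filename="*Q4_K_M.gguf",
    n_ctx=4096,
)

# create_chat_completion() applies the chat template bundled with the GGUF file.
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "What is C. elegans?"}],
    temperature=0.3,
    min_p=0.15,
    repeat_penalty=1.05,
    max_tokens=512,
)
print(out["choices"][0]["message"]["content"])
```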
## 🔧 Fine-Tuning

We recommend fine-tuning LFM2.5 for your specific use case to achieve the best results.

| Name | Description | Docs | Notebook |
|------|-------------|------|----------|
| SFT ([Unsloth](https://github.com/unslothai/unsloth)) | Supervised Fine-Tuning with LoRA using Unsloth. | Link | Colab link |
| SFT ([TRL](https://github.com/huggingface/trl)) | Supervised Fine-Tuning with LoRA using TRL. | Link | Colab link |
| DPO ([TRL](https://github.com/huggingface/trl)) | Direct Preference Optimization with LoRA using TRL. | Link | Colab link |

## 📊 Performance

| Model | [JMMLU](https://arxiv.org/pdf/2402.14531) | [M-IFEval (ja)](https://arxiv.org/pdf/2502.04688) | [GSM8K (ja)](https://huggingface.co/datasets/SakanaAI/gsm8k-ja-test_250-1319) |
|-------|------|----------|----------|
| **LFM2.5-1.2B-JP** | 50.7 | 58.1 | 56.0 |
| **[LFM2.5-1.2B-Instruct](https://huggingface.co/LiquidAI/LFM2.5-1.2B-Instruct)** | 47.7 | 41.8 | 46.8 |
| Qwen3-1.7B (Instruct mode) | 47.7 | 40.3 | 46.0 |
| Llama 3.2 1B Instruct | 34.0 | 24.1 | 25.2 |
| TinySwallow-1.5B-Instruct | 48.0 | 36.5 | 47.2 |
| Gemma-2-Llama-Swallow-2b-it-v0.1 | 48.1 | 33.4 | 34.4 |
| Gemma-3-1b-it | 34.5 | 26.3 | 33.6 |
| Granite-4.0-h-1b | 42.2 | 39.3 | 42.8 |
| Sarashina2.2-1b-instruct-v0.1 | 40.2 | 21.9 | 44.4 |

**Evaluation Notes**

- All results are **zero-shot** evaluations using **greedy decoding**.
- **M-IFEval (ja)** scores correspond to the **loose evaluation setting**.
- **JMMLU** was evaluated using a prompt format in a style similar to the [ArtificialAnalysis methodology](https://artificialanalysis.ai/methodology/intelligence-benchmarking#multiple-choice-questions) (with corresponding parsing logic). The Japanese prompt template used is shown below:

```
PROMPT_TEMPLATE = """与えられた選択問題に答えてください。回答の最後の行に「答え:{valid_options}」のように出力してください。例:「答え:X」。

{question}

{options}"""
```

## Contact

For enterprise solutions and edge deployment, contact [sales@liquid.ai](mailto:sales@liquid.ai).

## Citation

```bibtex
@article{liquidai2025lfm2,
  title={LFM2 Technical Report},
  author={Liquid AI},
  journal={arXiv preprint arXiv:2511.23404},
  year={2025}
}
```