LFM2.5-1.2B-Base GGUF Models

Model Generation Details

This model was generated using llama.cpp at commit 05fa625ea.


Click here to get info on choosing the right GGUF model format
Liquid AI
Try LFM • Documentation • LEAP

LFM2.5-1.2B-Base

LFM2.5 is a new family of hybrid models designed for on-device deployment. It builds on the LFM2 architecture with extended pre-training and reinforcement learning.

Find more information about LFM2.5 in our blog post.

πŸ—’οΈ Model Details

| Model | Parameters | Description |
|-------|------------|-------------|
| LFM2.5-1.2B-Base | 1.2B | Pre-trained base model for fine-tuning |
| LFM2.5-1.2B-Instruct | 1.2B | General-purpose instruction-tuned model |
| LFM2.5-1.2B-Thinking | 1.2B | General-purpose reasoning model |
| LFM2.5-1.2B-JP | 1.2B | Japanese-optimized chat model |
| LFM2.5-VL-1.6B | 1.6B | Vision-language model with fast inference |
| LFM2.5-Audio-1.5B | 1.5B | Audio-language model for speech and text I/O |

LFM2.5-1.2B-Base is the pre-trained text-only checkpoint, used to create all the LFM2.5-1.2B variants. It has the following features:

  • Number of parameters: 1.17B
  • Number of layers: 16 (10 double-gated LIV convolution blocks + 6 GQA blocks)
  • Training budget: 28T tokens
  • Context length: 32,768 tokens
  • Vocabulary size: 65,536
  • Knowledge cutoff: Mid-2024
  • Languages: English, Arabic, Chinese, French, German, Japanese, Korean, Spanish
| Model | Description |
|-------|-------------|
| LFM2.5-1.2B-Base | Original model checkpoint in native format. Best for fine-tuning or inference with Transformers and vLLM. |
| LFM2.5-1.2B-Base-GGUF | Quantized format for llama.cpp and compatible tools. Optimized for CPU inference and local deployment with reduced memory usage. |
| LFM2.5-1.2B-Base-ONNX | ONNX Runtime format for cross-platform deployment. Enables hardware-accelerated inference across diverse environments (cloud, edge, mobile). |
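
To run the GGUF build locally from Python, a minimal sketch using the llama-cpp-python bindings (one of the llama.cpp-compatible tools) is shown below. The quantization filename is an assumption; substitute whichever .gguf file you downloaded from this repo.

from llama_cpp import Llama

# Assumed filename; use the quantization level you downloaded
llm = Llama(model_path="LFM2.5-1.2B-Base-Q4_K_M.gguf", n_ctx=4096)

# This is a base checkpoint, so prompt for plain completion rather than chat
out = llm("C. elegans is", max_tokens=256, temperature=0.3)
print(out["choices"][0]["text"])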

This pre-trained checkpoint is only recommended for tasks that require heavy fine-tuning, like language-specific (e.g., Japanese) or domain-specific (e.g., medical) assistants, training on proprietary data, or experimenting with novel post-training approaches.

πŸƒ Inference

LFM2.5 is supported by many inference frameworks. See the Inference documentation for the full list.

| Name | Description | Docs | Notebook |
|------|-------------|------|----------|
| Transformers | Simple inference with direct access to model internals. | Link | Colab link |
| vLLM | High-throughput production deployments with GPU. | Link | Colab link |
| llama.cpp | Cross-platform inference with CPU offloading. | Link | Colab link |

Here's a quick start example with transformers:

from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

# Load the model and tokenizer
model_id = "LiquidAI/LFM2.5-1.2B-Base"
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    dtype="bfloat16",
#   attn_implementation="flash_attention_2" <- uncomment on compatible GPU
)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Stream generated tokens to stdout as they are produced
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

prompt = "What is C. elegans?"

# Apply the chat template and tokenize the prompt
input_ids = tokenizer.apply_chat_template(
    [{"role": "user", "content": prompt}],
    add_generation_prompt=True,
    return_tensors="pt",
    tokenize=True,
).to(model.device)

# Generate up to 512 new tokens with low-temperature sampling
output = model.generate(
    input_ids,
    do_sample=True,
    temperature=0.3,
    min_p=0.15,
    repetition_penalty=1.05,
    max_new_tokens=512,
    streamer=streamer,
)
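
The same checkpoint also runs under vLLM for batched or server-style inference. Below is a minimal sketch, assuming a vLLM build with support for this architecture; the sampling values mirror the transformers example above.

from vllm import LLM, SamplingParams

# Load the base checkpoint (assumes vLLM support for the lfm2 architecture)
llm = LLM(model="LiquidAI/LFM2.5-1.2B-Base", dtype="bfloat16")

# Same sampling settings as the transformers example above
params = SamplingParams(
    temperature=0.3,
    min_p=0.15,
    repetition_penalty=1.05,
    max_tokens=512,
)

# Base checkpoint: plain text completion rather than a chat turn
outputs = llm.generate(["C. elegans is"], params)
print(outputs[0].outputs[0].text)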

🔧 Fine-Tuning

We recommend fine-tuning LFM2.5 for your specific use case to achieve the best results.

| Name | Description | Docs | Notebook |
|------|-------------|------|----------|
| CPT (Unsloth) | Continued Pre-Training using Unsloth for text completion. | Link | Colab link |
| CPT (Unsloth) | Continued Pre-Training using Unsloth for translation. | Link | Colab link |
| SFT (Unsloth) | Supervised Fine-Tuning with LoRA using Unsloth. | Link | Colab link |
| SFT (TRL) | Supervised Fine-Tuning with LoRA using TRL. | Link | Colab link |
| DPO (TRL) | Direct Preference Optimization with LoRA using TRL. | Link | Colab link |
| GRPO (Unsloth) | GRPO with LoRA using Unsloth. | Link | Colab link |
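
As a concrete starting point for the SFT (TRL) recipe, here is a minimal LoRA sketch. The dataset is a placeholder; swap in your own data, and see the linked notebook for the full recipe.

from datasets import load_dataset
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

# Placeholder dataset; substitute your own domain- or language-specific data
dataset = load_dataset("trl-lib/Capybara", split="train")

trainer = SFTTrainer(
    model="LiquidAI/LFM2.5-1.2B-Base",
    train_dataset=dataset,
    args=SFTConfig(output_dir="lfm2.5-1.2b-sft"),
    # LoRA trains small adapter matrices instead of all 1.2B weights
    peft_config=LoraConfig(r=16, lora_alpha=32, target_modules="all-linear"),
)
trainer.train()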

Contact

For enterprise solutions and edge deployment, contact sales@liquid.ai.

Citation

@article{liquidai2025lfm2,
  title={LFM2 Technical Report},
  author={Liquid AI},
  journal={arXiv preprint arXiv:2511.23404},
  year={2025}
}

🚀 If you find these models useful

Help me test my AI-Powered Quantum Network Monitor Assistant with quantum-ready security checks:

👉 Quantum Network Monitor

The full open-source code for the Quantum Network Monitor service is available in my GitHub repos (repos with NetworkMonitor in the name): Source Code Quantum Network Monitor. You will also find the code I use to quantize the models at GGUFModelBuilder, if you want to do it yourself.

💬 How to test:
Choose an AI assistant type:

  • TurboLLM (GPT-4.1-mini)
  • HugLLM (Hugging Face open-source models)
  • TestLLM (Experimental CPU-only)

What I'm Testing

I'm pushing the limits of small open-source models for AI network monitoring, specifically:

  • Function calling against live network services
  • How small can a model go while still handling:
    • Automated Nmap security scans
    • Quantum-readiness checks
    • Network Monitoring tasks

🟡 TestLLM – Current experimental model (llama.cpp on 2 CPU threads in a Hugging Face Docker space):

  • ✅ Zero-configuration setup
  • ⏳ 30s load time (slow inference, but no API costs). No token limit, as the cost is low.
  • 🔧 Help wanted! If you're into edge-device AI, let's collaborate!

Other Assistants

🟢 TurboLLM – Uses gpt-4.1-mini:

  • It performs very well, but unfortunately OpenAI charges per token, so token usage is limited.
  • Create custom cmd processors to run .NET code on Quantum Network Monitor Agents
  • Real-time network diagnostics and monitoring
  • Security Audits
  • Penetration testing (Nmap/Metasploit)

🔵 HugLLM – Latest open-source models:

  • 🌐 Runs on the Hugging Face Inference API. Performs pretty well using the latest models hosted on Novita.

💡 Example commands you could test:

  1. "Give me info on my websites SSL certificate"
  2. "Check if my server is using quantum safe encyption for communication"
  3. "Run a comprehensive security audit on my server"
  4. '"Create a cmd processor to .. (what ever you want)" Note you need to install a Quantum Network Monitor Agent to run the .net code on. This is a very flexible and powerful feature. Use with caution!

Final Word

I fund the servers used to create these model files, run the Quantum Network Monitor service, and pay for inference from Novita and OpenAI, all out of my own pocket. All the code behind the model creation and the Quantum Network Monitor project is open source. Feel free to use whatever you find helpful.

If you appreciate the work, please consider buying me a coffee ☕. Your support helps cover service costs and allows me to raise token limits for everyone.

I'm also open to job opportunities or sponsorship.

Thank you! 😊
