Instructions to use LiquidAI/LFM2-2.6B-Exp with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use LiquidAI/LFM2-2.6B-Exp with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="LiquidAI/LFM2-2.6B-Exp")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("LiquidAI/LFM2-2.6B-Exp")
model = AutoModelForCausalLM.from_pretrained("LiquidAI/LFM2-2.6B-Exp")

messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use LiquidAI/LFM2-2.6B-Exp with vLLM:
Install from pip and serve the model:
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "LiquidAI/LFM2-2.6B-Exp"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "LiquidAI/LFM2-2.6B-Exp",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
```

Use Docker:
```shell
docker model run hf.co/LiquidAI/LFM2-2.6B-Exp
```
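The curl call above can also be made from Python. A minimal sketch using only the standard library, assuming the vLLM server is already running locally on port 8000 as shown:

```python
import json
import urllib.request

# Build the same OpenAI-compatible chat-completions request as the curl example
payload = {
    "model": "LiquidAI/LFM2-2.6B-Exp",
    "messages": [
        {"role": "user", "content": "What is the capital of France?"}
    ],
}
req = urllib.request.Request(
    "http://localhost:8000/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

# Uncomment once the server is up:
# with urllib.request.urlopen(req) as resp:
#     reply = json.loads(resp.read())
#     print(reply["choices"][0]["message"]["content"])
```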
- SGLang
How to use LiquidAI/LFM2-2.6B-Exp with SGLang:
Install from pip and serve the model:
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "LiquidAI/LFM2-2.6B-Exp" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "LiquidAI/LFM2-2.6B-Exp",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
```

Use Docker images:
```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "LiquidAI/LFM2-2.6B-Exp" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "LiquidAI/LFM2-2.6B-Exp",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
```

- Docker Model Runner
How to use LiquidAI/LFM2-2.6B-Exp with Docker Model Runner:
```shell
docker model run hf.co/LiquidAI/LFM2-2.6B-Exp
```
Testing LFM2-2.6B-Exp on Differential Equations
I recently tested LFM2-2.6B-Exp, an experimental language model developed by Liquid AI, to see how well it handles differential equations in a practical, step-by-step setting.
LFM2-2.6B-Exp is notable for how it was trained: it is an RL-first experimental checkpoint, built without supervised fine-tuning warm-up or distillation. Reinforcement learning was applied sequentially, starting with instruction following and later expanding to knowledge and math. This makes it a particularly interesting model to evaluate beyond benchmark scores.
In hands-on testing, the model performed surprisingly well for its size on standard undergraduate-level differential equations—first-order ODEs, second-order linear equations with constant coefficients, and nonhomogeneous problems using undetermined coefficients. It followed instructions closely and produced clear, structured solution steps.
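Problems of this type can be solved and checked symbolically. As an illustration only (SymPy was not part of the original test setup, and this equation is not one of the actual test items), here is a second-order linear ODE with constant coefficients of the kind the model handled well:

```python
import sympy as sp

t = sp.symbols("t")
y = sp.Function("y")

# Second-order linear ODE with constant coefficients: y'' - 3y' + 2y = 0
ode = sp.Eq(y(t).diff(t, 2) - 3 * y(t).diff(t) + 2 * y(t), 0)

# The characteristic equation r^2 - 3r + 2 = 0 has roots 1 and 2,
# so the general solution is C1*e^t + C2*e^(2t).
sol = sp.dsolve(ode, y(t))
print(sol)
```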
However, the model showed limitations on more subtle methods, such as Laplace transforms with time shifting and variation of parameters, where maintaining mathematical invariants matters more than following a familiar template. In these cases, answers often looked correct structurally but failed under careful verification. This behavior is consistent with an RL-first training approach: strong at producing expected answer forms, but not always robust on deeper theoretical details.
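The "careful verification" mentioned above can be made mechanical: substitute a candidate answer back into the equation and check that the residual vanishes. A minimal SymPy sketch, using an illustrative equation and candidates rather than the actual test items:

```python
import sympy as sp

t = sp.symbols("t")

def residual(candidate):
    """Substitute a candidate solution into y'' + 4y - 8t and simplify.

    A true particular solution leaves a residual of exactly 0.
    """
    return sp.simplify(sp.diff(candidate, t, 2) + 4 * candidate - 8 * t)

correct = 2 * t               # genuine particular solution of y'' + 4y = 8t
plausible = 2 * t + t**2      # looks structured, but fails substitution

print(residual(correct))      # 0
print(residual(plausible))    # 4*t**2 + 2  (nonzero, so not a solution)
```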
Liquid AI, the company behind this model, is strongly focused on edge AI, developing efficient models designed for deployment outside large data-center environments. Their model lineup spans from very small models (millions of parameters) up to multi-billion-parameter models (reaching into the ~8B range), with LFM2-2.6B-Exp positioned as a compact but ambitious research artifact.
Overall, LFM2-2.6B-Exp demonstrates how far reinforcement learning can push a relatively small model in instruction following and procedural math—while also making its current limits clear.
A Google Colab notebook for the model is available here:
https://colab.research.google.com/drive/1QH9d97oc68VJd0xe4vAbvHArQxpk4Ism?usp=sharing
Full article and detailed analysis: