Instructions to use LiquidAI/LFM2-8B-A1B with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use LiquidAI/LFM2-8B-A1B with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="LiquidAI/LFM2-8B-A1B")

messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("LiquidAI/LFM2-8B-A1B")
model = AutoModelForCausalLM.from_pretrained("LiquidAI/LFM2-8B-A1B")

messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```
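To stream the reply token by token instead of decoding it after generation finishes, Transformers' built-in TextStreamer can be attached to the same setup; a minimal sketch reusing the model and chat template from the snippet above (max_new_tokens is an arbitrary example value):

```python
from transformers import AutoTokenizer, AutoModelForCausalLM, TextStreamer

tokenizer = AutoTokenizer.from_pretrained("LiquidAI/LFM2-8B-A1B")
model = AutoModelForCausalLM.from_pretrained("LiquidAI/LFM2-8B-A1B")

messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

# Print tokens to stdout as they are generated, skipping the echoed prompt
streamer = TextStreamer(tokenizer, skip_prompt=True)
model.generate(**inputs, streamer=streamer, max_new_tokens=200)
```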
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use LiquidAI/LFM2-8B-A1B with vLLM:
Install from pip and serve the model:

```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "LiquidAI/LFM2-8B-A1B"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "LiquidAI/LFM2-8B-A1B",
        "messages": [
            {
                "role": "user",
                "content": "What is the capital of France?"
            }
        ]
    }'
```
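Because the vLLM server exposes an OpenAI-compatible API, it can also be called from Python with the official openai client; a minimal sketch, assuming the default local server started above on port 8000 (the api_key value is a placeholder, since no key is required locally):

```python
from openai import OpenAI

# Point the client at the local vLLM server (OpenAI-compatible API)
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="LiquidAI/LFM2-8B-A1B",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)
```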
- SGLang
How to use LiquidAI/LFM2-8B-A1B with SGLang:
Install from pip and serve the model:

```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
    --model-path "LiquidAI/LFM2-8B-A1B" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "LiquidAI/LFM2-8B-A1B",
        "messages": [
            {
                "role": "user",
                "content": "What is the capital of France?"
            }
        ]
    }'
```

Use Docker images:
```shell
docker run --gpus all \
    --shm-size 32g \
    -p 30000:30000 \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    --env "HF_TOKEN=<secret>" \
    --ipc=host \
    lmsysorg/sglang:latest \
    python3 -m sglang.launch_server \
    --model-path "LiquidAI/LFM2-8B-A1B" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "LiquidAI/LFM2-8B-A1B",
        "messages": [
            {
                "role": "user",
                "content": "What is the capital of France?"
            }
        ]
    }'
```
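Like vLLM, SGLang speaks the OpenAI chat-completions protocol, so the openai Python client works against it as well; a minimal sketch with streaming enabled, assuming the server started above on port 30000 (the api_key value is again a placeholder):

```python
from openai import OpenAI

# Point the client at the local SGLang server (OpenAI-compatible API)
client = OpenAI(base_url="http://localhost:30000/v1", api_key="EMPTY")

# Stream the reply chunk by chunk instead of waiting for the full completion
stream = client.chat.completions.create(
    model="LiquidAI/LFM2-8B-A1B",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
    stream=True,
)
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
```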
- Docker Model Runner
How to use LiquidAI/LFM2-8B-A1B with Docker Model Runner:
```shell
docker model run hf.co/LiquidAI/LFM2-8B-A1B
```
How is the 2.6B model better than this one in literally every use case I have?
The benchmarks show this MoE variant is better, and it should be, but that's not the case. Hell, even the Q4_K_M version of the 2.6B somehow performs better.
Yes, this model is stronger overall (especially in code), but maybe not for your particular use cases. Could you tell us more about them?
There is a theory that MoE models aren't that good at small scale: at low parameter counts they can be worse in some ways than their dense counterparts.
But they should still come out ahead at larger scale.
I could be wrong, but that's how it seems.
Do you have a reference for this theory? I believe the trade-offs between dense and MoE models are well understood overall. In this specific case, LFM2-2.6B is a very deep model, unlike this MoE. It means that reasoning-heavy tasks might work better with the 2.6B, but that is very use-case-dependent. Overall, LFM2-8B-A1B is a stronger (and faster) model.