Instructions to use LiquidAI/LFM2-350M with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use LiquidAI/LFM2-350M with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="LiquidAI/LFM2-350M")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("LiquidAI/LFM2-350M")
model = AutoModelForCausalLM.from_pretrained("LiquidAI/LFM2-350M")

messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use LiquidAI/LFM2-350M with vLLM:
Install from pip and serve model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "LiquidAI/LFM2-350M"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "LiquidAI/LFM2-350M",
        "messages": [
            {"role": "user", "content": "What is the capital of France?"}
        ]
    }'
```

Use Docker
```shell
docker model run hf.co/LiquidAI/LFM2-350M
```
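The OpenAI-compatible request shown with curl above can also be built and sent from plain Python. A minimal sketch using only the standard library; `make_chat_payload` and `chat` are our own illustrative helpers, not part of vLLM:

```python
import json
import urllib.request


def make_chat_payload(model: str, user_msg: str) -> dict:
    """Build the same JSON body the curl example sends."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_msg}],
    }


def chat(base_url: str, payload: dict) -> dict:
    """POST the payload to an OpenAI-compatible /v1/chat/completions endpoint."""
    req = urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


payload = make_chat_payload("LiquidAI/LFM2-350M", "What is the capital of France?")
print(json.dumps(payload, indent=2))

# With the vLLM server from above running locally:
# reply = chat("http://localhost:8000", payload)
# print(reply["choices"][0]["message"]["content"])
```

The same helpers work unchanged against the SGLang server below by swapping the base URL for `http://localhost:30000`.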
- SGLang
How to use LiquidAI/LFM2-350M with SGLang:
Install from pip and serve model
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
    --model-path "LiquidAI/LFM2-350M" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "LiquidAI/LFM2-350M",
        "messages": [
            {"role": "user", "content": "What is the capital of France?"}
        ]
    }'
```

Use Docker images
```shell
docker run --gpus all \
    --shm-size 32g \
    -p 30000:30000 \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    --env "HF_TOKEN=<secret>" \
    --ipc=host \
    lmsysorg/sglang:latest \
    python3 -m sglang.launch_server \
        --model-path "LiquidAI/LFM2-350M" \
        --host 0.0.0.0 \
        --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "LiquidAI/LFM2-350M",
        "messages": [
            {"role": "user", "content": "What is the capital of France?"}
        ]
    }'
```

- Docker Model Runner
How to use LiquidAI/LFM2-350M with Docker Model Runner:
```shell
docker model run hf.co/LiquidAI/LFM2-350M
```
Show and Tell: Neural Net Cartography with LFM2:0.3B
hey! we're really excited to share something.
we taught the 0.3B variant of LFM2 basic AGL with a tiny dataset, then refined it further with a "dialectical" fine-tune that combines state 1 and state 2 thinking into the core reasoning. mainly we're trying to answer: "how much scaffolding does a semantic language like AGL need before the model grasps wider concepts?"
we're gonna do more granular data, but we're just super excited to share some pretty data! public domain code coming in the next few days :)
our training programs merge Tencent SPEAR, Dolci, and PCMind's curriculum patterns. we also have a whole bunch of wild optimizations for training that we'll share in detail soon! for now, some visualizations of the changes to latent space within the 0.3B model across two small fine-tunes!

^ this shows the changes in each category within our basin mapping framework, from the base model, to the "main" training (resonance, aka state 1 vs. state 2 thinking), to the post-training "bimodal" states (dialectical synthesis of state 1 + state 2).

^ some categories saw laminar flow, as if the training were a gravitational force acting from a distance. but the trajectory of the Bimodal training phase is DIFFERENT!

^ we also observe that some subjects are simply not uniformly affected by changing thought patterns surrounding deterministic/logical outcomes!

^ and finally, here's a scatter plot of how affected each subject is, based on its calculated distance from zero! AGL lives squarely near (0,0), which suggests that AGL has successfully given even this small model a somewhat complex understanding of logically-bounded recursive decomposition, AND that a model this small can generate passable responses!
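for anyone curious, "distance from zero" in a plot like this is just the Euclidean norm of each subject's 2-D shift vector. a tiny sketch of that calculation; the subject names and coordinates here are invented for illustration, not our actual measurements:

```python
import math

# Hypothetical (x, y) drift coordinates per subject -- illustrative only,
# not the real data behind the scatter plot.
subject_shift = {
    "AGL": (0.02, -0.01),
    "arithmetic": (0.35, 0.12),
    "narrative": (-0.28, 0.44),
}


def distance_from_origin(xy):
    """Euclidean distance of a 2-D shift vector from (0, 0)."""
    x, y = xy
    return math.hypot(x, y)


# Subjects sorted least-affected first: the one closest to (0, 0)
# moved the least across the fine-tunes.
ranked = sorted(subject_shift, key=lambda s: distance_from_origin(subject_shift[s]))
print(ranked[0])
```

a subject sitting near (0,0), like AGL in our scatter, is one whose representation barely moved between the two fine-tunes.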
we're calling this SLM Mini-Lab "The Soul Forge", and we plan to release both the source for the various HTML graphs and the full training program so anyone can easily play with this. we feel there may be some value in people exploring the cartography of latent semantic space!
current and future SLM work can be found at https://github.com/luna-system/ada-slm/ (public domain/foss). a huge thank you to the LiquidAI team. their hybrid architecture helps so much with seeing some of the data we're seeing!