LFM2-2.6B-Exp-GGUF
LFM2 is a new generation of hybrid models developed by Liquid AI, specifically designed for edge AI and on-device deployment. It sets a new standard in terms of quality, speed, and memory efficiency.
Find more details in the original model card: https://huggingface.co/LiquidAI/LFM2-2.6B-Exp
How to run LFM2
Example usage with llama.cpp:
llama-cli -hf LiquidAI/LFM2-2.6B-Exp-GGUF
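The same GGUF checkpoint can also be served over llama.cpp's built-in HTTP server instead of the interactive CLI. A minimal sketch, assuming a local llama.cpp install; the port and context-size values are illustrative, not requirements:

```shell
# Serve the model with llama.cpp's OpenAI-compatible HTTP server
# (flags shown are illustrative defaults, adjust to your hardware)
llama-server -hf LiquidAI/LFM2-2.6B-Exp-GGUF --port 8080 -c 4096
```

The `-hf` flag downloads and caches the quantized weights from the Hugging Face repository on first use, the same way `llama-cli -hf` does.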
Available quantizations: 4-bit, 5-bit, 6-bit, 8-bit, and 16-bit.