LFM2 is a family of hybrid models designed for on-device deployment. LFM2-24B-A2B is the largest model in the family, scaling the architecture to 24 billion parameters while keeping inference efficient.
Find more information about LFM2-24B-A2B in our blog post.
Example usage with llama.cpp:

```
llama-cli -hf LiquidAI/LFM2-24B-A2B-GGUF
```
Available quantizations: 4-bit, 5-bit, 6-bit, 8-bit, and 16-bit.
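When a repository ships several quantizations, llama.cpp can select one by appending a quantization tag to the repository name in the `-hf` argument. The tag below (`Q4_K_M`) is an assumption for illustration; check the repository's file list for the exact GGUF variants published.

```shell
# Run a specific quantization of the model.
# The tag after the colon must match one of the GGUF files in the repo;
# Q4_K_M is a common 4-bit variant and is assumed here, not confirmed.
llama-cli -hf LiquidAI/LFM2-24B-A2B-GGUF:Q4_K_M
```

Smaller quantizations (4-bit, 5-bit) trade some output quality for lower memory use, which matters for the on-device deployments this family targets.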
Base model: LiquidAI/LFM2-24B-A2B