LFM2
```python
llm.create_chat_completion(
    messages=[
        {
            "role": "user",
            "content": "What is the capital of France?"
        }
    ]
)
```
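`create_chat_completion` returns an OpenAI-style response dict. A minimal sketch of pulling the assistant's reply out of it, using a hand-built example response rather than a real model call:

```python
# Extract the assistant's reply from an OpenAI-style chat-completion
# response dict, as returned by llama-cpp-python's create_chat_completion.
# This response is a hand-built example, not real model output.
response = {
    "choices": [
        {
            "message": {
                "role": "assistant",
                "content": "The capital of France is Paris.",
            }
        }
    ]
}

reply = response["choices"][0]["message"]["content"]
print(reply)
```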
LFM2 is a new generation of hybrid models developed by Liquid AI, specifically designed for edge AI and on-device deployment. It sets a new standard in terms of quality, speed, and memory efficiency.
Find more details in the original model card: https://huggingface.co/LiquidAI/LFM2-700M
Example usage with llama.cpp:
```
llama-cli -hf LiquidAI/LFM2-700M-GGUF
```
Available quantizations: 4-bit, 5-bit, 6-bit, 8-bit, and 16-bit.
Example usage with llama-cpp-python:

```python
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="LiquidAI/LFM2-700M-GGUF",
    filename="",  # set to the name of one of the GGUF files in the repo
)
```