Liquid AI
Try LFM β€’ Docs β€’ LEAP β€’ Discord

LFM2.5-VL-1.6B

Find more details in the original model card: https://huggingface.co/LiquidAI/LFM2.5-VL-1.6B

πŸƒ How to run LFM2.5-VL-1.6B

Example usage with llama.cpp (Q4_0 is the 4-bit quantization; F16 is the unquantized half-precision file):

llama-cli -hf LiquidAI/LFM2.5-VL-1.6B-GGUF:Q4_0
llama-cli -hf LiquidAI/LFM2.5-VL-1.6B-GGUF:F16
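
Because this is a vision-language model, image inputs can also be passed on the command line. The following is a minimal sketch, assuming a recent llama.cpp build that includes the multimodal tools (llama-mtmd-cli and llama-server) and a local image file named image.png (hypothetical); -hf fetches the GGUF (and its projector, when published alongside) from the Hugging Face repo:

llama-mtmd-cli -hf LiquidAI/LFM2.5-VL-1.6B-GGUF:Q4_0 --image image.png -p "Describe this image."
llama-server -hf LiquidAI/LFM2.5-VL-1.6B-GGUF:Q4_0 --port 8080

The server variant exposes an OpenAI-compatible /v1/chat/completions endpoint on the chosen port.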

πŸ“¬ Contact
