These are GGUF quantizations of the model LFM2-VL-1.6B.
Usage Notes:
- Download the latest llama.cpp to use these quantizations.
- Use the highest-quality quantization your hardware can run.
- For the mmproj file, the F32 version is recommended for best results (F32 > BF16 > F16).
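As a sketch of how these files fit together, vision models in llama.cpp are loaded with the model GGUF plus the separate mmproj GGUF via `llama-mtmd-cli`. The exact filenames below are placeholders; substitute the quantization you downloaded and your own image and prompt.

```shell
# Hypothetical filenames -- replace with the GGUF files you actually downloaded.
llama-mtmd-cli \
  -m LFM2-VL-1.6B-Q8_0.gguf \
  --mmproj mmproj-LFM2-VL-1.6B-F32.gguf \
  --image photo.jpg \
  -p "Describe this image."
```

The `--mmproj` flag points at the multimodal projector file, which is why the F32 mmproj recommendation above matters even when the main model is quantized.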
Base model: LiquidAI/LFM2-VL-1.6B