We need more flexible model sizes that fit all devices
Hi 👋
Thanks for providing LFM2.5-Audio-1.5B in GGUF format. This is a big win for the llama.cpp ecosystem.
That said, I'd like to raise a request regarding model size flexibility, specifically from a llama.cpp / local inference perspective.
Current Limitation
The 1.5B model, even quantized, is still:
- Too heavy for comfortable CPU-only inference on many machines
- Not practical for most phones
- Hard to run on low-RAM devices (4–8 GB; see the rough estimate below)
- Less usable for real-time or embedded scenarios
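To put rough numbers on the low-RAM point, here is a back-of-envelope sketch (my own approximation, not official figures). It estimates the weight footprint as parameter count × average bits per weight / 8, using typical bits-per-weight averages for common llama.cpp quantization schemes; the KV cache and compute buffers come on top of the weights:

```python
# Back-of-envelope GGUF weight footprint: params * bits_per_weight / 8.
# Bits-per-weight values are approximate averages for common llama.cpp
# quant schemes (assumption, not official numbers); real file sizes vary
# by tensor mix, and KV cache + compute buffers add to the runtime total.
PARAMS = 1.5e9  # LFM2.5-Audio-1.5B

for quant, bpw in {"Q4_K_M": 4.8, "Q8_0": 8.5, "F16": 16.0}.items():
    gib = PARAMS * bpw / 8 / 2**30
    print(f"{quant:>7}: ~{gib:.2f} GiB weights")
```

Even at Q4 that is roughly 0.85 GiB of weights before cache and buffers, which leaves little headroom on a 4 GB phone that is also running an OS and other apps.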
llama.cpp shines because it runs everywhere, but that advantage is limited when only large checkpoints are available.
Requested Model Size Variants
It would be extremely helpful to have multiple GGUF sizes, for example:
- ~75M – ultra-light, mobile & edge-friendly
- ~150M – phones, low-end laptops
- ~500M – sweet spot for CPU inference
- 1.5B – current high-quality version
All exported as GGUF, optimized for llama.cpp.
Why This Matters for llama.cpp:
- Enables CPU-only inference without massive slowdowns (a minimal sketch follows this list)
- Makes audio models usable on Android (termux) and older hardware
- Improves adoption in embedded / offline use cases
- Aligns with llama.cpp's goal: run locally, run anywhere
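For illustration, this is the kind of CPU-only setup a smaller variant would unlock. A minimal sketch using the llama-cpp-python bindings; the model filename is hypothetical (one of the requested sizes), and an end-to-end audio model would additionally need llama.cpp's multimodal path rather than plain text completion:

```python
# Minimal CPU-only completion via llama-cpp-python.
# "lfm2-audio-150m-q4_k_m.gguf" is a hypothetical filename for one of
# the requested smaller variants; audio in/out would require llama.cpp's
# multimodal tooling on top of this.
from llama_cpp import Llama

llm = Llama(
    model_path="lfm2-audio-150m-q4_k_m.gguf",  # hypothetical small GGUF
    n_ctx=2048,      # modest context keeps the KV cache small
    n_threads=4,     # typical phone / low-end laptop core count
    n_gpu_layers=0,  # force CPU-only inference
)

out = llm("Transcription test:", max_tokens=32)
print(out["choices"][0]["text"])
```

A ~150M checkpoint at Q4 would keep the whole working set in well under 1 GB, which is exactly the regime where this kind of setup stays responsive.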
Thank you! Yes, we are working on smaller end-to-end audio models (e.g., using our LFM2-350M backbone, https://huggingface.co/LiquidAI/LFM2-350M) for general language chat capabilities, as well as even smaller task-specific models for ASR/TTS only.
2026 is the year of voice models, because everyone likes the design of the new Qwen-TTS model and wants to clone it.