We need more flexible sizes that fit on all devices

#2
by yousef1727 - opened

Hi πŸ‘‹
Thanks for providing LFM2.5-Audio-1.5B in GGUF format. This is a big win for the llama.cpp ecosystem.
That said, I’d like to raise a request regarding model size flexibility, specifically from a llama.cpp / local inference perspective.

Current Limitation
The 1.5B model, even quantized, is still:

  • Too heavy for many CPUs
  • Not practical for most phones
  • Hard to run on low-RAM devices (4–8 GB)
  • Less usable for real-time or embedded scenarios
llama.cpp shines because it runs everywhere, but that advantage is limited when only large checkpoints are available.

Requested Model Size Variants
It would be extremely helpful to have multiple GGUF sizes.

For example:
  • ~75M – ultra-light, mobile & edge-friendly
  • ~150M – phones, low-end laptops
  • ~500M – sweet spot for CPU inference
  • 1.5B – current high-quality version
All exported as GGUF, optimized for llama.cpp.
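As a rough back-of-envelope for why the smaller variants matter, here is a sketch estimating Q4_0 weight sizes for the proposed parameter counts. It assumes the standard GGUF Q4_0 layout (32 weights per 18-byte block, i.e. 4.5 bits per weight) and ignores metadata and any higher-precision tensors, so real files will be somewhat larger:

```python
# Back-of-envelope GGUF file-size estimate for the proposed variants.
# Q4_0 packs 32 weights into an 18-byte block (16 bytes of 4-bit
# values + a 2-byte fp16 scale), i.e. 4.5 bits per weight on average.
BITS_PER_WEIGHT_Q4_0 = 18 * 8 / 32  # = 4.5

def q4_0_size_mib(params: float) -> float:
    """Approximate Q4_0 weight size in MiB (metadata not included)."""
    return params * BITS_PER_WEIGHT_Q4_0 / 8 / (1024 ** 2)

for name, params in [("75M", 75e6), ("150M", 150e6),
                     ("500M", 500e6), ("1.5B", 1.5e9)]:
    print(f"{name:>5}: ~{q4_0_size_mib(params):,.0f} MiB")
```

Even the 1.5B model quantized to Q4_0 needs well under 1 GiB for weights alone; the pressure on 4–8 GB devices comes from the KV cache, activations, and everything else sharing RAM, which is exactly where sub-500M variants help.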

Why This Matters for llama.cpp:

  • Enables CPU-only inference without massive slowdowns
  • Makes audio models usable on Android (termux) and older hardware
  • Improves adoption in embedded / offline use cases
  • Aligns with llama.cpp’s goal: run locally, run anywhere
Liquid AI org

Thank you! Yes, we are working on smaller end-to-end audio models (e.g., built on our LFM2-350M backbone, https://huggingface.co/LiquidAI/LFM2-350M) for general voice-chat capabilities, as well as even smaller task-specific models for ASR/TTS only.

2026 is shaping up to be the year of voice models, since everyone likes the design of the new Qwen-TTS-style models.
