How to use from Docker Model Runner

docker model run hf.co/DevQuasar/LiquidAI.LFM2-2.6B-Transcript-GGUF:<quantization>
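
A minimal sketch of pulling and running one quantization with Docker Model Runner, assuming Docker Desktop with Model Runner enabled; the Q4_K_M tag is an assumption, so substitute any tag this repository actually publishes (see the quantization list below):

# Pull the assumed 4-bit quantization, then run it with a one-shot prompt
docker model pull hf.co/DevQuasar/LiquidAI.LFM2-2.6B-Transcript-GGUF:Q4_K_M
docker model run hf.co/DevQuasar/LiquidAI.LFM2-2.6B-Transcript-GGUF:Q4_K_M "Summarize this meeting transcript in three bullet points."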
Quantized version of: LiquidAI/LFM2-2.6B-Transcript

'Make knowledge free for everyone'

Support the project: Buy Me a Coffee at ko-fi.com
Format: GGUF
Model size: 3B params
Architecture: lfm2
Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit, 16-bit
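
To run a specific quantization with llama.cpp instead, llama-cli can fetch GGUF files straight from the Hub. A minimal sketch, assuming a recent llama.cpp build with download support; the Q4_K_M tag is again an assumption:

# Fetch the assumed Q4_K_M file from this repo and start an interactive chat
llama-cli -hf DevQuasar/LiquidAI.LFM2-2.6B-Transcript-GGUF:Q4_K_M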
