---
license: other
license_name: lfm1.0
license_link: LICENSE
language:
- en
- ar
- zh
- fr
- de
- ja
- ko
- es
pipeline_tag: text-generation
tags:
- liquid
- lfm2.5
- edge
- llama.cpp
- gguf
base_model:
- LiquidAI/LFM2-2.6B-Transcript
---
# LFM2-2.6B-Transcript-GGUF

Based on [LFM2-2.6B](https://huggingface.co/LiquidAI/LFM2-2.6B), LFM2-2.6B-Transcript is designed for **private, on-device meeting summarization**. We partnered with AMD to deliver cloud-level summary quality while running entirely locally, ensuring your meeting data never leaves your device.

**Highlights**:
- Cloud-level summary quality, approaching much larger models
- Under 3 GB of RAM usage for long meetings
- Fast summaries in seconds, not minutes
- Runs fully locally across CPU, GPU, and NPU

You can find more information about other task-specific models in this [blog post](https://www.liquid.ai/blog/introducing-liquid-nanos-frontier-grade-performance-on-everyday-devices).

## 🏃 How to run LFM2-2.6B-Transcript

Example usage with [llama.cpp](https://github.com/ggml-org/llama.cpp):

```
llama-cli -hf LiquidAI/LFM2-2.6B-Transcript-GGUF
```
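For application use, llama.cpp's `llama-server` exposes an OpenAI-compatible HTTP API on top of the same GGUF model. The sketch below assumes the server's default address (`http://localhost:8080`); the user message is a placeholder, since this card does not specify a required prompt format for transcripts:

```
# Start a local OpenAI-compatible server (defaults to http://localhost:8080)
llama-server -hf LiquidAI/LFM2-2.6B-Transcript-GGUF
```

Then send the transcript as a chat request, for example with curl:

```
# Replace the "..." with your meeting transcript text
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [
      {"role": "user", "content": "Summarize this meeting transcript: ..."}
    ]
  }'
```

Because everything runs against the local server, the transcript never leaves your device.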