---
library_name: transformers
license: other
license_name: lfm1.0
license_link: LICENSE
language:
- en
pipeline_tag: text-generation
tags:
- liquid
- lfm2
- edge
base_model: LiquidAI/LFM2-2.6B-Transcript
---
# LFM2-2.6B-Transcript-GGUF

Based on [LFM2-2.6B](https://huggingface.co/LiquidAI/LFM2-2.6B), LFM2-2.6B-Transcript is designed for **private, on-device meeting summarization**. We partnered with AMD to deliver cloud-level summary quality while running entirely locally, ensuring your meeting data never leaves your device.

**Highlights**:

- Cloud-level summary quality, approaching much larger models
- Under 3 GB of RAM usage for long meetings
- Fast summaries in seconds, not minutes
- Runs fully locally across CPU, GPU, and NPU

You can find more information about this model [here](https://huggingface.co/LiquidAI/LFM2-2.6B-Transcript).

## 🏃 How to run

Example usage with [llama.cpp](https://github.com/ggml-org/llama.cpp):

```shell
llama-cli -hf LiquidAI/LFM2-2.6B-Transcript-GGUF
```

## 📬 Contact

If you are interested in custom solutions with edge deployment, please contact [our sales team](https://www.liquid.ai/contact).
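As a variant of the `llama-cli` invocation above, the model can also be served over an OpenAI-compatible HTTP API with `llama-server`. This is a minimal sketch, assuming a recent llama.cpp build that supports the `-hf` flag; the port number and the transcript text in the request body are placeholders, not part of the official example.

```shell
# Start an OpenAI-compatible server on a local port (8080 is an arbitrary choice)
llama-server -hf LiquidAI/LFM2-2.6B-Transcript-GGUF --port 8080

# In another terminal: request a summary via the /v1/chat/completions endpoint.
# The transcript content here is a stand-in for your own meeting transcript.
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "messages": [
          {"role": "user", "content": "Summarize the following meeting transcript: ..."}
        ]
      }'
```

Because everything runs against `localhost`, the transcript never leaves the machine, which matches the on-device privacy goal described above.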