---
license: other
license_name: lfm1.0
license_link: LICENSE
language:
- en
- ar
- zh
- fr
- de
- ja
- ko
- es
pipeline_tag: text-generation
tags:
- liquid
- lfm2
- edge
- llama.cpp
- gguf
base_model:
- LiquidAI/LFM2-2.6B-Transcript
---
<div align="center">
<img
src="https://cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/2b08LKpev0DNEk6DlnWkY.png"
alt="Liquid AI"
style="width: 100%; max-width: 100%; height: auto; display: inline-block; margin-bottom: 0.5em; margin-top: 0.5em;"
/>
<div style="display: flex; justify-content: center; gap: 0.5em; margin-bottom: 1em;">
<a href="https://playground.liquid.ai/"><strong>Try LFM</strong></a> •
<a href="https://docs.liquid.ai/lfm"><strong>Documentation</strong></a> •
<a href="https://leap.liquid.ai/"><strong>LEAP</strong></a>
</div>
</div>
# LFM2-2.6B-Transcript-GGUF
Based on [LFM2-2.6B](https://huggingface.co/LiquidAI/LFM2-2.6B), LFM2-2.6B-Transcript is designed for **private, on-device meeting summarization**. We partnered with AMD to deliver cloud-level summary quality while running entirely locally, ensuring your meeting data never leaves your device.
**Highlights**:
- Cloud-level summary quality, approaching much larger models
- Under 3GB of RAM usage for long meetings
- Fast summaries in seconds, not minutes
- Runs fully locally across CPU, GPU, and NPU
You can find more information about other task-specific models in this [blog post](https://www.liquid.ai/blog/introducing-liquid-nanos-frontier-grade-performance-on-everyday-devices).
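These weights are packaged as GGUF files for llama.cpp. If you prefer to download them ahead of time rather than letting `llama-cli` fetch them on first use, here is a minimal sketch using `huggingface-cli`; the `--include` pattern and local directory are illustrative assumptions, so adjust them to the filenames actually present in this repo:
```bash
# Hypothetical pre-download sketch: fetch GGUF files from this repo.
# The --include pattern is an assumption; match it to the real filenames.
huggingface-cli download LiquidAI/LFM2-2.6B-Transcript-GGUF \
  --include "*.gguf" \
  --local-dir ./lfm2-transcript
```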
## 🏃 How to run LFM2-2.6B-Transcript
Example usage with [llama.cpp](https://github.com/ggml-org/llama.cpp):
```bash
llama-cli -hf LiquidAI/LFM2-2.6B-Transcript-GGUF
```
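Beyond the interactive CLI, you can serve the model locally and request summaries over an OpenAI-compatible API with `llama-server`. A minimal sketch, assuming default sampling settings; the port, context size, and prompt wording are illustrative choices, not official recommendations:
```bash
# Start a local OpenAI-compatible server (fetches the GGUF on first run).
llama-server -hf LiquidAI/LFM2-2.6B-Transcript-GGUF --port 8080 -c 32768

# From another shell, ask for a summary of a transcript.
# The prompt text is a placeholder; paste or template in your own transcript.
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [
      {"role": "user",
       "content": "Summarize the following meeting transcript:\n<transcript text>"}
    ]
  }'
```
Because everything runs against localhost, the transcript itself never leaves the machine, which matches the model's private, on-device design goal.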