---
license: other
license_name: lfm1.0
license_link: LICENSE
language:
  - en
  - ar
  - zh
  - fr
  - de
  - ja
  - ko
  - es
pipeline_tag: text-generation
tags:
  - liquid
  - lfm2.5
  - edge
  - llama.cpp
  - gguf
base_model:
  - LiquidAI/LFM2-2.6B-Transcript
---

# LFM2-2.6B-Transcript-GGUF

Based on LFM2-2.6B, LFM2-2.6B-Transcript is designed for private, on-device meeting summarization. We partnered with AMD to deliver cloud-level summary quality while running entirely locally, ensuring your meeting data never leaves your device.

Highlights:

- Cloud-level summary quality, approaching much larger models
- Under 3 GB of RAM usage for long meetings
- Fast summaries in seconds, not minutes
- Runs fully locally across CPU, GPU, and NPU

You can find more information about other task-specific models in this blog post.

## 🏃 How to run LFM2.5

Example usage with llama.cpp:

```shell
llama-cli -hf LiquidAI/LFM2-2.6B-Transcript-GGUF
```
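Beyond the interactive CLI, llama.cpp's `llama-server` can serve the same GGUF over an OpenAI-compatible HTTP API, which is convenient for wiring the model into a local summarization app. This is a minimal sketch using llama.cpp's standard `-hf` flag and default port; the transcript in the request body is a placeholder, and which quantization gets downloaded depends on the files published in the repo:

```shell
# Download the GGUF from Hugging Face and serve it locally (default port 8080)
llama-server -hf LiquidAI/LFM2-2.6B-Transcript-GGUF

# In another terminal: request a summary via the OpenAI-compatible endpoint
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [
      {"role": "user", "content": "Summarize this meeting transcript: ..."}
    ]
  }'
```

Because the server speaks the OpenAI chat-completions schema, existing OpenAI-client code can be pointed at `http://localhost:8080/v1` without other changes.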