---
library_name: transformers
license: other
license_name: lfm1.0
license_link: LICENSE
language:
  - en
pipeline_tag: text-generation
tags:
  - liquid
  - lfm2
  - edge
base_model: LiquidAI/LFM2-2.6B-Transcript
---
Liquid AI

Try LFM • Documentation • LEAP

# LFM2-2.6B-Transcript-GGUF

Based on LFM2-2.6B, LFM2-2.6B-Transcript is designed for private, on-device meeting summarization. We partnered with AMD to deliver cloud-level summary quality while running entirely locally, ensuring your meeting data never leaves your device.

Highlights:

- Cloud-level summary quality, approaching much larger models
- Under 3 GB of RAM usage for long meetings
- Fast: summaries in seconds, not minutes
- Runs fully locally on CPU, GPU, and NPU

You can find more information about this model here.

πŸƒ How to run

Example usage with llama.cpp:

```bash
llama-cli -hf LiquidAI/LFM2-2.6B-Transcript-GGUF
```
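As a sketch, the same command accepts llama.cpp's standard generation flags; the context size, generation length, and prompt below are illustrative choices, not values recommended in this card:

```shell
# Download the GGUF from Hugging Face and run one summarization turn.
# -c (context size), -n (max tokens to generate), and the prompt text
# are illustrative assumptions; long transcripts need a large context.
llama-cli -hf LiquidAI/LFM2-2.6B-Transcript-GGUF \
  -c 32768 -n 1024 \
  -p "Summarize the following meeting transcript: ..."
```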

## 📬 Contact

If you are interested in custom solutions with edge deployment, please contact our sales team.