LFM2-2.6B-Transcript GGUF Models

Model Generation Details

This model was generated using llama.cpp at commit 05fa625ea.


Click here to get info on choosing the right GGUF model format
Liquid AI
Try LFM β€’ Documentation β€’ LEAP

LFM2-2.6B-Transcript

Based on LFM2-2.6B, LFM2-2.6B-Transcript is designed for private, on-device meeting summarization. We partnered with AMD to deliver cloud-level summary quality while running entirely locally, ensuring that your meeting data never leaves your device.

Highlights:

  • Cloud-level summary quality, approaching much larger models
  • Under 3GB of RAM usage for long meetings
  • Fast summaries in seconds, not minutes
  • Runs fully locally across CPU, GPU, and NPU

Find more information about LFM2-2.6B-Transcript in AMD's blog post and Liquid's blog post.


πŸ“„ Model details

| Model | Description |
|---|---|
| LFM2-2.6B-Transcript | Original model checkpoint in native format. Best for fine-tuning or inference with Transformers and vLLM. |
| LFM2-2.6B-Transcript-GGUF | Quantized format for llama.cpp and compatible tools. Optimized for CPU inference and local deployment with reduced memory usage. |
| LFM2-2.6B-Transcript-ONNX | ONNX Runtime format for cross-platform deployment. Enables hardware-accelerated inference across diverse environments (cloud, edge, mobile). |
| LFM2-2.6B-Transcript-MLX | MLX format for Apple Silicon. Optimized for fast inference on Mac devices using the MLX framework. |

Capabilities: The model is trained for long-form transcript summarization (30-60 minute meetings), producing clear, structured outputs including key points, decisions, and action items with consistent tone and formatting.

Use cases:

  • Internal team meetings
  • Sales calls and customer conversations
  • Board meetings and executive briefings
  • Regulated or sensitive environments where data can't leave the device
  • Offline or low-connectivity workflows

Generation parameters: We strongly recommend using a low temperature, such as temperature=0.3.

Supported language: English

⚠️ The model is intended for single-turn conversations with a specific format, described below.

Input format: We recommend using the following system prompt:

You are an expert meeting analyst. Analyze the transcript carefully and provide clear, accurate information based on the content.

We use the following format for the input meeting transcripts to be summarized:

<user_prompt>

Title (example: Claims Processing training module)
Date (example: July 2, 2021)
Time (example: 1:00 PM)
Duration (example: 45 minutes)
Participants (example: Julie Franco (Training Facilitator), Amanda Newman (Subject Matter Expert))
----------
**Speaker 1**: Message 1 (example: **Julie Franco**: Good morning, everyone. Thanks for joining me today.)
**Speaker 2**: Message 2 (example: **Amanda Newman**: Good morning, Julie. Happy to be here.)
etc.

You can replace <user_prompt> with the following, depending on the desired summary type:

| Summary type | User prompt |
|---|---|
| Executive summary | Provide a brief executive summary (2-3 sentences) of the key outcomes and decisions from this transcript. |
| Detailed summary | Provide a detailed summary of the transcript, covering all major topics, discussions, and outcomes in paragraph form. |
| Action items | List the specific action items that were assigned during this meeting. Include who is responsible for each item when mentioned. |
| Key decisions | List the key decisions that were made during this meeting. Focus on concrete decisions and outcomes. |
| Participants | List the participants mentioned in this transcript. Include their roles or titles when available. |
| Topics discussed | List the main topics and subjects that were discussed in this meeting. |

This is freeform, and you can add several prompts or combine them into a single one, like in the following examples:

| Title | Input meeting | Model output |
|---|---|---|
| Budget planning | Link | Link |
| Design review | Link | Link |
| Coffee chat / social hour | Link | Link |
| Procurement / vendor review | Link | Link |
| Task force meeting | Link | Link |
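
To make the input format concrete, here is a minimal sketch of how a request could be assembled from the pieces above. The `build_messages` helper and the variable names are purely illustrative and not part of any official API; the system prompt, transcript layout, and user prompt follow the format described in this section.

```python
# Illustrative sketch only: assembling a summarization request in the format described above.
# build_messages is a hypothetical helper, not part of an official API.

SYSTEM_PROMPT = (
    "You are an expert meeting analyst. Analyze the transcript carefully "
    "and provide clear, accurate information based on the content."
)

def build_messages(user_prompt: str, header: str, turns: list[tuple[str, str]]) -> list[dict]:
    """Combine a summary prompt, the meeting header, and the speaker turns into chat messages."""
    transcript = header + "\n----------\n" + "\n".join(
        f"**{speaker}**: {message}" for speaker, message in turns
    )
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": f"{user_prompt}\n\n{transcript}"},
    ]

# Example usage with the sample meeting from above.
header = (
    "Claims Processing training module\n"
    "July 2, 2021\n"
    "1:00 PM\n"
    "45 minutes\n"
    "Julie Franco (Training Facilitator), Amanda Newman (Subject Matter Expert)"
)
turns = [
    ("Julie Franco", "Good morning, everyone. Thanks for joining me today."),
    ("Amanda Newman", "Good morning, Julie. Happy to be here."),
]
messages = build_messages(
    "Provide a brief executive summary (2-3 sentences) of the key outcomes "
    "and decisions from this transcript.",
    header,
    turns,
)
```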

πŸš€ Quick Start

The easiest way to try LFM2-2.6B-Transcript is through our command-line tool in the Liquid AI Cookbook.

1. Install uv (if you don't have it already) and check that it is available:

uv --version
# uv 0.9.18

2. Run with the sample transcript:

uv run https://raw.githubusercontent.com/Liquid4All/cookbook/refs/heads/main/examples/meeting-summarization/summarize.py

No API keys. No cloud services. No setup. Just pure local inference with real-time token streaming.

3. Use your own transcript:

uv run https://raw.githubusercontent.com/Liquid4All/cookbook/refs/heads/main/examples/meeting-summarization/summarize.py \
  --transcript-file path/to/your/transcript.txt

The tool uses llama.cpp for optimized inference and automatically handles model downloading and compilation for your platform.
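
If you would rather script the GGUF weights directly instead of using the cookbook tool, a minimal llama-cpp-python sketch along the following lines should work. The quantization filename pattern below is an assumption; match it to the file you actually download from this repository.

```python
# Hedged sketch: local inference on the GGUF weights with llama-cpp-python.
# The filename pattern is an assumption; pick the quantization present in the repo.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="Mungert/LFM2-2.6B-Transcript-GGUF",
    filename="*Q4_K_M.gguf",  # assumed quant; adjust as needed
    n_ctx=32768,              # enough context for 30-60 minute transcripts
)

messages = [
    {
        "role": "system",
        "content": "You are an expert meeting analyst. Analyze the transcript carefully "
                   "and provide clear, accurate information based on the content.",
    },
    {
        "role": "user",
        "content": "Provide a brief executive summary (2-3 sentences) of the key outcomes "
                   "and decisions from this transcript.\n\n<formatted transcript here>",
    },
]

# temperature=0.3 follows the generation parameters recommended above.
out = llm.create_chat_completion(messages=messages, temperature=0.3, max_tokens=512)
print(out["choices"][0]["message"]["content"])
```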

πŸƒ Inference

LFM2 is supported by many inference frameworks. See the Inference documentation for the full list.

| Name | Description | Docs | Notebook |
|---|---|---|---|
| Transformers | Simple inference with direct access to model internals. | Link | Colab link |
| vLLM | High-throughput production deployments with GPU. | Link | Colab link |
| llama.cpp | Cross-platform inference with CPU offloading. | Link | Colab link |
| MLX | Apple's machine learning framework optimized for Apple Silicon. | Link | β€” |
| LM Studio | Desktop application for running LLMs locally. | Link | β€” |
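
As one example, a minimal Transformers sketch (for the original checkpoint rather than this GGUF repo) could look like the following. It assumes a recent transformers release with LFM2 support and the LiquidAI/LFM2-2.6B-Transcript checkpoint's built-in chat template; refer to the linked docs and notebooks for the officially supported recipes.

```python
# Hedged sketch: inference with Transformers on the original LFM2-2.6B-Transcript checkpoint.
# Assumes a recent transformers release with LFM2 support; see the official docs for details.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "LiquidAI/LFM2-2.6B-Transcript"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {
        "role": "system",
        "content": "You are an expert meeting analyst. Analyze the transcript carefully "
                   "and provide clear, accurate information based on the content.",
    },
    {
        "role": "user",
        "content": "List the key decisions that were made during this meeting. "
                   "Focus on concrete decisions and outcomes.\n\n<formatted transcript here>",
    },
]

input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=512, do_sample=True, temperature=0.3)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```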

πŸ“ˆ Performance

Quality

LFM2-2.6B-Transcript was benchmarked using the GAIA Eval-Judge framework on synthetic meeting transcripts across 8 meeting types.

[Figure: 2.6B-AMD summarization judge scores]

Accuracy ratings from the GAIA LLM judge. Evaluated on 24 synthetic 1K transcripts and 32 synthetic 10K transcripts. Claude Sonnet 4 was used for content generation and judging.

Inference Speed

[Figure: 2.6B-Transcript inference speed on AMD Ryzen AI Max+ 395]

Generated using llama-bench.exe (build b7250) on an HP Z2 Mini G1a Next Gen AI Desktop Workstation with the respective AMD Ryzen device. Peak memory during CPU inference is measured as the peak memory usage of the llama-bench.exe process while running:

llama-bench -m <MODEL> -p 10000 -n 1000 -t 8 -r 3 -ngl 0

llama-bench reports the average inference times for prompt processing and token generation. The reported inference times are for the iGPU, enabled with the -ngl 99 flag.

Memory Usage

[Figure: 2.6B-Transcript peak RAM usage]

Generated using llama-bench.exe (build b7250) on an HP Z2 Mini G1a Next Gen AI Desktop Workstation with an AMD Ryzen AI Max+ PRO 395 processor. Peak memory during CPU inference is measured as the peak memory usage of the llama-bench.exe process while running:

llama-bench -m <MODEL> -p 10000 -n 1000 -t 8 -r 3 -ngl 0

llama-bench reports the average inference times for prompt processing and token generation. The reported inference times are for the iGPU, enabled with the -ngl 99 flag.

πŸ“¬ Contact

If you are interested in custom solutions with edge deployment, please contact our sales team.


πŸš€ If you find these models useful

Help me test my AI-Powered Quantum Network Monitor Assistant with quantum-ready security checks:

πŸ‘‰ Quantum Network Monitor

The full open-source code for the Quantum Network Monitor service is available in my GitHub repos (repos with NetworkMonitor in the name): Source Code Quantum Network Monitor. You will also find the code I use to quantize the models in GGUFModelBuilder, if you want to do it yourself.

πŸ’¬ How to test:
Choose an AI assistant type:

  • TurboLLM (GPT-4.1-mini)
  • HugLLM (Hugginface Open-source models)
  • TestLLM (Experimental CPU-only)

What I’m Testing

I’m pushing the limits of small open-source models for AI network monitoring, specifically:

  • Function calling against live network services
  • How small can a model go while still handling:
    • Automated Nmap security scans
    • Quantum-readiness checks
    • Network Monitoring tasks

🟑 TestLLM – Current experimental model (llama.cpp on 2 CPU threads on a Hugging Face Docker Space):

  • βœ… Zero-configuration setup
  • ⏳ 30s load time (slow inference but no API costs) . No token limited as the cost is low.
  • πŸ”§ Help wanted! If you’re into edge-device AI, let’s collaborate!

Other Assistants

🟒 TurboLLM – Uses gpt-4.1-mini:

  • **It performs very well but unfortunatly OpenAI charges per token. For this reason tokens usage is limited.
  • Create custom cmd processors to run .net code on Quantum Network Monitor Agents
  • Real-time network diagnostics and monitoring
  • Security Audits
  • Penetration testing (Nmap/Metasploit)

πŸ”΅ HugLLM – Latest Open-source models:

  • 🌐 Runs on Hugging Face Inference API. Performs pretty well using the lastest models hosted on Novita.

πŸ’‘ Example commands you could test:

  1. "Give me info on my websites SSL certificate"
  2. "Check if my server is using quantum safe encyption for communication"
  3. "Run a comprehensive security audit on my server"
  4. '"Create a cmd processor to .. (what ever you want)" Note you need to install a Quantum Network Monitor Agent to run the .net code on. This is a very flexible and powerful feature. Use with caution!

Final Word

I fund the servers used to create these model files, run the Quantum Network Monitor service, and pay for inference from Novita and OpenAIβ€”all out of my own pocket. All the code behind the model creation and the Quantum Network Monitor project is open source. Feel free to use whatever you find helpful.

If you appreciate the work, please consider buying me a coffee β˜•. Your support helps cover service costs and allows me to raise token limits for everyone.

I'm also open to job opportunities or sponsorship.

Thank you! 😊
