LFM2-24B-A2B APEX GGUF

APEX (Adaptive Precision for EXpert Models) quantizations of LFM2-24B-A2B by LiquidAI.

Brought to you by the LocalAI team | APEX Project | Technical Report

Benchmark Results

Benchmarks are coming soon. For reference, see the APEX benchmarks on the Qwen3.5-35B-A3B architecture at mudler/Qwen3.5-35B-A3B-APEX-GGUF.

What is APEX?

APEX is a quantization strategy for Mixture-of-Experts (MoE) models. It classifies tensors by role (routed expert, shared expert, attention) and applies a layer-wise precision gradient: edge layers keep higher precision, while middle layers are compressed more aggressively. The I-variants use an imatrix calibrated on a diverse dataset (chat, code, reasoning, tool-calling, agentic traces, Wikipedia).
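
As a rough sketch of the idea (the name patterns below are illustrative guesses based on common llama.cpp conventions, not the actual APEX rules; see the project scripts for the real logic), role classification over GGUF tensor names might look like this:

def classify_tensor(name: str) -> str:
    # Map a GGUF tensor name to its role; the patterns are hypothetical
    # approximations of typical llama.cpp naming conventions.
    if "exps" in name:       # e.g. blk.N.ffn_gate_exps.weight
        return "routed_expert"
    if "shexp" in name:      # e.g. blk.N.ffn_gate_shexp.weight
        return "shared_expert"
    if "attn" in name:       # e.g. blk.N.attn_q.weight
        return "attention"
    return "other"

Each role then gets its own precision budget, with the layer-wise gradient applied on top (see the Architecture section below).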

See the APEX project for full details, technical report, and scripts.

Architecture

  • Model: LFM2-24B-A2B (lfm2_moe) by LiquidAI
  • Layers: 40 (30 convolutional + 10 full attention, hybrid)
  • Experts: 64 routed (4 active per token), plus 2 dense layers
  • Total Parameters: 24B
  • Active Parameters: ~2B per token
  • APEX Config: 5+5 symmetric edge gradient across 40 layers (sketched after this list)
  • Calibration: v1.3 diverse dataset (chat, code, reasoning, multilingual, tool-calling, Wikipedia)
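
A minimal sketch of the 5+5 symmetric edge gradient referenced in the config above (the tier names are ours, not APEX's):

def layer_tier(layer: int, n_layers: int = 40, edge: int = 5) -> str:
    # The first 5 and last 5 layers form the high-precision "edge";
    # the 30 middle layers are compressed more aggressively.
    return "edge" if layer < edge or layer >= n_layers - edge else "middle"

tiers = [layer_tier(i) for i in range(40)]
assert tiers.count("edge") == 10 and tiers.count("middle") == 30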

Run with LocalAI

local-ai run mudler/LFM2-24B-A2B-APEX-GGUF@LFM2-24B-A2B-APEX-I-Balanced.gguf
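
To fetch a variant directly instead, the standard huggingface_hub download call works; the filename below is the Balanced I-variant used in the command above:

from huggingface_hub import hf_hub_download

# Downloads the GGUF into the local Hugging Face cache and returns its path.
path = hf_hub_download(
    repo_id="mudler/LFM2-24B-A2B-APEX-GGUF",
    filename="LFM2-24B-A2B-APEX-I-Balanced.gguf",
)
print(path)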

Credits

APEX is brought to you by the LocalAI team. Developed through human-driven, AI-assisted research. Built on llama.cpp.
