LFM2.5-1.2B-Thinking-math-heavy

🎯 Math-optimized | 📦 Heavy pruning | ⚡ 30% weights pruned

This model is an aggressively pruned version of LiquidAI/LFM2.5-1.2B-Thinking, specialized for math tasks using activation-aware weight pruning (Wanda-style).

✨ Key Features

  • Specialization: Optimized for math tasks
  • Pruning Method: Wanda-style importance scoring (|W| × |activation|)
  • Size Reduction: 30% of weights pruned
  • Use Case: Heavy compression while maintaining usability
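Wanda-style scoring rates each weight by the product of its magnitude and the L2 norm of the corresponding input activation over a calibration set, then zeroes the lowest-scoring weights. A minimal NumPy sketch of that idea is below; the function names and the choice to apply the 30% ratio per output row are illustrative assumptions, not the ZANNPS implementation.

```python
import numpy as np

def wanda_scores(W, X):
    """Wanda-style importance: |W_ij| * ||X_j||_2.

    W: (out_features, in_features) weight matrix.
    X: (n_samples, in_features) calibration activations.
    """
    act_norm = np.linalg.norm(X, axis=0)  # L2 norm per input feature
    return np.abs(W) * act_norm           # broadcasts across output rows

def prune_heavy(W, X, sparsity=0.30):
    """Zero out the lowest-scoring `sparsity` fraction of weights per row."""
    scores = wanda_scores(W, X)
    k = int(W.shape[1] * sparsity)        # weights to drop in each row
    W = W.copy()
    if k > 0:
        idx = np.argsort(scores, axis=1)[:, :k]  # k lowest-scored columns
        np.put_along_axis(W, idx, 0.0, axis=1)
    return W
```

Scoring weights against calibration activations (rather than by magnitude alone) preserves small weights that feed large activations, which is why Wanda-style pruning tends to retain task performance better than pure magnitude pruning.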

📊 Performance Comparison

| Category  | Original | Pruned | Change  |
|-----------|----------|--------|---------|
| Python    | 0.0%     | 0.0%   | →       |
| Html      | 0.0%     | 0.0%   | →       |
| Trivia    | 93.3%    | 66.7%  | ↓ 26.7% |
| Math      | 100.0%   | 100.0% | ⭐ →    |
| Reasoning | N/A      | N/A    | N/A     |
| Medical   | 86.7%    | 80.0%  | ↓ 6.7%  |
| Linux     | 86.7%    | 73.3%  | ↓ 13.3% |
| Writing   | 60.0%    | 20.0%  | ↓ 40.0% |

Average: 61.0% → 48.6% (-12.4%)

Math Retention: 100.0% of original performance
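The summary figures follow directly from the table: averaging the seven scored categories (Reasoning is N/A and excluded) reproduces the reported numbers.

```python
# Per-category scores from the comparison table (Reasoning excluded as N/A):
# Python, Html, Trivia, Math, Medical, Linux, Writing
original = [0.0, 0.0, 93.3, 100.0, 86.7, 86.7, 60.0]
pruned   = [0.0, 0.0, 66.7, 100.0, 80.0, 73.3, 20.0]

avg_orig   = round(sum(original) / len(original), 1)  # 61.0
avg_pruned = round(sum(pruned) / len(pruned), 1)      # 48.6
delta      = round(avg_pruned - avg_orig, 1)          # -12.4

# Math retention: pruned math score as a percentage of the original
math_retention = round(pruned[3] / original[3] * 100, 1)  # 100.0
```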

[Comparison graph]

🚀 Quick Start

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the pruned model and its tokenizer
model = AutoModelForCausalLM.from_pretrained("CompactAI/LFM2.5-1.2B-Thinking-math-heavy")
tokenizer = AutoTokenizer.from_pretrained("CompactAI/LFM2.5-1.2B-Thinking-math-heavy")

# Example usage
inputs = tokenizer("Your prompt here", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

📋 Technical Details

| Property         | Value                                   |
|------------------|-----------------------------------------|
| Base Model       | LiquidAI/LFM2.5-1.2B-Thinking           |
| Specialization   | Math                                    |
| Prune Mode       | Heavy                                   |
| Pruning Method   | Activation-based weight pruning (Wanda) |
| Weight Reduction | 30% of weights pruned                   |

🔗 Related Models

This model is part of the LFM2.5-1.2B-Thinking pruned model collection. Other variants:

  • Extra-light (minimal pruning)
  • Light
  • Medium-light
  • Medium
  • Medium-heavy
  • Heavy
  • Extra-heavy (maximum compression)

📜 License

This model inherits the license from the base model LiquidAI/LFM2.5-1.2B-Thinking.


Generated by ZANNPS [Zeto Automatic Neural Network Pruning System]

Format: Safetensors, 1B params, F16 tensors