# LFM2.5-1.2B-Thinking-writing-medium

🎯 WRITING-optimized | 📦 Medium pruning | ⚡ 20% of weights pruned

This model is a moderately pruned version of LiquidAI/LFM2.5-1.2B-Thinking, specialized for WRITING tasks using activation-aware weight pruning (Wanda-style).

## ✨ Key Features

  • Specialization: Optimized for Writing tasks
  • Pruning Method: Wanda-style (|W| × |activation|) importance scoring
  • Size Reduction: 20% weights pruned
  • Use Case: Balanced trade-off between size and accuracy
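The Wanda-style score above ranks each weight by its magnitude times the norm of the activations flowing into it, then zeroes the lowest-scoring fraction. A minimal NumPy sketch of that idea (function name, shapes, and per-row pruning granularity are illustrative assumptions, not the actual ZANNPS implementation):

```python
import numpy as np

def wanda_prune(W, X, sparsity=0.2):
    """Illustrative Wanda-style pruning sketch (not the ZANNPS code).

    W: weight matrix, shape (out_features, in_features)
    X: calibration activations, shape (n_samples, in_features)
    sparsity: fraction of weights to zero in each output row
    """
    # L2 norm of each input feature's activations, shape (in_features,)
    act_norm = np.linalg.norm(X, axis=0)
    # Importance score |W| * |activation norm|, broadcast over rows
    scores = np.abs(W) * act_norm
    # Zero the lowest-scoring k weights within each output row
    k = int(W.shape[1] * sparsity)
    pruned = W.copy()
    if k > 0:
        lowest = np.argsort(scores, axis=1)[:, :k]
        np.put_along_axis(pruned, lowest, 0.0, axis=1)
    return pruned
```

Unlike plain magnitude pruning, a small weight multiplying a large activation can survive, which is why calibration data is part of the pipeline.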

## 📊 Performance Comparison

| Category   | Original | Pruned | Change  |
|------------|----------|--------|---------|
| Python     | 0.0%     | 0.0%   | →       |
| HTML       | 0.0%     | 0.0%   | →       |
| Trivia     | 93.3%    | 86.7%  | ↓ 6.7%  |
| Math       | 100.0%   | 100.0% | →       |
| Reasoning  | N/A      | N/A    | N/A     |
| Medical    | 86.7%    | 93.3%  | ↑ 6.7%  |
| Linux      | 86.7%    | 86.7%  | →       |
| Writing ⭐ | 60.0%    | 60.0%  | →       |

Average: 61.0% β†’ 61.0% (+0.0%)

Writing Retention: 100.0% of original performance

Comparison Graph

## 🚀 Quick Start

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("CompactAI/LFM2.5-1.2B-Thinking-writing-medium")
tokenizer = AutoTokenizer.from_pretrained("CompactAI/LFM2.5-1.2B-Thinking-writing-medium")

# Example usage
inputs = tokenizer("Your prompt here", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

## 📋 Technical Details

| Property         | Value                                    |
|------------------|------------------------------------------|
| Base Model       | LiquidAI/LFM2.5-1.2B-Thinking            |
| Specialization   | Writing                                  |
| Prune Mode       | Medium                                   |
| Pruning Method   | Activation-based weight pruning (Wanda)  |
| Weight Reduction | 20% of weights pruned                    |

## 🔗 Related Models

This model is part of the LFM2.5-1.2B-Thinking pruned model collection. Other variants:

  • Extra-light (minimal pruning)
  • Light
  • Medium-light
  • Medium
  • Medium-heavy
  • Heavy
  • Extra-heavy (maximum compression)

## 📜 License

This model inherits the license from the base model LiquidAI/LFM2.5-1.2B-Thinking.


Generated by ZANNPS [Zeto Automatic Neural Network Pruning System]
