SmolLM3-3B-math-light
🎯 MATH-optimized | 📦 Light pruning | ⚡ 3% weights pruned
This model is a lightly pruned version of HuggingFaceTB/SmolLM3-3B, specialized for MATH tasks using activation-aware weight pruning (Wanda-style).
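For a rough idea of how Wanda-style pruning works, the sketch below scores each weight by its magnitude times the L2 norm of the corresponding input activation, then zeroes the lowest-scoring weights per output row. This is a minimal, hypothetical sketch, not the ZANNPS implementation; the function name, per-row pruning granularity, and calibration details are assumptions.

```python
import torch

def wanda_prune_layer(weight: torch.Tensor, act_norm: torch.Tensor, sparsity: float = 0.03) -> torch.Tensor:
    """Wanda-style pruning of one linear layer's weight matrix (illustrative sketch).

    weight:   (out_features, in_features) weight matrix
    act_norm: (in_features,) L2 norm of each input feature's activations,
              measured on a small calibration set
    sparsity: fraction of weights to zero out (0.03 corresponds to the "light" preset)
    """
    # Wanda importance score: |W_ij| * ||X_j||_2
    importance = weight.abs() * act_norm.unsqueeze(0)

    # Zero out the lowest-scoring weights within each output row
    num_prune = int(weight.shape[1] * sparsity)
    if num_prune == 0:
        return weight
    _, prune_idx = torch.topk(importance, num_prune, dim=1, largest=False)
    mask = torch.ones_like(weight, dtype=torch.bool)
    mask.scatter_(1, prune_idx, False)
    return weight * mask
```

In practice this scoring is applied layer by layer, with the activation norms gathered from a small number of calibration batches. The benchmark comparison below reports per-category accuracy before and after pruning.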
| Category | Original | Pruned | Change |
|---|---|---|---|
| Python | 80.0% | 100.0% | ↑ 20.0 pts |
| HTML | 0.0% | 0.0% | → |
| Trivia | 100.0% | 100.0% | → |
| Math | 100.0% | 100.0% ⭐ | → |
| Reasoning | N/A | N/A | |
| Medical | 100.0% | 100.0% | → |
| Linux | 100.0% | 100.0% | → |
| Writing | 93.3% | 80.0% | ↓ 13.3 pts |
Average: 81.9% → 82.9% (+1.0 pts)
Math Retention: 100.0% of original performance
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("CompactAI/SmolLM3-3B-math-light")
tokenizer = AutoTokenizer.from_pretrained("CompactAI/SmolLM3-3B-math-light")

# Example usage
inputs = tokenizer("Your prompt here", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
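Since SmolLM3-3B is an instruction-tuned chat model, math prompts can also go through the tokenizer's chat template. The example below is purely illustrative; it assumes the pruned checkpoint ships the base model's chat template and reuses the `model` and `tokenizer` objects loaded above.

```python
# Illustrative math prompt via the chat template
messages = [
    {"role": "user", "content": "What is the sum of the first 20 positive integers?"}
]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```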
| Property | Value |
|---|---|
| Base Model | HuggingFaceTB/SmolLM3-3B |
| Specialization | Math |
| Pruning Mode | Light |
| Pruning Method | Activation-aware weight pruning (Wanda) |
| Weight Reduction | 3% of weights pruned |
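To sanity-check the reported 3% figure locally, one rough approach is to count exact zeros in the model's 2-D weight matrices. This is a sketch under the assumption that pruned weights are stored as exact zeros; it reuses the `model` object loaded in the usage example above.

```python
# Rough sparsity check: fraction of zeroed entries in linear weight matrices
total, zeros = 0, 0
for name, param in model.named_parameters():
    if param.dim() == 2:  # only count 2-D (linear) weight matrices
        total += param.numel()
        zeros += (param == 0).sum().item()
print(f"Fraction of zeroed weights: {zeros / total:.1%}")
```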
This model is part of the SmolLM3-3B pruned model collection; other variants are available in the same collection.
This model inherits the license from the base model HuggingFaceTB/SmolLM3-3B.
Generated by ZANNPS (Zeto Automatic Neural Network Pruning System)