|
|
--- |
|
|
license: apache-2.0 |
|
|
tags: |
|
|
- pruned |
|
|
- math |
|
|
- optimized |
|
|
- wanda |
|
|
- activation-pruning |
|
|
base_model: HuggingFaceTB/SmolLM3-3B |
|
|
pipeline_tag: text-generation |
|
|
--- |
|
|
|
|
|
# SmolLM3-3B-math-extra-heavy |
|
|
|
|
|
> 🎯 **MATH-optimized** | 📦 **Extra Heavy** pruning | ⚡ **35% weights pruned** |
|
|
|
|
|
This model is an **extremely pruned** version of [HuggingFaceTB/SmolLM3-3B](https://huggingface.co/HuggingFaceTB/SmolLM3-3B), specialized for **MATH** tasks using activation-aware weight pruning (Wanda-style). |
|
|
|
|
|
## ✨ Key Features |
|
|
|
|
|
- **Specialization**: Optimized for Math tasks |
|
|
- **Pruning Method**: Wanda-style (|W| × |activation|) importance scoring |
|
|
- **Size Reduction**: 35% of weights pruned |
|
|
- **Use Case**: Maximum size reduction, best for edge deployment |
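The Wanda-style scoring above (|W| × |activation|) can be sketched as follows. This is an illustrative reimplementation, not the exact ZANNPS pruning code; `prune_wanda`, the per-row pruning granularity, and the calibration-activation shapes are assumptions based on the published Wanda method.

```python
import torch

def prune_wanda(weight: torch.Tensor, acts: torch.Tensor, sparsity: float = 0.35) -> torch.Tensor:
    """Zero out the lowest-importance weights in each output row.

    weight: (out_features, in_features) linear-layer weight matrix
    acts:   (n_calibration_samples, in_features) inputs seen by this layer
    """
    # Wanda importance: |W| scaled by the L2 norm of each input channel's activations.
    act_norm = acts.norm(p=2, dim=0)          # (in_features,)
    scores = weight.abs() * act_norm          # broadcasts over output rows

    # Remove the lowest-scoring fraction of weights within each output row.
    n_prune = int(weight.shape[1] * sparsity)
    idx = scores.argsort(dim=1)[:, :n_prune]  # indices of least-important weights per row
    mask = torch.ones_like(weight, dtype=torch.bool)
    mask.scatter_(1, idx, False)
    return weight * mask
```

In practice the activations come from a small calibration set (here, presumably math-domain text), which is what steers the pruning toward preserving math-relevant weights.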
|
|
|
|
|
## 📊 Performance Comparison |
|
|
|
|
|
| Category | Original | Pruned | Change | |
|
|
|----------|----------|--------|--------| |
|
|
| Python | 80.0% | 0.0% | ↓ 80.0% | |
|
|
| HTML | 0.0% | 0.0% | → | |
|
|
| Trivia | 100.0% | 40.0% | ↓ 60.0% | |
|
|
| **Math** | 100.0% | 73.3% ⭐ | ↓ 26.7% | |
|
|
| Reasoning | N/A | N/A | — | |
|
|
| Medical | 100.0% | 20.0% | ↓ 80.0% | |
|
|
| Linux | 100.0% | 6.7% | ↓ 93.3% | |
|
|
| Writing | 93.3% | 20.0% | ↓ 73.3% | |
|
|
|
|
|
**Average**: 81.9% → 22.9% (-59.0%) |
|
|
|
|
|
**Math Retention**: 73.3% of original performance |
|
|
|
|
|
 |
|
|
|
|
|
## 🚀 Quick Start |
|
|
|
|
|
```python |
|
|
from transformers import AutoModelForCausalLM, AutoTokenizer |
|
|
|
|
|
model = AutoModelForCausalLM.from_pretrained("CompactAI/SmolLM3-3B-math-extra-heavy") |
|
|
tokenizer = AutoTokenizer.from_pretrained("CompactAI/SmolLM3-3B-math-extra-heavy") |
|
|
|
|
|
# Example usage |
|
|
inputs = tokenizer("Your prompt here", return_tensors="pt") |
|
|
outputs = model.generate(**inputs, max_new_tokens=100) |
|
|
print(tokenizer.decode(outputs[0], skip_special_tokens=True)) |
|
|
``` |
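To verify the stated 35% weight reduction on the loaded checkpoint, you can measure the fraction of zeroed entries in the linear layers. This is a generic sketch, assuming the pruned weights are stored as explicit zeros; the helper name is illustrative.

```python
import torch
import torch.nn as nn

def linear_weight_sparsity(model: nn.Module) -> float:
    """Fraction of exactly-zero entries across all nn.Linear weight matrices."""
    total = 0
    zeros = 0
    for module in model.modules():
        if isinstance(module, nn.Linear):
            total += module.weight.numel()
            zeros += (module.weight == 0).sum().item()
    return zeros / total
```

Passing the model loaded in the Quick Start snippet to `linear_weight_sparsity` should report a value in the neighborhood of 0.35 if pruning is stored unpacked.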
|
|
|
|
|
## 📋 Technical Details |
|
|
|
|
|
| Property | Value | |
|
|
|----------|-------| |
|
|
| Base Model | [HuggingFaceTB/SmolLM3-3B](https://huggingface.co/HuggingFaceTB/SmolLM3-3B) | |
|
|
| Specialization | Math | |
|
|
| Prune Mode | Extra Heavy | |
|
|
| Pruning Method | Activation-based weight pruning (Wanda) | |
|
|
| Weight Reduction | 35% of weights pruned | |
|
|
|
|
|
## 🔗 Related Models |
|
|
|
|
|
This model is part of the **SmolLM3-3B** pruned model collection. Other variants: |
|
|
- Extra-light (minimal pruning) |
|
|
- Light |
|
|
- Medium-light |
|
|
- Medium |
|
|
- Medium-heavy |
|
|
- Heavy |
|
|
- Extra-heavy (maximum compression) |
|
|
|
|
|
## 📜 License |
|
|
|
|
|
This model inherits the license from the base model [HuggingFaceTB/SmolLM3-3B](https://huggingface.co/HuggingFaceTB/SmolLM3-3B). |
|
|
|
|
|
--- |
|
|
*Generated by ZANNPS [Zeto Automatic Neural Network Pruning System]* |
|
|
|