---
license: apache-2.0
tags:
- pruned
- python
- optimized
base_model: Qwen/Qwen3-0.6B
---
# Qwen3-0.6B-python-heavy-prune
This model is a **heavily** pruned version of [Qwen/Qwen3-0.6B](https://huggingface.co/Qwen/Qwen3-0.6B), specialized for **Python** tasks.
## Pruning Details
- **Base Model**: Qwen/Qwen3-0.6B
- **Specialization**: Python
- **Prune Mode**: Heavy
- **Method**: Activation-based weight pruning
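The card does not publish the pruning code itself. As a rough illustration of what "activation-based weight pruning" typically means, each weight is scored by its magnitude times the average absolute activation of its input feature (measured on calibration data), and the lowest-scoring fraction is zeroed. This is a minimal sketch, not the actual method used here; the scoring rule and sparsity level are assumptions:

```python
import torch

def activation_prune(weight: torch.Tensor, activations: torch.Tensor,
                     sparsity: float = 0.5) -> torch.Tensor:
    """Zero the lowest-importance entries of a linear layer's weight matrix.

    weight:      (out_features, in_features)
    activations: (n_samples, in_features) calibration activations
    """
    # Mean |activation| per input feature, shape (in_features,)
    importance = activations.abs().mean(dim=0)
    # Score each weight by its magnitude times its input's activation level
    scores = weight.abs() * importance  # broadcasts over output rows
    # Zero everything at or below the sparsity quantile of the scores
    threshold = torch.quantile(scores.flatten(), sparsity)
    return weight * (scores > threshold)

# Example: prune half of an 8x16 weight matrix using 32 calibration samples
w = torch.randn(8, 16)
calib = torch.randn(32, 16)
pruned = activation_prune(w, calib, sparsity=0.5)
```

A "heavy" prune would simply use a higher `sparsity` value, which is consistent with the sharp drop in off-domain scores shown in the table below.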
## Performance Comparison
| Category | Original | Pruned |
|----------|----------|--------|
| Python | 13.3% | 13.3% |
| HTML | 53.3% | 0.0% |
| Trivia | 53.3% | 26.7% |
| Math | 40.0% | 20.0% |
| Reasoning | 33.3% | 0.0% |

## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the pruned model and its tokenizer
model = AutoModelForCausalLM.from_pretrained("CompactAI/Qwen3-0.6B-python-heavy-prune-prune")
tokenizer = AutoTokenizer.from_pretrained("CompactAI/Qwen3-0.6B-python-heavy-prune-prune")

# Generate a completion for a Python prompt
inputs = tokenizer("def fibonacci(n):", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## License
This model inherits the license from the base model.