---
license: apache-2.0
tags:
- pruned
- python
- optimized
base_model: Qwen/Qwen3-4B
---
# Qwen3-4B-python-heavy-prune
This model is a **heavily** pruned version of [Qwen/Qwen3-4B](https://huggingface.co/Qwen/Qwen3-4B), specialized for **Python** tasks.
## Pruning Details
- **Base Model**: Qwen/Qwen3-4B
- **Specialization**: Python
- **Prune Mode**: Heavy
- **Method**: Activation-based weight pruning
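Activation-based pruning scores each weight by how much it contributes to the layer's output on representative inputs, then zeroes the lowest-scoring weights. The exact criterion used for this model is not published; the sketch below uses one common proxy (weight magnitude scaled by mean absolute input activation), and all function and variable names are illustrative.

```python
import numpy as np

def activation_prune(weight, activations, sparsity):
    """Zero the weights with the lowest activation-weighted importance.

    importance = |W| * mean(|activations|) per input feature -- a common
    proxy criterion, assumed here for illustration only.
    """
    # Mean absolute activation per input feature, shape (in_features,)
    act_scale = np.abs(activations).mean(axis=0)
    # Broadcast the per-feature scale across the weight rows
    importance = np.abs(weight) * act_scale
    k = int(importance.size * sparsity)
    if k == 0:
        return weight.copy()
    # k-th smallest importance value is the pruning threshold
    threshold = np.partition(importance.ravel(), k - 1)[k - 1]
    pruned = weight.copy()
    pruned[importance <= threshold] = 0.0
    return pruned

# Toy example: a 4x4 weight matrix calibrated on a batch of 8 activations
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4))
X = rng.normal(size=(8, 4))
W_pruned = activation_prune(W, X, sparsity=0.5)
```

At 50% sparsity, at least half of the entries in `W_pruned` are zeroed; a "heavy" prune would use a higher sparsity target.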
## Performance Comparison
| Category | Original | Pruned |
|----------|----------|--------|
| Python | 0.0% | 20.0% |
| HTML | 6.7% | 33.3% |
| Trivia | 86.7% | 80.0% |
| Math | 40.0% | 46.7% |
| Reasoning | 60.0% | 60.0% |

## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("CompactAI/Qwen3-4B-python-heavy-prune")
tokenizer = AutoTokenizer.from_pretrained("CompactAI/Qwen3-4B-python-heavy-prune")
```
## License
This model inherits the Apache 2.0 license from the base model.