Qwen3-4B-python-heavy-prune

This model is a heavily pruned version of Qwen/Qwen3-4B, specialized for Python tasks.

Pruning Details

  • Base Model: Qwen/Qwen3-4B
  • Specialization: Python
  • Prune Mode: Heavy
  • Method: Activation-based weight pruning
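The exact pruning recipe is not documented here, but "activation-based weight pruning" generally means scoring each weight by combining its magnitude with the typical magnitude of the activations it multiplies (as in Wanda-style pruning), then zeroing the lowest-scoring weights. A minimal sketch on a toy linear layer, assuming a simplified score of |weight| × mean |input| per channel:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear layer: weight matrix (out_features, in_features)
# plus a batch of calibration activations for its input.
W = rng.normal(size=(8, 16))
X = rng.normal(size=(64, 16))  # calibration inputs

# Activation-based importance score: |W| scaled by the mean absolute
# activation of the corresponding input channel (a simplified,
# illustrative score -- not necessarily the one used for this model).
act_norm = np.abs(X).mean(axis=0)   # shape (16,)
scores = np.abs(W) * act_norm       # shape (8, 16)

# "Heavy" pruning: zero out the lowest-scoring 50% of weights.
sparsity = 0.5
threshold = np.quantile(scores, sparsity)
mask = scores >= threshold
W_pruned = W * mask
```

The pruned matrix keeps its shape, so it drops into the model unchanged; the zeros only pay off in size or speed once stored or executed in a sparse-aware format.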

Performance Comparison

Category    Original  Pruned
Python        0.0%    20.0%
HTML          6.7%    33.3%
Trivia       86.7%    80.0%
Math         40.0%    46.7%
Reasoning    60.0%    60.0%

Usage

from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("CompactAI/Qwen3-4B-python-heavy-prune")
tokenizer = AutoTokenizer.from_pretrained("CompactAI/Qwen3-4B-python-heavy-prune")

License

This model inherits the license from the base model.

Model Details

  • Model size: 4B params
  • Tensor type: F16 (Safetensors)

Model tree for CompactAI/Qwen3-4B-python-heavy-prune

  • Base model: Qwen/Qwen3-4B-Base
  • Finetuned: Qwen/Qwen3-4B
  • This model: CompactAI/Qwen3-4B-python-heavy-prune
