Qwen3-1.7B-linux-extra-heavy

🎯 Linux-optimized | 📦 Extra Heavy pruning | ⚡ 35% of weights pruned

This model is an aggressively pruned version of Qwen/Qwen3-1.7B, specialized for Linux tasks using activation-aware weight pruning (Wanda-style).

✨ Key Features

  • Specialization: Optimized for Linux tasks
  • Pruning Method: Wanda-style (|W| Γ— |activation|) importance scoring
  • Size Reduction: 35% weights pruned
  • Use Case: Maximum size reduction, best for edge deployment
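As a rough illustration of the Wanda-style scoring named above (|W| × |activation|), here is a minimal NumPy sketch: weights are scored by their magnitude times the L2 norm of the corresponding input activations, and the lowest-scoring 35% are zeroed. This is a simplified global-threshold variant for illustration only; the actual ZANNPS pruning granularity (per-layer, per-row, etc.) is not documented here.

```python
import numpy as np

def wanda_prune(W, X, sparsity=0.35):
    """Zero the lowest-importance weights, Wanda-style.

    W: (out_features, in_features) weight matrix
    X: (n_samples, in_features) calibration activations
    """
    act_norm = np.linalg.norm(X, axis=0)      # per-input-feature L2 norm
    score = np.abs(W) * act_norm              # |W| * ||activation||, broadcast over rows
    k = int(W.size * sparsity)                # number of weights to prune
    thresh = np.partition(score.ravel(), k)[k]
    mask = score >= thresh                    # keep only high-importance weights
    return W * mask, mask

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 16))
X = rng.normal(size=(32, 16))
W_pruned, mask = wanda_prune(W, X, sparsity=0.35)
```

Unlike plain magnitude pruning, this keeps small weights that multiply consistently large activations, which is why a small calibration set `X` is needed.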

πŸ“Š Performance Comparison

| Category | Original | Pruned | Change |
|----------|----------|--------|--------|
| Python | 40.0% | 0.0% | ↓ 40.0% |
| HTML | 0.0% | 0.0% | → |
| Trivia | 100.0% | 46.7% | ↓ 53.3% |
| Math | 100.0% | 73.3% | ↓ 26.7% |
| Reasoning | N/A | N/A | — |
| Medical | 93.3% | 60.0% | ↓ 33.3% |
| Linux ⭐ | 100.0% | 33.3% | ↓ 66.7% |
| Writing | 73.3% | 13.3% | ↓ 60.0% |

Average: 72.4% β†’ 32.4% (-40.0%)

Linux Retention: 33.3% of original performance
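The reported averages appear to be the unweighted mean over the seven scored categories (Reasoning is excluded as N/A). A quick check of the arithmetic from the table above:

```python
# Per-category scores from the performance table (Reasoning omitted: N/A).
original = {"Python": 40.0, "HTML": 0.0, "Trivia": 100.0, "Math": 100.0,
            "Medical": 93.3, "Linux": 100.0, "Writing": 73.3}
pruned   = {"Python": 0.0, "HTML": 0.0, "Trivia": 46.7, "Math": 73.3,
            "Medical": 60.0, "Linux": 33.3, "Writing": 13.3}

avg_orig = sum(original.values()) / len(original)            # unweighted mean
avg_pruned = sum(pruned.values()) / len(pruned)
linux_retention = pruned["Linux"] / original["Linux"] * 100  # % of original kept

print(round(avg_orig, 1), round(avg_pruned, 1), round(linux_retention, 1))
# -> 72.4 32.4 33.3
```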

*(Comparison graph not shown.)*

πŸš€ Quick Start

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("CompactAI/Qwen3-1.7B-linux-extra-heavy")
tokenizer = AutoTokenizer.from_pretrained("CompactAI/Qwen3-1.7B-linux-extra-heavy")

# Example usage
inputs = tokenizer("Your prompt here", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

πŸ“‹ Technical Details

| Property | Value |
|----------|-------|
| Base Model | Qwen/Qwen3-1.7B |
| Specialization | Linux |
| Prune Mode | Extra Heavy |
| Pruning Method | Activation-based weight pruning (Wanda) |
| Weight Reduction | 35% of weights pruned |
| Model Size | ~2B params (F16, Safetensors) |

πŸ”— Related Models

This model is part of the Qwen3-1.7B pruned model collection. Other variants:

  • Extra-light (minimal pruning)
  • Light
  • Medium-light
  • Medium
  • Medium-heavy
  • Heavy
  • Extra-heavy (maximum compression)

πŸ“œ License

This model inherits the license from the base model Qwen/Qwen3-1.7B.


Generated by ZANNPS [Zeto Automatic Neural Network Pruning System]
