Qwen3-0.6B-medical-medium

🎯 MEDICAL-optimized | πŸ“¦ Medium pruning | ⚑ 30% weights pruned

This model is a moderately pruned version of Qwen/Qwen3-0.6B, specialized for MEDICAL tasks using activation-aware weight pruning (Wanda-style).

✨ Key Features

  • Specialization: Optimized for Medical tasks
  • Pruning Method: Wanda-style (|W| Γ— |activation|) importance scoring
  • Size Reduction: 30% of weights pruned
  • Use Case: Balanced trade-off between size and accuracy
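The Wanda-style scoring above ranks each weight by its magnitude times the norm of its input activation channel, then drops the lowest-scoring weights. A minimal NumPy sketch of the idea (an illustration only, not the ZANNPS implementation; the function name, per-row sparsity, and calibration shapes are assumptions):

```python
import numpy as np

def wanda_prune_mask(W, X, sparsity=0.3):
    """Illustrative Wanda-style pruning: score = |W| * ||activation||,
    then zero the lowest-scoring fraction of weights in each output row.

    W: (out_features, in_features) weight matrix
    X: (n_samples, in_features) calibration activations
    """
    act_norm = np.linalg.norm(X, axis=0)        # per-input-channel activation norm
    scores = np.abs(W) * act_norm               # elementwise importance, shape (out, in)
    k = int(W.shape[1] * sparsity)              # weights to drop per output row
    mask = np.ones_like(W, dtype=bool)
    if k > 0:
        drop = np.argsort(scores, axis=1)[:, :k]  # lowest-importance indices per row
        np.put_along_axis(mask, drop, False, axis=1)
    return mask

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 10))
X = rng.normal(size=(32, 10))
mask = wanda_prune_mask(W, X, sparsity=0.3)
print(mask.mean())  # fraction of weights kept
```

In practice the mask is computed layer by layer from a small calibration set, so low-magnitude weights that feed high-activity channels can survive while larger weights on dead channels are removed.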

πŸ“Š Performance Comparison

| Category | Original | Pruned | Change |
|----------|----------|--------|--------|
| Python | 40.0% | 0.0% | ↓ 40.0% |
| HTML | 0.0% | 0.0% | → |
| Trivia | 80.0% | 80.0% | → |
| Math | 100.0% | 100.0% | → |
| Reasoning | N/A | N/A | |
| Medical ⭐ | 93.3% | 86.7% | ↓ 6.7% |
| Linux | 100.0% | 93.3% | ↓ 6.7% |
| Writing | 33.3% | 13.3% | ↓ 20.0% |

Average accuracy: 63.8% → 53.3% (-10.5 percentage points)

Medical Retention: 92.9% of original performance
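The retention figure is simply the pruned Medical score expressed as a fraction of the original:

```python
original, pruned = 93.3, 86.7       # Medical accuracy before/after pruning

retention = pruned / original * 100  # percent of original performance retained
print(f"{retention:.1f}%")           # 92.9%
```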

Comparison Graph

πŸš€ Quick Start

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("CompactAI/Qwen3-0.6B-medical-medium")
tokenizer = AutoTokenizer.from_pretrained("CompactAI/Qwen3-0.6B-medical-medium")

# Example usage
inputs = tokenizer("Your prompt here", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

πŸ“‹ Technical Details

| Property | Value |
|----------|-------|
| Base Model | Qwen/Qwen3-0.6B |
| Specialization | Medical |
| Prune Mode | Medium |
| Pruning Method | Activation-based weight pruning (Wanda) |
| Weight Reduction | 30% of weights pruned |
| Parameters | 0.6B |
| Tensor Type | F16 (Safetensors) |

πŸ”— Related Models

This model is part of the Qwen3-0.6B pruned model collection. Other variants:

  • Extra-light (minimal pruning)
  • Light
  • Medium-light
  • Medium
  • Medium-heavy
  • Heavy
  • Extra-heavy (maximum compression)

πŸ“œ License

This model inherits the license from the base model Qwen/Qwen3-0.6B.


Generated by ZANNPS [Zeto Automatic Neural Network Pruning System]
