SAKD: SWAN-Guided Knowledge Distillation 📖
Generate quantization-ready student models via guided distillation.
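The listing gives no implementation details for SAKD, so the sketch below only illustrates the generic idea behind guided distillation: train a student against a full-precision teacher by blending soft teacher targets with hard labels. All names (`distillation_loss`, `temperature`, `alpha`) are illustrative assumptions, not SAKD's actual API, and any SWAN-specific guidance term is omitted.

```python
# Hypothetical sketch of a standard logit-distillation loss; not SAKD's method.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      labels: torch.Tensor,
                      temperature: float = 2.0,
                      alpha: float = 0.5) -> torch.Tensor:
    """Blend soft teacher targets with hard-label cross-entropy."""
    # Soft targets: KL divergence between temperature-scaled distributions,
    # rescaled by T^2 so gradients stay comparable across temperatures.
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * (temperature ** 2)
    # Hard targets: ordinary cross-entropy against ground-truth labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard
```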
Sensitivity-Aware Training (SAT) 📄
Train LLMs to be quantization-ready with sensitivity-aware methods.
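The listing does not say how SAT defines sensitivity. One common proxy, sketched below under that assumption, is the loss increase observed when a single layer's weights are fake-quantized while the rest of the model stays in full precision. The `loss_fn(model, batch)` hook is hypothetical and assumed to return a scalar loss.

```python
# Illustrative per-layer sensitivity probe; the loss_fn signature and the
# choice of round-to-nearest fake quantization are assumptions, not SAT's API.
import torch

def fake_quantize(w: torch.Tensor, bits: int = 4) -> torch.Tensor:
    """Symmetric per-tensor round-to-nearest fake quantization."""
    qmax = 2 ** (bits - 1) - 1
    scale = (w.abs().max() / qmax).clamp(min=1e-8)
    return (w / scale).round().clamp(-qmax - 1, qmax) * scale

@torch.no_grad()
def layer_sensitivities(model: torch.nn.Module, loss_fn, batch, bits: int = 4):
    """Loss increase per Linear layer when only that layer is quantized."""
    base = loss_fn(model, batch).item()
    scores = {}
    for name, module in model.named_modules():
        if isinstance(module, torch.nn.Linear):
            original = module.weight.data.clone()
            module.weight.data = fake_quantize(original, bits)
            scores[name] = loss_fn(model, batch).item() - base
            module.weight.data = original  # restore full precision
    return scores
```

Scores like these could then weight a regularizer or schedule during training so that the most sensitive layers are protected, though the blurb above does not specify SAT's actual use of them.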
SWAN: Data-Free Mixed-Precision Quantization 🦢
Quantize LLMs without data using per-tensor mixed precision.
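Per-tensor mixed precision without calibration data implies the bit-width decision must come from the weights alone. The sketch below uses excess kurtosis as one plausible weight-only outlier statistic and gives the heaviest-tailed tensors more bits; the statistic, the thresholds, and the function names are assumptions for illustration, not SWAN's published allocation rule.

```python
# Hypothetical data-free bit-allocation heuristic; not SWAN's actual criterion.
import torch

def excess_kurtosis(w: torch.Tensor) -> float:
    """Excess kurtosis of the flattened weights (heavy tails are harder to quantize)."""
    x = w.flatten().float()
    z = (x - x.mean()) / (x.std() + 1e-8)
    return (z ** 4).mean().item() - 3.0

def assign_bits(model: torch.nn.Module, low: int = 3, high: int = 8,
                frac_high: float = 0.2) -> dict:
    """Give the top `frac_high` most outlier-heavy Linear weights `high` bits."""
    scores = {name: excess_kurtosis(m.weight)
              for name, m in model.named_modules()
              if isinstance(m, torch.nn.Linear)}
    ranked = sorted(scores, key=scores.get, reverse=True)
    cutoff = max(1, int(len(ranked) * frac_high))
    return {name: (high if i < cutoff else low)
            for i, name in enumerate(ranked)}
```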