POP: Prefill-Only Pruning for Efficient Large Model Inference Paper • 2602.03295 • Published 2 days ago • 4
A$^2$ATS: Retrieval-Based KV Cache Reduction via Windowed Rotary Position Embedding and Query-Aware Vector Quantization Paper • 2502.12665 • Published Feb 18, 2025 • 1
CHESS: Optimizing LLM Inference via Channel-Wise Thresholding and Selective Sparsification Paper • 2409.01366 • Published Sep 2, 2024 • 1