{ "pdf_info": [ { "para_blocks": [ { "bbox": [ 86, 76, 507, 111 ], "type": "title", "angle": 0, "lines": [ { "bbox": [ 86, 76, 507, 111 ], "spans": [ { "bbox": [ 86, 76, 507, 111 ], "type": "inline_equation", "content": "\\mathbf{A}^{2}" }, { "bbox": [ 86, 76, 507, 111 ], "type": "text", "content": " ATS: Retrieval-Based KV Cache Reduction via Windowed Rotary Position Embedding and Query-Aware Vector Quantization" } ] } ], "index": 0 }, { "bbox": [ 117, 127, 478, 158 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 117, 127, 478, 158 ], "spans": [ { "bbox": [ 117, 127, 478, 158 ], "type": "text", "content": "Junhui He" }, { "bbox": [ 117, 127, 478, 158 ], "type": "inline_equation", "content": "^{1,2*}" }, { "bbox": [ 117, 127, 478, 158 ], "type": "text", "content": ", Junna Xing" }, { "bbox": [ 117, 127, 478, 158 ], "type": "inline_equation", "content": "^{2*†}" }, { "bbox": [ 117, 127, 478, 158 ], "type": "text", "content": ", Nan Wang" }, { "bbox": [ 117, 127, 478, 158 ], "type": "inline_equation", "content": "^{2*}" }, { "bbox": [ 117, 127, 478, 158 ], "type": "text", "content": ", Rui Xu" }, { "bbox": [ 117, 127, 478, 158 ], "type": "inline_equation", "content": "^{2,3*}" }, { "bbox": [ 117, 127, 478, 158 ], "type": "text", "content": ", Shangyu Wu" }, { "bbox": [ 117, 127, 478, 158 ], "type": "inline_equation", "content": "^{4}" }, { "bbox": [ 117, 127, 478, 158 ], "type": "text", "content": ", Peng Zhou" }, { "bbox": [ 117, 127, 478, 158 ], "type": "inline_equation", "content": "^{2}" }, { "bbox": [ 117, 127, 478, 158 ], "type": "text", "content": ", Qiang Liu" }, { "bbox": [ 117, 127, 478, 158 ], "type": "inline_equation", "content": "^{2}" }, { "bbox": [ 117, 127, 478, 158 ], "type": "text", "content": ", Chun Jason Xue" }, { "bbox": [ 117, 127, 478, 158 ], "type": "inline_equation", "content": "^{5}" }, { "bbox": [ 117, 127, 478, 158 ], "type": "text", "content": ", Qingan Li" }, { "bbox": [ 117, 127, 478, 158 ], "type": "inline_equation", "content": 
"^{1†}" } ] } ], "index": 1 }, { "bbox": [ 149, 158, 447, 186 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 149, 158, 447, 186 ], "spans": [ { "bbox": [ 149, 158, 447, 186 ], "type": "inline_equation", "content": "^{1}" }, { "bbox": [ 149, 158, 447, 186 ], "type": "text", "content": "Wuhan University, " }, { "bbox": [ 149, 158, 447, 186 ], "type": "inline_equation", "content": "^{2}" }, { "bbox": [ 149, 158, 447, 186 ], "type": "text", "content": "Alibaba Cloud Computing, " }, { "bbox": [ 149, 158, 447, 186 ], "type": "inline_equation", "content": "^{3}" }, { "bbox": [ 149, 158, 447, 186 ], "type": "text", "content": "Jinan University, " }, { "bbox": [ 149, 158, 447, 186 ], "type": "inline_equation", "content": "^{4}" }, { "bbox": [ 149, 158, 447, 186 ], "type": "text", "content": "City University of Hong Kong, " }, { "bbox": [ 149, 158, 447, 186 ], "type": "inline_equation", "content": "^{5}" }, { "bbox": [ 149, 158, 447, 186 ], "type": "text", "content": "MBZUAI" } ] } ], "index": 2 }, { "bbox": [ 155, 219, 202, 232 ], "type": "title", "angle": 0, "lines": [ { "bbox": [ 155, 219, 202, 232 ], "spans": [ { "bbox": [ 155, 219, 202, 232 ], "type": "text", "content": "Abstract" } ] } ], "index": 3 }, { "bbox": [ 86, 239, 274, 610 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 86, 239, 274, 610 ], "spans": [ { "bbox": [ 86, 239, 274, 610 ], "type": "text", "content": "Long context large language models (LLMs) pose significant challenges for efficient serving due to the large memory footprint and high access overhead of KV cache. Retrieval-based KV cache reduction methods can mitigate these challenges, typically by offloading the complete KV cache to CPU and retrieving necessary tokens on demand during inference. However, these methods still suffer from unsatisfactory accuracy degradation and extra retrieval overhead. 
To address these limitations, this paper proposes A" }, { "bbox": [ 86, 239, 274, 610 ], "type": "inline_equation", "content": "^2" }, { "bbox": [ 86, 239, 274, 610 ], "type": "text", "content": "ATS, a novel retrieval-based KV cache reduction method. A" }, { "bbox": [ 86, 239, 274, 610 ], "type": "inline_equation", "content": "^2" }, { "bbox": [ 86, 239, 274, 610 ], "type": "text", "content": "ATS aims to obtain an accurate approximation of attention scores by applying the vector quantization technique to key states, thereby enabling efficient and precise retrieval of the top-K tokens. First, we propose Windowed Rotary Position Embedding, which decouples the positional dependency from query and key states after position embedding. Then, we propose query-aware vector quantization that optimizes the objective of attention score approximation directly. Finally, we design the heterogeneous inference architecture for KV cache offloading, enabling long context serving with larger batch sizes. Experimental results demonstrate that A" }, { "bbox": [ 86, 239, 274, 610 ], "type": "inline_equation", "content": "^2" }, { "bbox": [ 86, 239, 274, 610 ], "type": "text", "content": "ATS can achieve a lower performance degradation with similar or lower overhead compared to existing methods, thereby increasing long context serving throughput by up to " }, { "bbox": [ 86, 239, 274, 610 ], "type": "inline_equation", "content": "2.7\\times" }, { "bbox": [ 86, 239, 274, 610 ], "type": "text", "content": "." 
} ] } ], "index": 4 }, { "bbox": [ 68, 617, 154, 630 ], "type": "title", "angle": 0, "lines": [ { "bbox": [ 68, 617, 154, 630 ], "spans": [ { "bbox": [ 68, 617, 154, 630 ], "type": "text", "content": "1 Introduction" } ] } ], "index": 5 }, { "bbox": [ 67, 638, 291, 746 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 67, 638, 291, 746 ], "spans": [ { "bbox": [ 67, 638, 291, 746 ], "type": "text", "content": "Large language models (LLMs) with long context windows (OpenAI, 2023; Reid et al., 2024; Dubey et al., 2024; Jiang et al., 2024; Yang et al., 2024a; DeepSeek-AI et al., 2024) are driving advancements in AI applications. However, these models pose significant challenges for efficient serving. Their Transformer-based (Vaswani et al., 2017) architecture generates and maintains a Key-Value" } ] } ], "index": 6 }, { "bbox": [ 302, 219, 526, 353 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 302, 219, 526, 353 ], "spans": [ { "bbox": [ 302, 219, 526, 353 ], "type": "text", "content": "(KV) cache during inference to store intermediate results and avoid re-computation. As the context length increases, the size of the KV cache grows proportionally, leading to severe overheads. First, the size of KV cache accessed when generating each token increases, resulting in a GPU memory bandwidth bottleneck. Moreover, the large KV cache size of each request limits the maximum feasible batch size, resulting in suboptimal GPU utilization." } ] } ], "index": 7 }, { "bbox": [ 302, 355, 526, 584 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 302, 355, 526, 584 ], "spans": [ { "bbox": [ 302, 355, 526, 584 ], "type": "text", "content": "Various methods were proposed to address these challenges from different perspectives. Quantization-based methods (Liu et al., 2024b; Hooper et al., 2024) compress KV cache by using lower bit-width representations for KV cache elements. 
Eviction-based methods (Xiao et al., 2024; Zhang et al., 2023; Li et al., 2024; Yang et al., 2024b) reduce the KV cache size by directly evicting unimportant tokens from memory. Retrieval-based methods (Tang et al., 2024; Singhania et al., 2024; Zhang et al., 2024; Liu et al., 2024a; Chen et al., 2025) offload the complete KV cache to CPU memory and retrieve necessary tokens on demand during inference. However, these methods still face challenges of limited compression ratio, unsatisfactory accuracy degradation, or extra retrieval overhead." } ] } ], "index": 8 }, { "bbox": [ 302, 586, 526, 775 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 302, 586, 526, 775 ], "spans": [ { "bbox": [ 302, 586, 526, 775 ], "type": "text", "content": "To address the above limitations, this paper proposes A" }, { "bbox": [ 302, 586, 526, 775 ], "type": "inline_equation", "content": "^2" }, { "bbox": [ 302, 586, 526, 775 ], "type": "text", "content": "ATS, a novel retrieval-based KV cache reduction method. A" }, { "bbox": [ 302, 586, 526, 775 ], "type": "inline_equation", "content": "^2" }, { "bbox": [ 302, 586, 526, 775 ], "type": "text", "content": "ATS aims to obtain an Accurate Approximation of ATtention Scores by applying vector quantization technique to key states, thereby enabling efficient and precise retrieval of the top-K tokens. In order to achieve this goal, we face two main challenges. First, the position-dependent nature of key states after applying position embedding hinders the direct application of shared codebooks across varying inputs. Second, directly utilizing the conventional vector quantization fails to guarantee an accurate approximation of attention scores. 
To overcome these challenges, the main contributions" } ] } ], "index": 9 } ], "discarded_blocks": [ { "bbox": [ 83, 751, 163, 762 ], "type": "page_footnote", "angle": 0, "lines": [ { "bbox": [ 83, 751, 163, 762 ], "spans": [ { "bbox": [ 83, 751, 163, 762 ], "type": "text", "content": "*Equal contributions." } ] } ], "index": 10 }, { "bbox": [ 83, 762, 174, 774 ], "type": "page_footnote", "angle": 0, "lines": [ { "bbox": [ 83, 762, 174, 774 ], "spans": [ { "bbox": [ 83, 762, 174, 774 ], "type": "text", "content": "† Corresponding authors." } ] } ], "index": 11 }, { "bbox": [ 283, 780, 311, 791 ], "type": "page_number", "angle": 0, "lines": [ { "bbox": [ 283, 780, 311, 791 ], "spans": [ { "bbox": [ 283, 780, 311, 791 ], "type": "text", "content": "12451" } ] } ], "index": 13 }, { "bbox": [ 131, 795, 461, 818 ], "type": "footer", "angle": 0, "lines": [ { "bbox": [ 131, 795, 461, 818 ], "spans": [ { "bbox": [ 131, 795, 461, 818 ], "type": "text", "content": "Findings of the Association for Computational Linguistics: ACL 2025, pages 12451-12463 July 27 - August 1, 2025 ©2025 Association for Computational Linguistics" } ] } ], "index": 14 } ], "page_size": [ 595, 841 ], "page_idx": 0 }, { "para_blocks": [ { "bbox": [ 67, 71, 191, 84 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 67, 71, 191, 84 ], "spans": [ { "bbox": [ 67, 71, 191, 84 ], "type": "text", "content": "in this paper are as follows:" } ] } ], "index": 0 }, { "bbox": [ 80, 85, 291, 301 ], "type": "list", "angle": 0, "index": 5, "blocks": [ { "bbox": [ 80, 85, 290, 166 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 80, 85, 290, 166 ], "spans": [ { "bbox": [ 80, 85, 290, 166 ], "type": "text", "content": "- We observe high inter-input similarities between codebooks of key states before position embedding, and the objective misalignment between vector quantization and attention score approximation by experimental and theoretical analysis;" } ] } ], "index": 1 }, { "bbox": [ 80, 167, 291, 220 ], 
"type": "text", "angle": 0, "lines": [ { "bbox": [ 80, 167, 291, 220 ], "spans": [ { "bbox": [ 80, 167, 291, 220 ], "type": "text", "content": "- We propose Windowed Rotary Position Embedding to decouple the positional dependency from query and key states after position embedding," } ] } ], "index": 2 }, { "bbox": [ 80, 222, 291, 261 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 80, 222, 291, 261 ], "spans": [ { "bbox": [ 80, 222, 291, 261 ], "type": "text", "content": "- We propose and query-aware vector quantization that directly optimizes the objective of attention score approximation;" } ] } ], "index": 3 }, { "bbox": [ 80, 262, 290, 301 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 80, 262, 290, 301 ], "spans": [ { "bbox": [ 80, 262, 290, 301 ], "type": "text", "content": "- We design the heterogeneous inference system for KV cache offloading, enabling long context serving with larger batch sizes." } ] } ], "index": 4 } ], "sub_type": "text" }, { "bbox": [ 67, 303, 291, 384 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 67, 303, 291, 384 ], "spans": [ { "bbox": [ 67, 303, 291, 384 ], "type": "text", "content": "Experimental results demonstrate that " }, { "bbox": [ 67, 303, 291, 384 ], "type": "inline_equation", "content": "\\mathrm{A}^2\\mathrm{ATS}" }, { "bbox": [ 67, 303, 291, 384 ], "type": "text", "content": " can achieve a low accuracy degradation of 2.2 on Llama-3.1-8B and 0.4 on Mistral-7B while accessing only " }, { "bbox": [ 67, 303, 291, 384 ], "type": "inline_equation", "content": "6\\%" }, { "bbox": [ 67, 303, 291, 384 ], "type": "text", "content": " of the entire KV cache, thereby increasing long context serving throughput by up to " }, { "bbox": [ 67, 303, 291, 384 ], "type": "inline_equation", "content": "2.7\\times" }, { "bbox": [ 67, 303, 291, 384 ], "type": "text", "content": ". Our source code is publicly available1." 
} ] } ], "index": 6 }, { "bbox": [ 67, 411, 158, 423 ], "type": "title", "angle": 0, "lines": [ { "bbox": [ 67, 411, 158, 423 ], "spans": [ { "bbox": [ 67, 411, 158, 423 ], "type": "text", "content": "2 Preliminaries" } ] } ], "index": 7 }, { "bbox": [ 67, 433, 260, 461 ], "type": "title", "angle": 0, "lines": [ { "bbox": [ 67, 433, 260, 461 ], "spans": [ { "bbox": [ 67, 433, 260, 461 ], "type": "text", "content": "2.1 Self-Attention Modules and Rotary Position Embedding" } ] } ], "index": 8 }, { "bbox": [ 67, 466, 291, 547 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 67, 466, 291, 547 ], "spans": [ { "bbox": [ 67, 466, 291, 547 ], "type": "text", "content": "Self-attention modules (Vaswani et al., 2017) and Rotary Position Embedding (RoPE) (Su et al., 2021) have become the de facto standard components of state-of-the-art (SOTA) LLMs (Dubey et al., 2024; Yang et al., 2024a; Jiang et al., 2024; DeepSeek-AI et al., 2024)." } ] } ], "index": 9 }, { "bbox": [ 67, 549, 291, 711 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 67, 549, 291, 711 ], "spans": [ { "bbox": [ 67, 549, 291, 711 ], "type": "text", "content": "In the self-attention module, during decoding phase, the inference process begins by linearly projecting the input states of the " }, { "bbox": [ 67, 549, 291, 711 ], "type": "inline_equation", "content": "i" }, { "bbox": [ 67, 549, 291, 711 ], "type": "text", "content": "-th token into query " }, { "bbox": [ 67, 549, 291, 711 ], "type": "inline_equation", "content": "(q_{i})" }, { "bbox": [ 67, 549, 291, 711 ], "type": "text", "content": ", key " }, { "bbox": [ 67, 549, 291, 711 ], "type": "inline_equation", "content": "(k_{i})" }, { "bbox": [ 67, 549, 291, 711 ], "type": "text", "content": ", and value " }, { "bbox": [ 67, 549, 291, 711 ], "type": "inline_equation", "content": "(v_{i})" }, { "bbox": [ 67, 549, 291, 711 ], "type": "text", "content": " states, where " }, { "bbox": [ 67, 549, 291, 711 ], "type": "inline_equation", "content": 
"q_{i}, k_{i}, v_{i} \\in \\mathbb{R}^{1 \\times d}" }, { "bbox": [ 67, 549, 291, 711 ], "type": "text", "content": ", and " }, { "bbox": [ 67, 549, 291, 711 ], "type": "inline_equation", "content": "d" }, { "bbox": [ 67, 549, 291, 711 ], "type": "text", "content": " denotes the number of channels or hidden dimensions per head. To enable the model to effectively capture the positional relationships between tokens, position embeddings are then applied to the query and key states. These hidden states before and after this transformation are abbreviated as pre-PE and post-PE states, respectively." } ] } ], "index": 10 }, { "bbox": [ 67, 712, 290, 752 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 67, 712, 290, 752 ], "spans": [ { "bbox": [ 67, 712, 290, 752 ], "type": "text", "content": "RoPE is a commonly used position embedding in SOTA LLMs. Specifically, for the " }, { "bbox": [ 67, 712, 290, 752 ], "type": "inline_equation", "content": "i" }, { "bbox": [ 67, 712, 290, 752 ], "type": "text", "content": "-th token, a position-dependent rotation matrix " }, { "bbox": [ 67, 712, 290, 752 ], "type": "inline_equation", "content": "R_{i} \\in \\mathbb{R}^{d \\times d}" }, { "bbox": [ 67, 712, 290, 752 ], "type": "text", "content": " is" } ] } ], "index": 11 }, { "bbox": [ 302, 71, 525, 98 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 302, 71, 525, 98 ], "spans": [ { "bbox": [ 302, 71, 525, 98 ], "type": "text", "content": "applied to the query " }, { "bbox": [ 302, 71, 525, 98 ], "type": "inline_equation", "content": "q_{i}" }, { "bbox": [ 302, 71, 525, 98 ], "type": "text", "content": " and key " }, { "bbox": [ 302, 71, 525, 98 ], "type": "inline_equation", "content": "k_{i}" }, { "bbox": [ 302, 71, 525, 98 ], "type": "text", "content": ", to obtain their post-PE counterparts, denoted by " }, { "bbox": [ 302, 71, 525, 98 ], "type": "inline_equation", "content": "\\tilde{q}_i" }, { "bbox": [ 302, 71, 525, 98 ], "type": "text", "content": " and " }, { 
"bbox": [ 302, 71, 525, 98 ], "type": "inline_equation", "content": "\\tilde{k}_i" }, { "bbox": [ 302, 71, 525, 98 ], "type": "text", "content": ":" } ] } ], "index": 12 }, { "bbox": [ 360, 111, 525, 126 ], "type": "interline_equation", "angle": 0, "lines": [ { "bbox": [ 360, 111, 525, 126 ], "spans": [ { "bbox": [ 360, 111, 525, 126 ], "type": "interline_equation", "content": "\\tilde {q} _ {i} = q _ {i} R _ {i}, \\quad \\tilde {k} _ {i} = k _ {i} R _ {i} \\tag {1}", "image_path": "187867afe486e458592aecee758521efac74a3734e9cef0a424f23847c3c5089.jpg" } ] } ], "index": 13 }, { "bbox": [ 302, 132, 525, 212 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 302, 132, 525, 212 ], "spans": [ { "bbox": [ 302, 132, 525, 212 ], "type": "text", "content": "Then the matrices of KV cache of the context can be denoted by " }, { "bbox": [ 302, 132, 525, 212 ], "type": "inline_equation", "content": "\\tilde{K} = [\\tilde{k}_1;\\tilde{k}_2;\\dots ;\\tilde{k}_n]\\in \\mathbb{R}^{n\\times d}" }, { "bbox": [ 302, 132, 525, 212 ], "type": "text", "content": " and " }, { "bbox": [ 302, 132, 525, 212 ], "type": "inline_equation", "content": "V = [v_{1};v_{2};\\ldots ;v_{n}]\\in \\mathbb{R}^{n\\times d}" }, { "bbox": [ 302, 132, 525, 212 ], "type": "text", "content": " respectively, where " }, { "bbox": [ 302, 132, 525, 212 ], "type": "inline_equation", "content": "n" }, { "bbox": [ 302, 132, 525, 212 ], "type": "text", "content": " denotes the context length. 
Next, these post-PE states are used to compute the output state " }, { "bbox": [ 302, 132, 525, 212 ], "type": "inline_equation", "content": "o_i" }, { "bbox": [ 302, 132, 525, 212 ], "type": "text", "content": " as shown in formula (2):" } ] } ], "index": 14 }, { "bbox": [ 302, 222, 525, 266 ], "type": "interline_equation", "angle": 0, "lines": [ { "bbox": [ 302, 222, 525, 266 ], "spans": [ { "bbox": [ 302, 222, 525, 266 ], "type": "interline_equation", "content": "o _ {i} = \\operatorname {S o f t m a x} \\left(\\frac {\\tilde {q} _ {i} \\tilde {K} ^ {\\top}}{\\sqrt {d}}\\right) V = \\operatorname {S o f t m a x} \\left(\\frac {u _ {i}}{\\sqrt {d}}\\right) V \\tag {2}", "image_path": "4a65c6286d7d6e26cee6b5644b8cdd574a2b15a8cc5f61bf7c2d81678e9292dd.jpg" } ] } ], "index": 15 }, { "bbox": [ 302, 267, 524, 293 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 302, 267, 524, 293 ], "spans": [ { "bbox": [ 302, 267, 524, 293 ], "type": "text", "content": "where " }, { "bbox": [ 302, 267, 524, 293 ], "type": "inline_equation", "content": "u_{i} = \\tilde{q}_{i}\\tilde{K}^{\\top}\\in \\mathbb{R}^{1\\times n}" }, { "bbox": [ 302, 267, 524, 293 ], "type": "text", "content": " denotes the attention scores before softmax." 
} ] } ], "index": 16 }, { "bbox": [ 302, 295, 525, 348 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 302, 295, 525, 348 ], "spans": [ { "bbox": [ 302, 295, 525, 348 ], "type": "text", "content": "Due to the inherent property of rotation matrices that " }, { "bbox": [ 302, 295, 525, 348 ], "type": "inline_equation", "content": "R_{i}R_{j}^{\\top} = R_{i - j}" }, { "bbox": [ 302, 295, 525, 348 ], "type": "text", "content": " (Su et al., 2021), the attention score " }, { "bbox": [ 302, 295, 525, 348 ], "type": "inline_equation", "content": "u_{i,j}" }, { "bbox": [ 302, 295, 525, 348 ], "type": "text", "content": " between the " }, { "bbox": [ 302, 295, 525, 348 ], "type": "inline_equation", "content": "i" }, { "bbox": [ 302, 295, 525, 348 ], "type": "text", "content": "-th query and " }, { "bbox": [ 302, 295, 525, 348 ], "type": "inline_equation", "content": "j" }, { "bbox": [ 302, 295, 525, 348 ], "type": "text", "content": "-th key can be expressed as:" } ] } ], "index": 17 }, { "bbox": [ 312, 356, 524, 393 ], "type": "interline_equation", "angle": 0, "lines": [ { "bbox": [ 312, 356, 524, 393 ], "spans": [ { "bbox": [ 312, 356, 524, 393 ], "type": "interline_equation", "content": "\\begin{array}{l} u _ {i, j} = \\tilde {q} _ {i} \\tilde {k} _ {j} ^ {\\top} = q _ {i} R _ {i} \\left(k _ {j} R _ {j}\\right) ^ {\\top} = q _ {i} R _ {i} R _ {j} ^ {\\top} k _ {j} ^ {\\top} \\tag {3} \\\\ = q _ {i} R _ {i - j} k _ {j} ^ {\\top} \\\\ \\end{array}", "image_path": "393a55e251e58140e997f923882d47bdc29292e41cfedf981c210cfc7bad79b9.jpg" } ] } ], "index": 18 }, { "bbox": [ 302, 401, 525, 455 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 302, 401, 525, 455 ], "spans": [ { "bbox": [ 302, 401, 525, 455 ], "type": "text", "content": "This equation illustrates how RoPE encodes the relative position " }, { "bbox": [ 302, 401, 525, 455 ], "type": "inline_equation", "content": "(i - j)" }, { "bbox": [ 302, 401, 525, 455 ], "type": "text", "content": " directly into the 
attention scores, allowing the model to effectively capture the positional relationships between tokens." } ] } ], "index": 19 }, { "bbox": [ 302, 464, 484, 491 ], "type": "title", "angle": 0, "lines": [ { "bbox": [ 302, 464, 484, 491 ], "spans": [ { "bbox": [ 302, 464, 484, 491 ], "type": "text", "content": "2.2 Vector Quantization for Efficient Attention Score Approximation" } ] } ], "index": 20 }, { "bbox": [ 302, 495, 525, 534 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 302, 495, 525, 534 ], "spans": [ { "bbox": [ 302, 495, 525, 534 ], "type": "text", "content": "Vector quantization (Buzo et al., 1980) is a data compression technique that maps input vectors to a finite set of codewords from a learned codebook." } ] } ], "index": 21 }, { "bbox": [ 302, 535, 525, 602 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 302, 535, 525, 602 ], "spans": [ { "bbox": [ 302, 535, 525, 602 ], "type": "text", "content": "Formally, given an input space " }, { "bbox": [ 302, 535, 525, 602 ], "type": "inline_equation", "content": "X \\subseteq \\mathbb{R}^{1 \\times d}" }, { "bbox": [ 302, 535, 525, 602 ], "type": "text", "content": " with data distribution " }, { "bbox": [ 302, 535, 525, 602 ], "type": "inline_equation", "content": "\\mathcal{D}" }, { "bbox": [ 302, 535, 525, 602 ], "type": "text", "content": ", vector quantization aims to construct a codebook " }, { "bbox": [ 302, 535, 525, 602 ], "type": "inline_equation", "content": "C = \\{c_1, c_2, \\ldots, c_L\\} \\subset \\mathbb{R}^{1 \\times d}" }, { "bbox": [ 302, 535, 525, 602 ], "type": "text", "content": " with a size of " }, { "bbox": [ 302, 535, 525, 602 ], "type": "inline_equation", "content": "L" }, { "bbox": [ 302, 535, 525, 602 ], "type": "text", "content": " codewords to minimize the following objective:" } ] } ], "index": 22 }, { "bbox": [ 356, 613, 525, 628 ], "type": "interline_equation", "angle": 0, "lines": [ { "bbox": [ 356, 613, 525, 628 ], "spans": [ { "bbox": [ 356, 613, 525, 628 ], 
"type": "interline_equation", "content": "J (C) = \\mathbb {E} _ {x \\sim \\mathcal {D}} [ \\| x - \\hat {x} \\| ^ {2} ] \\tag {4}", "image_path": "5f940aeaa3a694f07983a6dcded3d45132fcb47a531d298f5ff027ca0dbd1e7e.jpg" } ] } ], "index": 23 }, { "bbox": [ 302, 638, 525, 691 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 302, 638, 525, 691 ], "spans": [ { "bbox": [ 302, 638, 525, 691 ], "type": "text", "content": "where " }, { "bbox": [ 302, 638, 525, 691 ], "type": "inline_equation", "content": "x\\in \\mathbb{R}^{1\\times d}" }, { "bbox": [ 302, 638, 525, 691 ], "type": "text", "content": " denotes the input vector, " }, { "bbox": [ 302, 638, 525, 691 ], "type": "inline_equation", "content": "\\hat{x} = c_{f(x;C)}" }, { "bbox": [ 302, 638, 525, 691 ], "type": "text", "content": " denotes the quantized vector, and " }, { "bbox": [ 302, 638, 525, 691 ], "type": "inline_equation", "content": "f(x;C)" }, { "bbox": [ 302, 638, 525, 691 ], "type": "text", "content": " denotes the quantization function that maps " }, { "bbox": [ 302, 638, 525, 691 ], "type": "inline_equation", "content": "x" }, { "bbox": [ 302, 638, 525, 691 ], "type": "text", "content": " to its nearest codeword:" } ] } ], "index": 24 }, { "bbox": [ 348, 702, 525, 725 ], "type": "interline_equation", "angle": 0, "lines": [ { "bbox": [ 348, 702, 525, 725 ], "spans": [ { "bbox": [ 348, 702, 525, 725 ], "type": "interline_equation", "content": "f (x; C) = \\underset {j} {\\operatorname {a r g m i n}} \\| x - c _ {j} \\| ^ {2} \\tag {5}", "image_path": "721a0264f2cc51ccbea27fd91f85c5aec99a530a419a807e2a7040be4fee40ec.jpg" } ] } ], "index": 25 }, { "bbox": [ 302, 735, 525, 774 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 302, 735, 525, 774 ], "spans": [ { "bbox": [ 302, 735, 525, 774 ], "type": "text", "content": "Finding the optimal codebook " }, { "bbox": [ 302, 735, 525, 774 ], "type": "inline_equation", "content": "C" }, { "bbox": [ 302, 735, 525, 774 ], "type": "text", "content": " is 
computationally expensive. Therefore, approximate algorithms such as LBG and " }, { "bbox": [ 302, 735, 525, 774 ], "type": "inline_equation", "content": "\\mathrm{k}" }, { "bbox": [ 302, 735, 525, 774 ], "type": "text", "content": "-means++ (Linde et al., 1980;" } ] } ], "index": 26 } ], "discarded_blocks": [ { "bbox": [ 84, 762, 257, 774 ], "type": "page_footnote", "angle": 0, "lines": [ { "bbox": [ 84, 762, 257, 774 ], "spans": [ { "bbox": [ 84, 762, 257, 774 ], "type": "text", "content": "<table><thead><tr><th rowspan=\"2\">Models</th><th rowspan=\"2\">Sparsity↓</th><th rowspan=\"2\">Aux Mem↓</th><th colspan=\"5\">Accuracy↑</th></tr><tr><th>16K</th><th>32K</th><th>64K</th><th>96K</th><th>Average</th></tr></thead><tbody><tr><td>Llama-3.1-8B-Instruct</td><td>1.000</td><td>0.000</td><td>94.4</td><td>91.9</td><td>85.9</td><td>83.1</td><td>88.8</td></tr><tr><td>H2O</td><td>0.060</td><td>0.008</td><td>27.6</td><td>30.6</td><td>24.9</td><td>25.0</td><td>27.0</td></tr><tr><td>SnapKV</td><td>0.060</td><td>0.008</td><td>72.7</td><td>75.1</td><td>72.2</td><td>70.7</td><td>72.7</td></tr><tr><td>Quest</td><td>0.060</td><td>0.031</td><td>84.3</td><td>84.0</td><td>80.0</td><td>74.4</td><td>80.7</td></tr><tr><td>MagicPIG</td><td>0.068</td><td>2.344</td><td>92.3</td><td>87.6</td><td>83.9</td><td>79.1</td><td>85.7</td></tr><tr><td>A2ATS</td><td>0.060</td><td>0.008</td><td>92.2</td><td>90.4</td><td>84.3</td><td>79.6</td><td>86.6</td></tr><tr><td>MegaBeam-Mistral-7B-512K</td><td>1.000</td><td>0.000</td><td>91.8</td><td>88.2</td><td>83.3</td><td>83.4</td><td>86.7</td></tr><tr><td>H2O</td><td>0.060</td><td>0.008</td><td>22.5</td><td>23.4</td><td>20.7</td><td>22.6</td><td>22.3</td></tr><tr><td>SnapKV</td><td>0.060</td><td>0.008</td><td>69.3</td><td>68.5</td><td>69.5</td><td>65.2</td><td>67.6</td></tr><tr><td>Quest</td><td>0.060</td><td>0.031</td><td>81.5</td><td>80.8</td><td>76.7</td><td>74.4</td><td>78.4</td></tr><tr><td>MagicPIG</td><td>0.064</td><td>2.344</td><td>88.7</td><td>85.2</td><td>82.6</td><td>81.8</td><td>84.6</td></tr><tr><td>A2ATS</td><td>0.062</td><td>0.008</td><td>91.6</td><td>88.1</td><td>83.4</td><td>82.2</td><td>86.3</td></tr></tbody></table>", "image_path": "6a8bbb6f895d75511829a87b4525107e77589d800f2602126aaffb24ec72a919.jpg" } ] } ], "index": 0, "angle": 0, "type": "table_body" } ], "index": 0 }, { "type": "table", "bbox": [ 71, 310, 287, 380 ], "blocks": [ { "bbox": [ 67, 254, 525, 292 ], "lines": [ { "bbox": [ 67, 254, 525, 292 ], "spans": [ { "bbox": [ 67, 254, 525, 292 ], "type": "text", "content": "Table 1: Comparison of sparsity ratio, auxiliary memory usage and accuracy on RULER benchmark. 'Aux Mem' refers to 'Auxiliary Memory Usage', which denotes the extra memory usage caused by KV cache reduction methods compared to the original key cache. '16K', '32K', '64K' and '96K' denote the input context length." } ] } ], "index": 1, "angle": 0, "type": "table_caption" }, { "bbox": [ 71, 310, 287, 380 ], "lines": [ { "bbox": [ 71, 310, 287, 380 ], "spans": [ { "bbox": [ 71, 310, 287, 380 ], "type": "table", "html": "
<table><thead><tr><th>Config</th><th>16K↑</th><th>32K↑</th><th>64K↑</th><th>96K↑</th><th>Average↑</th></tr></thead><tbody><tr><td>Baseline</td><td>86.4</td><td>86.3</td><td>81.5</td><td>71.3</td><td>81.4</td></tr><tr><td>WRoPE</td><td>92.3</td><td>90.0</td><td>82.8</td><td>78.4</td><td>85.9</td></tr><tr><td>QAVQ</td><td>91.7</td><td>86.9</td><td>76.3</td><td>69.4</td><td>81.1</td></tr><tr><td>A²ATS</td><td>92.2</td><td>90.4</td><td>84.3</td><td>79.6</td><td>86.6</td></tr></tbody></table>
", "image_path": "8a9c0875dd69bd71e8b1a507b49b84fe18cf024a78de7e9a55e958312fce5dd9.jpg" } ] } ], "index": 2, "angle": 0, "type": "table_body" } ], "index": 2 }, { "bbox": [ 67, 388, 289, 413 ], "lines": [ { "bbox": [ 67, 388, 289, 413 ], "spans": [ { "bbox": [ 67, 388, 289, 413 ], "type": "text", "content": "Table 2: Ablation study on the importance of WRoPE and query-aware vector quantization for accuracy." } ] } ], "index": 3, "angle": 0, "type": "text" }, { "bbox": [ 67, 433, 291, 608 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 67, 433, 291, 608 ], "spans": [ { "bbox": [ 67, 433, 291, 608 ], "type": "text", "content": "tention plateaus below 80 tokens/s due to memory bandwidth bottleneck and ends up out of memory at a batch size of 22. " }, { "bbox": [ 67, 433, 291, 608 ], "type": "inline_equation", "content": "\\mathrm{A}^2\\mathrm{ATS}" }, { "bbox": [ 67, 433, 291, 608 ], "type": "text", "content": " achieves a peak throughput of over 160 tokens/s, a " }, { "bbox": [ 67, 433, 291, 608 ], "type": "inline_equation", "content": "2.1\\times" }, { "bbox": [ 67, 433, 291, 608 ], "type": "text", "content": " speedup over full attention, with a maximum batch size of over 64. This trend becomes more pronounced at 64K context lengths, where full attention struggles with batches over 5, while " }, { "bbox": [ 67, 433, 291, 608 ], "type": "inline_equation", "content": "\\mathrm{A}^2\\mathrm{ATS}" }, { "bbox": [ 67, 433, 291, 608 ], "type": "text", "content": " serving a batch size of up to 16 with a throughput of up to 45 tokens/s, delivering a " }, { "bbox": [ 67, 433, 291, 608 ], "type": "inline_equation", "content": "2.7\\times" }, { "bbox": [ 67, 433, 291, 608 ], "type": "text", "content": " performance advantage. 
These results highlight " }, { "bbox": [ 67, 433, 291, 608 ], "type": "inline_equation", "content": "\\mathrm{A}^2\\mathrm{ATS}" }, { "bbox": [ 67, 433, 291, 608 ], "type": "text", "content": "'s potential in mitigating the memory bottleneck in long context LLM serving." } ] } ], "index": 4 }, { "bbox": [ 67, 618, 160, 631 ], "type": "title", "angle": 0, "lines": [ { "bbox": [ 67, 618, 160, 631 ], "spans": [ { "bbox": [ 67, 618, 160, 631 ], "type": "text", "content": "6 Related Work" } ] } ], "index": 5 }, { "bbox": [ 67, 640, 290, 734 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 67, 640, 290, 734 ], "spans": [ { "bbox": [ 67, 640, 290, 734 ], "type": "text", "content": "Quantization-based KV cache reduction. This method aims to compress KV cache by using lower bit-width representations for KV cache elements (Liu et al., 2024b; Hooper et al., 2024). However, it typically faces challenges of limited compression ratio and extra computational overhead caused by the dequantization process." } ] } ], "index": 6 }, { "bbox": [ 67, 735, 291, 775 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 67, 735, 291, 775 ], "spans": [ { "bbox": [ 67, 735, 291, 775 ], "type": "text", "content": "Eviction-based KV cache reduction. This method aims to reduce the KV cache size by directly evicting unimportant tokens from memory (Xiao et al.," } ] } ], "index": 7 }, { "bbox": [ 302, 312, 526, 448 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 302, 312, 526, 448 ], "spans": [ { "bbox": [ 302, 312, 526, 448 ], "type": "text", "content": "2024; Zhang et al., 2023; Li et al., 2024; Yang et al., 2024b). These methods typically record statistics of attention weights of each token. When the KV cache reaches its capacity limit, they utilize heuristic rules and historical statistics to predict which tokens are more likely to get high attention weights in future decoding, then retain these tokens while evicting the rest. 
Although these methods generally have low additional overhead, they often lead to noticeable performance degradation." } ] } ], "index": 8 }, { "bbox": [ 302, 491, 526, 774 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 302, 491, 526, 774 ], "spans": [ { "bbox": [ 302, 491, 526, 774 ], "type": "text", "content": "Retrieval-based KV cache reduction. This method keeps the entire KV cache in memory while selectively retrieving tokens crucial for the current inference. Quest (Tang et al., 2024) chunks the continuous KV cache into pages and pre-calculates necessary metadata for each page during prefetching. For decoding, it selects the top-K critical cache pages to participate in selective attention computation. PQCache (Zhang et al., 2024) and ClusterKV (Liu et al., 2024a) perform vector quantization on the key states with individual codebooks constructed for each input during prefetching. For decoding, the system uses codewords of the codebooks to approximate attention scores, then retrieves the top-K tokens for computation. MagicPIG (Chen et al., 2025) employs Locality-Sensitive Hashing on query and key states for token retrieval. Although these methods generally achieve lower performance degradation compared to eviction-based methods, they still suffer from unsatisfactory performance and lead to extra overhead." 
} ] } ], "index": 9 } ], "discarded_blocks": [ { "bbox": [ 284, 780, 312, 791 ], "type": "page_number", "angle": 0, "lines": [ { "bbox": [ 284, 780, 312, 791 ], "spans": [ { "bbox": [ 284, 780, 312, 791 ], "type": "text", "content": "12458" } ] } ], "index": 10 } ], "page_size": [ 595, 841 ], "page_idx": 7 }, { "para_blocks": [ { "bbox": [ 68, 71, 147, 83 ], "type": "title", "angle": 0, "lines": [ { "bbox": [ 68, 71, 147, 83 ], "spans": [ { "bbox": [ 68, 71, 147, 83 ], "type": "text", "content": "7 Conclusion" } ] } ], "index": 0 }, { "bbox": [ 67, 91, 293, 282 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 67, 91, 293, 282 ], "spans": [ { "bbox": [ 67, 91, 293, 282 ], "type": "text", "content": "In this paper, we propose A" }, { "bbox": [ 67, 91, 293, 282 ], "type": "inline_equation", "content": "^2" }, { "bbox": [ 67, 91, 293, 282 ], "type": "text", "content": "ATS, a novel retrieval-based KV cache reduction method. First, we propose Windowed Rotary Position Embedding to decouple the positional dependency from query and key states after position embedding. Then, we propose query-aware vector quantization to achieve an accurate attention score approximation. Next, we introduce the heterogeneous inference design for KV cache offloading, which increases available batch size. Experimental results demonstrate that A" }, { "bbox": [ 67, 91, 293, 282 ], "type": "inline_equation", "content": "^2" }, { "bbox": [ 67, 91, 293, 282 ], "type": "text", "content": "ATS achieves lower performance degradation with comparable or lower overhead compared to existing methods, thereby boosting long context serving throughput by up to " }, { "bbox": [ 67, 91, 293, 282 ], "type": "inline_equation", "content": "2.7\\times" }, { "bbox": [ 67, 91, 293, 282 ], "type": "text", "content": "." 
} ] } ], "index": 1 }, { "bbox": [ 67, 290, 149, 302 ], "type": "title", "angle": 0, "lines": [ { "bbox": [ 67, 290, 149, 302 ], "spans": [ { "bbox": [ 67, 290, 149, 302 ], "type": "text", "content": "8 Limitations" } ] } ], "index": 2 }, { "bbox": [ 67, 311, 290, 337 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 67, 311, 290, 337 ], "spans": [ { "bbox": [ 67, 311, 290, 337 ], "type": "text", "content": "The limitations of this work can be summarized in two main aspects." } ] } ], "index": 3 }, { "bbox": [ 67, 338, 291, 459 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 67, 338, 291, 459 ], "spans": [ { "bbox": [ 67, 338, 291, 459 ], "type": "text", "content": "First, while " }, { "bbox": [ 67, 338, 291, 459 ], "type": "inline_equation", "content": "\\mathrm{A}^2\\mathrm{ATS}" }, { "bbox": [ 67, 338, 291, 459 ], "type": "text", "content": " demonstrates lower accuracy degradation compared to existing methods while accessing a comparable proportion of the KV cache, it still exhibits non-negligible performance degradation. This suggests opportunities for future work to investigate adaptive attention sparsity allocation strategies that dynamically optimize the sparsity ratios across layers and attention heads, based on their contextual importance." } ] } ], "index": 4 }, { "bbox": [ 67, 460, 291, 568 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 67, 460, 291, 568 ], "spans": [ { "bbox": [ 67, 460, 291, 568 ], "type": "text", "content": "Second, while " }, { "bbox": [ 67, 460, 291, 568 ], "type": "inline_equation", "content": "\\mathrm{A}^2\\mathrm{ATS}" }, { "bbox": [ 67, 460, 291, 568 ], "type": "text", "content": " increases long context serving throughput by up to " }, { "bbox": [ 67, 460, 291, 568 ], "type": "inline_equation", "content": "2.7\\times" }, { "bbox": [ 67, 460, 291, 568 ], "type": "text", "content": ", our current implementation is limited to single-GPU deployment.
Future research could further explore (1) distributed multi-GPU system designs for scaled deployment, and (2) integration with disaggregated LLM serving architectures like Mooncake (Qin et al., 2024)." } ] } ], "index": 5 }, { "bbox": [ 68, 577, 170, 591 ], "type": "title", "angle": 0, "lines": [ { "bbox": [ 68, 577, 170, 591 ], "spans": [ { "bbox": [ 68, 577, 170, 591 ], "type": "text", "content": "Acknowledgements" } ] } ], "index": 6 }, { "bbox": [ 67, 597, 291, 638 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 67, 597, 291, 638 ], "spans": [ { "bbox": [ 67, 597, 291, 638 ], "type": "text", "content": "We thank all the reviewers for their insightful comments. This work is supported by the National Natural Science Foundation of China (No. 62472330)." } ] } ], "index": 7 }, { "bbox": [ 68, 660, 127, 672 ], "type": "title", "angle": 0, "lines": [ { "bbox": [ 68, 660, 127, 672 ], "spans": [ { "bbox": [ 68, 660, 127, 672 ], "type": "text", "content": "References" } ] } ], "index": 8 }, { "bbox": [ 69, 678, 291, 775 ], "type": "list", "angle": 0, "index": 11, "blocks": [ { "bbox": [ 69, 678, 291, 745 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 69, 678, 291, 745 ], "spans": [ { "bbox": [ 69, 678, 291, 745 ], "type": "text", "content": "David Arthur and Sergei Vassilvitskii. 2007. k-means++: the advantages of careful seeding. In Proceedings of the Eighteenth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2007, New Orleans, Louisiana, USA, January 7-9, 2007, pages 1027-1035. SIAM."
} ] } ], "index": 9 }, { "bbox": [ 69, 751, 290, 775 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 69, 751, 290, 775 ], "spans": [ { "bbox": [ 69, 751, 290, 775 ], "type": "text", "content": "Yushi Bai, Xin Lv, Jiajie Zhang, Hongchang Lyu, Jiankai Tang, Zhidian Huang, Zhengxiao Du, Xiao" } ] } ], "index": 10 } ], "sub_type": "ref_text" }, { "bbox": [ 303, 72, 527, 775 ], "type": "list", "angle": 0, "index": 17, "blocks": [ { "bbox": [ 313, 72, 527, 150 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 313, 72, 527, 150 ], "spans": [ { "bbox": [ 313, 72, 527, 150 ], "type": "text", "content": "Liu, Aohan Zeng, Lei Hou, Yuxiao Dong, Jie Tang, and Juanzi Li. 2024. LongBench: A bilingual, multitask benchmark for long context understanding. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 3119-3137, Bangkok, Thailand. Association for Computational Linguistics." } ] } ], "index": 12 }, { "bbox": [ 304, 157, 526, 225 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 304, 157, 526, 225 ], "spans": [ { "bbox": [ 304, 157, 526, 225 ], "type": "text", "content": "Andres Buzo, Augustine H. Gray Jr., Robert M. Gray, and John D. Markel. 1980. Speech coding based upon vector quantization. In IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP '80, Denver, Colorado, USA, April 9-11, 1980, pages 15-18. IEEE." } ] } ], "index": 13 }, { "bbox": [ 303, 231, 526, 309 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 303, 231, 526, 309 ], "spans": [ { "bbox": [ 303, 231, 526, 309 ], "type": "text", "content": "Zhuoming Chen, Ranajoy Sadhukhan, Zihao Ye, Yang Zhou, Jianyu Zhang, Niklas Nolte, Yuandong Tian, Matthijs Douze, León Bottou, Zhihao Jia, and Beidi Chen. 2025. Magicpig: LSH sampling for efficient LLM generation. In The Thirteenth International Conference on Learning Representations, ICLR 2025, Singapore, April 24-28, 2025. OpenReview.net." 
} ] } ], "index": 14 }, { "bbox": [ 304, 317, 527, 613 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 304, 317, 527, 613 ], "spans": [ { "bbox": [ 304, 317, 527, 613 ], "type": "text", "content": "DeepSeek-AI, Aixin Liu, Bei Feng, Bing Xue, Bingxuan Wang, Bochao Wu, Chengda Lu, Chenggang Zhao, Chengqi Deng, Chenyu Zhang, Chong Ruan, Damai Dai, Daya Guo, Dejian Yang, Deli Chen, Dongjie Ji, Erhang Li, Fangyun Lin, Fucong Dai, Fuli Luo, Guangbo Hao, Guanting Chen, Guowei Li, H. Zhang, Han Bao, Hanwei Xu, Haocheng Wang, Haowei Zhang, Honghui Ding, Huajian Xin, Huazuo Gao, Hui Li, Hui Qu, J. L. Cai, Jian Liang, Jianzhong Guo, Jiaqi Ni, Jiashi Li, Jiawei Wang, Jin Chen, Jingchang Chen, Jingyang Yuan, Junjie Qiu, Junlong Li, Junxiao Song, Kai Dong, Kai Hu, Kaige Gao, Kang Guan, Kexin Huang, Kuai Yu, Lean Wang, Lecong Zhang, Lei Xu, Leyi Xia, Liang Zhao, Litong Wang, Liyue Zhang, Meng Li, Miaojun Wang, Mingchuan Zhang, Minghua Zhang, Minghui Tang, Mingming Li, Ning Tian, Panpan Huang, Peiyi Wang, Peng Zhang, Qiancheng Wang, Qihao Zhu, Qinyu Chen, Qiushi Du, R. J. Chen, R. L. Jin, Ruiqi Ge, Ruisong Zhang, Ruizhe Pan, Runji Wang, Runxin Xu, Ruoyu Zhang, Ruyi Chen, S. S. Li, Shanghao Lu, Shangyan Zhou, Shanhuang Chen, Shaoqing Wu, Shengfeng Ye, Shengfeng Ye, Shirong Ma, Shiyu Wang, Shuang Zhou, Shuiping Yu, Shunfeng Zhou, Shuting Pan, T. Wang, Tao Yun, Tian Pei, Tianyu Sun W. L. Xiao and Wangding Zeng. 2024. Deepseek-v3 technical report. CoRR, abs/2412.19437." 
} ] } ], "index": 15 }, { "bbox": [ 304, 620, 527, 775 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 304, 620, 527, 775 ], "spans": [ { "bbox": [ 304, 620, 527, 775 ], "type": "text", "content": "Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Amy Yang, Angela Fan, Anirudh Goyal, Anthony Hartshorn, Aobo Yang, Archi Mitra, Archie Sravankumar, Artem Korenev, Arthur Hinsvark, Arun Rao, Aston Zhang, Aurélien Rodriguez, Austen Gregerson, Ava Spataru, Baptiste Rozière, Bethany Biron, Binh Tang, Bobbie Chern, Charlotte Caucheteux, Chaya Nayak, Chloe Bi, Chris Marra, Chris McConnell, Christian Keller, Christophe Touret, Chunyang Wu, Corinne Wong, Cristian Canton Ferrer, Cyrus Nikolaidis, Damien Al-lonsius, Daniel Song, Danielle Pintz, Danny Livshits, David Esiobu, Dhruv Choudhary, Dhruv Mahajan," } ] } ], "index": 16 } ], "sub_type": "ref_text" } ], "discarded_blocks": [ { "bbox": [ 284, 780, 312, 791 ], "type": "page_number", "angle": 0, "lines": [ { "bbox": [ 284, 780, 312, 791 ], "spans": [ { "bbox": [ 284, 780, 312, 791 ], "type": "text", "content": "12459" } ] } ], "index": 18 } ], "page_size": [ 595, 841 ], "page_idx": 8 }, { "para_blocks": [ { "bbox": [ 69, 72, 291, 773 ], "type": "list", "angle": 0, "index": 6, "blocks": [ { "bbox": [ 79, 72, 291, 280 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 79, 72, 291, 280 ], "spans": [ { "bbox": [ 79, 72, 291, 280 ], "type": "text", "content": "Diego Garcia-Olano, Diego Perino, Dieuwke Hupkes, Egor Lakomkin, Ehab AlBadawy, Elina Lobanova, Emily Dinan, Eric Michael Smith, Filip Radenovic, Frank Zhang, Gabriel Synnaeve, Gabrielle Lee, Georgia Lewis Anderson, Graeme Nail, Gregoire Mialon, Guan Pang, Guillem Cucurell, Hailey Nguyen, Hannah Korevaar, Hu Xu, Hugo Touvron, Iliyan Zarov, Imanol Arrieta Ibarra, Isabel M. 
Kloumann, Ishan Misra, Ivan Evtimov, Jade Copet, Jaewon Lee, Jan Geffert, Jana Vranes, Jason Park, Jay Mahadeokar, Jeet Shah, Jelmer van der Linde, Jennifer Billock, Jenny Hong, Jenya Lee, Jeremy Fu, Jianfeng Chi, Jianyu Huang, Jiawen Liu, Jie Wang, Jiecao Yu, Joanna Bitton, Joe Spisak, Jongsoo Park, Joseph Rocca, Joshua Johnstun, Joshua Saxe, Junteng Jia, Kalyan Vasuden Alwala, Kartikeya Upasani, Kate Plawiak, Ke Li, Kenneth Heafield, Kevin Stone, and et al. 2024. The llama 3 herd of models. CoRR, abs/2407.21783." } ] } ], "index": 0 }, { "bbox": [ 69, 293, 291, 391 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 69, 293, 291, 391 ], "spans": [ { "bbox": [ 69, 293, 291, 391 ], "type": "text", "content": "Coleman Hooper, Sehoon Kim, Hiva Mohammadzadeh, Michael W. Mahoney, Yakun Sophia Shao, Kurt Keutzer, and Amir Gholami. 2024. Kvquant: Towards 10 million context length LLM inference with KV cache quantization. In Advances in Neural Information Processing Systems 38: Annual Conference on Neural Information Processing Systems 2024, NeurIPS 2024, Vancouver, BC, Canada, December 10 - 15, 2024." } ] } ], "index": 1 }, { "bbox": [ 69, 405, 290, 460 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 69, 405, 290, 460 ], "spans": [ { "bbox": [ 69, 405, 290, 460 ], "type": "text", "content": "Cheng-Ping Hsieh, Simeng Sun, Samuel Kriman, Shantanu Acharya, Dima Rekesh, Fei Jia, Yang Zhang, and Boris Ginsburg. 2024. RULER: what's the real context size of your long-context language models? CoRR, abs/2404.06654." } ] } ], "index": 2 }, { "bbox": [ 69, 473, 291, 594 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 69, 473, 291, 594 ], "spans": [ { "bbox": [ 69, 473, 291, 594 ], "type": "text", "content": "Albert Q. 
Jiang, Alexandre Sablayrolles, Antoine Roux, Arthur Mensch, Blanche Savary, Chris Bamford, Devendra Singh Chaplot, Diego de Las Casas, Emma Bou Hanna, Florian Bressand, Gianna Lengyel, Guillaume Bour, Guillaume Lample, Lélio Renard Lavaud, Lucile Saulnier, Marie-Anne Lachaux, Pierre Stock, Sandeep Subramanian, Sophia Yang, Szymon Antoniak, Teven Le Scao, Théophile Gervet, Thibaut Lavril, Thomas Wang, Timothée Lacroix, and William El Sayed. 2024. Mixtral of experts. CoRR, abs/2401.04088." } ] } ], "index": 3 }, { "bbox": [ 69, 607, 291, 673 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 69, 607, 291, 673 ], "spans": [ { "bbox": [ 69, 607, 291, 673 ], "type": "text", "content": "Sehoon Kim, Coleman Hooper, Thanakul Wattanawong, Minwoo Kang, Ruohan Yan, Hasan Genc, Grace Dinh, Qijing Huang, Kurt Keutzer, Michael W. Mahoney, Yakun Sophia Shao, and Amir Gholami. 2023. Full stack optimization of transformer inference: a survey. CoRR, abs/2302.14017." } ] } ], "index": 4 }, { "bbox": [ 69, 686, 291, 773 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 69, 686, 291, 773 ], "spans": [ { "bbox": [ 69, 686, 291, 773 ], "type": "text", "content": "Yuhong Li, Yingbing Huang, Bowen Yang, Bharat Venkitesh, Acyr Locatelli, Hanchen Ye, Tianle Cai, Patrick Lewis, and Deming Chen. 2024. Snapkv: LLM knows what you are looking for before generation. In Advances in Neural Information Processing Systems 38: Annual Conference on Neural Information Processing Systems 2024, NeurIPS 2024, Vancouver, BC, Canada, December 10 - 15, 2024." } ] } ], "index": 5 } ], "sub_type": "ref_text" }, { "bbox": [ 304, 72, 526, 774 ], "type": "list", "angle": 0, "index": 16, "blocks": [ { "bbox": [ 305, 72, 525, 105 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 305, 72, 525, 105 ], "spans": [ { "bbox": [ 305, 72, 525, 105 ], "type": "text", "content": "Yoseph Linde, Andres Buzo, and Robert M. Gray. 1980. An algorithm for vector quantizer design. IEEE Trans. 
Commun., 28(1):84-95." } ] } ], "index": 7 }, { "bbox": [ 304, 114, 526, 170 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 304, 114, 526, 170 ], "spans": [ { "bbox": [ 304, 114, 526, 170 ], "type": "text", "content": "Lucas D. Lingle. 2024. Transformer-vq: Linear-time transformers via vector quantization. In *The Twelfth International Conference on Learning Representations*, ICLR 2024, Vienna, Austria, May 7-11, 2024. OpenReview.net." } ] } ], "index": 8 }, { "bbox": [ 304, 179, 526, 224 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 304, 179, 526, 224 ], "spans": [ { "bbox": [ 304, 179, 526, 224 ], "type": "text", "content": "Guangda Liu, Chengwei Li, Jieru Zhao, Chenqi Zhang, and Minyi Guo. 2024a. Clusterkv: Manipulating LLM KV cache in semantic space for recallable compression. CoRR, abs/2412.03213." } ] } ], "index": 9 }, { "bbox": [ 304, 232, 526, 299 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 304, 232, 526, 299 ], "spans": [ { "bbox": [ 304, 232, 526, 299 ], "type": "text", "content": "Zirui Liu, Jiayi Yuan, Hongye Jin, Shaochen Zhong, Zhaozhuo Xu, Vladimir Braverman, Beidi Chen, and Xia Hu. 2024b. KIVI: A tuning-free asymmetric 2bit quantization for KV cache. In *Forty-first International Conference on Machine Learning*, ICML 2024, Vienna, Austria, July 21-27, 2024. OpenReview.net." } ] } ], "index": 10 }, { "bbox": [ 304, 307, 525, 330 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 304, 307, 525, 330 ], "spans": [ { "bbox": [ 304, 307, 525, 330 ], "type": "text", "content": "OpenAI. 2023. GPT-4 technical report. CoRR, abs/2303.08774." } ] } ], "index": 11 }, { "bbox": [ 304, 338, 526, 438 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 304, 338, 526, 438 ], "spans": [ { "bbox": [ 304, 338, 526, 438 ], "type": "text", "content": "Guilherme Penedo, Hynek Kydlicek, Loubna Ben Allal, Anton Lozhkov, Margaret Mitchell, Colin A. Raffel, Leandro von Werra, and Thomas Wolf. 2024. 
The fineweb datasets: Decanting the web for the finest text data at scale. In Advances in Neural Information Processing Systems 38: Annual Conference on Neural Information Processing Systems 2024, NeurIPS 2024, Vancouver, BC, Canada, December 10 - 15, 2024." } ] } ], "index": 12 }, { "bbox": [ 304, 448, 526, 492 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 304, 448, 526, 492 ], "spans": [ { "bbox": [ 304, 448, 526, 492 ], "type": "text", "content": "Ruoyu Qin, Zheming Li, Weiran He, Mingxing Zhang, Yongwei Wu, Weimin Zheng, and Xinran Xu. 2024. Mooncake: A kvcache-centric disaggregated architecture for LLM serving. CoRR, abs/2407.00079." } ] } ], "index": 13 }, { "bbox": [ 304, 501, 526, 719 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 304, 501, 526, 719 ], "spans": [ { "bbox": [ 304, 501, 526, 719 ], "type": "text", "content": "Machel Reid, Nikolay Savinov, Denis Teplyashin, Dmitry Lepikhin, Timothy P. Lillicrap, Jean-Baptiste Alayrac, Radu Soricut, Angeliki Lazaridou, Orhan Firat, Julian Schrittwieser, Ioannis Antonoglou, Rohan Anil, Sebastian Borgeaud, Andrew M. Dai, Katie Millican, Ethan Dyer, Mia Glaese, Thibault Sottiaux, Benjamin Lee, Fabio Viola, Malcolm Reynolds, Yuanzhong Xu, James Molloy, Jilin Chen, Michael Isard, Paul Barham, Tom Hennigan, Ross McIlroy, Melvin Johnson, Johan Schalkwyk, Eli Collins, Eliza Rutherford, Erica Moreira, Kareem Ayoub, Megha Goel, Clemens Meyer, Gregory Thornton, Zhen Yang, Henryk Michalewski, Zaheer Abbas, Nathan Schucher, Ankesh Anand, Richard Ives, James Keeling, Karel Lenc, Salem Haykal, Siamak Shakeri, Pranav Shyam, Aakanksha Chowdhery, Roman Ring, Stephen Spencer, Eren Sezener, and et al. 2024. Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context. CoRR, abs/2403.05530."
} ] } ], "index": 14 }, { "bbox": [ 304, 729, 525, 774 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 304, 729, 525, 774 ], "spans": [ { "bbox": [ 304, 729, 525, 774 ], "type": "text", "content": "Prajwal Singhania, Siddharth Singh, Shwai He, Soheil Feizi, and Abhinav Bhatele. 2024. Loki: Low-rank keys for efficient sparse attention. In Advances in Neural Information Processing Systems 38: Annual" } ] } ], "index": 15 } ], "sub_type": "ref_text" } ], "discarded_blocks": [ { "bbox": [ 284, 781, 312, 791 ], "type": "page_number", "angle": 0, "lines": [ { "bbox": [ 284, 781, 312, 791 ], "spans": [ { "bbox": [ 284, 781, 312, 791 ], "type": "text", "content": "12460" } ] } ], "index": 17 } ], "page_size": [ 595, 841 ], "page_idx": 9 }, { "para_blocks": [ { "bbox": [ 69, 72, 290, 773 ], "type": "list", "angle": 0, "index": 11, "blocks": [ { "bbox": [ 80, 72, 290, 105 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 80, 72, 290, 105 ], "spans": [ { "bbox": [ 80, 72, 290, 105 ], "type": "text", "content": "Conference on Neural Information Processing Systems 2024, NeurIPS 2024, Vancouver, BC, Canada, December 10 - 15, 2024." } ] } ], "index": 0 }, { "bbox": [ 69, 116, 290, 161 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 69, 116, 290, 161 ], "spans": [ { "bbox": [ 69, 116, 290, 161 ], "type": "text", "content": "Jianlin Su. 2023a. Expand the context length with rope, part 3 - unlocking the unlimited extrapolation potential with rerope. https://normxu.github.io/Rethinking-Rotary-Position-Embedding-3/." } ] } ], "index": 1 }, { "bbox": [ 69, 171, 290, 194 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 69, 171, 290, 194 ], "spans": [ { "bbox": [ 69, 171, 290, 194 ], "type": "text", "content": "Jianlin Su. 2023b. Rectified rotary position embeddings. https://github.com/bojone/erope." 
} ] } ], "index": 2 }, { "bbox": [ 69, 204, 289, 238 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 69, 204, 289, 238 ], "spans": [ { "bbox": [ 69, 204, 289, 238 ], "type": "text", "content": "Jianlin Su, Yu Lu, Shengfeng Pan, Bo Wen, and Yunfeng Liu. 2021. Roformer: Enhanced transformer with rotary position embedding. CoRR, abs/2104.09864." } ] } ], "index": 3 }, { "bbox": [ 69, 248, 290, 314 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 69, 248, 290, 314 ], "spans": [ { "bbox": [ 69, 248, 290, 314 ], "type": "text", "content": "Jiaming Tang, Yilong Zhao, Kan Zhu, Guangxuan Xiao, Baris Kasikci, and Song Han. 2024. QUEST: query-aware sparsity for efficient long-context LLM inference. In *Forty-first International Conference on Machine Learning*, ICML 2024, Vienna, Austria, July 21-27, 2024. OpenReview.net." } ] } ], "index": 4 }, { "bbox": [ 69, 324, 289, 369 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 69, 324, 289, 369 ], "spans": [ { "bbox": [ 69, 324, 289, 369 ], "type": "text", "content": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. CoRR, abs/1706.03762." } ] } ], "index": 5 }, { "bbox": [ 69, 379, 290, 401 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 69, 379, 290, 401 ], "spans": [ { "bbox": [ 69, 379, 290, 401 ], "type": "text", "content": "Chen Wu, Yin Song, and Eden Duthie. 2024. awsprototyping/MegaBeam-Mistral-7B-512k." } ] } ], "index": 6 }, { "bbox": [ 69, 412, 290, 478 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 69, 412, 290, 478 ], "spans": [ { "bbox": [ 69, 412, 290, 478 ], "type": "text", "content": "Guangxuan Xiao, Yuandong Tian, Beidi Chen, Song Han, and Mike Lewis. 2024. Efficient streaming language models with attention sinks. In *The Twelfth International Conference on Learning Representations*, ICLR 2024, Vienna, Austria, May 7-11, 2024. OpenReview.net." 
} ] } ], "index": 7 }, { "bbox": [ 69, 488, 290, 620 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 69, 488, 290, 620 ], "spans": [ { "bbox": [ 69, 488, 290, 620 ], "type": "text", "content": "An Yang, Baosong Yang, Beichen Zhang, Binyuan Hui, Bo Zheng, Bowen Yu, Chengyuan Li, Dayiheng Liu, Fei Huang, Haoran Wei, Huan Lin, Jian Yang, Jianhong Tu, Jianwei Zhang, Jianxin Yang, Jiaxi Yang, Jingren Zhou, Junyang Lin, Kai Dang, Keming Lu, Keqin Bao, Kexin Yang, Le Yu, Mei Li, Mingfeng Xue, Pei Zhang, Qin Zhu, Rui Men, Runji Lin, Tianhao Li, Tingyu Xia, Xingzhang Ren, Xuancheng Ren, Yang Fan, Yang Su, Yichang Zhang, Yu Wan, Yuqiong Liu, Zeyu Cui, Zhenru Zhang, and Zihan Qiu. 2024a. Qwen2.5 technical report. CoRR, abs/2412.15115." } ] } ], "index": 8 }, { "bbox": [ 69, 631, 290, 708 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 69, 631, 290, 708 ], "spans": [ { "bbox": [ 69, 631, 290, 708 ], "type": "text", "content": "Dongjie Yang, Xiaodong Han, Yan Gao, Yao Hu, Shilin Zhang, and Hai Zhao. 2024b. Pyramid Infer: Pyramid KV cache compression for high-throughput LLM inference. In Findings of the Association for Computational Linguistics, ACL 2024, Bangkok, Thailand and virtual meeting, August 11-16, 2024, pages 3258-3270. Association for Computational Linguistics." } ] } ], "index": 9 }, { "bbox": [ 69, 719, 290, 773 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 69, 719, 290, 773 ], "spans": [ { "bbox": [ 69, 719, 290, 773 ], "type": "text", "content": "Hailin Zhang, Xiaodong Ji, Yilin Chen, Fangcheng Fu, Xupeng Miao, Xiaonan Nie, Weipeng Chen, and Bin Cui. 2024. Pqcache: Product quantization-based kvcache for long context LLM inference. CoRR, abs/2407.12820." 
} ] } ], "index": 10 } ], "sub_type": "ref_text" }, { "bbox": [ 304, 72, 525, 181 ], "type": "ref_text", "angle": 0, "lines": [ { "bbox": [ 304, 72, 525, 181 ], "spans": [ { "bbox": [ 304, 72, 525, 181 ], "type": "text", "content": "Zhenyu Zhang, Ying Sheng, Tianyi Zhou, Tianlong Chen, Lianmin Zheng, Ruisi Cai, Zhao Song, Yuandong Tian, Christopher Ré, Clark W. Barrett, Zhangyang Wang, and Beidi Chen. 2023. H2O: heavy-hitter oracle for efficient generative inference of large language models. In Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023." } ] } ], "index": 12 } ], "discarded_blocks": [ { "bbox": [ 284, 781, 311, 791 ], "type": "page_number", "angle": 0, "lines": [ { "bbox": [ 284, 781, 311, 791 ], "spans": [ { "bbox": [ 284, 781, 311, 791 ], "type": "text", "content": "12461" } ] } ], "index": 13 } ], "page_size": [ 595, 841 ], "page_idx": 10 }, { "para_blocks": [ { "bbox": [ 68, 71, 211, 84 ], "type": "title", "angle": 0, "lines": [ { "bbox": [ 68, 71, 211, 84 ], "spans": [ { "bbox": [ 68, 71, 211, 84 ], "type": "text", "content": "A Method Configurations" } ] } ], "index": 0 }, { "bbox": [ 67, 92, 290, 131 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 67, 92, 290, 131 ], "spans": [ { "bbox": [ 67, 92, 290, 131 ], "type": "text", "content": "For the main experiments, we compare the following five KV cache reduction methods, along with the full attention baseline:" } ] } ], "index": 1 }, { "bbox": [ 81, 133, 290, 307 ], "type": "list", "angle": 0, "index": 7, "blocks": [ { "bbox": [ 81, 133, 289, 172 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 81, 133, 289, 172 ], "spans": [ { "bbox": [ 81, 133, 289, 172 ], "type": "text", "content": "- H2O (Zhang et al., 2023): An eviction-based method that preserves heavy hitter tokens and recent tokens;" } ] } ], "index": 2 }, { "bbox": [ 81, 174, 289, 212 ], "type": "text", 
"angle": 0, "lines": [ { "bbox": [ 81, 174, 289, 212 ], "spans": [ { "bbox": [ 81, 174, 289, 212 ], "type": "text", "content": "- SnapKV (Li et al., 2024): An eviction-based method that preserves important tokens based on statistics within an observation window;" } ] } ], "index": 3 }, { "bbox": [ 81, 215, 288, 253 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 81, 215, 288, 253 ], "spans": [ { "bbox": [ 81, 215, 288, 253 ], "type": "text", "content": "- Quest (Tang et al., 2024): A retrieval-based method that selects tokens based on metadata of KV cache pages;" } ] } ], "index": 4 }, { "bbox": [ 81, 255, 290, 295 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 81, 255, 290, 295 ], "spans": [ { "bbox": [ 81, 255, 290, 295 ], "type": "text", "content": "- MagicPIG (Chen et al., 2025): A retrieval-based method that utilizes LSH for token sampling;" } ] } ], "index": 5 }, { "bbox": [ 81, 296, 228, 307 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 81, 296, 228, 307 ], "spans": [ { "bbox": [ 81, 296, 228, 307 ], "type": "text", "content": "- " }, { "bbox": [ 81, 296, 228, 307 ], "type": "inline_equation", "content": "\\mathbf{A}^2\\mathbf{ATS}" }, { "bbox": [ 81, 296, 228, 307 ], "type": "text", "content": ": The proposed method." } ] } ], "index": 6 } ], "sub_type": "text" }, { "bbox": [ 67, 309, 291, 564 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 67, 309, 291, 564 ], "spans": [ { "bbox": [ 67, 309, 291, 564 ], "type": "text", "content": "For a fair comparison, the sparsity ratios of all KV cache reduction methods are controlled around 0.06, which means approximately " }, { "bbox": [ 67, 309, 291, 564 ], "type": "inline_equation", "content": "6\\%" }, { "bbox": [ 67, 309, 291, 564 ], "type": "text", "content": " of KV cache is accessed at each inference. 
It is important to note that the sparsity ratio discussed in this paper differs in definition from the " }, { "bbox": [ 67, 309, 291, 564 ], "type": "inline_equation", "content": "\\mathrm{cost}_2" }, { "bbox": [ 67, 309, 291, 564 ], "type": "text", "content": " in MagicPIG (Chen et al., 2025). Specifically, " }, { "bbox": [ 67, 309, 291, 564 ], "type": "inline_equation", "content": "\\mathrm{cost}_2" }, { "bbox": [ 67, 309, 291, 564 ], "type": "text", "content": " measures the ratio of computation overhead (FLOPs) compared to full attention, whereas our sparsity ratio measures the ratio of memory access overhead (MOPs) relative to full attention. Since attention modules are typically considered memory-bound (Kim et al., 2023), we argue that the latter metric provides more meaningful insights into potential overhead reduction. Additionally, the initial 4 tokens and the most recent 64 tokens are statically preserved to align with MagicPIG (Xiao et al., 2024). The detailed configurations for each method are shown in Table 3." } ] } ], "index": 8 }, { "type": "table", "bbox": [ 71, 574, 287, 654 ], "blocks": [ { "bbox": [ 71, 574, 287, 654 ], "lines": [ { "bbox": [ 71, 574, 287, 654 ], "spans": [ { "bbox": [ 71, 574, 287, 654 ], "type": "table", "html": "
<table><tr><th>Method</th><th>Configurations</th></tr>
<tr><td>H2O</td><td>hh_size = 0.06 × input_length</td></tr>
<tr><td>SnapKV</td><td>prompt_capacity = 0.06 × input_length</td></tr>
<tr><td>Quest</td><td>page_size = 32, ratio = 0.06</td></tr>
<tr><td>MagicPIG</td><td>K = 10, L = 150</td></tr>
<tr><td>A²ATS</td><td>topk = 0.03</td></tr></table>
", "image_path": "d99d37f410d6c7a71df0bbe37409691e385f1bb0bca0f5a168135d39aa2c691c.jpg" } ] } ], "index": 9, "angle": 0, "type": "table_body" } ], "index": 9 }, { "bbox": [ 68, 661, 289, 672 ], "lines": [ { "bbox": [ 68, 661, 289, 672 ], "spans": [ { "bbox": [ 68, 661, 289, 672 ], "type": "text", "content": "Table 3: Configurations of KV cache reduction methods." } ] } ], "index": 10, "angle": 0, "type": "text" }, { "bbox": [ 67, 698, 261, 714 ], "type": "title", "angle": 0, "lines": [ { "bbox": [ 67, 698, 261, 714 ], "spans": [ { "bbox": [ 67, 698, 261, 714 ], "type": "text", "content": "B Detailed Ablation Study Analysis" } ] } ], "index": 11 }, { "bbox": [ 67, 720, 291, 774 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 67, 720, 291, 774 ], "spans": [ { "bbox": [ 67, 720, 291, 774 ], "type": "text", "content": "Table 2 validates the effectiveness of WRoPE and query-aware vector quantization on improving model accuracy. Experimental results draw the following conclusions:" } ] } ], "index": 12 }, { "bbox": [ 302, 71, 525, 165 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 302, 71, 525, 165 ], "spans": [ { "bbox": [ 302, 71, 525, 165 ], "type": "text", "content": "WRoPE is fundamental to attention score approximation using shared codebooks. WRoPE achieves an average improvement of " }, { "bbox": [ 302, 71, 525, 165 ], "type": "inline_equation", "content": "+4.5" }, { "bbox": [ 302, 71, 525, 165 ], "type": "text", "content": " over the baseline, with consistent gains across all context lengths. This result confirms WRoPE's critical role in preventing representation divergence of key states caused by positional embedding." 
} ] } ], "index": 13 }, { "bbox": [ 302, 167, 526, 449 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 302, 167, 526, 449 ], "spans": [ { "bbox": [ 302, 167, 526, 449 ], "type": "text", "content": "Query-aware vector quantization provides a further improvement in model accuracy by aligning the objectives of vector quantization and attention score approximation. Our full method, incorporating query-aware vector quantization, demonstrates further improvements, particularly at longer context lengths (+1.5 at 64K and +1.2 at 96K). However, query-aware vector quantization alone does not outperform conventional vector quantization. This is because query-aware vector quantization relies on the estimation of the second-moment matrix of queries, i.e., " }, { "bbox": [ 302, 167, 526, 449 ], "type": "inline_equation", "content": "H = \\mathbb{E}_{\\tilde{q} \\sim \\mathcal{D}_{\\text{query}}}[\\tilde{q}^{\\top}\\tilde{q}]" }, { "bbox": [ 302, 167, 526, 449 ], "type": "text", "content": " (Section 4.2, Equation 13). Positional dependencies induced by RoPE cause representation divergence in " }, { "bbox": [ 302, 167, 526, 449 ], "type": "inline_equation", "content": "\\tilde{q}^{\\top}\\tilde{q}" }, { "bbox": [ 302, 167, 526, 449 ], "type": "text", "content": ", hindering the accurate estimation of " }, { "bbox": [ 302, 167, 526, 449 ], "type": "inline_equation", "content": "H" }, { "bbox": [ 302, 167, 526, 449 ], "type": "text", "content": " and leading to decreased accuracy. The synergy between WRoPE (which eliminates positional dependencies) and query-aware quantization (which aligns the objective between vector quantization and attention score approximation) ultimately delivers state-of-the-art performance."
} ] } ], "index": 14 }, { "bbox": [ 302, 460, 524, 474 ], "type": "title", "angle": 0, "lines": [ { "bbox": [ 302, 460, 524, 474 ], "spans": [ { "bbox": [ 302, 460, 524, 474 ], "type": "text", "content": "C Additional Experiments on LongBench" } ] } ], "index": 15 }, { "bbox": [ 302, 481, 525, 587 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 302, 481, 525, 587 ], "spans": [ { "bbox": [ 302, 481, 525, 587 ], "type": "text", "content": "To further validate the effectiveness of our proposed method on real-world tasks, we conducted additional experiments on 6 tasks from Long-Bench (Bai et al., 2024) (HotpotQA, MultiFieldQAen, QMSum, TriviaQA, PassageRetrieval-en and RepoBench-P). The configurations of all KV cache reduction methods are the same as those presented Section 5.1." } ] } ], "index": 16 }, { "bbox": [ 302, 589, 525, 682 ], "type": "text", "angle": 0, "lines": [ { "bbox": [ 302, 589, 525, 682 ], "spans": [ { "bbox": [ 302, 589, 525, 682 ], "type": "text", "content": "Table 4 compares the accuracies of different methods, along with attention sparsity ratios. Experimental results demonstrate that the proposed " }, { "bbox": [ 302, 589, 525, 682 ], "type": "inline_equation", "content": "\\mathrm{A}^2\\mathrm{ATS}" }, { "bbox": [ 302, 589, 525, 682 ], "type": "text", "content": " outperforms other baselines under comparable sparsity, emphasizing the effectiveness of our proposed method on a broader range of long-context tasks." 
} ] } ], "index": 17 } ], "discarded_blocks": [ { "bbox": [ 284, 780, 312, 791 ], "type": "page_number", "angle": 0, "lines": [ { "bbox": [ 284, 780, 312, 791 ], "spans": [ { "bbox": [ 284, 780, 312, 791 ], "type": "text", "content": "12462" } ] } ], "index": 18 } ], "page_size": [ 595, 841 ], "page_idx": 11 }, { "para_blocks": [ { "type": "table", "bbox": [ 117, 359, 477, 459 ], "blocks": [ { "bbox": [ 117, 359, 477, 459 ], "lines": [ { "bbox": [ 117, 359, 477, 459 ], "spans": [ { "bbox": [ 117, 359, 477, 459 ], "type": "table", "html": "
<table><thead><tr><th rowspan="2">Models</th><th rowspan="2">Sparsity↓</th><th colspan="7">Accuracy↑</th></tr>
<tr><th>HpQA</th><th>MfQA</th><th>QMS</th><th>TrQA</th><th>PRe</th><th>RBP</th><th>Average</th></tr></thead><tbody>
<tr><td>Llama-3.1-8B-Instruct</td><td>1.00</td><td>58.41</td><td>56.43</td><td>25.01</td><td>91.47</td><td>100.0</td><td>56.37</td><td>64.62</td></tr>
<tr><td>H2O</td><td>0.060</td><td>57.44</td><td>49.72</td><td>24.09</td><td>91.55</td><td>99.50</td><td>52.28</td><td>62.43</td></tr>
<tr><td>SnapKV</td><td>0.060</td><td>57.38</td><td>54.12</td><td>24.59</td><td>90.92</td><td>99.50</td><td>54.36</td><td>63.48</td></tr>
<tr><td>Quest</td><td>0.060</td><td>57.79</td><td>55.49</td><td>24.58</td><td>90.70</td><td>99.50</td><td>53.94</td><td>63.67</td></tr>
<tr><td>MagicPIG</td><td>0.068</td><td>57.80</td><td>55.99</td><td>25.26</td><td>90.82</td><td>99.50</td><td>55.31</td><td>64.11</td></tr>
<tr><td>A2ATS</td><td>0.060</td><td>58.03</td><td>56.84</td><td>25.03</td><td>91.63</td><td>99.50</td><td>56.27</td><td>64.55</td></tr></tbody></table>
", "image_path": "b6c08bc2dca7b916f9adfebe4cae6ab3005f668a6fefce8f7e487bdfc3ef795b.jpg" } ] } ], "index": 0, "angle": 0, "type": "table_body" } ], "index": 0 }, { "bbox": [ 204, 470, 388, 481 ], "lines": [ { "bbox": [ 204, 470, 388, 481 ], "spans": [ { "bbox": [ 204, 470, 388, 481 ], "type": "text", "content": "Table 4: Experimental results on LongBench." } ] } ], "index": 1, "angle": 0, "type": "text" } ], "discarded_blocks": [ { "bbox": [ 284, 781, 312, 791 ], "type": "page_number", "angle": 0, "lines": [ { "bbox": [ 284, 781, 312, 791 ], "spans": [ { "bbox": [ 284, 781, 312, 791 ], "type": "text", "content": "12463" } ] } ], "index": 2 } ], "page_size": [ 595, 841 ], "page_idx": 12 } ], "_backend": "vlm", "_version_name": "2.6.4" }