---
license: apache-2.0
language:
- en
- zh
base_model:
- Qwen/Qwen3-4B-Base
pipeline_tag: text-generation
---
<div align="center">
<h1>Kwai Summary Attention (KSA)</h1>
<p align="center">
<strong>Efficient long-context modeling via learnable summary tokens</strong>
</p>
<p align="center">
<a href="https://arxiv.org/abs/2604.24432">
<img alt="Paper" src="https://img.shields.io/badge/Paper-arXiv%3A2604.24432-b31b1b?logo=arxiv" />
</a>
<a href="https://github.com/Kuaishou-OneRec/KSA">
<img alt="GitHub" src="https://img.shields.io/badge/GitHub-Kuaishou--OneRec-black?logo=github" />
</a>
<a href="#-license">
<img alt="License" src="https://img.shields.io/badge/License-Apache%202.0-green" />
</a>
</p>
<p align="center">
<a href="README.md">English</a> | <a href="README_zh.md">δΈζ</a>
</p>
</div>
<br>
## 📖 Introduction
**Kwai Summary Attention (KSA)** is an efficient attention mechanism that compresses historical context into a small set of *learnable summary tokens* inserted at regular chunk boundaries. Unlike GQA/MLA, which keep one cache entry per token, and unlike sliding-window or linear attention, which discard or lossily compress distant history, KSA takes an **intermediate path**: the KV cache scales as **O(N/R)** with a semantic-level compression ratio R, trading a small amount of memory for *complete, referential, and interpretable* retention of long-range dependencies.
This repository contains:
- **Muse** training framework with the Qwen3 + Summary Attention model.
- A block-sparse training / prefill **kernel** for Summary Attention.
- A ring-buffer **KV cache** implementation for decoding, packaged as a HuggingFace `trust_remote_code` template.
- A full end-to-end **pretraining recipe** that progressively extends the 1.9B Qwen3-architecture hybrid model from 8k to 128k context.
- Weight conversion utilities (DCP → HuggingFace safetensors) and an inference sanity-check script.
<p align="center"><img src="./assets/figures/mainmodel.png" width="80%" alt="KSA hybrid architecture: summary tokens interleaved with text tokens, with Summary Attention layers and Full Attention layers in a 3:1 hybrid ratio." /></p>
<p align="center"><em>Figure: KSA hybrid architecture. Summary tokens interleave with text tokens; Summary Attention and Full Attention layers are stacked in a 3:1 ratio.</em></p>
## 🔥 News
- **2026-04-28** – The KSA technical report is released on arXiv: [arXiv:2604.24432](https://arxiv.org/abs/2604.24432).
- **2026-04-28** – Code, training recipes, the block-sparse kernel, and the HuggingFace `trust_remote_code` template are open-sourced in this repository.
- **2026-05-08** – [KSA-4B-base](https://huggingface.co/OpenOneRec/KSA-4B-base) (CPT from Qwen3-4B, 128K context) weights are released on HuggingFace.
## ✨ Highlights
- **Sequence-level KV compression.** Summary tokens partition the sequence into fixed-size chunks; the summary of each chunk acts as a compressed prior of distant history. The KV cache grows as $O(N/R)$ instead of $O(N)$, where $R$ is the chunk-level compression ratio, and the scheme is **orthogonal** to GQA / MLA – the compression ratios multiply (see the worked example after this list).
- **Sliding *chunk* attention, not sliding *window*.** Window boundaries are aligned with chunk boundaries so every past chunk is either fully visible (text) or summarized (summary token), never partially both. This avoids the information gap that naive SWA introduces at window edges.
- **Hybrid by default.** The released recipe uses a `3:1` *Summary : Full* layer interleaving. A small dose of full attention serves as a cross-chunk integrator and stabilizes long-context retrieval.
- **Summary KV cache for decoding.** KV states are laid out as a single contiguous buffer `[scratch | current chunk | sliding chunks (ring) | summary buffer]`. Every decode step reads one contiguous slice – no `cat`, no `gather`, no dense mask materialization. See [`examples/pretrain/hf_template/modeling_qwen3sa.py`](examples/pretrain/hf_template/modeling_qwen3sa.py).
- **Block-sparse training / prefill kernel.** Only non-empty block pairs are loaded from HBM to SRAM, avoiding the $O(L^2)$ mask materialization that would otherwise be infeasible at 128k. Distributed as a prebuilt wheel under [`summary_attention_kernel/`](summary_attention_kernel/).
- **Three-stage training recipe.** Attention distillation → parameter annealing → sequence-length extension, all reproducible via the `run_pretrain_{8,32,64,128}k.sh` launchers.
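As a rough worked illustration of the "ratios multiply" point (my numbers, derived from the 4B configuration listed further below, not measured from the released code):

```python
# Back-of-envelope only: GQA compresses the KV cache along the head axis, summary
# tokens compress it along the sequence axis, so the two reductions multiply.
# Values come from the released 4B config (32 Q / 8 KV heads, summary chunk size 8).
q_heads, kv_heads = 32, 8
chunk_size = 8                      # one summary token per chunk of 8 text tokens

gqa_factor = q_heads / kv_heads     # 4x fewer KV heads than full multi-head attention
summary_factor = chunk_size         # ~8x fewer KV entries for history outside the sliding chunks
print(f"combined KV reduction for distant history: ~{gqa_factor * summary_factor:.0f}x")  # ~32x
```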
## 🤗 Model Zoo
Pretrained checkpoints published on HuggingFace.
| Model | Backbone | Parameters | Context | Training | Link |
| :------------ | :---------- | :--------- | :------ | :-------------------- | :---- |
| KSA-4B-base | Qwen3-4B | 4B | 128k | Continual pretraining | [π€ OpenOneRec/KSA-4B-base](https://huggingface.co/OpenOneRec/KSA-4B-base) |
The 1.9B *from-scratch* configuration is provided as a reproducible recipe only; no 1.9B weights will be released.
## 🏗️ Method & Architecture
KSA compresses long context at the *semantic* level by inserting a small number of **learnable summary tokens** at fixed chunk boundaries, then treating the past as a sequence of chunks, each exposed either as full text or as its summary state.
### 1. Sliding Chunk Attention
<p align="center"><img src="./assets/figures/sca_vs_swa.png" width="75%" alt="Sliding-window attention may cut through a chunk and lose boundary information; sliding-chunk attention aligns with chunk boundaries and guarantees clean information routing." /></p>
<p align="center"><em>Figure: Sliding-chunk attention aligns windows to chunk boundaries. Naive sliding windows cut through chunks and drop boundary information.</em></p>
If the window boundary cuts through a chunk, that chunk is neither fully covered by text tokens nor wholly summarized – its information falls through the cracks. KSA aligns windows to chunks so every past chunk is *exclusively* accessed either as full text (inside the window) or via its summary token (outside), with no double-counting and no gaps.
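The visibility rule is easiest to see as a mask over text tokens. The sketch below is illustrative only – it is not the repository's block-sparse kernel, it omits the summary-token columns, and it assumes the window counts the query's own chunk:

```python
import torch

def sliding_chunk_text_mask(seq_len: int, chunk_size: int, num_sliding_chunks: int) -> torch.Tensor:
    """Boolean [seq_len, seq_len] mask for the *text* part of sliding-chunk attention.

    A query may attend to a past text token only if that token's chunk is one of the
    `num_sliding_chunks` most recent chunks (here counted to include the query's own
    chunk). Older chunks would be reachable only via their summary tokens (not shown).
    """
    pos = torch.arange(seq_len)
    chunk = pos // chunk_size                                   # chunk index of each position
    causal = pos[None, :] <= pos[:, None]                       # key position <= query position
    in_window = (chunk[:, None] - chunk[None, :]) < num_sliding_chunks
    return causal & in_window

# chunk_size=4, window of 2 chunks: the query at position 10 (chunk 2) sees chunks 1-2
# as raw text, while chunk 0 is visible only through its summary token.
mask = sliding_chunk_text_mask(seq_len=12, chunk_size=4, num_sliding_chunks=2)
print(mask[10].int().tolist())   # [0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0]
```

Note how the cutoff always falls on a chunk boundary (position 4 here), so no chunk is ever half inside and half outside the window – the failure mode of a token-aligned sliding window.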
### 2. Ring-buffer KV Cache
<p align="center"><img src="./assets/figures/buffer_layout.png" width="82%" alt="Contiguous KV cache layout for KSA decoding: scratch slot, current chunk, sliding-chunk ring, and summary token buffer all share a single physical tensor." /></p>
<p align="center"><em>Figure: Decoding KV cache layout. Every logical region is a contiguous slice of a single physical tensor.</em></p>
Every logical region (scratch, current chunk, sliding ring, summary buffer) is a contiguous slice of a single tensor. Text attention and summary attention each read one span. RoPE is applied *before* caching, so physical position in the ring is independent of logical position. Chunk eviction is an in-place copy into the oldest ring slot; no reallocation, no concatenation, no dense mask.
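A heavily simplified sketch of this layout and the eviction step (illustrative only – the scratch region, summary-token writes, and the actual attention reads are omitted, and the shapes and names are mine, not the repository's):

```python
import torch

class RingChunkKVCacheSketch:
    """One contiguous tensor per layer, sliced as [current chunk | sliding-chunk ring | summary buffer].

    Because RoPE is applied before caching, a chunk's physical slot in the ring does not
    need to match its logical position, so evicting a finished chunk is a plain in-place copy.
    """

    def __init__(self, kv_heads: int, head_dim: int, chunk_size: int,
                 num_sliding_chunks: int, max_summaries: int):
        self.chunk_size = chunk_size
        self.num_sliding_chunks = num_sliding_chunks
        total = chunk_size + num_sliding_chunks * chunk_size + max_summaries
        self.k = torch.zeros(1, kv_heads, total, head_dim)
        self.v = torch.zeros_like(self.k)
        self.cur_len = 0      # tokens written into the current-chunk region so far
        self.ring_ptr = 0     # which ring slot the next evicted chunk overwrites

    def append(self, k_tok: torch.Tensor, v_tok: torch.Tensor) -> None:
        """Write one decoded token's KV; evict the current chunk in place once it is full."""
        self.k[:, :, self.cur_len] = k_tok
        self.v[:, :, self.cur_len] = v_tok
        self.cur_len += 1
        if self.cur_len == self.chunk_size:
            slot = self.chunk_size + (self.ring_ptr % self.num_sliding_chunks) * self.chunk_size
            self.k[:, :, slot:slot + self.chunk_size] = self.k[:, :, :self.chunk_size]
            self.v[:, :, slot:slot + self.chunk_size] = self.v[:, :, :self.chunk_size]
            self.ring_ptr += 1
            self.cur_len = 0  # the current-chunk region is simply reused for the next chunk
```

The real implementation lives in [`examples/pretrain/hf_template/modeling_qwen3sa.py`](examples/pretrain/hf_template/modeling_qwen3sa.py); the point of the sketch is that every update is a slice assignment into one preallocated buffer – no `torch.cat`, no reallocation.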
### 3. Sub-linear KV Scaling
<p align="center"><img src="./assets/figures/kv_cache_comparison.png" width="65%" alt="KV cache growth vs. sequence length: Full attention grows linearly, SWA is flat but loses distant history, KSA grows sub-linearly while preserving a compressed trace of all history." /></p>
<p align="center"><em>Figure: KV cache growth vs. sequence length.</em></p>
### 4. Training Recipe
Three stages, repeated at each target sequence length (8k → 32k → 64k → 128k):
1. **Attention distillation** – warm up the summary-attention parameters against a Full-Attention teacher.
2. **Parameter annealing** – unfreeze the full model and jointly optimize.
3. **Sequence-length extension** – scale `max_position_embeddings` and resume with an adjusted RoPE base (sketched below).
See [`examples/pretrain/README.md`](examples/pretrain/README.md) for per-stage hyperparameters.
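A sketch of what the stage-3 change amounts to at the config level (values and paths are hypothetical; the released `run_pretrain_{32,64,128}k.sh` launchers set this up as part of the actual recipe):

```python
# Illustrative only: "scale max_position_embeddings and resume with an adjusted RoPE base"
# expressed as a config edit. The concrete per-stage values live in the launcher scripts.
from transformers import AutoConfig

cfg = AutoConfig.from_pretrained("/path/to/previous_stage/hf", trust_remote_code=True)
cfg.max_position_embeddings = 131_072   # e.g. extending 64k -> 128k
cfg.rope_theta = 5_000_000              # hypothetical enlarged RoPE base for the longer context
cfg.save_pretrained("/path/to/next_stage_config")
```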
### Released model configuration
The release ships two recipes: a 1.9B hybrid model trained from scratch (recipe only – no weights released) and a 4B continual-pretraining variant.
| Configuration | From Scratch (1.9B) | Continual Pretraining (4B) |
| :---------------------------- | :------------------ | :------------------------- |
| Number of layers | 24 | 36 |
| Hidden size | 2048 | 2560 |
| Intermediate size | 6144 | 9728 |
| Attention heads (Q / KV) | 16 / 16 | 32 / 8 |
| Head dimension | 128 | 128 |
| Hybrid ratio (Summary : Full) | 3 : 1 | 3 : 1 |
| Summary chunk size | 8 | 8 |
| Sliding chunk number | 128 | 128 |
| Tied embeddings | False | True |
The config lives at [`examples/pretrain/model_config/model_config_1b9_hybrid.json`](examples/pretrain/model_config/model_config_1b9_hybrid.json) and is loaded via the `Qwen3SummaryAttentionConfig` / `Qwen3SummaryModel` registered in `muse/models/`.
## 📊 Performance
We evaluate KSA under two settings: **Continual Pretraining (CPT)** from a Qwen3-4B-base checkpoint (85B tokens), and **Train-from-Scratch** at 1.9B (400B tokens). Full results are in the [technical report](https://arxiv.org/abs/2604.24432); the highlights below are taken directly from its tables.
### Long-context retrieval – RULER (CPT, 4B)
| Benchmark | Full | Hybrid-SWA | Hybrid-SCA | Hybrid-Linear | KSA | **Hybrid-KSA** |
| :---------- | :-------- | :--------- | :--------- | :------------ | :---- | :------------- |
| RULER-4K | 92.88 | 91.30 | 86.02 | 86.39 | 91.55 | **92.97** |
| RULER-8K | **91.38** | 88.03 | 84.28 | 83.86 | 86.78 | 90.53 |
| RULER-16K | **89.12** | 82.87 | 80.67 | 78.06 | 84.78 | 88.86 |
| RULER-32K | 84.74 | 78.94 | 76.89 | 76.48 | 80.30 | **86.65** |
| RULER-64K | **78.16** | 73.88 | 68.88 | 73.50 | 76.09 | 76.04 |
| RULER-128K | 65.86 | 66.27 | 60.94 | 67.98 | 66.81 | **71.67** |
Hybrid-KSA leads at 4K, 32K, and 128K, and at **128K it surpasses Full attention by +5.81 points** while operating with a substantially smaller KV cache. Across all RULER lengths it is the strongest sub-quadratic alternative to Full attention.
### General benchmarks (CPT, 4B)
| Benchmark | Full | Hybrid-SWA | Hybrid-SCA | Hybrid-Linear | KSA | **Hybrid-KSA** |
| :-------- | :-------- | :--------- | :--------- | :------------ | :---- | :------------- |
| MMLU | **71.83** | 70.57 | 69.83 | 64.33 | 70.73 | 70.50 |
| CMMLU | **75.00** | 73.69 | 72.59 | 68.41 | 73.29 | 72.63 |
| C-Eval | **73.66** | 72.36 | 71.66 | 67.42 | 72.14 | 72.66 |
| MMLU-Pro | **46.36** | 45.23 | 45.11 | 38.83 | 45.70 | 45.39 |
| CMath | 83.41 | **84.84** | 83.16 | 79.09 | 84.58 | 84.25 |
| GSM8K | **82.75** | 81.92 | 80.10 | 72.44 | 81.09 | 79.50 |
| MATH | 47.48 | **48.24** | 47.45 | 42.57 | 48.15 | 47.56 |
| MBPP | 61.30 | 61.70 | 59.60 | 55.30 | 61.50 | **62.20** |
| HumanEval | 58.54 | 61.89 | 61.89 | 54.58 | 60.97 | **62.50** |
| **Avg.** | 73.50 | 72.12 | 69.94 | 67.28 | 72.30 | **73.59** |
KSA preserves general capability under CPT: Hybrid-KSA's average (**73.59**) edges out Full attention (**73.50**), making it the only sub-quadratic alternative to close the gap to Full entirely.
### Train-from-scratch headlines (1.9B, 400B tokens)
- **RULER-128K**: Hybrid-KSA **65.35** vs. Full attention **48.75** (**+16.60**). Hybrid-KSA stays robust as length grows (80.65 → 65.35 from 4K to 128K), while Full attention collapses (76.08 → 48.75).
- **GSM8K**: Hybrid-KSA **59.14** vs. Full **48.29** (**+10.85**). **MATH**: **36.92** vs. **23.38** (**+13.54**).
- **MBPP / HumanEval**: best of all configurations at **36.40 / 31.71**.
- **Training loss**: Hybrid-KSA reaches the lowest final loss (**1.524**), below Hybrid-GDN (1.534), Hybrid-SWA (1.550), and Full (1.572).
### Needle-in-a-Haystack & RULER-128K subtasks (CPT)
Hybrid-KSA achieves **near-perfect single-needle retrieval across 4K–128K** at all needle depths, with only a minor dip at 128K. On RULER-128K subtasks it leads on **NIAH-Multivalue (98.75, +10.63 over Full)**, **VT (90.50, +30.0 over Full)**, **FWE (65.84)**, and **SQuAD (42.50)**.
### Inference efficiency (4B, 128K context)
- **KV cache**: 7.5 GB vs. 18.6 GB for Full attention – a **2.5× reduction**.
- **Decode throughput** at 16K prefill: **1.06× that of Full attention**, vs. 0.73× for Hybrid-SWA and 0.81× for Hybrid-Ring-Linear.
## 🚀 Quick Start
### 1. Build the reference image
Ubuntu 24.04 + CUDA 12.6 + Python 3.12 + PyTorch 2.6.0 + FlashAttention 2.7.4.post1, with the block-sparse kernel preinstalled:
```bash
docker build -t ksa-train -f dockerfile/Dockerfile .
```
Versions are pinned from an actual training-host snapshot; see [`dockerfile/requirements.txt`](dockerfile/requirements.txt) for the full list. If you prefer bare-metal installation, mirror the same pins.
### 2. Configure environment variables
```bash
cp .env.example .env # then edit paths
bash set_env.sh
```
The run scripts auto-export `PYTHONPATH=$PWD:$PYTHONPATH`, so keeping the repo root on `PYTHONPATH` is sufficient.
### 3. Pretrain (progressive length extension)
Four stages, each resuming weights from the previous:
```bash
bash examples/pretrain/run_pretrain_8k.sh # 1. from scratch at 8k
bash examples/pretrain/run_pretrain_32k.sh # 2. extend to 32k
bash examples/pretrain/run_pretrain_64k.sh # 3. extend to 64k
bash examples/pretrain/run_pretrain_128k.sh # 4. extend to 128k
```
Edit `CHECKPOINT_DIR` / `OUTPUT_DIR` at the top of each script to match your storage layout. Each stage launches via `mpirun` and writes DCP checkpoints + dataloader state to `$OUTPUT_DIR/global_stepN/`. See [`examples/pretrain/README.md`](examples/pretrain/README.md) for mid-run resume, chunked-CE toggles, and per-stage hyperparameters.
### 4. Convert a trained checkpoint to HuggingFace
```bash
bash examples/pretrain/convert/convert_muse_to_hf.sh \
/path/to/muse_outputs/1b9_sa_hybrid_128k \
global_step5000 \
examples/pretrain/hf_template
```
The converted HF directory lands at `<OUTPUT_DIR>/<STEP>/hf/` and contains the remapped safetensors plus the `modeling_qwen3sa.py` / `summary_context.py` / tokenizer files from `hf_template/`. See [`examples/pretrain/hf_template/README.md`](examples/pretrain/hf_template/README.md) for the expected template contents.
### 5. Inference – sanity-check a converted model
```bash
python examples/inference/inference.py \
--model_path /path/to/global_step5000/hf \
--prompt "介绍一下你自己" \
--device cuda:0
```
The inference path uses HuggingFace's `AutoModelForCausalLM` with `trust_remote_code=True` and goes through the ring-buffer KV cache defined in `hf_template/modeling_qwen3sa.py` – no framework-specific glue required.
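Equivalently, because the converted directory is a standard HuggingFace model folder, you can load it directly – a minimal sketch (the path is a placeholder):

```python
# Minimal direct-loading sketch. trust_remote_code=True pulls in modeling_qwen3sa.py,
# so generate() runs through the ring-buffer summary KV cache with no extra glue.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "/path/to/global_step5000/hf"
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_path, trust_remote_code=True, torch_dtype=torch.bfloat16
).to("cuda:0").eval()

inputs = tokenizer("介绍一下你自己", return_tensors="pt").to("cuda:0")
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```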
## 📁 Repository Layout
```
.
├── muse/                                  # Training framework (models, layers, training loop)
│   ├── models/qwen3_sa/                   # Qwen3 + Summary Attention model
│   ├── layers/summary_context.py          # SummaryBatchContext + mask helpers
│   └── ...
├── recipes/
│   └── pretrain_kai_summary_unified.py    # Main pretrain entry
├── summary_attention_kernel/
│   ├── summary_attn-*.whl                 # Block-sparse SA kernel (training + prefill)
│   └── flash_attn_cute-*.whl              # CuTe-based FlashAttention build used by the kernel
├── examples/
│   ├── pretrain/                          # Progressive 8k→128k recipe
│   │   ├── model_config/                  # model_config_1b9_hybrid.json
│   │   ├── dataset_config/                # per-seq-length mmap dataset specs
│   │   ├── run_pretrain_{8,32,64,128}k.sh
│   │   ├── convert/                       # DCP → HF safetensors
│   │   └── hf_template/                   # HF-compatible modeling + config template
│   └── inference/
│       └── inference.py                   # Quick chat-style sanity check
├── data/                                  # (User-populated) mmap corpora
├── dockerfile/                            # Reference Dockerfile + requirements.txt
└── README.md / README_zh.md
```
## 🛣️ Roadmap
We are actively working on:
- [x] Technical report on arXiv ([arXiv:2604.24432](https://arxiv.org/abs/2604.24432)).
- [x] Release the 4B continual-pretraining checkpoint ([KSA-4B-base](https://huggingface.co/OpenOneRec/KSA-4B-base)).
- [ ] Expanded evaluation scripts for RULER / NIAH / LongBench v2 reproduction.
- [ ] A reference serving stack with the ring-buffer KV cache.
- [ ] Additional ablations and tutorials.
Contributions are welcome – feel free to open an issue or PR.
## 📝 Citation
If you find KSA useful, please cite our technical report:
```bibtex
@techreport{kwai2026ksa,
title = {Kwai Summary Attention Technical Report},
author = {OneRec Team},
year = {2026},
institution = {Kuaishou Technology},
url = {https://arxiv.org/abs/2604.24432}
}
```
## 🛡️ License
The code in this repository is licensed under the **Apache 2.0 License** (see [`LICENSE`](LICENSE)). Model weights, when released, will be subject to their own license agreements.
## 🙏 Acknowledgements
KSA is built upon and inspired by the open-source ecosystem. We would like to thank:
- **Qwen3** – for the base architecture and tokenizer that KSA extends.
- **FlashAttention** – for the dense-attention primitives our block-sparse kernel composes with.
- **HuggingFace Transformers** – for the model / tokenizer / generation abstractions that make `trust_remote_code` deployment painless.
- **PyTorch distributed training** – for FSDP, DCP, and the communication primitives that make large-scale pretraining tractable.
We sincerely thank these projects for their outstanding work.