<div align="center">

<h1>BinaryAttention: One-Bit QK-Attention for Vision and Diffusion Transformers</h1>

<div>
    <a href='https://github.com/EdwardChasel' target='_blank'>Chaodong Xiao<sup>1,2</sup></a>,
    <a href='https://scholar.google.com.hk/citations?hl=zh-CN&user=UX26wSMAAAAJ' target='_blank'>Zhengqiang Zhang<sup>1,2</sup></a>,
    <a href='https://www4.comp.polyu.edu.hk/~cslzhang/' target='_blank'>Lei Zhang<sup>1,2,†</sup></a>
</div>
<div>
    <sup>1</sup>The Hong Kong Polytechnic University, <sup>2</sup>OPPO Research Institute
</div>
<div>
    (†) corresponding author
</div>

[[📄 arXiv paper]](https://arxiv.org/abs/2603.09582) [[🤗 Hugging Face]](https://huggingface.co/EdwardChasel/BinaryAttention)

---

</div>

#### 🚩 Accepted by CVPR 2026

## 🎬 Overview

<p align="center">
    <img src="assets/main.png" alt="main" width="80%">
</p>

## 📜 Abstract

Transformers have achieved widespread and remarkable success, yet the computational complexity of their attention modules remains a major bottleneck for vision tasks. Existing methods mainly employ 8-bit or 4-bit quantization to balance efficiency and accuracy. In this paper, we show, with theoretical justification, that binarizing attention preserves the essential similarity relationships, and we propose BinaryAttention, an effective method for fast and accurate 1-bit QK-attention. Specifically, we retain only the signs of the queries and keys when computing attention, replacing floating-point dot products with bit-wise operations and significantly reducing the computational cost. We mitigate the information loss inherent to 1-bit quantization by incorporating a learnable bias, enabling end-to-end acceleration. To maintain the accuracy of attention, we adopt quantization-aware training and self-distillation, reducing quantization error while ensuring sign-aligned similarity. BinaryAttention is more than 2× faster than FlashAttention2 on A100 GPUs. Extensive experiments on vision transformer and diffusion transformer benchmarks demonstrate that BinaryAttention matches or even exceeds full-precision attention, validating its effectiveness. Our work provides a highly efficient and effective alternative to full-precision attention, pushing the frontier of low-bit vision and diffusion transformers.
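
As a rough illustration of the idea described above (not the repository's implementation), the NumPy sketch below keeps only the signs of the queries and keys, scores attention by sign agreement plus a bias term, and checks the bit-trick identity that lets the dot product of sign vectors be computed with XOR and popcount. All function and variable names here are illustrative.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def binary_qk_attention(q, k, v, bias=0.0):
    # Keep only the signs of queries and keys (1-bit quantization).
    d = q.shape[-1]
    q_sign = np.where(q >= 0, 1.0, -1.0)
    k_sign = np.where(k >= 0, 1.0, -1.0)
    # Scores come from sign agreement; a (learnable) bias offsets the 1-bit information loss.
    scores = (q_sign @ k_sign.T) / np.sqrt(d) + bias
    return softmax(scores) @ v

# Bit-trick behind the speedup: for sign vectors, q·k = d - 2*popcount(q_bits XOR k_bits),
# so the floating-point dot product can be replaced by bit-wise operations on packed sign bits.
rng = np.random.default_rng(0)
q, k = rng.standard_normal(16), rng.standard_normal(16)
qb, kb = q >= 0, k >= 0
dot_float = int(np.where(qb, 1, -1) @ np.where(kb, 1, -1))
dot_bits = len(q) - 2 * int(np.count_nonzero(qb ^ kb))
assert dot_float == dot_bits
```

Note that only the signs of `q` enter the scores, so the output is invariant to any positive rescaling of the queries.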

## 🎯 Main Results

* ### Image Classification on ImageNet-1K

<p align="center">
    <img src="assets/classification.png" alt="classification" width="80%">
</p>

* ### Object Detection and Instance Segmentation on COCO

<p align="center">
    <img src="assets/detection.png" alt="detection" width="80%">
</p>

* ### Semantic Segmentation on ADE20K

<p align="center">
    <img src="assets/segmentation.png" alt="segmentation" width="80%">
</p>

* ### Image Generation on ImageNet-1K

<p align="center">
    <img src="assets/generation.png" alt="generation" width="80%">
</p>

## 🛠️ Getting Started

```bash
# 1. Clone the repository
git clone https://github.com/EdwardChasel/BinaryAttention.git
cd BinaryAttention

# 2. Create and activate a new conda environment
conda create -n BinaryAttention python=3.10
conda activate BinaryAttention

# 3. Install dependencies
pip install --upgrade pip
pip install -r requirements.txt
```

## ✨ Pre-trained Models

<details>
<summary> ImageNet-1K Image Classification </summary>
<br>

[🤗 Hugging Face checkpoints](https://huggingface.co/EdwardChasel/BinaryAttention/tree/main/checkpoints)

<div>

| name | pretrain | resolution | acc@1 | #params | OPs | download |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| BinaryAttention-T | ImageNet-1K | 224×224 | 72.88 | 6M | 1.1G | [ckpt](https://drive.google.com/file/d/1Ii08AKRvhyTxN1EQKj4KGVC5y_zzCwIe/view?usp=sharing) |
| BinaryAttention-S | ImageNet-1K | 224×224 | 80.24 | 22M | 4.3G | [ckpt](https://drive.google.com/file/d/1vjBI85DrnWNTbzS-UpUtEptoyq0uEVid/view?usp=sharing) |
| BinaryAttention-B | ImageNet-1K | 224×224 | 82.04 | 87M | 17.0G | [ckpt](https://drive.google.com/file/d/1Sk52vY8SdNQR5QKuj6KKvc_FYGDYTrMj/view?usp=sharing) |
| BinaryAttention-B | ImageNet-1K | 384×384 | 83.64 | 87M | 50.2G | [ckpt](https://drive.google.com/file/d/1c8usSZQEPTFPzxIjKoxVi_ql1rxfXA6V/view?usp=sharing) |

</div>
</details>

## 📚 Data Preparation

ImageNet is an image database organized according to the WordNet hierarchy. Download and extract the ImageNet train and val images from http://image-net.org/, then organize the data into the following directory structure:

```
imagenet/
├── train/
│   ├── n01440764/        (example synset ID)
│   │   ├── image1.JPEG
│   │   ├── image2.JPEG
│   │   └── ...
│   ├── n01443537/        (another synset ID)
│   │   └── ...
│   └── ...
└── val/
    ├── n01440764/        (example synset ID)
    │   ├── image1.JPEG
    │   └── ...
    └── ...
```
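
To sanity-check the layout before training, a small helper like the following can count synset folders and images per split. This script is a hypothetical convenience, not part of the repository:

```python
from pathlib import Path

def summarize_imagenet(root):
    """Return {split: (num_synset_dirs, num_JPEG_images)} for a layout check."""
    summary = {}
    for split in ("train", "val"):
        split_dir = Path(root) / split
        synsets = [d for d in split_dir.iterdir() if d.is_dir()]
        n_images = sum(1 for d in synsets for f in d.iterdir() if f.suffix == ".JPEG")
        summary[split] = (len(synsets), n_images)
    return summary
```

A correctly prepared ImageNet-1K tree should report 1000 synset directories in each split.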

## 🚀 Quick Start

To train BinaryAttention models for classification on ImageNet, use the following command, choosing the model, input size, and paths for your configuration:

```bash
python -m torch.distributed.launch --nproc_per_node=8 --use_env main.py \
    --model </name/of/model> \
    --attn-quant --attn-bias --pv-quant \
    --batch-size 128 \
    --input-size </size/of/input/image> \
    --epochs 300 --lr 5e-5 --min-lr 5e-6 --weight-decay 0.02 \
    --finetune </path/of/full-precision/checkpoint> \
    --data-path </path/of/dataset> \
    --output_dir </path/of/output>
    # Optional; distillation is disabled for tiny and enabled for small and base by default:
    # --distillation-type hard \
    # --teacher-path </path/of/full-precision/checkpoint>
```

To evaluate with pre-trained weights:

```bash
python main.py --eval \
    --resume </path/of/checkpoint> \
    --data-path </path/of/dataset> \
    --model </name/of/model> \
    --attn-quant --attn-bias --pv-quant \
    --input-size </size/of/input>
```

## 🖊️ Citation

```BibTeX
@inproceedings{xiao2026binary,
    title={BinaryAttention: One-Bit QK-Attention for Vision and Diffusion Transformers},
    author={Xiao, Chaodong and Zhang, Zhengqiang and Zhang, Lei},
    booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
    year={2026}
}
```

## 🙏 Acknowledgments

This project builds largely on [DeiT](https://github.com/facebookresearch/deit) and [SageAttention](https://github.com/thu-ml/SageAttention). We are truly grateful for their excellent work.

## 🎫 License

This project is released under the [Apache 2.0 license](LICENSE).