---
pipeline_tag: image-classification
license: mit
---

# Breaking the Low-Rank Dilemma of Linear Attention: RAVLT Model Card

This model card describes the Rank-Augmented Vision Linear Transformer (RAVLT), introduced in the paper "[Breaking the Low-Rank Dilemma of Linear Attention](https://arxiv.org/abs/2411.07635)". RAVLT achieves state-of-the-art performance on ImageNet-1k classification while maintaining linear complexity.

**Key Features:**

* High accuracy: Achieves 84.4% Top-1 accuracy on ImageNet-1k (RAVLT-S).
* Parameter efficiency: Uses only 26M parameters (RAVLT-S).
* Computational efficiency: Requires only 4.6G FLOPs (RAVLT-S).
* Linear complexity: attention cost scales linearly with the number of tokens.

RAVLT is based on Rank-Augmented Linear Attention (RALA), a novel attention mechanism that addresses the low-rank limitations of standard linear attention.
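
For background on the dilemma RALA targets: standard linear attention replaces `softmax(QK^T)V` with `phi(Q)(phi(K)^T V)`, so the implicit N x N attention map `phi(Q)phi(K)^T` has rank at most d (the head dimension), far below what full softmax attention can express. The sketch below shows this baseline in PyTorch; it is illustrative background only, not RALA itself, whose formulation is given in the paper.

```python
import torch
import torch.nn.functional as F

def linear_attention(q, k, v):
    """Baseline kernelized linear attention, O(N * d^2) per head.

    q, k, v: (batch, heads, N, d). The implicit N x N attention map
    phi(q) @ phi(k).T has rank <= d -- the low-rank bottleneck that
    Rank-Augmented Linear Attention (RALA) is designed to break.
    """
    q, k = F.elu(q) + 1, F.elu(k) + 1           # common positive feature map
    kv = torch.einsum("bhnd,bhne->bhde", k, v)  # d x d summary, linear in N
    z = torch.einsum("bhnd,bhd->bhn", q, k.sum(dim=2)) + 1e-6
    return torch.einsum("bhnd,bhde->bhne", q, kv) / z.unsqueeze(-1)
```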


## Model Variants

Four RAVLT variants were trained, trading off accuracy against parameter count and FLOPs:

| Model    | Params (M) | FLOPs (G) | Checkpoint     |
| -------- | ---------- | --------- | --------------- |
| RAVLT-T  | 15          | 2.4       | [RAVLT-T](https://huggingface.co/aldjalkdf/RAVLT/blob/main/RAVLT_T.pth) |
| RAVLT-S  | 26          | 4.6       | [RAVLT-S](https://huggingface.co/aldjalkdf/RAVLT/blob/main/RAVLT_S.pth) |
| RAVLT-B  | 48          | 9.9       | [RAVLT-B](https://huggingface.co/aldjalkdf/RAVLT/blob/main/RAVLT_B.pth) |
| RAVLT-L  | 95          | 16.0      | [RAVLT-L](https://huggingface.co/aldjalkdf/RAVLT/blob/main/RAVLT_L.pth) |


## How to use (code release pending)

Usage instructions will be added once the code repository is available at https://github.com/qhfan/RALA.
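
In the meantime, the checkpoints above can already be downloaded and inspected. Below is a minimal sketch using `huggingface_hub` and PyTorch; note that the RAVLT model class is not yet public, so the construction step at the end is hypothetical until the official code lands.

```python
import torch
from huggingface_hub import hf_hub_download

# Fetch the RAVLT-S weights from the Hub.
ckpt_path = hf_hub_download(repo_id="aldjalkdf/RAVLT", filename="RAVLT_S.pth")

# Load the raw state dict on CPU (weights only; the model
# definition will ship with https://github.com/qhfan/RALA).
state = torch.load(ckpt_path, map_location="cpu")
# Some checkpoints nest the weights under a "model" key; unwrap if so.
if isinstance(state, dict) and "model" in state:
    state = state["model"]
print(f"Loaded {len(state)} tensors")

# Hypothetical once the repo is released (adjust to the official API):
# model = RAVLT_S(num_classes=1000)
# model.load_state_dict(state)
# model.eval()
```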

## Citation

```bibtex
@inproceedings{fan2024breakinglowrank,
  title     = {Breaking the Low-Rank Dilemma of Linear Attention},
  author    = {Qihang Fan and Huaibo Huang and Ran He},
  booktitle = {CVPR},
  year      = {2025},
}
```