Title: VSA: Learning Varied-Size Window Attention in Vision Transformers

URL Source: https://arxiv.org/html/2204.08446

Markdown Content:
1 University of Sydney, Australia   2 JD Explore Academy, China

Email: {yuxu7116,qzha2506}@uni.sydney.edu.au, jing.zhang1@sydney.edu.au, dacheng.tao@gmail.com

###### Abstract
Attention within windows has been widely explored in vision transformers to balance performance, computation complexity, and memory footprint. However, current models adopt a hand-crafted fixed-size window design, which restricts their capacity to model long-term dependencies and adapt to objects of different sizes. To address this drawback, we propose Varied-Size Window Attention (VSA) to learn adaptive window configurations from data. Specifically, based on the tokens within each default window, VSA employs a window regression module to predict the size and location of the target window, _i.e._, the attention area where the key and value tokens are sampled. By adopting VSA independently for each attention head, it can model long-term dependencies, capture rich context from diverse windows, and promote information exchange among overlapped windows. VSA is an easy-to-implement module that can replace the window attention in state-of-the-art representative models with minor modifications and negligible extra computational cost, while improving their performance by a large margin, _e.g._, 1.1% for Swin-T on ImageNet classification. In addition, the performance gain increases when larger images are used for training and testing. Experimental results on more downstream tasks, including object detection, instance segmentation, and semantic segmentation, further demonstrate the superiority of VSA over the vanilla window attention in dealing with objects of different sizes. The code is available at [https://github.com/ViTAE-Transformer/ViTAE-VSA](https://github.com/ViTAE-Transformer/ViTAE-VSA).
1 Introduction
--------------

Recent vision transformers have shown great potential in various vision tasks. By stacking multiple transformer blocks with vanilla attention, ViT[[14](https://arxiv.org/html/2204.08446#bib.bib14)] processes non-overlapping image patches and obtains superior classification performance. However, vanilla attention has quadratic complexity in the input length, making it hard to adapt to vision tasks with high-resolution images as input due to the expensive computational cost. To alleviate this issue, window-based attention[[28](https://arxiv.org/html/2204.08446#bib.bib28)] was proposed to partition the image into local windows and conduct attention within each window, balancing performance, computation complexity, and memory footprint. This mechanism has enabled vision transformers to achieve great success in many downstream visual tasks[[28](https://arxiv.org/html/2204.08446#bib.bib28), [42](https://arxiv.org/html/2204.08446#bib.bib42), [13](https://arxiv.org/html/2204.08446#bib.bib13), [49](https://arxiv.org/html/2204.08446#bib.bib49), [43](https://arxiv.org/html/2204.08446#bib.bib43), [40](https://arxiv.org/html/2204.08446#bib.bib40), [39](https://arxiv.org/html/2204.08446#bib.bib39), [31](https://arxiv.org/html/2204.08446#bib.bib31)]. However, it also enforces a spatial constraint on the transformer's attention distance, _i.e._, within the predefined window at each layer, thereby limiting the transformer's ability to deal with objects at different scales.
Recent works have explored heuristic designs that attend to more tokens to alleviate this spatial constraint. For example, the Swin transformer[[28](https://arxiv.org/html/2204.08446#bib.bib28)] enlarges the window size from 7 × 7 to 12 × 12 when the image size varies from 224 × 224 to 384 × 384, and SwinV2[[27](https://arxiv.org/html/2204.08446#bib.bib27)] sets the window size to 32 × 32 to deal with an image size of 640 × 640. Other methods seek a good trade-off between attending to more tokens and increasing the attention distance, _e.g._, multiple window mechanisms have been explored in Focal attention[[42](https://arxiv.org/html/2204.08446#bib.bib42)], where coarse-granularity tokens are involved to capture long-distance information. Cross-shaped window attention[[13](https://arxiv.org/html/2204.08446#bib.bib13)] relaxes the spatial constraint of the window in the vertical and horizontal directions and allows the transformer to attend to far-away relevant tokens along these two directions, while keeping the constraint along the diagonal direction. Pale[[35](https://arxiv.org/html/2204.08446#bib.bib35)] further increases the diagonal attention distance by attending to tokens in dilated vertical/horizontal directions. These methods have achieved superior performance on image classification tasks by enlarging the attention distance. However, they sacrifice computational efficiency and consume more memory, especially when training large models with high-resolution images. Besides, all these methods determine the window sizes heuristically. Intuitively, using a fixed-size window may be sub-optimal for dealing with objects of different sizes; although stacking more layers could mitigate this issue to some extent, it may also result in more parameters and optimization difficulty. In this paper, we argue that if the window can be relaxed to a varied-size rectangular one, whose size and position are learned directly from data, the transformer can capture rich context from diverse windows and learn more powerful object feature representations.


Figure 1: Comparison of current works (hand-crafted windows) and the proposed VSA (varied-size windows).

Figure 2: Performance with different image sizes.
To this end, we propose a novel Varied-Size Window Attention (VSA) mechanism to learn adaptive window configurations from data. Different from previous window-based transformers, where the query, key, and value tokens are all sampled from the same window as shown in Figure[2](https://arxiv.org/html/2204.08446#S1.F2 "Figure 2 ‣ 1 Introduction ‣ VSA: Learning Varied-Size Window Attention in Vision Transformers")(a), VSA employs a window regression module to predict the size and location of the target window based on the tokens within each default window. The key and value tokens are then sampled from the target window. By adopting VSA independently for each attention head, the attention layers can model long-term dependencies, capture rich context from diverse windows, and promote information exchange among overlapped windows, as illustrated in Figure[2](https://arxiv.org/html/2204.08446#S1.F2 "Figure 2 ‣ 1 Introduction ‣ VSA: Learning Varied-Size Window Attention in Vision Transformers")(b). VSA is an easy-to-implement module that can replace the window attention in state-of-the-art representative models with minor modifications and negligible extra computational cost, while improving their performance by a large margin, _e.g._, 1.1% for Swin-T on ImageNet classification. In addition, the performance gain increases when larger images are used for training and testing, as shown in Figure[2](https://arxiv.org/html/2204.08446#S1.F2 "Figure 2 ‣ 1 Introduction ‣ VSA: Learning Varied-Size Window Attention in Vision Transformers"). With larger images as input, Swin-T with predefined window sizes cannot adapt well to large objects, and the improvement brought by enlarging the image size is marginal, _i.e._, a gain of 0.3% from 224 × 224 to 480 × 480. In contrast, the performance gain of VSA over Swin-T increases significantly from 1.1% to 1.9%, owing to the varied-size window attention. Besides, as VSA effectively promotes information exchange across overlapped windows via token sampling, it does not need the shifted-window mechanism in Swin.
In conclusion, the contribution of this study is threefold. (1) We introduce a novel VSA mechanism that directly learns adaptive window sizes and locations from data. It breaks the spatial constraint of the fixed-size windows in existing works and makes it easier for window-based transformers to adapt to objects at different scales. (2) VSA can serve as an easy-to-implement module to improve various window-based transformers, including but not limited to Swin[[28](https://arxiv.org/html/2204.08446#bib.bib28), [27](https://arxiv.org/html/2204.08446#bib.bib27)] and ViTAEv2[[40](https://arxiv.org/html/2204.08446#bib.bib40), [49](https://arxiv.org/html/2204.08446#bib.bib49)], with minor modifications and negligible extra computational cost. (3) Extensive experimental results on public benchmarks demonstrate the superiority of VSA over the vanilla window attention on various visual tasks, including image classification, object detection, and semantic segmentation.
2 Related Work
--------------
### 2.1 Window-based vision transformers

Vision transformers[[14](https://arxiv.org/html/2204.08446#bib.bib14)] have demonstrated superior performance on many vision tasks by modeling long-term dependencies among local image patches (a.k.a. tokens)[[39](https://arxiv.org/html/2204.08446#bib.bib39), [23](https://arxiv.org/html/2204.08446#bib.bib23)]. However, vanilla full attention suffers from poor training efficiency due to the lack of inductive bias. To improve efficiency, subsequent works either implicitly or explicitly introduce inductive bias into vision transformers[[30](https://arxiv.org/html/2204.08446#bib.bib30), [40](https://arxiv.org/html/2204.08446#bib.bib40), [11](https://arxiv.org/html/2204.08446#bib.bib11), [41](https://arxiv.org/html/2204.08446#bib.bib41)] and obtain superior classification performance. After that, multi-stage designs were explored in [[33](https://arxiv.org/html/2204.08446#bib.bib33), [32](https://arxiv.org/html/2204.08446#bib.bib32), [28](https://arxiv.org/html/2204.08446#bib.bib28), [34](https://arxiv.org/html/2204.08446#bib.bib34), [49](https://arxiv.org/html/2204.08446#bib.bib49)] to better adapt vision transformers to downstream vision tasks. Among them, Swin[[28](https://arxiv.org/html/2204.08446#bib.bib28)] is a representative work. By partitioning the tokens into non-overlapping windows and conducting attention within each window, Swin alleviates the huge computational cost of attention when dealing with larger input images. Although it balances performance, computational cost, and memory footprint well, window-based attention imposes a spatial constraint on the attention distance due to the constant maximum window size. To alleviate this issue, different techniques have been explored to gradually recover the transformer's ability to model long-term dependencies, _e.g._, using additional tokens for efficient cross-window feature exchange or designing delicate windows that allow the transformer layers to attend to far-away tokens in specific directions[[16](https://arxiv.org/html/2204.08446#bib.bib16), [13](https://arxiv.org/html/2204.08446#bib.bib13), [35](https://arxiv.org/html/2204.08446#bib.bib35), [22](https://arxiv.org/html/2204.08446#bib.bib22)]. However, these methods still 1) rely on heuristically designed windows for attention computation and 2) need to stack transformer layers sequentially to enable feature exchange across all windows and model long-term dependencies. Thus, they lack the flexibility to adapt well to inputs of various sizes, since their maximum attention distances are restricted by the constant, data-agnostic window size and the model depth.
Unlike them, the proposed VSA estimates window sizes and locations adaptively based on the input features and calculates attention within such windows. Therefore, VSA allows transformer layers to model long-term dependencies, capture rich context, and promote cross-window information exchange through diverse varied-size windows. As VSA learns the window sizes in a data-driven manner, it helps window-based vision transformers adapt to objects at various scales and thus boosts their performance on image classification, object detection, and semantic segmentation.
### 2.2 Deformable sampling

Deformable sampling has been widely explored to help convolutional networks[[10](https://arxiv.org/html/2204.08446#bib.bib10), [51](https://arxiv.org/html/2204.08446#bib.bib51)] focus on regions of interest and extract better features. A similar mechanism is exploited in Deformable DETR[[52](https://arxiv.org/html/2204.08446#bib.bib52)] to help the transformer detector find and utilize the most valuable token features for object detection in a sparse manner. Recently, DPT[[6](https://arxiv.org/html/2204.08446#bib.bib6)] designed deformable patch merging layers based on PVT[[33](https://arxiv.org/html/2204.08446#bib.bib33)] to help the transformer preserve better features after downsampling. VSA, from another perspective, introduces learnable varied-size window attention into transformers. By flexibly estimating the window sizes and locations for attention calculation, VSA breaks the spatial constraint of fixed-size windows and makes it easier for window-based transformers to adapt to objects at various scales.
3 Method
--------

In this section, we take the Swin transformer[[28](https://arxiv.org/html/2204.08446#bib.bib28)] as an example and give a detailed description of applying VSA to it. The details of incorporating VSA into ViTAE[[49](https://arxiv.org/html/2204.08446#bib.bib49)] are presented in the supplementary material.
### 3.1 Preliminary

We first briefly review the window attention operation in the baseline Swin transformer. Given the input features $X\in\mathcal{R}^{H\times W\times C}$, the Swin transformer employs several window-based attention layers for feature extraction. In each window-based attention layer, the input features are first partitioned into several non-overlapping windows, _i.e._, $\{X_{w}^{i}\in\mathcal{R}^{w\times w\times C}\mid i\in[1,\dots,\frac{H\times W}{w^{2}}]\}$, where $w$ is the predefined window size. The partitioned tokens are then flattened along the spatial dimension and projected to query, key, and value tokens, _i.e._, $\{Q_{w,f}^{i},K_{w,f}^{i},V_{w,f}^{i}\in\mathcal{R}^{w^{2}\times N\times C'}\mid i\in[1,\dots,\frac{H\times W}{w^{2}}]\}$, where $Q,K,V$ represent the query, key, and value tokens, respectively, $N$ denotes the number of attention heads, and $C'$ is the channel dimension of each head. Note that $N\times C'$ equals the channel dimension $C$ of the given features. Given the flattened query, key, and value tokens from the same default window, the window-based attention layers conduct full attention within the window, _i.e._,
$$F_{w,f}^{i}=\mathrm{MHSA}(Q_{w,f}^{i},K_{w,f}^{i},V_{w,f}^{i}).\tag{1}$$
Here, $F_{w,f}^{i}\in\mathcal{R}^{w^{2}\times N\times C'}$ denotes the features after attention, and $\mathrm{MHSA}$ represents the vanilla multi-head self-attention operation[[14](https://arxiv.org/html/2204.08446#bib.bib14)]. Relative position embeddings are utilized during the attention calculation to encode spatial information into the features. The extracted features $F$ are reshaped back to the window shape, _i.e._, $F_{w}^{i}\in\mathcal{R}^{w\times w\times C}$, and added to the input features $X_{w}^{i}$. The same operation is repeated independently for each window, and the features generated from all windows are then concatenated to recover the shape of the input features. After that, an FFN module refines the extracted features; it contains two linear layers with hidden dimension $\alpha C$, where $\alpha$ is the expansion ratio. For notational simplicity, we omit the window index $i$ in the following, since the operations are the same for each window.
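The partition-and-restore bookkeeping described above can be sketched in a few lines. The following is a minimal NumPy illustration of the shape logic (our own sketch, not the authors' released code): an $H\times W\times C$ feature map is split into non-overlapping $w\times w$ windows, flattened to $(HW/w^{2},\,w^{2},\,C)$ tokens, and restored after attention.

```python
import numpy as np

def window_partition(x, w):
    # x: (H, W, C) -> (num_windows, w*w, C), non-overlapping windows
    H, W, C = x.shape
    x = x.reshape(H // w, w, W // w, w, C)
    # group the window-grid axes together, then flatten each window
    return x.transpose(0, 2, 1, 3, 4).reshape(-1, w * w, C)

def window_reverse(windows, w, H, W):
    # inverse of window_partition: (num_windows, w*w, C) -> (H, W, C)
    x = windows.reshape(H // w, W // w, w, w, -1)
    return x.transpose(0, 2, 1, 3, 4).reshape(H, W, -1)

x = np.arange(14 * 14 * 3, dtype=np.float32).reshape(14, 14, 3)
windows = window_partition(x, 7)
print(windows.shape)  # (4, 49, 3): HW/w^2 = 4 windows of w^2 = 49 tokens
```

In a real model, attention runs on `windows` before `window_reverse` restores the spatial layout.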
With window-based attention, the computational complexity becomes linear in the input size, _i.e._, each window attention has complexity $\mathcal{O}(w^{4}C)$, and the overall window attention complexity per image is $\mathcal{O}(w^{2}HWC)$. To bridge connections between different windows, shift operations are used between two adjacent transformer layers in Swin[[28](https://arxiv.org/html/2204.08446#bib.bib28)]. As a result, the receptive field of the model is gradually enlarged as layers are stacked in sequence. However, current window-based attention restricts the attention area of the tokens to the corresponding hand-crafted window at each transformer layer. This limits the model's ability to capture far-away contextual information and to learn better feature representations for objects at different scales.
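To make the linear-versus-quadratic contrast concrete, the script below compares the dominant attention terms, $\mathcal{O}((HW)^{2}C)$ for full attention versus $\mathcal{O}(w^{2}HWC)$ for window attention. Constant factors are omitted, and the stage-1 sizes used here are typical Swin-T values, stated as assumptions rather than taken from the paper.

```python
def full_attention_flops(H, W, C):
    # attention over all HW tokens: (HW x HW) score matrix against C-dim tokens
    return (H * W) ** 2 * C

def window_attention_flops(H, W, C, w):
    # HW / w^2 windows, each costing w^4 * C
    return (H * W // (w * w)) * w ** 4 * C

H = W = 56       # stage-1 feature map for a 224x224 input (assumption)
C, w = 96, 7     # Swin-T stage-1 channels and default window size (assumption)
print(window_attention_flops(H, W, C, w) / full_attention_flops(H, W, C))
```

The ratio reduces to $w^{2}/(HW)$, about 1.6% at this resolution, and shrinks further as the input grows.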


Figure 3: The pipeline of the transformer with our proposed varied-size window attention. (a) The overall structure of stacking VSA transformer blocks; (b) the details of the proposed VSA module; (c) the pipeline of the VSA transformer block.
### 3.2 Varied-size window attention

**Base window generation.** Rather than stacking layers with hand-crafted windows to gradually enlarge the receptive field, VSA allows the query tokens to attend to far-away regions and empowers the network with the flexibility to determine the target window size, _i.e._, the attention area, given the specific input data at each layer. VSA requires only minor modifications to the basic structure of backbone networks and serves as an easy-to-implement module that replaces the vanilla window attention in window-based transformers, as shown in Figure[3](https://arxiv.org/html/2204.08446#S3.F3 "Figure 3 ‣ 3.1 Preliminary ‣ 3 Method ‣ VSA: Learning Varied-Size Window Attention in Vision Transformers")(a). Technically, given the input features $X$, VSA first partitions the tokens into several windows $X_{w}$ with the predefined window size $w$, following the baseline methods' routine. We refer to these windows as default windows and obtain the query features from them, _i.e._,
$$Q_{w}=\mathrm{Linear}(X_{w}).\tag{2}$$
**Varied-size window regression module.** To estimate the size and location of the target window for each default window, VSA takes the size and location of the default window as a reference and adopts a varied-size window regression ($\mathrm{VSR}$) module to predict the scale and offset relative to this reference, as shown in Figure[3](https://arxiv.org/html/2204.08446#S3.F3 "Figure 3 ‣ 3.1 Preliminary ‣ 3 Method ‣ VSA: Learning Varied-Size Window Attention in Vision Transformers")(b). The $\mathrm{VSR}$ module consists of an average pooling layer, a LeakyReLU[[38](https://arxiv.org/html/2204.08446#bib.bib38)] activation layer, and a $1\times 1$ convolutional layer with stride 1, in sequence. The kernel size and stride of the pooling layer follow the default window size, _i.e._,
$$S_{w},O_{w}=\mathrm{Conv}\circ\mathrm{LeakyReLU}\circ\mathrm{AveragePool}(X_{w}),\tag{3}$$
where $S_{w}$ and $O_{w}\in\mathcal{R}^{2\times N}$ represent the estimated scales and offsets in the horizontal and vertical directions w.r.t. the default window locations, independently for the $N$ attention heads. The generated windows are referred to as target windows.
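The shape logic of the $\mathrm{VSR}$ module can be sketched as follows. This is a hedged NumPy illustration of the data flow only (the $1\times 1$ convolution is written as a matrix multiply, and all names and sizes are our assumptions, not the released implementation): per default window, the tokens are average-pooled, passed through LeakyReLU, and projected to $4N$ numbers, a 2D scale and a 2D offset per head.

```python
import numpy as np

def vsr(x_windows, weight, n_heads, alpha=0.01):
    # x_windows: (num_windows, w*w, C)
    pooled = x_windows.mean(axis=1)                     # average pool per window
    act = np.where(pooled > 0, pooled, alpha * pooled)  # LeakyReLU
    out = act @ weight                                  # 1x1 conv == per-window matmul
    scales, offsets = out[:, :2 * n_heads], out[:, 2 * n_heads:]
    # (num_windows, 2, N): horizontal/vertical value per attention head
    return scales.reshape(-1, 2, n_heads), offsets.reshape(-1, 2, n_heads)

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 49, 96))        # 4 default windows, w=7, C=96
w_proj = rng.standard_normal((96, 4 * 3))   # N=3 heads (assumption)
S, O = vsr(x, w_proj, n_heads=3)
print(S.shape, O.shape)  # (4, 2, 3) (4, 2, 3)
```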
**Varied-size window-based attention.** We first obtain the key and value tokens $K,V\in\mathcal{R}^{H\times W\times C}$ from the feature map $X$, _i.e._,
$$K,V=\mathrm{Reshape}\circ\mathrm{Linear}(X).\tag{4}$$
The VSA module then uniformly samples $M$ features from each varied-size window over $K$ and $V$, respectively, and obtains $K_{w,v},V_{w,v}\in\mathcal{R}^{M\times N\times C'}$ to serve as the key/value tokens for the query tokens $Q_{w}$. To keep the computational cost the same as window attention, we set $M$ equal to $w\times w$. The sampled tokens $K_{w,v},V_{w,v}$ are then fed into $\mathrm{MHSA}$ together with the queries $Q_{w}$ for attention calculation. However, as the key/value tokens are sampled from different locations than the query tokens, relative position embeddings between the query and key tokens may not describe their spatial relationship well. Following the spirit of CPVT[[8](https://arxiv.org/html/2204.08446#bib.bib8)], we adopt conditional position embedding (CPE) before the MHSA layers to supply spatial relationships to the model, as shown in Figure[3](https://arxiv.org/html/2204.08446#S3.F3 "Figure 3 ‣ 3.1 Preliminary ‣ 3 Method ‣ VSA: Learning Varied-Size Window Attention in Vision Transformers")(c), _i.e._,
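The coordinate side of this sampling step can be sketched as follows. This is our own illustration under stated assumptions (the exact parameterization of scale and offset is not spelled out here): a default window is scaled and shifted into the target window, and $M=w\times w$ sample locations are laid out on a uniform grid inside it. A real implementation would then read the key/value features at these fractional coordinates with bilinear sampling (e.g. a `grid_sample`-style operator); here we only derive the coordinates.

```python
import numpy as np

def target_window_coords(x0, y0, w, sx, sy, ox, oy):
    # default window [x0, x0+w) x [y0, y0+w); (sx, sy) scale, (ox, oy) offset
    cx, cy = x0 + w / 2 + ox, y0 + w / 2 + oy   # shifted window center
    half_w, half_h = w * sx / 2, w * sy / 2     # scaled half-extents
    xs = np.linspace(cx - half_w, cx + half_w, w)
    ys = np.linspace(cy - half_h, cy + half_h, w)
    gx, gy = np.meshgrid(xs, ys)
    return np.stack([gx, gy], axis=-1)          # (w, w, 2) sample points

# a window stretched 2x horizontally and shifted 3 pixels right (toy values)
coords = target_window_coords(0, 0, 7, sx=2.0, sy=1.0, ox=3.0, oy=0.0)
print(coords.shape)  # (7, 7, 2)
```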
$$X=Z^{l-1}+\mathrm{CPE}(Z^{l-1}),\tag{5}$$
where $Z^{l-1}$ is the feature from the previous transformer block and $\mathrm{CPE}$ is implemented by a depth-wise convolution layer with kernel size equal to the window size, _i.e._, $7\times 7$ by default, and stride 1.
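The CPE step above can be written out directly. Below is a naive NumPy sketch of a depth-wise convolution with zero padding whose output is added back to the input, matching Eq. (5); it is an illustration of the CPVT-style formulation under our own assumptions, not the authors' code, and a real model would use an optimized conv layer.

```python
import numpy as np

def cpe(z, kernels):
    # z: (H, W, C); kernels: (k, k, C) -- one k x k filter per channel
    k = kernels.shape[0]
    p = k // 2
    zp = np.pad(z, ((p, p), (p, p), (0, 0)))  # zero padding preserves H, W
    H, W, C = z.shape
    out = np.empty_like(z)
    for i in range(H):
        for j in range(W):
            patch = zp[i:i + k, j:j + k]           # (k, k, C) receptive field
            out[i, j] = (patch * kernels).sum(axis=(0, 1))
    return z + out                                 # X = Z + CPE(Z)

z = np.ones((14, 14, 4))
feat = cpe(z, np.zeros((7, 7, 4)))  # zero filters: CPE(Z) = 0, so feat == z
print(feat.shape)  # (14, 14, 4)
```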
### 3.3 Computation complexity analysis

The extra computations caused by VSA come from the $\mathrm{CPE}$ and $\mathrm{VSR}$ modules, while the other parts, including the window-based multi-head self-attention and the FFN network, are exactly the same as in the baseline models. Given the input features $X\in\mathcal{R}^{H\times W\times C}$, VSA first uses a depth-wise convolutional layer with $7\times 7$ kernels to generate the CPE, which brings $\mathcal{O}(49\cdot HWC)$ extra computations. In the $\mathrm{VSR}$ module, we first employ an average pooling layer with kernel size and stride equal to the window size to aggregate features from the default windows, whose complexity is $\mathcal{O}(HWC)$. The following activation function introduces no extra computations, and the last convolutional layer with kernel size $1\times 1$ takes $X_{pool}\in\mathcal{R}^{\frac{H}{w}\times\frac{W}{w}\times C}$ as input and estimates the scales $S_{w}$ and offsets $O_{w}$, both of which belong to $\mathcal{R}^{2\times N}$. Thus, the computational complexity of this convolutional layer is $\mathcal{O}(\frac{4N}{w^{2}}HWC)$, where $N$ is the number of attention heads in the transformer layers and $w$ is the window size. After obtaining the scales and offsets, we transform the default windows into the varied-size windows and uniformly sample $w\times w$ tokens within each target window. The sampling complexity for each window is $w^{2}\times 4\times C$, so the total complexity of the sampling operation is $\mathcal{O}(4\cdot HWC)$. Thus, the total extra computation brought by VSA is $\mathcal{O}\{(54+\frac{4N}{w^{2}})HWC\}$, which is far less ($\leq 5\%$) than the total computational cost of the baseline models, considering that the complexity of the FFN alone is $\mathcal{O}(2\alpha HWC^{2})$ and $C$ is always larger than 96.
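As a back-of-the-envelope check of this overhead, the script below counts operations per $HWC$ unit: VSA adds $54+4N/w^{2}$ (49 for CPE, 1 for pooling, $4N/w^{2}$ for the $1\times 1$ convolution, 4 for sampling), while a baseline block costs roughly $2\alpha C$ for the FFN plus, by our assumption, $4C$ for the QKV/output projections and $2w^{2}$ for the attention matmuls. The stage-1 constants below are typical Swin-T values, used as assumptions.

```python
# Swin-T stage-1 values (assumption): N heads, window size w, channels C,
# FFN expansion ratio alpha.
N, w, C, alpha = 3, 7, 96, 4

extra = 54 + 4 * N / w**2                    # VSA overhead per HWC unit
baseline = 2 * alpha * C + 4 * C + 2 * w**2  # FFN + projections + attention
print(round(extra / baseline, 3))
```

Under these assumptions the ratio comes out around 4%, consistent with the stated ≤5% bound; deeper stages with larger $C$ only shrink it.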
4 Experiments
-------------

### 4.1 Implementation details
We evaluate the performance of the proposed VSA based on Swin[[28](https://arxiv.org/html/2204.08446#bib.bib28)] and ViTAEv2[[49](https://arxiv.org/html/2204.08446#bib.bib49)]. The former is a pure transformer model with shifted windows between adjacent layers, while the latter improves the transformer by introducing the convolution inductive bias, jointly modeling long- and short-term dependencies. In this paper, we adopt the full-window version of ViTAEv2 as the baseline. All models are trained for 300 epochs from scratch on the standard ImageNet-1k[[12](https://arxiv.org/html/2204.08446#bib.bib12)] dataset with a 224×224 input resolution. We follow the hyper-parameter settings of the baseline methods to train the variants with VSA, _e.g_., we use the AdamW[[29](https://arxiv.org/html/2204.08446#bib.bib29)] optimizer with a cosine learning rate schedule. A 20-epoch linear warm-up is used following Swin[[28](https://arxiv.org/html/2204.08446#bib.bib28)] to stabilize training. The initial learning rate is set to 0.001 for a batch size of 1024. The data augmentation is the same as in [[28](https://arxiv.org/html/2204.08446#bib.bib28)] and [[49](https://arxiv.org/html/2204.08446#bib.bib49)], _i.e_., random cropping, auto-augmentation[[24](https://arxiv.org/html/2204.08446#bib.bib24)], CutMix[[46](https://arxiv.org/html/2204.08446#bib.bib46)], MixUp[[47](https://arxiv.org/html/2204.08446#bib.bib47)], and random erasing are used to augment the input images. Besides, label smoothing with a weight of 0.1 is adopted. It is also noteworthy that there is no shifted window mechanism in the models with VSA, since VSA enables cross-window information exchange among overlapped varied-size windows.
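The learning-rate recipe above (20-epoch linear warm-up, then cosine decay over 300 epochs, base rate 0.001) can be sketched as a small function. The warm-up start value of 1e-5 is an assumption; the base rate and epoch counts are from the text.

```python
import math

def lr_at_epoch(epoch, base_lr=1e-3, warmup=20, total=300, warmup_start=1e-5):
    """Linear warm-up for `warmup` epochs, then cosine decay to zero.
    warmup_start is an assumed value; base_lr/epoch counts follow the text."""
    if epoch < warmup:
        # linear ramp from warmup_start to base_lr
        return warmup_start + (base_lr - warmup_start) * epoch / warmup
    # cosine decay from base_lr toward 0 over the remaining epochs
    t = (epoch - warmup) / (total - warmup)
    return 0.5 * base_lr * (1.0 + math.cos(math.pi * t))
```

At the schedule midpoint of the cosine phase the rate is exactly half the base rate, and it approaches zero at epoch 300.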

### 4.2 Image Classification on ImageNet

We evaluate the classification performance of different models on the ImageNet[[12](https://arxiv.org/html/2204.08446#bib.bib12)] validation set. As shown in Table[1](https://arxiv.org/html/2204.08446#S4.T1 "Table 1 ‣ 4.2 Image Classification on ImageNet ‣ 4 Experiments ‣ VSA: Learning Varied-Size Window Attention in Vision Transformers"), the proposed VSA boosts the Top-1 accuracy of the Swin transformer by 1.1% absolute, _i.e_., from 81.2% to 82.3%, even without the shifted window mechanism. This indicates that VSA can flexibly determine the appropriate window sizes and locations given the input features, allowing tokens to attend to far-away but relevant tokens outside the default windows, extract rich context, and learn better feature representations. Besides, Swin-T with VSA obtains performance comparable to MSG-T[[16](https://arxiv.org/html/2204.08446#bib.bib16)], which adopts extra messenger tokens for feature exchange across windows, _i.e_., 82.3% vs. 82.4%, demonstrating that our varied-size window mechanism enables sufficient feature exchange across windows without extra tokens. For ViTAEv2[[49](https://arxiv.org/html/2204.08446#bib.bib49)], ViTAEv2-S with VSA obtains 82.7% (+0.5%) classification accuracy with only 20M parameters, demonstrating that the proposed varied-size window attention is compatible not only with transformers using vanilla window attention but also with those using convolutions for feature exchange across windows.
Table 1: Image classification results on ImageNet. ‘Input Size’ denotes the image size used for training and test.
| Model | Params (M) | FLOPs (G) | Input Size | ImageNet Top-1 | ImageNet Top-5 | Real Top-1 |
| --- | --- | --- | --- | --- | --- | --- |
| DeiT-S [30] | 22 | 4.6 | 224 | 81.2 | 95.4 | 86.8 |
| PVT-S [33] | 25 | 3.8 | 224 | 79.8 | - | - |
| ViL-S [48] | 25 | 4.9 | 224 | 82.4 | - | - |
| PiT-S [21] | 24 | 4.8 | 224 | 80.9 | - | - |
| TNT-S [18] | 24 | 5.2 | 224 | 81.3 | 95.6 | - |
| MSG-T [16] | 25 | 3.8 | 224 | 82.4 | - | - |
| Twins-PCPVT-S [7] | 24 | 3.8 | 224 | 81.2 | - | - |
| Twins-SVT-S [7] | 24 | 2.9 | 224 | 81.7 | - | - |
| T2T-ViT-14 [45] | 22 | 5.2 | 224 | 81.5 | 95.7 | 86.8 |
| Swin-T [28] | 29 | 4.5 | 224 | 81.2 | - | - |
| Swin-T+VSA | 29 | 4.6 | 224 | 82.3 | 96.1 | 87.5 |
| ViTAEv2-S¹ [49] | 20 | 5.4 | 224 | 82.2 | 96.1 | 87.5 |
| ViTAEv2-S¹+VSA | 20 | 5.6 | 224 | 82.7 | 96.3 | 87.8 |
| Swin-T [28] | 29 | 14.2 | 384 | 81.4 | 95.4 | 86.4 |
| Swin-T+VSA | 29 | 14.9 | 384 | 83.2 | 96.5 | 88.0 |
| Swin-T [28] | 29 | 23.2 | 480 | 81.5 | 95.7 | 86.3 |
| Swin-T+VSA | 29 | 24.0 | 480 | 83.4 | 96.7 | 88.0 |
| PiT-B [21] | 74 | 12.5 | 224 | 82.0 | - | - |
| TNT-B [18] | 66 | 14.1 | 224 | 82.8 | 96.3 | - |
| Focal-B [42] | 90 | 16.0 | 224 | 83.8 | - | - |
| ViL-B [48] | 56 | 13.4 | 224 | 83.7 | - | - |
| MSG-S [16] | 56 | 8.4 | 224 | 83.4 | - | - |
| PVTv2-B5 [32] | 82 | 11.8 | 224 | 83.8 | - | - |
| Swin-S [28] | 50 | 8.7 | 224 | 83.0 | - | - |
| Swin-S+VSA | 50 | 8.9 | 224 | 83.8 | 96.8 | 88.5 |
| Swin-B [28] | 88 | 15.4 | 224 | 83.3 | - | 88.0 |
| Swin-B+VSA | 88 | 16.0 | 224 | 83.9 | 96.7 | 88.6 |

¹ The full window version.
When scaling the input images to higher resolutions, _i.e_., from 224×224 to 384×384 and 480×480, the performance gains from VSA become larger owing to its ability to learn adaptive target window sizes from data. Specifically, the gain brought by VSA over Swin-T increases from 1.1% to 1.8% absolute accuracy when scaling the input size from 224 to 384. For the 480×480 input resolution, the gain further increases to 1.9%, while the Swin transformer benefits from the higher resolution only marginally (_i.e_., 0.2%). The reason is that the fixed-size window attention in Swin limits the attention region at each transformer layer, which brings difficulty in handling objects at different scales. In contrast, VSA can learn to vary the window size to adapt to the objects and capture rich contextual information from different attention heads at each layer, which is beneficial for learning powerful object feature representations.
### 4.3 Object detection and instance segmentation on MS COCO
Settings. We evaluate the backbone models on the object detection and instance segmentation tasks on the MS COCO[[26](https://arxiv.org/html/2204.08446#bib.bib26)] dataset, which contains 118K training, 5K validation, and 20K test images with full annotations. We adopt the models trained on ImageNet with the 224×224 input resolution as backbones and use three typical object detection frameworks, _i.e_., the two-stage frameworks Mask RCNN[[19](https://arxiv.org/html/2204.08446#bib.bib19)] and Cascade RCNN[[2](https://arxiv.org/html/2204.08446#bib.bib2), [3](https://arxiv.org/html/2204.08446#bib.bib3)], and the one-stage framework RetinaNet[[25](https://arxiv.org/html/2204.08446#bib.bib25)]. We follow the common practice in mmdetection[[5](https://arxiv.org/html/2204.08446#bib.bib5)], _i.e_., multi-scale training with an AdamW optimizer and a batch size of 16. The initial learning rate is 0.0001 and the weight decay is 0.05. We adopt both 1× (12 epochs) and 3× (36 epochs) training schedules for the Mask RCNN framework to evaluate the object detection performance w.r.t. different backbones. For RetinaNet and Cascade RCNN, the models are trained with the 1× and 3× schedules, respectively. Results under other settings are reported in the supplementary material.
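The settings above can be written down as a hypothetical mmdetection-style config fragment. Only the optimizer choice, batch size of 16, and schedule lengths are stated in the text; the per-GPU sample count and the multi-scale resize range are assumptions following the common MS COCO recipe.

```python
# Hypothetical mmdetection-style fragment mirroring the settings above.
# samples_per_gpu and the resize range are assumptions, not from the paper.
optimizer = dict(type="AdamW", lr=1e-4, weight_decay=0.05)
data = dict(samples_per_gpu=2)  # assumed 8 GPUs x 2 images = batch size 16
train_resize = dict(
    type="Resize",
    img_scale=[(1333, 480), (1333, 800)],  # assumed multi-scale range
    multiscale_mode="range",
    keep_ratio=True,
)
runner_1x = dict(type="EpochBasedRunner", max_epochs=12)  # 1x schedule
runner_3x = dict(type="EpochBasedRunner", max_epochs=36)  # 3x schedule
```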
Table 2: Object detection results on MS COCO with Mask RCNN.
¹ The full window version.
Results. The results of the baseline models and those with VSA on the MS COCO dataset with Mask RCNN, RetinaNet, and Cascade RCNN are reported in Tables 2, 3, and 4, respectively. Compared to the baseline methods Swin-T[[28](https://arxiv.org/html/2204.08446#bib.bib28)] and ViTAEv2[[49](https://arxiv.org/html/2204.08446#bib.bib49)], their VSA variants obtain better performance on both the object detection and instance segmentation tasks with all detection frameworks, _e.g_., VSA brings gains of 1.9 and 2.4 mAP$^{bb}$ for Swin-T and ViTAEv2-S with the Mask RCNN 1× training schedule, confirming that VSA learns better object features than the vanilla window attention, since the varied-size windows can better deal with objects at different scales. Besides, the longer training schedule (3×) also sees a significant performance gain from VSA over the vanilla window attention: the gains of VSA on Swin-T and ViTAEv2 reach 1.5 mAP$^{bb}$ and 3.4 mAP$^{bb}$, respectively. We attribute this to the better attention regions learned by the VSR module under longer training.
Similar conclusions can be drawn when using RetinaNet[[25](https://arxiv.org/html/2204.08446#bib.bib25)] and Cascade RCNN[[2](https://arxiv.org/html/2204.08446#bib.bib2)] as the detection frameworks, where VSA brings gains of at least 2.0 and 1.2 mAP$^{bb}$, respectively. It is also noteworthy that the performance gains on ViTAEv2 are more significant than those on Swin-T. This is because there is no shifted window mechanism in ViTAEv2, so the ability to model long-range dependencies via attention is constrained within each window. In contrast, the varied-size window attention in VSA empowers ViTAEv2 to model such dependencies and efficiently exchange rich contextual information across windows.
Table 3: Object detection results on MS COCO[[26](https://arxiv.org/html/2204.08446#bib.bib26)] with RetinaNet[[25](https://arxiv.org/html/2204.08446#bib.bib25)].
Table 4: Object detection results on MS COCO[[26](https://arxiv.org/html/2204.08446#bib.bib26)] with Cascade RCNN[[2](https://arxiv.org/html/2204.08446#bib.bib2)].
### 4.4 Semantic segmentation on Cityscapes
Settings. The Cityscapes[[9](https://arxiv.org/html/2204.08446#bib.bib9)] dataset is adopted to evaluate the performance of different backbones for semantic segmentation. The dataset contains over 5K well-annotated images of street scenes from 50 different cities. UperNet[[37](https://arxiv.org/html/2204.08446#bib.bib37)] is adopted as the segmentation framework. The training and evaluation of the models follow the common practice, _i.e_., using the Adam optimizer with a polynomial learning rate schedule. The models are trained for 40k and 80k iterations separately, with both 512×1024 and 769×769 input resolutions.
Table 5: Semantic segmentation results on Cityscapes[[9](https://arxiv.org/html/2204.08446#bib.bib9)] with UperNet[[37](https://arxiv.org/html/2204.08446#bib.bib37)]. * denotes results are obtained with multi-scale test.
Results. The results are available in Table[5](https://arxiv.org/html/2204.08446#S4.T5 "Table 5 ‣ 4.4 Semantic segmentation on Cityscapes ‣ 4 Experiments ‣ VSA: Learning Varied-Size Window Attention in Vision Transformers"). With the 512×1024 input size, VSA brings over 1.3 mIoU and 1.4 mAcc gains for both Swin-T[[28](https://arxiv.org/html/2204.08446#bib.bib28)] and ViTAEv2-S[[49](https://arxiv.org/html/2204.08446#bib.bib49)], with either the 40k or the 80k training schedule. These observations hold with 769×769 input images, where VSA brings over 1.0 mIoU and 1.0 mAcc gains for both models. These results validate the effectiveness of the proposed VSA in improving the baseline models’ performance on semantic segmentation tasks. With more training iterations (80k), the performance gain of VSA over Swin-T increases from 1.9 to 2.2 mIoU with the 512×1024 input and from 1.7 to 2.0 mIoU with the 769×769 input, owing to the better attention regions learned by the VSR module. Besides, with multi-scale testing, the performance of VSA further improves, indicating that VSA can implicitly capture multi-scale features, as the target windows have different scales and locations in each head.
### 4.5 Ablation Study
We adopt Swin-T[[28](https://arxiv.org/html/2204.08446#bib.bib28)] with VSA for the ablation studies. The models are trained for 300 epochs with the AdamW optimizer. To find the optimal configuration of VSA, we gradually substitute the window attention in different stages of Swin with VSA. The results are shown in Table 6, where ✓ indicates that VSA replaces the vanilla window attention. The performance gradually improves as VSA is used in more stages and reaches the best when VSA is used in all four stages, at the cost of only a few extra parameters and FLOPs. Therefore, we use VSA at all stages as the default setting in this paper.
| VSA at stage 1 | stage 2 | stage 3 | stage 4 | FLOPs (G) | Params (M) | Acc. (%) |
| --- | --- | --- | --- | --- | --- | --- |
|  |  |  |  | 4.5 | 28.2 | 81.2 |
| ✓ |  |  |  | 4.5 | 28.3 | 81.4 |
| ✓ | ✓ |  |  | 4.6 | 28.7 | 81.9 |
| ✓ | ✓ | ✓ |  | 4.6 | 28.7 | 82.1 |
| ✓ | ✓ | ✓ | ✓ | 4.6 | 28.7 | 82.3 |
Table 6: The ablation study of using VSA in each stage of Swin-T[[28](https://arxiv.org/html/2204.08446#bib.bib28)].
Table 7: The ablation study of each component in VSA based on Swin-T[[28](https://arxiv.org/html/2204.08446#bib.bib28)].
We take Swin-T as the baseline and further validate the contribution of each component in VSA. The results are available in Table[7](https://arxiv.org/html/2204.08446#S4.T7 "Table 7 ‣ 4.5 Ablation Study ‣ 4 Experiments ‣ VSA: Learning Varied-Size Window Attention in Vision Transformers"), where ✓ denotes using the specific component. ‘Shift’ is short for the shifted window mechanism. With only ‘Shift’ marked, the model becomes the baseline Swin-T. As can be seen, the model with ‘VSR’ alone outperforms Swin-T by 0.3% absolute accuracy, implying (1) the effectiveness of varied-size windows for cross-window information exchange and (2) the advantage of adapting the window sizes and locations, _i.e_., attention regions, to objects at different scales. Besides, using both CPE and VSR in VSA further boosts the accuracy to 82.3%, outperforming the ‘CPE’ + ‘Shift’ variant by 0.6%. This indicates that CPE is more compatible with varied-size windows, as it provides local positional information. It is also noteworthy that the shifted-window mechanism is unnecessary in VSA according to the last two rows, confirming that varied-size windows guarantee feature exchange across overlapped windows.
### 4.6 Throughputs & GPU memory comparison
Table 8: Throughput & GPU memory comparison with VSA.
| Model | Throughput on A100 (fps) | Throughput on V100 (fps) | Memory (G) |
| --- | --- | --- | --- |
| Swin-T | 1557 | 679 | 15.8 |
| Swin-T+VSA | 1297 | 595 | 16.1 |
| Swin-S | 961 | 401 | 23.0 |
| Swin-S+VSA | 769 | 352 | 23.5 |
We also evaluate the models’ inference throughput and training GPU memory consumption with a batch size of 128 and a 224×224 input resolution. We first run each model 20 times as warm-up and report the average throughput over the subsequent 30 runs. All experiments are conducted on NVIDIA A100 and V100 GPUs. As shown in Table[8](https://arxiv.org/html/2204.08446#S4.T8 "Table 8 ‣ 4.6 Throughputs & GPU memory comparison ‣ 4 Experiments ‣ VSA: Learning Varied-Size Window Attention in Vision Transformers"), VSA slows down the Swin models by about 12%~17% on the different hardware platforms and consumes 2% more GPU memory, while delivering much better performance on both classification and downstream dense prediction tasks. The slow-down and extra memory consumption are mainly due to the sub-optimal implementation of the sampling operation compared with matrix multiplication in the PyTorch framework, where the latter is thoroughly optimized with cuBLAS. Fusing the sampling operation with the following linear projection in an optimized CUDA kernel can alleviate the speed concern, which we leave as future work to better implement the proposed VSR module.
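The timing protocol above (discard 20 warm-up runs, then average over 30 timed runs) can be sketched as a small harness. Here a CPU stand-in workload replaces the model forward pass; for GPU models one would additionally synchronize the device before reading the clock.

```python
import time

def throughput(fn, warmup=20, runs=30):
    """Timing protocol: `warmup` untimed runs, then the mean rate over
    `runs` timed executions of fn (executions per second)."""
    for _ in range(warmup):
        fn()
    start = time.perf_counter()
    for _ in range(runs):
        fn()
    elapsed = time.perf_counter() - start
    return runs / elapsed

# Stand-in workload instead of a model forward pass.
fps = throughput(lambda: sum(i * i for i in range(10_000)))
```

The warm-up runs matter in practice: the first iterations pay for allocator growth, kernel compilation, and cache warming, and would otherwise bias the average.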
### 4.7 Visual inspection and analysis

Figure 4: Visualization of the varied-size windows generated by VSA from ImageNet (a) and MS COCO (b). The t-SNE analysis is also provided in (c).
Visualization of target windows. We visualize the default windows used in Swin-T[[28](https://arxiv.org/html/2204.08446#bib.bib28)] and the varied-size windows generated by VSA on images from the ImageNet[[12](https://arxiv.org/html/2204.08446#bib.bib12)] and MS COCO[[26](https://arxiv.org/html/2204.08446#bib.bib26)] datasets to see where VSA learns to attend for different images. The results are shown in Figure[4](https://arxiv.org/html/2204.08446#S4.F4 "Figure 4 ‣ 4.7 Visual inspection and analysis ‣ 4 Experiments ‣ VSA: Learning Varied-Size Window Attention in Vision Transformers"). As shown in Figure 4(a), the windows generated by VSA better cover the target objects, while the fixed-size windows adopted in Swin capture only part of the targets. Figure 4(b) further shows that the windows generated by different heads in VSA have different sizes and locations and focus on different parts of the targets, which helps capture rich contextual information and learn better object feature representations. Besides, the windows that cover the target objects vary more in size and location than those covering the background, _e.g_., the windows on the zebra and elephant (the blue, red, orange, pink, _etc_.) vary significantly, while those in the background vary less. In addition, the target windows overlap with each other, enabling abundant cross-window feature exchange and making it possible to drop the shifted window mechanism in VSA.
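The window behavior visualized above can be illustrated with a small numerical sketch (hypothetical helper names, not the paper's implementation): each head scales and shifts its default window and then uniformly samples a fixed grid of locations inside the target window with bilinear interpolation, so the token count per window is unchanged regardless of the window's extent.

```python
import numpy as np

def target_window_coords(center, win=7, scale=(1.0, 1.0), offset=(0.0, 0.0)):
    """(win, win, 2) sampling coordinates for one head's target window:
    the default win x win layout, scaled and shifted per head."""
    base = np.arange(win) - (win - 1) / 2.0   # default window layout
    ys = base * scale[0] + center[0] + offset[0]
    xs = base * scale[1] + center[1] + offset[1]
    return np.stack(np.meshgrid(ys, xs, indexing="ij"), axis=-1)

def bilinear_sample(feat, coords):
    """feat: (H, W, C); coords: (..., 2) in token units, border-clamped."""
    h, w, _ = feat.shape
    y = np.clip(coords[..., 0], 0, h - 1)
    x = np.clip(coords[..., 1], 0, w - 1)
    y0, x0 = np.floor(y).astype(int), np.floor(x).astype(int)
    y1, x1 = np.minimum(y0 + 1, h - 1), np.minimum(x0 + 1, w - 1)
    wy, wx = (y - y0)[..., None], (x - x0)[..., None]
    return ((1 - wy) * (1 - wx) * feat[y0, x0] + (1 - wy) * wx * feat[y0, x1]
            + wy * (1 - wx) * feat[y1, x0] + wy * wx * feat[y1, x1])

feat = np.arange(64, dtype=float).reshape(8, 8, 1)
# scale=1, offset=0 reproduces the default 7x7 window; scale=2 doubles the
# window's extent while still sampling only 7x7 tokens from it.
default = bilinear_sample(feat, target_window_coords((3, 3)))
enlarged = bilinear_sample(feat, target_window_coords((3, 3), scale=(2.0, 2.0)))
```

This is why an enlarged window sees a wider context at no extra attention cost: the attention still operates on a 7×7 grid of (interpolated) tokens.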
t-SNE analysis. We further use t-SNE to analyze the features generated by Swin-T models with and without VSA. We randomly select 20 categories from the ImageNet dataset and use t-SNE to visualize the extracted features. As shown in Figure[4](https://arxiv.org/html/2204.08446#S4.F4 "Figure 4 ‣ 4.7 Visual inspection and analysis ‣ 4 Experiments ‣ VSA: Learning Varied-Size Window Attention in Vision Transformers")(c), the features generated by Swin-T with VSA are better clustered, demonstrating that VSA can help the models deal with objects of different sizes and learn more discriminative features.
5 Limitation and Discussion
---------------------------
Although VSA has been proven efficient in dealing with images of varied resolutions and effective on various vision tasks, including classification, detection, instance segmentation, and semantic segmentation, we only evaluate VSA with Swin[[28](https://arxiv.org/html/2204.08446#bib.bib28)] and ViTAEv2[[49](https://arxiv.org/html/2204.08446#bib.bib49)] in this paper. It will be our future work to explore the use of VSA in other transformers with window-based attention, _e.g_., CSwin[[13](https://arxiv.org/html/2204.08446#bib.bib13)] and Pale[[35](https://arxiv.org/html/2204.08446#bib.bib35)], which use cross-shaped attention. Besides, to keep the same computational cost as the vanilla window attention, we only sample sparse tokens from each target window, _i.e_., the number of sampled tokens equals the default window size, which may ignore some details when the window becomes extremely large. Although the missing details may be complemented from other windows via feature exchange, a more efficient sampling strategy can be explored in future studies.
6 Conclusion
------------
This paper presents a novel varied-size window attention (VSA), _i.e_., an easy-to-implement module that can boost the performance of representative window-based vision transformers such as Swin on various vision tasks, including image classification, object detection, instance segmentation, and semantic segmentation. By estimating the appropriate window size and location for each image in a data-driven manner, VSA enables the transformers to attend to far-away yet relevant tokens with negligible extra computational cost, thereby modeling long-term dependencies among tokens, capturing rich context from diverse windows, and promoting information exchange among overlapped windows. In the future, we will investigate the use of VSA in more attention types, including cross-shaped windows, axial attention, and others, as long as they can be parameterized w.r.t. size (_e.g_., height, width, or radius), rotation angle, and position. We hope that this study can provide useful insight to the community in developing more advanced attention mechanisms as well as vision transformers.
Acknowledgement Mr. Qiming Zhang, Mr. Yufei Xu, and Dr. Jing Zhang are supported by ARC FL-170100117.
References
----------
* [1] Beyer, L., Hénaff, O.J., Kolesnikov, A., Zhai, X., Oord, A.v.d.: Are we done with imagenet? arXiv preprint arXiv:2006.07159 (2020)
|
| 230 |
+
* [2] Cai, Z., Vasconcelos, N.: Cascade r-cnn: Delving into high quality object detection. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). pp. 6154–6162 (2018)
|
| 231 |
+
* [3] Cai, Z., Vasconcelos, N.: Cascade r-cnn: High quality object detection and instance segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence (2019)
|
| 232 |
+
* [4] Chen, C.F., Panda, R., Fan, Q.: Regionvit: Regional-to-local attention for vision transformers. In: International Conference on Learning Representations (2022)
|
| 233 |
+
* [5] Chen, K., Wang, J., Pang, J., Cao, Y., Xiong, Y., Li, X., Sun, S., Feng, W., Liu, Z., Xu, J., et al.: Mmdetection: Open mmlab detection toolbox and benchmark. arXiv preprint arXiv:1906.07155 (2019)
|
| 234 |
+
* [6] Chen, Z., Zhu, Y., Zhao, C., Hu, G., Zeng, W., Wang, J., Tang, M.: Dpt: Deformable patch-based transformer for visual recognition. In: Proceedings of the 29th ACM International Conference on Multimedia. pp. 2899–2907 (2021)
|
| 235 |
+
* [7] Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C.: Twins: Revisiting the design of spatial attention in vision transformers. In: Advances in Neural Information Processing Systems (2021)
|
| 236 |
+
* [8] Chu, X., Tian, Z., Zhang, B., Wang, X., Wei, X., Xia, H., Shen, C.: Conditional positional encodings for vision transformers. arXiv preprint arXiv:2102.10882 (2021)
|
| 237 |
+
* [9] Cordts, M., Omran, M., Ramos, S., Rehfeld, T., Enzweiler, M., Benenson, R., Franke, U., Roth, S., Schiele, B.: The cityscapes dataset for semantic urban scene understanding. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2016)
|
| 238 |
+
* [10] Dai, J., Qi, H., Xiong, Y., Li, Y., Zhang, G., Hu, H., Wei, Y.: Deformable convolutional networks. In: Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV). pp. 764–773 (2017)
|
| 239 |
+
* [11] Dai, Z., Liu, H., Le, Q.V., Tan, M.: Coatnet: Marrying convolution and attention for all data sizes. In: Advances in Neural Information Processing Systems (2021)
|
| 240 |
+
* [12] Deng, J., Dong, W., Socher, R., Li, L.J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). pp. 248–255. Ieee (2009)
|
| 241 |
+
* [13] Dong, X., Bao, J., Chen, D., Zhang, W., Yu, N., Yuan, L., Chen, D., Guo, B.: Cswin transformer: A general vision transformer backbone with cross-shaped windows. arXiv preprint arXiv:2107.00652 (2021)
|
| 242 |
+
* [14] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., Houlsby, N.: An image is worth 16x16 words: Transformers for image recognition at scale. In: International Conference on Learning Representations (2021)
|
| 243 |
+
* [15] El-Nouby, A., Touvron, H., Caron, M., Bojanowski, P., Douze, M., Joulin, A., Laptev, I., Neverova, N., Synnaeve, G., Verbeek, J., et al.: Xcit: Cross-covariance image transformers. arXiv preprint arXiv:2106.09681 (2021)
|
| 244 |
+
* [16] Fang, J., Xie, L., Wang, X., Zhang, X., Liu, W., Tian, Q.: Msg-transformer: Exchanging local spatial information by manipulating messenger tokens. arXiv preprint arXiv:2105.15168 (2021)
|
| 245 |
+
* [17] Guo, J., Han, K., Wu, H., Xu, C., Tang, Y., Xu, C., Wang, Y.: Cmt: Convolutional neural networks meet vision transformers. arXiv preprint arXiv:2107.06263 (2021)
|
| 246 |
+
* [18] Han, K., Xiao, A., Wu, E., Guo, J., Xu, C., Wang, Y.: Transformer in transformer. Advances in Neural Information Processing Systems 34 (2021)
|
| 247 |
+
* [19] He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask r-cnn. In: Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV). pp. 2961–2969 (2017)
|
| 248 |
+
* [20] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). pp. 770–778 (2016)
|
| 249 |
+
* [21] Heo, B., Yun, S., Han, D., Chun, S., Choe, J., Oh, S.J.: Rethinking spatial dimensions of vision transformers. In: Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) (2021)
|
| 250 |
+
* [22] Huang, Z., Ben, Y., Luo, G., Cheng, P., Yu, G., Fu, B.: Shuffle transformer: Rethinking spatial shuffle for vision transformer. arXiv preprint arXiv:2106.03650 (2021)
|
| 251 |
+
* [23] Jing, Y., Liu, X., Ding, Y., Wang, X., Ding, E., Song, M., Wen, S.: Dynamic instance normalization for arbitrary style transfer. In: AAAI (2020)
|
| 252 |
+
* [24] Lin, C., Guo, M., Li, C., Yuan, X., Wu, W., Yan, J., Lin, D., Ouyang, W.: Online hyper-parameter learning for auto-augmentation strategy. In: Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV). pp. 6579–6588 (2019)
|
| 253 |
+
* [25] Lin, T.Y., Goyal, P., Girshick, R., He, K., Dollár, P.: Focal loss for dense object detection. In: Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV). pp. 2980–2988 (2017)
|
| 254 |
+
* [26] Lin, T.Y., Maire, M., Belongie, S., Hays, J., Perona, P., Ramanan, D., Dollár, P., Zitnick, C.L.: Microsoft coco: Common objects in context. In: Proceedings of the European Conference on Computer Vision (ECCV). pp. 740–755. Springer (2014)
|
| 255 |
+
* [27] Liu, Z., Hu, H., Lin, Y., Yao, Z., Xie, Z., Wei, Y., Ning, J., Cao, Y., Zhang, Z., Dong, L., et al.: Swin transformer v2: Scaling up capacity and resolution. arXiv preprint arXiv:2111.09883 (2021)
|
| 256 |
+
* [28] Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B.: Swin transformer: Hierarchical vision transformer using shifted windows. In: Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV). pp. 10012–10022 (2021)
|
| 257 |
+
* [29] Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. In: International Conference on Learning Representations (2018)
|
| 258 |
+
* [30] Touvron, H., Cord, M., Douze, M., Massa, F., Sablayrolles, A., Jegou, H.: Training data-efficient image transformers; distillation through attention. In: International Conference on Machine Learning. PMLR (2021)
|
| 259 |
+
* [31] Wang, P., Wang, X., Wang, F., Lin, M., Chang, S., Xie, W., Li, H., Jin, R.: Kvt: k-nn attention for boosting vision transformers. arXiv preprint arXiv:2106.00515 (2021)
|
| 260 |
+
* [32] Wang, W., Xie, E., Li, X., Fan, D.P., Song, K., Liang, D., Lu, T., Luo, P., Shao, L.: Pvtv2: Improved baselines with pyramid vision transformer (2021)
|
| 261 |
+
* [33] Wang, W., Xie, E., Li, X., Fan, D.P., Song, K., Liang, D., Lu, T., Luo, P., Shao, L.: Pyramid vision transformer: A versatile backbone for dense prediction without convolutions. In: Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV). pp. 568–578 (2021)
|
| 262 |
+
* [34] Wang, W., Yao, L., Chen, L., Lin, B., Cai, D., He, X., Liu, W.: Crossformer: A versatile vision transformer hinging on cross-scale attention. In: International Conference on Learning Representations (2022)
|
| 263 |
+
* [35] Wu, S., Wu, T., Tan, H., Guo, G.: Pale transformer: A general vision transformer backbone with pale-shaped attention. In: Proceedings of the AAAI Conference on Artificial Intelligence (2022)
|
| 264 |
+
* [36] Xia, Z., Pan, X., Song, S., Li, L.E., Huang, G.: Vision transformer with deformable attention (2022)
* [37] Xiao, T., Liu, Y., Zhou, B., Jiang, Y., Sun, J.: Unified perceptual parsing for scene understanding. In: Proceedings of the European Conference on Computer Vision (ECCV). pp. 418–434 (2018)
* [38] Xu, B., Wang, N., Chen, T., Li, M.: Empirical evaluation of rectified activations in convolutional network. arXiv preprint arXiv:1505.00853 (2015)
* [39] Xu, Y., Zhang, J., Zhang, Q., Tao, D.: ViTPose: Simple vision transformer baselines for human pose estimation. arXiv preprint arXiv:2204.12484 (2022)
* [40] Xu, Y., Zhang, Q., Zhang, J., Tao, D.: ViTAE: Vision transformer advanced by exploring intrinsic inductive bias. In: Advances in Neural Information Processing Systems (2021)
* [41] Yan, H., Li, Z., Li, W., Wang, C., Wu, M., Zhang, C.: ConTNet: Why not use convolution and transformer at the same time? arXiv preprint arXiv:2104.13497 (2021)
* [42] Yang, J., Li, C., Zhang, P., Dai, X., Xiao, B., Yuan, L., Gao, J.: Focal attention for long-range interactions in vision transformers. In: Advances in Neural Information Processing Systems (2021)
* [43] Yang, Z., Liu, D., Wang, C., Yang, J., Tao, D.: Modeling image composition for complex scene generation. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 7764–7773 (2022)
* [44] Yu, F., Koltun, V.: Multi-scale context aggregation by dilated convolutions. In: International Conference on Learning Representations (2016)
* [45] Yuan, L., Chen, Y., Wang, T., Yu, W., Shi, Y., Jiang, Z.H., Tay, F.E., Feng, J., Yan, S.: Tokens-to-token ViT: Training vision transformers from scratch on ImageNet. In: Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV). pp. 558–567 (2021)
* [46] Yun, S., Han, D., Oh, S.J., Chun, S., Choe, J., Yoo, Y.: CutMix: Regularization strategy to train strong classifiers with localizable features. In: Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV). pp. 6023–6032 (2019)
* [47] Zhang, H., Cisse, M., Dauphin, Y.N., Lopez-Paz, D.: mixup: Beyond empirical risk minimization. arXiv preprint arXiv:1710.09412 (2017)
* [48] Zhang, P., Dai, X., Yang, J., Xiao, B., Yuan, L., Zhang, L., Gao, J.: Multi-scale vision longformer: A new vision transformer for high-resolution image encoding. In: Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV). pp. 2998–3008 (October 2021)
* [49] Zhang, Q., Xu, Y., Zhang, J., Tao, D.: ViTAEv2: Vision transformer advanced by exploring inductive bias for image recognition and beyond. arXiv preprint arXiv:2202.10108 (2022)
* [50] Zhang, Q., Yang, Y.B.: ResT: An efficient transformer for visual recognition. Advances in Neural Information Processing Systems 34 (2021)
* [51] Zhu, X., Hu, H., Lin, S., Dai, J.: Deformable convnets v2: More deformable, better results. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). pp. 9308–9316 (2019)
* [52] Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable DETR: Deformable transformers for end-to-end object detection. In: International Conference on Learning Representations (2021)