| { |
| "title": "Multi-Tailed Vision Transformer for Efficient Inference", |
| "abstract": "Recently, Vision Transformer (ViT) has achieved promising performance in image recognition and gradually serves as a powerful backbone in various vision tasks. To satisfy the sequential input of Transformer, the tail of ViT first splits each image into a sequence of visual tokens with a fixed length. Then, the following self-attention layers construct the global relationship between tokens to produce useful representation for the downstream tasks. Empirically, representing the image with more tokens leads to better performance, yet the quadratic computational complexity of self-attention layer to the number of tokens could seriously influence the efficiency of ViT\u2019s inference. For computational reduction, a few pruning methods progressively prune uninformative tokens in the Transformer encoder, while leaving the number of tokens before the Transformer untouched. In fact, fewer tokens as the input for the Transformer encoder can directly reduce the following computational cost. In this spirit, we propose a Multi-Tailed Vision Transformer (MT-ViT) in the paper. MT-ViT adopts multiple tails to produce visual sequences of different lengths for the following Transformer encoder. A tail predictor is introduced to decide which tail is the most efficient for the image to produce accurate prediction. Both modules are optimized in an end-to-end fashion, with the Gumbel-Softmax trick. Experiments on ImageNet-1K demonstrate that MT-ViT can achieve a significant reduction on FLOPs with no degradation of the accuracy and outperform compared methods in both accuracy and FLOPs.", |
| "sections": [ |
| { |
| "section_id": "1", |
| "parent_section_id": null, |
| "section_name": "Introduction", |
| "text": "The great success of Transformer (Vaswani et al., 2017 ###reference_b72###; Devlin et al., 2018 ###reference_b19###; Brown et al., 2020 ###reference_b2###) in Natural Language Processing (NLP) has drawn computer vision researchers\u2019 attention. There have been some attempts on adopting the Transformer as an alternative deep neural architecture in computer vision. Vision Transformer (ViT) (Dosovitskiy et al., 2020 ###reference_b20###) is a seminal work that employs a fully Transformer architecture to address the image classification task. By first splitting an image into multiple local patches, ViT can then form a visual sequence for the Transformer input. The self-attention mechanism in ViT is capable of measuring the relationship between any two local patches and then information of patches are aggregated to produce a high-level representation for the image recognition task.\nFollowing ViT, a number of of variants (Yuan et al., 2021 ###reference_b83###; Han et al., 2021 ###reference_b30###; Pan et al., 2021b ###reference_b60###; Touvron et al., 2021 ###reference_b71###; Jiang et al., 2021 ###reference_b41###; Zhou et al., 2021 ###reference_b87###; Guo et al., 2022 ###reference_b27###; Wang et al., 2021a ###reference_b73###) have been developed. For example, DeiT (Touvron et al., 2021 ###reference_b71###), without the pre-training on an extra large-scale dataset (e.g., JFT-300M (Sun et al., 2017 ###reference_b67###)), for the first time boosts ViT to achieve the state-of-the-art performance on the ImageNet-1K (Deng et al., 2009 ###reference_b18###) benchmark. T2T-ViT (Yuan et al., 2021 ###reference_b83###), which can also be trained from scratch on the ImageNet-1K benchmark, proposes to boost the exchange of local information and global information with a T2T-module before the transformer encoder. 
CrossViT (Chen et al., 2021a) exploits multi-scale features of the image in the vision transformer, and TNT (Han et al., 2021) investigates the attention inside a single patch, proposing to divide each patch into multiple smaller patches. CrossFormer (Wang et al., 2021a) utilizes patches of different sizes to construct cross-scale attention, showing great improvement on several important vision benchmarks.\nThese efforts have established the vision transformer (e.g., Swin Transformer (Liu et al., 2021) and Twins (Chu et al., 2021)) as a strong substitute for CNN architectures (Krizhevsky et al., 2012; He et al., 2016; Tan and Le, 2019) in vision tasks. However, compared with CNNs, vision transformers do not show a significant decrease in computational cost, and sometimes consume even more.\nSince the computational cost of the Transformer is quadratic in the sequence length, a natural idea is to decrease the number of tokens for a potential acceleration. But the number of tokens is also a key factor in accuracy, which calls for a non-trivial token-screening strategy to achieve a trade-off between accuracy and computational cost. Inspired by pruning in CNNs (He et al., 2017b), a few works suggest pruning tokens in vision transformers for efficient inference. PoWER-BERT (Goyal et al., 2020) observes that the redundancy in the Transformer gradually grows from shallow layers to deep layers, and that token redundancy can be measured through attention scores. 
This insightful observation then motivated pruning methods that progressively prune tokens in vision transformers (Rao et al., 2021; Tang et al., 2022; Chen et al., 2021c; Yin et al., 2022; Xu et al., 2022b).\nThese pruning methods have shown great success in reducing the number of tokens for efficient inference in vision transformers, i.e., all of them can preserve accuracy while substantially reducing FLOPs. But their progressive pruning strategy only deals with the Transformer encoder, leaving the initial number of tokens at the input of the vision transformer untouched. In fact, the upper bound of the overall computational cost is mainly determined by the number of tokens produced in the \u201cimage-to-tokens\u201d step before the Transformer encoder. If we split an image into fewer patches, the visual sequence is shorter, which implies faster inference by the following Transformer. Moreover, images of different complexities favor customized numbers of patches: an easy image can be accurately recognized with fewer patches, while a difficult image may need a more fine-grained split to guarantee recognition accuracy, as shown in Figure 1.\nThis paper introduces the Multi-Tailed Vision Transformer (MT-ViT), a novel approach that optimizes the number of patches used to represent input images, thereby reducing computational cost. Unlike traditional Vision Transformers, which typically employ a single \"tail\" in their \"image-to-tokens\" module, MT-ViT incorporates multiple tails that can produce visual sequences of varying lengths. 
By projecting patches of the corresponding resolution into the same d-dimensional token space, we enable sharing of the subsequent Transformer encoder.\nWhen processing an input image, a tail predictor is trained to determine which tail should be used to generate the visual sequence. As tail selection is a non-differentiable process, we employ the Gumbel-Softmax technique to optimize MT-ViT and the tail predictor in an end-to-end fashion. We evaluate MT-ViT on both small-scale datasets (e.g., CIFAR100 (Krizhevsky et al., 2009), TinyImageNet (Chrabaszcz et al., 2017)) and large-scale datasets (e.g., ImageNet-1K (Deng et al., 2009)) on top of various backbones (e.g., DeiT (Touvron et al., 2021), T2T-ViT (Yuan et al., 2021) and MiniViT (Zhang et al., 2022)). We highlight the contributions of MT-ViT as follows.\nWe propose a novel approach that optimizes the number of patches used to represent input images, thereby reducing the computational cost of the vision transformer. Our approach serves as a general architecture for efficient inference of vision transformers and can be integrated with various ViT backbones.\nEmpirical results demonstrate MT-ViT\u2019s ability to maintain accuracy while achieving up to a 70% reduction in FLOPs on CIFAR100 and TinyImageNet, and a clear advantage over competing pruning methods on the ImageNet-1K benchmark.\nVisualizations of the tail predictor\u2019s decisions reveal its capacity to automatically gauge the difficulty of an image; its decisions are largely consistent with human visual judgement." |
| }, |
| { |
| "section_id": "2", |
| "parent_section_id": null, |
| "section_name": "Related Work", |
| "text": "In this section, we discuss the development of vision transformer first. Then. we introduce some efficient inference methods for vision transformer and neural architecture search methods in transformer." |
| }, |
| { |
| "section_id": "2.1", |
| "parent_section_id": "2", |
| "section_name": "Vision Transformer", |
| "text": "The transformer has been widely used in NLP community (Vaswani et al., 2017 ###reference_b72###; Devlin et al., 2018 ###reference_b19###; Brown et al., 2020 ###reference_b2###) and achieved great success. Inspired by this major success of transformer architectures in the field of NLP, researchers have recently applied transformer to computer vision (CV) tasks (Han et al., 2020 ###reference_b29###; Qiu et al., 2022b ###reference_b62###; He et al., 2021 ###reference_b34###; Qiu et al., 2022a ###reference_b61###; Jiao et al., 2023 ###reference_b42###; Zhao et al., 2022 ###reference_b86###; Rend\u00f3n-Segador et al., 2023 ###reference_b64###; Han et al., 2022 ###reference_b31###; Chopin et al., 2023 ###reference_b15###; Jia et al., 2022 ###reference_b40###; Chen et al., 2023a ###reference_b6###; Gao et al., 2023 ###reference_b23###; Jahanbakht et al., 2022 ###reference_b38###). The early attempts about applying Transformer to vision tasks focus on combining convolution with self-attention.\nDETR (Carion et al., 2020 ###reference_b4###) is proposed for object detection task, which first exploits the convolution layer to extract visual features and then refines features with Transformer. BotNet (Srinivas et al., 2021 ###reference_b65###) replaces the convolution layers with multi-head self-attention layer at the last stage of ResNet (He et al., 2016 ###reference_b33###) and achieves good performance.\nViT (Dosovitskiy et al., 2020 ###reference_b20###) is the first work to introduce a fully Transformer architecture directly into vision tasks. By pre-training on massive datasets like JFT-300M (Sun et al., 2017 ###reference_b67###) and ImageNet-21K (Deng et al., 2009 ###reference_b18###), ViT achieves state-of-the-art performance on various image recognition benchmarks. However, ViT\u2019s performance is relatively modest when trained on mid-sized datasets such as ImageNet-1K (Deng et al., 2009 ###reference_b18###). 
In comparison to a ResNet (He et al., 2016) of similar size, ViT obtains slightly lower accuracy. The primary reason for this discrepancy is that Transformers lack certain inductive biases about images, such as locality and translation equivariance. These biases are critical for generalization, particularly when training vision transformers with limited data. Later, DeiT (Touvron et al., 2021) addresses this data-efficiency problem by simply modifying the Transformer and proposing a Knowledge Distillation (KD) (Hinton et al., 2015) optimization strategy, improving accuracy on ImageNet-1K. Some follow-up works (Yuan et al., 2021; Han et al., 2021; Chen et al., 2021a) focus on exploiting local information of the image, which leads to significantly improved performance. Other works (Liu et al., 2021; Pan et al., 2021b; Lee et al., 2022) adopt a deep-narrow structure like CNNs to produce multi-scale features for downstream dense prediction tasks. Since the vision transformer is known to suffer from a huge number of parameters, there are also attempts to reduce parameters while retaining the same performance (Mehta and Rastegari, 2021; Zhang et al., 2022; Wu et al., 2022).\nAt present, the vision transformer has been applied to various vision tasks, such as medical image analysis (Grigas et al., 2023; Xu et al., 2022a), NeRF (Chen et al., 2023b), supersampling (Guo et al., 2023), and multi-information fusion (Xu et al., 2023; Odusami et al., 2023; Zhang et al., 2023)." |
| }, |
| { |
| "section_id": "2.2", |
| "parent_section_id": "2", |
| "section_name": "Efficient Transformer", |
| "text": "Despite ViT\u2019s impressive performance in vision tasks, achieving good performance requires significant computational resources. Consequently, researchers are interested in developing a more efficient Transformer architecture (Fournier et al., 2023 ###reference_b22###; Chitty-Venkata et al., 2023b ###reference_b12###; Kim et al., 2023 ###reference_b43###). Network pruning and compression have been widely used in CNNs to speed up neural network inference (i.e., filter pruning (He et al., 2017b ###reference_b35###; Liu et al., 2017 ###reference_b53###; Tang et al., 2020 ###reference_b70###; Chitty-Venkata and Somani, 2020 ###reference_b13###)) and adopt light-weight nerual network architecture (Wang et al., 2018a ###reference_b75###, b ###reference_b76###). Some researchers (Goyal et al., 2020 ###reference_b25###; Rao et al., 2021 ###reference_b63###; Tang et al., 2022 ###reference_b69###; Su et al., 2022 ###reference_b66###; Chen et al., 2021c ###reference_b8###; Pan et al., 2021a ###reference_b59###; Xu et al., 2022b ###reference_b80###; Yin et al., 2022 ###reference_b82###; Liang et al., 2022 ###reference_b50###) have been inspired by this idea and have attempted to use token pruning to identify and eliminate inferior tokens in order to improve efficiency.\nMore recently, A-ViT (Yin et al., 2022 ###reference_b82###) achieves efficient computing by adaptively halting tokens that are deemed irrelevant to the task, thereby enabling dense computation only on the active informative tokens. This module reuses existing block parameters and utilizes a single neuron from the last dense layer in each block to compute the halting probability, requiring no additional parameters or computations. EViT (Liang et al., 2022 ###reference_b50###) highlights the importance of class tokens and images by their attention scores. ATS (Fayyaz et al., 2022 ###reference_b21###) introduces a parameter-free module that scores and adaptively samples significant tokens. 
The pruning methods above are mostly score-based, keeping only the tokens with the highest scores. However, this selection scheme can cause redundancy and information loss. Token Pooling (Marin et al., 2023) takes a different path to downsample tokens: it forms multiple clusters to approximate the set of tokens and then selects the cluster centers. The output tokens are thus a more accurate representation of the original token set than those of score-based methods.\nSome adaptive methods aim to reduce the inference time of the model conditionally. Motivated by this idea, (Bakhtiarnia et al., 2022) proposes several multi-exit architectures for dynamic inference in vision transformers. The core idea is to exit early at an intermediate layer when the prediction confidence is above a threshold. Dynamic-Vision-Transformer (DVT) (Wang et al., 2021b) reduces the computational cost by cascading three Transformers with increasing numbers of tokens, which are sequentially activated in an adaptive fashion at inference time. Specifically, an image is first sent into the Transformer with the fewest tokens. By inspecting the prediction confidence, DVT decides whether to proceed to the next Transformer. Each subsequent model also utilizes the intermediate features of the former Transformer." |
| }, |
| { |
| "section_id": "2.3", |
| "parent_section_id": "2", |
| "section_name": "Neural Architecture Search", |
| "text": "Neural Architecture Search (NAS) (Chitty-Venkata et al., 2023a ###reference_b11###) is designed to automatically create neural architectures for networks. Early NAS methods were computationally intensive, requiring the training and evaluation of a large number of architectures. However, differentiable NAS approaches like DARTS (Liu et al., 2018 ###reference_b52###), DNAS (Wu et al., 2019 ###reference_b77###), and ProxylessNAS (Cai et al., 2019 ###reference_b3###) have emerged, enabling joint and differentiable optimization of model weights and architecture parameters through gradient descent. This significantly reduces computational costs.\nRecently, vision transformer has draw considerable attention in the research community and some researchers are exploring the use of neural architecture search (NAS) (Chitty-Venkata et al., 2022 ###reference_b10###; Chitty-Venkata and Somani, 2022 ###reference_b14###) to find an efficient ViT architecture. For example, AutoFormer (Chen et al., 2021b ###reference_b7###) combines the weights of various blocks in the same layers during supernet training. NASViT (Gong et al., 2021 ###reference_b24###) aims to alleviate the gradient conflict issue in NAS and ensure the efficiency of the ViT model. BossNAS (Li et al., 2021 ###reference_b47###) implemented the search with an self-supervised training scheme and leveraged a hybrid CNN-transformer search space for boosting the performance.\nOur paper introduces a differentiable training approach for the tail predictor, which bears similarities to NAS training. However, our paper\u2019s objective differs, as we do not seek a specific structure for the Vision Transformer. Instead, our focus is on enhancing the efficiency of the Vision Transformer by dynamically allocating different samples with varying computational budgets, which is achieved through the use of dynamic networks." |
| }, |
| { |
| "section_id": "3", |
| "parent_section_id": null, |
| "section_name": "Preliminaries", |
| "text": "A standard Transformer in NLP tasks normally requires a 1D sequence of token embedding as the input. To handle 2D images, the tail of ViT splits an image into independent patches in a resolution and then projects each local patch into a dimensions embedding to form a visual sequence .\nSimilar to BERT (Devlin et al., 2018 ###reference_b19###), ViT also introduces a learnable class token into the input sequence (i.e., ). Subsequently, ViT learns a representation of the image with the following -layers Transformer encoder, which mainly consists of two alternating components (i.e., Multi-head Self-Attention (MSA) module and Multi-Layer Perceptron (MLP) module)." |
| }, |
| { |
| "section_id": "3.1", |
| "parent_section_id": "3", |
| "section_name": "Multi-head Self-Attention", |
| "text": "MSA module calculates the relationship between any two tokens to generate the attention map with self-attention layer. Given a sequential input , standard self-attention module first projects linearly into three embedding called query Q, key K and value V respectively,\nwhere denotes the linear projection operator. Then, the attention map is calculated by the dot operation between query Q and key K and then the attention map is finally applied to weight the value embedding V. The whole process of self-attention can be defined as follows,\nCompared to the vanilla self-attention, multi-head self-attention runs self-attention operations in parallel. The output of MSA can be formulated as follows," |
| }, |
| { |
| "section_id": "3.2", |
| "parent_section_id": "3", |
| "section_name": "Multi-Layer Perceptron", |
| "text": "The MLP module is applied after MSA module for representing feature and introducing non-linearity. Denoting as the output of MSA module, MLP can be defined as follows,\nwhere and denote the fully-connected layer, and denotes the non-linear activate function (e.g., GELU)." |
| }, |
| { |
| "section_id": "3.3", |
| "parent_section_id": "3", |
| "section_name": "Transformer Encoder", |
| "text": "The Transformer encoder is constructed by stacking MSA module and MLP module with residual connection. Therefore, the encoder can be defined as\nwhere is the layer normalization for stable the training of Transformer." |
| }, |
| { |
| "section_id": "3.4", |
| "parent_section_id": "3", |
| "section_name": "Analysis of the computation complexity", |
| "text": "The floating-point operations (FLOPs) is commonly used as a metric to measure the theoretical computational cost of the model. After summing up the FLOPs of all the operations in the Transformer encoder, we find that MSA and MLP contribute the most to FLOPs. Specifically, the FLOPs of MSA are and the FLOPs of MLP are , where is the dimension expansion ratio of the fully-connected layer in MLP.\nFrom the analysis above, we can observe that the FLOPs of the Transformer encoder is quadratic to the embedding dimension and number of tokens .\nDue to the large number of and (usually hundred), the computational cost of vision transformer is large. However, if we reduce these two hyper-parameters for ViT model, we can easily obtain a rapid decrease in computational cost without modifying the architecture of the Transformer.\nAdjusting the embedding dimension has been well considered in various ViT backbones. By setting from a small value to a large value, we can obtain ViT model with increasing FLOPs (i.e., ViT-Small, ViT-Base and ViT-Large). In return for the high complexity, the larger the model size is, the higher accuracy the model will normally achieve. Another way is to consider the number of tokens , which can be up to the resolution of the local patch. For an image of 224224 size, ViT model normally splits the image into non-overlap patches with 1616 size, so the number of tokens is equal to . By setting different patch resolutions, we can obtain different ViT models, e.g., ViT-Base/14, ViT-Base/16 and ViT-Base/32.\n###figure_2###" |
| }, |
| { |
| "section_id": "4", |
| "parent_section_id": null, |
| "section_name": "Methodology", |
| "text": "While the Vision Transformer demonstrates promising performance in vision tasks, the escalating computational cost for efficient inference has become a focal point for researchers. As discussed in Section 3 ###reference_###, decreasing the number of tokens can be an effective way to ensure a significant reduction on FLOPs. Existing Transformer pruning methods have explored the screening out of uninformative tokens at intermediate layers. Nevertheless, there remains potential for even greater computational cost reduction by reducing the number of tokens at earlier positions. Based on this idea, we further explore reducing tokens at the \u201cimage-to-tokens\u201d step for a potential FLOPs reduction.\nIn the \u201cimage-to-tokens\u201d step before the Transformer, an image is decomposed into a sequence of non-overlap patches with size and the number of patches is equal to . Therefore, the resolution of the local patch can determine the length of the visual sequence and further affects the accuracy and computational cost of ViT. With a fine-grained patch size, ViT could reach higher performance with increasing FLOPs. On the contrary, with a coarse-grained patch size, the performance of ViT decreases with a reduction of FLOPs.\nMotivated by this idea, we consider leveraging the advantages of both fine-grained patch size and coarse-grained patch size to achieve the trade-off between accuracy and efficiency. Intuitively, easy images could be accurately recognized with a coarse-grained patch size while difficult images often require a fine-grained patch split to achieve an accurate prediction.\nTo achieve this, we firstly need a network module that can be compatible with both fine-grained patch and coarse-grained patch. 
A natural idea is to construct a stack of M independent vision transformers, each pre-trained with its corresponding patch size.\nTo decide which Transformer is suitable for an image, a one-hot decision vector D is introduced to determine which patch size is proper for the given image. With an optimal decision D, we can minimize the computational cost as much as possible while preserving the accuracy of the model.\nThe basic workflow is illustrated as follows. Denoting C as the number of classes and z_i as the logit of the i-th stacked ViT model, the final prediction of the whole stacked model is the sum over the Transformers weighted by D,\n\u0177 = \u2211_{i=1}^{M} D_i z_i.\nThe classification loss for image x can be written as\nL_cls = CE(\u0177, y),\nwhere CE(\u00b7, \u00b7) is the softmax cross-entropy loss and y is the class label related to x." |
| }, |
| { |
| "section_id": "4.1", |
| "parent_section_id": "4", |
| "section_name": "Multi-Tailed Vision Transformer", |
| "text": "In the stacked Transformer framework, a stack of independent vision transformers is used to process multi-scale visual sequences respectively. But the stacked Transformers directly lead to a times number of parameters than that of a single vision transformer model, which is horribly parameter-inefficient. Normally, setting some shared parameters and layers among these independent Transformers is essential for reducing the parameter of the model when processing multi-scale input.\nInspired by this idea, we adopt a shared Transformer encoder in the stacked Transformer , which results in the multi-tailed vision transformer (MT-ViT). The \u201cimage-to-tokens\u201d step is referred as the tail of ViT. MT-ViT adopts independent tails for projecting patches in different sizes into a -dimension embedding, and then shares all parameters of Transformer encoders and classification head, as shown in Figure 2 ###reference_###. The instance-aware tails are conditioned to images and projects patches of different sizes into vectors of the same dimension to satisfy the input requirement of the public Transformer encoder. As the inside architecture of Transformer encoder has not been changed, we can make MT-ViT serve as a general backbone that is compatible with the mainstream ViT backbones.\nMT-ViT adopts multiple tails before the Transformer encoder. For the -th tail, it should receive patches of size and then proceed into the projection function to get the embedding , where .\nFor different vision transformer backbones, the projection function can be quite different, therefore we should re-design the tail to produce dynamic sequences of -dimension embedding when MT-ViT is equipped with different ViT backbones." |
| }, |
| { |
| "section_id": "4.2", |
| "parent_section_id": "4", |
| "section_name": "Dynamic Tail Selection", |
| "text": "Optimizing MT-ViT (Eq. (8 ###reference_###)) could be intractable for the following reason. In the classical image classification task, images and their corresponding label are the only available information in the training set, while the optimal tail selection remains unknown. We therefore expect an appropriate estimation on decision , so that we can well indicate the \u201ceasiness\u201d of the image and make full use of the multi-tailed vision transformer backbone. To solve this problem, we propose a CNN-based tail predictor to automatically distinguish the \u201ceasiness\u201d of the image and output the proper decision . Given the image , the policy outputs the categorical distribution,\nwhere represents the probability of choosing the -th patch size. To generate decision , we need to sample from or select the highest probability of to get the one-hot vector. However, both sampling from softmax distribution and selecting the highest value in are non-differentiable, which precludes the back-propagation and impedes the end-to-end training." |
| }, |
| { |
| "section_id": "4.2.1", |
| "parent_section_id": "4.2", |
| "section_name": "4.2.1 Differentiable Tail Sampling", |
| "text": "Gumbel-Softmax trick (Jang et al., 2016 ###reference_b39###) is a technique that can be used to train models with discrete latent variables through backpropagation. It introduces a continuous relaxation of discrete random variables, allowing for the use of gradient-based optimization methods. Therefore, we apply the Gumbel-Softmax trick to ensure a differentiable tail sampling during the forward propagation.\nFirst, we form a Gumbel-Softmax distribution and transform the soft probability to a discrete variable as follows,\nwhere are i.i.d random noise samples drawn from Gumbel(0, 1) distribution. The Gumbel distribution has shown to be stable under operations (Maddison et al., 2016 ###reference_b55###).\nBy applying the inverse-CDF transformation, can be computed as\nwhere is the Uniform distribution. Since the in Eq. (10 ###reference_###) is still a non-differentiable operation, we further use the softmax function as a continuous, differentiable approximation to replace . A differentiable sampling can be written as,\nwhere is the temperature parameter to adjust the Gumbel-Softmax distribution. can be regarded as continuous relaxations of one-hot vectors . As the softmax temperature approaches 0, samples from the Gumbel-Softmax distribution approximate one-hot vectors. The sampling of the approximated one-hot vector is refactored into a deterministic function-componentwise addition followed by of the parameters and fixed Gumbel distribution . The non-differentiable part is therefore transferred to the samples from the Gumbel distribution.\nThis reparameterization trick allows gradients to flow from decision to the and both tail predictor and MT-ViT can be optimized as a whole with Eq. (8 ###reference_###).\nHowever in our setting, we are constrained to sample discrete values strictly because we can only choose one tail of MT-ViT to make the prediction. Inspired by Straight-Through Gumbel Estimator, in the forward propagation, we discretize using in Eq. 
(10 ###reference_###) but use the continuous approximation in the backward propagation by approximating ." |
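The sampling procedure can be sketched as follows (a NumPy illustration of the forward pass only; in practice the soft sample lives in an autodiff framework so that its gradient can stand in for the hard one):

```python
import numpy as np

def gumbel_softmax_st(log_pi, tau=1.0, rng=None):
    """Straight-Through Gumbel-Softmax sampling.
    Returns the hard one-hot decision D used in the forward pass and the soft
    relaxation D_hat whose gradient replaces D's in the backward pass."""
    rng = rng or np.random.default_rng()
    u = rng.uniform(1e-9, 1.0 - 1e-9, size=log_pi.shape)
    g = -np.log(-np.log(u))                 # Gumbel(0, 1) noise via the inverse CDF
    y = (log_pi + g) / tau
    y = y - y.max()                         # numerical stability
    D_hat = np.exp(y) / np.exp(y).sum()     # soft sample (continuous relaxation)
    D = np.zeros_like(D_hat)
    D[np.argmax(D_hat)] = 1.0               # hard one-hot sample (forward pass)
    return D, D_hat
```

Because the argmax of `log_pi + g` follows exactly the categorical distribution defined by the probabilities (the Gumbel-max trick), repeated hard samples recover the predictor's distribution regardless of the temperature.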
| }, |
| { |
| "section_id": "4.2.2", |
| "parent_section_id": "4.2", |
| "section_name": "4.2.2 Optimization", |
| "text": "The basic loss function (Eq. (8 ###reference_###)) defines a softmax cross-entropy loss between the prediction of MT-ViT and the ground-truth label. The multi-tailed vision transformer backbone and the tail predictor are optimized jointly with the Gumbel-Softmax trick. However, Eq. (8 ###reference_###) only considers achieving better accuracy, which makes the tail predictor always encourage MT-ViT to activate the tail with the highest accuracy and therefore lead to the collapse of the predictor\u2019s training. To solve this problem, we consider adding a FLOPs constraint regularization on the choice of different tails. The total loss can be written as follows,\nwhere is the FLOPs regularization and is the hyper-parameter to achieve the trade-off between accuracy and FLOPs. The FLOPs regularization can punish the situation when the predictor selects the tail with high computational cost. On the contrary, in those situations where the predictor selects the tail with low FLOPs, the regularization should not contribute any penalty to the total loss. In this spirit, our designed FLOPs constraint regularization is formulated as follows\nwhere is the normalized FLOPs of the -th branch and is the hyper-parameter to adjust the threshold of penalty. While the FLOPs of the selected model is larger than the threshold , will punish the prediction.\nIn summary, while the first cross-entropy loss encourages the predictor to choose the tail that leads to high accuracy, the second FLOPs regularizer will penalize the situation when the predictor chooses the tail with high FLOPs. Hence, the mechanism of MT-ViT can maintain the accuracy while reducing the computational cost at most." |
| }, |
| { |
| "section_id": "4.3", |
| "parent_section_id": "4", |
| "section_name": "Discussion", |
| "text": "In this subsection, we discuss the differences between MT-ViT and several closely related works (e.g., multi-scale patch methods and token-pruning methods)." |
| }, |
| { |
| "section_id": "4.3.1", |
| "parent_section_id": "4.3", |
| "section_name": "4.3.1 Compare with multi-scale patch methods", |
| "text": "The idea of multi-scale patches has been investigated in some related works, such as CrossViT (Chen et al., 2021a ###reference_b5###) and MPViT (Lee et al., 2022 ###reference_b46###).\nCrossViT and MPViT are two representative works that utilize multi-scale patches to design a network backbone. By conducting multi-scale feature aggregation in the transformer layers, they enhance the performance of visual recognition, yet introduce more computational cost. Although the idea of multi-scale patch embedding also appears in MT-ViT, several distinctive aspects set MT-ViT apart. (1) MT-ViT achieves efficient inference through multi-scale patches, while in CrossViT and MPViT, multi-scale patches primarily contribute to generating multi-scale features, whose aggregation further enhances the performance of visual recognition. (2) CrossViT and MPViT employ fixed networks, activating all branches for every image. MT-ViT introduces a dynamic approach, activating only one tail per image, with each tail corresponding to a different computational budget. The tail predictor dynamically selects the appropriate tail, resulting in a dynamic neural network. (3) CrossViT and MPViT utilize specially designed and fixed networks. In contrast, MT-ViT offers flexibility by allowing a switch to different Vision Transformer backbones, providing adaptability and versatility in the model architecture.\nThe idea of multi-scale tokens has also been used in DVT (Wang et al., 2021b ###reference_b74###), and we will discuss the empirical comparison between DVT and MT-ViT in the experiments." |
| }, |
| { |
| "section_id": "4.3.2", |
| "parent_section_id": "4.3", |
| "section_name": "4.3.2 Compare with token-pruning methods", |
| "text": "Token-pruning methods aim to progressively prune uninformative tokens in the transformer layers and thereby accelerate the inference of the Transformer, since the computational cost of the Transformer is quadratic in the number of tokens. The differences between token-pruning methods and MT-ViT can be summarized in the following aspects.\nFirst, pruning-based methods normally start to reduce the number of tokens at the middle layers. A greater reduction of computational cost is still possible by reducing the number of tokens at an earlier position. Based on this idea, our method reduces tokens by adjusting the patch size in different tails at the very beginning, before the transformer encoder. Empirical results also demonstrate the effectiveness of MT-ViT over pruning methods.\nSecond, pruning methods normally place the predictor/gate module inside the transformer architecture, while the predictor in MT-ViT is placed before the transformer. This design keeps the original architecture of the transformer intact and therefore makes it easier to switch between different backbones.\nThe advantage of token-pruning methods lies in their lower training cost compared to MT-ViT. Token pruning can be conducted on existing ViT backbones; since trained ViT models are easy to acquire, token-pruning methods do not require training the model from scratch. Normally, they only need to jointly finetune the transformer backbone and token-pruning modules for 30 epochs. By contrast, the multi-tailed approach adds two extra tails before the transformer encoder and requires 300 epochs of pre-training from scratch to ensure satisfactory performance for all tails." |
| }, |
| { |
| "section_id": "5", |
| "parent_section_id": null, |
| "section_name": "Experiment", |
| "text": "In this section, we conduct extensive experimental analysis on the small-scale datasets CIFAR100 (Krizhevsky et al., 2009 ###reference_b44###) and TinyImageNet (Chrabaszcz et al., 2017 ###reference_b16###), and the large-scale ImageNet-1K benchmark (Deng et al., 2009 ###reference_b18###), to show the performance of our proposed Multi-Tailed Vision Transformer from various aspects. Additionally, we evaluate the performance of MT-ViT on other vision tasks such as object detection." |
| }, |
| { |
| "section_id": "5.1", |
| "parent_section_id": "5", |
| "section_name": "Experiment Setting", |
| "text": "" |
| }, |
| { |
| "section_id": "5.1.1", |
| "parent_section_id": "5.1", |
| "section_name": "5.1.1 Datasets", |
| "text": "CIFAR100 (Krizhevsky et al., 2009 ###reference_b44###) is a widely-used dataset for image recognition tasks, which contains 50,000 training images and 10,000 test images. There are 100 classes in this dataset, grouped into 20 superclasses. Each image has a \u201cfine label\u201d as the class label and a \u201ccoarse label\u201d as the superclass. In our experiment, we only make use of the fine labels. TinyImageNet (Chrabaszcz et al., 2017 ###reference_b16###) is a subset of the well-known ImageNet-1K benchmark, which contains 100,000 images at 64×64 resolution with 100 categories.\nImageNet-1K (Deng et al., 2009 ###reference_b18###) is the most popular benchmark for evaluating the classification performance of deep learning models. It contains 1.28 million training images and 50,000 validation images with 1,000 categories. Details of these three datasets are summarized in Table 1 ###reference_###.\n###table_1### ###table_2###" |
| }, |
| { |
| "section_id": "5.1.2", |
| "parent_section_id": "5.1", |
| "section_name": "5.1.2 Backbones", |
| "text": "We implement MT-ViT on top of two popular ViT backbones (i.e., DeiT-Ti/S (Touvron et al., 2021 ###reference_b71###) and T2T-ViT-7/12 (Yuan et al., 2021 ###reference_b83###)). The FLOPs of each tail is given in Table 2 ###reference_###.\nAs for the tail predictor, we choose the light-weight MobileNetv3-small (Howard et al., 2019 ###reference_b37###) as the backbone, since we want to minimize the computational influence of the tail predictor on the whole framework as far as possible. The number of parameters and FLOPs of MobileNetv3-small are 2.54M and 0.06G, respectively.\nFollowing DVT (Wang et al., 2021b ###reference_b74###), MT-ViT employs three tails, i.e., Short Tail (ST), Middle Tail (MT) and Long Tail (LT), to output different numbers of tokens, i.e., 7×7, 10×10 and 14×14. For DeiT, a convolutional kernel with size p×p and stride p can be used to create non-overlapping tokens. So for a 224×224 image, we can obtain 7×7, 10×10 and 14×14 tokens by setting p to 32, 23 and 16 in the different tails. Notice that for the middle tail, images are first resized to 230×230 resolution.\nAs for T2T-ViT, the T2T module is used to produce the sequence of tokens. The three soft-split procedures are mainly responsible for controlling the number of tokens in T2T-ViT. In each soft split, a patch of size k is extracted with overlapping s and padding p on the image, where k−s plays the role of the stride in a convolution operation. So for an H×W image, the number of output tokens after a soft split is\n((H + 2p − k)/(k − s) + 1) × ((W + 2p − k)/(k − s) + 1).\nIn the long tail, the patch sizes for the three soft splits are 7, 3 and 3, and the overlappings are 3, 1 and 1 (i.e., strides of 4, 2 and 2), which reduces the spatial size of the input from 224×224 to 14×14. By adjusting the soft-split patch sizes and overlappings, the middle tail produces 10×10 tokens and the short tail produces 7×7 tokens. Therefore, we can adopt multiple tails and obtain the MT-ViT backbone.\n###table_3###" |
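The relation between patch size and token count for the DeiT-style tails can be checked directly. A minimal sketch: the 224/230 resolutions and patch sizes 32/23/16 follow the setup above, with non-overlapping patches.

```python
def num_tokens(image_size, patch_size):
    # Non-overlapping patch embedding: tokens per side = image_size // patch_size.
    side = image_size // patch_size
    return side * side

st = num_tokens(224, 32)   # short tail:  7 x 7   = 49 tokens
mt = num_tokens(230, 23)   # middle tail: 10 x 10 = 100 tokens (image resized to 230x230)
lt = num_tokens(224, 16)   # long tail:   14 x 14 = 196 tokens
```

The middle tail needs the 230×230 resize precisely because 224 is not divisible by 23.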
| }, |
| { |
| "section_id": "5.1.3", |
| "parent_section_id": "5.1", |
| "section_name": "5.1.3 Metric", |
| "text": "The reported FLOPs considers both the tail predictor and MT-ViT.\nSupposing F_ST, F_MT and F_LT are the FLOPs of the three tails, the overall FLOPs is calculated by\nF = (n_ST·F_ST + n_MT·F_MT + n_LT·F_LT) / (n_ST + n_MT + n_LT) + F_p,\nwhere n_ST, n_MT and n_LT are the numbers of images that have been processed by each individual tail and F_p is the FLOPs of the tail predictor." |
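The metric can be computed as below. The per-tail FLOPs use the DeiT-S numbers from Table 2, but the split of validation images across tails is hypothetical, chosen only to illustrate the averaging.

```python
def average_flops(flops_per_tail, image_counts, predictor_flops):
    # Dataset-level average FLOPs per image: each image pays for its selected
    # tail, plus the tail predictor that runs on every image.
    total = sum(f * n for f, n in zip(flops_per_tail, image_counts))
    return total / sum(image_counts) + predictor_flops

# DeiT-S tails (ST/MT/LT) with a hypothetical split of the 50,000 val images:
avg = average_flops([1.14, 2.3, 4.6], [30000, 15000, 5000], 0.06)
```

The predictor's 0.06G is added once per image, which is why it must stay lightweight relative to the cheapest tail.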
| }, |
| { |
| "section_id": "5.1.4", |
| "parent_section_id": "5.1", |
| "section_name": "5.1.4 Implementation Details", |
| "text": "The whole training mainly contains two processes: we first pre-train the MT-ViT backbone, and then jointly finetune both the tail predictor and MT-ViT in an end-to-end fashion. In the backbone pre-training, all tails are activated. The pre-training setting is basically the same as that in the official implementations of DeiT and T2T-ViT. The MT-ViT backbone is trained for 300 epochs on ImageNet-1K.\nFor small-scale experiments, we transfer the pre-trained MT-ViT backbone to the downstream datasets, i.e., CIFAR100 and TinyImageNet. Following the implementation of T2T-ViT, we finetune the pre-trained MT-ViT backbone for 60 epochs using an SGD optimizer with cosine learning rate decay.\nIn the finetuning step, a tail predictor is introduced to determine which tail is suitable for each image. We jointly finetune the MT-ViT backbone and the predictor for 30 epochs." |
| }, |
| { |
| "section_id": "5.2", |
| "parent_section_id": "5", |
| "section_name": "Experiment on Small-Scale Datasets", |
| "text": "We first conduct small-scale experiments on CIFAR100 and TinyImageNet. We implement MT-ViT on four versions of DeiT/T2T-ViT backbones, i.e., DeiT-Ti, DeiT-S, T2T-ViT-7 and T2T-ViT-12. The experimental results are shown in Table 3 ###reference_###. MT-ViT(A*) and MT-ViT(S*) are the same method, but they use different α and β to adjust the trade-off between accuracy and speed: the former aims to achieve higher accuracy, while the latter pursues a lower computational cost.\n###table_4### Compared to the baseline, it can be clearly observed that MT-ViT(A*) outperforms the baseline by 1%-2% while still keeping a visible advantage in FLOPs on both CIFAR100 and TinyImageNet.\nFor example, MT-ViT(A*) achieves 82.8%, 85.7%, 84.9% and 87.9% on CIFAR100 with the four ViT backbones, gaining improvements of 1.1%, 2.2%, 2.0% and 0.9% over the baseline, respectively. Alongside this clear advantage in accuracy, MT-ViT(A*) still reduces FLOPs by 35.2%, 34.2%, 37.2% and 43.6%. The same trend holds on TinyImageNet: MT-ViT(A*) achieves roughly the same improvement in accuracy while retaining a 15%-30% reduction in computational cost with the four ViT backbones.\nBy contrast, MT-ViT(S*) retains a similar accuracy to the baseline but with a significant reduction in FLOPs. For instance, MT-ViT(S*) implemented on top of DeiT-S reaches a similar accuracy of 87.0% on CIFAR100, yet with a significant decline (over 70%) in computational cost compared to the baseline." |
| }, |
| { |
| "section_id": "5.3", |
| "parent_section_id": "5", |
| "section_name": "Experiment on Large-Scale Datasets", |
| "text": "In this subsection, we investigate the performance of MT-ViT on the large-scale benchmark ImageNet-1K." |
| }, |
| { |
| "section_id": "5.3.1", |
| "parent_section_id": "5.3", |
| "section_name": "5.3.1 Compared Methods", |
| "text": "We compare our method with several state-of-the-art methods, including model pruning methods and ViT-based methods that exploit multi-scale features. SCOP (Tang et al., 2020 ###reference_b70###) is a state-of-the-art method for pruning the channels of CNNs. Inspired by this, we re-implement it to prune patches in vision transformers. PoWER (Goyal et al., 2020 ###reference_b25###) accelerates BERT\u2019s inference by progressively pruning tokens in BERT\u2019s layers. We directly transfer PoWER from BERT to the vision transformer. DynamicViT (Rao et al., 2021 ###reference_b63###), PS-ViT (Tang et al., 2022 ###reference_b69###), SViTE (Pan et al., 2021a ###reference_b59###), Evo-ViT (Xu et al., 2022b ###reference_b80###), IA-RED (Pan et al., 2021a ###reference_b59###) and EViT (Liang et al., 2021 ###reference_b49###) are six closely related methods that accelerate inference by progressively pruning tokens in the middle layers with self-slimming or a gate function.\n###figure_3### ###table_5### ###table_6### ###table_7###" |
| }, |
| { |
| "section_id": "5.3.2", |
| "parent_section_id": "5.3", |
| "section_name": "5.3.2 Performance", |
| "text": "The results are shown in Table 4 ###reference_###. Since SCOP (Tang et al., 2020 ###reference_b70###) is designed for pruning channels in CNN models, it is not surprising that it does not perform well in this migration. Though there is an approximately 40% reduction in FLOPs, its accuracy drops rapidly as well (i.e., -3.3% in DeiT-Ti and -2.3% in DeiT-S).\nPoWER (Goyal et al., 2020 ###reference_b25###) performs only slightly better than SCOP, which suggests that Transformer pruning from the NLP field is not the optimal solution for vision transformers. By contrast, the pruning methods designed for vision transformers, i.e., PS-ViT (Tang et al., 2022 ###reference_b69###), SViTE (Pan et al., 2021a ###reference_b59###), DynamicViT (Rao et al., 2021 ###reference_b63###), Evo-ViT (Xu et al., 2022b ###reference_b80###), IA-RED (Pan et al., 2021a ###reference_b59###) and EViT (Liang et al., 2021 ###reference_b49###), are significantly better. For example, these methods can substantially decrease the FLOPs of DeiT-S with only a small reduction of accuracy (less than 0.7%).\nOur MT-ViT performs even better, achieving a similar FLOPs reduction while reaching higher accuracy.\nFor example, with the DeiT-Ti model, a nearly 40% reduction in FLOPs makes MT-ViT comparable with the other methods in computational cost. However, MT-ViT outperforms the baseline by 0.7% in Top-1 accuracy and 0.2% in Top-5 accuracy, while the other competing methods are clearly below the baseline. When running in a high-speed mode with the DeiT-S model, the accuracy and FLOPs of MT-ViT are comparable with those of the other methods; in the high-accuracy mode, MT-ViT outperforms competing methods by at least 0.5%.\nWe also investigate the performance of MT-ViT when using pruning methods as the backbone. We choose one of the most representative pruning methods, DynamicViT. Based on the pre-trained multi-tailed vision transformer backbone, we employ 3 groups of predictors for the 3 tails, and each group contains 3 predictors placed in the 3rd, 6th and 9th transformer layers, respectively. During pre-training, we jointly optimize the corresponding predictors and the transformer in each tail for 10 epochs, i.e., 30 epochs in total. After that, we incorporate the tail predictor and train MT-ViT for 15 epochs. The results are shown in Table 5 ###reference_###, and we can observe that, after applying the multi-tailed approach, DynamicViT achieves a further FLOPs reduction without losing much accuracy." |
| }, |
| { |
| "section_id": "5.3.3", |
| "parent_section_id": "5.3", |
| "section_name": "5.3.3 Influence of α and β", |
| "text": "We also investigate how the hyper-parameters α and β in the FLOPs constraint regularization influence the performance of MT-ViT. The results are shown in Figure 3 ###reference_###. In the left figure, we maintain a fixed value of β (0.5) and vary α from 0 to 1. Conversely, in the right figure, we keep α fixed at 0.5 and adjust β in the range of 0.25 to 1. The FLOPs regularization punishes the situation where the predictor selects a tail with high computational cost. Therefore, when α is large, the regularization has more impact on the total loss, and the learned predictor tends to let more images go through the short tail, which results in low accuracy and low FLOPs. When β is small, the regularization term increases and likewise leads to a decrease in both accuracy and FLOPs. We can clearly observe this trend in Figure 3 ###reference_###: setting a larger α and a smaller β generally yields a decrease in accuracy accompanied by a larger FLOPs reduction. By adjusting α and β, we can achieve different trade-offs for MT-ViT." |
| }, |
| { |
| "section_id": "5.3.4", |
| "parent_section_id": "5.3", |
| "section_name": "5.3.4 Throughput", |
| "text": "We choose several compared methods that have officially released models and compare their throughput on the ImageNet-1K validation set. The results are reported in Table 6 ###reference_###.\nIn our experiment, we report the dataset-wise throughput, measured on the ImageNet-1K validation set. The throughput is evaluated on a single 2080Ti GPU with a batch size of 512. From the table, we can observe that the throughput of MT-ViT is the best among all compared methods, with no degradation in accuracy." |
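Dataset-wise throughput as reported here reduces to images per wall-clock second. A minimal measurement sketch; the forward function below is a placeholder standing in for a batched model forward pass, and on a GPU one would additionally synchronize before reading the clock.

```python
import time

def measure_throughput(forward_fn, batch_size, n_batches):
    # Time n_batches forward passes and report images per second.
    start = time.perf_counter()
    for _ in range(n_batches):
        forward_fn()
    elapsed = time.perf_counter() - start
    return n_batches * batch_size / elapsed

# Placeholder "model": a no-op stands in for a batched forward pass.
ips = measure_throughput(lambda: None, batch_size=512, n_batches=10)
```

Because different images take different tails, MT-ViT's throughput only becomes meaningful when averaged over the whole validation set, as done in the table.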
| }, |
| { |
| "section_id": "5.3.5", |
| "parent_section_id": "5.3", |
| "section_name": "5.3.5 Number of Tokens for Different Tails", |
| "text": "Following DVT\u2019s setting (Wang et al., 2021b ###reference_b74###), the three tails in MT-ViT are set to output 7×7, 10×10 and 14×14 tokens, respectively. However, the optimal number of tokens for each tail remains to be explored. As a result, we further investigate how the number of tokens of each tail influences the performance of MT-ViT. Specifically, a new MT-ViT backbone (based on DeiT-Ti) with 4×4, 7×7 and 14×14 tokens is pre-trained to conduct the experiment. The result is provided in Table 7 ###reference_###. The MT-ViT backbone with 4×4, 7×7 and 14×14 tokens has a lower computational cost and a relatively lower accuracy. We also observe that MT-ViT with fewer tokens is inferior to MT-ViT with more tokens in both accuracy and FLOPs after the fine-tuning." |
| }, |
| { |
| "section_id": "5.3.6", |
| "parent_section_id": "5.3", |
| "section_name": "5.3.6 Compare with DVT and MiniViT", |
| "text": "DVT is an effective method for efficient inference in vision transformers. By adjusting the confidence threshold on the output logits, DVT can flexibly trade off accuracy against FLOPs.\nTo provide a thorough comparison with DVT, we draw a FLOPs-accuracy curve to compare the performance of both methods, each based on DeiT-S.\nFrom Figure 4 ###reference_###, we can observe that when running in a relatively low-FLOPs mode (less than 2.5G), MT-ViT achieves a higher accuracy than DVT. When running in a high-FLOPs mode, the accuracy of DVT tends to be higher than that of MT-ViT. This could be attributed to the large number of parameters in DVT, which enables a stronger backbone after pre-training. However, obtaining such a backbone also requires additional computational cost. First, the number of parameters in DVT is much larger than in MT-ViT. For DeiT-S, the number of DVT\u2019s parameters is 70.4M, around 3 times that of the vanilla DeiT-S (22M) and of MT-ViT (24.5M). This can require large memory during training and inference. Second, the training cost of DVT is much higher than that of MT-ViT: the training speed of DVT and MT-ViT is 751.86 img/s and 1864.87 img/s, respectively. Since both methods need to pre-train the backbone for 300 epochs, it is clear that DVT requires significantly more computational resources.\nBased on the results and analysis above, we consider MT-ViT a better choice than DVT in practice, especially when computational resources are limited.\n###table_8### ###figure_4### MiniViT (Zhang et al., 2022 ###reference_b85###) is a ViT method that can significantly reduce the model size. From Table 8 ###reference_###, we can observe that MiniViT reduces the parameter size of the model by 50%, yet there is no reduction in the computational cost. This is due to the design of MiniViT, which adopts a weight-sharing strategy to re-use some parameters. However, this changes neither the amount of computation nor the length of the input sequence; hence, it cannot accelerate the model for efficient inference. By contrast, the advantage of MT-ViT is that it can greatly reduce the computational cost of ViT, while the additional parameters introduced by the predictor and tails are not significant." |
| }, |
| { |
| "section_id": "5.3.7", |
| "parent_section_id": "5.3", |
| "section_name": "5.3.7 Predictor backbone", |
| "text": "MobileNet-v3-Small (Howard et al., 2019 ###reference_b37###) serves as the backbone of the tail predictor in this paper due to its low computational cost. However, a lower computational cost may also lead to weaker representation ability and wrong predictions. To find out how the network scale influences the performance of the tail predictor, we also use a relatively large backbone (i.e., a ResNet with four basic blocks). The results in Table 9 ###reference_### show that enlarging the predictor only slightly improves its predictions, while leading to a clear growth in FLOPs. FLOPs(G) denotes the summed FLOPs of both the tail predictor and the multi-tailed vision transformer backbone. Considering the trade-off between accuracy and FLOPs, we choose to use MobileNet as the backbone of the tail predictor.\n###figure_5### ###table_9###" |
| }, |
| { |
| "section_id": "5.3.8", |
| "parent_section_id": "5.3", |
| "section_name": "5.3.8 Visualization", |
| "text": "To better understand the role of the tail predictor, we visualize its decisions on ImageNet-1K, as shown in Figure 5 ###reference_###. If an image can be accurately classified by the ViT model, its cross-entropy loss will be negligible; otherwise, its cross-entropy loss will be relatively large. An \u2018easy\u2019 image can be accurately classified by all three tails, which means that no matter which tail the predictor chooses, the cross-entropy term will be negligible. In such a case, optimizing the total loss will only make the FLOPs regularization drop, which encourages the predictor to choose the short tail for \u2018easy\u2019 images. On the other hand, for a \u2018difficult\u2019 image that can only be accurately classified by the long tail, choosing the short or middle tail will result in a large cross-entropy loss. Therefore, minimizing the total loss will take minimizing the cross-entropy term as a priority. In such a case, choosing the long tail results in a smaller total loss, and the predictor will be encouraged to choose the long tail for \u2018difficult\u2019 images.\nTo verify this idea, we use the trained tail predictor in MT-ViT(A*) (with DeiT-S as the backbone) to make predictions. The samples are chosen from four ImageNet-1K categories, \u2018Soccer\u2019, \u2018Pineapple\u2019, \u2018Car\u2019 and \u2018Castle\u2019, to illustrate how the tail predictor interprets instance difficulty.\nIntuitively, an image with clear and large objects can be identified as an \u2018easy\u2019 image, which is also related to a relatively high prediction confidence of the model. An \u2018easy\u2019 image is also simple for humans to recognize correctly. We hypothesize that the decisions of the tail predictor basically follow human visual judgment. From Figure 5 ###reference_###, we can easily observe that the images assigned to the short and middle tails are relatively easy to identify, since they often contain a single frontal-view object located in the center of the image. However, the objects in \u2018difficult\u2019 images assigned to the long tail are often blurry and irregular in shape. This confirms our motivation that the tail predictor can measure instance difficulty. The \u201csorting\u201d into easy or hard images falls out automatically; it is learned by MT-ViT." |
| }, |
| { |
| "section_id": "5.4", |
| "parent_section_id": "5", |
| "section_name": "Experiments on Object Detection", |
| "text": "We conducted additional experiments on the COCO 2017 dataset (Lin et al., 2014 ###reference_b51###) and applied MT-ViT to the object detection task, which enables us to test our method\u2019s generalization ability.\nThe COCO dataset is a comprehensive collection of image data that facilitates the identification, segmentation, key-point detection, and captioning of objects. This dataset comprises 328,000 images, containing 250,000 person instances labeled with keypoints, bounding boxes, and per-instance segmentation masks. The dataset also features 91 object categories, and detecting many of these objects relies heavily on contextual information.\nTo incorporate our proposed method into the detection task, we use the multi-tailed vision transformer backbone as the feature extractor, while Mask R-CNN (He et al., 2017a ###reference_b32###) is used as the detector. To generate multi-scale features for the Mask R-CNN detector, we follow the idea of (Li et al., 2022 ###reference_b48###) and use only the feature map of the last transformer layer to generate multi-scale features.\nIn the object detection task, we adopt only two tails (i.e., ST and LT) in MT-ViT. The image size is set to 512×512. The patch size is set to 32 and 16 in ST and LT, which results in 256 and 1024 tokens, respectively. Following common practice, we use the multi-tailed vision transformer pre-trained on the ImageNet-1K classification task to initialize the backbone. We train the model for 25 epochs on 4 RTX 4090 GPUs with a batch size of 32. Our implementation and training hyper-parameters are based on this repo111https://github.com/ViTAE-Transformer/ViTDet. The FLOPs of both ST and LT in detection is measured by the official tool in mm-detection222https://github.com/open-mmlab/mmdetection/. The results are shown in Table 10 ###reference_###.\n###table_10### For the object detection task, we observe that using MT-ViT reduces the FLOPs by 15.5% with only a slight degradation (-0.2) in AP. These experiments on COCO 2017 and dense prediction tasks provide further evidence of the robustness and versatility of our approach, demonstrating its ability to generalize across diverse datasets and tasks." |
| }, |
| { |
| "section_id": "6", |
| "parent_section_id": null, |
| "section_name": "Conclusion", |
| "text": "In this paper, we propose an efficient vision transformer called Multi-Tailed Vision Transformer (MT-ViT), which reduces the number of tokens in the tail of the vision transformer. MT-ViT adopts multiple tails to split images into sequences of varying lengths, each of which results in a different computational cost and accuracy during inference. Images are conditionally sent to different tails by a tail predictor that determines which tail is appropriate for each image. During training, the Gumbel-Softmax trick ensures that both modules can be optimized in an end-to-end fashion. The empirical results demonstrate that MT-ViT outperforms the baseline and other comparison methods on small-scale datasets (i.e., CIFAR100 and TinyImageNet) and the large-scale ImageNet-1K dataset. The visualized results of the tail predictor also confirm our motivation that the tail predictor can automatically gauge instance difficulty." |
| } |
| ], |
| "appendix": [], |
| "tables": { |
| "1": { |
| "table_html": "<figure class=\"ltx_table\" id=\"S5.T1\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 1: </span>Detailed information of datasets used for training.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"S5.T1.2\">\n<tr class=\"ltx_tr\" id=\"S5.T1.2.3\" style=\"background-color:#E6E6E6;\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r ltx_border_t\" id=\"S5.T1.2.3.1\" style=\"padding-left:4.3pt;padding-right:4.3pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.2.3.1.1\" style=\"background-color:#E6E6E6;\">Dataset</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T1.2.3.2\" style=\"padding-left:4.3pt;padding-right:4.3pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.2.3.2.1\" style=\"background-color:#E6E6E6;\">Train size</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T1.2.3.3\" style=\"padding-left:4.3pt;padding-right:4.3pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.2.3.3.1\" style=\"background-color:#E6E6E6;\">Val size</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T1.2.3.4\" style=\"padding-left:4.3pt;padding-right:4.3pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.2.3.4.1\" style=\"background-color:#E6E6E6;\">Classes</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T1.2.3.5\" style=\"padding-left:4.3pt;padding-right:4.3pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T1.2.3.5.1\" style=\"background-color:#E6E6E6;\">Size</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.1.1\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r ltx_border_tt\" id=\"S5.T1.1.1.2\" style=\"padding-left:4.3pt;padding-right:4.3pt;\">CIFAR100</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S5.T1.1.1.3\" 
style=\"padding-left:4.3pt;padding-right:4.3pt;\">50,000</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S5.T1.1.1.4\" style=\"padding-left:4.3pt;padding-right:4.3pt;\">10,000</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S5.T1.1.1.5\" style=\"padding-left:4.3pt;padding-right:4.3pt;\">100</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S5.T1.1.1.1\" style=\"padding-left:4.3pt;padding-right:4.3pt;\">3232</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.2.2\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r\" id=\"S5.T1.2.2.2\" style=\"padding-left:4.3pt;padding-right:4.3pt;\">TinyImageNet</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T1.2.2.3\" style=\"padding-left:4.3pt;padding-right:4.3pt;\">100,000</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T1.2.2.4\" style=\"padding-left:4.3pt;padding-right:4.3pt;\">10,000</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T1.2.2.5\" style=\"padding-left:4.3pt;padding-right:4.3pt;\">100</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T1.2.2.1\" style=\"padding-left:4.3pt;padding-right:4.3pt;\">6464</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T1.2.4\">\n<td class=\"ltx_td ltx_align_left ltx_border_b ltx_border_l ltx_border_r\" id=\"S5.T1.2.4.1\" style=\"padding-left:4.3pt;padding-right:4.3pt;\">ImageNet</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S5.T1.2.4.2\" style=\"padding-left:4.3pt;padding-right:4.3pt;\">1,281,167</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S5.T1.2.4.3\" style=\"padding-left:4.3pt;padding-right:4.3pt;\">50,000</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S5.T1.2.4.4\" style=\"padding-left:4.3pt;padding-right:4.3pt;\">1,000</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S5.T1.2.4.5\" 
style=\"padding-left:4.3pt;padding-right:4.3pt;\">N/A</td>\n</tr>\n</table>\n</figure>", |
| "capture": "Table 1: Detailed information of datasets used for training." |
| }, |
| "2": { |
| "table_html": "<figure class=\"ltx_table\" id=\"S5.T2\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 2: </span>The Design and FLOPs of each tail in MT-ViT.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"S5.T2.1\">\n<tr class=\"ltx_tr\" id=\"S5.T2.1.1\">\n<td class=\"ltx_td ltx_border_l ltx_border_r ltx_border_t\" id=\"S5.T2.1.1.1\" style=\"padding-left:5.1pt;padding-right:5.1pt;\"></td>\n<td class=\"ltx_td ltx_border_r ltx_border_t\" id=\"S5.T2.1.1.2\" style=\"padding-left:5.1pt;padding-right:5.1pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" colspan=\"3\" id=\"S5.T2.1.1.3\" style=\"padding-left:5.1pt;padding-right:5.1pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.1.1.3.1\" style=\"background-color:#E6E6E6;\">FLOPs(G)<span class=\"ltx_text ltx_font_medium\" id=\"S5.T2.1.1.3.1.1\"></span></span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.1.2\" style=\"background-color:#E6E6E6;\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r\" id=\"S5.T2.1.2.1\" style=\"padding-left:5.1pt;padding-right:5.1pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.1.2.1.1\" style=\"position:relative; bottom:5.6pt;background-color:#E6E6E6;\">Backbone</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T2.1.2.2\" style=\"padding-left:5.1pt;padding-right:5.1pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.1.2.2.1\" style=\"position:relative; bottom:5.6pt;background-color:#E6E6E6;\">Patch Embedding</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T2.1.2.3\" style=\"padding-left:5.1pt;padding-right:5.1pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.1.2.3.1\" style=\"background-color:#E6E6E6;\">ST</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T2.1.2.4\" style=\"padding-left:5.1pt;padding-right:5.1pt;\"><span class=\"ltx_text ltx_font_bold\" 
id=\"S5.T2.1.2.4.1\" style=\"background-color:#E6E6E6;\">MT</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T2.1.2.5\" style=\"padding-left:5.1pt;padding-right:5.1pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T2.1.2.5.1\" style=\"background-color:#E6E6E6;\">LT</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.1.3\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_tt\" id=\"S5.T2.1.3.1\" style=\"padding-left:5.1pt;padding-right:5.1pt;\">T2T-ViT-7</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S5.T2.1.3.2\" style=\"padding-left:5.1pt;padding-right:5.1pt;\">T2T-module</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S5.T2.1.3.3\" style=\"padding-left:5.1pt;padding-right:5.1pt;\">0.3</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S5.T2.1.3.4\" style=\"padding-left:5.1pt;padding-right:5.1pt;\">0.55</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S5.T2.1.3.5\" style=\"padding-left:5.1pt;padding-right:5.1pt;\">1.1</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.1.4\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r\" id=\"S5.T2.1.4.1\" style=\"padding-left:5.1pt;padding-right:5.1pt;\">T2T-ViT-12</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T2.1.4.2\" style=\"padding-left:5.1pt;padding-right:5.1pt;\">T2T-module</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T2.1.4.3\" style=\"padding-left:5.1pt;padding-right:5.1pt;\">0.33</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T2.1.4.4\" style=\"padding-left:5.1pt;padding-right:5.1pt;\">0.7</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T2.1.4.5\" style=\"padding-left:5.1pt;padding-right:5.1pt;\">1.78</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.1.5\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r\" id=\"S5.T2.1.5.1\" 
style=\"padding-left:5.1pt;padding-right:5.1pt;\">DeiT-Ti</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T2.1.5.2\" style=\"padding-left:5.1pt;padding-right:5.1pt;\">Convolution</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T2.1.5.3\" style=\"padding-left:5.1pt;padding-right:5.1pt;\">0.31</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T2.1.5.4\" style=\"padding-left:5.1pt;padding-right:5.1pt;\">0.61</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T2.1.5.5\" style=\"padding-left:5.1pt;padding-right:5.1pt;\">1.25</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.1.6\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r\" id=\"S5.T2.1.6.1\" style=\"padding-left:5.1pt;padding-right:5.1pt;\">DeiT-S</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T2.1.6.2\" style=\"padding-left:5.1pt;padding-right:5.1pt;\">Convolution</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T2.1.6.3\" style=\"padding-left:5.1pt;padding-right:5.1pt;\">1.14</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T2.1.6.4\" style=\"padding-left:5.1pt;padding-right:5.1pt;\">2.3</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T2.1.6.5\" style=\"padding-left:5.1pt;padding-right:5.1pt;\">4.6</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T2.1.7\">\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_l ltx_border_r\" id=\"S5.T2.1.7.1\" style=\"padding-left:5.1pt;padding-right:5.1pt;\">DeiT-B</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S5.T2.1.7.2\" style=\"padding-left:5.1pt;padding-right:5.1pt;\">Convolution</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S5.T2.1.7.3\" style=\"padding-left:5.1pt;padding-right:5.1pt;\">4.41</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S5.T2.1.7.4\" style=\"padding-left:5.1pt;padding-right:5.1pt;\">8.9</td>\n<td class=\"ltx_td ltx_align_center 
ltx_border_b ltx_border_r\" id=\"S5.T2.1.7.5\" style=\"padding-left:5.1pt;padding-right:5.1pt;\">17.6</td>\n</tr>\n</table>\n</figure>", |
| "capture": "Table 2: The Design and FLOPs of each tail in MT-ViT." |
| }, |
| "3": { |
| "table_html": "<figure class=\"ltx_table\" id=\"S5.T3\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 3: </span>The performance of MT-ViT on CIFAR100 and TinyImageNet. The two hyperparameters are set to 0.25, 0.75 for MT-ViT(A*), and 1, 0.25 for MT-ViT(S*).</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"S5.T3.5\">\n<tr class=\"ltx_tr\" id=\"S5.T3.5.1\">\n<td class=\"ltx_td ltx_border_l ltx_border_r ltx_border_t\" id=\"S5.T3.5.1.1\"></td>\n<td class=\"ltx_td ltx_border_r ltx_border_t\" id=\"S5.T3.5.1.2\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" colspan=\"2\" id=\"S5.T3.5.1.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.5.1.3.1\" style=\"background-color:#E6E6E6;\">CIFAR100<span class=\"ltx_text ltx_font_medium\" id=\"S5.T3.5.1.3.1.1\"></span></span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" colspan=\"2\" id=\"S5.T3.5.1.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.5.1.4.1\" style=\"background-color:#E6E6E6;\">TinyImageNet<span class=\"ltx_text ltx_font_medium\" id=\"S5.T3.5.1.4.1.1\"></span></span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.5.2\" style=\"background-color:#E6E6E6;\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r\" id=\"S5.T3.5.2.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.5.2.1.1\" style=\"position:relative; bottom:5.6pt;background-color:#E6E6E6;\">Backbone</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T3.5.2.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.5.2.2.1\" style=\"position:relative; bottom:5.6pt;background-color:#E6E6E6;\">Method</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T3.5.2.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.5.2.3.1\" style=\"background-color:#E6E6E6;\">Top-1 Acc.(%)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T3.5.2.4\"><span class=\"ltx_text 
ltx_font_bold\" id=\"S5.T3.5.2.4.1\" style=\"background-color:#E6E6E6;\">FLOPs(G)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T3.5.2.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.5.2.5.1\" style=\"background-color:#E6E6E6;\">Top-1 Acc.(%)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T3.5.2.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T3.5.2.6.1\" style=\"background-color:#E6E6E6;\">FLOPs(G)</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.5.3\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_tt\" id=\"S5.T3.5.3.1\" rowspan=\"3\"><span class=\"ltx_text\" id=\"S5.T3.5.3.1.1\">T2T-ViT-7</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S5.T3.5.3.2\">MT-ViT(A*)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S5.T3.5.3.3\">82.8(+1.1)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S5.T3.5.3.4\">0.71(-35.2%)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S5.T3.5.3.5\">86.6(+1.6)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S5.T3.5.3.6\">0.93(-15.9%)</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.5.4\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T3.5.4.1\">MT-ViT(S*)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T3.5.4.2\">81.8(+0.1)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T3.5.4.3\">0.49(-55.9%)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T3.5.4.4\">85.1(+0.1)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T3.5.4.5\">0.57(-48.2%)</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.5.5\">\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T3.5.5.1\">Baseline</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T3.5.5.2\">81.7</td>\n<td class=\"ltx_td 
ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T3.5.5.3\">1.1</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T3.5.5.4\">85.0</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T3.5.5.5\">1.1</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.5.6\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_tt\" id=\"S5.T3.5.6.1\" rowspan=\"3\"><span class=\"ltx_text\" id=\"S5.T3.5.6.1.1\">T2T-ViT-12</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S5.T3.5.6.2\">MT-ViT(A*)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S5.T3.5.6.3\">85.7(+2.2)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S5.T3.5.6.4\">1.17(-34.2%)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S5.T3.5.6.5\">90.0(+1.5)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S5.T3.5.6.6\">1.41(-21.0%)</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.5.7\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T3.5.7.1\">MT-ViT(S*)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T3.5.7.2\">84.0(+0.5)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T3.5.7.3\">0.53(-70.4%)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T3.5.7.4\">88.6(+0.1)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T3.5.7.5\">0.85(-58.5%)</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.5.8\">\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T3.5.8.1\">Baseline</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T3.5.8.2\">83.5</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T3.5.8.3\">1.8</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T3.5.8.4\">88.5</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" 
id=\"S5.T3.5.8.5\">1.8</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.5.9\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_tt\" id=\"S5.T3.5.9.1\" rowspan=\"3\"><span class=\"ltx_text\" id=\"S5.T3.5.9.1.1\">DeiT-Ti</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S5.T3.5.9.2\">MT-ViT(A*)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S5.T3.5.9.3\">84.9(+2.0)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S5.T3.5.9.4\">0.78(-37.2%)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S5.T3.5.9.5\">88.8(+2.1)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S5.T3.5.9.6\">0.99(-20.86%)</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.5.10\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T3.5.10.1\">MT-ViT(S*)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T3.5.10.2\">83.4(+0.5)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T3.5.10.3\">0.47(-62.0%)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T3.5.10.4\">86.7(+0.0)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T3.5.10.5\">0.47(-64.0%)</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.5.11\">\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T3.5.11.1\">Baseline</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T3.5.11.2\">82.9</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T3.5.11.3\">1.3</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T3.5.11.4\">86.7</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T3.5.11.5\">1.3</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.5.12\">\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_l ltx_border_r ltx_border_tt\" id=\"S5.T3.5.12.1\" rowspan=\"3\"><span class=\"ltx_text\" 
id=\"S5.T3.5.12.1.1\">DeiT-S</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S5.T3.5.12.2\">MT-ViT(A*)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S5.T3.5.12.3\">87.9(+0.9)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S5.T3.5.12.4\">2.59(-43.6%)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S5.T3.5.12.5\">94.3(+1.4)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S5.T3.5.12.6\">3.18(-31.0%)</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.5.13\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T3.5.13.1\">MT-ViT(S*)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T3.5.13.2\">87.0(+0.0)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T3.5.13.3\">1.2(-72.0%)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T3.5.13.4\">93.5(+0.6)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T3.5.13.5\">2.21(-52.0%)</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T3.5.14\">\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S5.T3.5.14.1\">Baseline</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S5.T3.5.14.2\">87.0</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S5.T3.5.14.3\">4.6</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S5.T3.5.14.4\">92.9</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S5.T3.5.14.5\">4.6</td>\n</tr>\n</table>\n</figure>", |
| "capture": "Table 3: The performance of MT-ViT on CIFAR100 and TinyImageNet. The two hyperparameters are set to 0.25, 0.75 for MT-ViT(A*), and 1, 0.25 for MT-ViT(S*)." |
| }, |
| "4": { |
| "table_html": "<figure class=\"ltx_table\" id=\"S5.T4\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 4: </span>Main results of MT-ViT and other compared methods with three different ViT backbones on ImageNet-1K benchmark.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"S5.T4.10\">\n<tr class=\"ltx_tr\" id=\"S5.T4.1.1\" style=\"background-color:#E6E6E6;\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S5.T4.1.1.2\" style=\"padding-left:1.7pt;padding-right:1.7pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T4.1.1.2.1\" style=\"background-color:#E6E6E6;\">Backbone</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S5.T4.1.1.3\" style=\"padding-left:1.7pt;padding-right:1.7pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T4.1.1.3.1\" style=\"background-color:#E6E6E6;\">Method</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T4.1.1.4\" style=\"padding-left:1.7pt;padding-right:1.7pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T4.1.1.4.1\" style=\"background-color:#E6E6E6;\">Top-1 Acc.(%)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T4.1.1.5\" style=\"padding-left:1.7pt;padding-right:1.7pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T4.1.1.5.1\" style=\"background-color:#E6E6E6;\">Top-5 Acc.(%)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T4.1.1.6\" style=\"padding-left:1.7pt;padding-right:1.7pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T4.1.1.6.1\" style=\"background-color:#E6E6E6;\">FLOPs(G)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T4.1.1.1\" style=\"padding-left:1.7pt;padding-right:1.7pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T4.1.1.1.1\" style=\"background-color:#E6E6E6;\">FLOPs(%)</span></td>\n</tr>\n<tr class=\"ltx_tr\" 
id=\"S5.T4.10.11\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_tt\" id=\"S5.T4.10.11.1\" rowspan=\"10\" style=\"padding-left:1.7pt;padding-right:1.7pt;\"><span class=\"ltx_text\" id=\"S5.T4.10.11.1.1\">DeiT-Ti</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_tt\" id=\"S5.T4.10.11.2\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">Baseline</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S5.T4.10.11.3\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">72.2</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S5.T4.10.11.4\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">91.1</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S5.T4.10.11.5\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">1.3</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S5.T4.10.11.6\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">0</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T4.10.12\">\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T4.10.12.1\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">SCOP <cite class=\"ltx_cite ltx_citemacro_citep\">(Tang et\u00a0al., <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2203.01587v3#bib.bib70\" title=\"\">2020</a>)</cite> (NeurIPS, 2020)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T4.10.12.2\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">68.9(-3.3)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T4.10.12.3\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">89.0(-2.1)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T4.10.12.4\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">0.8</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T4.10.12.5\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">-38.5</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T4.10.13\">\n<td class=\"ltx_td ltx_align_left ltx_border_r\" 
id=\"S5.T4.10.13.1\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">PoWER <cite class=\"ltx_cite ltx_citemacro_citep\">(Goyal et\u00a0al., <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2203.01587v3#bib.bib25\" title=\"\">2020</a>)</cite> (ICML, 2020)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T4.10.13.2\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">69.4(-2.8)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T4.10.13.3\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">89.2(-1.9)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T4.10.13.4\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">0.8</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T4.10.13.5\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">-38.5</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T4.10.14\">\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T4.10.14.1\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">HVT-Ti <cite class=\"ltx_cite ltx_citemacro_citep\">(Pan et\u00a0al., <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2203.01587v3#bib.bib60\" title=\"\">2021b</a>)</cite> (ICCV, 2021)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T4.10.14.2\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">69.6(-2.6)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T4.10.14.3\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">89.4(-1.7)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T4.10.14.4\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">0.7</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T4.10.14.5\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">-46.2</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T4.10.15\">\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T4.10.15.1\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">SViTE <cite class=\"ltx_cite ltx_citemacro_citep\">(Chen et\u00a0al., <a class=\"ltx_ref\" 
href=\"https://arxiv.org/html/2203.01587v3#bib.bib8\" title=\"\">2021c</a>)</cite> (NeurIPS, 2021)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T4.10.15.2\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">71.8(-0.4)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T4.10.15.3\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">90.8(-0.3)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T4.10.15.4\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">1.0</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T4.10.15.5\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">-23.1</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T4.2.2\">\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T4.2.2.1\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">DynamicViT-/0.7 <cite class=\"ltx_cite ltx_citemacro_citep\">(Rao et\u00a0al., <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2203.01587v3#bib.bib63\" title=\"\">2021</a>)</cite> (NeurIPS, 2021)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T4.2.2.2\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">71.0(-1.2)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T4.2.2.3\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">90.4(-0.7)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T4.2.2.4\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">0.8</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T4.2.2.5\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">-38.5</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T4.3.3\">\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T4.3.3.1\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">DynamicViT-/0.9 <cite class=\"ltx_cite ltx_citemacro_citep\">(Rao et\u00a0al., <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2203.01587v3#bib.bib63\" title=\"\">2021</a>)</cite> (NeurIPS, 2021)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T4.3.3.2\" 
style=\"padding-left:1.7pt;padding-right:1.7pt;\">72.3(+0.1)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T4.3.3.3\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">91.2(+0.1)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T4.3.3.4\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">1.0</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T4.3.3.5\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">-23.1</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T4.10.16\">\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T4.10.16.1\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">Evo-ViT <cite class=\"ltx_cite ltx_citemacro_citep\">(Xu et\u00a0al., <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2203.01587v3#bib.bib80\" title=\"\">2022b</a>)</cite> (AAAI, 2022)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T4.10.16.2\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">72.0(-0.2)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T4.10.16.3\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">91.0(-0.1)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T4.10.16.4\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">0.8</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T4.10.16.5\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">-38.5</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T4.10.17\">\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T4.10.17.1\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">PS-ViT <cite class=\"ltx_cite ltx_citemacro_citep\">(Tang et\u00a0al., <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2203.01587v3#bib.bib69\" title=\"\">2022</a>)</cite> (CVPR, 2022)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T4.10.17.2\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">72.0(-0.2)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T4.10.17.3\" 
style=\"padding-left:1.7pt;padding-right:1.7pt;\">91.0(-0.1)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T4.10.17.4\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">0.7</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T4.10.17.5\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">-46.2</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T4.10.18\">\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T4.10.18.1\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">A-ViT <cite class=\"ltx_cite ltx_citemacro_citep\">(Yin et\u00a0al., <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2203.01587v3#bib.bib82\" title=\"\">2022</a>)</cite> (CVPR, 2022)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T4.10.18.2\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">71.0(-1.2)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T4.10.18.3\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">90.4(-0.7)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T4.10.18.4\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">0.8</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T4.10.18.5\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">-38.5</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T4.10.19\">\n<td class=\"ltx_td ltx_border_l ltx_border_r\" id=\"S5.T4.10.19.1\" style=\"padding-left:1.7pt;padding-right:1.7pt;\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S5.T4.10.19.2\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">MT-ViT (Ours)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T4.10.19.3\" style=\"padding-left:1.7pt;padding-right:1.7pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T4.10.19.3.1\">72.9(+0.7)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T4.10.19.4\" style=\"padding-left:1.7pt;padding-right:1.7pt;\"><span class=\"ltx_text ltx_font_bold\" 
id=\"S5.T4.10.19.4.1\">91.3(+0.2)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T4.10.19.5\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">0.8</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T4.10.19.6\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">-38.5</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T4.10.20\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S5.T4.10.20.1\" rowspan=\"13\" style=\"padding-left:1.7pt;padding-right:1.7pt;\"><span class=\"ltx_text\" id=\"S5.T4.10.20.1.1\">DeiT-S</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S5.T4.10.20.2\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">Baseline</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T4.10.20.3\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">79.8</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T4.10.20.4\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">95.0</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T4.10.20.5\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">4.6</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T4.10.20.6\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">0</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T4.10.21\">\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T4.10.21.1\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">SCOP <cite class=\"ltx_cite ltx_citemacro_citep\">(Tang et\u00a0al., <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2203.01587v3#bib.bib70\" title=\"\">2020</a>)</cite> (NeurIPS, 2020)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T4.10.21.2\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">77.5(-2.3)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T4.10.21.3\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">93.5(-1.5)</td>\n<td 
class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T4.10.21.4\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">2.6</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T4.10.21.5\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">-43.6</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T4.10.22\">\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T4.10.22.1\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">PoWER <cite class=\"ltx_cite ltx_citemacro_citep\">(Goyal et\u00a0al., <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2203.01587v3#bib.bib25\" title=\"\">2020</a>)</cite> (ICML, 2020)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T4.10.22.2\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">78.3(-1.5)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T4.10.22.3\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">94.0(-1.0)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T4.10.22.4\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">2.7</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T4.10.22.5\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">-41.3</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T4.10.23\">\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T4.10.23.1\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">HVT-S <cite class=\"ltx_cite ltx_citemacro_citep\">(Pan et\u00a0al., <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2203.01587v3#bib.bib60\" title=\"\">2021b</a>)</cite> (ICCV, 2021)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T4.10.23.2\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">78.0(-1.8)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T4.10.23.3\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">93.8(-1.2)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T4.10.23.4\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">2.4</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" 
id=\"S5.T4.10.23.5\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">-47.8</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T4.10.24\">\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T4.10.24.1\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">SViTE <cite class=\"ltx_cite ltx_citemacro_citep\">(Chen et\u00a0al., <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2203.01587v3#bib.bib8\" title=\"\">2021c</a>)</cite> (NeurIPS, 2021)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T4.10.24.2\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">79.2(-0.6)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T4.10.24.3\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">94.5(-0.5)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T4.10.24.4\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">3.0</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T4.10.24.5\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">-34.8</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T4.4.4\">\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T4.4.4.1\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">IA-RED <cite class=\"ltx_cite ltx_citemacro_citep\">(Pan et\u00a0al., <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2203.01587v3#bib.bib59\" title=\"\">2021a</a>)</cite> (NeurIPS, 2021)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T4.4.4.2\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">79.1(-0.7)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T4.4.4.3\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">94.5(-0.5)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T4.4.4.4\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">3.2</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T4.4.4.5\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">-30.4</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T4.5.5\">\n<td class=\"ltx_td ltx_align_left ltx_border_r\" 
id=\"S5.T4.5.5.1\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">DynamicViT-/0.7 <cite class=\"ltx_cite ltx_citemacro_citep\">(Rao et\u00a0al., <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2203.01587v3#bib.bib63\" title=\"\">2021</a>)</cite> (NeurIPS, 2021)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T4.5.5.2\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">79.3(-0.5)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T4.5.5.3\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">94.7(-0.3)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T4.5.5.4\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">2.9</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T4.5.5.5\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">-37.0</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T4.6.6\">\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T4.6.6.1\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">DynamicViT-/0.9 <cite class=\"ltx_cite ltx_citemacro_citep\">(Rao et\u00a0al., <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2203.01587v3#bib.bib63\" title=\"\">2021</a>)</cite> (NeurIPS, 2021)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T4.6.6.2\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">79.8(-0.0)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T4.6.6.3\" style=\"padding-left:1.7pt;padding-right:1.7pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T4.6.6.3.1\">94.9(-0.1)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T4.6.6.4\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">4.0</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T4.6.6.5\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">-13.0</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T4.10.25\">\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T4.10.25.1\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">Evo-ViT <cite class=\"ltx_cite 
ltx_citemacro_citep\">(Xu et\u00a0al., <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2203.01587v3#bib.bib80\" title=\"\">2022b</a>)</cite> (AAAI, 2022)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T4.10.25.2\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">79.4(-0.4)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T4.10.25.3\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">94.8(-0.2)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T4.10.25.4\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">3.0</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T4.10.25.5\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">-34.8</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T4.10.26\">\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T4.10.26.1\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">PS-ViT <cite class=\"ltx_cite ltx_citemacro_citep\">(Tang et\u00a0al., <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2203.01587v3#bib.bib69\" title=\"\">2022</a>)</cite> (CVPR, 2022)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T4.10.26.2\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">79.4(-0.4)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T4.10.26.3\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">94.7(-0.3)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T4.10.26.4\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">2.6</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T4.10.26.5\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">-43.5</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T4.10.27\">\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T4.10.27.1\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">EViT <cite class=\"ltx_cite ltx_citemacro_citep\">(Liang et\u00a0al., <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2203.01587v3#bib.bib49\" title=\"\">2021</a>)</cite> (ICLR, 2022)</td>\n<td class=\"ltx_td 
ltx_align_center ltx_border_r\" id=\"S5.T4.10.27.2\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">79.5(-0.3)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T4.10.27.3\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">94.8(-0.2)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T4.10.27.4\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">3.0</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T4.10.27.5\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">-34.8</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T4.10.28\">\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T4.10.28.1\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">ATS <cite class=\"ltx_cite ltx_citemacro_citep\">(Fayyaz et\u00a0al., <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2203.01587v3#bib.bib21\" title=\"\">2022</a>)</cite> (ECCV, 2022)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T4.10.28.2\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">79.7(-0.2)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T4.10.28.3\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">94.9(-0.1)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T4.10.28.4\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">2.9</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T4.10.28.5\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">-37.0</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T4.10.29\">\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S5.T4.10.29.1\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">MT-ViT(S*) (Ours)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T4.10.29.2\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">79.5(-0.3)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T4.10.29.3\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">94.4(-0.6)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r 
ltx_border_t\" id=\"S5.T4.10.29.4\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">2.5</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T4.10.29.5\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">-45.7</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T4.10.30\">\n<td class=\"ltx_td ltx_border_l ltx_border_r\" id=\"S5.T4.10.30.1\" style=\"padding-left:1.7pt;padding-right:1.7pt;\"></td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T4.10.30.2\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">MT-ViT(A*) (Ours)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T4.10.30.3\" style=\"padding-left:1.7pt;padding-right:1.7pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T4.10.30.3.1\">80.3(+0.5)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T4.10.30.4\" style=\"padding-left:1.7pt;padding-right:1.7pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T4.10.30.4.1\">94.9(-0.1)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T4.10.30.5\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">3.5</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T4.10.30.6\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">-23.9</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T4.10.31\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S5.T4.10.31.1\" rowspan=\"6\" style=\"padding-left:1.7pt;padding-right:1.7pt;\"><span class=\"ltx_text\" id=\"S5.T4.10.31.1.1\">DeiT-B</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S5.T4.10.31.2\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">Baseline</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T4.10.31.3\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">81.8</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T4.10.31.4\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">95.6</td>\n<td class=\"ltx_td ltx_align_center 
ltx_border_r ltx_border_t\" id=\"S5.T4.10.31.5\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">17.6</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T4.10.31.6\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">0</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T4.10.32\">\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T4.10.32.1\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">SCOP <cite class=\"ltx_cite ltx_citemacro_citep\">(Tang et\u00a0al., <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2203.01587v3#bib.bib70\" title=\"\">2020</a>)</cite> (NeurIPS, 2020)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T4.10.32.2\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">79.7(-2.1)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T4.10.32.3\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">94.5(-1.1)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T4.10.32.4\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">10.2</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T4.10.32.5\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">-42.0</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T4.10.33\">\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T4.10.33.1\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">PoWER <cite class=\"ltx_cite ltx_citemacro_citep\">(Goyal et\u00a0al., <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2203.01587v3#bib.bib25\" title=\"\">2020</a>)</cite> (ICML, 2020)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T4.10.33.2\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">80.1(-1.7)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T4.10.33.3\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">94.6(-1.0)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T4.10.33.4\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">10.4</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" 
id=\"S5.T4.10.33.5\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">-39.2</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T4.7.7\">\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T4.7.7.1\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">DynamicViT-/0.7 <cite class=\"ltx_cite ltx_citemacro_citep\">(Rao et\u00a0al., <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2203.01587v3#bib.bib63\" title=\"\">2021</a>)</cite> (NeurIPS, 2021)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T4.7.7.2\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">81.3(-0.5)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T4.7.7.3\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">95.3(-0.3)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T4.7.7.4\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">11.5</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T4.7.7.5\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">-35.0</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T4.8.8\">\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T4.8.8.1\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">DynamicViT-/0.9 <cite class=\"ltx_cite ltx_citemacro_citep\">(Rao et\u00a0al., <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2203.01587v3#bib.bib63\" title=\"\">2021</a>)</cite> (NeurIPS, 2021)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T4.8.8.2\" style=\"padding-left:1.7pt;padding-right:1.7pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T4.8.8.2.1\">81.8(-0.0)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T4.8.8.3\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">95.5(-0.1)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T4.8.8.4\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">15.5</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T4.8.8.5\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">-14.0</td>\n</tr>\n<tr class=\"ltx_tr\" 
id=\"S5.T4.10.34\">\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S5.T4.10.34.1\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">MT-ViT(Ours)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T4.10.34.2\" style=\"padding-left:1.7pt;padding-right:1.7pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T4.10.34.2.1\">81.8(-0.0)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T4.10.34.3\" style=\"padding-left:1.7pt;padding-right:1.7pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T4.10.34.3.1\">95.6(-0.0)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T4.10.34.4\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">13.9</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T4.10.34.5\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">-21.0</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T4.10.35\">\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_l ltx_border_r ltx_border_t\" id=\"S5.T4.10.35.1\" rowspan=\"6\" style=\"padding-left:1.7pt;padding-right:1.7pt;\"><span class=\"ltx_text\" id=\"S5.T4.10.35.1.1\">T2T-ViT-12</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S5.T4.10.35.2\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">Baseline</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T4.10.35.3\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">76.5</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T4.10.35.4\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">93.5</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T4.10.35.5\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">1.8</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T4.10.35.6\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">0</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T4.10.36\">\n<td class=\"ltx_td 
ltx_align_left ltx_border_r\" id=\"S5.T4.10.36.1\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">PoWER <cite class=\"ltx_cite ltx_citemacro_citep\">(Goyal et\u00a0al., <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2203.01587v3#bib.bib25\" title=\"\">2020</a>)</cite> (ICML, 2020)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T4.10.36.2\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">74.5(-2.0)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T4.10.36.3\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">92.6(-0.9)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T4.10.36.4\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">1.2</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T4.10.36.5\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">-32.6</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T4.9.9\">\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T4.9.9.1\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">DynamicViT-/0.7 <cite class=\"ltx_cite ltx_citemacro_citep\">(Rao et\u00a0al., <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2203.01587v3#bib.bib63\" title=\"\">2021</a>)</cite> (NeurIPS, 2021)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T4.9.9.2\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">76.1(-0.4)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T4.9.9.3\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">93.1(-0.4)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T4.9.9.4\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">1.2</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T4.9.9.5\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">-32.6</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T4.10.10\">\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S5.T4.10.10.1\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">DynamicViT-/0.9 <cite class=\"ltx_cite ltx_citemacro_citep\">(Rao et\u00a0al., <a 
class=\"ltx_ref\" href=\"https://arxiv.org/html/2203.01587v3#bib.bib63\" title=\"\">2021</a>)</cite> (NeurIPS, 2021)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T4.10.10.2\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">76.8(+0.3)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T4.10.10.3\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">93.5(-0.0)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T4.10.10.4\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">1.5</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T4.10.10.5\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">-16.7</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T4.10.37\">\n<td class=\"ltx_td ltx_align_left ltx_border_b ltx_border_r ltx_border_t\" id=\"S5.T4.10.37.1\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">MT-ViT (Ours)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S5.T4.10.37.2\" style=\"padding-left:1.7pt;padding-right:1.7pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T4.10.37.2.1\">77.2(+0.7)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S5.T4.10.37.3\" style=\"padding-left:1.7pt;padding-right:1.7pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T4.10.37.3.1\">93.7(+0.2)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S5.T4.10.37.4\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">1.5</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S5.T4.10.37.5\" style=\"padding-left:1.7pt;padding-right:1.7pt;\">-16.7</td>\n</tr>\n</table>\n</figure>", |
| "capture": "Table 4: Main results of MT-ViT and other compared methods with three different ViT backbones on ImageNet-1K benchmark." |
| }, |
| "5": { |
| "table_html": "<figure class=\"ltx_table\" id=\"S5.T5\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 5: </span>Performance of MT-ViT when equipped with DynamicViT.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"S5.T5.1\">\n<tr class=\"ltx_tr\" id=\"S5.T5.1.1\" style=\"background-color:#E6E6E6;\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r ltx_border_t\" id=\"S5.T5.1.1.1\" style=\"padding-left:8.0pt;padding-right:8.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T5.1.1.1.1\" style=\"background-color:#E6E6E6;\">Method</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S5.T5.1.1.2\" style=\"padding-left:8.0pt;padding-right:8.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T5.1.1.2.1\" style=\"background-color:#E6E6E6;\">Top-1 Acc.</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S5.T5.1.1.3\" style=\"padding-left:8.0pt;padding-right:8.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T5.1.1.3.1\" style=\"background-color:#E6E6E6;\">Top-5 Acc.</span></td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S5.T5.1.1.4\" style=\"padding-left:8.0pt;padding-right:8.0pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T5.1.1.4.1\" style=\"background-color:#E6E6E6;\">FLOPs</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T5.1.2\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r ltx_border_tt\" id=\"S5.T5.1.2.1\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">DynamicViT</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_tt\" id=\"S5.T5.1.2.2\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">79.3%</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_tt\" id=\"S5.T5.1.2.3\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">94.7%</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_tt\" id=\"S5.T5.1.2.4\" 
style=\"padding-left:8.0pt;padding-right:8.0pt;\">2.9G</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T5.1.3\">\n<td class=\"ltx_td ltx_align_left ltx_border_b ltx_border_l ltx_border_r ltx_border_t\" id=\"S5.T5.1.3.1\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">MT-DynamicViT</td>\n<td class=\"ltx_td ltx_align_left ltx_border_b ltx_border_r ltx_border_t\" id=\"S5.T5.1.3.2\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">79.2%</td>\n<td class=\"ltx_td ltx_align_left ltx_border_b ltx_border_r ltx_border_t\" id=\"S5.T5.1.3.3\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">94.5%</td>\n<td class=\"ltx_td ltx_align_left ltx_border_b ltx_border_r ltx_border_t\" id=\"S5.T5.1.3.4\" style=\"padding-left:8.0pt;padding-right:8.0pt;\">2.4G</td>\n</tr>\n</table>\n</figure>", |
| "capture": "Table 5: Performance of MT-ViT when equipped with DynamicViT." |
| }, |
| "6": { |
| "table_html": "<figure class=\"ltx_table\" id=\"S5.T6\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 6: </span>Throughput of MT-ViT and compared methods on ImageNet-1K validation set.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"S5.T6.1\">\n<tr class=\"ltx_tr\" id=\"S5.T6.1.2\" style=\"background-color:#E6E6E6;\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r ltx_border_t\" id=\"S5.T6.1.2.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T6.1.2.1.1\" style=\"background-color:#E6E6E6;\">Method</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T6.1.2.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T6.1.2.2.1\" style=\"background-color:#E6E6E6;\">Top-1 Acc.(%)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T6.1.2.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T6.1.2.3.1\" style=\"background-color:#E6E6E6;\">FLOPs(G)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T6.1.2.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T6.1.2.4.1\" style=\"background-color:#E6E6E6;\">Throughput(img/s)</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T6.1.3\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r ltx_border_tt\" id=\"S5.T6.1.3.1\">Baseline(DeiT-S)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S5.T6.1.3.2\">79.8</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S5.T6.1.3.3\">4.6</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S5.T6.1.3.4\">795.53</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T6.1.4\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r\" id=\"S5.T6.1.4.1\">MT-ViT (Ours)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T6.1.4.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T6.1.4.2.1\">79.5(-0.3)</span></td>\n<td 
class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T6.1.4.3\">2.5(-45.7%)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T6.1.4.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T6.1.4.4.1\">1677.22</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T6.1.5\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r ltx_border_t\" colspan=\"4\" id=\"S5.T6.1.5.1\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S5.T6.1.5.1.1\">Comparison with dynamic methods</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T6.1.6\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r ltx_border_t\" id=\"S5.T6.1.6.1\">DVT <cite class=\"ltx_cite ltx_citemacro_citep\">(Wang et\u00a0al., <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2203.01587v3#bib.bib74\" title=\"\">2021b</a>)</cite>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T6.1.6.2\">79.3(-0.5)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T6.1.6.3\">2.4(-47.8%)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T6.1.6.4\">1647.47</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T6.1.1\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r\" id=\"S5.T6.1.1.1\">DynamicViT-/0.7 <cite class=\"ltx_cite ltx_citemacro_citep\">(Rao et\u00a0al., <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2203.01587v3#bib.bib63\" title=\"\">2021</a>)</cite>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T6.1.1.2\">79.3(-0.5)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T6.1.1.3\">2.9(-37.0%)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T6.1.1.4\">1222.21</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T6.1.7\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r\" id=\"S5.T6.1.7.1\">Evo-ViT <cite class=\"ltx_cite ltx_citemacro_citep\">(Xu et\u00a0al., <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2203.01587v3#bib.bib80\" 
title=\"\">2022b</a>)</cite>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T6.1.7.2\">79.4(-0.4)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T6.1.7.3\">3.0(-34.8%)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T6.1.7.4\">1220.08</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T6.1.8\">\n<td class=\"ltx_td ltx_align_left ltx_border_b ltx_border_l ltx_border_r\" id=\"S5.T6.1.8.1\">E-ViT <cite class=\"ltx_cite ltx_citemacro_citep\">(Liang et\u00a0al., <a class=\"ltx_ref\" href=\"https://arxiv.org/html/2203.01587v3#bib.bib49\" title=\"\">2021</a>)</cite>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S5.T6.1.8.2\">79.5(-0.3)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S5.T6.1.8.3\">3.0(-34.8%)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S5.T6.1.8.4\">1219.68</td>\n</tr>\n</table>\n</figure>", |
| "capture": "Table 6: Throughput of MT-ViT and compared methods on ImageNet-1K validation set." |
| }, |
| "7": { |
| "table_html": "<figure class=\"ltx_table\" id=\"S5.T7\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 7: </span>Performance of MT-ViT when setting different numbers of tokens for different tails.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"S5.T7.6\">\n<tr class=\"ltx_tr\" id=\"S5.T7.6.7\" style=\"background-color:#E6E6E6;\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" id=\"S5.T7.6.7.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T7.6.7.1.1\" style=\"background-color:#E6E6E6;\">Method</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T7.6.7.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T7.6.7.2.1\" style=\"background-color:#E6E6E6;\">ST</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T7.6.7.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T7.6.7.3.1\" style=\"background-color:#E6E6E6;\">MT</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T7.6.7.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T7.6.7.4.1\" style=\"background-color:#E6E6E6;\">LT</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T7.6.7.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T7.6.7.5.1\" style=\"background-color:#E6E6E6;\">Top-1 Acc.(%)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T7.6.7.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T7.6.7.6.1\" style=\"background-color:#E6E6E6;\">Top-5 Acc.(%)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T7.6.7.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T7.6.7.7.1\" style=\"background-color:#E6E6E6;\">FLOPs(G)</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T7.3.3\">\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_tt\" id=\"S5.T7.3.3.4\" rowspan=\"2\"><span 
class=\"ltx_text\" id=\"S5.T7.3.3.4.1\">MT-ViT</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S5.T7.1.1.1\">7\u00d77</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S5.T7.2.2.2\">10\u00d710</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S5.T7.3.3.3\">14\u00d714</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S5.T7.3.3.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T7.3.3.5.1\">72.9(+0.7)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S5.T7.3.3.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T7.3.3.6.1\">91.3(+0.2)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S5.T7.3.3.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T7.3.3.7.1\">0.8(-38.5%)</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T7.6.6\">\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T7.4.4.1\">4\u00d74</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T7.5.5.2\">7\u00d77</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T7.6.6.3\">14\u00d714</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T7.6.6.4\">72.4(+0.2)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T7.6.6.5\">91.1(+0.0)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T7.6.6.6\">0.9(-30.8%)</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T7.6.8\">\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_l ltx_border_r ltx_border_t\" id=\"S5.T7.6.8.1\">DeiT-Ti</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" colspan=\"3\" id=\"S5.T7.6.8.2\">N/A</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S5.T7.6.8.3\">72.2</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" id=\"S5.T7.6.8.4\">91.1</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r ltx_border_t\" 
id=\"S5.T7.6.8.5\">1.3</td>\n</tr>\n</table>\n</figure>", |
| "capture": "Table 7: Performance of MT-ViT when setting different numbers of tokens for different tails." |
| }, |
| "8": { |
| "table_html": "<figure class=\"ltx_table\" id=\"S5.T8\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 8: </span>Comparison results of DVT, MiniViT and MT-ViT, which are implemented on top of DeiT-S. We provide number of parameters, Top-1 accuracy, and FLOPs of all three methods.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"S5.T8.2\">\n<tr class=\"ltx_tr\" id=\"S5.T8.2.3\" style=\"background-color:#E6E6E6;\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r ltx_border_t\" id=\"S5.T8.2.3.1\" style=\"padding-left:2.6pt;padding-right:2.6pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T8.2.3.1.1\" style=\"background-color:#E6E6E6;\">Method</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T8.2.3.2\" style=\"padding-left:2.6pt;padding-right:2.6pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T8.2.3.2.1\" style=\"background-color:#E6E6E6;\">Param.(M)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T8.2.3.3\" style=\"padding-left:2.6pt;padding-right:2.6pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T8.2.3.3.1\" style=\"background-color:#E6E6E6;\">Accuracy.(%)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T8.2.3.4\" style=\"padding-left:2.6pt;padding-right:2.6pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T8.2.3.4.1\" style=\"background-color:#E6E6E6;\">FLOPs(G)</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T8.2.4\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r ltx_border_tt\" id=\"S5.T8.2.4.1\" style=\"padding-left:2.6pt;padding-right:2.6pt;\">Baseline (DeiT-S)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S5.T8.2.4.2\" style=\"padding-left:2.6pt;padding-right:2.6pt;\">22M</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S5.T8.2.4.3\" 
style=\"padding-left:2.6pt;padding-right:2.6pt;\">79.8%</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S5.T8.2.4.4\" style=\"padding-left:2.6pt;padding-right:2.6pt;\">4.6G</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T8.2.5\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r ltx_border_t\" id=\"S5.T8.2.5.1\" style=\"padding-left:2.6pt;padding-right:2.6pt;\">DVT</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T8.2.5.2\" style=\"padding-left:2.6pt;padding-right:2.6pt;\">70.4M</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T8.2.5.3\" style=\"padding-left:2.6pt;padding-right:2.6pt;\">79.5% (-0.3%)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T8.2.5.4\" style=\"padding-left:2.6pt;padding-right:2.6pt;\">2.5G</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T8.1.1\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r\" id=\"S5.T8.1.1.1\" style=\"padding-left:2.6pt;padding-right:2.6pt;\">MT-ViT(S)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T8.1.1.2\" style=\"padding-left:2.6pt;padding-right:2.6pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T8.1.1.2.1\">22M+2.5M</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T8.1.1.3\" style=\"padding-left:2.6pt;padding-right:2.6pt;\">79.5% (-0.3%)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T8.1.1.4\" style=\"padding-left:2.6pt;padding-right:2.6pt;\">2.5G</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T8.2.6\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r ltx_border_tt\" id=\"S5.T8.2.6.1\" style=\"padding-left:2.6pt;padding-right:2.6pt;\">Mini-DeiT-S</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S5.T8.2.6.2\" style=\"padding-left:2.6pt;padding-right:2.6pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T8.2.6.2.1\">11M</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r 
ltx_border_tt\" id=\"S5.T8.2.6.3\" style=\"padding-left:2.6pt;padding-right:2.6pt;\">80.0% (+0.2%)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S5.T8.2.6.4\" style=\"padding-left:2.6pt;padding-right:2.6pt;\">4.6G</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T8.2.2\">\n<td class=\"ltx_td ltx_align_left ltx_border_b ltx_border_l ltx_border_r\" id=\"S5.T8.2.2.1\" style=\"padding-left:2.6pt;padding-right:2.6pt;\">MT-ViT(A)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S5.T8.2.2.2\" style=\"padding-left:2.6pt;padding-right:2.6pt;\">22M+2.5M</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S5.T8.2.2.3\" style=\"padding-left:2.6pt;padding-right:2.6pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T8.2.2.3.1\">80.3% (+0.5%)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S5.T8.2.2.4\" style=\"padding-left:2.6pt;padding-right:2.6pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T8.2.2.4.1\">3.5G</span></td>\n</tr>\n</table>\n</figure>", |
| "capture": "Table 8: Comparison results of DVT, MiniViT and MT-ViT, which are implemented on top of DeiT-S. We provide number of parameters, Top-1 accuracy, and FLOPs of all three methods." |
| }, |
| "9": { |
| "table_html": "<figure class=\"ltx_table\" id=\"S5.T9\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 9: </span>Comparison results of different predictor backbones. The resolution is the input size of the tail predictor. The results are based on MT-ViT (DeiT-S).</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"S5.T9.4\">\n<tr class=\"ltx_tr\" id=\"S5.T9.1.1\" style=\"background-color:#E6E6E6;\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r ltx_border_t\" id=\"S5.T9.1.1.2\" style=\"padding-left:2.8pt;padding-right:2.8pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T9.1.1.2.1\" style=\"background-color:#E6E6E6;\">Backbone</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T9.1.1.3\" style=\"padding-left:2.8pt;padding-right:2.8pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T9.1.1.3.1\" style=\"background-color:#E6E6E6;\">Resolution</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T9.1.1.4\" style=\"padding-left:2.8pt;padding-right:2.8pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T9.1.1.4.1\" style=\"background-color:#E6E6E6;\">FLOPs(G)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T9.1.1.5\" style=\"padding-left:2.8pt;padding-right:2.8pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T9.1.1.5.1\" style=\"background-color:#E6E6E6;\">Acc.(%)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T9.1.1.1\" style=\"padding-left:2.8pt;padding-right:2.8pt;\">\n<span class=\"ltx_text ltx_font_bold\" id=\"S5.T9.1.1.1.1\" style=\"background-color:#E6E6E6;\">FLOPs(G)</span>\n</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T9.2.2\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r ltx_border_tt\" id=\"S5.T9.2.2.2\" style=\"padding-left:2.8pt;padding-right:2.8pt;\">MobileNetv3</td>\n<td class=\"ltx_td ltx_align_center 
ltx_border_r ltx_border_tt\" id=\"S5.T9.2.2.1\" style=\"padding-left:2.8pt;padding-right:2.8pt;\">224\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S5.T9.2.2.3\" style=\"padding-left:2.8pt;padding-right:2.8pt;\">0.06</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S5.T9.2.2.4\" style=\"padding-left:2.8pt;padding-right:2.8pt;\">80.3</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S5.T9.2.2.5\" style=\"padding-left:2.8pt;padding-right:2.8pt;\">3.5</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T9.3.3\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r\" id=\"S5.T9.3.3.2\" style=\"padding-left:2.8pt;padding-right:2.8pt;\">ResNet-10</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T9.3.3.1\" style=\"padding-left:2.8pt;padding-right:2.8pt;\">112\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T9.3.3.3\" style=\"padding-left:2.8pt;padding-right:2.8pt;\">0.24</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T9.3.3.4\" style=\"padding-left:2.8pt;padding-right:2.8pt;\">80.1</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S5.T9.3.3.5\" style=\"padding-left:2.8pt;padding-right:2.8pt;\">3.5</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T9.4.4\">\n<td class=\"ltx_td ltx_align_left ltx_border_b ltx_border_l ltx_border_r\" id=\"S5.T9.4.4.2\" style=\"padding-left:2.8pt;padding-right:2.8pt;\">ResNet-10</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S5.T9.4.4.1\" style=\"padding-left:2.8pt;padding-right:2.8pt;\">224\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S5.T9.4.4.3\" style=\"padding-left:2.8pt;padding-right:2.8pt;\">0.89</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S5.T9.4.4.4\" style=\"padding-left:2.8pt;padding-right:2.8pt;\">80.4</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S5.T9.4.4.5\" 
style=\"padding-left:2.8pt;padding-right:2.8pt;\">4.3</td>\n</tr>\n</table>\n</figure>", |
| "capture": "Table 9: Comparison results of different predictor backbones. The resolution is the input size of the tail predictor. The results are based on MT-ViT (DeiT-S)." |
| }, |
| "10": { |
| "table_html": "<figure class=\"ltx_table\" id=\"S5.T10\">\n<figcaption class=\"ltx_caption ltx_centering\"><span class=\"ltx_tag ltx_tag_table\">Table 10: </span>The performance on COCO object detection task using Mask R-CNN. We train the models for 25 epochs with a batch size of 32. FLOPs are computed on images.</figcaption>\n<table class=\"ltx_tabular ltx_centering ltx_align_middle\" id=\"S5.T10.5\">\n<tr class=\"ltx_tr\" id=\"S5.T10.5.4\">\n<td class=\"ltx_td ltx_border_l ltx_border_r ltx_border_t\" id=\"S5.T10.5.4.1\" style=\"padding-left:0.7pt;padding-right:0.7pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_t\" colspan=\"2\" id=\"S5.T10.5.4.2\" style=\"padding-left:0.7pt;padding-right:0.7pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T10.5.4.2.1\" style=\"background-color:#E6E6E6;\">FLOPs</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" colspan=\"3\" id=\"S5.T10.5.4.3\" style=\"padding-left:0.7pt;padding-right:0.7pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T10.5.4.3.1\" style=\"background-color:#E6E6E6;\">Metric<span class=\"ltx_text ltx_font_medium\" id=\"S5.T10.5.4.3.1.1\"></span></span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T10.5.3\" style=\"background-color:#E6E6E6;\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r\" id=\"S5.T10.5.3.4\" style=\"padding-left:0.7pt;padding-right:0.7pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T10.5.3.4.1\" style=\"position:relative; bottom:5.6pt;background-color:#E6E6E6;\">Backbone</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T10.5.3.5\" style=\"padding-left:0.7pt;padding-right:0.7pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T10.5.3.5.1\" style=\"background-color:#E6E6E6;\">Overall</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T10.5.3.6\" style=\"padding-left:0.7pt;padding-right:0.7pt;\"><span class=\"ltx_text ltx_font_bold\" id=\"S5.T10.5.3.6.1\" 
style=\"background-color:#E6E6E6;\">Backbone</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T10.3.1.1\" style=\"padding-left:0.7pt;padding-right:0.7pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T10.4.2.2\" style=\"padding-left:0.7pt;padding-right:0.7pt;\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S5.T10.5.3.3\" style=\"padding-left:0.7pt;padding-right:0.7pt;\"></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T10.5.5\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r ltx_border_tt\" id=\"S5.T10.5.5.1\" style=\"padding-left:0.7pt;padding-right:0.7pt;\">DeiT-Ti</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S5.T10.5.5.2\" style=\"padding-left:0.7pt;padding-right:0.7pt;\">46.6</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S5.T10.5.5.3\" style=\"padding-left:0.7pt;padding-right:0.7pt;\">6.5</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S5.T10.5.5.4\" style=\"padding-left:0.7pt;padding-right:0.7pt;\">32.1</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S5.T10.5.5.5\" style=\"padding-left:0.7pt;padding-right:0.7pt;\">51.8</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_tt\" id=\"S5.T10.5.5.6\" style=\"padding-left:0.7pt;padding-right:0.7pt;\">33.9</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S5.T10.5.6\">\n<td class=\"ltx_td ltx_align_left ltx_border_b ltx_border_l ltx_border_r\" id=\"S5.T10.5.6.1\" style=\"padding-left:0.7pt;padding-right:0.7pt;\">MT-ViT</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S5.T10.5.6.2\" style=\"padding-left:0.7pt;padding-right:0.7pt;\">39.4<span class=\"ltx_text ltx_font_bold\" id=\"S5.T10.5.6.2.1\">(-15.5%)</span>\n</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S5.T10.5.6.3\" style=\"padding-left:0.7pt;padding-right:0.7pt;\">5.5</td>\n<td 
class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S5.T10.5.6.4\" style=\"padding-left:0.7pt;padding-right:0.7pt;\">31.9</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S5.T10.5.6.5\" style=\"padding-left:0.7pt;padding-right:0.7pt;\">50.1</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S5.T10.5.6.6\" style=\"padding-left:0.7pt;padding-right:0.7pt;\">33.8</td>\n</tr>\n</table>\n</figure>", |
| "capture": "Table 10: The performance on COCO object detection task using Mask R-CNN. We train the models for 25 epochs with a batch size of 32. FLOPs are computed on images." |
| } |
| }, |
| "image_paths": { |
| "1": { |
| "figure_path": "2203.01587v3_figure_1.png", |
| "caption": "Figure 1: The throughput and confidence of prediction from DeiT-S, with a different number of tokens, e.g., 7\u00d7\\times\u00d77, 10\u00d7\\times\u00d710 and 14\u00d7\\times\u00d714. The \u2018tick\u2019 and \u2018cross\u2019 sign denote the right and false prediction respectively.", |
| "url": "http://arxiv.org/html/2203.01587v3/x1.png" |
| }, |
| "2": { |
| "figure_path": "2203.01587v3_figure_2.png", |
| "caption": "Figure 2: The framework of proposed method contains two main components: scale predictor \u03c0\u03b8subscript\ud835\udf0b\ud835\udf03\\pi_{\\theta}italic_\u03c0 start_POSTSUBSCRIPT italic_\u03b8 end_POSTSUBSCRIPT and multi-tailed vision transformer (MT-ViT). The number of scale K\ud835\udc3eKitalic_K is set to 3. (a) The tail predictor is a CNN-based model that determines the appropriate tail for the image. (b) By using the multiple tails in MT-ViT, patches with different sizes are all projected into a d\ud835\udc51ditalic_d dimension embedding. This makes it possible to share the Transformer encoders and MLP head.", |
| "url": "http://arxiv.org/html/2203.01587v3/x2.png" |
| }, |
| "3": { |
| "figure_path": "2203.01587v3_figure_3.png", |
| "caption": "Figure 3: Ablation study on the hyper-parameter \u03b1\ud835\udefc\\alphaitalic_\u03b1 and \u03b7\ud835\udf02\\etaitalic_\u03b7 in FLOPs regularization term. \u03b1\ud835\udefc\\alphaitalic_\u03b1 is fixed to 0.5 in the left figure and \u03b7\ud835\udf02\\etaitalic_\u03b7 is fixed to 0.5 in the right figure.", |
| "url": "http://arxiv.org/html/2203.01587v3/x3.png" |
| }, |
| "4": { |
| "figure_path": "2203.01587v3_figure_4.png", |
| "caption": "Figure 4: The Accuracy-FLOPs curve of DVT and MT-ViT based on DeiT-S.", |
| "url": "http://arxiv.org/html/2203.01587v3/x4.png" |
| }, |
| "5": { |
| "figure_path": "2203.01587v3_figure_5.png", |
| "caption": "Figure 5: The visualized results of tail predictor in ImageNet-1K. The images in each row are from the class \u2018Soccer\u2019, \u2018Pineapple\u2019, \u2018Car\u2019 and \u2018Castle\u2019, respectively. The decision of each tail predictor illustrates how the predictor translates to instance difficulty.", |
| "url": "http://arxiv.org/html/2203.01587v3/x5.png" |
| } |
| }, |
| "validation": true, |
| "references": [ |
| { |
| "1": { |
| "title": "Single-layer vision transformers for more accurate early exits with less overhead.", |
| "author": "Bakhtiarnia, A., Zhang, Q., Iosifidis, A., 2022.", |
| "venue": "Neural Networks 153, 461\u2013473.", |
| "url": null |
| } |
| }, |
| { |
| "2": { |
| "title": "Language models are few-shot learners.", |
| "author": "Brown, T., Mann, B., Ryder, N., Subbiah, M., Kaplan, J.D., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., et al., 2020.", |
| "venue": "Advances in neural information processing systems 33, 1877\u20131901.", |
| "url": null |
| } |
| }, |
| { |
| "3": { |
| "title": "Proxylessnas: Direct neural architecture search on target task and hardware, in: International Conference on Learning Representations.", |
| "author": "Cai, H., Zhu, L., Han, S., 2019.", |
| "venue": null, |
| "url": null |
| } |
| }, |
| { |
| "4": { |
| "title": "End-to-end object detection with transformers, in: European Conference on Computer Vision, Springer. pp. 213\u2013229.", |
| "author": "Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S., 2020.", |
| "venue": null, |
| "url": null |
| } |
| }, |
| { |
| "5": { |
| "title": "Crossvit: Cross-attention multi-scale vision transformer for image classification, in: Proceedings of the IEEE/CVF international conference on computer vision, pp. 357\u2013366.", |
| "author": "Chen, C.F.R., Fan, Q., Panda, R., 2021a.", |
| "venue": null, |
| "url": null |
| } |
| }, |
| { |
| "6": { |
| "title": "A transformer-based deep neural network model for ssvep classification.", |
| "author": "Chen, J., Zhang, Y., Pan, Y., Xu, P., Guan, C., 2023a.", |
| "venue": "Neural Networks 164, 521\u2013534.", |
| "url": null |
| } |
| }, |
| { |
| "7": { |
| "title": "Autoformer: Searching transformers for visual recognition, in: Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 12270\u201312280.", |
| "author": "Chen, M., Peng, H., Fu, J., Ling, H., 2021b.", |
| "venue": null, |
| "url": null |
| } |
| }, |
| { |
| "8": { |
| "title": "Chasing sparsity in vision transformers: An end-to-end exploration.", |
| "author": "Chen, T., Cheng, Y., Gan, Z., Yuan, L., Zhang, L., Wang, Z., 2021c.", |
| "venue": "Advances in Neural Information Processing Systems 34.", |
| "url": null |
| } |
| }, |
| { |
| "9": { |
| "title": "Interactive segment anything nerf with feature imitation.", |
| "author": "Chen, X., Tang, J., Wan, D., Wang, J., Zeng, G., 2023b.", |
| "venue": "arXiv preprint arXiv:2305.16233 .", |
| "url": null |
| } |
| }, |
| { |
| "10": { |
| "title": "Neural architecture search for transformers: A survey.", |
| "author": "Chitty-Venkata, K.T., Emani, M., Vishwanath, V., Somani, A.K., 2022.", |
| "venue": "IEEE Access 10, 108374\u2013108412.", |
| "url": null |
| } |
| }, |
| { |
| "11": { |
| "title": "Neural architecture search benchmarks: Insights and survey.", |
| "author": "Chitty-Venkata, K.T., Emani, M., Vishwanath, V., Somani, A.K., 2023a.", |
| "venue": "IEEE Access 11, 25217\u201325236.", |
| "url": null |
| } |
| }, |
| { |
| "12": { |
| "title": "A survey of techniques for optimizing transformer inference.", |
| "author": "Chitty-Venkata, K.T., Mittal, S., Emani, M., Vishwanath, V., Somani, A.K., 2023b.", |
| "venue": "Journal of Systems Architecture , 102990.", |
| "url": null |
| } |
| }, |
| { |
| "13": { |
| "title": "Calibration data-based cnn filter pruning for efficient layer fusion, in: 2020 IEEE 22nd International Conference on High Performance Computing and Communications; IEEE 18th International Conference on Smart City; IEEE 6th International Conference on Data Science and Systems (HPCC/SmartCity/DSS), IEEE. pp. 1300\u20131307.", |
| "author": "Chitty-Venkata, K.T., Somani, A.K., 2020.", |
| "venue": null, |
| "url": null |
| } |
| }, |
| { |
| "14": { |
| "title": "Neural architecture search survey: A hardware perspective.", |
| "author": "Chitty-Venkata, K.T., Somani, A.K., 2022.", |
| "venue": "ACM Computing Surveys 55, 1\u201336.", |
| "url": null |
| } |
| }, |
| { |
| "15": { |
| "title": "Interaction transformer for human reaction generation.", |
| "author": "Chopin, B., Tang, H., Otberdout, N., Daoudi, M., Sebe, N., 2023.", |
| "venue": "IEEE Transactions on Multimedia .", |
| "url": null |
| } |
| }, |
| { |
| "16": { |
| "title": "A downsampled variant of imagenet as an alternative to the cifar datasets.", |
| "author": "Chrabaszcz, P., Loshchilov, I., Hutter, F., 2017.", |
| "venue": "arXiv preprint arXiv:1707.08819 .", |
| "url": null |
| } |
| }, |
| { |
| "17": { |
| "title": "Twins: Revisiting the design of spatial attention in vision transformers.", |
| "author": "Chu, X., Tian, Z., Wang, Y., Zhang, B., Ren, H., Wei, X., Xia, H., Shen, C., 2021.", |
| "venue": "Advances in Neural Information Processing Systems 34, 9355\u20139366.", |
| "url": null |
| } |
| }, |
| { |
| "18": { |
| "title": "Imagenet: A large-scale hierarchical image database, in: 2009 IEEE conference on computer vision and pattern recognition, Ieee. pp. 248\u2013255.", |
| "author": "Deng, J., Dong, W., Socher, R., Li, L.J., Li, K., Fei-Fei, L., 2009.", |
| "venue": null, |
| "url": null |
| } |
| }, |
| { |
| "19": { |
| "title": "Bert: Pre-training of deep bidirectional transformers for language understanding.", |
| "author": "Devlin, J., Chang, M.W., Lee, K., Toutanova, K., 2018.", |
| "venue": "arXiv preprint arXiv:1810.04805 .", |
| "url": null |
| } |
| }, |
| { |
| "20": { |
| "title": "An image is worth 16x16 words: Transformers for image recognition at scale.", |
| "author": "Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., et al., 2020.", |
| "venue": "arXiv preprint arXiv:2010.11929 .", |
| "url": null |
| } |
| }, |
| { |
| "21": { |
| "title": "Adaptive token sampling for efficient vision transformers, in: Computer Vision\u2013ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23\u201327, 2022, Proceedings, Part XI, Springer. pp. 396\u2013414.", |
| "author": "Fayyaz, M., Koohpayegani, S.A., Jafari, F.R., Sengupta, S., Joze, H.R.V., Sommerlade, E., Pirsiavash, H., Gall, J., 2022.", |
| "venue": null, |
| "url": null |
| } |
| }, |
| { |
| "22": { |
| "title": "A practical survey on faster and lighter transformers.", |
| "author": "Fournier, Q., Caron, G.M., Aloise, D., 2023.", |
| "venue": "ACM Computing Surveys 55, 1\u201340.", |
| "url": null |
| } |
| }, |
| { |
| "23": { |
| "title": "Generalized image outpainting with u-transformer.", |
| "author": "Gao, P., Yang, X., Zhang, R., Goulermas, J.Y., Geng, Y., Yan, Y., Huang, K., 2023.", |
| "venue": "Neural Networks 162, 1\u201310.", |
| "url": null |
| } |
| }, |
| { |
| "24": { |
| "title": "Nasvit: Neural architecture search for efficient vision transformers with gradient conflict aware supernet training, in: International Conference on Learning Representations.", |
| "author": "Gong, C., Wang, D., Li, M., Chen, X., Yan, Z., Tian, Y., Chandra, V., et al., 2021.", |
| "venue": null, |
| "url": null |
| } |
| }, |
| { |
| "25": { |
| "title": "Power-bert: Accelerating bert inference via progressive word-vector elimination, in: International Conference on Machine Learning, PMLR. pp. 3690\u20133699.", |
| "author": "Goyal, S., Choudhury, A.R., Raje, S., Chakaravarthy, V., Sabharwal, Y., Verma, A., 2020.", |
| "venue": null, |
| "url": null |
| } |
| }, |
| { |
| "26": { |
| "title": "Improving structural mri preprocessing with hybrid transformer gans.", |
| "author": "Grigas, O., Maskeli\u016bnas, R., Dama\u0161evi\u010dius, R., 2023.", |
| "venue": "Life 13, 1893.", |
| "url": null |
| } |
| }, |
| { |
| "27": { |
| "title": "Cmt: Convolutional neural networks meet vision transformers, in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 12175\u201312185.", |
| "author": "Guo, J., Han, K., Wu, H., Tang, Y., Chen, X., Wang, Y., Xu, C., 2022.", |
| "venue": null, |
| "url": null |
| } |
| }, |
| { |
| "28": { |
| "title": "Ultra-high resolution svbrdf recovery from a single image.", |
| "author": "Guo, J., Lai, S., Tu, Q., Tao, C., Zou, C., Guo, Y., 2023.", |
| "venue": "ACM Transactions on Graphics .", |
| "url": null |
| } |
| }, |
| { |
| "29": { |
| "title": "A survey on visual transformer.", |
| "author": "Han, K., Wang, Y., Chen, H., Chen, X., Guo, J., Liu, Z., Tang, Y., Xiao, A., Xu, C., Xu, Y., et al., 2020.", |
| "venue": "arXiv preprint arXiv:2012.12556 .", |
| "url": null |
| } |
| }, |
| { |
| "30": { |
| "title": "Transformer in transformer.", |
| "author": "Han, K., Xiao, A., Wu, E., Guo, J., Xu, C., Wang, Y., 2021.", |
| "venue": "Advances in Neural Information Processing Systems 34, 15908\u201315919.", |
| "url": null |
| } |
| }, |
| { |
| "31": { |
| "title": "Dual transformer for point cloud analysis.", |
| "author": "Han, X.F., Jin, Y.F., Cheng, H.X., Xiao, G.Q., 2022.", |
| "venue": "IEEE Transactions on Multimedia .", |
| "url": null |
| } |
| }, |
| { |
| "32": { |
| "title": "Mask r-cnn, in: Proceedings of the IEEE international conference on computer vision, pp. 2961\u20132969.", |
| "author": "He, K., Gkioxari, G., Doll\u00e1r, P., Girshick, R., 2017a.", |
| "venue": null, |
| "url": null |
| } |
| }, |
| { |
| "33": { |
| "title": "Deep residual learning for image recognition, in: Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770\u2013778.", |
| "author": "He, K., Zhang, X., Ren, S., Sun, J., 2016.", |
| "venue": null, |
| "url": null |
| } |
| }, |
| { |
| "34": { |
| "title": "Transreid: Transformer-based object re-identification, in: Proceedings of the IEEE/CVF international conference on computer vision, pp. 15013\u201315022.", |
| "author": "He, S., Luo, H., Wang, P., Wang, F., Li, H., Jiang, W., 2021.", |
| "venue": null, |
| "url": null |
| } |
| }, |
| { |
| "35": { |
| "title": "Channel pruning for accelerating very deep neural networks, in: Proceedings of the IEEE international conference on computer vision, pp. 1389\u20131397.", |
| "author": "He, Y., Zhang, X., Sun, J., 2017b.", |
| "venue": null, |
| "url": null |
| } |
| }, |
| { |
| "36": { |
| "title": "Distilling the knowledge in a neural network.", |
| "author": "Hinton, G., Vinyals, O., Dean, J., 2015.", |
| "venue": "arXiv preprint arXiv:1503.02531 .", |
| "url": null |
| } |
| }, |
| { |
| "37": { |
| "title": "Searching for mobilenetv3, in: Proceedings of the IEEE/CVF international conference on computer vision, pp. 1314\u20131324.", |
| "author": "Howard, A., Sandler, M., Chu, G., Chen, L.C., Chen, B., Tan, M., Wang, W., Zhu, Y., Pang, R., Vasudevan, V., et al., 2019.", |
| "venue": null, |
| "url": null |
| } |
| }, |
| { |
| "38": { |
| "title": "Sediment prediction in the great barrier reef using vision transformer with finite element analysis.", |
| "author": "Jahanbakht, M., Xiang, W., Azghadi, M.R., 2022.", |
| "venue": "Neural Networks 152, 311\u2013321.", |
| "url": null |
| } |
| }, |
| { |
| "39": { |
| "title": "Categorical reparameterization with gumbel-softmax.", |
| "author": "Jang, E., Gu, S., Poole, B., 2016.", |
| "venue": "arXiv preprint arXiv:1611.01144 .", |
| "url": null |
| } |
| }, |
| { |
| "40": { |
| "title": "Learning disentangled representation implicitly via transformer for occluded person re-identification.", |
| "author": "Jia, M., Cheng, X., Lu, S., Zhang, J., 2022.", |
| "venue": "IEEE Transactions on Multimedia .", |
| "url": null |
| } |
| }, |
| { |
| "41": { |
| "title": "All tokens matter: Token labeling for training better vision transformers.", |
| "author": "Jiang, Z.H., Hou, Q., Yuan, L., Zhou, D., Shi, Y., Jin, X., Wang, A., Feng, J., 2021.", |
| "venue": "Advances in Neural Information Processing Systems 34, 18590\u201318602.", |
| "url": null |
| } |
| }, |
| { |
| "42": { |
| "title": "Dilateformer: Multi-scale dilated transformer for visual recognition.", |
| "author": "Jiao, J., Tang, Y.M., Lin, K.Y., Gao, Y., Ma, J., Wang, Y., Zheng, W.S., 2023.", |
| "venue": "IEEE Transactions on Multimedia .", |
| "url": null |
| } |
| }, |
| { |
| "43": { |
| "title": "Full stack optimization of transformer inference: a survey.", |
| "author": "Kim, S., Hooper, C., Wattanawong, T., Kang, M., Yan, R., Genc, H., Dinh, G., Huang, Q., Keutzer, K., Mahoney, M.W., et al., 2023.", |
| "venue": "arXiv preprint arXiv:2302.14017 .", |
| "url": null |
| } |
| }, |
| { |
| "44": { |
| "title": "Learning multiple layers of features from tiny images .", |
| "author": "Krizhevsky, A., Hinton, G., et al., 2009.", |
| "venue": null, |
| "url": null |
| } |
| }, |
| { |
| "45": { |
| "title": "Imagenet classification with deep convolutional neural networks.", |
| "author": "Krizhevsky, A., Sutskever, I., Hinton, G.E., 2012.", |
| "venue": "Advances in neural information processing systems 25, 1097\u20131105.", |
| "url": null |
| } |
| }, |
| { |
| "46": { |
| "title": "Mpvit: Multi-path vision transformer for dense prediction, in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 7287\u20137296.", |
| "author": "Lee, Y., Kim, J., Willette, J., Hwang, S.J., 2022.", |
| "venue": null, |
| "url": null |
| } |
| }, |
| { |
| "47": { |
| "title": "Bossnas: Exploring hybrid cnn-transformers with block-wisely self-supervised neural architecture search, in: Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 12281\u201312291.", |
| "author": "Li, C., Tang, T., Wang, G., Peng, J., Wang, B., Liang, X., Chang, X., 2021.", |
| "venue": null, |
| "url": null |
| } |
| }, |
| { |
| "48": { |
| "title": "Exploring plain vision transformer backbones for object detection, in: Computer Vision\u2013ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23\u201327, 2022, Proceedings, Part IX, Springer. pp. 280\u2013296.", |
| "author": "Li, Y., Mao, H., Girshick, R., He, K., 2022.", |
| "venue": null, |
| "url": null |
| } |
| }, |
| { |
| "49": { |
| "title": "Evit: Expediting vision transformers via token reorganizations, in: International Conference on Learning Representations.", |
| "author": "Liang, Y., Chongjian, G., Tong, Z., Song, Y., Wang, J., Xie, P., 2021.", |
| "venue": null, |
| "url": null |
| } |
| }, |
| { |
| "50": { |
| "title": "Not all patches are what you need: Expediting vision transformers via token reorganizations.", |
| "author": "Liang, Y., Ge, C., Tong, Z., Song, Y., Wang, J., Xie, P., 2022.", |
| "venue": "arXiv preprint arXiv:2202.07800 .", |
| "url": null |
| } |
| }, |
| { |
| "51": { |
| "title": "Microsoft coco: Common objects in context, in: Computer Vision\u2013ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part V 13, Springer. pp. 740\u2013755.", |
| "author": "Lin, T.Y., Maire, M., Belongie, S., Hays, J., Perona, P., Ramanan, D., Doll\u00e1r, P., Zitnick, C.L., 2014.", |
| "venue": null, |
| "url": null |
| } |
| }, |
| { |
| "52": { |
| "title": "Darts: Differentiable architecture search.", |
| "author": "Liu, H., Simonyan, K., Yang, Y., 2018.", |
| "venue": "arXiv preprint arXiv:1806.09055 .", |
| "url": null |
| } |
| }, |
| { |
| "53": { |
| "title": "Learning efficient convolutional networks through network slimming, in: Proceedings of the IEEE international conference on computer vision, pp. 2736\u20132744.", |
| "author": "Liu, Z., Li, J., Shen, Z., Huang, G., Yan, S., Zhang, C., 2017.", |
| "venue": null, |
| "url": null |
| } |
| }, |
| { |
| "54": { |
| "title": "Swin transformer: Hierarchical vision transformer using shifted windows, in: Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 10012\u201310022.", |
| "author": "Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., Guo, B., 2021.", |
| "venue": null, |
| "url": null |
| } |
| }, |
| { |
| "55": { |
| "title": "The concrete distribution: A continuous relaxation of discrete random variables.", |
| "author": "Maddison, C.J., Mnih, A., Teh, Y.W., 2016.", |
| "venue": "arXiv preprint arXiv:1611.00712 .", |
| "url": null |
| } |
| }, |
| { |
| "56": { |
| "title": "Token pooling in vision transformers for image classification, in: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 12\u201321.", |
| "author": "Marin, D., Chang, J.H.R., Ranjan, A., Prabhu, A., Rastegari, M., Tuzel, O., 2023.", |
| "venue": null, |
| "url": null |
| } |
| }, |
| { |
| "57": { |
| "title": "Mobilevit: Light-weight, general-purpose, and mobile-friendly vision transformer, in: International Conference on Learning Representations.", |
| "author": "Mehta, S., Rastegari, M., 2021.", |
| "venue": null, |
| "url": null |
| } |
| }, |
| { |
| "58": { |
| "title": "Pixel-level fusion approach with vision transformer for early detection of alzheimer\u2019s disease.", |
| "author": "Odusami, M., Maskeli\u016bnas, R., Dama\u0161evi\u010dius, R., 2023.", |
| "venue": "Electronics 12, 1218.", |
| "url": null |
| } |
| }, |
| { |
| "59": { |
| "title": "Ia-red2: Interpretability-aware redundancy reduction for vision transformers.", |
| "author": "Pan, B., Panda, R., Jiang, Y., Wang, Z., Feris, R., Oliva, A., 2021a.", |
| "venue": "Advances in Neural Information Processing Systems 34.", |
| "url": null |
| } |
| }, |
| { |
| "60": { |
| "title": "Scalable vision transformers with hierarchical pooling, in: Proceedings of the IEEE/cvf international conference on computer vision, pp. 377\u2013386.", |
| "author": "Pan, Z., Zhuang, B., Liu, J., He, H., Cai, J., 2021b.", |
| "venue": null, |
| "url": null |
| } |
| }, |
| { |
| "61": { |
| "title": "Learning spatiotemporal frequency-transformer for compressed video super-resolution, in: European Conference on Computer Vision, Springer. pp. 257\u2013273.", |
| "author": "Qiu, Z., Yang, H., Fu, J., Fu, D., 2022a.", |
| "venue": null, |
| "url": null |
| } |
| }, |
| { |
| "62": { |
| "title": "Ivt: An end-to-end instance-guided video transformer for 3d pose estimation, in: Proceedings of the 30th ACM International Conference on Multimedia, pp. 6174\u20136182.", |
| "author": "Qiu, Z., Yang, Q., Wang, J., Fu, D., 2022b.", |
| "venue": null, |
| "url": null |
| } |
| }, |
| { |
| "63": { |
| "title": "Dynamicvit: Efficient vision transformers with dynamic token sparsification.", |
| "author": "Rao, Y., Zhao, W., Liu, B., Lu, J., Zhou, J., Hsieh, C.J., 2021.", |
| "venue": "Advances in neural information processing systems 34, 13937\u201313949.", |
| "url": null |
| } |
| }, |
| { |
| "64": { |
| "title": "Crimenet: Neural structured learning using vision transformer for violence detection.", |
| "author": "Rend\u00f3n-Segador, F.J., \u00c1lvarez-Garc\u00eda, J.A., Salazar-Gonz\u00e1lez, J.L., Tommasi, T., 2023.", |
| "venue": "Neural networks 161, 318\u2013329.", |
| "url": null |
| } |
| }, |
| { |
| "65": { |
| "title": "Bottleneck transformers for visual recognition, in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 16519\u201316529.", |
| "author": "Srinivas, A., Lin, T.Y., Parmar, N., Shlens, J., Abbeel, P., Vaswani, A., 2021.", |
| "venue": null, |
| "url": null |
| } |
| }, |
| { |
| "66": { |
| "title": "Vitas: Vision transformer architecture search, in: European Conference on Computer Vision, Springer. pp. 139\u2013157.", |
| "author": "Su, X., You, S., Xie, J., Zheng, M., Wang, F., Qian, C., Zhang, C., Wang, X., Xu, C., 2022.", |
| "venue": null, |
| "url": null |
| } |
| }, |
| { |
| "67": { |
| "title": "Revisiting unreasonable effectiveness of data in deep learning era, in: Proceedings of the IEEE international conference on computer vision, pp. 843\u2013852.", |
| "author": "Sun, C., Shrivastava, A., Singh, S., Gupta, A., 2017.", |
| "venue": null, |
| "url": null |
| } |
| }, |
| { |
| "68": { |
| "title": "Efficientnet: Rethinking model scaling for convolutional neural networks, in: International Conference on Machine Learning, PMLR. pp. 6105\u20136114.", |
| "author": "Tan, M., Le, Q., 2019.", |
| "venue": null, |
| "url": null |
| } |
| }, |
| { |
| "69": { |
| "title": "Patch slimming for efficient vision transformers, in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 12165\u201312174.", |
| "author": "Tang, Y., Han, K., Wang, Y., Xu, C., Guo, J., Xu, C., Tao, D., 2022.", |
| "venue": null, |
| "url": null |
| } |
| }, |
| { |
| "70": { |
| "title": "Scop: Scientific control for reliable neural network pruning.", |
| "author": "Tang, Y., Wang, Y., Xu, Y., Tao, D., Xu, C., Xu, C., Xu, C., 2020.", |
| "venue": "Advances in Neural Information Processing Systems 33, 10936\u201310947.", |
| "url": null |
| } |
| }, |
| { |
| "71": { |
| "title": "Training data-efficient image transformers & distillation through attention.", |
| "author": "Touvron, H., Cord, M., Douze, M., Massa, F., Sablayrolles, A., J\u00e9gou, H., 2021.", |
| "venue": "International Conference on Machine Learning, PMLR, pp. 10347\u201310357.", |
| "url": null |
| } |
| }, |
| { |
| "72": { |
| "title": "Attention is all you need.", |
| "author": "Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, \u0141., Polosukhin, I., 2017.", |
| "venue": "Advances in Neural Information Processing Systems, pp. 5998\u20136008.", |
| "url": null |
| } |
| }, |
| { |
| "73": { |
| "title": "Crossformer: A versatile vision transformer hinging on cross-scale attention.", |
| "author": "Wang, W., Yao, L., Chen, L., Lin, B., Cai, D., He, X., Liu, W., 2021a.", |
| "venue": "International Conference on Learning Representations.", |
| "url": null |
| } |
| }, |
| { |
| "74": { |
| "title": "Not all images are worth 16x16 words: Dynamic transformers for efficient image recognition.", |
| "author": "Wang, Y., Huang, R., Song, S., Huang, Z., Huang, G., 2021b.", |
| "venue": "Advances in Neural Information Processing Systems 34, 11960\u201311973.", |
| "url": null |
| } |
| }, |
| { |
| "75": { |
| "title": "Towards evolutionary compression.", |
| "author": "Wang, Y., Xu, C., Qiu, J., Xu, C., Tao, D., 2018a.", |
| "venue": "Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pp. 2476\u20132485.", |
| "url": null |
| } |
| }, |
| { |
| "76": { |
| "title": "Learning versatile filters for efficient convolutional neural networks.", |
| "author": "Wang, Y., Xu, C., Xu, C., Xu, C., Tao, D., 2018b.", |
| "venue": "Advances in Neural Information Processing Systems 31.", |
| "url": null |
| } |
| }, |
| { |
| "77": { |
| "title": "Fbnet: Hardware-aware efficient convnet design via differentiable neural architecture search.", |
| "author": "Wu, B., Dai, X., Zhang, P., Wang, Y., Sun, F., Wu, Y., Tian, Y., Vajda, P., Jia, Y., Keutzer, K., 2019.", |
| "venue": "Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 10734\u201310742.", |
| "url": null |
| } |
| }, |
| { |
| "78": { |
| "title": "Tinyvit: Fast pretraining distillation for small vision transformers.", |
| "author": "Wu, K., Zhang, J., Peng, H., Liu, M., Xiao, B., Fu, J., Yuan, L., 2022.", |
| "venue": "European Conference on Computer Vision, Springer, pp. 68\u201385.", |
| "url": null |
| } |
| }, |
| { |
| "79": { |
| "title": "Lssanet: A long short slice-aware network for pulmonary nodule detection.", |
| "author": "Xu, R., Luo, Y., Du, B., Kuang, K., Yang, J., 2022a.", |
| "venue": "International Conference on Medical Image Computing and Computer-Assisted Intervention, Springer, pp. 664\u2013674.", |
| "url": null |
| } |
| }, |
| { |
| "80": { |
| "title": "Evo-vit: Slow-fast token evolution for dynamic vision transformer.", |
| "author": "Xu, Y., Zhang, Z., Zhang, M., Sheng, K., Li, K., Dong, W., Zhang, L., Xu, C., Sun, X., 2022b.", |
| "venue": "Proceedings of the AAAI Conference on Artificial Intelligence, pp. 2964\u20132972.", |
| "url": null |
| } |
| }, |
| { |
| "81": { |
| "title": "A multi-information fusion vit model and its application to the fault diagnosis of bearing with small data samples.", |
| "author": "Xu, Z., Tang, X., Wang, Z., 2023.", |
| "venue": "Machines 11, 277.", |
| "url": null |
| } |
| }, |
| { |
| "82": { |
| "title": "A-vit: Adaptive tokens for efficient vision transformer.", |
| "author": "Yin, H., Vahdat, A., Alvarez, J.M., Mallya, A., Kautz, J., Molchanov, P., 2022.", |
| "venue": "Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 10809\u201310818.", |
| "url": null |
| } |
| }, |
| { |
| "83": { |
| "title": "Tokens-to-token vit: Training vision transformers from scratch on imagenet.", |
| "author": "Yuan, L., Chen, Y., Wang, T., Yu, W., Shi, Y., Jiang, Z.H., Tay, F.E., Feng, J., Yan, S., 2021.", |
| "venue": "Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 558\u2013567.", |
| "url": null |
| } |
| }, |
| { |
| "84": { |
| "title": "Vit-llmr: Vision transformer-based lower limb motion recognition from fusion signals of mmg and imu.", |
| "author": "Zhang, H., Yang, K., Cao, G., Xia, C., 2023.", |
| "venue": "Biomedical Signal Processing and Control 82, 104508.", |
| "url": null |
| } |
| }, |
| { |
| "85": { |
| "title": "Minivit: Compressing vision transformers with weight multiplexing.", |
| "author": "Zhang, J., Peng, H., Wu, K., Liu, M., Xiao, B., Fu, J., Yuan, L., 2022.", |
| "venue": "Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 12145\u201312154.", |
| "url": null |
| } |
| }, |
| { |
| "86": { |
| "title": "Spatial-channel enhanced transformer for visible-infrared person re-identification.", |
| "author": "Zhao, J., Wang, H., Zhou, Y., Yao, R., Chen, S., El Saddik, A., 2022.", |
| "venue": "IEEE Transactions on Multimedia.", |
| "url": null |
| } |
| }, |
| { |
| "87": { |
| "title": "Deepvit: Towards deeper vision transformer.", |
| "author": "Zhou, D., Kang, B., Jin, X., Yang, L., Lian, X., Jiang, Z., Hou, Q., Feng, J., 2021.", |
| "venue": "arXiv preprint arXiv:2103.11886.", |
| "url": null |
| } |
| } |
| ], |
| "url": "http://arxiv.org/html/2203.01587v3" |
| } |