Title: Pre-training with Random Orthogonal Projection Image Modeling

URL Source: https://arxiv.org/html/2310.18737

Published Time: Wed, 01 May 2024 12:39:03 GMT

Markdown Content:
Maryam Haghighat∗,†,††, Peyman Moghadam§,†, Shaheer Mohamed§,†, Piotr Koniusz∗,§,‡

§Data61, CSIRO  †Queensland University of Technology  ‡Australian National University

†name.lastname@qut.edu.au, §name.lastname@data61.csiro.au. ∗Corresponding authors. ††MH conducted this work during employment with CSIRO. PK was also in charge of the theory. The code is available at [https://github.com/csiro-robotics/ROPIM](https://github.com/csiro-robotics/ROPIM).
###### Abstract

Masked Image Modeling (MIM) is a powerful self-supervised strategy for visual pre-training without the use of labels. MIM applies random crops to input images, processes them with an encoder, and then recovers the masked inputs with a decoder, which encourages the network to capture and learn structural information about objects and scenes. The intermediate feature representations obtained from MIM are well suited to fine-tuning on downstream tasks. In this paper, we propose an image modeling framework based on random orthogonal projection instead of the binary masking used in MIM. Our proposed Random Orthogonal Projection Image Modeling (ROPIM) reduces spatial token information under a guaranteed bound on the noise variance, and can be considered as masking the entire spatial image area under locally varying masking degrees. Since ROPIM uses a random subspace for the projection that realizes the masking step, the readily available complement of that subspace can be used during unmasking to promote recovery of the removed information. We show that using random orthogonal projection leads to superior performance compared to crop-based masking, and we demonstrate state-of-the-art results on several popular benchmarks.
###### Abstract

Below we include the remaining experiments and details of our proposed Random Orthogonal Projection Image Modeling (ROPIM). Appendix [A](https://arxiv.org/html/2310.18737v2#A1 "Appendix A Runtimes ‣ Pre-training with Random Orthogonal Projection Image Modeling") presents a comparative analysis of the training cost of ROPIM against state-of-the-art methods. In Appendix [B](https://arxiv.org/html/2310.18737v2#A2 "Appendix B Transfer learning for smaller scale datasets ‣ Pre-training with Random Orthogonal Projection Image Modeling"), we present additional experiments on transfer learning with smaller-scale datasets. Appendix [C](https://arxiv.org/html/2310.18737v2#A3 "Appendix C Further ablations studies ‣ Pre-training with Random Orthogonal Projection Image Modeling") includes ablation studies on the sketching ratio and on the effect of varying the numbers of pre-training and fine-tuning epochs. A detailed discussion of the pre-training and fine-tuning settings is included in Appendix [D](https://arxiv.org/html/2310.18737v2#A4 "Appendix D Details of Pre-training and Fine-tuning Setups ‣ Pre-training with Random Orthogonal Projection Image Modeling"). Additional related works are discussed in Appendix [E](https://arxiv.org/html/2310.18737v2#A5 "Appendix E More Related Works ‣ Pre-training with Random Orthogonal Projection Image Modeling").
1 Introduction
--------------
![Image 1: Refer to caption](https://arxiv.org/html/2310.18737v2/)

Figure 1: Training efficiency of ROPIM _vs_. other methods. ROPIM achieves higher accuracy (see also LGP-ROPIM) with lower training time. The blue and yellow regions indicate fast methods and high-accuracy methods, respectively. ROPIM is both highly accurate and fast (the green region).
Masked Image Modeling (MIM) (Bao et al., [2022](https://arxiv.org/html/2310.18737v2#bib.bib2); He et al., [2022](https://arxiv.org/html/2310.18737v2#bib.bib18); Xie et al., [2022](https://arxiv.org/html/2310.18737v2#bib.bib39)) has achieved promising performance by pre-training backbones that are then fine-tuned on different downstream tasks such as image classification or semantic segmentation.

Most MIM techniques follow the general paradigm of self-prediction, _i.e_., they randomly mask out some regions of the input data and then learn to recover the missing data. Current MIM methods (Bao et al., [2022](https://arxiv.org/html/2310.18737v2#bib.bib2); He et al., [2022](https://arxiv.org/html/2310.18737v2#bib.bib18); Xie et al., [2022](https://arxiv.org/html/2310.18737v2#bib.bib39)) mainly apply masking in the spatial domain by randomly excluding image patches. Since raw image pixels are highly correlated within their spatial neighbourhood, a high masking ratio (60%-75%) leads to high-quality features (He et al., [2022](https://arxiv.org/html/2310.18737v2#bib.bib18); Xie et al., [2022](https://arxiv.org/html/2310.18737v2#bib.bib39)).

Existing MIM approaches typically replace a random set of input tokens with a special learnable symbol, called MASK, and aim to recover either masked image pixels (He et al., [2022](https://arxiv.org/html/2310.18737v2#bib.bib18); Xie et al., [2022](https://arxiv.org/html/2310.18737v2#bib.bib39)), masked content features (Wei et al., [2022](https://arxiv.org/html/2310.18737v2#bib.bib35)) or latent representations (Baevski et al., [2022](https://arxiv.org/html/2310.18737v2#bib.bib1)). This additional learnable MASK token is applied over large masked areas, up to 75% of the image (He et al., [2022](https://arxiv.org/html/2310.18737v2#bib.bib18)), and is not used in the fine-tuning stage (Fang et al., [2023](https://arxiv.org/html/2310.18737v2#bib.bib15)).
![Image 2: Refer to caption](https://arxiv.org/html/2310.18737v2/)

(a)

![Image 3: Refer to caption](https://arxiv.org/html/2310.18737v2/)

(b)

Figure 2: Our proposed Random Orthogonal Projection Image Modeling (ROPIM) _vs_. Masked Image Modeling (MIM). MIM in Fig. [2(a)](https://arxiv.org/html/2310.18737v2#S1.F2.sf1 "In Figure 2 ‣ 1 Introduction ‣ Pre-training with Random Orthogonal Projection Image Modeling") performs masking on patches of an input image, which are passed to the backbone, followed by unmasking. Our ROPIM in Fig. [2(b)](https://arxiv.org/html/2310.18737v2#S1.F2.sf2 "In Figure 2 ‣ 1 Introduction ‣ Pre-training with Random Orthogonal Projection Image Modeling") performs the orthogonal projection of patch embeddings onto a random subspace, passes them to the backbone, and then applies the complement of the orthogonal projection. Thus, the loss focuses on the recovery of the lost information.
In this paper, we propose a new Random Orthogonal Projection Image Modeling (ROPIM) pre-training framework, which uses a simple projection strategy with provable bounds on the noise introduced by the loss of information. ROPIM is based on orthogonal projection (Charikar et al., [2002](https://arxiv.org/html/2310.18737v2#bib.bib6)), which is applicable to raw pixels and latent feature representations. Figure [1](https://arxiv.org/html/2310.18737v2#S1.F1 "Figure 1 ‣ 1 Introduction ‣ Pre-training with Random Orthogonal Projection Image Modeling") compares top-1 accuracy _vs_. total pre-training (PT) time against state-of-the-art SSL methods. ROPIM achieves higher accuracy while requiring significantly less total pre-training time. Total PT time is calculated as the time per epoch multiplied by the number of epochs. For fair comparison, the times reported in Figure [1](https://arxiv.org/html/2310.18737v2#S1.F1 "Figure 1 ‣ 1 Introduction ‣ Pre-training with Random Orthogonal Projection Image Modeling") were obtained with the same resources (8×P100 GPUs) and the maximum possible batch size per GPU for each method. More details are discussed in Table [5](https://arxiv.org/html/2310.18737v2#A1.T5 "Table 5 ‣ Appendix A Runtimes ‣ Pre-training with Random Orthogonal Projection Image Modeling") of Appendix [A](https://arxiv.org/html/2310.18737v2#A1 "Appendix A Runtimes ‣ Pre-training with Random Orthogonal Projection Image Modeling").
![Image 4: Refer to caption](https://arxiv.org/html/2310.18737v2/)

Figure 3: For MIM, the unmasked parts of the recovered image, combined with the masked parts, approximate the input image. Our tokens, randomly projected, and the complement of the projection (the equivalent of unmasking) along the spatial modes, also approximately recover the input when added together.
Figure [2](https://arxiv.org/html/2310.18737v2#S1.F2 "Figure 2 ‣ 1 Introduction ‣ Pre-training with Random Orthogonal Projection Image Modeling") shows our ROPIM approach. Our framework does not require a separate tokenizer network, as in BEiT (Bao et al., [2022](https://arxiv.org/html/2310.18737v2#bib.bib2)) and CIM (Fang et al., [2023](https://arxiv.org/html/2310.18737v2#bib.bib15)), or a large decoder that requires additional computation. Figure [3](https://arxiv.org/html/2310.18737v2#S1.F3 "Figure 3 ‣ 1 Introduction ‣ Pre-training with Random Orthogonal Projection Image Modeling") shows that no matter whether an input image is randomly masked or projected onto an orthogonal subspace, the network is encouraged to recover its complement: adding the masked/projected image to its complement has to approximate the original image. ROPIM projects the features of patch embeddings along their spatial mode into a random subspace. Subsequently, we use the complement of this random subspace to guide the loss function to recover the removed information. We apply Random Orthogonal Projection (ROP) at the token level, hence the imposed computational overhead is negligible. We note that our proposed approach does not require MASK tokens.
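At the token level, the ROP masking/unmasking step can be sketched as follows (a minimal numpy illustration, not the authors' implementation; the sketching matrix follows the count-sketch construction formalised in Section 3.1, and all sizes are placeholder assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
N, D = 196, 768   # number of patch tokens and embedding dimension (placeholders)
M = 98            # size of the random subspace (placeholder sketching ratio)

# Random sketching matrix: one random +/-1 entry per column (count-sketch style).
h = rng.integers(0, M, size=N)
s = rng.choice([-1.0, 1.0], size=N)
P = np.zeros((M, N))
P[h, np.arange(N)] = s
P_dag = (M / N) * P.T              # simple pseudo-inverse of the sketching matrix

Phi = rng.standard_normal((N, D))  # patch-token embeddings, spatial mode first

# "Masking": project the tokens along the spatial mode and retract.
Phi_masked = P_dag @ (P @ Phi)
# Complement of the projection: the removed information the loss should recover.
Phi_target = Phi - Phi_masked

assert Phi_masked.shape == (N, D)
assert np.allclose(Phi_masked + Phi_target, Phi)
```

Unlike binary patch masking, every spatial location of `Phi_masked` is a mixture of token features, so the whole image area is "masked" to a locally varying degree.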
![Image 5: Refer to caption](https://arxiv.org/html/2310.18737v2/)

Figure 4: Left to right: original image, masking, unmasking, ROP, complement of ROP. Notice the "continuous" masking nature of ROP and the complement of ROP.
Figure [4](https://arxiv.org/html/2310.18737v2#S1.F4 "Figure 4 ‣ 1 Introduction ‣ Pre-training with Random Orthogonal Projection Image Modeling") visually compares binary masking in MIM with Random Orthogonal Projection (ROP). Compared with ROP, binary masking creates a limited number of patterns, _e.g_., for 4 tokens one gets only $2^4$ masking and unmasking patterns. Such randomness is limited: the network cannot learn to recover from masking patterns that never occurred. In contrast, ROP is a linear interpolation between several tokens. Thus, it can be considered a "continuous" masking where multiple locations are combined into a coefficient by the projection pattern. Since this "combination" is achieved by the projection matrix, we readily have the complement space needed for recovery of the removed information via a lightweight projection step. Hence, the network learns faster (Fig. [1](https://arxiv.org/html/2310.18737v2#S1.F1 "Figure 1 ‣ 1 Introduction ‣ Pre-training with Random Orthogonal Projection Image Modeling")) as it is challenged by richer masking-unmasking patterns. Moreover, ROP is a form of randomized data corruption, or rather a lossy projection step with a guaranteed bound on the noise variance it introduces. In contrast, binary masking in MIM methods is prone to removing crucial image regions, potentially resulting in performance degradation (Li et al., [2021](https://arxiv.org/html/2310.18737v2#bib.bib21)), especially when a high masking ratio is applied. Injecting bounded noise is hence critical to learning semantically meaningful features.
Our contributions can be summarized as follows:

1. We propose ROPIM, a simple but effective image modeling framework based on so-called count sketching, with the aim of reducing local semantic information under a bounded noise variance.
2. In contrast to the binary masking of MIM, ROP forms "continuous" masking via a known projection matrix whose complement-space matrix is easily obtainable; we use this complement in the reconstruction loss to guide recovery of the removed input information.
3. We propose to project patch tokens along their spatial mode into a random subspace, which is computationally negligible, so ROPIM enjoys the high throughput of MIM methods.

Our results show that the proposed "continuous" masking/unmasking strategy yields a rich model at lower pre-training cost, without the use of an auxiliary network, a large decoder or a tokenizer.
2 Related Work
--------------
Transformers (Vaswani et al., [2017](https://arxiv.org/html/2310.18737v2#bib.bib32)), popular in natural language processing (BERT (Devlin et al., [2018](https://arxiv.org/html/2310.18737v2#bib.bib13)) and GPT-3 (Brown et al., [2020](https://arxiv.org/html/2310.18737v2#bib.bib3))), capture attention between tokens. Image Transformers (Parmar et al., [2018](https://arxiv.org/html/2310.18737v2#bib.bib26)) and Vision Transformers (ViT) (Dosovitskiy et al., [2021](https://arxiv.org/html/2310.18737v2#bib.bib14)) also achieve persuasive results in supervised and unsupervised learning on large-scale datasets. ViT has inspired data-efficient models such as DeiT (Touvron et al., [2021](https://arxiv.org/html/2310.18737v2#bib.bib30)), self-supervised DINO (Caron et al., [2021b](https://arxiv.org/html/2310.18737v2#bib.bib5)), CrossViT (Chen et al., [2021a](https://arxiv.org/html/2310.18737v2#bib.bib7)), and general-purpose architectures such as Swin-T (Liu et al., [2021b](https://arxiv.org/html/2310.18737v2#bib.bib23)) and Twins (Chu et al., [2021](https://arxiv.org/html/2310.18737v2#bib.bib11)). Note that, in this work, we do not propose new transformers.
Self-supervised Learning (Liu et al., [2021a](https://arxiv.org/html/2310.18737v2#bib.bib22)) is essential for data-hungry transformer architectures. The lack of labels has led the vision community to study self-supervised learning in contrastive or generative settings. Many self-supervised models use pretext tasks (Liu et al., [2021a](https://arxiv.org/html/2310.18737v2#bib.bib22)). Generative techniques such as the Denoising AutoEncoder (DAE) (Vincent et al., [2008](https://arxiv.org/html/2310.18737v2#bib.bib33)) inject noise into the input data and train a network with a bottleneck to recover the original input. Many methods build on DAE under different corruption strategies, _e.g_., masking pixels or removing color channels. Contrastive models such as DINO and MoCo v3 (Caron et al., [2021a](https://arxiv.org/html/2310.18737v2#bib.bib4); Chen et al., [2021b](https://arxiv.org/html/2310.18737v2#bib.bib10)) use data augmentations to generate different image views, pulling positive feature pairs together while pushing negative pairs apart. BYOL (Grill et al., [2020](https://arxiv.org/html/2310.18737v2#bib.bib16)) and SimSiam (Chen & He, [2021b](https://arxiv.org/html/2310.18737v2#bib.bib9)) eliminate negative sampling and prevent dimensional collapse. COSTA (Zhang et al., [2022](https://arxiv.org/html/2310.18737v2#bib.bib40)) eliminates multiple views. Finally, COLES (Zhu et al., [2021](https://arxiv.org/html/2310.18737v2#bib.bib45)), EASE (Zhu & Koniusz, [2022a](https://arxiv.org/html/2310.18737v2#bib.bib43)) and GLEN (Zhu & Koniusz, [2022b](https://arxiv.org/html/2310.18737v2#bib.bib44)) introduce negative sampling into Laplacian Eigenmaps.
Masked Image Modeling (MIM) techniques (Bao et al., [2022](https://arxiv.org/html/2310.18737v2#bib.bib2); He et al., [2022](https://arxiv.org/html/2310.18737v2#bib.bib18); Xie et al., [2022](https://arxiv.org/html/2310.18737v2#bib.bib39)) learn representations from images corrupted by masking. Inspired by the success of transformer-based masked language modeling (Devlin et al., [2018](https://arxiv.org/html/2310.18737v2#bib.bib13)), Dosovitskiy _et al_. (Dosovitskiy et al., [2021](https://arxiv.org/html/2310.18737v2#bib.bib14)) explored the prediction of masked image patches for self-supervision on visual data. Recent works (Bao et al., [2022](https://arxiv.org/html/2310.18737v2#bib.bib2); He et al., [2022](https://arxiv.org/html/2310.18737v2#bib.bib18); Xie et al., [2022](https://arxiv.org/html/2310.18737v2#bib.bib39); Baevski et al., [2022](https://arxiv.org/html/2310.18737v2#bib.bib1); Wei et al., [2022](https://arxiv.org/html/2310.18737v2#bib.bib35)) use MIM with a transformer-based architecture (Vaswani et al., [2017](https://arxiv.org/html/2310.18737v2#bib.bib32)) and various objective functions.

Most MIM methods (Bao et al., [2022](https://arxiv.org/html/2310.18737v2#bib.bib2); He et al., [2022](https://arxiv.org/html/2310.18737v2#bib.bib18); Xie et al., [2022](https://arxiv.org/html/2310.18737v2#bib.bib39); Baevski et al., [2022](https://arxiv.org/html/2310.18737v2#bib.bib1); Wei et al., [2022](https://arxiv.org/html/2310.18737v2#bib.bib35); Mishra et al., [2022](https://arxiv.org/html/2310.18737v2#bib.bib24)) use masking in the spatial domain by randomly excluding image patches or tokens. MAE (He et al., [2022](https://arxiv.org/html/2310.18737v2#bib.bib18)) and SimMIM (Xie et al., [2022](https://arxiv.org/html/2310.18737v2#bib.bib39)) recover masked raw pixels. BEiT (Bao et al., [2022](https://arxiv.org/html/2310.18737v2#bib.bib2)) uses a discrete VAE (dVAE) network to transform image patches into visual tokens; during pre-training, the semantic tokens are recovered. However, BEiT requires an additional dVAE network to be pre-trained on patches. iBOT (Zhou et al., [2022](https://arxiv.org/html/2310.18737v2#bib.bib42)) uses a teacher network as an online tokenizer and performs self-distillation on masked patch and class tokens. Data2vec (Baevski et al., [2022](https://arxiv.org/html/2310.18737v2#bib.bib1)) uses a teacher-student framework to reconstruct latent representations. MaskedFeat (Wei et al., [2022](https://arxiv.org/html/2310.18737v2#bib.bib35)) recovers Histograms of Oriented Gradients. Tian _et al_. (Tian et al., [2022](https://arxiv.org/html/2310.18737v2#bib.bib29)) use different learning objectives for image degradation, including zoom-in, zoom-out, fish-eye distortion, blur and de-colorization. MFM (Xie et al., [2023](https://arxiv.org/html/2310.18737v2#bib.bib38)) uses the Fast Fourier Transform for masked frequency modeling. Recent approaches CAN (Mishra et al., [2022](https://arxiv.org/html/2310.18737v2#bib.bib24)) and LGP (Jiang et al., [2023](https://arxiv.org/html/2310.18737v2#bib.bib19)) combine Contrastive Learning (CL) with MIM due to their complementarity. LGP (Jiang et al., [2023](https://arxiv.org/html/2310.18737v2#bib.bib19)) implements layer-grafted pre-training in a sequential fashion. Corrupted Image Modeling (CIM) (Fang et al., [2023](https://arxiv.org/html/2310.18737v2#bib.bib15)) uses an auxiliary generator with a trainable BEiT network to corrupt images in a better way than artificial MASK tokens. Similar to BEiT, CIM requires an additional dVAE network; its pre-training per epoch is 2× slower than that of BEiT (Fang et al., [2023](https://arxiv.org/html/2310.18737v2#bib.bib15)).
Our ROPIM differs from the above models: (i) we do not mask patches, but perform a projection onto a random subspace along the spatial mode of tokens; (ii) we do not perform unmasking, but use the complement of the subspace to support the ROPIM loss in recovering the removed information.
Count Sketching is a widely used unsupervised dimensionality reduction technique (Weinberger et al., [2009](https://arxiv.org/html/2310.18737v2#bib.bib36); Cormode & Muthukrishnan, [2005](https://arxiv.org/html/2310.18737v2#bib.bib12)). Several variants of count sketching have been proposed in the literature, including the Count-Min Sketch (Cormode & Muthukrishnan, [2005](https://arxiv.org/html/2310.18737v2#bib.bib12)). The core concept, however, is to capture a small sketch of the data with a random projection function.
3 Approach
----------
![Image 6: Refer to caption](https://arxiv.org/html/2310.18737v2/)

Figure 5: Understanding the projection of $\bm{\phi}$ onto the unitary projection matrix $\mathbf{P}$ (subspace), given as $\mathbf{P}\bm{\phi}$, and its retraction, given as $\bm{\phi}' = \mathbf{P}^{\dagger}\mathbf{P}\bm{\phi}$. The projection matrix $\bar{\mathbf{P}}$ (subspace) complementary to $\mathbf{P}$ is also indicated. Vector $\bm{\phi}$ projected onto $\bar{\mathbf{P}}$ and then retracted from it is given as $\bm{\phi}'' = \bar{\mathbf{P}}^{\dagger}\bar{\mathbf{P}}\bm{\phi}$. Notice that $\bm{\phi}' + \bm{\phi}'' = \bm{\phi}$. The lossy nature of this projection occurs when $\mathbf{P}^{\dagger}\mathbf{P} + \bar{\mathbf{P}}^{\dagger}\bar{\mathbf{P}} \neq \mathbb{I}$, _i.e_., the full identity matrix is not recovered.
Below, we detail our ROPIM pipeline. First, we explain our notation and ROP. Then, we introduce our problem formulation and the ROPIM pipeline.
### 3.1 Preliminaries
Notations. Let $\mathbf{x} \in \mathbb{R}^{d}$ be a $d$-dimensional feature vector. $\mathcal{I}_{N}$ stands for the index set $\{1, 2, \cdots, N\}$. We define $\mathbf{1} = [1, \ldots, 1]^{T}$ (the 'all-ones' vector). Capitalised bold symbols such as $\bm{\Phi}$ denote matrices, lowercase bold symbols such as $\bm{\phi}$ denote vectors, and regular fonts denote scalars, _e.g_., $\Phi_{i,j}$, $\phi_{i}$, $n$ or $Z$; $\Phi_{i,j}$ is the $(i,j)$-th entry of $\bm{\Phi}$. The symbol $\delta(x)$ equals $1$ if $x = 0$ and $0$ if $x \neq 0$, and $\mathbb{I}$ is the identity matrix. The operator $\lVert\cdot\rVert_{1}$ applied to a matrix is the $\ell_{1}$ norm of the vectorised matrix.
###### Proposition 1.

Let $K$ and $K'$ be the sizes of the input and the projected output. Let vector $\mathbf{h} \in \mathcal{I}_{K'}^{K}$ contain $K$ uniformly drawn integers from $\{1, \cdots, K'\}$ and vector $\mathbf{s} \in \{-1, 1\}^{K}$ contain $K$ uniformly drawn values from $\{-1, 1\}$. The projection matrix $\mathbf{P} \in \{-1, 0, 1\}^{K' \times K}$ is given as $P_{ij}(\mathbf{h}, \mathbf{s}) = s_{j} \cdot \delta(h_{j} - i)$ and the projection $\Pi: \mathbb{R}^{K} \rightarrow \mathbb{R}^{K'}$ is the linear operation $\Pi_{\mathbf{h},\mathbf{s}}(\bm{\phi}) = \mathbf{P}(\mathbf{h}, \mathbf{s})\bm{\phi}$ (or simply $\Pi(\bm{\phi}) = \mathbf{P}\bm{\phi}$).
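Concretely, the sketching matrix of Proposition 1 can be built as follows (a minimal numpy sketch, not the authors' code; 0-based indices replace $\{1, \cdots, K'\}$ and the sizes are placeholders):

```python
import numpy as np

rng = np.random.default_rng(0)
K, K_prime = 8, 4  # input size and projected output size (placeholders)

# h: K indices drawn uniformly from {0, ..., K'-1}; s: K random signs.
h = rng.integers(0, K_prime, size=K)
s = rng.choice([-1.0, 1.0], size=K)

# P_ij = s_j * delta(h_j - i): column j has a single nonzero entry s_j in row h_j.
P = np.zeros((K_prime, K))
P[h, np.arange(K)] = s

phi = rng.standard_normal(K)
sketch = P @ phi  # the count-sketch projection Pi(phi) = P phi
```

Since each column holds exactly one ±1 entry, applying $\mathbf{P}$ costs only $O(K)$ operations.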
![Image 7: Refer to caption](https://arxiv.org/html/2310.18737v2/)

Figure 6: Overview of the Random Orthogonal Projection Image Modeling (ROPIM) pipeline. An image is divided into patch tokens, which are then embedded. A sketching matrix $\mathbf{P} \sim \mathcal{P}$ is drawn and ROP (with its inverse) is applied to the embeddings $\bm{\Phi}$. Subsequently, $\bm{\Phi}'$ is passed through the transformer $f(\cdot)$. The operator $\oplus$ denotes addition. CLS and PE are the class token and positional embedding. Finally, the decoder (only one linear projection layer) and the reconstruction loss, which targets the inverse projection, are applied. Once ROPIM is trained, we use $\bm{\Psi}$ as the feature representation for the downstream task.
The following are the properties of count sketches that we utilize in our work:
###### Property 1.

The inner product of count sketches is an unbiased estimator. Specifically, we have $\mathbb{E}_{\mathbf{h},\mathbf{s}}\big[\langle \Pi_{\mathbf{h},\mathbf{s}}(\bm{\phi}_{x}), \Pi_{\mathbf{h},\mathbf{s}}(\bm{\phi}_{y})\rangle - \langle \bm{\phi}_{x}, \bm{\phi}_{y}\rangle\big] = 0$, with variance bounded by $\frac{1}{K'}\big(\langle \bm{\phi}_{x}, \bm{\phi}_{y}\rangle^{2} + \lVert\bm{\phi}_{x}\rVert_{2}^{2}\,\lVert\bm{\phi}_{y}\rVert_{2}^{2}\big)$.

###### Proof.

See Weinberger _et al_. (Weinberger et al., [2009](https://arxiv.org/html/2310.18737v2#bib.bib36)) for the proof. ∎
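Property 1 is easy to check empirically (a Monte Carlo sketch under placeholder sizes; this is illustrative, not a proof):

```python
import numpy as np

rng = np.random.default_rng(1)
K, K_prime, trials = 16, 8, 5000

x = rng.standard_normal(K)
y = rng.standard_normal(K)
exact = x @ y  # the inner product being estimated

errors = []
for _ in range(trials):
    # Fresh count-sketch projection per trial.
    h = rng.integers(0, K_prime, size=K)
    s = rng.choice([-1.0, 1.0], size=K)
    P = np.zeros((K_prime, K))
    P[h, np.arange(K)] = s
    errors.append((P @ x) @ (P @ y) - exact)
errors = np.asarray(errors)

# Unbiasedness: the mean error concentrates around 0,
# and the empirical variance respects the stated bound.
bound = (exact**2 + (x @ x) * (y @ y)) / K_prime
```

The mean of `errors` should be close to zero, and `errors.var()` should fall below `bound`, consistent with Property 1.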
###### Property 2.

The unitary projection matrix $\mathbf{P}$ enjoys a simple pseudo-inverse $\mathbf{P}^{\dagger} = \frac{K'}{K}\mathbf{P}^{T}$.
###### Proof.

The transpose-based inverse follows from the fact that $\mathbf{P}$ is constructed as a unitary matrix. ∎
###### Property 3.

The distance of vector $\bm{\phi}$ to subspace $\mathbf{P}$ is given as $\lVert\bm{\phi} - \mathbf{P}^{\dagger}\mathbf{P}\bm{\phi}\rVert_{2}$. Thus $\bm{\phi}' = \mathbf{P}^{\dagger}\mathbf{P}\bm{\phi}$ is the vector with the removed information resulting from the lossy operations: (i) projection of $\bm{\phi}$ onto subspace $\mathbf{P}$, followed by (ii) retraction from the subspace into the original feature space.
129
+ ###### Proof.
130
+
131
+ These relations follow from Grassmann feature maps of subspaces (Harandi et al., [2015](https://arxiv.org/html/2310.18737v2#bib.bib17)). ∎
132
+
133
+ ###### Property 4.
+
+ As the complement of $\mathbf{P}^{\dagger}\mathbf{P}$ is $\mathds{I}-\mathbf{P}^{\dagger}\mathbf{P}$, the distance of $\bm{\phi}$ to the complement basis of subspace $\mathbf{P}$ is $\lVert\mathbf{P}^{\dagger}\mathbf{P}\bm{\phi}\rVert_{2}$. Thus $\bm{\phi}''=(\mathds{I}-\mathbf{P}^{\dagger}\mathbf{P})\bm{\phi}$ is a vector complementary to $\bm{\phi}'$, _i.e_., $\bm{\phi}'+\bm{\phi}''=\bm{\phi}$.
+
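Properties 2–4 can be illustrated with one concrete construction. The NumPy sketch below (an assumption, not the paper's exact sampler) scales a matrix with orthonormal rows by $\sqrt{K/K'}$, so that $\mathbf{P}^{\dagger}=\frac{K'}{K}\mathbf{P}^{T}$ holds as in Property 2, and then verifies the projection–retraction decomposition $\bm{\phi}=\bm{\phi}'+\bm{\phi}''$.

```python
import numpy as np

rng = np.random.default_rng(1)
K, K_prime = 32, 8

# One construction consistent with Property 2 (an assumption, not the paper's
# exact sampler): orthonormal rows scaled by sqrt(K/K'), so pinv(P) = (K'/K) P^T.
Q, _ = np.linalg.qr(rng.standard_normal((K, K_prime)))  # K x K', orthonormal columns
P = np.sqrt(K / K_prime) * Q.T                          # K' x K

P_pinv = (K_prime / K) * P.T        # Property 2: the simple pseudo-inverse
proj = P_pinv @ P                   # P†P: projection followed by retraction

phi = rng.standard_normal(K)
phi_p = proj @ phi                  # phi'  = P†P phi (lossy, Property 3)
phi_pp = phi - proj @ phi           # phi'' = (I - P†P) phi (complement, Property 4)

print(np.allclose(P_pinv, np.linalg.pinv(P)))  # the closed form matches numpy's pinv
print(np.allclose(phi_p + phi_pp, phi))        # phi' + phi'' = phi
```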
+ (Figure 7, panels (a)–(f).)
+
+ Figure 7: (top) Comparison of errors of ROP by sketching _vs_. masking. 5000 images (normalized to the range [0, 1]), randomly sampled from CIFAR10, were each divided into $16\times 16$ tokens. A sketching ratio $\rho=.25$ was applied, which corresponds to a masking ratio $1-\rho=.75$. Fig. [7(a)](https://arxiv.org/html/2310.18737v2#S3.F7.sf1) shows histogram-binned $\ell_{1}$ errors (green) computed between tokens in $\mathbf{X}$ and their sketch projected–retracted lossy versions $\mathbf{P}^{\dagger}\mathbf{P}\mathbf{X}$; the $\ell_{1}$ errors (red) between tokens in $\mathbf{X}$ and their masked versions are also shown. As is clear, masking produces many locations with zero error, whereas sketching introduces some error to every token. Fig. [7(b)](https://arxiv.org/html/2310.18737v2#S3.F7.sf2) shows histogram counts of tokens for which the reconstruction error was greater than 0.1. Clearly, sketching modified more regions than masking. Fig. [7(c)](https://arxiv.org/html/2310.18737v2#S3.F7.sf3) is as [7(a)](https://arxiv.org/html/2310.18737v2#S3.F7.sf1) but the reconstruction error of each token is normalized by the number of tokens in the image with error greater than 0.1. Clearly, sketching introduces less error per token but modifies far more spatial locations than masking, which explains why ROP is superior to masking. (bottom) The same analysis for complement sketching and unmasking. Figs. [7(d)](https://arxiv.org/html/2310.18737v2#S3.F7.sf4), [7(e)](https://arxiv.org/html/2310.18737v2#S3.F7.sf5) and [7(f)](https://arxiv.org/html/2310.18737v2#S3.F7.sf6) again show that ROP operates on more regions spatially than unmasking. Unlike other operations, ROP enjoys an easy complement sketching, in analogy to unmasking.
+
+ ### 3.2 Problem Formulation
+
+ We employ the standard vision transformer (Dosovitskiy et al., [2021](https://arxiv.org/html/2310.18737v2#bib.bib14)) for our Random Orthogonal Projection Image Modeling (ROPIM). Figure [6](https://arxiv.org/html/2310.18737v2#S3.F6) provides an overview of the ROPIM pipeline.
+
+ Algorithm 1 Random Orthogonal Projection Image Modeling (ROPIM).
+
+ Input: $\mathcal{D}_{\text{train}}$: training dataset; $\tau$: iterations; $\rho$: sketching ratio; set $K'=\rho K$.
+
+ for $t=1,\cdots,\tau$ do
+   $\mathbf{X}\sim\mathcal{D}_{\text{train}}$ (draw an image with tokens)
+   $\mathbf{P}\sim\mathcal{P}(K')$ (draw proj. matrix (Propos. [1](https://arxiv.org/html/2310.18737v2#Thmprop1)))
+   Update the main network branch by Eq. ([4](https://arxiv.org/html/2310.18737v2#S3.E4)): $\operatorname{arg\,min}_{\mathbf{\Theta}^{*}}L_{\text{rec}}(\mathbf{X};\mathbf{\Theta}^{*},\mathbf{P})$
+ end for
+
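The per-sample loop of Algorithm 1 can be sketched as follows in NumPy, with a toy stand-in `f` for the ViT encoder $f_{\mathbf{\Theta}}$, random (untrained) projections `W`, `Ws`, and an assumed orthonormal-row construction for $\mathbf{P}$; the gradient update of the parameters is omitted and all sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N, D_in, D = 196, 768, 384      # tokens per image, patch dim, embed dim (assumed)
rho = 1 / 7
K_prime = int(rho * N)          # K' = rho * K, with K = N tokens

W = rng.standard_normal((D_in, D)) * 0.02    # input projection W (learnable in practice)
Ws = rng.standard_normal((D, D_in)) * 0.02   # output projection W* (learnable in practice)

def draw_P(K, K_prime, rng):
    # Random projection with pinv(P) = (K'/K) P^T (construction assumed, cf. Property 2).
    Q, _ = np.linalg.qr(rng.standard_normal((K, K_prime)))
    return np.sqrt(K / K_prime) * Q.T

def f(Phi):
    # Toy stand-in for the ViT encoder f_Theta.
    return np.tanh(Phi)

losses = []
for t in range(3):                            # tau iterations (tiny, for illustration)
    X = rng.standard_normal((N, D_in))        # X ~ D_train: one image as N tokens
    P = draw_P(N, K_prime, rng)               # fresh P per sample
    proj = ((K_prime / N) * P.T) @ P          # P†P
    X_tilde = f(proj @ X @ W) @ Ws            # Eq. (5)
    residual = (np.eye(N) - proj) @ (X - X_tilde)
    losses.append(np.abs(residual).mean())    # Eq. (4): l1 loss; gradient step omitted
print(losses)
```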
+ For an image with dimensions $H\times W\times C$ representing height, width, and the number of channels, we extract a series of 2D patches, reshape them into vectors, and stack them into a matrix $\mathbf{X}\in\mathbb{R}^{N\times(P^{2}\cdot C)}$, where $P\times P$ is the patch size and $N$ is the number of patches extracted, with the goal of forming patch embeddings. Patch embeddings are obtained as:
+
+ $$\mathbf{\Phi}=\mathbf{X}\mathbf{W},\qquad(1)$$
+
+ where $\mathbf{W}\in\mathbb{R}^{(P^{2}\cdot C)\times D}$ is the linear projection matrix used to obtain the matrix of embeddings $\mathbf{\Phi}\in\mathbb{R}^{N\times D}$. In MIM, a random portion of the input patch embeddings is replaced with a MASK token, and a network is then trained to recover the masked patches.
+
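The patch extraction above can be sketched in NumPy as follows (shapes for a 224×224×3 image with $P=16$; the learnable $\mathbf{W}$ is random here purely for illustration):

```python
import numpy as np

def patchify(img, P):
    """Split an H x W x C image into non-overlapping P x P patches and
    stack them as rows of X with shape (N, P*P*C), N = (H/P)*(W/P)."""
    H, W, C = img.shape
    assert H % P == 0 and W % P == 0
    x = img.reshape(H // P, P, W // P, P, C)
    x = x.transpose(0, 2, 1, 3, 4)               # (H/P, W/P, P, P, C)
    return x.reshape(-1, P * P * C)              # (N, P^2 * C)

rng = np.random.default_rng(0)
img = rng.standard_normal((224, 224, 3))
X = patchify(img, 16)
print(X.shape)        # (196, 768)

# Patch embeddings, Eq. (1): Phi = X W, with a random W for illustration.
D = 384
W = rng.standard_normal((16 * 16 * 3, D)) * 0.02
Phi = X @ W
print(Phi.shape)      # (196, 384)
```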
+ In this paper, instead of patch-wise masking, we apply ROP with the sketching ratio $\rho=\frac{K'}{K}$, which determines the lossy effect on the projected embeddings. We apply the ROP operation along the spatial mode of the matrix of embeddings. Specifically, we perform the projection followed by retraction, as explained in Property [3](https://arxiv.org/html/2310.18737v2#Thmproperty3):
+
+ $$\mathbf{\Phi}'=\mathbf{P}^{\dagger}\mathbf{P}\mathbf{X}\mathbf{W},\qquad(2)$$
+
+ where $\mathbf{P}$ is the unitary projection matrix with a trivial pseudo-inverse (Property [2](https://arxiv.org/html/2310.18737v2#Thmproperty2)). Matrix $\mathbf{\Phi}'$ contains our projected embeddings of patches. Note that $\mathbf{\Phi}'\in\mathbb{R}^{N\times D}$ and $\mathbf{\Phi}\in\mathbb{R}^{N\times D}$ have the same size.
+
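A small NumPy illustration of Eq. (2) along the spatial mode (the orthonormal-row construction of $\mathbf{P}$ is an assumption): $\mathbf{\Phi}'$ keeps the size of $\mathbf{\Phi}$, but its rank drops to at most $K'$, which is what makes the operation lossy.

```python
import numpy as np

rng = np.random.default_rng(0)
N, D = 196, 768          # tokens and embedding dim (assumed ViT-like sizes)
rho = 1 / 7
K_prime = int(rho * N)   # K' = 28

# Orthonormal-row construction for P (assumed), acting along the token (spatial) mode.
Q, _ = np.linalg.qr(rng.standard_normal((N, K_prime)))
P = np.sqrt(N / K_prime) * Q.T                 # K' x N

Phi = rng.standard_normal((N, D))              # embeddings Phi = X W
sketched = P @ Phi                             # K' x D: compressed along tokens
Phi_p = ((K_prime / N) * P.T) @ sketched       # retraction: Phi' = P†P Phi

print(Phi_p.shape)                             # (196, 768): same size as Phi
print(np.linalg.matrix_rank(Phi_p))            # at most K' = 28: the projection is lossy
```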
+ Then we add the class token and positional embeddings to $\mathbf{\Phi}'$, and we pass $\mathbf{\Phi}'$ into the transformer, _i.e_., $\mathbf{\Psi}=f(\mathbf{\Phi}')$. We use a linear prediction head to reconstruct the raw pixel values via the $\ell_{1}$ loss. Specifically, we apply:
+
+ $$\widetilde{\widetilde{\mathbf{X}}}=(\mathds{I}-\mathbf{P}^{\dagger}\mathbf{P})f(\mathbf{\Phi}')\mathbf{W}^{*},\qquad(3)$$
+
+ where $\mathbf{W}^{*}\in\mathbb{R}^{D\times(P^{2}\cdot C)}$ is the output linear projection, and $(\mathds{I}-\mathbf{P}^{\dagger}\mathbf{P})\mathbf{\Psi}$ is the complement map of $\mathbf{P}^{\dagger}\mathbf{P}$ (Property [4](https://arxiv.org/html/2310.18737v2#Thmproperty4)), which explicitly promotes the recovery of the removed input information. Figure [8](https://arxiv.org/html/2310.18737v2#S3.F8) illustrates the effects of Eq. ([2](https://arxiv.org/html/2310.18737v2#S3.E2)) & ([3](https://arxiv.org/html/2310.18737v2#S3.E3)) on images and patches within. We skip $\mathbf{W}$ & $\mathbf{W}^{*}$ for clarity.
+
+ Figure 8: The effect of ROP on images sampled from ImageNet (first column). Second/fourth columns: images after applying Eq. ([2](https://arxiv.org/html/2310.18737v2#S3.E2)) with $\mathbf{W}=\mathds{I}$, _i.e_., $\mathbf{P}^{\dagger}\mathbf{P}\mathbf{X}$ (sketching ratio $\rho=.5$ and $\rho=.75$, respectively). Third/fifth columns: images after applying the complement of Eq. ([2](https://arxiv.org/html/2310.18737v2#S3.E2)), _i.e_., $(\mathds{I}-\mathbf{P}^{\dagger}\mathbf{P})\mathbf{X}$ (for $\rho=.5$ and $\rho=.75$, respectively). Notice that adding the images in columns two and three (or four and five) recovers the original images.
+
+ Combining the above steps from Eq. ([2](https://arxiv.org/html/2310.18737v2#S3.E2)) and ([3](https://arxiv.org/html/2310.18737v2#S3.E3)), we obtain the ROPIM pipeline:
+
+ $$L_{\text{rec}}(\mathbf{X};\mathbf{\Theta}^{*},\mathbf{P})=\lVert(\mathds{I}-\mathbf{P}^{\dagger}\mathbf{P})(\mathbf{X}-\widetilde{\mathbf{X}})\rVert_{1},\qquad(4)$$
+
+ $$\widetilde{\mathbf{X}}=f_{\mathbf{\Theta}}(\mathbf{P}^{\dagger}\mathbf{P}\mathbf{X}\mathbf{W})\mathbf{W}^{*}.\qquad(5)$$
+
+ Notice that $\widetilde{\widetilde{\mathbf{X}}}=(\mathds{I}-\mathbf{P}^{\dagger}\mathbf{P})\widetilde{\mathbf{X}}$, but we move the complement ROP (with its inverse), $\mathds{I}-\mathbf{P}^{\dagger}\mathbf{P}$, into Eq. ([4](https://arxiv.org/html/2310.18737v2#S3.E4)), as the complement ROP also has to be applied to $\mathbf{X}$ in order to promote only the recovery of lost information. $L_{\text{rec}}(\mathbf{X};\mathbf{\Theta}^{*},\mathbf{P})$ is the reconstruction loss we minimize w.r.t. $\mathbf{\Theta}^{*}\equiv\{\mathbf{\Theta},\mathbf{W},\mathbf{W}^{*}\}$, that is, the network parameters $\mathbf{\Theta}$ and the linear projection matrices $\mathbf{W}$ and $\mathbf{W}^{*}$. Notice that $\widetilde{\mathbf{X}}$ depends on several arguments, _i.e_., $\widetilde{\mathbf{X}}\equiv\widetilde{\mathbf{X}}(\mathbf{X};\mathbf{\Theta}^{*},\mathbf{P})$, but we drop them in subsequent equations for brevity.
+
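The role of the complement map in Eq. (4) can be verified numerically: since $(\mathds{I}-\mathbf{P}^{\dagger}\mathbf{P})\mathbf{P}^{\dagger}\mathbf{P}=\mathbf{0}$, perturbing the prediction inside the retained subspace leaves the loss unchanged, so the loss scores only the recovery of removed information. A NumPy sketch, with an assumed orthonormal-row $\mathbf{P}$ and a random stand-in for the network prediction:

```python
import numpy as np

rng = np.random.default_rng(0)
N, D = 49, 16
K_prime = 7

Q, _ = np.linalg.qr(rng.standard_normal((N, K_prime)))
P = np.sqrt(N / K_prime) * Q.T            # assumed ROP matrix construction
proj = ((K_prime / N) * P.T) @ P          # P†P
C = np.eye(N) - proj                      # complement map (I - P†P)

X = rng.standard_normal((N, D))
X_tilde = rng.standard_normal((N, D))     # stand-in for the prediction of Eq. (5)

def L_rec(X, X_tilde):
    return np.abs(C @ (X - X_tilde)).sum()   # Eq. (4)

base = L_rec(X, X_tilde)
# Perturb the prediction inside the retained subspace span(P†P): the loss is
# unchanged, i.e. Eq. (4) only scores recovery of the *removed* component.
perturbed = X_tilde + proj @ rng.standard_normal((N, D))
print(abs(L_rec(X, perturbed) - base))    # ≈ 0
```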
+ ROPIM is given by Alg. [1](https://arxiv.org/html/2310.18737v2#alg1 "Algorithm 1 ‣ 3.2 Problem Formulation ‣ 3 Approach ‣ Pre-training with Random Orthogonal Projection Image Modeling"). We skip mini-batch level operations for simplicity. Note that for each image sample, we draw a new projection matrix 𝐏 𝐏\mathbf{P}bold_P according to Proposition [1](https://arxiv.org/html/2310.18737v2#Thmprop1 "Proposition 1. ‣ 3.1 Preliminaries ‣ 3 Approach ‣ Pre-training with Random Orthogonal Projection Image Modeling") and then we simply minimize the reconstruction loss from Eq. ([4](https://arxiv.org/html/2310.18737v2#S3.E4 "In 3.2 Problem Formulation ‣ 3 Approach ‣ Pre-training with Random Orthogonal Projection Image Modeling")).
+
+ 4 Experiments
+ -------------
+
+ ### 4.1 Datasets
+
+ We perform self-supervised pre-training on ImageNet-1k (Russakovsky et al., [2015](https://arxiv.org/html/2310.18737v2#bib.bib27)). For further ablation studies we use ImageNet100 (Tian et al., [2020](https://arxiv.org/html/2310.18737v2#bib.bib28)) to pre-train a smaller variant of ViT. We also use the iNaturalist 2017 (Van Horn et al., [2018](https://arxiv.org/html/2310.18737v2#bib.bib31)) classification dataset and the ADE20K segmentation dataset (Zhou et al., [2019](https://arxiv.org/html/2310.18737v2#bib.bib41)) for large-scale network and dataset evaluations. Flowers102 (Nilsback & Zisserman, [2008](https://arxiv.org/html/2310.18737v2#bib.bib25)), CUB-200 (Wah et al., [2011](https://arxiv.org/html/2310.18737v2#bib.bib34)) and CIFAR10/100 are used for evaluation in the smaller-scale experiments discussed in Appendix [B](https://arxiv.org/html/2310.18737v2#A2), Table [6](https://arxiv.org/html/2310.18737v2#A2.T6).
+
+ ImageNet-1K (Russakovsky et al., [2015](https://arxiv.org/html/2310.18737v2#bib.bib27)), as used by us, is ILSVRC-2012 with 1k classes and 1.3M images. ADE20K (Zhou et al., [2019](https://arxiv.org/html/2310.18737v2#bib.bib41)) is a semantic segmentation dataset comprising 150 semantic categories, with 20K training images, 2K validation images, and 3K test images. The iNaturalist 2017 (iNat17) dataset (Van Horn et al., [2018](https://arxiv.org/html/2310.18737v2#bib.bib31)) contains images from 5,089 fine-grained categories of different species of plants and animals. Those categories are annotated with 13 super-categories, and the dataset includes 579,184 training images and 95,986 validation images. CIFAR10/CIFAR100 (Krizhevsky et al., [2009](https://arxiv.org/html/2310.18737v2#bib.bib20)) consist of 50K training and 10K testing images of resolution 32×32 from 10 and 100 classes, respectively. ImageNet100 is a subset of the ImageNet Large Scale Visual Recognition Challenge 2012 dataset. It contains 100 random classes proposed by Tian _et al_. (Tian et al., [2020](https://arxiv.org/html/2310.18737v2#bib.bib28)). The ImageNet100 train and validation sets contain 1300 and 50 images per class, respectively.
+
+ Table 1: Top-1 classification accuracy on ImageNet-1k. ∗BEiT and CIM need an additional stage to pre-train dVAE tokenizer.
+
+ | Method | Backbone | Pre-training epochs | Fine-tuning Top-1 acc (%) |
+ |---|---|---|---|
+ | Supervised (Touvron et al., [2021](https://arxiv.org/html/2310.18737v2#bib.bib30)) | ViT-B/16 | – | 81.8 |
+ | DINO (Caron et al., [2021a](https://arxiv.org/html/2310.18737v2#bib.bib4)) | ViT-B/16 | 1600 | 82.8 |
+ | MoCo v3 (Chen et al., [2021b](https://arxiv.org/html/2310.18737v2#bib.bib10)) | ViT-B/16 | 600 | 83.2 |
+ | BEiT∗ (Bao et al., [2022](https://arxiv.org/html/2310.18737v2#bib.bib2)) | ViT-B/16 | 300 (+dVAE) | 82.9 |
+ | BEiT∗ (Bao et al., [2022](https://arxiv.org/html/2310.18737v2#bib.bib2)) | ViT-B/16 | 800 (+dVAE) | 83.2 |
+ | MAE (He et al., [2022](https://arxiv.org/html/2310.18737v2#bib.bib18)) | ViT-B/16 | 800 | 83.1 |
+ | MFM (Xie et al., [2023](https://arxiv.org/html/2310.18737v2#bib.bib38)) | ViT-B/16 | 300 | 83.1 |
+ | CIM-RESPIX∗ (Fang et al., [2023](https://arxiv.org/html/2310.18737v2#bib.bib15)) | ViT-B/16 | 300 (+dVAE) | 83.3 |
+ | CIM-REVDET∗ (Fang et al., [2023](https://arxiv.org/html/2310.18737v2#bib.bib15)) | ViT-B/16 | 300 (+dVAE) | 83.3 |
+ | ROPIM | ViT-B/16 | 300 | 83.5 |
+ | ROPIM | ViT-B/16 | 500 | 83.7 |
+ | ROPIM | ViT-B/16 | 800 | 84.0 |
+ | Supervised (Touvron et al., [2021](https://arxiv.org/html/2310.18737v2#bib.bib30)) | ViT-S/16 | – | 79.9 |
+ | DINO (Caron et al., [2021a](https://arxiv.org/html/2310.18737v2#bib.bib4)) | ViT-S/16 | 1600 | 81.5 |
+ | MoCo v3 (Chen et al., [2021b](https://arxiv.org/html/2310.18737v2#bib.bib10)) | ViT-S/16 | 600 | 81.4 |
+ | BEiT∗ (Bao et al., [2022](https://arxiv.org/html/2310.18737v2#bib.bib2)) | ViT-S/16 | 300 (+dVAE) | 81.3 |
+ | CIM-RESPIX∗ (Fang et al., [2023](https://arxiv.org/html/2310.18737v2#bib.bib15)) | ViT-S/16 | 300 (+dVAE) | 81.5 |
+ | CIM-REVDET∗ (Fang et al., [2023](https://arxiv.org/html/2310.18737v2#bib.bib15)) | ViT-S/16 | 300 (+dVAE) | 81.6 |
+ | ROPIM | ViT-S/16 | 300 | 81.8 |
+ | ROPIM | ViT-S/16 | 500 | 82.0 |
+
+ ### 4.2 Experimental Settings
+
+ We conduct our experiments on ImageNet-1k with ViT-Base (ViT-B) and ViT-Small (ViT-S) (Dosovitskiy et al., [2021](https://arxiv.org/html/2310.18737v2#bib.bib14)). For further ablation studies we use ViT-Tiny (ViT-T) (Dosovitskiy et al., [2021](https://arxiv.org/html/2310.18737v2#bib.bib14)), which has 12 layers, 3 heads, an embedding size of 192, and a total of 5.6M parameters. The patch size of all ViT models is 16×16, indicated by ‘/16’. In all of our experiments we use relative positional embedding (Dosovitskiy et al., [2021](https://arxiv.org/html/2310.18737v2#bib.bib14)).
+
+ We train our models using the AdamW optimizer, a weight decay of 0.05, $\beta_{1}=0.9$, $\beta_{2}=0.95$, and a cosine learning rate scheduler. ViT-B and ViT-S are pre-trained with an initial 10-epoch linear warm-up and a batch size of 1520. For ROPIM, the sketching ratio $\rho=\frac{1}{7}$ is used unless otherwise mentioned.
+
+ After pre-training, we evaluate our models on image classification and segmentation benchmarks with end-to-end fine-tuning. Detailed hyperparameters are available in Appendix [D](https://arxiv.org/html/2310.18737v2#A4). DAE representations, _e.g_., MIM, are strong nonlinear features and perform well when a nonlinear head is tuned (He et al., [2022](https://arxiv.org/html/2310.18737v2#bib.bib18)). On the other hand, linear probing results of contrastive SSL methods are not well correlated with their transfer learning performance (Chen & He, [2021a](https://arxiv.org/html/2310.18737v2#bib.bib8)). Therefore, similar to prior work (Fang et al., [2023](https://arxiv.org/html/2310.18737v2#bib.bib15); He et al., [2022](https://arxiv.org/html/2310.18737v2#bib.bib18); Xie et al., [2022](https://arxiv.org/html/2310.18737v2#bib.bib39)), fine-tuning results are the main focus of this paper.
+
+ ### Baselines
+
+ We compare our ROPIM against several state-of-the-art self-supervised pre-training methods including DINO (Caron et al., [2021a](https://arxiv.org/html/2310.18737v2#bib.bib4)), MoCo v3 (Chen et al., [2021b](https://arxiv.org/html/2310.18737v2#bib.bib10)), BEiT (Bao et al., [2022](https://arxiv.org/html/2310.18737v2#bib.bib2)), MAE (He et al., [2022](https://arxiv.org/html/2310.18737v2#bib.bib18)), MFM (Xie et al., [2023](https://arxiv.org/html/2310.18737v2#bib.bib38)) and CIM (Fang et al., [2023](https://arxiv.org/html/2310.18737v2#bib.bib15)). We also included models trained in a supervised setting, denoted as “Supervised”, where a classification or segmentation head is used for training. Notice that the “Supervised” baselines do not use the image decoder.
+
+ ### 4.3 Comparison to the State of the Art
+
+ Table [1](https://arxiv.org/html/2310.18737v2#S4.T1 "Table 1 ‣ 4.1 Datasets ‣ 4 Experiments ‣ Pre-training with Random Orthogonal Projection Image Modeling") shows the comparison of our ROPIM method with the current self-supervised pre-training approaches on ImageNet-1k. For a fair comparison, the results are reported for a similar backbone, _i.e_. ViT-B or ViT-S. Using ViT-B, our approach achieves a top-1 classification accuracy of 83.5% and 83.7% for 300 and 500 pre-training epochs respectively, outperforming all other baselines without requiring an additional dVAE training to be used as the tokenizer network or a large decoder.
+
+ #### 4.3.1 Transfer Learning
+
+ Table 2: Top-1 class. acc. of iNaturalist17, CIFAR10 & CIFAR100 by fine-tuning pre-trained ViT-B/16 (ImageNet-1K).
+
+ To evaluate the pre-trained models for transfer learning (SSL methods refer to pre-training and fine-tuning on separate datasets as transfer learning (Grill et al., [2020](https://arxiv.org/html/2310.18737v2#bib.bib16); Caron et al., [2021a](https://arxiv.org/html/2310.18737v2#bib.bib4); He et al., [2022](https://arxiv.org/html/2310.18737v2#bib.bib18))), we study the performance of our pre-trained ViT-B model on two large-scale classification and segmentation datasets, _i.e_., iNaturalist 2017 and ADE20K.
+
+ Classification. Table [2](https://arxiv.org/html/2310.18737v2#S4.T2) shows the classification accuracy on iNaturalist 2017, CIFAR10 and CIFAR100 when fine-tuning the model pre-trained on ImageNet-1k. Compared to the accuracy reported in MAE (He et al., [2022](https://arxiv.org/html/2310.18737v2#bib.bib18)) with the same backbone, _i.e_., ViT-B/16, pre-trained for 1600 epochs, we achieve +0.8%, +0.6% and +3.2% improvements on iNaturalist, CIFAR10 and CIFAR100, respectively, while using a model pre-trained for only 300 epochs.
+
+ Table 3: ADE20K semantic segmentation (mIoU) using UperNet (Xiao et al., [2018](https://arxiv.org/html/2310.18737v2#bib.bib37)). (Baselines from MAE (He et al., [2022](https://arxiv.org/html/2310.18737v2#bib.bib18)).)
+
+ Semantic segmentation. The efficiency of our proposed ROPIM for transfer learning on a segmentation task is presented in Table [3](https://arxiv.org/html/2310.18737v2#S4.T3 "Table 3 ‣ 4.3.1 Transfer Learning ‣ 4.3 Comparison to the State of the Art ‣ 4 Experiments ‣ Pre-training with Random Orthogonal Projection Image Modeling"). Following (He et al., [2022](https://arxiv.org/html/2310.18737v2#bib.bib18); Xie et al., [2022](https://arxiv.org/html/2310.18737v2#bib.bib39)), we run experiments on ADE20K using UperNet (Xiao et al., [2018](https://arxiv.org/html/2310.18737v2#bib.bib37)). Table [3](https://arxiv.org/html/2310.18737v2#S4.T3 "Table 3 ‣ 4.3.1 Transfer Learning ‣ 4.3 Comparison to the State of the Art ‣ 4 Experiments ‣ Pre-training with Random Orthogonal Projection Image Modeling") shows that our pre-training improves results over supervised pre-training, tokenizer-based BEiT, and MAE approaches with less pre-training time.
+
+
+ Figure 9: The effect of ROP on images sampled from ImageNet (first column) and patches within. Second column: images after applying Eq. ([2](https://arxiv.org/html/2310.18737v2#S3.E2)) with $\mathbf{W}=\mathds{I}$, _i.e_., $\mathbf{P}^{\dagger}\mathbf{P}\mathbf{X}$ (sketching ratio $\rho=.25$). Third column: images after recovering the complement of Eq. ([5](https://arxiv.org/html/2310.18737v2#S3.E5)), _i.e_., $(\mathds{I}-\mathbf{P}^{\dagger}\mathbf{P})f_{\mathbf{\Theta}}(\cdot)\mathbf{W}^{*}$. Fourth column: reconstructed images (second and third columns added).
+
+ For ADE20K segmentation we use the AdamW optimizer, a weight decay of 0.05, a batch size of 16, and a grid search for the learning rate. All models are trained for 160K iterations with an input resolution of 512×512. Following (Bao et al., [2022](https://arxiv.org/html/2310.18737v2#bib.bib2); Xie et al., [2022](https://arxiv.org/html/2310.18737v2#bib.bib39)), we initialize the segmentation models using the model weights obtained after supervised fine-tuning on ImageNet-1K.
+
+ ### 4.4 Visualization
+
+ Figure [9](https://arxiv.org/html/2310.18737v2#S4.F9) displays sample images, their sketched images (by “sketched image” we mean that ROP was applied along the spatial mode, followed by inverse ROP), the predicted images after complement count sketching (by “complement sketching” we mean that we applied the complement of the chosen random projection along the spatial mode, followed by its inverse), and the final reconstructed images as “sketched image + predicted complement-sketched image”. Note that the sketched image is the data available to the network. The visible regions represent non-removed information, whereas the corrupted parts are regions where the lossy nature of sketching removed information. Similarly, the predicted complement space can be seen as information, removed from the input image, that is predicted by the network. As can be seen, combining the sketched images with the corresponding recovered complement-sketched images produces reconstructed images that are visually very close to the originals. The advantage of using ROP is that, as a lossy projection, it is well characterized by the noise variance (the lost information is characterized by the bound on the noise variance). At the same time, the complement of the random subspace enables recovery of the lost information.
+
 + ### 4.5 Further Experiments and Ablation Studies
305
+
306
+ Table 4: Top-1 classification acc. on ImageNet100. ∗BEiT tokenizer trained on ImageNet-1K, N/A to other methods.
307
+
308
 + To conduct further experiments and ablation studies, in addition to ViT-S and ViT-B, we train a smaller ViT variant, ViT-T, on ImageNet100. ViT-T is pre-trained for 800 epochs with a batch size of 512. The “Supervised” results for ViT-T are obtained by training the model from scratch with randomly initialized weights for 800 epochs and a grid search for the best-performing base learning rate. We apply all data augmentation policies used during our fine-tuning for “Supervised” training.
309
+
310
 + Table [4](https://arxiv.org/html/2310.18737v2#S4.T4 "Table 4 ‣ 4.5 Further Experiments and Ablation Studies ‣ 4 Experiments ‣ Pre-training with Random Orthogonal Projection Image Modeling") shows the performance of our implementation of the pre-training and fine-tuning baselines on the ImageNet100 dataset with ViT-T as the backbone. Note that for pre-training on ImageNet100 we had to use the tokenizer trained on ImageNet-1K to report the BEiT (Bao et al., [2022](https://arxiv.org/html/2310.18737v2#bib.bib2)) results; thus, the BEiT results in this case are for reference only. For a fair comparison with other methods using ViT-T, we followed the procedure in (He et al., [2022](https://arxiv.org/html/2310.18737v2#bib.bib18); Xie et al., [2022](https://arxiv.org/html/2310.18737v2#bib.bib39)) where a grid search over the same set of hyper-parameters is applied to all methods (Bao et al., [2022](https://arxiv.org/html/2310.18737v2#bib.bib2)). For all baselines we run both pre-training and fine-tuning with their default (best performing) setup.
311
+
312
 + Sketching ratio ρ. Table [9](https://arxiv.org/html/2310.18737v2#A3.T9 "Table 9 ‣ C.2 Ablations on different values of sketching ratio 𝜌 ‣ Appendix C Further ablations studies ‣ Pre-training with Random Orthogonal Projection Image Modeling") of Appendix [C](https://arxiv.org/html/2310.18737v2#A3 "Appendix C Further ablations studies ‣ Pre-training with Random Orthogonal Projection Image Modeling") shows an ablation study on different values of ρ for different backbones and datasets. We observed that ρ = 1/7 achieves the highest performance; hence, this value was used for all experiments unless otherwise mentioned.
313
+
314
 + Increasing the number of pre-training and fine-tuning epochs. Table [1](https://arxiv.org/html/2310.18737v2#S4.T1 "Table 1 ‣ 4.1 Datasets ‣ 4 Experiments ‣ Pre-training with Random Orthogonal Projection Image Modeling") shows a consistent improvement in the performance of ROPIM on ImageNet-1K when increasing the number of pre-training epochs. Top-1 accuracies on ImageNet100 with varying numbers of pre-training and fine-tuning epochs are shown in Tables [7](https://arxiv.org/html/2310.18737v2#A3.T7 "Table 7 ‣ C.1 The effect of increasing number of pre-training and fine-tuning epochs ‣ Appendix C Further ablations studies ‣ Pre-training with Random Orthogonal Projection Image Modeling") and [8](https://arxiv.org/html/2310.18737v2#A3.T8 "Table 8 ‣ C.1 The effect of increasing number of pre-training and fine-tuning epochs ‣ Appendix C Further ablations studies ‣ Pre-training with Random Orthogonal Projection Image Modeling") of Appendix [C](https://arxiv.org/html/2310.18737v2#A3 "Appendix C Further ablations studies ‣ Pre-training with Random Orthogonal Projection Image Modeling").
315
+
316
 + The final gist. Consider a masking problem with just 2 tokens. For standard binary masking, there are 2² = 4 unique masking patterns in total. Assuming a 50% masking ratio limits the patterns to a mere 2. However, if masking has a more “continuous” nature (_e.g_., as in ROPIM), one can get, for example, {0%, 25%, 50%, 75%, 100%} of the original energy preserved per token, which gives 5² = 25 unique masking patterns. Under a 50% masking ratio (assuming it equals 50% lost information), that yields the pattern pairs (0%, 100%), (25%, 75%), (50%, 50%), (75%, 25%), (100%, 0%) (5 in total). ROPIM has a similar effect to this toy example, but in addition (i) it provides complementary “unmasking” patterns, (ii) with bounded/known variance of the implicitly injected noise as per Properties [1](https://arxiv.org/html/2310.18737v2#Thmproperty1 "Property 1. ‣ 3.1 Preliminaries ‣ 3 Approach ‣ Pre-training with Random Orthogonal Projection Image Modeling")–[4](https://arxiv.org/html/2310.18737v2#Thmproperty4 "Property 4. ‣ 3.1 Preliminaries ‣ 3 Approach ‣ Pre-training with Random Orthogonal Projection Image Modeling").
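The counting argument above can be checked by direct enumeration (a small sketch; the five energy levels are those of the toy example, not ROPIM's actual continuum):

```python
from itertools import product

levels = (0, 25, 50, 75, 100)        # % of original energy preserved per token
patterns = list(product(levels, repeat=2))
assert len(patterns) == 5 ** 2       # 25 "continuous" patterns vs. 2**2 = 4 binary ones

# Patterns whose mean preserved energy is 50% (i.e., 50% information lost):
half = [p for p in patterns if sum(p) / 2 == 50]
assert half == [(0, 100), (25, 75), (50, 50), (75, 25), (100, 0)]  # 5 pairs

# Binary masking under the same 50% ratio admits only 2 of them:
binary = [p for p in half if set(p) <= {0, 100}]
assert len(binary) == 2
```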
317
+
318
+ 5 Conclusions
319
+ -------------
320
+
321
 + We have presented Random Orthogonal Projection Image Modeling (ROPIM), a self-supervised pre-training method for Vision Transformers (ViTs). In contrast to the popular MIM techniques, ROPIM applies count sketching to project features of patch embeddings along their spatial mode into a random subspace (formed according to the sketch matrix principles) and subsequently retracts them from the subspace into the original feature space. ROPIM incurs minimal computational overhead while providing richer masking-unmasking patterns with a guaranteed bounded variance of the noise; _e.g_., spatially, we can “touch” more tokens than masking does, creating richer masking patterns. We quantify how much we corrupt these tokens, and we have a theoretically guaranteed “unsketching” mechanism in analogy to unmasking. ROPIM does not require customized architecture designs, heavy decoders, or tokenizer networks. We hope ROPIM can inspire further research devoted to improving corruption strategies for self-supervised network pre-training.
322
+
323
+ ACKNOWLEDGMENTS
324
+ ---------------
325
+
326
+ This work was funded by CSIRO’s Machine Learning and Artificial Intelligence Future Science Platform (MLAI FSP). MH acknowledges ongoing support from the QUT School of Electrical Engineering and Robotics and the QUT SAIVT (Signal Processing, AI and Vision Technologies) lab.
327
+
328
+ References
329
+ ----------
330
+
331
 + * Baevski et al. (2022) Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu, and Michael Auli. Data2vec: A general framework for self-supervised learning in speech, vision and language. In _International Conference on Machine Learning_, pp. 1298–1312. PMLR, 2022.
332
+ * Bao et al. (2022) Hangbo Bao, Li Dong, Songhao Piao, and Furu Wei. BEit: BERT pre-training of image transformers. In _International Conference on Learning Representations_, 2022.
333
+ * Brown et al. (2020) Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. _Advances in neural information processing systems_, 33:1877–1901, 2020.
334
+ * Caron et al. (2021a) Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jégou, Julien Mairal, Piotr Bojanowski, and Armand Joulin. Emerging properties in self-supervised vision transformers. In _Proceedings of the IEEE/CVF International Conference on Computer Vision_, pp. 9650–9660, 2021a.
335
+ * Caron et al. (2021b) Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jégou, Julien Mairal, Piotr Bojanowski, and Armand Joulin. Emerging properties in self-supervised vision transformers. In _Proceedings of the IEEE/CVF International Conference on Computer Vision_, pp. 9650–9660, 2021b.
336
+ * Charikar et al. (2002) Moses Charikar, Kevin Chen, and Martin Farach-Colton. Finding frequent items in data streams. In _Automata, Languages and Programming: 29th International Colloquium, ICALP 2002 Málaga, Spain, July 8–13, 2002 Proceedings 29_, pp. 693–703. Springer, 2002.
337
+ * Chen et al. (2021a) Chun-Fu Richard Chen, Quanfu Fan, and Rameswar Panda. Crossvit: Cross-attention multi-scale vision transformer for image classification. In _Proceedings of the IEEE/CVF international conference on computer vision_, pp. 357–366, 2021a.
338
+ * Chen & He (2021a) Xinlei Chen and Kaiming He. Exploring simple siamese representation learning. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)_, pp. 15750–15758, June 2021a.
339
+ * Chen & He (2021b) Xinlei Chen and Kaiming He. Exploring simple siamese representation learning. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pp. 15750–15758, 2021b.
340
+ * Chen et al. (2021b) Xinlei Chen, Saining Xie, and Kaiming He. An empirical study of training self-supervised vision transformers. In _Proceedings of the IEEE/CVF International Conference on Computer Vision_, pp. 9640–9649, 2021b.
341
+ * Chu et al. (2021) Xiangxiang Chu, Zhi Tian, Yuqing Wang, Bo Zhang, Haibing Ren, Xiaolin Wei, Huaxia Xia, and Chunhua Shen. Twins: Revisiting the design of spatial attention in vision transformers. _Advances in Neural Information Processing Systems_, 34:9355–9366, 2021.
342
+ * Cormode & Muthukrishnan (2005) Graham Cormode and Shan Muthukrishnan. An improved data stream summary: the count-min sketch and its applications. _Journal of Algorithms_, 55(1):58–75, 2005.
343
+ * Devlin et al. (2018) Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. _arXiv preprint arXiv:1810.04805_, 2018.
344
+ * Dosovitskiy et al. (2021) Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale. In _International Conference on Learning Representations_, 2021.
345
+ * Fang et al. (2023) Yuxin Fang, Li Dong, Hangbo Bao, Xinggang Wang, and Furu Wei. Corrupted image modeling for self-supervised visual pre-training. In _The Eleventh International Conference on Learning Representations_, 2023. URL [https://openreview.net/forum?id=09hVcSDkea](https://openreview.net/forum?id=09hVcSDkea).
346
+ * Grill et al. (2020) Jean-Bastien Grill, Florian Strub, Florent Altché, Corentin Tallec, Pierre Richemond, Elena Buchatskaya, Carl Doersch, Bernardo Avila Pires, Zhaohan Guo, Mohammad Gheshlaghi Azar, et al. Bootstrap your own latent-a new approach to self-supervised learning. _Advances in neural information processing systems_, 33:21271–21284, 2020.
347
 + * Harandi et al. (2015) M. Harandi, R. Hartley, C. Shen, B. Lovell, and C. Sanderson. Extrinsic methods for coding and dictionary learning on Grassmann manifolds. _IJCV_, 2015.
348
+ * He et al. (2022) Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, and Ross Girshick. Masked autoencoders are scalable vision learners. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pp. 16000–16009, 2022.
349
+ * Jiang et al. (2023) Ziyu Jiang, Yinpeng Chen, Mengchen Liu, Dongdong Chen, Xiyang Dai, Lu Yuan, Zicheng Liu, and Zhangyang Wang. Layer grafted pre-training: Bridging contrastive learning and masked image modeling for label-efficient representations. In _The Eleventh International Conference on Learning Representations_, 2023. URL [https://openreview.net/forum?id=jwdqNwyREyh](https://openreview.net/forum?id=jwdqNwyREyh).
350
+ * Krizhevsky et al. (2009) Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. 2009.
351
+ * Li et al. (2021) Zhaowen Li, Zhiyang Chen, Fan Yang, Wei Li, Yousong Zhu, Chaoyang Zhao, Rui Deng, Liwei Wu, Rui Zhao, Ming Tang, et al. Mst: Masked self-supervised transformer for visual representation. _Advances in Neural Information Processing Systems_, 34:13165–13176, 2021.
352
+ * Liu et al. (2021a) Xiao Liu, Fanjin Zhang, Zhenyu Hou, Li Mian, Zhaoyu Wang, Jing Zhang, and Jie Tang. Self-supervised learning: Generative or contrastive. _IEEE Transactions on Knowledge and Data Engineering_, 2021a.
353
+ * Liu et al. (2021b) Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, and Baining Guo. Swin transformer: Hierarchical vision transformer using shifted windows. In _Proceedings of the IEEE/CVF International Conference on Computer Vision_, pp. 10012–10022, 2021b.
354
+ * Mishra et al. (2022) Shlok Mishra, Joshua Robinson, Huiwen Chang, David Jacobs, Aaron Sarna, Aaron Maschinot, and Dilip Krishnan. A simple, efficient and scalable contrastive masked autoencoder for learning visual representations. _arXiv preprint arXiv:2210.16870_, 2022.
355
+ * Nilsback & Zisserman (2008) Maria-Elena Nilsback and Andrew Zisserman. Automated flower classification over a large number of classes. In _ICVGIP_, pp. 722–729. IEEE, 2008.
356
 + * Parmar et al. (2018) Niki Parmar, Ashish Vaswani, Jakob Uszkoreit, Lukasz Kaiser, Noam Shazeer, Alexander Ku, and Dustin Tran. Image transformer. In _International conference on machine learning_, pp. 4055–4064. PMLR, 2018.
357
+ * Russakovsky et al. (2015) Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, et al. Imagenet large scale visual recognition challenge. _International journal of computer vision_, 115:211–252, 2015.
358
+ * Tian et al. (2020) Yonglong Tian, Dilip Krishnan, and Phillip Isola. Contrastive multiview coding. In _European conference on computer vision_, pp. 776–794. Springer, 2020.
359
+ * Tian et al. (2022) Yunjie Tian, Lingxi Xie, Jiemin Fang, Mengnan Shi, Junran Peng, Xiaopeng Zhang, Jianbin Jiao, Qi Tian, and Qixiang Ye. Beyond masking: Demystifying token-based pre-training for vision transformers. _arXiv preprint arXiv:2203.14313_, 2022.
360
 + * Touvron et al. (2021) Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, and Hervé Jégou. Training data-efficient image transformers & distillation through attention. In _International Conference on Machine Learning_, pp. 10347–10357. PMLR, 2021.
361
+ * Van Horn et al. (2018) Grant Van Horn, Oisin Mac Aodha, Yang Song, Yin Cui, Chen Sun, Alex Shepard, Hartwig Adam, Pietro Perona, and Serge Belongie. The inaturalist species classification and detection dataset. In _Proceedings of the IEEE conference on computer vision and pattern recognition_, pp. 8769–8778, 2018.
362
+ * Vaswani et al. (2017) Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. _Advances in neural information processing systems_, 30, 2017.
363
+ * Vincent et al. (2008) Pascal Vincent, Hugo Larochelle, Yoshua Bengio, and Pierre-Antoine Manzagol. Extracting and composing robust features with denoising autoencoders. In _Proceedings of the 25th international conference on Machine learning_, pp. 1096–1103, 2008.
364
+ * Wah et al. (2011) Catherine Wah, Steve Branson, Peter Welinder, Pietro Perona, and Serge Belongie. The caltech-ucsd birds-200-2011 dataset. 2011.
365
+ * Wei et al. (2022) Chen Wei, Haoqi Fan, Saining Xie, Chao-Yuan Wu, Alan Yuille, and Christoph Feichtenhofer. Masked feature prediction for self-supervised visual pre-training. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pp. 14668–14678, 2022.
366
+ * Weinberger et al. (2009) Kilian Weinberger, Anirban Dasgupta, John Langford, Alex Smola, and Josh Attenberg. Feature hashing for large scale multitask learning. In _ICML_, pp. 1113–1120, 2009. ISBN 978-1-60558-516-1. doi: 10.1145/1553374.1553516. URL [http://doi.acm.org/10.1145/1553374.1553516](http://doi.acm.org/10.1145/1553374.1553516).
367
+ * Xiao et al. (2018) Tete Xiao, Yingcheng Liu, Bolei Zhou, Yuning Jiang, and Jian Sun. Unified perceptual parsing for scene understanding. In _Proceedings of the European conference on computer vision (ECCV)_, pp. 418–434, 2018.
368
+ * Xie et al. (2023) Jiahao Xie, Wei Li, Xiaohang Zhan, Ziwei Liu, Yew-Soon Ong, and Chen Change Loy. Masked frequency modeling for self-supervised visual pre-training. In _The Eleventh International Conference on Learning Representations_, 2023. URL [https://openreview.net/forum?id=9-umxtNPx5E](https://openreview.net/forum?id=9-umxtNPx5E).
369
+ * Xie et al. (2022) Zhenda Xie, Zheng Zhang, Yue Cao, Yutong Lin, Jianmin Bao, Zhuliang Yao, Qi Dai, and Han Hu. Simmim: A simple framework for masked image modeling. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pp. 9653–9663, 2022.
370
+ * Zhang et al. (2022) Yifei Zhang, Hao Zhu, Zixing Song, Piotr Koniusz, and Irwin King. COSTA: Covariance-preserving feature augmentation for graph contrastive learning. _ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD)_, 2022. URL [https://doi.org/10.1145/3534678.3539425](https://doi.org/10.1145/3534678.3539425).
371
+ * Zhou et al. (2019) Bolei Zhou, Hang Zhao, Xavier Puig, Tete Xiao, Sanja Fidler, Adela Barriuso, and Antonio Torralba. Semantic understanding of scenes through the ade20k dataset. _International Journal of Computer Vision_, 127:302–321, 2019.
372
+ * Zhou et al. (2022) Jinghao Zhou, Chen Wei, Huiyu Wang, Wei Shen, Cihang Xie, Alan Yuille, and Tao Kong. Image BERT pre-training with online tokenizer. In _International Conference on Learning Representations_, 2022. URL [https://openreview.net/forum?id=ydopy-e6Dg](https://openreview.net/forum?id=ydopy-e6Dg).
373
+ * Zhu & Koniusz (2022a) Hao Zhu and Piotr Koniusz. EASE: Unsupervised discriminant subspace learning for transductive few-shot learning. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)_, pp. 9078–9088, June 2022a.
374
 + * Zhu & Koniusz (2022b) Hao Zhu and Piotr Koniusz. Generalized Laplacian eigenmaps. In S. Koyejo, S. Mohamed, A. Agarwal, D. Belgrave, K. Cho, and A. Oh (eds.), _Advances in Neural Information Processing Systems_, volume 35, pp. 30783–30797. Curran Associates, Inc., 2022b.
375
 + * Zhu et al. (2021) Hao Zhu, Ke Sun, and Piotr Koniusz. Contrastive Laplacian eigenmaps. _Advances in Neural Information Processing Systems_, 34:5682–5695, 2021.
376
+
377
382
+
383
+ Pre-training with Random Orthogonal Projection Image Modeling (Appendices)
384
+
385
+ Appendix A Runtimes
386
+ -------------------
387
+
388
 + Table 5: Runtimes using the same resources (8×P100 GPUs).
389
+
390
 + | Method | PT epochs | Memory per GPU (GB) | Time per epoch (min) | Total PT time (hours) | Top-1 Acc. (%) |
 + | --- | --- | --- | --- | --- | --- |
 + | MoCo v3 \citeplatex MOCOV3_supp | 600 | 16 | 126 | 1260 | 83.2 |
 + | BEiT \citeplatex beit_supp | 800 | 16 | 63 | 840 | 83.2 |
 + | MAE \citeplatex MAE_supp | 800 | 16 | 31 | 413 | 83.1 |
 + | MAE \citeplatex MAE_supp | 1600 | 16 | 31 | 826 | 83.6 |
 + | MFM \citeplatex masked_frequency_supp | 300 | 16 | 47 | 235 | 83.1 |
 + | CIM \citeplatex corrupted_supp | 300 | 16 | 120 | 600 | 83.3 |
 + | CIM \citeplatex corrupted_supp | 800 | 16 | 120 | 1600 | 83.4 |
 + | CAN \citeplatex CAN_supp | 800 | 16 | 75 | 1000 | 83.4 |
 + | CAN \citeplatex CAN_supp | 1600 | 16 | 75 | 2000 | 83.6 |
 + | ROPIM | 300 | 16 | 47 | 235 | 83.5 |
 + | ROPIM | 500 | 16 | 47 | 391 | 83.7 |
 + | ROPIM | 800 | 16 | 47 | 626 | 84.0 |
 + | LGP-MAE \citeplatex LGP_supp | 1600 (MAE) + 300 (MoCo v3) | 16 | 31 (MAE) + 126 (MoCo v3) | 826 (MAE) + 630 (MoCo v3) | 83.9 |
 + | LGP-ROPIM | 800 (ROPIM) + 300 (MoCo v3) | 16 | 47 (ROPIM) + 126 (MoCo v3) | 626 (ROPIM) + 630 (MoCo v3) | 84.1 |
405
+
406
 + Figure [1](https://arxiv.org/html/2310.18737v2#S1.F1 "Figure 1 ‣ 1 Introduction ‣ Pre-training with Random Orthogonal Projection Image Modeling") compares top-1 accuracy _vs_. total pre-training (PT) time with SOTA methods. For fair comparisons, we used the same resources (8×P100 GPUs) and the maximum possible batch size for each method, _i.e_., GPU memory usage for all methods is 16GB per GPU. Total PT time is time per epoch × number of PT epochs. Table [5](https://arxiv.org/html/2310.18737v2#A1.T5 "Table 5 ‣ Appendix A Runtimes ‣ Pre-training with Random Orthogonal Projection Image Modeling") shows more details. As seen, MAE has a smaller time per epoch; however, it requires a larger number of PT epochs to converge, which results in a less efficient total runtime. We note that MAE \citeplatex MAE_supp removes MASK tokens from the encoder to increase pre-training speed. However, recent methods such as CIM \citeplatex corrupted_supp, Data2vec \citeplatex data2vec_supp, SimMIM \citeplatex simmim_supp, and MFM \citeplatex masked_frequency_supp retain MASK tokens in the encoder to ensure compatibility with different architectures, including hierarchical ViTs (_e.g_., Swin) and CNNs. Similarly, our ROPIM offers the flexibility to be applied to various architectures. We have additionally incorporated results obtained with the LGP \citeplatex LGP_supp mechanism. LGP \citeplatex LGP_supp combines MIM with contrastive learning in a sequential manner: the initial stage trains the network with MIM loss functions; training then proceeds with contrastive learning, incorporating a learning rate decay strategy in which the lower layers of the network are assigned a smaller learning rate. LGP employs MAE \citeplatex MAE_supp for MIM and MoCo v3 \citeplatex MOCOV3_supp for contrastive learning. We label this “LGP-MAE” in Figure [1](https://arxiv.org/html/2310.18737v2#S1.F1 "Figure 1 ‣ 1 Introduction ‣ Pre-training with Random Orthogonal Projection Image Modeling") and Table [5](https://arxiv.org/html/2310.18737v2#A1.T5 "Table 5 ‣ Appendix A Runtimes ‣ Pre-training with Random Orthogonal Projection Image Modeling"). We followed the same settings, replaced MAE with ROPIM in the first stage, and continued training of ROPIM with MoCo v3 for 300 epochs. The corresponding results are indicated by “LGP-ROPIM”.
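The total-time accounting used in Table 5 (total PT time = time per epoch × number of PT epochs, reported in whole hours) can be sketched as:

```python
def total_pt_hours(minutes_per_epoch: float, epochs: int) -> int:
    """Total pre-training time in whole hours from per-epoch time and epoch count."""
    return int(minutes_per_epoch * epochs / 60)

# Reproducing the ROPIM rows of Table 5 (47 min/epoch):
assert total_pt_hours(47, 300) == 235
assert total_pt_hours(47, 500) == 391
assert total_pt_hours(47, 800) == 626
```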
407
+
408
+ Appendix B Transfer learning for smaller scale datasets
409
+ -------------------------------------------------------
410
+
411
 + To further investigate the efficiency of our pre-trained models for transfer learning, we fine-tune the ViT-T model pre-trained on ImageNet100 on the Flowers102, CUB-200, CIFAR10 and CIFAR100 datasets. For fine-tuning CIFAR10/100 on ViT-T, we simply up-sample the CIFAR image resolution from 32×32 to 224×224.
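Up-sampling 32×32 CIFAR images to the 224×224 ViT input is a 7× spatial enlargement; a minimal nearest-neighbour sketch (the actual interpolation mode used is not specified above, so this is only illustrative):

```python
import numpy as np

def upsample_nn(img: np.ndarray, factor: int) -> np.ndarray:
    """Nearest-neighbour upsampling by an integer factor along H and W."""
    return img.repeat(factor, axis=0).repeat(factor, axis=1)

cifar_img = np.zeros((32, 32, 3), dtype=np.uint8)  # one CIFAR image (H, W, C)
vit_input = upsample_nn(cifar_img, 224 // 32)      # factor 7 -> 224x224
assert vit_input.shape == (224, 224, 3)
```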
412
+
413
 + Flowers102 \citeplatex nilsback_flower102_supp contains images of 102 fine-grained flower species with 1020 train and 6149 test samples. We pre-process this dataset by center-cropping images and resizing the crops to 224×224. Caltech-UCSD Birds 200 (CUB-200) \citeplatex caltech-ucsd_supp is annotated with 200 bird species and contains 5994 training images and 5794 testing images.
414
+
415
 + Table [6](https://arxiv.org/html/2310.18737v2#A2.T6 "Table 6 ‣ Appendix B Transfer learning for smaller scale datasets ‣ Pre-training with Random Orthogonal Projection Image Modeling") shows that the model pre-trained with ROPIM provides powerful features for transfer learning, outperforming the SimMIM \citeplatex simmim_supp and MAE \citeplatex MAE_supp baselines on all four datasets.
416
+
417
+ Table 6: Transfer learning on smaller scale datasets.
418
+
419
 + Appendix C Further ablation studies
420
+ ------------------------------------
421
+
422
 + ### C.1 The effect of increasing the number of pre-training and fine-tuning epochs
423
+
424
 + For MIM-based methods in general, training for longer improves performance \citeplatex MAE_supp, beit_supp. Tables [1](https://arxiv.org/html/2310.18737v2#S4.T1 "Table 1 ‣ 4.1 Datasets ‣ 4 Experiments ‣ Pre-training with Random Orthogonal Projection Image Modeling") and [7](https://arxiv.org/html/2310.18737v2#A3.T7 "Table 7 ‣ C.1 The effect of increasing number of pre-training and fine-tuning epochs ‣ Appendix C Further ablations studies ‣ Pre-training with Random Orthogonal Projection Image Modeling") show the effect of increasing the number of pre-training epochs when the same dataset (ImageNet-1K or ImageNet100, respectively) is used during pre-training and fine-tuning. As seen, there is a consistent improvement in classification accuracy as the number of pre-training epochs increases.
425
+
426
+ Table 7: The effect of increasing the number of pre-training epochs on top-1 accuracy. ImageNet100 is used for both pre-training and fine-tuning of ViT-T. Models are fine-tuned with 100 epochs.
427
+
428
+ Table 8: The effect of increasing the number of fine-tuning epochs on top-1 classification accuracy. ImageNet100 is used for both pre-training and fine-tuning.
429
+
430
 + Table [8](https://arxiv.org/html/2310.18737v2#A3.T8 "Table 8 ‣ C.1 The effect of increasing number of pre-training and fine-tuning epochs ‣ Appendix C Further ablations studies ‣ Pre-training with Random Orthogonal Projection Image Modeling") shows the effect of increasing the number of fine-tuning epochs. ROPIM achieves a classification accuracy of 88.71% on ImageNet100 with 300 fine-tuning epochs, which is 0.71% higher than SimMIM in the same setting.
431
+
432
 + ### C.2 Ablations on different values of the sketching ratio ρ
433
+
434
 + As seen in Table [9](https://arxiv.org/html/2310.18737v2#A3.T9 "Table 9 ‣ C.2 Ablations on different values of sketching ratio 𝜌 ‣ Appendix C Further ablations studies ‣ Pre-training with Random Orthogonal Projection Image Modeling"), ρ = 0.14 (i.e., 1/7) achieves the best results across the different backbones.
435
+
436
 + Table 9: ROPIM top-1 acc. for pre-training ViT-T, ViT-S and ViT-B on ImageNet100 and ImageNet-1K using different sketching ratios ρ.
437
+
438
+ Appendix D Details of Pre-training and Fine-tuning Setups
439
+ ---------------------------------------------------------
440
+
441
+ Table 10: Fine-tuning hyper-parameters for BEiT, MAE, SimMIM and ROPIM.
442
+
443
 + The main fine-tuning settings of ROPIM, SimMIM, MAE \citeplatex MAE_supp and BEiT \citeplatex beit_supp are similar, as shown in Table [10](https://arxiv.org/html/2310.18737v2#A4.T10 "Table 10 ‣ Appendix D Details of Pre-training and Fine-tuning Setups ‣ Pre-training with Random Orthogonal Projection Image Modeling"). Below we provide additional details.
444
+
445
+ ### Setup of ROPIM
446
+
447
 + The models are pre-trained with the linear learning rate (lr) scaling rule lr = base_lr × batch_size / 512. The data augmentation strategy includes random resized cropping with a scale range of [0.67, 1] and an aspect ratio range of [3/4, 4/3], followed by random horizontal flipping and color normalization. The pre-training base learning rate is 1e-3 for ViT-T, and 1.5e-4 for ViT-S and ViT-B.
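The linear scaling rule above is a one-liner; a small helper (the function name is ours) showing how the stated base rates translate to effective rates at other batch sizes:

```python
def scaled_lr(base_lr: float, batch_size: int, denom: int = 512) -> float:
    """Linear lr scaling rule: lr = base_lr * batch_size / denom."""
    return base_lr * batch_size / denom

# At the reference batch size, the base rate is used unchanged:
assert scaled_lr(1e-3, 512) == 1e-3                   # ViT-T pre-training
# Doubling the batch size doubles the effective rate:
assert abs(scaled_lr(1.5e-4, 1024) - 3e-4) < 1e-15    # ViT-S/B at batch 1024
```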
448
+
449
 + During fine-tuning, a weight decay of 0.05, β1 = 0.9, β2 = 0.999, and a stochastic depth ratio of 0.1 are employed. We also follow the data augmentation used in \citeplatex beit_supp, simmim_supp and use Mixup, CutMix, label smoothing, and random erasing. ViT-B and ViT-T models are fine-tuned for 100 epochs unless otherwise mentioned. ViT-S is fine-tuned for 200 epochs. Kindly note that all other baseline methods use the same or longer fine-tuning schedules compared to our work. For fine-tuning, we run a grid search on base learning rates {5e-3, 1e-2, 2e-2} and report the highest performing one.
450
+
451
 + In what follows, we provide the pre-training and fine-tuning setup of our ablation studies with ViT-T on ImageNet100. For all baselines, we ran a grid search on their default learning rate multiplied by {0.1, 1, 2, 4, 10}, and we reported the best performing result. Kindly note that grid search is a common strategy for selecting hyper-parameters in self-supervised pipelines \citeplatex beit_supp.
452
+
453
+ ### Setup of SimMIM
454
+
455
 + Following the default setting in SimMIM \citeplatex simmim_supp for the ViT backbone, we used random masking with a patch size of 32×32 and a mask ratio of 0.6, a linear prediction head with a target image size of 224, and the ℓ1 loss for masked pixel prediction. The models are pre-trained with the AdamW optimizer, a base learning rate of 1e-3, and a multi-step learning rate scheduler with an initial 10-epoch linear warm-up. The linear lr scaling rule lr = base_lr × batch_size / 512 is used. The data augmentation strategy during pre-training includes random resized cropping with a scale range of [0.67, 1] and an aspect ratio range of [3/4, 4/3], followed by random horizontal flipping and color normalization.
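For a 224×224 input with 32×32 patches there are (224/32)² = 49 patches; random masking at ratio 0.6 over such a patch grid can be sketched as follows (a hedged NumPy sketch, not the official SimMIM implementation):

```python
import numpy as np

def random_patch_mask(num_patches: int, mask_ratio: float, rng) -> np.ndarray:
    """Boolean mask over patches: True = masked (to be predicted)."""
    n_mask = int(num_patches * mask_ratio)
    mask = np.zeros(num_patches, dtype=bool)
    mask[rng.permutation(num_patches)[:n_mask]] = True
    return mask

rng = np.random.default_rng(0)
mask = random_patch_mask((224 // 32) ** 2, 0.6, rng)
assert mask.size == 49 and mask.sum() == int(49 * 0.6)  # 29 of 49 patches masked
```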
456
+
457
 + For fine-tuning, a base learning rate of 5e-3 with a layer-wise lr decay of 0.65 is used.
458
+
459
+ ### Setup of MAE
460
+
461
 + For pre-training, MAE \citeplatex MAE_supp uses a cosine learning rate scheduler with a base learning rate of 1.5e-4, the AdamW optimizer with momentum β1 = 0.9, β2 = 0.95, and the linear lr scaling rule lr = base_lr × batch_size / 256. Random resized cropping and horizontal flipping are used during pre-training. The features of MAE are extracted from the encoder of the pre-trained network and fine-tuned following the standard supervised ViT training.
462
+
463
+ For fine-tuning, a base learning rate of 5e-3 and a layer-wise lr decay of 0.75 are used.
464
+
465
+ ### Setup of BEiT
466
+
467
 + The image tokenizer of BEiT \citeplatex beit_supp is adopted from \citeplatex ramesh2021zero_supp, and the vocabulary size of visual tokens is set to 8192. BEiT uses the AdamW optimizer with a base learning rate of 1.5e-3 for pre-training. We follow their default augmentation policies, _i.e_., random resized cropping, horizontal flipping and color jittering, for pre-training the network.
468
+
469
 + For fine-tuning, a base lr of 3e-3 and a layer-wise lr decay of 0.65 are used.
470
+
471
+ ### Setup of DINO
472
+
473
 + We pre-train DINO \citeplatex DINO_supp with the AdamW optimizer and a base learning rate of 5e-4. We used the linear lr scaling rule lr = base_lr × batch_size / 256, with the number of warm-up epochs set to 10. For augmentations, color jittering, Gaussian blur, solarization and multi-cropping are used, following the default setting.
474
+
475
 + During fine-tuning, we used the pre-trained network with a linear classifier and trained end-to-end for 100 epochs. An SGD optimizer with a learning rate of 1e-3 and cosine learning rate decay is used. Random resized crop and random horizontal flip augmentations were applied during fine-tuning.
476
+
477
+ ### Setup of MoCo v3
478
+
479
 + We pre-trained MoCo v3 \citeplatex MOCOV3_supp with a base learning rate of 1.5e-4 and the lr scaling rule lr = base_lr × batch_size / 256. Random resized cropping, horizontal flipping, color jittering, grayscale conversion, blurring and solarization were applied, following \citeplatex MOCOV3_supp.
480
+
481
We fine-tune the pre-trained model end-to-end for 100 epochs following the DEiT \citeplatex DEiT_supp setup. We use the official implementation of MoCo v3 to convert the pre-trained model to the DEiT-supported format for fine-tuning. Here, we use a learning rate of 5e-4, an AdamW optimizer with cosine learning rate decay and 5 warm-up epochs. The linear lr scaling rule $\text{lr}=\text{base\_lr}\times\text{batch\_size}/512$ is used.

We apply the default augmentations in DEiT, which include color jittering, rand_augment = 9/0.5, mixup_prob = 0.8, cutmix_prob = 1.0 and erasing_prob = 0.25.

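For reference, the DEiT fine-tuning augmentation hyperparameters quoted above can be gathered into one config. The key names are illustrative, not DEiT's actual flag names, and the color-jitter strength is not specified in the text:

```python
# DEiT fine-tuning augmentation settings as quoted in the text.
# Key names are illustrative; only the values come from the text.
deit_finetune_augment = {
    "rand_augment": "9/0.5",  # magnitude 9, magnitude std 0.5
    "mixup_prob": 0.8,
    "cutmix_prob": 1.0,
    "erasing_prob": 0.25,
    "color_jitter": True,     # strength not stated in the text
}
```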
Appendix E: More Related Works
------------------------------

Self-supervised learning is an active research area. Researchers from different fields have applied MIM techniques creatively to various applications such as video \citeplatex wang2023videomaev2_supp, point clouds \citeplatex geomae_supp and hyper-spectral images \citeplatex FactoFormer_supp. However, these works are not directly related to ours as they target other modalities. Combining multiple pre-training strategies and data from various modalities/sources can also greatly boost the training of large-scale models \citeplatex Su_2023_CVPR_supp. Another very recent pipeline, Correlational Image Modeling (CorIM) \citeplatex li2023correlational_supp, proposes a self-supervised pre-training task leveraging a cropping strategy, a bootstrap encoder, and a correlation decoder. However, the performance of CorIM (83.1% top-1 accuracy for ViT-B) falls short of that of pure ROPIM (83.5%) with 300 pre-training epochs. Alongside masked image modeling, other strategies utilize multiple contrastive heads \citeplatex wang2023adaptive_supp. Beyond masked modeling, also noteworthy is the abundance of other self-supervised strategies applied in self-supervised 2D/3D feature matching for re-localization \citeplatex 10341798_supp, image deblurring \citeplatex event_deblurr_supp, image-to-image translation \citeplatex fatima_image_to_image_supp, segmentation self-distillation \citeplatex Kang_2023_CVPR_supp, GAN consistency regularization \citeplatex NEURIPS2023_2c8047bf_supp, keypoint contrastive learning \citeplatex lu_anykeypoints_supp, categorical data learning \citeplatex 10192074_supp, traffic predictive coding \citeplatex 10.1145/3576842.3582362_pred_coding_supp,prabowo2023SCPT_supp, multi-language output self-supervision \citeplatex tas2021simple_supp and graph contrastive collaborative learning \citeplatex NEURIPS2023_d5753be6_supp.

\bibliographystylelatex{iclr2024_conference}
\bibliographylatex{latex}