diff --git "a/2023/Weighted Ensemble Self-Supervised Learning/layout.json" "b/2023/Weighted Ensemble Self-Supervised Learning/layout.json" new file mode 100644--- /dev/null +++ "b/2023/Weighted Ensemble Self-Supervised Learning/layout.json" @@ -0,0 +1,21457 @@ +{ + "pdf_info": [ + { + "para_blocks": [ + { + "bbox": [ + 105, + 68, + 501, + 86 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 68, + 501, + 86 + ], + "spans": [ + { + "bbox": [ + 105, + 68, + 501, + 86 + ], + "type": "text", + "content": "WEIGHTED ENSEMBLE SELF-SUPERVISED LEARNING" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 121, + 95, + 489, + 146 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 121, + 95, + 489, + 146 + ], + "spans": [ + { + "bbox": [ + 121, + 95, + 489, + 146 + ], + "type": "text", + "content": "Yangjun Ruan*† Saurabh Singh Warren Morningstar Alexander A. Alemi \nSergey Ioffe Ian Fischer† Joshua V. Dillon† \nGoogle Research" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 276, + 162, + 335, + 174 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 276, + 162, + 335, + 174 + ], + "spans": [ + { + "bbox": [ + 276, + 162, + 335, + 174 + ], + "type": "text", + "content": "ABSTRACT" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 140, + 178, + 471, + 357 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 140, + 178, + 471, + 357 + ], + "spans": [ + { + "bbox": [ + 140, + 178, + 471, + 357 + ], + "type": "text", + "content": "Ensembling has proven to be a powerful technique for boosting model performance, uncertainty estimation, and robustness in supervised learning. Advances in self-supervised learning (SSL) enable leveraging large unlabeled corpora for state-of-the-art few-shot and supervised learning performance. In this paper, we explore how ensemble methods can improve recent SSL techniques by developing a framework that permits data-dependent weighted cross-entropy losses. 
We refrain from ensembling the representation backbone; this choice yields an efficient ensemble method that incurs a small training cost and requires no architectural changes or computational overhead to downstream evaluation. The effectiveness of our method is demonstrated with two state-of-the-art SSL methods, DINO (Caron et al., 2021) and MSN (Assran et al., 2022). Our method outperforms both in multiple evaluation metrics on ImageNet-1K, particularly in the few-shot setting. We explore several weighting schemes and find that those which increase the diversity of ensemble heads lead to better downstream evaluation results. Thorough experiments yield improved prior art baselines which our method still surpasses; e.g., our overall improvement with MSN ViT-B/16 is 3.9 p.p. for 1-shot learning." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 106, + 367, + 206, + 379 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 367, + 206, + 379 + ], + "spans": [ + { + "bbox": [ + 106, + 367, + 206, + 379 + ], + "type": "text", + "content": "1 INTRODUCTION" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 104, + 385, + 298, + 495 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 385, + 298, + 495 + ], + "spans": [ + { + "bbox": [ + 104, + 385, + 298, + 495 + ], + "type": "text", + "content": "The promise of self-supervised learning (SSL) is to extract information from unlabeled data and leverage this information in downstream tasks (He et al., 2020; Caron et al., 2021); e.g., semi-supervised learning (Chen et al., 2020a,b), robust learning (Radford et al., 2021; Ruan et al., 2022; Lee et al., 2021), few-shot learning (Assran et al., 2022), and supervised learning (Tomasev et al., 2022). 
These successes have encouraged increasingly advanced SSL techniques" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 104, + 495, + 504, + 518 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 495, + 504, + 518 + ], + "spans": [ + { + "bbox": [ + 104, + 495, + 504, + 518 + ], + "type": "text", + "content": "(e.g., Grill et al., 2020; Zbontar et al., 2021; He et al., 2022). Perhaps surprisingly however, a simple and otherwise common idea has received limited consideration: ensembling." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 104, + 522, + 504, + 579 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 522, + 504, + 579 + ], + "spans": [ + { + "bbox": [ + 104, + 522, + 504, + 579 + ], + "type": "text", + "content": "Ensembling combines predictions from multiple trained models and has proven effective at improving model accuracy (Hansen & Salamon, 1990; Perrone & Cooper, 1992) and capturing predictive uncertainty in supervised learning (Lakshminarayanan et al., 2017; Ovadia et al., 2019). Ensembling in the SSL regime is nuanced, however; since the goal is to learn useful representations from unlabeled data, it is less obvious where and how to ensemble. We explore these questions in this work." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 104, + 583, + 506, + 705 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 583, + 506, + 705 + ], + "spans": [ + { + "bbox": [ + 104, + 583, + 506, + 705 + ], + "type": "text", + "content": "We develop an efficient ensemble method tailored for SSL that replicates the non-representation parts (e.g., projection heads) of the SSL model. In contrast with traditional \"post-training\" ensembling, our ensembles are only used during training to facilitate the learning of a single representation encoder, which yields no extra cost in downstream evaluation. 
We further present a family of weighted cross-entropy losses to effectively train the ensembles. The key component of our losses is the introduction of data-dependent importance weights for ensemble members. We empirically compare different choices from our framework and find that the choice of weighting schemes critically impacts ensemble diversity, and that greater ensemble diversity correlates with improved downstream performance. Our method is potentially applicable to many SSL methods; we focus on DINO (Caron et al., 2021) and MSN (Assran et al., 2022) to demonstrate its effectiveness. Fig. 1 shows DINO improvements from using our ensembling and weighted cross-entropy loss." + } + ] + } + ], + "index": 9 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "spans": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "type": "text", + "content": "Published as a conference paper at ICLR 2023" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 116, + 710, + 436, + 721 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 116, + 710, + 436, + 721 + ], + "spans": [ + { + "bbox": [ + 116, + 710, + 436, + 721 + ], + "type": "text", + "content": "*University of Toronto & Vector Institute. Work done as a student researcher at Google." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 118, + 721, + 459, + 731 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 118, + 721, + 459, + 731 + ], + "spans": [ + { + "bbox": [ + 118, + 721, + 459, + 731 + ], + "type": "inline_equation", + "content": "^{\\dagger}" + }, + { + "bbox": [ + 118, + 721, + 459, + 731 + ], + "type": "text", + "content": "Correspondence to yjruan@cs.toronto.edu, {iansf, jvdillon}@google.com." 
+ } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 302, + 751, + 308, + 760 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 751, + 308, + 760 + ], + "spans": [ + { + "bbox": [ + 302, + 751, + 308, + 760 + ], + "type": "text", + "content": "1" + } + ] + } + ], + "index": 13 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 0 + }, + { + "para_blocks": [ + { + "bbox": [ + 105, + 83, + 276, + 94 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 83, + 276, + 94 + ], + "spans": [ + { + "bbox": [ + 105, + 83, + 276, + 94 + ], + "type": "text", + "content": "In summary, our core contributions are to:" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 110, + 99, + 505, + 150 + ], + "type": "list", + "angle": 0, + "index": 6, + "blocks": [ + { + "bbox": [ + 110, + 99, + 503, + 111 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 110, + 99, + 503, + 111 + ], + "spans": [ + { + "bbox": [ + 110, + 99, + 503, + 111 + ], + "type": "text", + "content": "- Develop a downstream-efficient ensemble method suitable for many SSL techniques (Sec. 3.1)." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 110, + 112, + 465, + 124 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 110, + 112, + 465, + 124 + ], + "spans": [ + { + "bbox": [ + 110, + 112, + 465, + 124 + ], + "type": "text", + "content": "- Characterize an ensemble loss family of weighted cross-entropy objectives (Sec. 3.2)." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 110, + 125, + 505, + 137 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 110, + 125, + 505, + 137 + ], + "spans": [ + { + "bbox": [ + 110, + 125, + 505, + 137 + ], + "type": "text", + "content": "- Conduct extensive ablation studies that improve the prior art baselines by up to 6.3 p.p. (Sec. 5.1)." 
+ } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 110, + 137, + 501, + 150 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 110, + 137, + 501, + 150 + ], + "spans": [ + { + "bbox": [ + 110, + 137, + 501, + 150 + ], + "type": "text", + "content": "- Further improve those baselines with ensembling (e.g., up to 5.5 p.p. gain for 1-shot) (Table 2)." + } + ] + } + ], + "index": 5 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 105, + 165, + 201, + 177 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 165, + 201, + 177 + ], + "spans": [ + { + "bbox": [ + 105, + 165, + 201, + 177 + ], + "type": "text", + "content": "2 BACKGROUND" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 104, + 189, + 506, + 222 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 189, + 506, + 222 + ], + "spans": [ + { + "bbox": [ + 104, + 189, + 506, + 222 + ], + "type": "text", + "content": "In this section, we frame SSL methods from the perspective of maximum likelihood estimation (MLE) and use this as the notational basis to describe the state-of-the-art clustering-based SSL methods as well as derive their ensembled variants in Sec. 3." 
+ } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 104, + 234, + 506, + 304 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 234, + 506, + 304 + ], + "spans": [ + { + "bbox": [ + 104, + 234, + 506, + 304 + ], + "type": "text", + "content": "From Maximum Likelihood to SSL Denote unnormalized KL divergence (Dikmen et al., 2014) between non-negative integrable functions " + }, + { + "bbox": [ + 104, + 234, + 506, + 304 + ], + "type": "inline_equation", + "content": "p, q" + }, + { + "bbox": [ + 104, + 234, + 506, + 304 + ], + "type": "text", + "content": " by " + }, + { + "bbox": [ + 104, + 234, + 506, + 304 + ], + "type": "inline_equation", + "content": "\\mathsf{K}[p(X), q(X)] = \\mathsf{H}^{\\times}[p(X), q(X)] - \\mathsf{H}[p(X)]" + }, + { + "bbox": [ + 104, + 234, + 506, + 304 + ], + "type": "text", + "content": ", where " + }, + { + "bbox": [ + 104, + 234, + 506, + 304 + ], + "type": "inline_equation", + "content": "\\mathsf{H}^{\\times}[p(X), q(X)] = -\\int_{\\mathcal{X}} p(x) \\log q(x) \\, \\mathrm{d}x + \\int_{\\mathcal{X}} q(x) \\, \\mathrm{d}x - 1" + }, + { + "bbox": [ + 104, + 234, + 506, + 304 + ], + "type": "text", + "content": " is the unnormalized cross-entropy (with " + }, + { + "bbox": [ + 104, + 234, + 506, + 304 + ], + "type": "inline_equation", + "content": "0 \\log 0 = 0" + }, + { + "bbox": [ + 104, + 234, + 506, + 304 + ], + "type": "text", + "content": ") and " + }, + { + "bbox": [ + 104, + 234, + 506, + 304 + ], + "type": "inline_equation", + "content": "\\mathsf{H}[p(X)] = \\mathsf{H}^{\\times}[p(X), p(X)]" + }, + { + "bbox": [ + 104, + 234, + 506, + 304 + ], + "type": "text", + "content": ". 
These quantities simplify to their usual definitions when " + }, + { + "bbox": [ + 104, + 234, + 506, + 304 + ], + "type": "inline_equation", + "content": "p, q" + }, + { + "bbox": [ + 104, + 234, + 506, + 304 + ], + "type": "text", + "content": " are normalized, but critically they enable flexible weighting of distributions for the derivation of our weighted ensemble losses in Sec. 3.2." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 104, + 309, + 504, + 344 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 309, + 504, + 344 + ], + "spans": [ + { + "bbox": [ + 104, + 309, + 504, + 344 + ], + "type": "text", + "content": "Let " + }, + { + "bbox": [ + 104, + 309, + 504, + 344 + ], + "type": "inline_equation", + "content": "\\nu(X, Y) = \\nu(X)\\nu(Y|X)" + }, + { + "bbox": [ + 104, + 309, + 504, + 344 + ], + "type": "text", + "content": " be a natural distribution of input/target pairs over the space " + }, + { + "bbox": [ + 104, + 309, + 504, + 344 + ], + "type": "inline_equation", + "content": "\\mathcal{X} \\times \\mathcal{Y}" + }, + { + "bbox": [ + 104, + 309, + 504, + 344 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 104, + 309, + 504, + 344 + ], + "type": "inline_equation", + "content": "s(Y|\\theta, X)" + }, + { + "bbox": [ + 104, + 309, + 504, + 344 + ], + "type": "text", + "content": " be a predictive model of target given the input parameterized by " + }, + { + "bbox": [ + 104, + 309, + 504, + 344 + ], + "type": "inline_equation", + "content": "\\theta \\in \\mathcal{T}" + }, + { + "bbox": [ + 104, + 309, + 504, + 344 + ], + "type": "text", + "content": ". 
Supervised maximum likelihood seeks the minimum expected conditional population risk with respect to " + }, + { + "bbox": [ + 104, + 309, + 504, + 344 + ], + "type": "inline_equation", + "content": "\\theta" + }, + { + "bbox": [ + 104, + 309, + 504, + 344 + ], + "type": "text", + "content": "," + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 132, + 348, + 504, + 361 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 348, + 504, + 361 + ], + "spans": [ + { + "bbox": [ + 132, + 348, + 504, + 361 + ], + "type": "interline_equation", + "content": "\\mathsf {E} _ {\\nu (X)} \\mathsf {K} [ \\nu (Y | X), s (Y | \\theta , X) ] = \\mathsf {E} _ {\\nu (X)} \\mathsf {H} ^ {\\times} [ \\nu (Y | X), s (Y | \\theta , X) ] - \\mathsf {E} _ {\\nu (X)} \\mathsf {H} [ \\nu (Y | X) ]. \\tag {1}", + "image_path": "6c641804918299210a7a9af991a882bbd584c47a399f10783ce9600f773cb8f2.jpg" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 104, + 372, + 504, + 426 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 372, + 504, + 426 + ], + "spans": [ + { + "bbox": [ + 104, + 372, + 504, + 426 + ], + "type": "text", + "content": "Henceforth omit " + }, + { + "bbox": [ + 104, + 372, + 504, + 426 + ], + "type": "inline_equation", + "content": "\\mathsf{E}_{\\nu(X)} \\mathsf{H}[\\nu(Y|X)]" + }, + { + "bbox": [ + 104, + 372, + 504, + 426 + ], + "type": "text", + "content": " since it is constant in " + }, + { + "bbox": [ + 104, + 372, + 504, + 426 + ], + "type": "inline_equation", + "content": "\\theta" + }, + { + "bbox": [ + 104, + 372, + 504, + 426 + ], + "type": "text", + "content": ". Since " + }, + { + "bbox": [ + 104, + 372, + 504, + 426 + ], + "type": "inline_equation", + "content": "\\nu(X, Y)" + }, + { + "bbox": [ + 104, + 372, + 504, + 426 + ], + "type": "text", + "content": " is unknown, a finite sample approximation is often employed. 
Denote a size-" + }, + { + "bbox": [ + 104, + 372, + 504, + 426 + ], + "type": "inline_equation", + "content": "n" + }, + { + "bbox": [ + 104, + 372, + 504, + 426 + ], + "type": "text", + "content": " i.i.d. training set by " + }, + { + "bbox": [ + 104, + 372, + 504, + 426 + ], + "type": "inline_equation", + "content": "\\mathcal{D}_n = \\{x_i\\}_{i \\in [n]} \\sim \\nu^{\\otimes n}" + }, + { + "bbox": [ + 104, + 372, + 504, + 426 + ], + "type": "text", + "content": " and empirical distribution by " + }, + { + "bbox": [ + 104, + 372, + 504, + 426 + ], + "type": "inline_equation", + "content": "\\hat{\\nu}(X, Y) = \\frac{1}{n} \\sum_{x \\in \\mathcal{D}_n, y \\sim \\nu(Y|x)} \\delta(X - x) \\delta(Y - y)" + }, + { + "bbox": [ + 104, + 372, + 504, + 426 + ], + "type": "text", + "content": " where " + }, + { + "bbox": [ + 104, + 372, + 504, + 426 + ], + "type": "inline_equation", + "content": "\\delta: \\mathbb{R} \\to \\{0, 1\\}" + }, + { + "bbox": [ + 104, + 372, + 504, + 426 + ], + "type": "text", + "content": " is 1 when " + }, + { + "bbox": [ + 104, + 372, + 504, + 426 + ], + "type": "inline_equation", + "content": "x = 0" + }, + { + "bbox": [ + 104, + 372, + 504, + 426 + ], + "type": "text", + "content": " and 0 otherwise. The sample risk is thus " + }, + { + "bbox": [ + 104, + 372, + 504, + 426 + ], + "type": "inline_equation", + "content": "\\frac{1}{n} \\sum_{x \\in \\mathcal{D}_n} \\mathsf{H}^\\times[\\hat{\\nu}(Y|x), s(Y|\\theta, x)]" + }, + { + "bbox": [ + 104, + 372, + 504, + 426 + ], + "type": "text", + "content": "."
+ } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 104, + 430, + 506, + 513 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 430, + 506, + 513 + ], + "spans": [ + { + "bbox": [ + 104, + 430, + 506, + 513 + ], + "type": "text", + "content": "In SSL, we interpret " + }, + { + "bbox": [ + 104, + 430, + 506, + 513 + ], + "type": "inline_equation", + "content": "\\nu(Y|x)" + }, + { + "bbox": [ + 104, + 430, + 506, + 513 + ], + "type": "text", + "content": " as being the oracle teacher under a presumption of how the representations will be evaluated on a downstream task. This assumption is similar to that made in Arora et al. (2019); Nozawa et al. (2020). We also assume " + }, + { + "bbox": [ + 104, + 430, + 506, + 513 + ], + "type": "inline_equation", + "content": "\\hat{\\nu}(Y|X)" + }, + { + "bbox": [ + 104, + 430, + 506, + 513 + ], + "type": "text", + "content": " is inaccessible and/or unreliable. Under this view, some SSL techniques substitute " + }, + { + "bbox": [ + 104, + 430, + 506, + 513 + ], + "type": "inline_equation", + "content": "\\hat{\\nu}(Y|x)" + }, + { + "bbox": [ + 104, + 430, + 506, + 513 + ], + "type": "text", + "content": " for a weakly learned target or \"teacher\", " + }, + { + "bbox": [ + 104, + 430, + 506, + 513 + ], + "type": "inline_equation", + "content": "t(Y|x)" + }, + { + "bbox": [ + 104, + 430, + 506, + 513 + ], + "type": "text", + "content": ". 
We don't generally expect " + }, + { + "bbox": [ + 104, + 430, + 506, + 513 + ], + "type": "inline_equation", + "content": "t(Y|x)" + }, + { + "bbox": [ + 104, + 430, + 506, + 513 + ], + "type": "text", + "content": " to recover " + }, + { + "bbox": [ + 104, + 430, + 506, + 513 + ], + "type": "inline_equation", + "content": "\\nu(Y|x)" + }, + { + "bbox": [ + 104, + 430, + 506, + 513 + ], + "type": "text", + "content": "; we only assume that an optimal teacher exists and it is " + }, + { + "bbox": [ + 104, + 430, + 506, + 513 + ], + "type": "inline_equation", + "content": "\\nu(Y|x)" + }, + { + "bbox": [ + 104, + 430, + 506, + 513 + ], + "type": "text", + "content": ". With the teacher providing the targets, the loss becomes " + }, + { + "bbox": [ + 104, + 430, + 506, + 513 + ], + "type": "inline_equation", + "content": "\\frac{1}{n}\\sum_{x\\in\\mathcal{D}_n}\\mathsf{H}^\\times[t(Y|x), s(Y|\\theta, x)]" + }, + { + "bbox": [ + 104, + 430, + 506, + 513 + ], + "type": "text", + "content": "."
+ } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 104, + 521, + 504, + 555 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 521, + 504, + 555 + ], + "spans": [ + { + "bbox": [ + 104, + 521, + 504, + 555 + ], + "type": "text", + "content": "Teacher and student in clustering SSL methods Clustering SSL methods such as SwAV (Caron et al., 2020), DINO (Caron et al., 2021), and MSN (Assran et al., 2022) employ a student model characterized by proximity between learned codebook entries and a data-dependent code," + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 175, + 559, + 504, + 586 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 175, + 559, + 504, + 586 + ], + "spans": [ + { + "bbox": [ + 175, + 559, + 504, + 586 + ], + "type": "interline_equation", + "content": "s (Y | \\theta , x) = \\operatorname {s o f t m a x} \\left(\\left\\{\\frac {1}{\\tau} \\frac {\\left(h _ {\\psi} \\circ r _ {\\omega}\\right) (x) \\cdot \\mu_ {y}}{\\| \\left(h _ {\\psi} \\circ r _ {\\omega}\\right) (x) \\| _ {2} \\| \\mu_ {y} \\| _ {2}}: y \\in [ c ] \\right\\}\\right) \\tag {2}", + "image_path": "6e35e65700cfc187bb29020d6d1e0f6437ac9d2af0b895ec41c69d0930a17b3b.jpg" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 209, + 587, + 504, + 601 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 209, + 587, + 504, + 601 + ], + "spans": [ + { + "bbox": [ + 209, + 587, + 504, + 601 + ], + "type": "interline_equation", + "content": "\\theta = \\{\\omega , \\psi , \\left\\{\\mu_ {y} \\right\\} _ {y \\in [ c ]} \\} \\in \\mathcal {T}, \\tag {3}", + "image_path": "cdc594e740c54db61e436a3bb6a0f94af1b9c720c392a696e23f84464c51fef6.jpg" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 104, + 605, + 506, + 693 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 605, + 506, + 693 + ], + "spans": [ + { + "bbox": [ + 104, + 605, + 506, + 693 + ], + "type": "text", + "content": 
"where the encoder " + }, + { + "bbox": [ + 104, + 605, + 506, + 693 + ], + "type": "inline_equation", + "content": "r_{\\omega}:\\mathcal{X}\\to \\mathcal{Z}" + }, + { + "bbox": [ + 104, + 605, + 506, + 693 + ], + "type": "text", + "content": " produces the representations used for downstream tasks, and the projection head " + }, + { + "bbox": [ + 104, + 605, + 506, + 693 + ], + "type": "inline_equation", + "content": "h_\\psi :\\mathcal{Z}\\rightarrow \\mathbb{R}^d" + }, + { + "bbox": [ + 104, + 605, + 506, + 693 + ], + "type": "text", + "content": " and codebook entries " + }, + { + "bbox": [ + 104, + 605, + 506, + 693 + ], + "type": "inline_equation", + "content": "\\{\\mu_y\\}_{y\\in \\mathcal{Y}}\\in \\mathbb{R}^d" + }, + { + "bbox": [ + 104, + 605, + 506, + 693 + ], + "type": "text", + "content": " characterize the SSL loss. Eq. (2) can be viewed as \"soft clustering\", where the input is assigned to those centroids that are closer to the projection head's output. The projection head and codebook are used during training but thrown away for evaluation, which is empirically found vital for downstream tasks (Chen et al., 2020a;b). Hyperparameters " + }, + { + "bbox": [ + 104, + 605, + 506, + 693 + ], + "type": "inline_equation", + "content": "\\tau \\in \\mathbb{R}_{>0},c\\in \\mathbb{Z}_{>0}" + }, + { + "bbox": [ + 104, + 605, + 506, + 693 + ], + "type": "text", + "content": " represent temperature and codebook size. The teacher is defined as " + }, + { + "bbox": [ + 104, + 605, + 506, + 693 + ], + "type": "inline_equation", + "content": "t(Y|x) = s(Y|\\mathrm{stopgrad}(g(\\theta)),x)" + }, + { + "bbox": [ + 104, + 605, + 506, + 693 + ], + "type": "text", + "content": " where " + }, + { + "bbox": [ + 104, + 605, + 506, + 693 + ], + "type": "inline_equation", + "content": "g:\\mathcal{T}\\to \\mathcal{T}" + }, + { + "bbox": [ + 104, + 605, + 506, + 693 + ], + "type": "text", + "content": ". 
Commonly " + }, + { + "bbox": [ + 104, + 605, + 506, + 693 + ], + "type": "inline_equation", + "content": "g(\\theta)" + }, + { + "bbox": [ + 104, + 605, + 506, + 693 + ], + "type": "text", + "content": " is an exponential moving average of gradient descent iterates and the teacher uses a lower temperature than the student." + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 104, + 698, + 506, + 733 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 698, + 506, + 733 + ], + "spans": [ + { + "bbox": [ + 104, + 698, + 506, + 733 + ], + "type": "text", + "content": "To capture desirable invariances and prevent degeneracy, data augmentation and regularization (e.g., Sinkhorn-Knopp normalization (Caron et al., 2020), mean entropy maximization (Assran et al., 2022)) are essential. As these are not directly relevant to our method, we omit them for brevity." + } + ] + } + ], + "index": 18 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "spans": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "type": "text", + "content": "Published as a conference paper at ICLR 2023" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 302, + 751, + 309, + 760 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 751, + 309, + 760 + ], + "spans": [ + { + "bbox": [ + 302, + 751, + 309, + 760 + ], + "type": "text", + "content": "2" + } + ] + } + ], + "index": 19 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 1 + }, + { + "para_blocks": [ + { + "bbox": [ + 105, + 81, + 174, + 94 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 81, + 174, + 94 + ], + "spans": [ + { + "bbox": [ + 105, + 81, + 174, + 94 + ], + "type": "text", + "content": "3 METHOD" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 104, + 107, + 506, + 185 + ], + "type": "text", + "angle": 0, + "lines": [ + { 
+ "bbox": [ + 104, + 107, + 506, + 185 + ], + "spans": [ + { + "bbox": [ + 104, + 107, + 506, + 185 + ], + "type": "text", + "content": "Ensembling is a technique that combines models to boost performance, and has been especially successful in supervised learning. We are interested in ensembling methods that carry over this success to SSL approaches. However, SSL has key differences from supervised learning, such as throw-away \"projection heads\", that result in a multitude of possibilities for how to ensemble. With this in mind, we propose first where to ensemble, and then how to ensemble. Those proposals result in an efficient \"peri-training\" ensembling technique specifically tailored for SSL and a family of weighted ensemble objectives; we subsequently suggest different ways to select the weights." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 105, + 201, + 233, + 213 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 201, + 233, + 213 + ], + "spans": [ + { + "bbox": [ + 105, + 201, + 233, + 213 + ], + "type": "text", + "content": "3.1 WHERE TO ENSEMBLE?" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 104, + 223, + 317, + 336 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 223, + 317, + 336 + ], + "spans": [ + { + "bbox": [ + 104, + 223, + 317, + 336 + ], + "type": "text", + "content": "Denote the teacher/student ensembles by " + }, + { + "bbox": [ + 104, + 223, + 317, + 336 + ], + "type": "inline_equation", + "content": "\\{t_i(Y|x)\\}_{i\\in [m]}" + }, + { + "bbox": [ + 104, + 223, + 317, + 336 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 104, + 223, + 317, + 336 + ], + "type": "inline_equation", + "content": "\\{s(Y|\\theta_j,x)\\}_{j\\in [m]}" + }, + { + "bbox": [ + 104, + 223, + 317, + 336 + ], + "type": "text", + "content": " and define each as in Sec. 
2; parameters " + }, + { + "bbox": [ + 104, + 223, + 317, + 336 + ], + "type": "inline_equation", + "content": "\\theta = \\{\\theta_{j}\\}_{j\\in [m]}\\in \\mathcal{T}^{m}" + }, + { + "bbox": [ + 104, + 223, + 317, + 336 + ], + "type": "text", + "content": " are independently initialized, all students use one temperature and all teachers another. We asymmetrically denote " + }, + { + "bbox": [ + 104, + 223, + 317, + 336 + ], + "type": "inline_equation", + "content": "t_i(Y|x)" + }, + { + "bbox": [ + 104, + 223, + 317, + 336 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 104, + 223, + 317, + 336 + ], + "type": "inline_equation", + "content": "s(Y|\\theta_j,x)" + }, + { + "bbox": [ + 104, + 223, + 317, + 336 + ], + "type": "text", + "content": " to emphasize that teachers' gradients are zero and that the students are distinct solely by way of " + }, + { + "bbox": [ + 104, + 223, + 317, + 336 + ], + "type": "inline_equation", + "content": "\\theta_{i}\\neq \\theta_{j}" + }, + { + "bbox": [ + 104, + 223, + 317, + 336 + ], + "type": "text", + "content": ". Studying heterogeneous architectures and/or different teacher parameterizations is left for future work." 
+ } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 104, + 342, + 318, + 441 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 342, + 318, + 441 + ], + "spans": [ + { + "bbox": [ + 104, + 342, + 318, + 441 + ], + "type": "text", + "content": "Recall that " + }, + { + "bbox": [ + 104, + 342, + 318, + 441 + ], + "type": "inline_equation", + "content": "\\theta_{j}" + }, + { + "bbox": [ + 104, + 342, + 318, + 441 + ], + "type": "text", + "content": " parameterizes the encoder, projection head, and codebook parameters: " + }, + { + "bbox": [ + 104, + 342, + 318, + 441 + ], + "type": "inline_equation", + "content": "\\theta_{j} = (\\omega_{j},\\psi_{j},\\{\\mu_{jy}\\}_{y\\in \\mathcal{Y}})" + }, + { + "bbox": [ + 104, + 342, + 318, + 441 + ], + "type": "text", + "content": ". We further restrict " + }, + { + "bbox": [ + 104, + 342, + 318, + 441 + ], + "type": "inline_equation", + "content": "\\mathcal{T}^m" + }, + { + "bbox": [ + 104, + 342, + 318, + 441 + ], + "type": "text", + "content": " such that " + }, + { + "bbox": [ + 104, + 342, + 318, + 441 + ], + "type": "inline_equation", + "content": "\\omega_{i} = \\omega_{j}" + }, + { + "bbox": [ + 104, + 342, + 318, + 441 + ], + "type": "text", + "content": ", i.e., we limit our consideration to ensembles of projection heads " + }, + { + "bbox": [ + 104, + 342, + 318, + 441 + ], + "type": "inline_equation", + "content": "h_{\\psi_j}" + }, + { + "bbox": [ + 104, + 342, + 318, + 441 + ], + "type": "text", + "content": " and/or codebooks " + }, + { + "bbox": [ + 104, + 342, + 318, + 441 + ], + "type": "inline_equation", + "content": "\\mu_{j}" + }, + { + "bbox": [ + 104, + 342, + 318, + 441 + ], + "type": "text", + "content": " but not encoders " + }, + { + "bbox": [ + 104, + 342, + 318, + 441 + ], + "type": "inline_equation", + "content": "r_{\\omega_j}" + }, + { + "bbox": [ + 104, + 342, + 318, + 441 + ], + "type": "text", + "content": ". 
This choice makes our ensemble method inherently different from traditional supervised ensembling or encoder " + }, + { + "bbox": [ + 104, + 342, + 318, + 441 + ], + "type": "inline_equation", + "content": "r_{\\omega}" + }, + { + "bbox": [ + 104, + 342, + 318, + 441 + ], + "type": "text", + "content": " ensembling: the ensembled parts are not used for evaluation but" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 104, + 441, + 506, + 464 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 441, + 506, + 464 + ], + "spans": [ + { + "bbox": [ + 104, + 441, + 506, + 464 + ], + "type": "text", + "content": "for improving the learning of the non-ensembled representation encoder during training; thus it requires no change to downstream evaluation and no extra computational cost. Ensembling of " + }, + { + "bbox": [ + 104, + 441, + 506, + 464 + ], + "type": "inline_equation", + "content": "r_{\\omega}" + }, + { + "bbox": [ + 104, + 441, + 506, + 464 + ], + "type": "text", + "content": " is left for future work." + } + ] + } + ], + "index": 6 + }, + { + "type": "image", + "bbox": [ + 322, + 204, + 508, + 350 + ], + "blocks": [ + { + "bbox": [ + 322, + 204, + 508, + 350 + ], + "lines": [ + { + "bbox": [ + 322, + 204, + 508, + 350 + ], + "spans": [ + { + "bbox": [ + 322, + 204, + 508, + 350 + ], + "type": "image", + "image_path": "1a32ce26502f3e27b447c32b32c0018e24f6fc95bb0f146fdb5c4659fa233a6f.jpg" + } + ] + } + ], + "index": 7, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 322, + 357, + 506, + 437 + ], + "lines": [ + { + "bbox": [ + 322, + 357, + 506, + 437 + ], + "spans": [ + { + "bbox": [ + 322, + 357, + 506, + 437 + ], + "type": "text", + "content": "Figure 2: Overview of " + }, + { + "bbox": [ + 322, + 357, + 506, + 437 + ], + "type": "inline_equation", + "content": "(h_{\\psi},\\mu)" + }, + { + "bbox": [ + 322, + 357, + 506, + 437 + ], + "type": "text", + "content": "-ensemble. 
Two augmented inputs are encoded by the teacher/student into representations, and then processed by an ensemble of heads. The loss for each head is weighted and summed into the final loss. Strike-through edges indicate stop-gradients. See Appx. A for pseudocode." + } + ] + } + ], + "index": 8, + "angle": 0, + "type": "image_caption" + } + ], + "index": 7 + }, + { + "bbox": [ + 105, + 479, + 222, + 491 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 479, + 222, + 491 + ], + "spans": [ + { + "bbox": [ + 105, + 479, + 222, + 491 + ], + "type": "text", + "content": "3.2 HOW TO ENSEMBLE?" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 104, + 501, + 506, + 546 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 501, + 506, + 546 + ], + "spans": [ + { + "bbox": [ + 104, + 501, + 506, + 546 + ], + "type": "text", + "content": "We would like to extend the loss to support an ensemble of teacher/student pairs while respecting the MLE intuition of the loss as in Sec. 2. Additionally, we want to facilitate data-dependent importance weights, thus enabling preferential treatment of some teacher/student pairs. 
We therefore propose a weighted average (unnormalized) cross-entropy loss," + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 195, + 554, + 504, + 585 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 195, + 554, + 504, + 585 + ], + "spans": [ + { + "bbox": [ + 195, + 554, + 504, + 585 + ], + "type": "interline_equation", + "content": "\\mathcal {L} _ {n} (\\theta) = \\frac {1}{n} \\sum_ {x \\in \\mathcal {D} _ {n}} \\sum_ {i, j \\in [ m ]} \\mathrm {H} ^ {\\times} \\left[ w _ {i j Y} \\odot t _ {i} (Y | x), s (Y | \\theta_ {j}, x) \\right] \\tag {4}", + "image_path": "94121557a8a8c811380f941c5abb1779209decf2c28c6b44763bdb4514ff490e.jpg" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 168, + 586, + 504, + 609 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 168, + 586, + 504, + 609 + ], + "spans": [ + { + "bbox": [ + 168, + 586, + 504, + 609 + ], + "type": "interline_equation", + "content": "\\text {where} \\quad w _ {i j y} = \\operatorname {softmax} \\left(\\left\\{\\frac {1}{\\gamma} f _ {i j y} (\\operatorname {stopgrad} (\\theta), x): i, j \\in [ m ] \\right\\}\\right). 
\\tag {5}", + "image_path": "087d7642c18cdf0d271df977caa3d94de8e78d78055c9015010c342242de837e.jpg" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 104, + 616, + 504, + 651 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 616, + 504, + 651 + ], + "spans": [ + { + "bbox": [ + 104, + 616, + 504, + 651 + ], + "type": "text", + "content": "The notation " + }, + { + "bbox": [ + 104, + 616, + 504, + 651 + ], + "type": "inline_equation", + "content": "w_{ijY} \\odot t_i(Y|x)" + }, + { + "bbox": [ + 104, + 616, + 504, + 651 + ], + "type": "text", + "content": " denotes a Hadamard product; i.e., the product of event-specific weights and probabilities for each " + }, + { + "bbox": [ + 104, + 616, + 504, + 651 + ], + "type": "inline_equation", + "content": "y \\in \\mathcal{Y}" + }, + { + "bbox": [ + 104, + 616, + 504, + 651 + ], + "type": "text", + "content": ". The hyperparameter " + }, + { + "bbox": [ + 104, + 616, + 504, + 651 + ], + "type": "inline_equation", + "content": "\\gamma" + }, + { + "bbox": [ + 104, + 616, + 504, + 651 + ], + "type": "text", + "content": " is the temperature. The function " + }, + { + "bbox": [ + 104, + 616, + 504, + 651 + ], + "type": "inline_equation", + "content": "f_{ijy}" + }, + { + "bbox": [ + 104, + 616, + 504, + 651 + ], + "type": "text", + "content": " is defined for brevity and discussed in the following section." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 104, + 654, + 506, + 733 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 654, + 506, + 733 + ], + "spans": [ + { + "bbox": [ + 104, + 654, + 506, + 733 + ], + "type": "text", + "content": "This objective admits generality and flexibility for introducing various weighting schemes, as it supports potential interactions between all teacher/student pairs and allows the weights to be both model- and data-dependent. 
Up to a constant independent of " + }, + { + "bbox": [ + 104, + 654, + 506, + 733 + ], + "type": "inline_equation", + "content": "\\theta" + }, + { + "bbox": [ + 104, + 654, + 506, + 733 + ], + "type": "text", + "content": ", it is an importance weighted average of (unnormalized) KL divergences between each teacher and each student; i.e., a mixture of MLE-like objectives. We stop the gradient of " + }, + { + "bbox": [ + 104, + 654, + 506, + 733 + ], + "type": "inline_equation", + "content": "w_{ijy}" + }, + { + "bbox": [ + 104, + 654, + 506, + 733 + ], + "type": "text", + "content": " to " + }, + { + "bbox": [ + 104, + 654, + 506, + 733 + ], + "type": "inline_equation", + "content": "\\theta" + }, + { + "bbox": [ + 104, + 654, + 506, + 733 + ], + "type": "text", + "content": " in order to keep the overall gradient a weighted average of students' log-likelihood gradients, similar to Eq. (1). We also normalize the weights such that each data point equally contributes to the loss." + } + ] + } + ], + "index": 14 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "spans": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "type": "text", + "content": "Published as a conference paper at ICLR 2023" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 302, + 751, + 309, + 760 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 751, + 309, + 760 + ], + "spans": [ + { + "bbox": [ + 302, + 751, + 309, + 760 + ], + "type": "text", + "content": "3" + } + ] + } + ], + "index": 15 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 2 + }, + { + "para_blocks": [ + { + "bbox": [ + 105, + 82, + 211, + 94 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 82, + 211, + 94 + ], + "spans": [ + { + "bbox": [ + 105, + 82, + 211, + 94 + ], + "type": "text", + "content": "3.3 HOW TO WEIGHT?" 
+ } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 104, + 102, + 506, + 159 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 102, + 506, + 159 + ], + "spans": [ + { + "bbox": [ + 104, + 102, + 506, + 159 + ], + "type": "text", + "content": "In this section, we present several instantiations of our losses with different weighting schemes. We empirically show in Sec. 5 that the particular choice of weighting scheme is critical for the representation performance and the induced diversity of " + }, + { + "bbox": [ + 104, + 102, + 506, + 159 + ], + "type": "inline_equation", + "content": "(h_{\\psi},\\mu)" + }, + { + "bbox": [ + 104, + 102, + 506, + 159 + ], + "type": "text", + "content": "-ensembles. For simplicity we assume " + }, + { + "bbox": [ + 104, + 102, + 506, + 159 + ], + "type": "inline_equation", + "content": "\\gamma = 1" + }, + { + "bbox": [ + 104, + 102, + 506, + 159 + ], + "type": "text", + "content": " in this section. We indicate with " + }, + { + "bbox": [ + 104, + 102, + 506, + 159 + ], + "type": "inline_equation", + "content": "\\Longleftrightarrow" + }, + { + "bbox": [ + 104, + 102, + 506, + 159 + ], + "type": "text", + "content": " that a loss has the same arg min as Eq. (4). For additional analysis and discussion, see Appx. D." 
+ } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 104, + 169, + 506, + 194 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 169, + 506, + 194 + ], + "spans": [ + { + "bbox": [ + 104, + 169, + 506, + 194 + ], + "type": "text", + "content": "Uniform weighting (UNIF) The simplest strategy is to treat different teacher/student pairs independently and average each with uniform weighting; i.e.," + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 143, + 198, + 505, + 228 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 143, + 198, + 505, + 228 + ], + "spans": [ + { + "bbox": [ + 143, + 198, + 505, + 228 + ], + "type": "interline_equation", + "content": "f _ {i j y} = \\log \\delta (i - j) \\Longleftrightarrow \\mathcal {L} _ {n} ^ {\\mathrm {U N I F}} (\\theta) = \\frac {1}{n} \\sum_ {x \\in \\mathcal {D} _ {n}} \\frac {1}{m} \\sum_ {i \\in [ m ]} \\mathrm {H} ^ {\\times} \\left[ t _ {i} (Y | x), s (Y | \\theta_ {i}, x) \\right] \\tag {6}", + "image_path": "2a3f9d649a7e19740b115addff3277875ce27523629f74fe044736fef0834577.jpg" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 104, + 234, + 506, + 260 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 234, + 506, + 260 + ], + "spans": [ + { + "bbox": [ + 104, + 234, + 506, + 260 + ], + "type": "text", + "content": "This strategy introduces uniform weights " + }, + { + "bbox": [ + 104, + 234, + 506, + 260 + ], + "type": "inline_equation", + "content": "w_{i} = \\frac{1}{m}" + }, + { + "bbox": [ + 104, + 234, + 506, + 260 + ], + "type": "text", + "content": " over ensemble elements. 
The role of " + }, + { + "bbox": [ + 104, + 234, + 506, + 260 + ], + "type": "inline_equation", + "content": "\\log \\delta (i - j)" + }, + { + "bbox": [ + 104, + 234, + 506, + 260 + ], + "type": "text", + "content": " (here and elsewhere) is to sub-select corresponding teacher/student pairs rather than all " + }, + { + "bbox": [ + 104, + 234, + 506, + 260 + ], + "type": "inline_equation", + "content": "m^2" + }, + { + "bbox": [ + 104, + 234, + 506, + 260 + ], + "type": "text", + "content": " pairs." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 104, + 270, + 505, + 304 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 270, + 505, + 304 + ], + "spans": [ + { + "bbox": [ + 104, + 270, + 505, + 304 + ], + "type": "text", + "content": "Probability weighting (PROB) An alternative to using the average cross-entropy loss (UNIF) is to compute the cross-entropy loss of the average predictions whose gradient is weighted by " + }, + { + "bbox": [ + 104, + 270, + 505, + 304 + ], + "type": "inline_equation", + "content": "w_{ijy}" + }, + { + "bbox": [ + 104, + 270, + 505, + 304 + ], + "type": "text", + "content": " (see Appx. D.1). 
At " + }, + { + "bbox": [ + 104, + 270, + 505, + 304 + ], + "type": "inline_equation", + "content": "\\gamma = 1" + }, + { + "bbox": [ + 104, + 270, + 505, + 304 + ], + "type": "text", + "content": ", those gradient weights simplify into an average over the student probabilities:" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 111, + 309, + 505, + 349 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 111, + 309, + 505, + 349 + ], + "spans": [ + { + "bbox": [ + 111, + 309, + 505, + 349 + ], + "type": "interline_equation", + "content": "f _ {i j y} = \\log s (y | \\theta_ {j}, x) \\iff \\mathcal {L} _ {n} ^ {\\mathrm {P R O B}} (\\theta) = \\frac {1}{n} \\sum_ {x \\in \\mathcal {D} _ {n}} \\mathsf {H} ^ {\\times} \\left[ \\frac {1}{m} \\sum_ {i \\in [ m ]} t _ {i} (Y | x), \\frac {1}{m} \\sum_ {j \\in [ m ]} s (Y | \\theta_ {j}, x) \\right] \\tag {7}", + "image_path": "bec58df67e15c99412f65c1ca799b6e2da8e1c885c3bf737ecc3915e34ec0d24.jpg" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 104, + 352, + 506, + 419 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 352, + 506, + 419 + ], + "spans": [ + { + "bbox": [ + 104, + 352, + 506, + 419 + ], + "type": "text", + "content": "Averaging the predictive distributions introduces correspondence between codes from different heads; thus different heads are no longer independent but instead cooperate to match the student to the teachers. The loss favors student heads with more confident predictions (i.e., larger " + }, + { + "bbox": [ + 104, + 352, + 506, + 419 + ], + "type": "inline_equation", + "content": "s(y|\\theta_j, x)" + }, + { + "bbox": [ + 104, + 352, + 506, + 419 + ], + "type": "text", + "content": "). Further motivation for averaging predictions comes from multi-sample losses studied in Morningstar et al. (2022). 
Note that the joint convexity of (unnormalized) KL divergence implies that this loss is upper bounded by the UNIF loss up to some constant in " + }, + { + "bbox": [ + 104, + 352, + 506, + 419 + ], + "type": "inline_equation", + "content": "\\theta" + }, + { + "bbox": [ + 104, + 352, + 506, + 419 + ], + "type": "text", + "content": " (see Appx. D)." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 104, + 423, + 506, + 458 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 423, + 506, + 458 + ], + "spans": [ + { + "bbox": [ + 104, + 423, + 506, + 458 + ], + "type": "text", + "content": "Although the PROB strategy favors confident student predictions, the weights change as a function of " + }, + { + "bbox": [ + 104, + 423, + 506, + 458 + ], + "type": "inline_equation", + "content": "y \\in \\mathcal{Y}" + }, + { + "bbox": [ + 104, + 423, + 506, + 458 + ], + "type": "text", + "content": ". This may be in conflict with our intuition that SSL is like maximum likelihood (Sec. 2), since under that view, the teacher is responsible for weighting outcomes." 
+ } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 104, + 469, + 504, + 492 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 469, + 504, + 492 + ], + "spans": [ + { + "bbox": [ + 104, + 469, + 504, + 492 + ], + "type": "text", + "content": "Entropy weighting (ENT) Another way to favor heads with more confident predictions is to directly weight by their predictive entropies; i.e.," + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 136, + 497, + 504, + 510 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 136, + 497, + 504, + 510 + ], + "spans": [ + { + "bbox": [ + 136, + 497, + 504, + 510 + ], + "type": "interline_equation", + "content": "f _ {i j y} = - \\mathrm {H} [ t _ {i} (Y | x) ] + \\log \\delta (i - j) \\Longleftrightarrow \\tag {8}", + "image_path": "283805d1f3d44cc872a1bf1423900f11dbb3903ef735cd4f648e15e502b87658.jpg" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 119, + 512, + 504, + 541 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 119, + 512, + 504, + 541 + ], + "spans": [ + { + "bbox": [ + 119, + 512, + 504, + 541 + ], + "type": "interline_equation", + "content": "\\mathcal {L} _ {n} ^ {\\mathrm {ENT}} (\\theta) = \\frac {1}{n} \\sum_ {x \\in \\mathcal {D} _ {n}} \\sum_ {i \\in [ m ]} \\operatorname {softmax} _ {i} \\left(\\left\\{- \\frac {1}{\\gamma} \\mathrm {H} \\left[ t _ {i ^ {\\prime}} (Y | x) \\right] : i ^ {\\prime} \\in [ m ] \\right\\}\\right) \\mathrm {H} ^ {\\times} \\left[ t _ {i} (Y | x), s (Y | \\theta_ {i}, x) \\right] \\tag {9}", + "image_path": "4d0af2602010ef5a92d1b7a46911659ccd8e661672663af01498e55be239df54.jpg" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 104, + 548, + 506, + 639 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 548, + 506, + 639 + ], + "spans": [ + { + "bbox": [ + 104, + 548, + 506, + 639 + ], + "type": "text", + "content": "where the weight " + }, + { + "bbox": [ 
104, + 548, + 506, + 639 + ], + "type": "inline_equation", + "content": "w_{i} = \\mathrm{softmax}_{i}\\left(\\{-\\frac{1}{\\gamma}\\mathrm{H}[t_{i'}(Y|x)]:i'\\in [m]\\}\\right)" + }, + { + "bbox": [ + 104, + 548, + 506, + 639 + ], + "type": "text", + "content": " is inversely correlated with the entropy of teacher predictions. In other words, the head whose teacher has a lower entropy (i.e., higher confidence about its prediction) is given a larger importance weight for learning the representation. Like PROB, this strategy encourages \"data specialists\" by emphasizing strongly opinionated teacher heads for different inputs. Like UNIF, different heads are treated more independently (than in PROB), since interaction between different heads is introduced only through the weight computation. By preferring low-entropy teachers we also favor low-variance teachers; this aligns with the intuition that using a lower-variance teacher benefits representation quality (Wang et al., 2022)." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 104, + 650, + 504, + 673 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 650, + 504, + 673 + ], + "spans": [ + { + "bbox": [ + 104, + 650, + 504, + 673 + ], + "type": "text", + "content": "Countless other weighting schemes It is impossible to fully explore the space of weightings; the following schemes might also be interesting to study in detail but were omitted due to resource constraints." 
+ } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 127, + 679, + 504, + 691 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 127, + 679, + 504, + 691 + ], + "spans": [ + { + "bbox": [ + 127, + 679, + 504, + 691 + ], + "type": "interline_equation", + "content": "f _ {i j y} = 0 \\quad \\text {(Favors all pairs of teachers/students equally)} \\tag {10}", + "image_path": "a5b0dd2ee85a1b798d613b9610b31d298bef0c9c0c956b830a114b950e356b98.jpg" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 127, + 693, + 504, + 704 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 127, + 693, + 504, + 704 + ], + "spans": [ + { + "bbox": [ + 127, + 693, + 504, + 704 + ], + "type": "interline_equation", + "content": "f _ {i j y} = \\log t _ {i} (y | x) \\quad (\\text {Favors opinionated teachers}) \\tag {11}", + "image_path": "e4d83453d06cd40931f4fda8f6e5b62177e58d6251631a40bae572749f229fd7.jpg" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 127, + 706, + 504, + 719 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 127, + 706, + 504, + 719 + ], + "spans": [ + { + "bbox": [ + 127, + 706, + 504, + 719 + ], + "type": "interline_equation", + "content": "f _ {i j y} = - \\mathrm {H} [ s (Y | \\theta_ {j}, x) ] \\quad (\\text {Favors low-entropy students}) \\tag {12}", + "image_path": "aa7d72f3c35764f959195a2200736edc2eea074bc21667cbf5863faaaff899a5.jpg" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 127, + 720, + 504, + 733 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 127, + 720, + 504, + 733 + ], + "spans": [ + { + "bbox": [ + 127, + 720, + 504, + 733 + ], + "type": "interline_equation", + "content": "f _ {i j y} = \\mathsf {K} \\left[ t _ {i} (Y | x), s (Y | \\theta_ {j}, x) \\right] \\quad (\\text {Favors disagreeing teacher/student pairs}) \\tag {13}", + "image_path": "9473a82d3d8c75cc19da308beec3849d4fb7de36e420cf9b3ff30853d96098dd.jpg" + } + ] + } + ], + "index": 18 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "spans": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "type": "text", + "content": "Published as a conference paper at ICLR 2023" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 302, + 751, + 309, + 760 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 751, + 309, + 760 + ], + "spans": [ + { + "bbox": [ + 302, + 751, + 309, + 760 + ], + "type": "text", + "content": "4" + } + ] + } + ], + "index": 19 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 3 + }, + { + "para_blocks": [ + { + "bbox": [ + 125, + 81, + 504, + 95 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 125, + 81, + 504, + 95 + ], + "spans": [ + { + "bbox": [ + 125, + 81, + 504, + 95 + ], + "type": "interline_equation", + "content": "f _ {i j y} = - \\frac {1}{2} \\log \\left(\\operatorname {Var} _ {t _ {i} (Y | x)} [ Y ] + \\epsilon\\right) \\quad \\text {(Favors low-variance teachers; e.g., } \\epsilon = \\frac {1}{12}) \\tag {14}", + "image_path": "fb2a31360be1bcc070ffcda6e510220476d73c24972894cd0826998ed24f0938.jpg" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 104, + 98, + 507, + 122 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 98, + 507, + 122 + ], + "spans": [ + { + "bbox": [ + 104, + 98, + 507, + 122 + ], + "type": "text", + "content": "Note that \"aligned\" versions of all schemes are possible by using " + }, + { + "bbox": [ + 104, + 98, + 507, + 122 + ], + "type": "inline_equation", + "content": "f_{ijy} + \\log \\delta (i - j)" + }, + { + "bbox": [ + 104, + 98, + 507, + 122 + ], + "type": "text", + "content": ". 
We did early experiments exploring Eqs. (11) and (12), but the results were inferior and are largely omitted below." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 105, + 137, + 212, + 148 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 137, + 212, + 148 + ], + "spans": [ + { + "bbox": [ + 105, + 137, + 212, + 148 + ], + "type": "text", + "content": "4 RELATED WORK" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 104, + 161, + 506, + 328 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 161, + 506, + 328 + ], + "spans": [ + { + "bbox": [ + 104, + 161, + 506, + 328 + ], + "type": "text", + "content": "Self-supervised learning Recent work on self-supervised learning (SSL) focuses on discriminative or generative approaches. Most discriminative approaches seek to learn augmentation-invariant representations by enforcing the similarity between augmented pairs of the same image while utilizing different techniques to avoid collapse. Contrastive methods (Chen et al., 2020a; He et al., 2020; Wu et al., 2018; Hjelm et al., 2018; Bachman et al., 2019; Tian et al., 2020) use a large number of negative samples with a noise-contrastive objective (Gutmann & Hyvarinen, 2010; Oord et al., 2018). A large body of followup work eliminates the necessity of explicit negative samples with various techniques, including clustering assignment constraints (Caron et al., 2018; 2020; 2021; Asano et al., 2019), bootstrapping (Grill et al., 2020) or self-distillation (Caron et al., 2021) inspired by mean teacher (Tarvainen & Valpola, 2017), asymmetric architecture design (Grill et al., 2020; Chen & He, 2021), or redundancy reduction (Zbontar et al., 2021; Bardes et al., 2021). Recent generative approaches that use masked image modeling as the pretraining task (Dosovitskiy et al., 2020; Bao et al., 2021; He et al., 2022; Zhou et al., 2022; Xie et al., 2022) have achieved competitive finetuning performance. 
Our method may be applicable to all of the above methods that have some sort of \"projection head\", such as most of the discriminative approaches." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 104, + 338, + 507, + 515 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 338, + 507, + 515 + ], + "spans": [ + { + "bbox": [ + 104, + 338, + 507, + 515 + ], + "type": "text", + "content": "Ensemble methods Ensembling has been extensively studied for improving model performance (Hansen & Salamon, 1990; Perrone & Cooper, 1992; Dietterich, 2000) and uncertainty estimation (Lakshminarayanan et al., 2017; Ovadia et al., 2019) in supervised learning and semi-supervised learning (Laine & Aila, 2016). A major research direction is to train efficient ensembles with partial parameter sharing (Lee et al., 2015; Wen et al., 2020; Dusenberry et al., 2020; Havasi et al., 2020) or intermediate checkpointing (Huang et al., 2017; Garipov et al., 2018). Our method also shares the encoder parameters across ensembles, which is closely related to multi-headed networks (Lee et al., 2015; Tran et al., 2020). Ensemble methods for SSL are less explored. Some recent work studies ensembles of supervised models adapted from pretrained SSL models. Gontijo-Lopes et al. (2022) conduct an empirical study of ensembles adapted from different SSL models and find that higher divergence in SSL methods leads to less correlated errors and better performance. Wortsman et al. (2022) ensemble multiple finetuned models adapted from the same SSL model by averaging their weights, which boosts the performance without any inference cost. Our method differs from them in that it (1) applies to the SSL training stage to directly improve representation quality, rather than aggregates multiple models in the post-training/finetuning stage; (2) introduces little training cost and no evaluation cost; and (3) is complementary to these post-training/finetuning ensembling methods." 
+ } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 105, + 529, + 201, + 542 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 529, + 201, + 542 + ], + "spans": [ + { + "bbox": [ + 105, + 529, + 201, + 542 + ], + "type": "text", + "content": "5 EXPERIMENTS" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 104, + 555, + 504, + 624 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 555, + 504, + 624 + ], + "spans": [ + { + "bbox": [ + 104, + 555, + 504, + 624 + ], + "type": "text", + "content": "We carefully study the impact of " + }, + { + "bbox": [ + 104, + 555, + 504, + 624 + ], + "type": "inline_equation", + "content": "(h_{\\psi},\\mu)" + }, + { + "bbox": [ + 104, + 555, + 504, + 624 + ], + "type": "text", + "content": "-ensembles and our selected weighted ensemble losses (UNIF, PROB, and ENT) on smaller DINO models in Sec. 5.1. Using what we learned in those experiments, in Sec. 5.2 we present new state-of-the-art results on ImageNet-1K on various metrics for multiple model sizes by ensembling both DINO- and MSN-based models. Finally, we explore ensemble evaluations in the transfer learning setting in Sec. 5.3. Additional experimental details and results are in Appx. B and Appx. C, respectively." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 104, + 632, + 507, + 734 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 632, + 507, + 734 + ], + "spans": [ + { + "bbox": [ + 104, + 632, + 507, + 734 + ], + "type": "text", + "content": "Experimental setup We assessed the effectiveness of our method with two SSL methods: DINO (Caron et al., 2021) and MSN (Assran et al., 2022). In order to ensure that we are comparing against strong baselines, we consider three different classes of baselines: (1) evaluation numbers reported in the original works (Caron et al. (2021), Assran et al. (2022), and Zhou et al. 
(2022) for an additional baseline iBOT); (2) evaluation of our implementation using the hyperparameters reported in the original works (DINO only, for space reasons) to validate our implementation; and (3) evaluation of our implementation using the best hyperparameters that we found by tuning the baselines (DINO and MSN) for fair comparisons. In almost all models and evaluations, our retuned baselines give nontrivial performance improvements on top of previously reported numbers. These type (3) baselines" + } + ] + } + ], + "index": 8 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "spans": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "type": "text", + "content": "Published as a conference paper at ICLR 2023" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 302, + 751, + 309, + 760 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 751, + 309, + 760 + ], + "spans": [ + { + "bbox": [ + 302, + 751, + 309, + 760 + ], + "type": "text", + "content": "5" + } + ] + } + ], + "index": 9 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 4 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 107, + 145, + 504, + 277 + ], + "blocks": [ + { + "bbox": [ + 104, + 79, + 506, + 137 + ], + "lines": [ + { + "bbox": [ + 104, + 79, + 506, + 137 + ], + "spans": [ + { + "bbox": [ + 104, + 79, + 506, + 137 + ], + "type": "text", + "content": "Table 1: Comparison of different ensemble strategies. ENT and PROB significantly improve over the non-ensembled baseline, while UNIF leads to no gains. Ensembling both the projection head and the codebook works best. All models are DINO* ViT-S/16 trained for 300 epochs. Averages and standard deviations are over 3 initialization seeds. The linear evaluation results on ImageNet-1K with different amounts of labeled data are reported here (see Table 11 in Appx. 
C.3 for all metrics)." + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 107, + 145, + 504, + 277 + ], + "lines": [ + { + "bbox": [ + 107, + 145, + 504, + 277 + ], + "spans": [ + { + "bbox": [ + 107, + 145, + 504, + 277 + ], + "type": "table", + "html": "
HowWhere# of Labels Per Class
Proj. hψCode. μ15~13 (1%)Full
Base40.6 ± 0.257.9 ± 0.363.4 ± 0.274.4 ± 0.1
UNIF40.4 ± 0.457.6 ± 0.363.3 ± 0.374.5 ± 0.2
PROB39.8 ± 0.5 ↓ 0.957.4 ± 0.4 ↓ 0.563.0 ± 0.4 ↓ 0.474.8 ± 0.1 ↑ 0.4
PROB41.9 ± 0.3 ↑ 1.359.6 ± 0.4 ↑ 1.765.1 ± 0.3 ↑ 1.775.4 ± 0.1 ↑ 1.0
ENT-ST40.0 ± 0.5 ↓ 0.657.3 ± 0.5 ↓ 0.662.7 ± 0.5 ↓ 0.774.0 ± 0.4 ↓ 0.4
ENT40.8 ± 0.458.0 ± 0.463.5 ± 0.474.5 ± 0.3
ENT43.0 ± 0.6 ↑ 2.459.7 ± 0.7 ↑ 1.864.8 ± 0.5 ↑ 1.475.1 ± 0.4 ↑ 0.7
ENT44.0 ± 0.2 ↑ 3.460.5 ± 0.3 ↑ 2.665.5 ± 0.1 ↑ 2.275.3 ± 0.1 ↑ 0.9
", + "image_path": "6447524140b7634dc92578536ae658f8f0846bd5737531fcd84c0dea6796f34b.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "table_body" + } + ], + "index": 2 + }, + { + "bbox": [ + 104, + 285, + 506, + 342 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 285, + 506, + 342 + ], + "spans": [ + { + "bbox": [ + 104, + 285, + 506, + 342 + ], + "type": "text", + "content": "we label DINO* and MSN*, and we use them as the base models for our experiments with " + }, + { + "bbox": [ + 104, + 285, + 506, + 342 + ], + "type": "inline_equation", + "content": "(h_{\\psi}, \\mu)" + }, + { + "bbox": [ + 104, + 285, + 506, + 342 + ], + "type": "text", + "content": "-ensembles and weighted ensemble losses. Appx. B.2.1 describes the details for getting such strong performance for DINO* and MSN*. In particular, we find that the projection head has a crucial impact on label efficiency of representations and using a smaller head (3-layer MLP with hidden size 1024) significantly improves few-shot evaluation performance (see Appx. C.2)." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 104, + 353, + 506, + 441 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 353, + 506, + 441 + ], + "spans": [ + { + "bbox": [ + 104, + 353, + 506, + 441 + ], + "type": "text", + "content": "Evaluation metrics We compared models trained with and without our " + }, + { + "bbox": [ + 104, + 353, + 506, + 441 + ], + "type": "inline_equation", + "content": "(h_{\\psi},\\mu)" + }, + { + "bbox": [ + 104, + 353, + 506, + 441 + ], + "type": "text", + "content": "-ensembles by measuring various evaluation metrics on ImageNet-1K (Deng et al., 2009). The evaluation metrics reflect the decodability and the label efficiency of learned representations. 
We measured the decodability using both a linear classifier, following the common linear evaluation protocol, and a " + }, + { + "bbox": [ + 104, + 353, + 506, + 441 + ], + "type": "inline_equation", + "content": "k" + }, + { + "bbox": [ + 104, + 353, + 506, + 441 + ], + "type": "text", + "content": "-NN classifier, following Caron et al. (2021). We measured the label efficiency by evaluating the linear evaluation performance in few-shot settings, including " + }, + { + "bbox": [ + 104, + 353, + 506, + 441 + ], + "type": "inline_equation", + "content": "1\\%" + }, + { + "bbox": [ + 104, + 353, + 506, + 441 + ], + "type": "text", + "content": " (" + }, + { + "bbox": [ + 104, + 353, + 506, + 441 + ], + "type": "inline_equation", + "content": "\\sim 13" + }, + { + "bbox": [ + 104, + 353, + 506, + 441 + ], + "type": "text", + "content": "-shot) labeled-data evaluation (Chen et al., 2020a) and 1-/2-/5-shot evaluations (Assran et al., 2022). All evaluations used frozen representations of the teacher encoder; we did not fine-tune the models. See Appx. B.3 for details." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 105, + 455, + 313, + 467 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 455, + 313, + 467 + ], + "spans": [ + { + "bbox": [ + 105, + 455, + 313, + 467 + ], + "type": "text", + "content": "5.1 EMPIRICAL STUDY OF " + }, + { + "bbox": [ + 105, + 455, + 313, + 467 + ], + "type": "inline_equation", + "content": "(h_{\\psi},\\mu)" + }, + { + "bbox": [ + 105, + 455, + 313, + 467 + ], + "type": "text", + "content": "-ENSEMBLES" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 104, + 475, + 507, + 565 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 475, + 507, + 565 + ], + "spans": [ + { + "bbox": [ + 104, + 475, + 507, + 565 + ], + "type": "text", + "content": "Table 1 compares different strategies for where and how to ensemble. Fig. 
4 compares the impact of the weighted ensemble loss on " + }, + { + "bbox": [ + 104, + 475, + 507, + 565 + ], + "type": "inline_equation", + "content": "(h_{\\psi},\\mu)" + }, + { + "bbox": [ + 104, + 475, + 507, + 565 + ], + "type": "text", + "content": "-ensemble diversity. Fig. 3 shows the effect of increasing the number of ensembles, adjusting the temperature " + }, + { + "bbox": [ + 104, + 475, + 507, + 565 + ], + "type": "inline_equation", + "content": "\\gamma" + }, + { + "bbox": [ + 104, + 475, + 507, + 565 + ], + "type": "text", + "content": ", and increasing baseline projection head parameters. In these experiments, we used DINO* with ViT-S/16 trained for 300 epochs as the base model. We compared different ensemble methods applied to the base model with " + }, + { + "bbox": [ + 104, + 475, + 507, + 565 + ], + "type": "inline_equation", + "content": "m = 16" + }, + { + "bbox": [ + 104, + 475, + 507, + 565 + ], + "type": "text", + "content": " heads which we found to work the best. 
For the ENT strategy in Table 1, the entropy weighting temperature " + }, + { + "bbox": [ + 104, + 475, + 507, + 565 + ], + "type": "inline_equation", + "content": "\\gamma" + }, + { + "bbox": [ + 104, + 475, + 507, + 565 + ], + "type": "text", + "content": " is set to " + }, + { + "bbox": [ + 104, + 475, + 507, + 565 + ], + "type": "inline_equation", + "content": "0.05\\times \\log (c)" + }, + { + "bbox": [ + 104, + 475, + 507, + 565 + ], + "type": "text", + "content": " by default which is selected from " + }, + { + "bbox": [ + 104, + 475, + 507, + 565 + ], + "type": "inline_equation", + "content": "\\{0.0125,0.025,0.05,0.1,0.2\\} \\times \\log (c)" + }, + { + "bbox": [ + 104, + 475, + 507, + 565 + ], + "type": "text", + "content": " where the scale " + }, + { + "bbox": [ + 104, + 475, + 507, + 565 + ], + "type": "inline_equation", + "content": "\\log (c)" + }, + { + "bbox": [ + 104, + 475, + 507, + 565 + ], + "type": "text", + "content": " gives the maximum entropy of the codebook size " + }, + { + "bbox": [ + 104, + 475, + 507, + 565 + ], + "type": "inline_equation", + "content": "c" + }, + { + "bbox": [ + 104, + 475, + 507, + 565 + ], + "type": "text", + "content": ". For PROB, we keep " + }, + { + "bbox": [ + 104, + 475, + 507, + 565 + ], + "type": "inline_equation", + "content": "\\gamma = 1" + }, + { + "bbox": [ + 104, + 475, + 507, + 565 + ], + "type": "text", + "content": "." 
+ } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 104, + 575, + 506, + 633 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 575, + 506, + 633 + ], + "spans": [ + { + "bbox": [ + 104, + 575, + 506, + 633 + ], + "type": "text", + "content": "Where to ensemble We study the where question by ensembling either the projection head " + }, + { + "bbox": [ + 104, + 575, + 506, + 633 + ], + "type": "inline_equation", + "content": "h_{\\psi}" + }, + { + "bbox": [ + 104, + 575, + 506, + 633 + ], + "type": "text", + "content": ", the codebook " + }, + { + "bbox": [ + 104, + 575, + 506, + 633 + ], + "type": "inline_equation", + "content": "\\mu" + }, + { + "bbox": [ + 104, + 575, + 506, + 633 + ], + "type": "text", + "content": ", or both with the ENT and the PROB ensemble strategies, as shown in Table 1. We find that ensembling both " + }, + { + "bbox": [ + 104, + 575, + 506, + 633 + ], + "type": "inline_equation", + "content": "h_{\\psi}" + }, + { + "bbox": [ + 104, + 575, + 506, + 633 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 104, + 575, + 506, + 633 + ], + "type": "inline_equation", + "content": "\\mu" + }, + { + "bbox": [ + 104, + 575, + 506, + 633 + ], + "type": "text", + "content": " provides the largest gains for both losses, probably due to the increased flexibility for learning a diverse ensemble. Interestingly, only ensembling " + }, + { + "bbox": [ + 104, + 575, + 506, + 633 + ], + "type": "inline_equation", + "content": "h_{\\psi}" + }, + { + "bbox": [ + 104, + 575, + 506, + 633 + ], + "type": "text", + "content": " also works well for the ENT strategy." 
+ } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 104, + 643, + 507, + 733 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 643, + 507, + 733 + ], + "spans": [ + { + "bbox": [ + 104, + 643, + 507, + 733 + ], + "type": "text", + "content": "How to ensemble We study the how question by considering four different loss variants: UNIF, PROB, ENT, and the variant of ENT with student entropy weighting. We find that when we ensemble both the projection head " + }, + { + "bbox": [ + 104, + 643, + 507, + 733 + ], + "type": "inline_equation", + "content": "h_{\\psi}" + }, + { + "bbox": [ + 104, + 643, + 507, + 733 + ], + "type": "text", + "content": " and the codebook " + }, + { + "bbox": [ + 104, + 643, + 507, + 733 + ], + "type": "inline_equation", + "content": "\\mu" + }, + { + "bbox": [ + 104, + 643, + 507, + 733 + ], + "type": "text", + "content": ", the ENT ensemble strategy leads to the most significant gains (e.g., 3.4 p.p. gains for 1-shot and 0.9 p.p. gains for full-data). The PROB strategy also consistently improves the performance with a slightly larger gain (1 p.p.) in full-data evaluation. In contrast, we see no gains for the UNIF strategy over the baseline. We also study a variant of ENT that uses the student entropy (i.e., Eq. (12) with the log " + }, + { + "bbox": [ + 104, + 643, + 507, + 733 + ], + "type": "inline_equation", + "content": "\\delta(i - j)" + }, + { + "bbox": [ + 104, + 643, + 507, + 733 + ], + "type": "text", + "content": " term) for the importance weights (denoted as ENT-ST). ENT-ST performs much worse than ENT and is even worse than the baseline." 
+ } + ] + } + ], + "index": 8 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "spans": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "type": "text", + "content": "Published as a conference paper at ICLR 2023" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 302, + 751, + 309, + 760 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 751, + 309, + 760 + ], + "spans": [ + { + "bbox": [ + 302, + 751, + 309, + 760 + ], + "type": "text", + "content": "6" + } + ] + } + ], + "index": 9 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 5 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 111, + 81, + 236, + 169 + ], + "blocks": [ + { + "bbox": [ + 111, + 81, + 236, + 169 + ], + "lines": [ + { + "bbox": [ + 111, + 81, + 236, + 169 + ], + "spans": [ + { + "bbox": [ + 111, + 81, + 236, + 169 + ], + "type": "image", + "image_path": "7d60d8124efc22ad555d6fbbdbc43aab642f5688816526f2126d36420340702c.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 111, + 175, + 235, + 186 + ], + "lines": [ + { + "bbox": [ + 111, + 175, + 235, + 186 + ], + "spans": [ + { + "bbox": [ + 111, + 175, + 235, + 186 + ], + "type": "text", + "content": "(a) Scaling of " + }, + { + "bbox": [ + 111, + 175, + 235, + 186 + ], + "type": "inline_equation", + "content": "(h_{\\psi},\\mu)" + }, + { + "bbox": [ + 111, + 175, + 235, + 186 + ], + "type": "text", + "content": "-ensembles." 
+ } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_caption" + } + ], + "index": 1 + }, + { + "type": "image", + "bbox": [ + 242, + 81, + 366, + 169 + ], + "blocks": [ + { + "bbox": [ + 242, + 81, + 366, + 169 + ], + "lines": [ + { + "bbox": [ + 242, + 81, + 366, + 169 + ], + "spans": [ + { + "bbox": [ + 242, + 81, + 366, + 169 + ], + "type": "image", + "image_path": "79a22c445327a2374c4567b01548ac9e1cde74ecbd0a82714617a48008948f91.jpg" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 104, + 195, + 504, + 251 + ], + "lines": [ + { + "bbox": [ + 104, + 195, + 504, + 251 + ], + "spans": [ + { + "bbox": [ + 104, + 195, + 504, + 251 + ], + "type": "text", + "content": "Figure 3: Empirical study of " + }, + { + "bbox": [ + 104, + 195, + 504, + 251 + ], + "type": "inline_equation", + "content": "(h_{\\psi},\\mu)" + }, + { + "bbox": [ + 104, + 195, + 504, + 251 + ], + "type": "text", + "content": "-ensembles. (a) The gains of " + }, + { + "bbox": [ + 104, + 195, + 504, + 251 + ], + "type": "inline_equation", + "content": "(h_{\\psi},\\mu)" + }, + { + "bbox": [ + 104, + 195, + 504, + 251 + ], + "type": "text", + "content": "-ensembles start to diminish above 16 heads. (b) The temperature for entropy weighting has a larger impact on few-shot performance. 16 heads are used and " + }, + { + "bbox": [ + 104, + 195, + 504, + 251 + ], + "type": "inline_equation", + "content": "\\gamma" + }, + { + "bbox": [ + 104, + 195, + 504, + 251 + ], + "type": "text", + "content": " is scaled by " + }, + { + "bbox": [ + 104, + 195, + 504, + 251 + ], + "type": "inline_equation", + "content": "\\log(c)" + }, + { + "bbox": [ + 104, + 195, + 504, + 251 + ], + "type": "text", + "content": ". 
(c) Our " + }, + { + "bbox": [ + 104, + 195, + 504, + 251 + ], + "type": "inline_equation", + "content": "(h_{\\psi},\\mu)" + }, + { + "bbox": [ + 104, + 195, + 504, + 251 + ], + "type": "text", + "content": "-ensembles outperform all non-ensembled baselines when controlling for the number of parameters. An overly powerful non-ensembled projection head significantly harms accuracy. " + }, + { + "bbox": [ + 104, + 195, + 504, + 251 + ], + "type": "inline_equation", + "content": "1\\%" + }, + { + "bbox": [ + 104, + 195, + 504, + 251 + ], + "type": "text", + "content": " data evaluation is shown. Also see Fig. 5." + } + ] + } + ], + "index": 7, + "angle": 0, + "type": "image_caption" + } + ], + "index": 3 + }, + { + "type": "image", + "bbox": [ + 373, + 81, + 500, + 169 + ], + "blocks": [ + { + "bbox": [ + 246, + 175, + 364, + 187 + ], + "lines": [ + { + "bbox": [ + 246, + 175, + 364, + 187 + ], + "spans": [ + { + "bbox": [ + 246, + 175, + 364, + 187 + ], + "type": "text", + "content": "(b) Effect of ENT temperature " + }, + { + "bbox": [ + 246, + 175, + 364, + 187 + ], + "type": "inline_equation", + "content": "\\gamma" + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 373, + 81, + 500, + 169 + ], + "lines": [ + { + "bbox": [ + 373, + 81, + 500, + 169 + ], + "spans": [ + { + "bbox": [ + 373, + 81, + 500, + 169 + ], + "type": "image", + "image_path": "f0d9635be7f5c83827576c16d79bb2fa15568867fec3b8742609cc81341a2443.jpg" + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 380, + 175, + 492, + 186 + ], + "lines": [ + { + "bbox": [ + 380, + 175, + 492, + 186 + ], + "spans": [ + { + "bbox": [ + 380, + 175, + 492, + 186 + ], + "type": "text", + "content": "(c) Comparing different heads." 
+ } + ] + } + ], + "index": 6, + "angle": 0, + "type": "image_caption" + } + ], + "index": 5 + }, + { + "bbox": [ + 104, + 262, + 504, + 307 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 262, + 504, + 307 + ], + "spans": [ + { + "bbox": [ + 104, + 262, + 504, + 307 + ], + "type": "text", + "content": "We conjecture that this is because the student predictions typically have a larger variance than teacher predictions (Wang et al., 2022) especially when multi-crop augmentation (Caron et al., 2020; 2021) is applied to the student. Similar experiments on Eq. (11) and/or " + }, + { + "bbox": [ + 104, + 262, + 504, + 307 + ], + "type": "inline_equation", + "content": "\\gamma = 0" + }, + { + "bbox": [ + 104, + 262, + 504, + 307 + ], + "type": "text", + "content": " variants of PROB also resulted in inferior performance (see Table 12)." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 104, + 319, + 357, + 484 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 319, + 357, + 484 + ], + "spans": [ + { + "bbox": [ + 104, + 319, + 357, + 484 + ], + "type": "text", + "content": "Analysis of " + }, + { + "bbox": [ + 104, + 319, + 357, + 484 + ], + "type": "inline_equation", + "content": "(h_{\\psi}, \\mu)" + }, + { + "bbox": [ + 104, + 319, + 357, + 484 + ], + "type": "text", + "content": "-ensemble diversity The previous experiments showed that the choice of ensemble weighting strategy has a large impact on performance. We hypothesize that this choice substantially impacts the diversity of the codebook ensembles. Since the codes in different heads may not be aligned, we align them by the similarity of their code assignment probabilities across different input images, which measures how the codes are effectively used to 'cluster' the data. See Appx. C.4 for detailed explanations and results. In Fig. 
4, we visualize the decay patterns of the similarity score between aligned codes (1.0 means the most similar) in a random pair of heads for each weighting strategy. ENT decays the fastest and UNIF decays the slowest, indicating that ENT learns the most diverse codebooks while UNIF is least diverse. This shows a positive correlation between the diversity of " + }, + { + "bbox": [ + 104, + 319, + 357, + 484 + ], + "type": "inline_equation", + "content": "(h_{\\psi}, \\mu)" + }, + { + "bbox": [ + 104, + 319, + 357, + 484 + ], + "type": "text", + "content": "-ensembles and the empirical" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 104, + 484, + 504, + 518 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 484, + 504, + 518 + ], + "spans": [ + { + "bbox": [ + 104, + 484, + 504, + 518 + ], + "type": "text", + "content": "performance of the ensemble strategies from Table 1. Finally, for UNIF, we find that different heads tend to learn the same semantic mappings even when randomly initialized; i.e., the code assignments in different heads become homogeneous up to permutation. See Fig. 8 for a visualization." + } + ] + } + ], + "index": 10 + }, + { + "type": "image", + "bbox": [ + 363, + 321, + 499, + 393 + ], + "blocks": [ + { + "bbox": [ + 363, + 321, + 499, + 393 + ], + "lines": [ + { + "bbox": [ + 363, + 321, + 499, + 393 + ], + "spans": [ + { + "bbox": [ + 363, + 321, + 499, + 393 + ], + "type": "image", + "image_path": "7a994fe5319f23357708dff945fb9964c4b86408e92bf1413733afc789d9d957.jpg" + } + ] + } + ], + "index": 11, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 362, + 401, + 506, + 479 + ], + "lines": [ + { + "bbox": [ + 362, + 401, + 506, + 479 + ], + "spans": [ + { + "bbox": [ + 362, + 401, + 506, + 479 + ], + "type": "text", + "content": "Figure 4: Visualization of code similarity. 
ENT learns the most diverse " + }, + { + "bbox": [ + 362, + 401, + 506, + 479 + ], + "type": "inline_equation", + "content": "(h_{\\psi},\\mu)" + }, + { + "bbox": [ + 362, + 401, + 506, + 479 + ], + "type": "text", + "content": "-ensembles reflected by the fastest decay of similarity scores between aligned codes in different heads. UNIF has low diversity between heads." + } + ] + } + ], + "index": 12, + "angle": 0, + "type": "image_caption" + } + ], + "index": 11 + }, + { + "bbox": [ + 104, + 529, + 506, + 564 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 529, + 506, + 564 + ], + "spans": [ + { + "bbox": [ + 104, + 529, + 506, + 564 + ], + "type": "text", + "content": "Number of " + }, + { + "bbox": [ + 104, + 529, + 506, + 564 + ], + "type": "inline_equation", + "content": "(h_{\\psi},\\mu)" + }, + { + "bbox": [ + 104, + 529, + 506, + 564 + ], + "type": "text", + "content": "-ensembles We study the effect of increasing the number of " + }, + { + "bbox": [ + 104, + 529, + 506, + 564 + ], + "type": "inline_equation", + "content": "(h_{\\psi},\\mu)" + }, + { + "bbox": [ + 104, + 529, + 506, + 564 + ], + "type": "text", + "content": "-ensembles " + }, + { + "bbox": [ + 104, + 529, + 506, + 564 + ], + "type": "inline_equation", + "content": "m" + }, + { + "bbox": [ + 104, + 529, + 506, + 564 + ], + "type": "text", + "content": " for ENT in Fig. 3a. Having more " + }, + { + "bbox": [ + 104, + 529, + 506, + 564 + ], + "type": "inline_equation", + "content": "(h_{\\psi},\\mu)" + }, + { + "bbox": [ + 104, + 529, + 506, + 564 + ], + "type": "text", + "content": "-ensembles boosts the performance until " + }, + { + "bbox": [ + 104, + 529, + 506, + 564 + ], + "type": "inline_equation", + "content": "m = 16" + }, + { + "bbox": [ + 104, + 529, + 506, + 564 + ], + "type": "text", + "content": ". 
Interestingly, using as few as " + }, + { + "bbox": [ + 104, + 529, + 506, + 564 + ], + "type": "inline_equation", + "content": "m = 2" + }, + { + "bbox": [ + 104, + 529, + 506, + 564 + ], + "type": "text", + "content": " heads already significantly improves over the baseline." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 104, + 575, + 504, + 632 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 575, + 504, + 632 + ], + "spans": [ + { + "bbox": [ + 104, + 575, + 504, + 632 + ], + "type": "text", + "content": "Effect of ENT temperature " + }, + { + "bbox": [ + 104, + 575, + 504, + 632 + ], + "type": "inline_equation", + "content": "\\gamma" + }, + { + "bbox": [ + 104, + 575, + 504, + 632 + ], + "type": "text", + "content": " Fig. 3b studies the effect of entropy weighting temperature " + }, + { + "bbox": [ + 104, + 575, + 504, + 632 + ], + "type": "inline_equation", + "content": "\\gamma" + }, + { + "bbox": [ + 104, + 575, + 504, + 632 + ], + "type": "text", + "content": " for different evaluation metrics. We observe that " + }, + { + "bbox": [ + 104, + 575, + 504, + 632 + ], + "type": "inline_equation", + "content": "\\gamma" + }, + { + "bbox": [ + 104, + 575, + 504, + 632 + ], + "type": "text", + "content": " has a relatively larger impact on few-shot evaluation performance. " + }, + { + "bbox": [ + 104, + 575, + 504, + 632 + ], + "type": "inline_equation", + "content": "\\gamma" + }, + { + "bbox": [ + 104, + 575, + 504, + 632 + ], + "type": "text", + "content": " should be neither too high nor too low: a high temperature leads to under-specialization (i.e. less diversity) of heads similar to UNIF (" + }, + { + "bbox": [ + 104, + 575, + 504, + 632 + ], + "type": "inline_equation", + "content": "\\gamma \\rightarrow \\infty" + }, + { + "bbox": [ + 104, + 575, + 504, + 632 + ], + "type": "text", + "content": ") and a low temperature may otherwise lead to over-specialization (i.e., only a single head is used for each input)." 
+ } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 104, + 643, + 506, + 733 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 643, + 506, + 733 + ], + "spans": [ + { + "bbox": [ + 104, + 643, + 506, + 733 + ], + "type": "text", + "content": "Comparison of different projection heads Our method linearly increases projection head parameters; thus, a natural question is: Is the gain of " + }, + { + "bbox": [ + 104, + 643, + 506, + 733 + ], + "type": "inline_equation", + "content": "(h_{\\psi},\\mu)" + }, + { + "bbox": [ + 104, + 643, + 506, + 733 + ], + "type": "text", + "content": "-ensembles due to the increased power (or number of parameters) in projection heads? We answer this question with an empirical study of non-ensembled projection heads. Specifically, we studied non-ensembled " + }, + { + "bbox": [ + 104, + 643, + 506, + 733 + ], + "type": "inline_equation", + "content": "h_{\\psi}" + }, + { + "bbox": [ + 104, + 643, + 506, + 733 + ], + "type": "text", + "content": " with (depth, width) searched over " + }, + { + "bbox": [ + 104, + 643, + 506, + 733 + ], + "type": "inline_equation", + "content": "\\{2,3,4\\} \\times \\{512,1024,2048,4096\\}" + }, + { + "bbox": [ + 104, + 643, + 506, + 733 + ], + "type": "text", + "content": " and measured the linear evaluation performance with different amounts of labeled data. In Fig. 3c, we plot the " + }, + { + "bbox": [ + 104, + 643, + 506, + 733 + ], + "type": "inline_equation", + "content": "1\\%" + }, + { + "bbox": [ + 104, + 643, + 506, + 733 + ], + "type": "text", + "content": " data evaluation result with respect to the number of parameters of the projection head, both for ensembled and non-ensembled baselines. See Appx. C.2 for detailed analysis and extra results for other metrics. 
Our key findings are:" + } + ] + } + ], + "index": 15 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "spans": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "type": "text", + "content": "Published as a conference paper at ICLR 2023" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 302, + 751, + 308, + 760 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 751, + 308, + 760 + ], + "spans": [ + { + "bbox": [ + 302, + 751, + 308, + 760 + ], + "type": "text", + "content": "7" + } + ] + } + ], + "index": 16 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 6 + }, + { + "para_blocks": [ + { + "bbox": [ + 104, + 63, + 506, + 168 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 63, + 506, + 168 + ], + "spans": [ + { + "bbox": [ + 104, + 63, + 506, + 168 + ], + "type": "text", + "content": "Table 2: Effectiveness of ensemble heads for DINO*/MSN* with different ViT models. Our ensemble heads consistently improve all downstream evaluation metrics on ImageNet-1K and achieve a new state-of-the-art for few-shot evaluations. For ViT-S/16, we report linear evaluation results probed from the last layer (left) and from the last 4 layers (right, following DINO). †We evaluated the few-shot settings using DINO's publicly-available pretrained weights in the cases those results were not reported in Caron et al. (2021). ‡MSN ViT-B/16 and ViT-B/8 are both trained for 600 epochs in Assran et al. (2022), whereas our models are trained for only 400, 300 epochs, respectively. For each architecture, we highlight the best DINO baseline and weighted ensemble in blue. For MSN, the corresponding highlights are yellow. The best results for each architecture and metric are bolded." 
+ } + ] + } + ], + "index": 1 + }, + { + "type": "table", + "bbox": [ + 106, + 173, + 504, + 504 + ], + "blocks": [ + { + "bbox": [ + 106, + 173, + 504, + 504 + ], + "lines": [ + { + "bbox": [ + 106, + 173, + 504, + 504 + ], + "spans": [ + { + "bbox": [ + 106, + 173, + 504, + 504 + ], + "type": "table", + "html": "
MethodFew-shotFull-data
125~13 (1%)k-NNLinear
ViT-S/16, 800 epochs
iBOT40.4 ± 0.550.8 ± 0.859.9 ± 0.265.975.2- / 77.9
DINO38.9 ± 0.448.9 ± 0.358.5 ± 0.164.574.576.1 / 77.0
DINO (Repro)39.1 ± 0.349.1 ± 0.558.6 ± 0.264.774.375.8 / 76.9
DINO* (Retuned)44.6 ± 0.253.6 ± 0.361.1 ± 0.266.274.175.8 / 76.9
MSN47.1 ± 0.155.8 ± 0.662.8 ± 0.367.2-- / 76.9
MSN* (Retuned)47.4 ± 0.156.3 ± 0.462.8 ± 0.267.173.375.6 / 76.6
DINO*-PROB (16)45.2 ± 0.454.9 ± 0.462.5 ± 0.267.375.176.5 / 77.6
DINO*-ENT (4)46.3 ± 0.155.5 ± 0.663.0 ± 0.367.574.876.2 / 77.2
DINO*-ENT (16)47.6 ± 0.1 ↑ 3.056.8 ± 0.564.0 ± 0.268.3 ↑ 2.175.376.8 / 77.7 ↑ 0.8
MSN*-ENT (2)48.8 ± 0.257.5 ± 0.564.0 ± 0.267.974.676.0 / 76.9
MSN*-ENT (8)50.1 ± 0.1 ↑ 2.758.9 ± 0.665.1 ± 0.368.7 ↑ 1.675.276.4 / 77.4 ↑ 0.8
ViT-B/16, 400 epochs
iBOT46.1 ± 0.356.2 ± 0.764.7 ± 0.369.777.179.5
DINO†43.0 ± 0.252.7 ± 0.561.8 ± 0.267.476.178.2
DINO* (Retuned)49.3 ± 0.158.1 ± 0.565.0 ± 0.369.176.078.5
MSN‡49.8 ± 0.258.9 ± 0.465.5 ± 0.3---
MSN* (Retuned)50.7 ± 0.159.2 ± 0.465.9 ± 0.269.774.778.1
DINO*-ENT (16)52.8 ± 0.1 ↑ 3.561.5 ± 0.467.6 ± 0.371.1 ↑ 2.077.179.1 ↑ 0.6
MSN*-ENT (8)53.7 ± 0.2 ↑ 3.062.4 ± 0.668.3 ± 0.271.5 ↑ 1.877.278.9 ↑ 0.8
ViT-B/8, 300 epochs
DINO†47.5 ± 0.257.3 ± 0.565.4 ± 0.370.377.480.1
DINO* (Retuned)49.5 ± 0.558.6 ± 0.665.9 ± 0.370.777.180.2
MSN‡55.1 ± 0.164.9 ± 0.771.6 ± 0.3---
MSN* (Retuned)51.9 ± 0.361.1 ± 0.467.7 ± 0.371.775.780.3
DINO*-ENT (16)55.0 ± 0.4 ↑ 5.563.4 ± 0.669.5 ± 0.373.4 ↑ 2.778.681.0 ↑ 0.8
MSN*-ENT (8)55.6 ± 0.2 ↑ 3.764.5 ± 0.570.3 ± 0.273.4 ↑ 1.778.980.8 ↑ 0.5
", + "image_path": "826bd0d1c3be03d85518b0f12bc7dced251218aae7e8ad7d1b39ff9fc91ddb2b.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "table_body" + } + ], + "index": 2 + }, + { + "bbox": [ + 110, + 517, + 506, + 632 + ], + "type": "list", + "angle": 0, + "index": 6, + "blocks": [ + { + "bbox": [ + 110, + 517, + 506, + 561 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 110, + 517, + 506, + 561 + ], + "spans": [ + { + "bbox": [ + 110, + 517, + 506, + 561 + ], + "type": "text", + "content": "- An overly powerful non-ensembled " + }, + { + "bbox": [ + 110, + 517, + 506, + 561 + ], + "type": "inline_equation", + "content": "h_{\\psi}" + }, + { + "bbox": [ + 110, + 517, + 506, + 561 + ], + "type": "text", + "content": " significantly hurts the label efficiency of learned representations. This result is similar to Chen et al. (2020b), which found that probing from intermediate layers of projection heads (which can be viewed as using a shallower head) could improve semi-supervised learning (" + }, + { + "bbox": [ + 110, + 517, + 506, + 561 + ], + "type": "inline_equation", + "content": "1\\% - 10\\%" + }, + { + "bbox": [ + 110, + 517, + 506, + 561 + ], + "type": "text", + "content": " labeled data) results." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 110, + 563, + 506, + 608 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 110, + 563, + 506, + 608 + ], + "spans": [ + { + "bbox": [ + 110, + 563, + 506, + 608 + ], + "type": "text", + "content": "- The default head (3/2048, denoted as 'Default') used in recent SSL methods (SimCLRv2, DINO, MSN, etc.) does not perform as well in few-shot evaluations, probably because it is selected by looking at full-data evaluation metrics. In contrast, our baseline (3/1024, denoted as 'Our baseline') significantly improves few-shot evaluation performance." 
+ } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 110, + 609, + 506, + 632 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 110, + 609, + 506, + 632 + ], + "spans": [ + { + "bbox": [ + 110, + 609, + 506, + 632 + ], + "type": "text", + "content": "- Our " + }, + { + "bbox": [ + 110, + 609, + 506, + 632 + ], + "type": "inline_equation", + "content": "(h_{\\psi}, \\mu)" + }, + { + "bbox": [ + 110, + 609, + 506, + 632 + ], + "type": "text", + "content": "-ensembles outperform all non-ensembled baselines and lead to consistent improvements in all evaluation metrics, despite the increase in parameters." + } + ] + } + ], + "index": 5 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 105, + 645, + 340, + 656 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 645, + 340, + 656 + ], + "spans": [ + { + "bbox": [ + 105, + 645, + 340, + 656 + ], + "type": "text", + "content": "5.2 IMPROVING SOTA RESULTS WITH ENSEMBLING" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 104, + 665, + 506, + 733 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 665, + 506, + 733 + ], + "spans": [ + { + "bbox": [ + 104, + 665, + 506, + 733 + ], + "type": "text", + "content": "Next we apply " + }, + { + "bbox": [ + 104, + 665, + 506, + 733 + ], + "type": "inline_equation", + "content": "(h_{\\psi},\\mu)" + }, + { + "bbox": [ + 104, + 665, + 506, + 733 + ], + "type": "text", + "content": "-ensembles to DINO* and MSN* and compare with the state-of-the-art results. We experimented with model architectures ViT-S/16, ViT-B/16, ViT-B/8 trained for 800, 400, 300 epochs respectively following Caron et al. (2021). We include both the published results and our retuned versions to ensure strong baselines. For clarity, we denote our method as “{baseline}-{ensemble strategy} (# of heads)”, e.g., DINO*-ENT (4). We tuned both baselines and our methods for all architectures. We report the best hyperparameters for all models in Appx. 
B.2.2." + } + ] + } + ], + "index": 8 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "spans": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "type": "text", + "content": "Published as a conference paper at ICLR 2023" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 302, + 751, + 309, + 760 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 751, + 309, + 760 + ], + "spans": [ + { + "bbox": [ + 302, + 751, + 309, + 760 + ], + "type": "text", + "content": "8" + } + ] + } + ], + "index": 9 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 7 + }, + { + "para_blocks": [ + { + "bbox": [ + 104, + 82, + 506, + 140 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 82, + 506, + 140 + ], + "spans": [ + { + "bbox": [ + 104, + 82, + 506, + 140 + ], + "type": "text", + "content": "Table 2 compares the results of " + }, + { + "bbox": [ + 104, + 82, + 506, + 140 + ], + "type": "inline_equation", + "content": "(h_{\\psi},\\mu)" + }, + { + "bbox": [ + 104, + 82, + 506, + 140 + ], + "type": "text", + "content": "-ensembles and baselines. We find that " + }, + { + "bbox": [ + 104, + 82, + 506, + 140 + ], + "type": "inline_equation", + "content": "(h_{\\psi},\\mu)" + }, + { + "bbox": [ + 104, + 82, + 506, + 140 + ], + "type": "text", + "content": "-ensembles with ENT consistently improve all evaluation metrics (full-data, few-shot) across both SSL methods (DINO*, MSN*) and all architectures (ViT-S/16, ViT-B/16, ViT-B/8) over their non-ensembled counterparts. The gains in few-shot evaluation are particularly substantial, providing a new state-of-the-art for ImageNet-1K evaluation from ImageNet pretraining." 
+ } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 105, + 150, + 328, + 163 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 150, + 328, + 163 + ], + "spans": [ + { + "bbox": [ + 105, + 150, + 328, + 163 + ], + "type": "text", + "content": "5.3 MORE EVALUATIONS FOR " + }, + { + "bbox": [ + 105, + 150, + 328, + 163 + ], + "type": "inline_equation", + "content": "(h_{\\psi},\\mu)" + }, + { + "bbox": [ + 105, + 150, + 328, + 163 + ], + "type": "text", + "content": " -ENSEMBLES" + } + ] + } + ], + "index": 2 + }, + { + "type": "table", + "bbox": [ + 107, + 196, + 504, + 255 + ], + "blocks": [ + { + "bbox": [ + 104, + 170, + 504, + 194 + ], + "lines": [ + { + "bbox": [ + 104, + 170, + 504, + 194 + ], + "spans": [ + { + "bbox": [ + 104, + 170, + 504, + 194 + ], + "type": "text", + "content": "Table 3: Comparison of transfer performance. ViT-S/16 is used. Our ensemble heads lead to consistent improvements for " + }, + { + "bbox": [ + 104, + 170, + 504, + 194 + ], + "type": "inline_equation", + "content": "\\mathrm{MSN^{*}}" + }, + { + "bbox": [ + 104, + 170, + 504, + 194 + ], + "type": "text", + "content": " and comparable results for DINO*." + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 107, + 196, + 504, + 255 + ], + "lines": [ + { + "bbox": [ + 107, + 196, + 504, + 255 + ], + "spans": [ + { + "bbox": [ + 107, + 196, + 504, + 255 + ], + "type": "table", + "html": "
Food101CIFAR10CIFAR100SUN397CarsDTDPetsCaltech-101FlowersAvg.
DINO*78.493.881.066.166.774.692.094.994.482.43
DINO*-ENT (16)79.193.881.466.566.874.992.894.693.982.64
MSN*77.793.179.864.663.372.292.494.792.781.17
MSN*-ENT (8)78.493.981.165.268.073.293.195.492.882.34
", + "image_path": "cdbaf3dca865c69b60f19eff8a4d8982561ce030cfab7252d09568ff127d58c0.jpg" + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "table_body" + } + ], + "index": 4 + }, + { + "bbox": [ + 104, + 256, + 504, + 312 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 256, + 504, + 312 + ], + "spans": [ + { + "bbox": [ + 104, + 256, + 504, + 312 + ], + "type": "text", + "content": "Transfer learning In Table 3, we compare the transfer learning performance of " + }, + { + "bbox": [ + 104, + 256, + 504, + 312 + ], + "type": "inline_equation", + "content": "(h_{\\psi}, \\mu)" + }, + { + "bbox": [ + 104, + 256, + 504, + 312 + ], + "type": "text", + "content": "-ensembles and non-ensembled baselines. We used ViT-S/16 models trained on ImageNet-1K for 800 epochs and evaluated on 9 natural downstream datasets from Chen et al. (2020a) with linear evaluation (details in Appx. B.3). " + }, + { + "bbox": [ + 104, + 256, + 504, + 312 + ], + "type": "inline_equation", + "content": "(h_{\\psi}, \\mu)" + }, + { + "bbox": [ + 104, + 256, + 504, + 312 + ], + "type": "text", + "content": "-ensembles lead to consistent improvements in transfer performance for " + }, + { + "bbox": [ + 104, + 256, + 504, + 312 + ], + "type": "inline_equation", + "content": "\\mathrm{MSN}^*" + }, + { + "bbox": [ + 104, + 256, + 504, + 312 + ], + "type": "text", + "content": " and comparable results for DINO*." 
+ } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 104, + 317, + 359, + 417 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 317, + 359, + 417 + ], + "spans": [ + { + "bbox": [ + 104, + 317, + 359, + 417 + ], + "type": "text", + "content": "Training overhead In Table 4, we benchmark the computational overhead of " + }, + { + "bbox": [ + 104, + 317, + 359, + 417 + ], + "type": "inline_equation", + "content": "(h_{\\psi}, \\mu)" + }, + { + "bbox": [ + 104, + 317, + 359, + 417 + ], + "type": "text", + "content": "-ensembles at training time. We used a medium-sized model, DINO* with ViT-B/16, trained with the same setting used in all of our experiments. We benchmarked the wall-clock time and peak memory on 128 TPUv3 cores. " + }, + { + "bbox": [ + 104, + 317, + 359, + 417 + ], + "type": "inline_equation", + "content": "(h_{\\psi}, \\mu)" + }, + { + "bbox": [ + 104, + 317, + 359, + 417 + ], + "type": "text", + "content": "-ensembling is relatively cheap in terms of training cost because the ensembled parts typically account for a small portion of total computation, especially when the backbone encoder is more computationally expensive (e.g., ViT-B/8)." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 105, + 417, + 492, + 429 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 417, + 492, + 429 + ], + "spans": [ + { + "bbox": [ + 105, + 417, + 492, + 429 + ], + "type": "text", + "content": "Again, we emphasize that there is no evaluation overhead when " + }, + { + "bbox": [ + 105, + 417, + 492, + 429 + ], + "type": "inline_equation", + "content": "(h_{\\psi},\\mu)" + }, + { + "bbox": [ + 105, + 417, + 492, + 429 + ], + "type": "text", + "content": "-ensembles are removed." 
+ } + ] + } + ], + "index": 7 + }, + { + "type": "table", + "bbox": [ + 365, + 362, + 503, + 416 + ], + "blocks": [ + { + "bbox": [ + 362, + 315, + 506, + 359 + ], + "lines": [ + { + "bbox": [ + 362, + 315, + 506, + 359 + ], + "spans": [ + { + "bbox": [ + 362, + 315, + 506, + 359 + ], + "type": "text", + "content": "Table 4: Training overhead. Wall-clock time and peak memory per core for training with different numbers of ensembles." + } + ] + } + ], + "index": 8, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 365, + 362, + 503, + 416 + ], + "lines": [ + { + "bbox": [ + 365, + 362, + 503, + 416 + ], + "spans": [ + { + "bbox": [ + 365, + 362, + 503, + 416 + ], + "type": "table", + "html": "
<html><body><table><tr><td>m</td><td>Wall Time</td><td>Peak Memory</td></tr>
<tr><td>1</td><td>5.81h</td><td>5.25G</td></tr>
<tr><td>4</td><td>5.91h</td><td>5.40G</td></tr>
<tr><td>16</td><td>6.34h</td><td>5.89G</td></tr>
</table></body></html>
", + "image_path": "04e9dba9ecc945cde833cf63b0ef7a8ad64ecb50213bf25b413a96769eacacab.jpg" + } + ] + } + ], + "index": 9, + "angle": 0, + "type": "table_body" + } + ], + "index": 9 + }, + { + "bbox": [ + 105, + 443, + 276, + 455 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 443, + 276, + 455 + ], + "spans": [ + { + "bbox": [ + 105, + 443, + 276, + 455 + ], + "type": "text", + "content": "6 CONCLUSION & DISCUSSION" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 104, + 467, + 506, + 556 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 467, + 506, + 556 + ], + "spans": [ + { + "bbox": [ + 104, + 467, + 506, + 556 + ], + "type": "text", + "content": "We introduced an efficient ensemble method for SSL where multiple projection heads are ensembled to effectively improve representation learning. We showed that with carefully designed ensemble losses that induce diversity over ensemble heads, our method significantly improves recent state-of-the-art SSL methods in various evaluation metrics, particularly for few-shot evaluation. Although ensembling is a well-known technique for improving evaluation performance of a single model, we demonstrated that, for models with throw-away parts such as the projection heads in SSL, ensembling these parts can improve the learning of the non-ensemble representation encoder and also achieve significant gains in downstream evaluation without introducing extra evaluation cost." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 104, + 561, + 506, + 639 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 561, + 506, + 639 + ], + "spans": [ + { + "bbox": [ + 104, + 561, + 506, + 639 + ], + "type": "text", + "content": "Our ensemble method is applicable to many SSL methods beyond the two we explored. 
For example, one may consider generalizations that ensemble the projection and/or prediction heads of BYOL (Grill et al., 2020) or SimSiam (Chen & He, 2021), or the decoders of MAE (He et al., 2022), though the latter introduces more training cost. Our weighted ensemble losses can also be applied as long as the original loss can be reformulated as MLE for some " + }, + { + "bbox": [ + 104, + 561, + 506, + 639 + ], + "type": "inline_equation", + "content": "t" + }, + { + "bbox": [ + 104, + 561, + 506, + 639 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 104, + 561, + 506, + 639 + ], + "type": "inline_equation", + "content": "s" + }, + { + "bbox": [ + 104, + 561, + 506, + 639 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 104, + 561, + 506, + 639 + ], + "type": "inline_equation", + "content": "Y" + }, + { + "bbox": [ + 104, + 561, + 506, + 639 + ], + "type": "text", + "content": ", e.g., the MSE loss in these methods is MLE under multivariate normal distributions. We hope our results and insights will motivate future work extending our method or exploring other ensemble techniques for SSL." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 104, + 643, + 506, + 733 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 643, + 506, + 733 + ], + "spans": [ + { + "bbox": [ + 104, + 643, + 506, + 733 + ], + "type": "text", + "content": "In future work, we also hope to remove three limitations of our setting. First, considering ensembling strategies that include the representation encoder, " + }, + { + "bbox": [ + 104, + 643, + 506, + 733 + ], + "type": "inline_equation", + "content": "r_{\\omega}" + }, + { + "bbox": [ + 104, + 643, + 506, + 733 + ], + "type": "text", + "content": ", may lead to further improvements in the performance of weighted ensemble SSL, at the cost of increased computation requirements during both training and evaluation. 
Second, considering heterogeneous architectures in the ensemble may further improve the learned representations (e.g., mixing Transformers with ResNets), whether the heterogeneity is in " + }, + { + "bbox": [ + 104, + 643, + 506, + 733 + ], + "type": "inline_equation", + "content": "r_{\\omega}, h_{\\psi}" + }, + { + "bbox": [ + 104, + 643, + 506, + 733 + ], + "type": "text", + "content": ", or both. Third, considering other possibilities for " + }, + { + "bbox": [ + 104, + 643, + 506, + 733 + ], + "type": "inline_equation", + "content": "f_{ijy}" + }, + { + "bbox": [ + 104, + 643, + 506, + 733 + ], + "type": "text", + "content": " may also reveal performance gains and improve our understanding of the critical aspects that lead to good SSL representations, similar to what we learned about the importance of ensemble diversity." + } + ] + } + ], + "index": 13 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "spans": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "type": "text", + "content": "Published as a conference paper at ICLR 2023" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 302, + 751, + 309, + 760 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 751, + 309, + 760 + ], + "spans": [ + { + "bbox": [ + 302, + 751, + 309, + 760 + ], + "type": "text", + "content": "9" + } + ] + } + ], + "index": 14 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 8 + }, + { + "para_blocks": [ + { + "bbox": [ + 105, + 83, + 201, + 94 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 83, + 201, + 94 + ], + "spans": [ + { + "bbox": [ + 105, + 83, + 201, + 94 + ], + "type": "text", + "content": "ACKNOWLEDGMENTS" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 105, + 101, + 506, + 136 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 101, + 506, + 136 + ], + 
"spans": [ + { + "bbox": [ + 105, + 101, + 506, + 136 + ], + "type": "text", + "content": "We would like to thank Mathilde Caron and Mahmoud Assran for their extensive help in reproducing DINO and MSN baselines. We would also like to thank Ting Chen and Yann Dubois for their helpful discussions and encouragement." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 105, + 146, + 248, + 158 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 146, + 248, + 158 + ], + "spans": [ + { + "bbox": [ + 105, + 146, + 248, + 158 + ], + "type": "text", + "content": "REPRODUCIBILITY STATEMENT" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 105, + 165, + 506, + 211 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 165, + 506, + 211 + ], + "spans": [ + { + "bbox": [ + 105, + 165, + 506, + 211 + ], + "type": "text", + "content": "We include detailed derivations for all our proposed losses in Appx. D. We report experimental details in Appx. B, including the implementation details for reproducing the baselines (Appx. B.1), training and evaluating our methods (Appx. B.2.1), and all hyper-parameters (Appx. B.2.2) used in our experiments for reproducing our results in Table 2." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 105, + 221, + 165, + 232 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 221, + 165, + 232 + ], + "spans": [ + { + "bbox": [ + 105, + 221, + 165, + 232 + ], + "type": "text", + "content": "REFERENCES" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 105, + 240, + 506, + 732 + ], + "type": "list", + "angle": 0, + "index": 20, + "blocks": [ + { + "bbox": [ + 105, + 240, + 505, + 262 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 240, + 505, + 262 + ], + "spans": [ + { + "bbox": [ + 105, + 240, + 505, + 262 + ], + "type": "text", + "content": "TensorFlow Datasets, a collection of ready-to-use datasets. https://www.tensorflow.org/datasets." 
+ } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 105, + 269, + 506, + 303 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 269, + 506, + 303 + ], + "spans": [ + { + "bbox": [ + 105, + 269, + 506, + 303 + ], + "type": "text", + "content": "Sanjeev Arora, Hrishikesh Khandeparkar, Mikhail Khodak, Orestis Plevrakis, and Nikunj Saunshi. A theoretical analysis of contrastive unsupervised representation learning. arXiv preprint arXiv:1902.09229, 2019." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 105, + 309, + 504, + 333 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 309, + 504, + 333 + ], + "spans": [ + { + "bbox": [ + 105, + 309, + 504, + 333 + ], + "type": "text", + "content": "Yuki Markus Asano, Christian Rupprecht, and Andrea Vedaldi. Self-labelling via simultaneous clustering and representation learning. arXiv preprint arXiv:1911.05371, 2019." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 105, + 338, + 506, + 373 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 338, + 506, + 373 + ], + "spans": [ + { + "bbox": [ + 105, + 338, + 506, + 373 + ], + "type": "text", + "content": "Mahmoud Assran, Mathilde Caron, Ishan Misra, Piotr Bojanowski, Florian Bordes, Pascal Vincent, Armand Joulin, Michael Rabbat, and Nicolas Ballas. Masked siamese networks for label-efficient learning. arXiv preprint arXiv:2204.07141, 2022." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 105, + 379, + 504, + 403 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 379, + 504, + 403 + ], + "spans": [ + { + "bbox": [ + 105, + 379, + 504, + 403 + ], + "type": "text", + "content": "Philip Bachman, R Devon Hjelm, and William Buchwalter. Learning representations by maximizing mutual information across views. Advances in neural information processing systems, 32, 2019." 
+ } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 105, + 408, + 504, + 432 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 408, + 504, + 432 + ], + "spans": [ + { + "bbox": [ + 105, + 408, + 504, + 432 + ], + "type": "text", + "content": "Hangbo Bao, Li Dong, and Furu Wei. Beit: Bert pre-training of image transformers. arXiv preprint arXiv:2106.08254, 2021." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 105, + 437, + 504, + 462 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 437, + 504, + 462 + ], + "spans": [ + { + "bbox": [ + 105, + 437, + 504, + 462 + ], + "type": "text", + "content": "Adrien Bardes, Jean Ponce, and Yann LeCun. Vicreg: Variance-invariance-covariance regularization for self-supervised learning. arXiv preprint arXiv:2105.04906, 2021." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 105, + 468, + 506, + 501 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 468, + 506, + 501 + ], + "spans": [ + { + "bbox": [ + 105, + 468, + 506, + 501 + ], + "type": "text", + "content": "Lukas Bossard, Matthieu Guillaumin, and Luc Van Gool. Food-101 - mining discriminative components with random forests. In European conference on computer vision, pp. 446-461. Springer, 2014." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 105, + 507, + 504, + 531 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 507, + 504, + 531 + ], + "spans": [ + { + "bbox": [ + 105, + 507, + 504, + 531 + ], + "type": "text", + "content": "Yuri Burda, Roger B Grosse, and Ruslan Salakhutdinov. Importance weighted autoencoders. In ICLR, 2016." 
+ } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 105, + 536, + 506, + 571 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 536, + 506, + 571 + ], + "spans": [ + { + "bbox": [ + 105, + 536, + 506, + 571 + ], + "type": "text", + "content": "Mathilde Caron, Piotr Bojanowski, Armand Joulin, and Matthijs Douze. Deep clustering for unsupervised learning of visual features. In Proceedings of the European conference on computer vision (ECCV), pp. 132-149, 2018." + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 105, + 577, + 506, + 612 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 577, + 506, + 612 + ], + "spans": [ + { + "bbox": [ + 105, + 577, + 506, + 612 + ], + "type": "text", + "content": "Mathilde Caron, Ishan Misra, Julien Mairal, Priya Goyal, Piotr Bojanowski, and Armand Joulin. Unsupervised learning of visual features by contrasting cluster assignments. Advances in Neural Information Processing Systems, 33:9912-9924, 2020." + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 105, + 617, + 504, + 652 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 617, + 504, + 652 + ], + "spans": [ + { + "bbox": [ + 105, + 617, + 504, + 652 + ], + "type": "text", + "content": "Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jégou, Julien Mairal, Piotr Bojanowski, and Armand Joulin. Emerging properties in self-supervised vision transformers. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 9650-9660, 2021." + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 105, + 658, + 506, + 692 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 658, + 506, + 692 + ], + "spans": [ + { + "bbox": [ + 105, + 658, + 506, + 692 + ], + "type": "text", + "content": "Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple framework for contrastive learning of visual representations. 
In International conference on machine learning, pp. 1597-1607. PMLR, 2020a." + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 105, + 697, + 504, + 732 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 697, + 504, + 732 + ], + "spans": [ + { + "bbox": [ + 105, + 697, + 504, + 732 + ], + "type": "text", + "content": "Ting Chen, Simon Kornblith, Kevin Swersky, Mohammad Norouzi, and Geoffrey E Hinton. Big self-supervised models are strong semi-supervised learners. Advances in neural information processing systems, 33:22243-22255, 2020b." + } + ] + } + ], + "index": 19 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "spans": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "type": "text", + "content": "Published as a conference paper at ICLR 2023" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 300, + 751, + 311, + 761 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 300, + 751, + 311, + 761 + ], + "spans": [ + { + "bbox": [ + 300, + 751, + 311, + 761 + ], + "type": "text", + "content": "10" + } + ] + } + ], + "index": 21 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 9 + }, + { + "para_blocks": [ + { + "bbox": [ + 105, + 81, + 507, + 732 + ], + "type": "list", + "angle": 0, + "index": 19, + "blocks": [ + { + "bbox": [ + 105, + 81, + 507, + 106 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 81, + 507, + 106 + ], + "spans": [ + { + "bbox": [ + 105, + 81, + 507, + 106 + ], + "type": "text", + "content": "Xinlei Chen and Kaiming He. Exploring simple siamese representation learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 15750-15758, 2021." 
+ } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 105, + 110, + 507, + 146 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 110, + 507, + 146 + ], + "spans": [ + { + "bbox": [ + 105, + 110, + 507, + 146 + ], + "type": "text", + "content": "Mircea Cimpoi, Subhransu Maji, Iasonas Kokkinos, Sammy Mohamed, and Andrea Vedaldi. Describing textures in the wild. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 3606-3613, 2014." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 105, + 150, + 503, + 164 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 150, + 503, + 164 + ], + "spans": [ + { + "bbox": [ + 105, + 150, + 503, + 164 + ], + "type": "text", + "content": "Thomas M Cover and Joy A Thomas. Elements of Information Theory. John Wiley & Sons, 1999." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 105, + 169, + 505, + 194 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 169, + 505, + 194 + ], + "spans": [ + { + "bbox": [ + 105, + 169, + 505, + 194 + ], + "type": "text", + "content": "Marco Cuturi. Sinkhorn distances: Lightspeed computation of optimal transport. Advances in neural information processing systems, 26, 2013." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 105, + 198, + 507, + 233 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 198, + 507, + 233 + ], + "spans": [ + { + "bbox": [ + 105, + 198, + 507, + 233 + ], + "type": "text", + "content": "Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition, pp. 248-255. IEEE, 2009." 
+ } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 105, + 238, + 505, + 263 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 238, + 505, + 263 + ], + "spans": [ + { + "bbox": [ + 105, + 238, + 505, + 263 + ], + "type": "text", + "content": "Thomas G Dietterich. Ensemble methods in machine learning. In International workshop on multiple classifier systems, pp. 1-15. Springer, 2000." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 105, + 267, + 505, + 292 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 267, + 505, + 292 + ], + "spans": [ + { + "bbox": [ + 105, + 267, + 505, + 292 + ], + "type": "text", + "content": "Onur Dikmen, Zhirong Yang, and Erkki Oja. Learning the information divergence. IEEE transactions on pattern analysis and machine intelligence, 37(7):1442-1454, 2014." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 105, + 297, + 505, + 342 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 297, + 505, + 342 + ], + "spans": [ + { + "bbox": [ + 105, + 297, + 505, + 342 + ], + "type": "text", + "content": "Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. In International Conference on Learning Representations, 2020." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 105, + 348, + 505, + 373 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 348, + 505, + 373 + ], + "spans": [ + { + "bbox": [ + 105, + 348, + 505, + 373 + ], + "type": "text", + "content": "Sever S Dragomir. 
A generalization of " + }, + { + "bbox": [ + 105, + 348, + 505, + 373 + ], + "type": "inline_equation", + "content": "f" + }, + { + "bbox": [ + 105, + 348, + 505, + 373 + ], + "type": "text", + "content": "-divergence measure to convex functions defined on linear spaces. Communications in Mathematical Analysis, 15(2):1-14, 2013." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 105, + 376, + 505, + 412 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 376, + 505, + 412 + ], + "spans": [ + { + "bbox": [ + 105, + 376, + 505, + 412 + ], + "type": "text", + "content": "Michael Dusenberry, Ghassen Jerfel, Yeming Wen, Yian Ma, Jasper Snoek, Katherine Heller, Balaji Lakshminarayanan, and Dustin Tran. Efficient and scalable bayesian neural nets with rank-1 factors. In International conference on machine learning, pp. 2782-2792. PMLR, 2020." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 105, + 417, + 505, + 452 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 417, + 505, + 452 + ], + "spans": [ + { + "bbox": [ + 105, + 417, + 505, + 452 + ], + "type": "text", + "content": "Li Fei-Fei, Rob Fergus, and Pietro Perona. Learning generative visual models from few training examples: An incremental bayesian approach tested on 101 object categories. In 2004 conference on computer vision and pattern recognition workshop, pp. 178-178. IEEE, 2004." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 105, + 457, + 505, + 492 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 457, + 505, + 492 + ], + "spans": [ + { + "bbox": [ + 105, + 457, + 505, + 492 + ], + "type": "text", + "content": "Timur Garipov, Pavel Izmailov, Dmitrii Podoprikhin, Dmitry P Vetrov, and Andrew G Wilson. Loss surfaces, mode connectivity, and fast ensembling of dnns. Advances in neural information processing systems, 31, 2018." 
+ } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 105, + 497, + 505, + 533 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 497, + 505, + 533 + ], + "spans": [ + { + "bbox": [ + 105, + 497, + 505, + 533 + ], + "type": "text", + "content": "Raphael Gontijo-Lopes, Yann Dauphin, and Ekin Dogus Cubuk. No one representation to rule them all: Overlapping features of training methods. In International Conference on Learning Representations, 2022. URL https://openreview.net/forum?id=BK-4qbGgIE3." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 105, + 537, + 505, + 583 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 537, + 505, + 583 + ], + "spans": [ + { + "bbox": [ + 105, + 537, + 505, + 583 + ], + "type": "text", + "content": "Jean-Bastien Grill, Florian Strub, Florent Altché, Corentin Tallec, Pierre Richemond, Elena Buchatskaya, Carl Doersch, Bernardo Avila Pires, Zhaohan Guo, Mohammad Gheshlaghi Azar, et al. Bootstrap your own latent - a new approach to self-supervised learning. Advances in neural information processing systems, 33:21271-21284, 2020." + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 105, + 589, + 505, + 633 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 589, + 505, + 633 + ], + "spans": [ + { + "bbox": [ + 105, + 589, + 505, + 633 + ], + "type": "text", + "content": "Michael Gutmann and Aapo Hyvärinen. Noise-contrastive estimation: A new estimation principle for unnormalized statistical models. In Proceedings of the thirteenth international conference on artificial intelligence and statistics, pp. 297-304. JMLR Workshop and Conference Proceedings, 2010." 
+ } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 105, + 639, + 505, + 664 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 639, + 505, + 664 + ], + "spans": [ + { + "bbox": [ + 105, + 639, + 505, + 664 + ], + "type": "text", + "content": "Abner Guzman-Rivera, Dhruv Batra, and Pushmeet Kohli. Multiple choice learning: Learning to produce multiple structured outputs. Advances in neural information processing systems, 25, 2012." + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 105, + 670, + 505, + 693 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 670, + 505, + 693 + ], + "spans": [ + { + "bbox": [ + 105, + 670, + 505, + 693 + ], + "type": "text", + "content": "Lars Kai Hansen and Peter Salamon. Neural network ensembles. IEEE transactions on pattern analysis and machine intelligence, 12(10):993-1001, 1990." + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 105, + 698, + 505, + 732 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 698, + 505, + 732 + ], + "spans": [ + { + "bbox": [ + 105, + 698, + 505, + 732 + ], + "type": "text", + "content": "Marton Havasi, Rodolphe Jenatton, Stanislav Fort, Jeremiah Zhe Liu, Jasper Snoek, Balaji Lakshminarayanan, Andrew M Dai, and Dustin Tran. Training independent subnetworks for robust prediction. arXiv preprint arXiv:2010.06610, 2020." 
+ } + ] + } + ], + "index": 18 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "spans": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "type": "text", + "content": "Published as a conference paper at ICLR 2023" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 299, + 750, + 310, + 761 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 299, + 750, + 310, + 761 + ], + "spans": [ + { + "bbox": [ + 299, + 750, + 310, + 761 + ], + "type": "text", + "content": "11" + } + ] + } + ], + "index": 20 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 10 + }, + { + "para_blocks": [ + { + "bbox": [ + 105, + 81, + 506, + 732 + ], + "type": "list", + "angle": 0, + "index": 19, + "blocks": [ + { + "bbox": [ + 107, + 81, + 505, + 117 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 81, + 505, + 117 + ], + "spans": [ + { + "bbox": [ + 107, + 81, + 505, + 117 + ], + "type": "text", + "content": "Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross Girshick. Momentum contrast for unsupervised visual representation learning. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 9729-9738, 2020." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 105, + 122, + 504, + 158 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 122, + 504, + 158 + ], + "spans": [ + { + "bbox": [ + 105, + 122, + 504, + 158 + ], + "type": "text", + "content": "Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, and Ross Girshick. Masked autoencoders are scalable vision learners. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 16000-16009, 2022." 
+ } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 105, + 163, + 504, + 198 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 163, + 504, + 198 + ], + "spans": [ + { + "bbox": [ + 105, + 163, + 504, + 198 + ], + "type": "text", + "content": "R Devon Hjelm, Alex Fedorov, Samuel Lavoie-Marchildon, Karan Grewal, Phil Bachman, Adam Trischler, and Yoshua Bengio. Learning deep representations by mutual information estimation and maximization. arXiv preprint arXiv:1808.06670, 2018." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 105, + 204, + 504, + 228 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 204, + 504, + 228 + ], + "spans": [ + { + "bbox": [ + 105, + 204, + 504, + 228 + ], + "type": "text", + "content": "Gao Huang, Yu Sun, Zhuang Liu, Daniel Sedra, and Kilian Q Weinberger. Deep networks with stochastic depth. In European conference on computer vision, pp. 646-661. Springer, 2016." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 105, + 233, + 506, + 258 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 233, + 506, + 258 + ], + "spans": [ + { + "bbox": [ + 105, + 233, + 506, + 258 + ], + "type": "text", + "content": "Gao Huang, Yixuan Li, Geoff Pleiss, Zhuang Liu, John E Hopcroft, and Kilian Q Weinberger. Snapshot ensembles: Train 1, get m for free. arXiv preprint arXiv:1704.00109, 2017." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 105, + 263, + 506, + 298 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 263, + 506, + 298 + ], + "spans": [ + { + "bbox": [ + 105, + 263, + 506, + 298 + ], + "type": "text", + "content": "Jonathan Krause, Michael Stark, Jia Deng, and Li Fei-Fei. 3d object representations for fine-grained categorization. In Proceedings of the IEEE international conference on computer vision workshops, pp. 554-561, 2013." 
+ } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 105, + 304, + 506, + 318 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 304, + 506, + 318 + ], + "spans": [ + { + "bbox": [ + 105, + 304, + 506, + 318 + ], + "type": "text", + "content": "Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. 2009." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 105, + 323, + 504, + 347 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 323, + 504, + 347 + ], + "spans": [ + { + "bbox": [ + 105, + 323, + 504, + 347 + ], + "type": "text", + "content": "Harold W Kuhn. The Hungarian method for the assignment problem. Naval research logistics quarterly, 2(1-2):83-97, 1955." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 105, + 353, + 504, + 376 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 353, + 504, + 376 + ], + "spans": [ + { + "bbox": [ + 105, + 353, + 504, + 376 + ], + "type": "text", + "content": "Samuli Laine and Timo Aila. Temporal ensembling for semi-supervised learning. arXiv preprint arXiv:1610.02242, 2016." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 105, + 382, + 506, + 417 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 382, + 506, + 417 + ], + "spans": [ + { + "bbox": [ + 105, + 382, + 506, + 417 + ], + "type": "text", + "content": "Balaji Lakshminarayanan, Alexander Pritzel, and Charles Blundell. Simple and scalable predictive uncertainty estimation using deep ensembles. Advances in neural information processing systems, 30, 2017." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 105, + 423, + 506, + 457 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 423, + 506, + 457 + ], + "spans": [ + { + "bbox": [ + 105, + 423, + 506, + 457 + ], + "type": "text", + "content": "Kuang-Huei Lee, Anurag Arnab, Sergio Guadarrama, John Canny, and Ian Fischer. 
Compressive visual representations. Advances in Neural Information Processing Systems, 34:19538-19552, 2021." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 105, + 464, + 504, + 498 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 464, + 504, + 498 + ], + "spans": [ + { + "bbox": [ + 105, + 464, + 504, + 498 + ], + "type": "text", + "content": "Stefan Lee, Senthil Purushwalkam, Michael Cogswell, David Crandall, and Dhruv Batra. Why m heads are better than one: Training a diverse ensemble of deep networks. arXiv preprint arXiv:1511.06314, 2015." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 105, + 505, + 506, + 529 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 505, + 506, + 529 + ], + "spans": [ + { + "bbox": [ + 105, + 505, + 506, + 529 + ], + "type": "text", + "content": "Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. In International Conference on Learning Representations, 2018." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 105, + 535, + 504, + 570 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 535, + 504, + 570 + ], + "spans": [ + { + "bbox": [ + 105, + 535, + 504, + 570 + ], + "type": "text", + "content": "Warren R Morningstar, Alex Alemi, and Joshua V Dillon. PACm-Bayes: Narrowing the empirical risk gap in the misspecified Bayesian regime. In International Conference on Artificial Intelligence and Statistics, pp. 8270-8298. PMLR, 2022." + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 105, + 575, + 506, + 599 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 575, + 506, + 599 + ], + "spans": [ + { + "bbox": [ + 105, + 575, + 506, + 599 + ], + "type": "text", + "content": "Wai Ho Mow. A tight upper bound on discrete entropy. IEEE Transactions on Information Theory, 44(2):775-778, 1998." 
+ } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 105, + 605, + 504, + 629 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 605, + 504, + 629 + ], + "spans": [ + { + "bbox": [ + 105, + 605, + 504, + 629 + ], + "type": "text", + "content": "Yurii Nesterov. A method for solving the convex programming problem with convergence rate " + }, + { + "bbox": [ + 105, + 605, + 504, + 629 + ], + "type": "inline_equation", + "content": "O(1/k^2)" + }, + { + "bbox": [ + 105, + 605, + 504, + 629 + ], + "type": "text", + "content": ". Proceedings of the USSR Academy of Sciences, 269:543-547, 1983." + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 105, + 635, + 506, + 670 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 635, + 506, + 670 + ], + "spans": [ + { + "bbox": [ + 105, + 635, + 506, + 670 + ], + "type": "text", + "content": "Maria-Elena Nilsback and Andrew Zisserman. Automated flower classification over a large number of classes. In 2008 Sixth Indian Conference on Computer Vision, Graphics & Image Processing, pp. 722-729. IEEE, 2008." + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 105, + 676, + 506, + 732 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 676, + 506, + 732 + ], + "spans": [ + { + "bbox": [ + 105, + 676, + 506, + 732 + ], + "type": "text", + "content": "Kento Nozawa, Pascal Germain, and Benjamin Guedj. PAC-Bayesian contrastive unsupervised representation learning. In Jonas Peters and David Sontag (eds.), Proceedings of the 36th Conference on Uncertainty in Artificial Intelligence (UAI), volume 124 of Proceedings of Machine Learning Research, pp. 21-30. PMLR, 03-06 Aug 2020. URL https://proceedings.mlr.press/v124/nozawa20a.html." 
+ } + ] + } + ], + "index": 18 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "spans": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "type": "text", + "content": "Published as a conference paper at ICLR 2023" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 299, + 750, + 311, + 761 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 299, + 750, + 311, + 761 + ], + "spans": [ + { + "bbox": [ + 299, + 750, + 311, + 761 + ], + "type": "text", + "content": "12" + } + ] + } + ], + "index": 20 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 11 + }, + { + "para_blocks": [ + { + "bbox": [ + 105, + 82, + 507, + 731 + ], + "type": "list", + "angle": 0, + "index": 17, + "blocks": [ + { + "bbox": [ + 107, + 82, + 505, + 105 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 82, + 505, + 105 + ], + "spans": [ + { + "bbox": [ + 107, + 82, + 505, + 105 + ], + "type": "text", + "content": "Aaron van den Oord, Yazhe Li, and Oriol Vinyals. Representation learning with contrastive predictive coding. arXiv preprint arXiv:1807.03748, 2018." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 105, + 111, + 506, + 156 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 111, + 506, + 156 + ], + "spans": [ + { + "bbox": [ + 105, + 111, + 506, + 156 + ], + "type": "text", + "content": "Yaniv Ovadia, Emily Fertig, Jie Ren, Zachary Nado, David Sculley, Sebastian Nowozin, Joshua Dillon, Balaji Lakshminarayanan, and Jasper Snoek. Can you trust your model's uncertainty? evaluating predictive uncertainty under dataset shift. Advances in neural information processing systems, 32, 2019." 
+ } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 107, + 162, + 504, + 186 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 162, + 504, + 186 + ], + "spans": [ + { + "bbox": [ + 107, + 162, + 504, + 186 + ], + "type": "text", + "content": "Omkar M Parkhi, Andrea Vedaldi, Andrew Zisserman, and CV Jawahar. Cats and dogs. In 2012 IEEE conference on computer vision and pattern recognition, pp. 3498-3505. IEEE, 2012." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 106, + 191, + 506, + 236 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 191, + 506, + 236 + ], + "spans": [ + { + "bbox": [ + 106, + 191, + 506, + 236 + ], + "type": "text", + "content": "F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825-2830, 2011." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 106, + 243, + 506, + 276 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 243, + 506, + 276 + ], + "spans": [ + { + "bbox": [ + 106, + 243, + 506, + 276 + ], + "type": "text", + "content": "Michael P Perrone and Leon N Cooper. When networks disagree: Ensemble methods for hybrid neural networks. Technical report, Brown Univ Providence Ri Inst for Brain and Neural Systems, 1992." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 107, + 284, + 506, + 328 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 284, + 506, + 328 + ], + "spans": [ + { + "bbox": [ + 107, + 284, + 506, + 328 + ], + "type": "text", + "content": "Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. 
In International Conference on Machine Learning, pp. 8748-8763. PMLR, 2021." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 107, + 335, + 507, + 369 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 335, + 507, + 369 + ], + "spans": [ + { + "bbox": [ + 107, + 335, + 507, + 369 + ], + "type": "text", + "content": "Yangjun Ruan, Yann Dubois, and Chris J. Maddison. Optimal representations for covariate shift. In International Conference on Learning Representations, 2022. URL https://openreview.net/forum?id=Rf58LPCwJj0." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 107, + 375, + 506, + 409 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 375, + 506, + 409 + ], + "spans": [ + { + "bbox": [ + 107, + 375, + 506, + 409 + ], + "type": "text", + "content": "Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: a simple way to prevent neural networks from overfitting. The journal of machine learning research, 15(1):1929-1958, 2014." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 107, + 415, + 506, + 449 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 415, + 506, + 449 + ], + "spans": [ + { + "bbox": [ + 107, + 415, + 506, + 449 + ], + "type": "text", + "content": "Ilya Sutskever, James Martens, George Dahl, and Geoffrey Hinton. On the importance of initialization and momentum in deep learning. In International conference on machine learning, pp. 1139-1147. PMLR, 2013." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 107, + 456, + 504, + 490 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 456, + 504, + 490 + ], + "spans": [ + { + "bbox": [ + 107, + 456, + 504, + 490 + ], + "type": "text", + "content": "Antti Tarvainen and Harri Valpola. Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results. 
Advances in neural information processing systems, 30, 2017." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 107, + 496, + 504, + 520 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 496, + 504, + 520 + ], + "spans": [ + { + "bbox": [ + 107, + 496, + 504, + 520 + ], + "type": "text", + "content": "Yonglong Tian, Dilip Krishnan, and Phillip Isola. Contrastive multiview coding. In European conference on computer vision, pp. 776-794. Springer, 2020." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 107, + 526, + 506, + 560 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 526, + 506, + 560 + ], + "spans": [ + { + "bbox": [ + 107, + 526, + 506, + 560 + ], + "type": "text", + "content": "Nenad Tomasev, Ioana Bica, Brian McWilliams, Lars Buesing, Razvan Pascanu, Charles Blundell, and Jovana Mitrovic. Pushing the limits of self-supervised resnets: Can we outperform supervised learning without labels on ImageNet? arXiv preprint arXiv:2201.05119, 2022." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 107, + 566, + 506, + 600 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 566, + 506, + 600 + ], + "spans": [ + { + "bbox": [ + 107, + 566, + 506, + 600 + ], + "type": "text", + "content": "Linh Tran, Bastiaan S Veeling, Kevin Roth, Jakub Swiatkowski, Joshua V Dillon, Jasper Snoek, Stephan Mandt, Tim Salimans, Sebastian Nowozin, and Rodolphe Jenatton. Hydra: Preserving ensemble diversity for model distillation. arXiv preprint arXiv:2001.04694, 2020." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 107, + 606, + 504, + 640 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 606, + 504, + 640 + ], + "spans": [ + { + "bbox": [ + 107, + 606, + 504, + 640 + ], + "type": "text", + "content": "Xiao Wang, Haoqi Fan, Yuandong Tian, Daisuke Kihara, and Xinlei Chen. On the importance of asymmetry for Siamese representation learning. 
In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 16570-16579, 2022." + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 107, + 647, + 504, + 670 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 647, + 504, + 670 + ], + "spans": [ + { + "bbox": [ + 107, + 647, + 504, + 670 + ], + "type": "text", + "content": "Yeming Wen, Dustin Tran, and Jimmy Ba. Batchensemble: an alternative approach to efficient ensemble and lifelong learning. arXiv preprint arXiv:2002.06715, 2020." + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 107, + 677, + 506, + 731 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 677, + 506, + 731 + ], + "spans": [ + { + "bbox": [ + 107, + 677, + 506, + 731 + ], + "type": "text", + "content": "Mitchell Wortsman, Gabriel Ilharco, Samir Ya Gadre, Rebecca Roelofs, Raphael Gontijo-Lopes, Ari S Morcos, Hongseok Namkoong, Ali Farhadi, Yair Carmon, Simon Kornblith, et al. Model soups: averaging weights of multiple fine-tuned models improves accuracy without increasing inference time. In International Conference on Machine Learning, pp. 23965-23998. PMLR, 2022." 
+ } + ] + } + ], + "index": 16 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "spans": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "type": "text", + "content": "Published as a conference paper at ICLR 2023" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 299, + 750, + 310, + 760 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 299, + 750, + 310, + 760 + ], + "spans": [ + { + "bbox": [ + 299, + 750, + 310, + 760 + ], + "type": "text", + "content": "13" + } + ] + } + ], + "index": 18 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 12 + }, + { + "para_blocks": [ + { + "bbox": [ + 105, + 81, + 507, + 280 + ], + "type": "list", + "angle": 0, + "index": 6, + "blocks": [ + { + "bbox": [ + 105, + 81, + 507, + 117 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 81, + 507, + 117 + ], + "spans": [ + { + "bbox": [ + 105, + 81, + 507, + 117 + ], + "type": "text", + "content": "Zhirong Wu, Yuanjun Xiong, Stella X Yu, and Dahua Lin. Unsupervised feature learning via non-parametric instance discrimination. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 3733-3742, 2018." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 105, + 122, + 507, + 158 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 122, + 507, + 158 + ], + "spans": [ + { + "bbox": [ + 105, + 122, + 507, + 158 + ], + "type": "text", + "content": "Jianxiong Xiao, James Hays, Krista A Ehinger, Aude Oliva, and Antonio Torralba. Sun database: Large-scale scene recognition from abbey to zoo. In 2010 IEEE computer society conference on computer vision and pattern recognition, pp. 3485-3492. IEEE, 2010." 
+ } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 105, + 163, + 507, + 198 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 163, + 507, + 198 + ], + "spans": [ + { + "bbox": [ + 105, + 163, + 507, + 198 + ], + "type": "text", + "content": "Zhenda Xie, Zheng Zhang, Yue Cao, Yutong Lin, Jianmin Bao, Zhuliang Yao, Qi Dai, and Han Hu. Simmim: A simple framework for masked image modeling. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9653-9663, 2022." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 105, + 204, + 507, + 238 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 204, + 507, + 238 + ], + "spans": [ + { + "bbox": [ + 105, + 204, + 507, + 238 + ], + "type": "text", + "content": "Jure Zbontar, Li Jing, Ishan Misra, Yann LeCun, and Stephane Deny. Barlow twins: Self-supervised learning via redundancy reduction. In International Conference on Machine Learning, pp. 12310-12320. PMLR, 2021." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 105, + 245, + 507, + 280 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 245, + 507, + 280 + ], + "spans": [ + { + "bbox": [ + 105, + 245, + 507, + 280 + ], + "type": "text", + "content": "Jinghao Zhou, Chen Wei, Huiyu Wang, Wei Shen, Cihang Xie, Alan Yuille, and Tao Kong. Image BERT pre-training with online tokenizer. In International Conference on Learning Representations, 2022. URL https://openreview.net/forum?id=ydopy-e6Dg." 
+ } + ] + } + ], + "index": 5 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "spans": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "type": "text", + "content": "Published as a conference paper at ICLR 2023" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 300, + 750, + 311, + 760 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 300, + 750, + 311, + 760 + ], + "spans": [ + { + "bbox": [ + 300, + 750, + 311, + 760 + ], + "type": "text", + "content": "14" + } + ] + } + ], + "index": 7 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 13 + }, + { + "para_blocks": [ + { + "bbox": [ + 105, + 81, + 201, + 94 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 81, + 201, + 94 + ], + "spans": [ + { + "bbox": [ + 105, + 81, + 201, + 94 + ], + "type": "text", + "content": "A PSEUDOCODE" + } + ] + } + ], + "index": 1 + }, + { + "type": "code", + "bbox": [ + 105, + 144, + 402, + 329 + ], + "blocks": [ + { + "bbox": [ + 105, + 129, + 335, + 142 + ], + "lines": [ + { + "bbox": [ + 105, + 129, + 335, + 142 + ], + "spans": [ + { + "bbox": [ + 105, + 129, + 335, + 142 + ], + "type": "text", + "content": "Algorithm 1: Pseudocode for computing ensemble loss" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "code_caption" + }, + { + "bbox": [ + 105, + 144, + 402, + 329 + ], + "lines": [ + { + "bbox": [ + 105, + 144, + 402, + 329 + ], + "spans": [ + { + "bbox": [ + 105, + 144, + 402, + 329 + ], + "type": "text", + "content": "# b, n, c: batch size, number of ensemble heads, codebook size \n# log_ps, log_ct: student, teacher log probabilities with n ensembles \n# strategy: ensemble loss average strategy \n# tau_ent: temperature for entropy weighting \ndef ensemble_loss(log_ps, log_ct, strategy, tau_ent): \n b, n, c = log_ct.shape # axis 1 corresponds to 
ensemble \n log_ct = stop_grad(log_ct) # stop gradient for teacher \n if strategy == \"Unif\": \n loss = - (exp(log_ct) * log_ps).sum(axis=-1) \n loss = loss.mean(axis=1) # average over ensembles \n elif strategy == \"Prob\": \n log_mean_ct = logsumexp(log_ct, axis=1, b=1/n) # mean teacher \n log_mean_ps = logsumexp(log_ps, axis=1, b=1/n) # mean student \n loss = - (exp(log_mean_ct) * log_mean_ps).sum(axis=-1) \n elif strategy == \"Ent\": \n ent = - (exp(log_ct) * log_ct).sum(axis=-1) # teacher entropy \n weight = softmax(-ent/tau_ent, axis=1) # entropy weights \n loss = - (exp(log_ct) * log_ps).sum(axis=-1) \n loss = (loss * weight).sum(axis=1) # entropy weighted average \n return loss.mean() # average over samples" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "code_body" + } + ], + "index": 3, + "sub_type": "code", + "guess_lang": "python" + }, + { + "type": "code", + "bbox": [ + 105, + 383, + 491, + 712 + ], + "blocks": [ + { + "bbox": [ + 105, + 369, + 386, + 381 + ], + "lines": [ + { + "bbox": [ + 105, + 369, + 386, + 381 + ], + "spans": [ + { + "bbox": [ + 105, + 369, + 386, + 381 + ], + "type": "text", + "content": "Algorithm 2: Pseudocode for ensemble heads with simplified DINO" + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "code_caption" + }, + { + "bbox": [ + 105, + 383, + 491, + 712 + ], + "lines": [ + { + "bbox": [ + 105, + 383, + 491, + 712 + ], + "spans": [ + { + "bbox": [ + 105, + 383, + 491, + 712 + ], + "type": "text", + "content": "# n, c, eta: number of ensemble heads, codebook size, momentum update rate\n# fs, ft: student, teacher encoders\n# hs_ens, ht_ens: student, teacher projection heads with n ensembles, list with length n\n# mus_ens, mut_ens: student, teacher codebooks with n ensembles, list with length n\n# taus, taut: student, teacher temperatures\n# strategy: ensemble loss average strategy\n# tau_ent: temperature for entropy weighting\nfor x in dataloader: # load a batch with b samples\n    xs, xt = augs(x), augt(x) # 
random augmentations\n    zs, zt = fs(xs), ft(xt) # representations, (b, l)\n    # all following computation can be parallelized with batch computation\n    log_ps, log_rt = [], []\n    for j in range(n):\n        hs_j, ht_j = hs_ens[j], ht_ens[j] # j-th projection head\n        mus_j, mut_j = mus_ens[j], mut_ens[j] # j-th codebook, (d, c)\n        es_j, et_j = hs_j(zs), ht_j(zt) # j-th embedding, (b, d)\n        rs_j = (es_j @ mus_j) / (es_j.norm(axis=1, keepdims=True) * mus_j.norm(axis=0, keepdims=True)) / taus # student logits, (b, c)\n        rt_j = (et_j @ mut_j) / (et_j.norm(axis=1, keepdims=True) * mut_j.norm(axis=0, keepdims=True)) / taut # teacher logits, (b, c)\n        log_ps_j = logsoftmax(rs_j, axis=-1) # (b, c)\n        log_rt_j = logsoftmax(rt_j, axis=-1) # (b, c)\n        log_rt_j = renorm(log_rt_j) # adjust teacher predictions with centering or sinkhorn, omitted here for simplicity\n        log_ps.append(log_ps_j)\n        log_rt.append(log_rt_j)\n    log_ps = stack(log_ps, axis=1) # stacked student log probabilities, (b, n, c)\n    log_rt = stack(log_rt, axis=1) # stacked teacher log probabilities, (b, n, c)\n    loss = ensemble_loss(log_ps, log_rt, strategy=strategy, tau_ent=tau_ent) # compute ensemble loss\n    loss.backward() # back-propagate\n    sgd_update(fs, hs_ens, mus_ens) # apply gradient descent update for student\n    ema_update(ft, ht_ens, mut_ens, rate=eta) # apply momentum update for teacher" + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "code_body" + } + ], + "index": 5, + "sub_type": "code", + "guess_lang": "python" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "spans": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "type": "text", + "content": "Published as a conference paper at ICLR 2023" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 299, + 750, + 311, + 761 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 299, + 750, + 311, + 761 + ], + "spans": [ + { + "bbox": [ + 299, + 750, + 311, + 761
], + "type": "text", + "content": "15" + } + ] + } + ], + "index": 6 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 14 + }, + { + "para_blocks": [ + { + "bbox": [ + 105, + 81, + 257, + 94 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 81, + 257, + 94 + ], + "spans": [ + { + "bbox": [ + 105, + 81, + 257, + 94 + ], + "type": "text", + "content": "B EXPERIMENTAL DETAILS" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 104, + 106, + 506, + 152 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 106, + 506, + 152 + ], + "spans": [ + { + "bbox": [ + 104, + 106, + 506, + 152 + ], + "type": "text", + "content": "In this section, we provide details for our experiments. In Appx. B.1, we describe how we reproduced and improved the baseline DINO/MSN models. We give the implementation details for SSL training and evaluation in Appx. B.2 and Appx. B.3 respectively. All the hyper-parameters used in our experiments are in Appx. B.2.2." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 105, + 163, + 310, + 175 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 163, + 310, + 175 + ], + "spans": [ + { + "bbox": [ + 105, + 163, + 310, + 175 + ], + "type": "text", + "content": "B.1 REPRODUCING & IMPROVING BASELINES" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 104, + 183, + 506, + 316 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 183, + 506, + 316 + ], + "spans": [ + { + "bbox": [ + 104, + 183, + 506, + 316 + ], + "type": "text", + "content": "We carefully reproduced and further improved baseline methods (denoted as DINO* and MSN* respectively) with an extensive study and hyperparameter search (see Appx. B.1). In particular, we systematically study the projection head design (which we found is crucial for few-shot evaluation performance (Appx. C.2)) and different techniques for avoiding collapse used in both methods (Appx. C.1). 
DINO* performs significantly better than DINO on few-shot evaluation (e.g., " + }, + { + "bbox": [ + 104, + 183, + 506, + 316 + ], + "type": "inline_equation", + "content": "2\\sim 6" + }, + { + "bbox": [ + 104, + 183, + 506, + 316 + ], + "type": "text", + "content": " percentage point (p.p.) gains for 1-shot) and maintains the full-data evaluation performance. The main adjustments of DINO* are: (i) A 3-layer projection head with a hidden dimension of 1024 (instead of 2048); (ii) Sinkhorn-Knopp (SK) normalization (instead of centering) is applied to teacher predictions, combined with a smaller teacher temperature " + }, + { + "bbox": [ + 104, + 183, + 506, + 316 + ], + "type": "inline_equation", + "content": "\\tau = 0.025" + }, + { + "bbox": [ + 104, + 183, + 506, + 316 + ], + "type": "text", + "content": " and codebook size " + }, + { + "bbox": [ + 104, + 183, + 506, + 316 + ], + "type": "inline_equation", + "content": "c = 1024" + }, + { + "bbox": [ + 104, + 183, + 506, + 316 + ], + "type": "text", + "content": " or 4096. MSN* uses the same projection head as DINO* and applies ME-MAX regularization without SK normalization (which is applied in MSN by default). Further details for DINO and MSN can be found below." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 105, + 327, + 169, + 339 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 327, + 169, + 339 + ], + "spans": [ + { + "bbox": [ + 105, + 327, + 169, + 339 + ], + "type": "text", + "content": "B.1.1 DINO" + } + ] + } + ], + "index": 5 + }, + { + "type": "table", + "bbox": [ + 109, + 401, + 502, + 469 + ], + "blocks": [ + { + "bbox": [ + 104, + 354, + 504, + 389 + ], + "lines": [ + { + "bbox": [ + 104, + 354, + 504, + 389 + ], + "spans": [ + { + "bbox": [ + 104, + 354, + 504, + 389 + ], + "type": "text", + "content": "Table 5: Reproducing & Improving DINO. Our reproduced results match the public numbers. 
We further improve the DINO baseline (DINO*) by studying projection heads and collapse-avoiding techniques. The evaluation results of DINO/DINO* ViT-S/16 trained with 800 epochs are reported." + } + ] + } + ], + "index": 6, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 109, + 401, + 502, + 469 + ], + "lines": [ + { + "bbox": [ + 109, + 401, + 502, + 469 + ], + "spans": [ + { + "bbox": [ + 109, + 401, + 502, + 469 + ], + "type": "table", + "html": "
<tr><td></td><td colspan=4>Few-shot</td><td colspan=2>Full-data</td></tr>
<tr><td></td><td>1</td><td>2</td><td>5</td><td>~13 (1%)</td><td>k-NN</td><td>Linear</td></tr>
<tr><td>DINO (Caron et al., 2021)</td><td>38.9 ± 0.4</td><td>48.9 ± 0.3</td><td>58.5 ± 0.1</td><td>64.5</td><td>74.5</td><td>76.1 / 77.0</td></tr>
<tr><td>DINO (Ours reproduced)</td><td>39.1 ± 0.3</td><td>49.1 ± 0.5</td><td>58.6 ± 0.2</td><td>64.7</td><td>74.3</td><td>75.8 / 76.9</td></tr>
<tr><td>DINO* (Retuned)</td><td>44.6 ± 0.2</td><td>53.6 ± 0.3</td><td>61.1 ± 0.2</td><td>66.2</td><td>74.1</td><td>75.8 / 76.9</td></tr>
", + "image_path": "b2a2e41958ede2350a5e360b42fe559d83ce2c86ef637dd7610ae3b5a081277a.jpg" + } + ] + } + ], + "index": 7, + "angle": 0, + "type": "table_body" + } + ], + "index": 7 + }, + { + "bbox": [ + 104, + 479, + 506, + 602 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 479, + 506, + 602 + ], + "spans": [ + { + "bbox": [ + 104, + 479, + 506, + 602 + ], + "type": "text", + "content": "Reproducing DINO We carefully reproduced DINO with JAX following the official DINO implementation1. In Table 5, we report the evaluation results of DINO using ViT-S trained with 800 epochs following the exact training configuration for ViT-S/16 in the official DINO code. The official results of full-data evaluation and " + }, + { + "bbox": [ + 104, + 479, + 506, + 602 + ], + "type": "inline_equation", + "content": "1\\%" + }, + { + "bbox": [ + 104, + 479, + 506, + 602 + ], + "type": "text", + "content": " data evaluation are from Caron et al. (2021), the other few-shot evaluation results are evaluated by Assran et al. (2022) and also validated by us. Note that for consistency of full-data linear evaluation, we report the results with both the [CLS] token representations of the last layer and the concatenation of the [CLS] token representations from the last 4 layers following Caron et al. (2021). For 1-/2-/5-shots evaluation results, we report the mean accuracy and standard deviation across 3 random splits of the data following Assran et al. (2022). As shown in Table 5, our reproduced results are all comparable with the published numbers which validates the implementation of our training and evaluation pipelines." 
+ } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 104, + 613, + 506, + 715 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 613, + 506, + 715 + ], + "spans": [ + { + "bbox": [ + 104, + 613, + 506, + 715 + ], + "type": "text", + "content": "Improving DINO We improved the DINO baseline with a systematic empirical study of some important components. We first empirically compared different techniques for avoiding collapse (see Appx. C.1) and found that Sinkhorn-Knopp (SK) normalization is a more effective and also simpler technique for encouraging codebook usage than the centering operation used in DINO. We thus applied SK normalization, which enabled us to use a smaller teacher temperature " + }, + { + "bbox": [ + 104, + 613, + 506, + 715 + ], + "type": "inline_equation", + "content": "\\tau = 0.025" + }, + { + "bbox": [ + 104, + 613, + 506, + 715 + ], + "type": "text", + "content": " (instead of " + }, + { + "bbox": [ + 104, + 613, + 506, + 715 + ], + "type": "inline_equation", + "content": "\\tau = 0.07" + }, + { + "bbox": [ + 104, + 613, + 506, + 715 + ], + "type": "text", + "content": ") and a much smaller codebook size " + }, + { + "bbox": [ + 104, + 613, + 506, + 715 + ], + "type": "inline_equation", + "content": "c = 1024" + }, + { + "bbox": [ + 104, + 613, + 506, + 715 + ], + "type": "text", + "content": " or 4096 (instead of 65536). These modifications led to similar performance as DINO with a much smaller codebook (up to 1M parameters, compared to 16M parameters for DINO). Next we empirically studied the effect of projection heads for different evaluation metrics (see Appx. 
C.2), and found that the design of" + } + ] + } + ], + "index": 9 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "spans": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "type": "text", + "content": "Published as a conference paper at ICLR 2023" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 117, + 720, + 340, + 732 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 117, + 720, + 340, + 732 + ], + "spans": [ + { + "bbox": [ + 117, + 720, + 340, + 732 + ], + "type": "text", + "content": "https://github.com/facebookresearch/dino" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 300, + 750, + 311, + 760 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 300, + 750, + 311, + 760 + ], + "spans": [ + { + "bbox": [ + 300, + 750, + 311, + 760 + ], + "type": "text", + "content": "16" + } + ] + } + ], + "index": 11 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 15 + }, + { + "para_blocks": [ + { + "bbox": [ + 104, + 82, + 506, + 149 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 82, + 506, + 149 + ], + "spans": [ + { + "bbox": [ + 104, + 82, + 506, + 149 + ], + "type": "text", + "content": "projection heads is crucial for few-shot evaluation metrics and an overly powerful projection head (e.g., the 3-layer MLP with a hidden dimension of 2048 used in DINO/MSN/etc.) could significantly hurt the few-shot performance. With an empirical study of projection head architectures, we found that simply reducing the hidden dimension to 1024 significantly improves the few-shot evaluation performance while maintaining full-data evaluation performance. The improved results of DINO* are shown in Table 5." 
+ } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 105, + 160, + 167, + 171 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 160, + 167, + 171 + ], + "spans": [ + { + "bbox": [ + 105, + 160, + 167, + 171 + ], + "type": "text", + "content": "B.1.2 MSN" + } + ] + } + ], + "index": 2 + }, + { + "type": "table", + "bbox": [ + 108, + 232, + 502, + 301 + ], + "blocks": [ + { + "bbox": [ + 104, + 185, + 504, + 220 + ], + "lines": [ + { + "bbox": [ + 104, + 185, + 504, + 220 + ], + "spans": [ + { + "bbox": [ + 104, + 185, + 504, + 220 + ], + "type": "text", + "content": "Table 6: Reproducing & improving MSN. We implement " + }, + { + "bbox": [ + 104, + 185, + 504, + 220 + ], + "type": "inline_equation", + "content": "\\mathsf{MSN^{*}}" + }, + { + "bbox": [ + 104, + 185, + 504, + 220 + ], + "type": "text", + "content": " by adding ME-MAX regularization and masking to DINO*, which surpasses public MSN results. The evaluation results of MSN/MSN* ViT-S/16 trained with 800 epochs are reported." + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 108, + 232, + 502, + 301 + ], + "lines": [ + { + "bbox": [ + 108, + 232, + 502, + 301 + ], + "spans": [ + { + "bbox": [ + 108, + 232, + 502, + 301 + ], + "type": "table", + "html": "
Few-shotFull-data
125~13 (1%)k-NNLinear
MSN (Assran et al., 2022)47.1 ± 0.155.8 ± 0.662.8 ± 0.367.2-- / 76.9
MSN (Repro)39.1 ± 0.349.2 ± 0.358.4 ± 0.164.372.874.7 / 75.5
MSN* (Retuned)47.4 ± 0.156.3 ± 0.462.8 ± 0.267.173.375.6 / 76.6
", + "image_path": "ef482eb8da3b2eef6dfd5d4b3c41b41b8cbb4b20575f8fc58f977eaa82c18716.jpg" + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "table_body" + } + ], + "index": 4 + }, + { + "bbox": [ + 104, + 304, + 504, + 350 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 304, + 504, + 350 + ], + "spans": [ + { + "bbox": [ + 104, + 304, + 504, + 350 + ], + "type": "text", + "content": "We carefully implemented MSN by adding its main components, i.e., ME-MAX regularization and masking, to the DINO implementation (denoted as " + }, + { + "bbox": [ + 104, + 304, + 504, + 350 + ], + "type": "inline_equation", + "content": "\\mathrm{MSN}^*" + }, + { + "bbox": [ + 104, + 304, + 504, + 350 + ], + "type": "text", + "content": "), which surpassed public results as shown in Table 6. Note that the implementation of " + }, + { + "bbox": [ + 104, + 304, + 504, + 350 + ], + "type": "inline_equation", + "content": "\\mathrm{MSN}^*" + }, + { + "bbox": [ + 104, + 304, + 504, + 350 + ], + "type": "text", + "content": " does not exactly match the public implementation in the public MSN code" + }, + { + "bbox": [ + 104, + 304, + 504, + 350 + ], + "type": "inline_equation", + "content": "^2" + }, + { + "bbox": [ + 104, + 304, + 504, + 350 + ], + "type": "text", + "content": ", where the main differences are:" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 130, + 358, + 506, + 430 + ], + "type": "list", + "angle": 0, + "index": 8, + "blocks": [ + { + "bbox": [ + 130, + 358, + 506, + 392 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 358, + 506, + 392 + ], + "spans": [ + { + "bbox": [ + 130, + 358, + 506, + 392 + ], + "type": "text", + "content": "- MSN applies ME-MAX with Sinkhorn-Knopp normalization by default (as in the released training configuration), which we empirically find does not work very well (see Table 9). 
" + }, + { + "bbox": [ + 130, + 358, + 506, + 392 + ], + "type": "inline_equation", + "content": "\\mathrm{MSN}^*" + }, + { + "bbox": [ + 130, + 358, + 506, + 392 + ], + "type": "text", + "content": " does not apply SK normalization and tunes the regularization strength for ME-MAX." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 130, + 396, + 506, + 430 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 396, + 506, + 430 + ], + "spans": [ + { + "bbox": [ + 130, + 396, + 506, + 430 + ], + "type": "text", + "content": "- Some differences in implementation details, e.g., schedules for learning rate/weight decay, batch normalization in projection heads, specific data augmentations, etc. " + }, + { + "bbox": [ + 130, + 396, + 506, + 430 + ], + "type": "inline_equation", + "content": "\\mathrm{MSN}^*" + }, + { + "bbox": [ + 130, + 396, + 506, + 430 + ], + "type": "text", + "content": " uses the exact same setup as DINO\\* which follows original DINO implementation." + } + ] + } + ], + "index": 7 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 104, + 438, + 504, + 472 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 438, + 504, + 472 + ], + "spans": [ + { + "bbox": [ + 104, + 438, + 504, + 472 + ], + "type": "text", + "content": "We initially tried to exactly reproduce the original MSN following the public MSN code, but the results are much below the public ones, as shown in Table 6. Incorporating the two differences above bridges the gap and makes " + }, + { + "bbox": [ + 104, + 438, + 504, + 472 + ], + "type": "inline_equation", + "content": "\\mathrm{MSN}^*" + }, + { + "bbox": [ + 104, + 438, + 504, + 472 + ], + "type": "text", + "content": " surpass the public results." 
+ } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 105, + 485, + 233, + 496 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 485, + 233, + 496 + ], + "spans": [ + { + "bbox": [ + 105, + 485, + 233, + 496 + ], + "type": "text", + "content": "B.2 PRETRAINING DETAILS" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 104, + 506, + 506, + 529 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 506, + 506, + 529 + ], + "spans": [ + { + "bbox": [ + 104, + 506, + 506, + 529 + ], + "type": "text", + "content": "In this subsection, we provide the general implementation details in Appx. B.2.1 and specific hyperparameters in Appx. B.2.2 in Appx. B.2.2 for reproducibility." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 105, + 540, + 259, + 551 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 540, + 259, + 551 + ], + "spans": [ + { + "bbox": [ + 105, + 540, + 259, + 551 + ], + "type": "text", + "content": "B.2.1 IMPLEMENTATION DETAILS" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 104, + 559, + 506, + 715 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 559, + 506, + 715 + ], + "spans": [ + { + "bbox": [ + 104, + 559, + 506, + 715 + ], + "type": "text", + "content": "**Common setup** We experimented with DINO (Caron et al., 2021) and MSN (Assran et al., 2022) models on ImageNet ILSVRC-2012 dataset (Deng et al., 2009). We mainly followed the training setup in Caron et al. (2021). In particular, all models were trained with AdamW optimizer (Loshchilov & Hutter, 2018) and a batch size of 1024. The learning rate was linearly warmuped to 0.002 " + }, + { + "bbox": [ + 104, + 559, + 506, + 715 + ], + "type": "inline_equation", + "content": "(= 0.001 \\times \\text{batch size} / 512)" + }, + { + "bbox": [ + 104, + 559, + 506, + 715 + ], + "type": "text", + "content": " and followed a cosine decay schedule. 
The weight decay followed a cosine schedule from 0.04 to 0.4. The momentum rate for the teacher was increased from 0.996 to 1 with a cosine schedule following BYOL (Grill et al., 2020). A stochastic depth (Huang et al., 2016) of 0.1 was applied without dropout (Srivastava et al., 2014). The student temperature " + }, + { + "bbox": [ + 104, + 559, + 506, + 715 + ], + "type": "inline_equation", + "content": "\tau" + }, + { + "bbox": [ + 104, + 559, + 506, + 715 + ], + "type": "text", + "content": " was set to 0.1. As with DINO, we used the data augmentations of BYOL and the multi-crop augmentation of SwAV (Caron et al., 2020). In particular, 2 global views with a " + }, + { + "bbox": [ + 104, + 559, + 506, + 715 + ], + "type": "inline_equation", + "content": "224 \times 224" + }, + { + "bbox": [ + 104, + 559, + 506, + 715 + ], + "type": "text", + "content": " resolution and crop area range [0.25, 1.0] were generated for the teacher and student, and another 10 local views with " + }, + { + "bbox": [ + 104, + 559, + 506, + 715 + ], + "type": "inline_equation", + "content": "96 \times 96" + }, + { + "bbox": [ + 104, + 559, + 506, + 715 + ], + "type": "text", + "content": " resolution and crop area range [0.08, 0.25] were used as extra augmented inputs for the student. For MSN, we used the exact same setup and incorporated its major components: 1) mean entropy maximization (ME-MAX) regularization; 2) masking as an extra augmentation applied to the student global view."
+ } + ] + } + ], + "index": 13 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "spans": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "type": "text", + "content": "Published as a conference paper at ICLR 2023" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 116, + 720, + 335, + 732 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 116, + 720, + 335, + 732 + ], + "spans": [ + { + "bbox": [ + 116, + 720, + 335, + 732 + ], + "type": "inline_equation", + "content": "^{2}" + }, + { + "bbox": [ + 116, + 720, + 335, + 732 + ], + "type": "text", + "content": "https://github.com/facebookresearch/msn" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "spans": [ + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "type": "text", + "content": "17" + } + ] + } + ], + "index": 15 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 16 + }, + { + "para_blocks": [ + { + "bbox": [ + 104, + 82, + 506, + 193 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 82, + 506, + 193 + ], + "spans": [ + { + "bbox": [ + 104, + 82, + 506, + 193 + ], + "type": "text", + "content": "Main modifications We retuned the baselines (DINO* and MSN*) as detailed in Appx. B.1, and the main adjustments are as followed. We used a 3-layer projection head with a hidden dimension of 1024. 
The output embedding (i.e., " + }, + { + "bbox": [ + 104, + 82, + 506, + 193 + ], + "type": "inline_equation", + "content": "(h_{\\psi} \\circ r_{\\omega})(x)" + }, + { + "bbox": [ + 104, + 82, + 506, + 193 + ], + "type": "text", + "content": ") and the codes (i.e., " + }, + { + "bbox": [ + 104, + 82, + 506, + 193 + ], + "type": "inline_equation", + "content": "\\mu" + }, + { + "bbox": [ + 104, + 82, + 506, + 193 + ], + "type": "text", + "content": ") both have a dimension of 256 and are " + }, + { + "bbox": [ + 104, + 82, + 506, + 193 + ], + "type": "inline_equation", + "content": "L_{2}" + }, + { + "bbox": [ + 104, + 82, + 506, + 193 + ], + "type": "text", + "content": " normalized. For DINO*, Sinkhorn-Knopp (SK) normalization was applied to teacher predictions. For MSN*, ME-MAX was used without SK normalization and the regularization strength was tuned over " + }, + { + "bbox": [ + 104, + 82, + 506, + 193 + ], + "type": "inline_equation", + "content": "\\{3, 4, 5\\}" + }, + { + "bbox": [ + 104, + 82, + 506, + 193 + ], + "type": "text", + "content": ". For all models, we used teacher temperature " + }, + { + "bbox": [ + 104, + 82, + 506, + 193 + ], + "type": "inline_equation", + "content": "\\tau = 0.025" + }, + { + "bbox": [ + 104, + 82, + 506, + 193 + ], + "type": "text", + "content": " which was linearly decayed from 0.05 for the first 30 epochs. 
The codebook size " + }, + { + "bbox": [ + 104, + 82, + 506, + 193 + ], + "type": "inline_equation", + "content": "c" + }, + { + "bbox": [ + 104, + 82, + 506, + 193 + ], + "type": "text", + "content": " was selected over " + }, + { + "bbox": [ + 104, + 82, + 506, + 193 + ], + "type": "inline_equation", + "content": "\{1024, 4096\}" + }, + { + "bbox": [ + 104, + 82, + 506, + 193 + ], + "type": "text", + "content": " for all models, and typically " + }, + { + "bbox": [ + 104, + 82, + 506, + 193 + ], + "type": "inline_equation", + "content": "c = 4096" + }, + { + "bbox": [ + 104, + 82, + 506, + 193 + ], + "type": "text", + "content": " was selected for baseline methods and " + }, + { + "bbox": [ + 104, + 82, + 506, + 193 + ], + "type": "inline_equation", + "content": "c = 1024" + }, + { + "bbox": [ + 104, + 82, + 506, + 193 + ], + "type": "text", + "content": " was selected for ours. For our " + }, + { + "bbox": [ + 104, + 82, + 506, + 193 + ], + "type": "inline_equation", + "content": "(h_{\psi}, \mu)" + }, + { + "bbox": [ + 104, + 82, + 506, + 193 + ], + "type": "text", + "content": "-ensembles with ENT, the entropy weighting temperature " + }, + { + "bbox": [ + 104, + 82, + 506, + 193 + ], + "type": "inline_equation", + "content": "\gamma" + }, + { + "bbox": [ + 104, + 82, + 506, + 193 + ], + "type": "text", + "content": " was linearly decayed from 0.5 to the specified value."
+ } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 105, + 204, + 234, + 216 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 204, + 234, + 216 + ], + "spans": [ + { + "bbox": [ + 105, + 204, + 234, + 216 + ], + "type": "text", + "content": "B.2.2 HYPER-PARAMETERS" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 104, + 223, + 405, + 236 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 223, + 405, + 236 + ], + "spans": [ + { + "bbox": [ + 104, + 223, + 405, + 236 + ], + "type": "text", + "content": "We report the hyperparameters for training our models for reproducibility:" + } + ] + } + ], + "index": 3 + }, + { + "type": "table", + "bbox": [ + 106, + 271, + 504, + 504 + ], + "blocks": [ + { + "bbox": [ + 187, + 245, + 422, + 258 + ], + "lines": [ + { + "bbox": [ + 187, + 245, + 422, + 258 + ], + "spans": [ + { + "bbox": [ + 187, + 245, + 422, + 258 + ], + "type": "text", + "content": "Table 7: Hyper-parameters for training the DINO* model." + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 106, + 271, + 504, + 504 + ], + "lines": [ + { + "bbox": [ + 106, + 271, + 504, + 504 + ], + "spans": [ + { + "bbox": [ + 106, + 271, + 504, + 504 + ], + "type": "table", + "html": "
Hyper-parameterViT-S/16ViT-B/16ViT-B/8
DINO*DINO*-PROB (16)DINO*-ENT (4/16)DINO*DINO*-ENT (16)DINO*DINO*-ENT (16)
training epoch800400300
batch size102410241024
learning rate2e-32e-32e-3
warmup epoch103010
min lr1e-51e-54e-5
weight decay0.04 → 0.40.04 → 0.40.04 → 0.4
stochastic depth0.10.10.1
gradient clip3.01.03.0
momentum0.996 → 1.00.996 → 1.00.996 → 1.0
# of multi-crops101010
masking ratio---
proj. layer333
proj. hidden dim102410241024
emb. dim d256256256
rep. dim384768768
codebook size c4096102410244096102440961024
student temp.0.10.10.1
teacher temp.0.0250.0250.025
te. temp. decay epoch303030
centerxxx
SK norm
ME-MAX weight---
ent. weight temp. γ--0.05-0.05-0.06
γ init.--0.5-0.5-0.5
γ decay epoch--30-30-30
", + "image_path": "0ce535b282b281cff1393e8223140d3e8be97283eef8434f1fc586abc05a4ea3.jpg" + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "table_body" + } + ], + "index": 5 + }, + { + "bbox": [ + 105, + 516, + 244, + 528 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 516, + 244, + 528 + ], + "spans": [ + { + "bbox": [ + 105, + 516, + 244, + 528 + ], + "type": "text", + "content": "B.3 EVALUATION PROTOCALS" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 104, + 536, + 506, + 625 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 536, + 506, + 625 + ], + "spans": [ + { + "bbox": [ + 104, + 536, + 506, + 625 + ], + "type": "text", + "content": "Few-shot linear evaluation We followed the few-shot evaluation protocol in Assran et al. (2022). Specifically, we used the 1-/2-/5-shot ImageNet dataset splits3 in Assran et al. (2022) and " + }, + { + "bbox": [ + 104, + 536, + 506, + 625 + ], + "type": "inline_equation", + "content": "1\\%" + }, + { + "bbox": [ + 104, + 536, + 506, + 625 + ], + "type": "text", + "content": " (" + }, + { + "bbox": [ + 104, + 536, + 506, + 625 + ], + "type": "inline_equation", + "content": "\\sim 13" + }, + { + "bbox": [ + 104, + 536, + 506, + 625 + ], + "type": "text", + "content": "-shot) ImageNet dataset splits4. For given labelled images, we took a single central crop of size " + }, + { + "bbox": [ + 104, + 536, + 506, + 625 + ], + "type": "inline_equation", + "content": "224 \\times 224" + }, + { + "bbox": [ + 104, + 536, + 506, + 625 + ], + "type": "text", + "content": " without additional data augmentations, and extracted the output [CLS] token representations from the frozen pretrained model. Then we trained a linear classifier with multi-class logistic regression on top of the extracted representations. We used the scikit-learn package (Pedregosa et al., 2011) for the logistic regression classifier. 
For all few-shot evaluations, we searched the " + }, + { + "bbox": [ + 104, + 536, + 506, + 625 + ], + "type": "inline_equation", + "content": "\mathrm{L}_2" + }, + { + "bbox": [ + 104, + 536, + 506, + 625 + ], + "type": "text", + "content": " regularization strength over " + }, + { + "bbox": [ + 104, + 536, + 506, + 625 + ], + "type": "inline_equation", + "content": "\{1\mathrm{e}-4, 3\mathrm{e}-4, 1\mathrm{e}-3, 3\mathrm{e}-3, 1\mathrm{e}-2, 3\mathrm{e}-2, 1\mathrm{e}-1, 3\mathrm{e}-1, 1, 3, 10\}" + }, + { + "bbox": [ + 104, + 536, + 506, + 625 + ], + "type": "text", + "content": "." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 104, + 636, + 507, + 693 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 636, + 507, + 693 + ], + "spans": [ + { + "bbox": [ + 104, + 636, + 507, + 693 + ], + "type": "text", + "content": "Full-data linear evaluation We followed the linear evaluation protocol in Caron et al. (2021). Specifically, we trained a linear classifier on top of the representations extracted from the frozen pretrained model. The linear classifier was optimized by SGD with Nesterov momentum (Nesterov, 1983; Sutskever et al., 2013) of 0.9 and a batch size of 4096 for 100 epochs on the whole ImageNet dataset, following a cosine learning rate decay schedule. We did not apply any weight decay."
+ } + ] + } + ], + "index": 8 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "spans": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "type": "text", + "content": "Published as a conference paper at ICLR 2023" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 118, + 700, + 410, + 711 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 118, + 700, + 410, + 711 + ], + "spans": [ + { + "bbox": [ + 118, + 700, + 410, + 711 + ], + "type": "text", + "content": "3Publicly available at https://github.com/facebookresearch/msn" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 106, + 712, + 505, + 731 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 712, + 505, + 731 + ], + "spans": [ + { + "bbox": [ + 106, + 712, + 505, + 731 + ], + "type": "text", + "content": "4Publicly available at https://github.com/google-research/simclr/tree/master/imagenet_subsets" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "spans": [ + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "type": "text", + "content": "18" + } + ] + } + ], + "index": 12 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 17 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 132, + 105, + 480, + 360 + ], + "blocks": [ + { + "bbox": [ + 189, + 80, + 420, + 92 + ], + "lines": [ + { + "bbox": [ + 189, + 80, + 420, + 92 + ], + "spans": [ + { + "bbox": [ + 189, + 80, + 420, + 92 + ], + "type": "text", + "content": "Table 8: Hyper-parameters for training the " + }, + { + "bbox": [ + 189, + 80, + 420, + 92 + ], + "type": "inline_equation", + "content": "{\\mathrm{{MSN}}}^{ * }" + }, + { + "bbox": [ + 189, + 80, + 420, + 92 + ], + "type": "text", + "content": " model." 
+ } + ] + } + ], + "index": 1, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 132, + 105, + 480, + 360 + ], + "lines": [ + { + "bbox": [ + 132, + 105, + 480, + 360 + ], + "spans": [ + { + "bbox": [ + 132, + 105, + 480, + 360 + ], + "type": "table", + "html": "
Hyper-parameterViT-S/16ViT-B/16ViT-B/8
DINO*MSN*-ENT (2/8)MSN*MSN*-ENT (8)MSN*MSN*-ENT (8)
training epoch800400300
batch size102410241024
learning rate2e-32e-32e-3
warmup epoch203020
min lr1e-54e-54e-5
weight decay0.04 → 0.40.04 → 0.40.04 → 0.4
stochastic depth0.10.10.1
gradient clip1.01.01.0
momentum0.996 → 1.00.996 → 1.00.996 → 1.0
# of multi-crops101010
masking ratio0.20.20.15
proj. layer333
proj. hidden dim102410241024
emb. dim d256256256
rep. dim384768768
codebook size c409610244096102440961024
student temp.0.10.10.1
teacher temp.0.0250.0250.025
te. temp. decay epoch303030
centerXXX
SK normXXX
ME-MAX weight4.04.04.0
ent. weight temp. γ-0.01-0.005-0.01
γ init.-0.5-0.5-0.5
γ decay epoch-30-30-30
", + "image_path": "f37856b7343d311e5439e11a81d14b5ed3e4b0d25d2ee0e7220e7b47751129d5.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "table_body" + } + ], + "index": 2 + }, + { + "bbox": [ + 104, + 376, + 506, + 486 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 376, + 506, + 486 + ], + "spans": [ + { + "bbox": [ + 104, + 376, + 506, + 486 + ], + "type": "text", + "content": "During training, we only applied basic data augmentations including random resized crops of size " + }, + { + "bbox": [ + 104, + 376, + 506, + 486 + ], + "type": "inline_equation", + "content": "224 \\times 224" + }, + { + "bbox": [ + 104, + 376, + 506, + 486 + ], + "type": "text", + "content": " and horizontal flips. During testing, we took a single central crop of the same size. For ViT-S/16, Caron et al. (2021) found that concatenating the [CLS] token representations from the last " + }, + { + "bbox": [ + 104, + 376, + 506, + 486 + ], + "type": "inline_equation", + "content": "l" + }, + { + "bbox": [ + 104, + 376, + 506, + 486 + ], + "type": "text", + "content": " (specifically, " + }, + { + "bbox": [ + 104, + 376, + 506, + 486 + ], + "type": "inline_equation", + "content": "l = 4" + }, + { + "bbox": [ + 104, + 376, + 506, + 486 + ], + "type": "text", + "content": ") layers (c.f. Appendix F.2 in Caron et al. (2021)) improved the results by about 1 p.p. We followed the same procedure, but reported linear evaluation results with both " + }, + { + "bbox": [ + 104, + 376, + 506, + 486 + ], + "type": "inline_equation", + "content": "l = 1" + }, + { + "bbox": [ + 104, + 376, + 506, + 486 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 104, + 376, + 506, + 486 + ], + "type": "inline_equation", + "content": "l = 4" + }, + { + "bbox": [ + 104, + 376, + 506, + 486 + ], + "type": "text", + "content": " in Table 2 for consistency. 
In our empirical study with ViT-S/16, we used the result with " + }, + { + "bbox": [ + 104, + 376, + 506, + 486 + ], + "type": "inline_equation", + "content": "l = 1" + }, + { + "bbox": [ + 104, + 376, + 506, + 486 + ], + "type": "text", + "content": ". For larger models (e.g., ViT-B/16), we followed Caron et al. (2021); Zhou et al. (2022) to use the concatenation of the [CLS] token representation and the average-pooled patch tokens from the last " + }, + { + "bbox": [ + 104, + 376, + 506, + 486 + ], + "type": "inline_equation", + "content": "l = 1" + }, + { + "bbox": [ + 104, + 376, + 506, + 486 + ], + "type": "text", + "content": " layer for linear evaluation. For all linear evaluations, we searched the base learning rate over " + }, + { + "bbox": [ + 104, + 376, + 506, + 486 + ], + "type": "inline_equation", + "content": "\\{4.8\\mathrm{e} - 3, 1.6\\mathrm{e} - 2, 4.8\\mathrm{e} - 2, 1.6\\mathrm{e} - 1, 4.8\\mathrm{e} - 1, 1.6\\}" + }, + { + "bbox": [ + 104, + 376, + 506, + 486 + ], + "type": "text", + "content": "." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 104, + 501, + 506, + 651 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 501, + 506, + 651 + ], + "spans": [ + { + "bbox": [ + 104, + 501, + 506, + 651 + ], + "type": "text", + "content": "Full-data " + }, + { + "bbox": [ + 104, + 501, + 506, + 651 + ], + "type": "inline_equation", + "content": "k" + }, + { + "bbox": [ + 104, + 501, + 506, + 651 + ], + "type": "text", + "content": "-NN evaluation We followed the " + }, + { + "bbox": [ + 104, + 501, + 506, + 651 + ], + "type": "inline_equation", + "content": "k" + }, + { + "bbox": [ + 104, + 501, + 506, + 651 + ], + "type": "text", + "content": "-NN evaluation protocol in Caron et al. (2021); Wu et al. (2018). 
Specifically, for each image in the given dataset, we took a single central crop of size " + }, + { + "bbox": [ + 104, + 501, + 506, + 651 + ], + "type": "inline_equation", + "content": "224 \\times 224" + }, + { + "bbox": [ + 104, + 501, + 506, + 651 + ], + "type": "text", + "content": " without additional data augmentations, and extracted the output [CLS] token representations from the frozen pretrained model. The extracted representations are used for a weighted " + }, + { + "bbox": [ + 104, + 501, + 506, + 651 + ], + "type": "inline_equation", + "content": "k" + }, + { + "bbox": [ + 104, + 501, + 506, + 651 + ], + "type": "text", + "content": "-Nearest-Neighbor classifier. In particular, denote the stored training representations and labels as " + }, + { + "bbox": [ + 104, + 501, + 506, + 651 + ], + "type": "inline_equation", + "content": "\\mathcal{D} = \\{(z_i, y_i)\\}_{i=1}^N" + }, + { + "bbox": [ + 104, + 501, + 506, + 651 + ], + "type": "text", + "content": ". For a test image with extracted representation " + }, + { + "bbox": [ + 104, + 501, + 506, + 651 + ], + "type": "inline_equation", + "content": "z" + }, + { + "bbox": [ + 104, + 501, + 506, + 651 + ], + "type": "text", + "content": ", denote the set of its top " + }, + { + "bbox": [ + 104, + 501, + 506, + 651 + ], + "type": "inline_equation", + "content": "k" + }, + { + "bbox": [ + 104, + 501, + 506, + 651 + ], + "type": "text", + "content": "-NN training samples as " + }, + { + "bbox": [ + 104, + 501, + 506, + 651 + ], + "type": "inline_equation", + "content": "\\mathcal{D}_k[z] \\subseteq \\mathcal{D}" + }, + { + "bbox": [ + 104, + 501, + 506, + 651 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 104, + 501, + 506, + 651 + ], + "type": "inline_equation", + "content": "|\\mathcal{D}_k[z]| = k" + }, + { + "bbox": [ + 104, + 501, + 506, + 651 + ], + "type": "text", + "content": ". 
The " + }, + { + "bbox": [ + 104, + 501, + 506, + 651 + ], + "type": "inline_equation", + "content": "k" + }, + { + "bbox": [ + 104, + 501, + 506, + 651 + ], + "type": "text", + "content": "-NN set " + }, + { + "bbox": [ + 104, + 501, + 506, + 651 + ], + "type": "inline_equation", + "content": "\\mathcal{D}_k[z]" + }, + { + "bbox": [ + 104, + 501, + 506, + 651 + ], + "type": "text", + "content": " is used to make the prediction for the test image with a weighted vote, i.e., " + }, + { + "bbox": [ + 104, + 501, + 506, + 651 + ], + "type": "inline_equation", + "content": "\\hat{y} = \\arg \\max_y \\left( \\sum_{(z_j, y_j) \\in \\mathcal{D}_k[z]} \\alpha_j \\mathbf{1}_{y=y_j} \\right)" + }, + { + "bbox": [ + 104, + 501, + 506, + 651 + ], + "type": "text", + "content": ", where " + }, + { + "bbox": [ + 104, + 501, + 506, + 651 + ], + "type": "inline_equation", + "content": "\\mathbf{1}_{y=y_j}" + }, + { + "bbox": [ + 104, + 501, + 506, + 651 + ], + "type": "text", + "content": " is the one-hot vector corresponding to label " + }, + { + "bbox": [ + 104, + 501, + 506, + 651 + ], + "type": "inline_equation", + "content": "y_j" + }, + { + "bbox": [ + 104, + 501, + 506, + 651 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 104, + 501, + 506, + 651 + ], + "type": "inline_equation", + "content": "\\alpha_j" + }, + { + "bbox": [ + 104, + 501, + 506, + 651 + ], + "type": "text", + "content": " is the weight induced by the cosine similarity between " + }, + { + "bbox": [ + 104, + 501, + 506, + 651 + ], + "type": "inline_equation", + "content": "z" + }, + { + "bbox": [ + 104, + 501, + 506, + 651 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 104, + 501, + 506, + 651 + ], + "type": "inline_equation", + "content": "z_j" + }, + { + "bbox": [ + 104, + 501, + 506, + 651 + ], + "type": "text", + "content": ", i.e., " + }, + { + "bbox": [ + 104, + 501, + 506, + 651 + ], + "type": "inline_equation", + "content": "\\alpha_j = \\exp \\left( 
\\frac{1}{\\tau'} \\frac{z^\\top z_j}{||z|| \\|z_j||} \\right)" + }, + { + "bbox": [ + 104, + 501, + 506, + 651 + ], + "type": "text", + "content": ". We set " + }, + { + "bbox": [ + 104, + 501, + 506, + 651 + ], + "type": "inline_equation", + "content": "\\tau' = 0.07" + }, + { + "bbox": [ + 104, + 501, + 506, + 651 + ], + "type": "text", + "content": " without tuning as in Caron et al. (2021); Wu et al. (2018). For all " + }, + { + "bbox": [ + 104, + 501, + 506, + 651 + ], + "type": "inline_equation", + "content": "k" + }, + { + "bbox": [ + 104, + 501, + 506, + 651 + ], + "type": "text", + "content": "-NN evaluations, we searched " + }, + { + "bbox": [ + 104, + 501, + 506, + 651 + ], + "type": "inline_equation", + "content": "k" + }, + { + "bbox": [ + 104, + 501, + 506, + 651 + ], + "type": "text", + "content": " over " + }, + { + "bbox": [ + 104, + 501, + 506, + 651 + ], + "type": "inline_equation", + "content": "\\{5, 10, 20, 50, 100\\}" + }, + { + "bbox": [ + 104, + 501, + 506, + 651 + ], + "type": "text", + "content": " and found that " + }, + { + "bbox": [ + 104, + 501, + 506, + 651 + ], + "type": "inline_equation", + "content": "k = 10" + }, + { + "bbox": [ + 104, + 501, + 506, + 651 + ], + "type": "text", + "content": " or " + }, + { + "bbox": [ + 104, + 501, + 506, + 651 + ], + "type": "inline_equation", + "content": "k = 20" + }, + { + "bbox": [ + 104, + 501, + 506, + 651 + ], + "type": "text", + "content": " was consistently the best." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 104, + 665, + 506, + 733 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 665, + 506, + 733 + ], + "spans": [ + { + "bbox": [ + 104, + 665, + 506, + 733 + ], + "type": "text", + "content": "Transfer evaluation via linear probing We mainly followed the transfer evaluation protocol in (Grill et al., 2020; Chen et al., 2020a). 
In particular, we used 9 of their 13 datasets that are available in tensorflow-datasets (tfd), namely Food-101 (Bossard et al., 2014), CIFAR10 (Krizhevsky et al., 2009), CIFAR100 (Krizhevsky et al., 2009), SUN397 scene dataset (Xiao et al., 2010), Stanford Cars (Krause et al., 2013), Describable Textures Dataset (Cimpoi et al., 2014, DTD), Oxford-IIIT Pets (Parkhi et al., 2012), Caltech-101 (Fei-Fei et al., 2004), Oxford 102 Flowers (Nilsback & Zisserman," + } + ] + } + ], + "index": 5 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 106, + 26, + 293, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 26, + 293, + 38 + ], + "spans": [ + { + "bbox": [ + 106, + 26, + 293, + 38 + ], + "type": "text", + "content": "Published as a conference paper at ICLR 2023" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 300, + 750, + 311, + 760 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 300, + 750, + 311, + 760 + ], + "spans": [ + { + "bbox": [ + 300, + 750, + 311, + 760 + ], + "type": "text", + "content": "19" + } + ] + } + ], + "index": 6 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 18 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 108, + 140, + 502, + 215 + ], + "blocks": [ + { + "bbox": [ + 104, + 79, + 504, + 125 + ], + "lines": [ + { + "bbox": [ + 104, + 79, + 504, + 125 + ], + "spans": [ + { + "bbox": [ + 104, + 79, + 504, + 125 + ], + "type": "text", + "content": "Table 9: Empirical study of different techniques for avoiding collapse. Using Sinkhorn-Knopp normalization instead of centering for DINO leads to improved performance, and matches the original DINO even with a much smaller codebook. The ME-MAX regularization of MSN is very effective and leads to significant improvement for few-shot evaluations." 
+ } + ] + } + ], + "index": 1, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 108, + 140, + 502, + 215 + ], + "lines": [ + { + "bbox": [ + 108, + 140, + 502, + 215 + ], + "spans": [ + { + "bbox": [ + 108, + 140, + 502, + 215 + ], + "type": "table", + "html": "
TechniqueFew-shotFull-data
CenterSinkhornME-MAX125~13 (1%)k-NNLinear
DINO37.8 ± 0.447.4 ± 0.356.9 ± 0.463.072.474.9
39.1 ± 0.349.4 ± 0.358.7 ± 0.264.874.176.0
MSN36.0 ± 0.446.6 ± 0.656.5 ± 0.263.273.275.2
43.9 ± 0.253.0 ± 0.361.1 ± 0.266.074.075.8
", + "image_path": "9a27ed9b5ae01b37badb6bf57d8bd17d7e8524fdbd48e5d2b36164ee73106be3.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "table_body" + } + ], + "index": 2 + }, + { + "type": "table", + "bbox": [ + 151, + 244, + 459, + 312 + ], + "blocks": [ + { + "bbox": [ + 168, + 219, + 441, + 232 + ], + "lines": [ + { + "bbox": [ + 168, + 219, + 441, + 232 + ], + "spans": [ + { + "bbox": [ + 168, + 219, + 441, + 232 + ], + "type": "text", + "content": "Table 10: ME-MAX regularization is sensitive to hyper-parameters." + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 151, + 244, + 459, + 312 + ], + "lines": [ + { + "bbox": [ + 151, + 244, + 459, + 312 + ], + "spans": [ + { + "bbox": [ + 151, + 244, + 459, + 312 + ], + "type": "table", + "html": "
<table>
<tr><td>Weight</td><td colspan="4">Few-shot</td><td colspan="2">Full-data</td></tr>
<tr><td></td><td>1</td><td>2</td><td>5</td><td>~13 (1%)</td><td>k-NN</td><td>Linear</td></tr>
<tr><td>1.0</td><td>37.6 ± 0.2</td><td>48.0 ± 0.4</td><td>57.7 ± 0.2</td><td>64.0</td><td>73.5</td><td>75.6</td></tr>
<tr><td>3.0</td><td>43.9 ± 0.2</td><td>53.0 ± 0.3</td><td>61.1 ± 0.2</td><td>66.0</td><td>74.0</td><td>75.8</td></tr>
<tr><td>5.0</td><td>43.6 ± 0.2</td><td>52.6 ± 0.4</td><td>60.4 ± 0.1</td><td>65.5</td><td>73.9</td><td>75.6</td></tr>
</table>
", + "image_path": "60b12267d8f6efabcdcf0af59665cc9d258e1f1c0f4fded2ce22f5096265c227.jpg" + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "table_body" + } + ], + "index": 4 + }, + { + "bbox": [ + 104, + 326, + 506, + 437 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 326, + 506, + 437 + ], + "spans": [ + { + "bbox": [ + 104, + 326, + 506, + 437 + ], + "type": "text", + "content": "2008). Following their evaluation metrics, we reported mean per-class accuracy for Oxford-IIIT Pets, Caltech-101, and Oxford 102 Flowers datasets and reported top-1 accuracy for other datasets. We transferred the models pretrained on ImageNet (Deng et al., 2009) to these datasets by training a linear classifier on top of frozen representations. In particular, we resized given images to " + }, + { + "bbox": [ + 104, + 326, + 506, + 437 + ], + "type": "inline_equation", + "content": "256 \\times 256" + }, + { + "bbox": [ + 104, + 326, + 506, + 437 + ], + "type": "text", + "content": " and took a single central crop of size " + }, + { + "bbox": [ + 104, + 326, + 506, + 437 + ], + "type": "inline_equation", + "content": "224 \\times 224" + }, + { + "bbox": [ + 104, + 326, + 506, + 437 + ], + "type": "text", + "content": " without additional data augmentations. We extracted the output [CLS] token representations from the frozen pretrained model. Then we trained a linear classifier with multi-class logistic regression on top of the extracted representations. We used the scikit-learn package (Pedregosa et al., 2011) for the logistic regression classifier. 
For all transfer evaluations, we searched the " + }, + { + "bbox": [ + 104, + 326, + 506, + 437 + ], + "type": "inline_equation", + "content": "\\mathbf{L}_2" + }, + { + "bbox": [ + 104, + 326, + 506, + 437 + ], + "type": "text", + "content": " regularization strength over " + }, + { + "bbox": [ + 104, + 326, + 506, + 437 + ], + "type": "inline_equation", + "content": "\\{1e - 6, 1e - 5, 1e - 4, 3e - 4, 1e - 3, 3e - 3, 1e - 2, 3e - 2, 1e - 1, 3, 1e, 3e, 1e2, 1e3, 1e4, 1e5\\}" + }, + { + "bbox": [ + 104, + 326, + 506, + 437 + ], + "type": "text", + "content": "." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 105, + 453, + 245, + 464 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 453, + 245, + 464 + ], + "spans": [ + { + "bbox": [ + 105, + 453, + 245, + 464 + ], + "type": "text", + "content": "C ADDITIONAL RESULTS" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 105, + 479, + 395, + 491 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 479, + 395, + 491 + ], + "spans": [ + { + "bbox": [ + 105, + 479, + 395, + 491 + ], + "type": "text", + "content": "C.1 EMPIRICAL STUDY OF TECHNIQUES FOR AVOIDING COLLAPSE" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 104, + 499, + 504, + 565 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 499, + 504, + 565 + ], + "spans": [ + { + "bbox": [ + 104, + 499, + 504, + 565 + ], + "type": "text", + "content": "Most self-supervised learning methods utilize some techniques to avoid collapse of representations with, e.g., contrastive loss (Chen et al., 2020a; He et al., 2020), batch normalization (Grill et al., 2020), asymmetric architecture design with a predictor (Grill et al., 2020; Chen & He, 2021), etc. In DINO and MSN, a learnable codebook is used for the learning objective and different techniques are applied to encourage the effective codebook usage. There are two potential cases of collapse (as discussed in Caron et al. 
(2021)):" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 130, + 575, + 504, + 679 + ], + "type": "list", + "angle": 0, + "index": 11, + "blocks": [ + { + "bbox": [ + 130, + 575, + 504, + 641 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 575, + 504, + 641 + ], + "spans": [ + { + "bbox": [ + 130, + 575, + 504, + 641 + ], + "type": "text", + "content": "- Dominating codes. This is the case of "winner-take-all": only a small portion of codes are being predicted while others are inactive. Typical solutions for avoiding this include applying Sinkhorn-Knopp normalization (Cuturi, 2013) as in SwAV (Caron et al., 2020), centering teacher logits as in DINO (Caron et al., 2021), and applying mean-entropy maximization regularization (ME-MAX) as in MSN (Assran et al., 2022). Note that in MSN, ME-MAX is combined with Sinkhorn-Knopp normalization by default." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 130, + 644, + 504, + 679 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 644, + 504, + 679 + ], + "spans": [ + { + "bbox": [ + 130, + 644, + 504, + 679 + ], + "type": "text", + "content": "- Uniform codes. This is the case where all codes are treated equally and the predictions reduce to being uniform over codes. A simple and effective solution is to apply sharpening, i.e., using a lower temperature for computing the teacher prediction." + } + ] + } + ], + "index": 10 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 104, + 687, + 504, + 733 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 687, + 504, + 733 + ], + "spans": [ + { + "bbox": [ + 104, + 687, + 504, + 733 + ], + "type": "text", + "content": "We systematically study different techniques in a unified setup. In particular, we used DINO with the ViT-S backbone, a 3-layer MLP projection head with hidden dimension 2048, and a codebook of size 4096 and dimension 256. 
We applied different techniques to DINO and searched the teacher temperature in " + }, + { + "bbox": [ + 104, + 687, + 504, + 733 + ], + "type": "inline_equation", + "content": "\\{0.0125, 0.025, 0.05\\}" + }, + { + "bbox": [ + 104, + 687, + 504, + 733 + ], + "type": "text", + "content": " for each. For ME-MAX, we searched regularization weight in" + } + ] + } + ], + "index": 12 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "spans": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "type": "text", + "content": "Published as a conference paper at ICLR 2023" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "spans": [ + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "type": "text", + "content": "20" + } + ] + } + ], + "index": 13 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 19 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 126, + 83, + 302, + 202 + ], + "blocks": [ + { + "bbox": [ + 126, + 83, + 302, + 202 + ], + "lines": [ + { + "bbox": [ + 126, + 83, + 302, + 202 + ], + "spans": [ + { + "bbox": [ + 126, + 83, + 302, + 202 + ], + "type": "image", + "image_path": "7aa3a2618ab4f81946ae6bb6a3ca92ed6ddae7efb732ca1e8841212ceb669cc1.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 194, + 209, + 236, + 220 + ], + "lines": [ + { + "bbox": [ + 194, + 209, + 236, + 220 + ], + "spans": [ + { + "bbox": [ + 194, + 209, + 236, + 220 + ], + "type": "text", + "content": "(a) Merged" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_caption" + } + ], + "index": 1 + }, + { + "type": "image", + "bbox": [ + 308, + 82, + 484, + 202 + ], + "blocks": [ + { + "bbox": [ + 308, + 82, + 484, + 202 + ], + "lines": [ + { + "bbox": [ + 308, + 82, + 
484, + 202 + ], + "spans": [ + { + "bbox": [ + 308, + 82, + 484, + 202 + ], + "type": "image", + "image_path": "eacebcf1752170d31391abb882a7366a3ddf70fc8efbfdda8a126124a1dd4a4e.jpg" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 377, + 209, + 414, + 220 + ], + "lines": [ + { + "bbox": [ + 377, + 209, + 414, + 220 + ], + "spans": [ + { + "bbox": [ + 377, + 209, + 414, + 220 + ], + "type": "text", + "content": "(b) 1-shot" + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_caption" + } + ], + "index": 3 + }, + { + "type": "image", + "bbox": [ + 121, + 223, + 299, + 342 + ], + "blocks": [ + { + "bbox": [ + 121, + 223, + 299, + 342 + ], + "lines": [ + { + "bbox": [ + 121, + 223, + 299, + 342 + ], + "spans": [ + { + "bbox": [ + 121, + 223, + 299, + 342 + ], + "type": "image", + "image_path": "17820ffe48f4d9f4a2a9c91f2bb0d87fd05d0efafd855e0c3791dd310c6138e9.jpg" + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 186, + 350, + 231, + 361 + ], + "lines": [ + { + "bbox": [ + 186, + 350, + 231, + 361 + ], + "spans": [ + { + "bbox": [ + 186, + 350, + 231, + 361 + ], + "type": "text", + "content": "(c) " + }, + { + "bbox": [ + 186, + 350, + 231, + 361 + ], + "type": "inline_equation", + "content": "1\\%" + }, + { + "bbox": [ + 186, + 350, + 231, + 361 + ], + "type": "text", + "content": " -data" + } + ] + } + ], + "index": 6, + "angle": 0, + "type": "image_caption" + } + ], + "index": 5 + }, + { + "type": "image", + "bbox": [ + 315, + 223, + 489, + 342 + ], + "blocks": [ + { + "bbox": [ + 315, + 223, + 489, + 342 + ], + "lines": [ + { + "bbox": [ + 315, + 223, + 489, + 342 + ], + "spans": [ + { + "bbox": [ + 315, + 223, + 489, + 342 + ], + "type": "image", + "image_path": "93cf075d1b3664c9ea0e84057da987b3d78503601684857ad97b19dc53f52ece.jpg" + } + ] + } + ], + "index": 7, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 378, + 350, + 425, + 361 + ], + "lines": [ + { + 
"bbox": [ + 378, + 350, + 425, + 361 + ], + "spans": [ + { + "bbox": [ + 378, + 350, + 425, + 361 + ], + "type": "text", + "content": "(d) Full-data" + } + ] + } + ], + "index": 8, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 104, + 369, + 506, + 437 + ], + "lines": [ + { + "bbox": [ + 104, + 369, + 506, + 437 + ], + "spans": [ + { + "bbox": [ + 104, + 369, + 506, + 437 + ], + "type": "text", + "content": "Figure 5: Effect of projection heads for different evaluation metrics. We compare non-ensemble projection heads with different depths and widths as well as our " + }, + { + "bbox": [ + 104, + 369, + 506, + 437 + ], + "type": "inline_equation", + "content": "(h_{\psi},\mu)" + }, + { + "bbox": [ + 104, + 369, + 506, + 437 + ], + "type": "text", + "content": "-ensembles, and evaluate linear evaluation performance with different amounts of labeled data. (a) shows the comparison of normalized metrics for non-ensembles. (b)-(d) compare non-ensemble and " + }, + { + "bbox": [ + 104, + 369, + 506, + 437 + ], + "type": "inline_equation", + "content": "(h_{\psi},\mu)" + }, + { + "bbox": [ + 104, + 369, + 506, + 437 + ], + "type": "text", + "content": "-ensembles by unnormalized metrics. 'Default' denotes the default projection heads used in many SSL methods. See analysis in Appx. C.2 for details." + } + ] + } + ], + "index": 9, + "angle": 0, + "type": "image_caption" + } + ], + "index": 7 + }, + { + "bbox": [ + 105, + 457, + 504, + 481 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 457, + 504, + 481 + ], + "spans": [ + { + "bbox": [ + 105, + 457, + 504, + 481 + ], + "type": "inline_equation", + "content": "\{1.0, 3.0, 5.0\}" + }, + { + "bbox": [ + 105, + 457, + 504, + 481 + ], + "type": "text", + "content": ". For ME-MAX combined with Sinkhorn, we followed Assran et al. (2022) and used the default regularization weight of 1.0. The results are in Table 10. 
We observed that:" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 130, + 490, + 506, + 604 + ], + "type": "list", + "angle": 0, + "index": 13, + "blocks": [ + { + "bbox": [ + 130, + 490, + 504, + 545 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 490, + 504, + 545 + ], + "spans": [ + { + "bbox": [ + 130, + 490, + 504, + 545 + ], + "type": "text", + "content": "- DINO's centering operation is not as strong as other techniques, and it favours a larger teacher temperature (e.g., 0.05). It does not work well when the codebook size (4096) is not as large as the one used in the original DINO model (65536). Switching to Sinkhorn-Knopp normalization leads to much better performance, and matches the performance of the original DINO (Table 5) with a much smaller codebook." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 130, + 548, + 506, + 604 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 548, + 506, + 604 + ], + "spans": [ + { + "bbox": [ + 130, + 548, + 506, + 604 + ], + "type": "text", + "content": "- MSN's ME-MAX regularization is very effective, and leads to significant improvements over others. We also found that it is sensitive to the regularization weight and teacher temperature (cf. Table 10). However, we observed that combining ME-MAX with Sinkhorn does not work well without tuning the regularization weight (which is recommended by Assran et al. (2022))."
+ } + ] + } + ], + "index": 12 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 105, + 617, + 317, + 628 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 617, + 317, + 628 + ], + "spans": [ + { + "bbox": [ + 105, + 617, + 317, + 628 + ], + "type": "text", + "content": "C.2 EMPIRICAL STUDY OF PROJECTION HEADS" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 104, + 638, + 504, + 693 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 638, + 504, + 693 + ], + "spans": [ + { + "bbox": [ + 104, + 638, + 504, + 693 + ], + "type": "text", + "content": "In this subsection, we systematically study the effect of projection heads for different evaluation metrics. In particular, we used DINO* ViT-S/16 as the base model and used different projection heads with (depth, width) searched over " + }, + { + "bbox": [ + 104, + 638, + 504, + 693 + ], + "type": "inline_equation", + "content": "\{2,3,4\} \times \{512,1024,2048,4096\}" + }, + { + "bbox": [ + 104, + 638, + 504, + 693 + ], + "type": "text", + "content": ". All models are trained for 300 epochs using exactly the same set of hyper-parameters. We measured the linear evaluation performance with different amounts of labeled data (i.e., full-data, " + }, + { + "bbox": [ + 104, + 638, + 504, + 693 + ], + "type": "inline_equation", + "content": "1\%" + }, + { + "bbox": [ + 104, + 638, + 504, + 693 + ], + "type": "text", + "content": " data, 1-shot)." + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 104, + 698, + 504, + 733 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 698, + 504, + 733 + ], + "spans": [ + { + "bbox": [ + 104, + 698, + 504, + 733 + ], + "type": "text", + "content": "In Fig. 5a, we plot different evaluation metrics (normalized respectively by the best of each) versus the number of projection head parameters. In Figs. 
5b to 5d, we plot each unnormalized evaluation metric respectively for different heads as well as our " + }, + { + "bbox": [ + 104, + 698, + 504, + 733 + ], + "type": "inline_equation", + "content": "(h_{\\psi}, \\mu)" + }, + { + "bbox": [ + 104, + 698, + 504, + 733 + ], + "type": "text", + "content": "-ensembles. Our key findings are:" + } + ] + } + ], + "index": 16 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "spans": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "type": "text", + "content": "Published as a conference paper at ICLR 2023" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 299, + 750, + 310, + 760 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 299, + 750, + 310, + 760 + ], + "spans": [ + { + "bbox": [ + 299, + 750, + 310, + 760 + ], + "type": "text", + "content": "21" + } + ] + } + ], + "index": 17 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 20 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 209, + 82, + 399, + 214 + ], + "blocks": [ + { + "bbox": [ + 209, + 82, + 399, + 214 + ], + "lines": [ + { + "bbox": [ + 209, + 82, + 399, + 214 + ], + "spans": [ + { + "bbox": [ + 209, + 82, + 399, + 214 + ], + "type": "image", + "image_path": "dbf4514b134bb8ce72a6eac34d5f024be139ceac375ac2bd229901674992e7ca.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 104, + 226, + 504, + 270 + ], + "lines": [ + { + "bbox": [ + 104, + 226, + 504, + 270 + ], + "spans": [ + { + "bbox": [ + 104, + 226, + 504, + 270 + ], + "type": "text", + "content": "Figure 6: Effect of teacher temperature for non-ensemble DINO*. DINO* with a lower temperature can achieve better few-shot performance, but still under-performs our ensemble method (DINO*-ENT with 16 heads, orange lines). 
DINO* ViT-S/16 trained for 300 epochs is used and " + }, + { + "bbox": [ + 104, + 226, + 504, + 270 + ], + "type": "inline_equation", + "content": "\tau = 0.025" + }, + { + "bbox": [ + 104, + 226, + 504, + 270 + ], + "type": "text", + "content": " is used for DINO*-ENT." + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_caption" + } + ], + "index": 1 + }, + { + "bbox": [ + 130, + 294, + 504, + 541 + ], + "type": "list", + "angle": 0, + "index": 7, + "blocks": [ + { + "bbox": [ + 130, + 294, + 504, + 371 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 294, + 504, + 371 + ], + "spans": [ + { + "bbox": [ + 130, + 294, + 504, + 371 + ], + "type": "text", + "content": "- The projection head has a relatively larger impact on few-shot evaluation metrics, as reflected by the relative magnitudes of different metrics in Fig. 5a. An overly powerful non-ensemble projection head significantly hurts the label efficiency of learned representations, reflected by a much larger drop in few-shot evaluation performance (up to 18 p.p. for 1-shot, 9 p.p. for " + }, + { + "bbox": [ + 130, + 294, + 504, + 371 + ], + "type": "inline_equation", + "content": "1\%" + }, + { + "bbox": [ + 130, + 294, + 504, + 371 + ], + "type": "text", + "content": " data). This result is also partially observed in Chen et al. (2020b), where they found that probing from intermediate layers of projection heads (which can be viewed as using a shallower head) could improve the semi-supervised learning (" + }, + { + "bbox": [ + 130, + 294, + 504, + 371 + ], + "type": "inline_equation", + "content": "1\% - 10\%" + }, + { + "bbox": [ + 130, + 294, + 504, + 371 + ], + "type": "text", + "content": ") results."
+ } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 130, + 376, + 504, + 453 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 376, + 504, + 453 + ], + "spans": [ + { + "bbox": [ + 130, + 376, + 504, + 453 + ], + "type": "text", + "content": "- The optimal projection head for different metrics can differ a lot. A weaker head improves label efficiency (few-shot performance), while a stronger (but not too strong) head improves linear decodability. As a result, the default projection head (3/2048) that is widely used in SimCLR v2 (Chen et al., 2020b), DINO (Caron et al., 2021), iBOT (Zhou et al., 2022), MSN (Assran et al., 2022), etc., does not perform well in few-shot evaluations (as shown by the green cross denoted as 'Default'), probably because it is selected by full-data evaluation metrics." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 130, + 460, + 504, + 480 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 460, + 504, + 480 + ], + "spans": [ + { + "bbox": [ + 130, + 460, + 504, + 480 + ], + "type": "text", + "content": "- There exist some projection heads that perform decently well on all evaluation metrics, e.g., the baseline model (3/1024) used in our experiments (pink star denoted as 'Our base')." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 130, + 487, + 504, + 541 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 487, + 504, + 541 + ], + "spans": [ + { + "bbox": [ + 130, + 487, + 504, + 541 + ], + "type": "text", + "content": "- Compared to naively tuning projection head architectures, our " + }, + { + "bbox": [ + 130, + 487, + 504, + 541 + ], + "type": "inline_equation", + "content": "(h_{\psi}, \mu)" + }, + { + "bbox": [ + 130, + 487, + 504, + 541 + ], + "type": "text", + "content": "-ensembles (orange curves in Figs. 
5b to 5d) consistently improve all metrics with different amounts of labeled data, even though this also increases the number of parameters in projection heads. Our " + }, + { + "bbox": [ + 130, + 487, + 504, + 541 + ], + "type": "inline_equation", + "content": "(h_{\psi}, \mu)" + }, + { + "bbox": [ + 130, + 487, + 504, + 541 + ], + "type": "text", + "content": "-ensembles outperform all non-ensembles, which also include the counterparts that probe from intermediate layers of a deeper head (i.e., shallower heads)." + } + ] + } + ], + "index": 6 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 105, + 557, + 315, + 569 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 557, + 315, + 569 + ], + "spans": [ + { + "bbox": [ + 105, + 557, + 315, + 569 + ], + "type": "text", + "content": "C.3 EMPIRICAL STUDY OF " + }, + { + "bbox": [ + 105, + 557, + 315, + 569 + ], + "type": "inline_equation", + "content": "(h_{\psi},\mu)" + }, + { + "bbox": [ + 105, + 557, + 315, + 569 + ], + "type": "text", + "content": "-ENSEMBLES" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 104, + 578, + 506, + 721 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 578, + 506, + 721 + ], + "spans": [ + { + "bbox": [ + 104, + 578, + 506, + 721 + ], + "type": "text", + "content": "Are the gains of ENT purely from sharper teacher predictions? Our ENT strategy assigns higher weights to the heads that predict with lower entropies, thus effectively using sharper teacher predictions as the targets. One may be curious about how this effect accounts for the gains of the ENT strategy. We empirically answer this question by studying the non-ensemble baseline that uses sharper teacher predictions in a data-independent manner (in contrast to ENT, which uses data-dependent entropy weights). 
Specifically, we compare non-ensemble DINO* baselines that use different teacher temperatures " + }, + { + "bbox": [ + 104, + 578, + 506, + 721 + ], + "type": "inline_equation", + "content": "\tau \in \{0.005, 0.01, 0.025, 0.05\}" + }, + { + "bbox": [ + 104, + 578, + 506, + 721 + ], + "type": "text", + "content": " and also our DINO*-ENT (16) with " + }, + { + "bbox": [ + 104, + 578, + 506, + 721 + ], + "type": "inline_equation", + "content": "\tau = 0.025" + }, + { + "bbox": [ + 104, + 578, + 506, + 721 + ], + "type": "text", + "content": ", as shown in Fig. 6. We find that the teacher temperature has a big impact on evaluation results, especially for few-shot evaluation. Compared to our default baseline that uses " + }, + { + "bbox": [ + 104, + 578, + 506, + 721 + ], + "type": "inline_equation", + "content": "\tau = 0.025" + }, + { + "bbox": [ + 104, + 578, + 506, + 721 + ], + "type": "text", + "content": ", a lower temperature (e.g., " + }, + { + "bbox": [ + 104, + 578, + 506, + 721 + ], + "type": "inline_equation", + "content": "\tau = 0.01" + }, + { + "bbox": [ + 104, + 578, + 506, + 721 + ], + "type": "text", + "content": ") can indeed improve the 1-shot performance (at the cost of worse full-data performance). However, an overly low temperature (" + }, + { + "bbox": [ + 104, + 578, + 506, + 721 + ], + "type": "inline_equation", + "content": "\tau = 0.005" + }, + { + "bbox": [ + 104, + 578, + 506, + 721 + ], + "type": "text", + "content": ") will hurt the performance. Our DINO*-ENT (16) consistently outperforms all the baselines, which implies the importance of selecting sharper teacher predictions in a data-dependent manner."
+ } + ] + } + ], + "index": 9 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "spans": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "type": "text", + "content": "Published as a conference paper at ICLR 2023" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "spans": [ + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "type": "text", + "content": "22" + } + ] + } + ], + "index": 10 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 21 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 107, + 150, + 504, + 274 + ], + "blocks": [ + { + "bbox": [ + 104, + 79, + 506, + 137 + ], + "lines": [ + { + "bbox": [ + 104, + 79, + 506, + 137 + ], + "spans": [ + { + "bbox": [ + 104, + 79, + 506, + 137 + ], + "type": "text", + "content": "Table 11: Full table of Table 1 including all metrics for comparing different ensemble strategies. ENT and PROB significantly improves over the non-ensemble baseline, while UNIF leads to no gains. Ensembling the whole projection head works the best. All models are DINO* ViT-S/16 trained for 300 epochs. The means and standard deviations over 3 initialization seeds for all evaluation results are reported." + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 107, + 150, + 504, + 274 + ], + "lines": [ + { + "bbox": [ + 107, + 150, + 504, + 274 + ], + "spans": [ + { + "bbox": [ + 107, + 150, + 504, + 274 + ], + "type": "table", + "html": "
<table>
<tr><td>How</td><td colspan="2">Where</td><td colspan="4">Few-shot</td><td colspan="2">Full-data</td></tr>
<tr><td></td><td>Proj. Head</td><td>Codebook</td><td>1</td><td>2</td><td>5</td><td>~13 (1%)</td><td>k-NN</td><td>Linear</td></tr>
<tr><td>Base</td><td></td><td></td><td>40.6 ± 0.2</td><td>49.8 ± 0.2</td><td>57.9 ± 0.3</td><td>63.4 ± 0.2</td><td>72.3 ± 0.1</td><td>74.4 ± 0.1</td></tr>
<tr><td>UNIF</td><td></td><td></td><td>40.4 ± 0.4</td><td>49.5 ± 0.4</td><td>57.6 ± 0.3</td><td>63.3 ± 0.3</td><td>72.2 ± 0.2</td><td>74.5 ± 0.2</td></tr>
<tr><td>PROB</td><td></td><td></td><td>39.7 ± 0.5</td><td>49.0 ± 0.5</td><td>57.4 ± 0.4</td><td>63.0 ± 0.4</td><td>72.8 ± 0.2</td><td>74.8 ± 0.1</td></tr>
<tr><td>PROB</td><td></td><td></td><td>41.9 ± 0.3</td><td>51.5 ± 0.5</td><td>59.6 ± 0.4</td><td>65.1 ± 0.3</td><td>73.7 ± 0.3</td><td>75.4 ± 0.1</td></tr>
<tr><td>ENT</td><td></td><td></td><td>40.6 ± 0.4</td><td>49.5 ± 0.6</td><td>58.0 ± 0.4</td><td>63.5 ± 0.4</td><td>72.1 ± 0.3</td><td>74.5 ± 0.3</td></tr>
<tr><td>ENT</td><td></td><td></td><td>43.0 ± 0.6</td><td>52.2 ± 0.8</td><td>59.7 ± 0.7</td><td>64.8 ± 0.5</td><td>72.9 ± 0.6</td><td>75.1 ± 0.4</td></tr>
<tr><td>ENT</td><td></td><td></td><td>44.0 ± 0.2</td><td>53.0 ± 0.5</td><td>60.5 ± 0.3</td><td>65.5 ± 0.1</td><td>73.2 ± 0.1</td><td>75.3 ± 0.1</td></tr>
<tr><td>ENT-ST</td><td></td><td></td><td>40.0 ± 0.5</td><td>39.2 ± 0.6</td><td>57.3 ± 0.5</td><td>62.7 ± 0.5</td><td>71.9 ± 0.4</td><td>74.0 ± 0.4</td></tr>
</table>
", + "image_path": "2a91811cb30ca0ac1f030c675ed7dcecd63230129dcea028d7b0fad6c6e8d1d3.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "table_body" + } + ], + "index": 2 + }, + { + "type": "table", + "bbox": [ + 107, + 327, + 503, + 406 + ], + "blocks": [ + { + "bbox": [ + 104, + 277, + 504, + 312 + ], + "lines": [ + { + "bbox": [ + 104, + 277, + 504, + 312 + ], + "spans": [ + { + "bbox": [ + 104, + 277, + 504, + 312 + ], + "type": "text", + "content": "Table 12: Comparison of different variants of PROB. The PROB strategy used in our experiments performs the best. '-' in the table denotes training divergence for PROB-MAX. The experimental setup is the same as Table 11." + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 107, + 327, + 503, + 406 + ], + "lines": [ + { + "bbox": [ + 107, + 327, + 503, + 406 + ], + "spans": [ + { + "bbox": [ + 107, + 327, + 503, + 406 + ], + "type": "table", + "html": "
<table>
<tr><td></td><td colspan="2">How</td><td>Where</td><td colspan="4">Few-shot</td><td colspan="2">Full-data</td></tr>
<tr><td></td><td>Weight by</td><td>Temp. γ</td><td></td><td>1</td><td>2</td><td>5</td><td>~13 (1%)</td><td>k-NN</td><td>Linear</td></tr>
<tr><td>Base</td><td></td><td></td><td></td><td>40.6 ± 0.2</td><td>49.8 ± 0.2</td><td>57.9 ± 0.3</td><td>63.4 ± 0.2</td><td>72.3 ± 0.1</td><td>74.4 ± 0.1</td></tr>
<tr><td>PROB</td><td>student</td><td>1</td><td></td><td>41.9 ± 0.3</td><td>51.5 ± 0.5</td><td>59.6 ± 0.4</td><td>65.1 ± 0.3</td><td>73.7 ± 0.3</td><td>75.4 ± 0.1</td></tr>
<tr><td>PROB-TE</td><td>teacher</td><td>1</td><td></td><td>41.5 ± 0.2</td><td>50.4 ± 0.3</td><td>58.3 ± 0.3</td><td>63.7 ± 0.1</td><td>72.3 ± 0.2</td><td>74.6 ± 0.1</td></tr>
<tr><td>PROB-MAX</td><td>student</td><td>0</td><td></td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr>
<tr><td>PROB-MAX-TE</td><td>teacher</td><td>0</td><td></td><td>41.4 ± 0.2</td><td>50.3 ± 0.3</td><td>58.1 ± 0.3</td><td>63.6 ± 0.2</td><td>72.3 ± 0.2</td><td>74.5 ± 0.2</td></tr>
</table>
", + "image_path": "10140e926b5dcade496087f3088fda8e813b6a2202ec94f8156c76ad55529556.jpg" + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "table_body" + } + ], + "index": 4 + }, + { + "bbox": [ + 104, + 422, + 504, + 444 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 422, + 504, + 444 + ], + "spans": [ + { + "bbox": [ + 104, + 422, + 504, + 444 + ], + "type": "text", + "content": "Comparison of different ensemble strategies and variants We present the full table of Table 1 that includes all the metrics in Table 11. The same observation holds for all metrics." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 104, + 449, + 506, + 483 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 449, + 506, + 483 + ], + "spans": [ + { + "bbox": [ + 104, + 449, + 506, + 483 + ], + "type": "text", + "content": "For all previous studies, we considered a specific instantiation of PROB strategy, i.e., weight by student predicted probabilities " + }, + { + "bbox": [ + 104, + 449, + 506, + 483 + ], + "type": "inline_equation", + "content": "f_{ijy} = \\log s(y|\\theta_j,x)" + }, + { + "bbox": [ + 104, + 449, + 506, + 483 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 104, + 449, + 506, + 483 + ], + "type": "inline_equation", + "content": "\\gamma = 1" + }, + { + "bbox": [ + 104, + 449, + 506, + 483 + ], + "type": "text", + "content": ", which has a nice interpretation of model average (see Sec. 3.3). We also studied different variants of the PROB strategy (see Appx. 
D.1)," + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 130, + 493, + 406, + 540 + ], + "type": "list", + "angle": 0, + "index": 10, + "blocks": [ + { + "bbox": [ + 130, + 493, + 382, + 506 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 493, + 382, + 506 + ], + "spans": [ + { + "bbox": [ + 130, + 493, + 382, + 506 + ], + "type": "text", + "content": "PROB-TE: weight by teacher " + }, + { + "bbox": [ + 130, + 493, + 382, + 506 + ], + "type": "inline_equation", + "content": "f_{ijy} = \\log t_i(y|x)" + }, + { + "bbox": [ + 130, + 493, + 382, + 506 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 130, + 493, + 382, + 506 + ], + "type": "inline_equation", + "content": "\\gamma = 1" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 130, + 510, + 396, + 523 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 510, + 396, + 523 + ], + "spans": [ + { + "bbox": [ + 130, + 510, + 396, + 523 + ], + "type": "text", + "content": "PROB-MAX: weight by student " + }, + { + "bbox": [ + 130, + 510, + 396, + 523 + ], + "type": "inline_equation", + "content": "f_{ijy} = \\log s_j(y|x)" + }, + { + "bbox": [ + 130, + 510, + 396, + 523 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 130, + 510, + 396, + 523 + ], + "type": "inline_equation", + "content": "\\gamma \\rightarrow 0" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 130, + 526, + 406, + 540 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 130, + 526, + 406, + 540 + ], + "spans": [ + { + "bbox": [ + 130, + 526, + 406, + 540 + ], + "type": "text", + "content": "PROB-MAX-TE: weight by teacher " + }, + { + "bbox": [ + 130, + 526, + 406, + 540 + ], + "type": "inline_equation", + "content": "f_{ijy} = \\log t_i(y|x)" + }, + { + "bbox": [ + 130, + 526, + 406, + 540 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 130, + 526, + 406, + 540 + ], + "type": "inline_equation", + "content": "\\gamma \\to 0" + 
} + ] + } + ], + "index": 9 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 104, + 548, + 506, + 649 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 548, + 506, + 649 + ], + "spans": [ + { + "bbox": [ + 104, + 548, + 506, + 649 + ], + "type": "text", + "content": "Table 12 compares the downstream performance for all the variants. We find that our PROB (used in our empirical studies) performs better than the other variants. Interestingly, weighting by the teacher (PROB-TE) performs worse than PROB. We conjecture that this is because the importance weights turn out to give a weighted average of teacher predictions as the surrogate target that is shared across all students (like PROB), but do not give effective preferential treatment to the students that are directly optimized (unlike PROB). Furthermore, PROB-MAX, which sharpens the importance weights, leads to training divergence. This is probably because the student predictions have higher variance, so sharp weights based on them lead to unstable training. In contrast, PROB-MAX-TE, which uses the (lower-variance) teacher, gives reasonable results comparable to PROB-TE." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 104, + 662, + 506, + 696 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 662, + 506, + 696 + ], + "spans": [ + { + "bbox": [ + 104, + 662, + 506, + 696 + ], + "type": "text", + "content": "Number of ensembles for " + }, + { + "bbox": [ + 104, + 662, + 506, + 696 + ], + "type": "inline_equation", + "content": "\mathbf{MSN}^*" + }, + { + "bbox": [ + 104, + 662, + 506, + 696 + ], + "type": "text", + "content": " In Fig. 
7a, we study the effect of increasing the number of " + }, + { + "bbox": [ + 104, + 662, + 506, + 696 + ], + "type": "inline_equation", + "content": "(h_{\\psi},\\mu)" + }, + { + "bbox": [ + 104, + 662, + 506, + 696 + ], + "type": "text", + "content": "-ensembles for " + }, + { + "bbox": [ + 104, + 662, + 506, + 696 + ], + "type": "inline_equation", + "content": "\\mathbf{MSN}^*" + }, + { + "bbox": [ + 104, + 662, + 506, + 696 + ], + "type": "text", + "content": "-ENT with ViT-S/16 trained for 800 epochs. The scaling trend is similar to DINO\\*-ENT (Fig. 3a) and the gains start to diminish when the number of heads increases above 8." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 104, + 709, + 506, + 733 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 709, + 506, + 733 + ], + "spans": [ + { + "bbox": [ + 104, + 709, + 506, + 733 + ], + "type": "text", + "content": "Effect of ENT temperature " + }, + { + "bbox": [ + 104, + 709, + 506, + 733 + ], + "type": "inline_equation", + "content": "\\gamma" + }, + { + "bbox": [ + 104, + 709, + 506, + 733 + ], + "type": "text", + "content": " for " + }, + { + "bbox": [ + 104, + 709, + 506, + 733 + ], + "type": "inline_equation", + "content": "\\mathbf{MSN}^*" + }, + { + "bbox": [ + 104, + 709, + 506, + 733 + ], + "type": "text", + "content": " Fig. 7b studies the effect of entropy weighting temperature " + }, + { + "bbox": [ + 104, + 709, + 506, + 733 + ], + "type": "inline_equation", + "content": "\\gamma" + }, + { + "bbox": [ + 104, + 709, + 506, + 733 + ], + "type": "text", + "content": " for " + }, + { + "bbox": [ + 104, + 709, + 506, + 733 + ], + "type": "inline_equation", + "content": "\\mathbf{MSN}^*" + }, + { + "bbox": [ + 104, + 709, + 506, + 733 + ], + "type": "text", + "content": "-ENT. 
We observed that " + }, + { + "bbox": [ + 104, + 709, + 506, + 733 + ], + "type": "inline_equation", + "content": "\\mathbf{MSN}^*" + }, + { + "bbox": [ + 104, + 709, + 506, + 733 + ], + "type": "text", + "content": " is more robust to small temperatures, and the" + } + ] + } + ], + "index": 13 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "spans": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "type": "text", + "content": "Published as a conference paper at ICLR 2023" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "spans": [ + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "type": "text", + "content": "23" + } + ] + } + ], + "index": 14 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 22 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 127, + 83, + 299, + 204 + ], + "blocks": [ + { + "bbox": [ + 127, + 83, + 299, + 204 + ], + "lines": [ + { + "bbox": [ + 127, + 83, + 299, + 204 + ], + "spans": [ + { + "bbox": [ + 127, + 83, + 299, + 204 + ], + "type": "image", + "image_path": "490cb30743786891a399ad47d2e7d5d357f8e031ff6e40f82d53e6a5347c2e08.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 169, + 211, + 260, + 222 + ], + "lines": [ + { + "bbox": [ + 169, + 211, + 260, + 222 + ], + "spans": [ + { + "bbox": [ + 169, + 211, + 260, + 222 + ], + "type": "text", + "content": "(a) Scaling of ensembles" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_caption" + } + ], + "index": 1 + }, + { + "type": "image", + "bbox": [ + 307, + 81, + 479, + 205 + ], + "blocks": [ + { + "bbox": [ + 307, + 81, + 479, + 205 + ], + "lines": [ + { + "bbox": [ + 307, + 81, + 479, + 205 + ], + "spans": [ + { + "bbox": [ + 307, + 81, + 479, + 205 
+ ], + "type": "image", + "image_path": "0a9b966552940abc5d3f63d5e0d2d091adf2b746587f3106e8aadebcd57a9fd5.jpg" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 336, + 211, + 454, + 222 + ], + "lines": [ + { + "bbox": [ + 336, + 211, + 454, + 222 + ], + "spans": [ + { + "bbox": [ + 336, + 211, + 454, + 222 + ], + "type": "text", + "content": "(b) Effect of ENT temperature " + }, + { + "bbox": [ + 336, + 211, + 454, + 222 + ], + "type": "inline_equation", + "content": "\\gamma" + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 104, + 231, + 506, + 266 + ], + "lines": [ + { + "bbox": [ + 104, + 231, + 506, + 266 + ], + "spans": [ + { + "bbox": [ + 104, + 231, + 506, + 266 + ], + "type": "text", + "content": "Figure 7: Empirical study for " + }, + { + "bbox": [ + 104, + 231, + 506, + 266 + ], + "type": "inline_equation", + "content": "\\mathbf{MSN}^*" + }, + { + "bbox": [ + 104, + 231, + 506, + 266 + ], + "type": "text", + "content": " -ENT. (a) The gains by increasing the number of " + }, + { + "bbox": [ + 104, + 231, + 506, + 266 + ], + "type": "inline_equation", + "content": "(h_{\\psi},\\mu)" + }, + { + "bbox": [ + 104, + 231, + 506, + 266 + ], + "type": "text", + "content": " ensembles start to diminish when it is over 8 heads. (b) " + }, + { + "bbox": [ + 104, + 231, + 506, + 266 + ], + "type": "inline_equation", + "content": "\\mathbf{MSN}^*" + }, + { + "bbox": [ + 104, + 231, + 506, + 266 + ], + "type": "text", + "content": " prefers smaller temperature for entropy weighting than DINO*." 
+ } + ] + } + ], + "index": 5, + "angle": 0, + "type": "image_caption" + } + ], + "index": 3 + }, + { + "bbox": [ + 104, + 328, + 504, + 353 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 328, + 504, + 353 + ], + "spans": [ + { + "bbox": [ + 104, + 328, + 504, + 353 + ], + "type": "text", + "content": "best " + }, + { + "bbox": [ + 104, + 328, + 504, + 353 + ], + "type": "inline_equation", + "content": "\\gamma = 0.01" + }, + { + "bbox": [ + 104, + 328, + 504, + 353 + ], + "type": "text", + "content": " is smaller than that of DINO\\* " + }, + { + "bbox": [ + 104, + 328, + 504, + 353 + ], + "type": "inline_equation", + "content": "(\\gamma = 0.05)" + }, + { + "bbox": [ + 104, + 328, + 504, + 353 + ], + "type": "text", + "content": ". When the temperature is too high, the performance drops as a result of under-specialization (i.e., less diversity) as with DINO\\*." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 105, + 411, + 317, + 423 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 411, + 317, + 423 + ], + "spans": [ + { + "bbox": [ + 105, + 411, + 317, + 423 + ], + "type": "text", + "content": "C.4 ANALYZING " + }, + { + "bbox": [ + 105, + 411, + 317, + 423 + ], + "type": "inline_equation", + "content": "(h_{\\psi},\\mu)" + }, + { + "bbox": [ + 105, + 411, + 317, + 423 + ], + "type": "text", + "content": " -ENSEMBLE DIVERSITY" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 104, + 449, + 504, + 495 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 449, + 504, + 495 + ], + "spans": [ + { + "bbox": [ + 104, + 449, + 504, + 495 + ], + "type": "text", + "content": "Visualizing " + }, + { + "bbox": [ + 104, + 449, + 504, + 495 + ], + "type": "inline_equation", + "content": "(h_{\\psi},\\mu)" + }, + { + "bbox": [ + 104, + 449, + 504, + 495 + ], + "type": "text", + "content": "-ensemble similarity We analyze the diversity between different heads by visualizing the similarity 
matrix between their codes. Directly measuring the similarity between codes in two heads does not work, because 1) the codes may live in different subspaces because of the ensembled projection heads; 2) they may be aligned not in the natural order but in a permuted one." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 104, + 499, + 506, + 628 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 499, + 506, + 628 + ], + "spans": [ + { + "bbox": [ + 104, + 499, + 506, + 628 + ], + "type": "text", + "content": "Therefore, we seek to align codes between different heads by how they are effectively used to 'cluster' the data. In particular, we use a set of randomly sampled inputs " + }, + { + "bbox": [ + 104, + 499, + 506, + 628 + ], + "type": "inline_equation", + "content": "\\{x^i\\}_{i\\in [b]}" + }, + { + "bbox": [ + 104, + 499, + 506, + 628 + ], + "type": "text", + "content": " of size " + }, + { + "bbox": [ + 104, + 499, + 506, + 628 + ], + "type": "inline_equation", + "content": "b = 51200" + }, + { + "bbox": [ + 104, + 499, + 506, + 628 + ], + "type": "text", + "content": " to obtain an empirical code assignment matrix " + }, + { + "bbox": [ + 104, + 499, + 506, + 628 + ], + "type": "inline_equation", + "content": "A^{j}\\in \\mathbb{R}^{b\\times c}" + }, + { + "bbox": [ + 104, + 499, + 506, + 628 + ], + "type": "text", + "content": " for each " + }, + { + "bbox": [ + 104, + 499, + 506, + 628 + ], + "type": "inline_equation", + "content": "(h_{\\psi},\\mu)" + }, + { + "bbox": [ + 104, + 499, + 506, + 628 + ], + "type": "text", + "content": "-ensemble " + }, + { + "bbox": [ + 104, + 499, + 506, + 628 + ], + "type": "inline_equation", + "content": "j\\in [m]" + }, + { + "bbox": [ + 104, + 499, + 506, + 628 + ], + "type": "text", + "content": ", where the " + }, + { + "bbox": [ + 104, + 499, + 506, + 628 + ], + "type": "inline_equation", + "content": "i" + }, + { + "bbox": [ + 104, + 499, + 506, + 628 + ], + "type": "text", + "content": "-th row 
of " + }, + { + "bbox": [ + 104, + 499, + 506, + 628 + ], + "type": "inline_equation", + "content": "A^j" + }, + { + "bbox": [ + 104, + 499, + 506, + 628 + ], + "type": "text", + "content": " corresponds to the teacher predictions " + }, + { + "bbox": [ + 104, + 499, + 506, + 628 + ], + "type": "inline_equation", + "content": "t_j(Y|x^i)" + }, + { + "bbox": [ + 104, + 499, + 506, + 628 + ], + "type": "text", + "content": ". For the " + }, + { + "bbox": [ + 104, + 499, + 506, + 628 + ], + "type": "inline_equation", + "content": "k" + }, + { + "bbox": [ + 104, + 499, + 506, + 628 + ], + "type": "text", + "content": "-th code in the head " + }, + { + "bbox": [ + 104, + 499, + 506, + 628 + ], + "type": "inline_equation", + "content": "j" + }, + { + "bbox": [ + 104, + 499, + 506, + 628 + ], + "type": "text", + "content": ", we extract the " + }, + { + "bbox": [ + 104, + 499, + 506, + 628 + ], + "type": "inline_equation", + "content": "k" + }, + { + "bbox": [ + 104, + 499, + 506, + 628 + ], + "type": "text", + "content": "-th column from " + }, + { + "bbox": [ + 104, + 499, + 506, + 628 + ], + "type": "inline_equation", + "content": "A^j" + }, + { + "bbox": [ + 104, + 499, + 506, + 628 + ], + "type": "text", + "content": " (i.e., its empirical assignment) as its embedding. For two codes, we measure their similarity by the cosine similarity between their embeddings. For a pair of heads " + }, + { + "bbox": [ + 104, + 499, + 506, + 628 + ], + "type": "inline_equation", + "content": "j" + }, + { + "bbox": [ + 104, + 499, + 506, + 628 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 104, + 499, + 506, + 628 + ], + "type": "inline_equation", + "content": "j'" + }, + { + "bbox": [ + 104, + 499, + 506, + 628 + ], + "type": "text", + "content": ", we align their codes using the Hungarian algorithm (Kuhn, 1955) to maximize the sum of cosine similarity. 
After that, we plot the similarity matrix, which is aligned and reordered by the similarity value on the diagonal (in descending order). Note that it is not necessary to do the alignment procedure for the PROB strategy since it is naturally aligned because of the direct distribution averaging over " + }, + { + "bbox": [ + 104, + 499, + 506, + 628 + ], + "type": "inline_equation", + "content": "(h_{\\psi},\\mu)" + }, + { + "bbox": [ + 104, + 499, + 506, + 628 + ], + "type": "text", + "content": "-ensembles, but we did so for a fair comparison with the other strategies." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 104, + 632, + 506, + 733 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 632, + 506, + 733 + ], + "spans": [ + { + "bbox": [ + 104, + 632, + 506, + 733 + ], + "type": "text", + "content": "We applied the same procedure to the different ensemble weighting strategies using DINO* with 4 " + }, + { + "bbox": [ + 104, + 632, + 506, + 733 + ], + "type": "inline_equation", + "content": "(h_{\\psi},\\mu)" + }, + { + "bbox": [ + 104, + 632, + 506, + 733 + ], + "type": "text", + "content": "-ensembles. We randomly picked a pair of heads and visualized the similarity matrix before (top row) and after (bottom row) the alignment-and-reordering procedure in Fig. 8. We found that before the alignment procedure, the similarity matrix of the PROB strategy is already mostly aligned because PROB explicitly introduces code correspondence between different heads. Furthermore, by analyzing the similarity decay pattern on the diagonal, it is clear that ENT learns the most diverse " + }, + { + "bbox": [ + 104, + 632, + 506, + 733 + ], + "type": "inline_equation", + "content": "(h_{\\psi},\\mu)" + }, + { + "bbox": [ + 104, + 632, + 506, + 733 + ], + "type": "text", + "content": "-ensembles while UNIF learns the least diverse ones, which may explain the difference in their empirical performance. 
For completeness, we also include the visualization of aligned similarity matrices for all pairs of heads in Figs. 9 to 11, the observations are the same." + } + ] + } + ], + "index": 10 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "spans": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "type": "text", + "content": "Published as a conference paper at ICLR 2023" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "spans": [ + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "type": "text", + "content": "24" + } + ] + } + ], + "index": 11 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 23 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 133, + 93, + 232, + 294 + ], + "blocks": [ + { + "bbox": [ + 133, + 93, + 232, + 294 + ], + "lines": [ + { + "bbox": [ + 133, + 93, + 232, + 294 + ], + "spans": [ + { + "bbox": [ + 133, + 93, + 232, + 294 + ], + "type": "image", + "image_path": "93d84602fa00e114dd3c0daeb60a27b653229fa1fbb04e8c93b8bae3163f38e4.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 165, + 303, + 199, + 314 + ], + "lines": [ + { + "bbox": [ + 165, + 303, + 199, + 314 + ], + "spans": [ + { + "bbox": [ + 165, + 303, + 199, + 314 + ], + "type": "text", + "content": "(a) UNIF" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_caption" + } + ], + "index": 1 + }, + { + "type": "image", + "bbox": [ + 246, + 93, + 345, + 294 + ], + "blocks": [ + { + "bbox": [ + 246, + 93, + 345, + 294 + ], + "lines": [ + { + "bbox": [ + 246, + 93, + 345, + 294 + ], + "spans": [ + { + "bbox": [ + 246, + 93, + 345, + 294 + ], + "type": "image", + "image_path": "69e3ff8a8e40a87c085722110883fb610a78890e858601ad95efdd9cf61d3d2a.jpg" + } + ] + } + 
], + "index": 3, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 278, + 303, + 314, + 314 + ], + "lines": [ + { + "bbox": [ + 278, + 303, + 314, + 314 + ], + "spans": [ + { + "bbox": [ + 278, + 303, + 314, + 314 + ], + "type": "text", + "content": "(b) PROB" + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_caption" + } + ], + "index": 3 + }, + { + "type": "image", + "bbox": [ + 360, + 94, + 478, + 296 + ], + "blocks": [ + { + "bbox": [ + 360, + 94, + 478, + 296 + ], + "lines": [ + { + "bbox": [ + 360, + 94, + 478, + 296 + ], + "spans": [ + { + "bbox": [ + 360, + 94, + 478, + 296 + ], + "type": "image", + "image_path": "7634234890279c11db0dfd9b0cb24cd3d4b24177529df99d2e1436bfc66bea1b.jpg" + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 403, + 303, + 434, + 314 + ], + "lines": [ + { + "bbox": [ + 403, + 303, + 434, + 314 + ], + "spans": [ + { + "bbox": [ + 403, + 303, + 434, + 314 + ], + "type": "text", + "content": "(c)ENT" + } + ] + } + ], + "index": 6, + "angle": 0, + "type": "image_caption" + } + ], + "index": 5 + }, + { + "type": "image", + "bbox": [ + 142, + 401, + 246, + 540 + ], + "blocks": [ + { + "bbox": [ + 104, + 323, + 506, + 380 + ], + "lines": [ + { + "bbox": [ + 104, + 323, + 506, + 380 + ], + "spans": [ + { + "bbox": [ + 104, + 323, + 506, + 380 + ], + "type": "text", + "content": "Figure 8: Visualization of " + }, + { + "bbox": [ + 104, + 323, + 506, + 380 + ], + "type": "inline_equation", + "content": "(h_{\\psi},\\mu)" + }, + { + "bbox": [ + 104, + 323, + 506, + 380 + ], + "type": "text", + "content": "-ensemble diversity. ENT learns the most diverse " + }, + { + "bbox": [ + 104, + 323, + 506, + 380 + ], + "type": "inline_equation", + "content": "(h_{\\psi},\\mu)" + }, + { + "bbox": [ + 104, + 323, + 506, + 380 + ], + "type": "text", + "content": "-ensembles while UNIF learns the least ones. 
We visualize the code similarity matrix between a pair of randomly selected projection heads. Top row shows the original similarity matrix (i.e., in natural order) and the bottom row shows the aligned similarity matrix which aligns codes by empirical assignment probabilities. DINO* ViT-S/16 with 4 heads is used. Best viewed in color." + } + ] + } + ], + "index": 7, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 142, + 401, + 246, + 540 + ], + "lines": [ + { + "bbox": [ + 142, + 401, + 246, + 540 + ], + "spans": [ + { + "bbox": [ + 142, + 401, + 246, + 540 + ], + "type": "image", + "image_path": "e05471b9f5e868d1dffbd8e5bcea52e9435ba7d5d8be5b8126734318036c4abf.jpg" + } + ] + } + ], + "index": 8, + "angle": 0, + "type": "image_body" + } + ], + "index": 8 + }, + { + "type": "image", + "bbox": [ + 250, + 402, + 353, + 539 + ], + "blocks": [ + { + "bbox": [ + 250, + 402, + 353, + 539 + ], + "lines": [ + { + "bbox": [ + 250, + 402, + 353, + 539 + ], + "spans": [ + { + "bbox": [ + 250, + 402, + 353, + 539 + ], + "type": "image", + "image_path": "4fcb7930e54c036bf09639468394c07434ebea281fd08eeee5a751eb6cbb0c30.jpg" + } + ] + } + ], + "index": 9, + "angle": 0, + "type": "image_body" + } + ], + "index": 9 + }, + { + "type": "image", + "bbox": [ + 360, + 402, + 484, + 539 + ], + "blocks": [ + { + "bbox": [ + 360, + 402, + 484, + 539 + ], + "lines": [ + { + "bbox": [ + 360, + 402, + 484, + 539 + ], + "spans": [ + { + "bbox": [ + 360, + 402, + 484, + 539 + ], + "type": "image", + "image_path": "be1db7c343bf205e685344364e792bd47d8b3209eaab5706fca18ff963715821.jpg" + } + ] + } + ], + "index": 10, + "angle": 0, + "type": "image_body" + } + ], + "index": 10 + }, + { + "type": "image", + "bbox": [ + 143, + 545, + 244, + 647 + ], + "blocks": [ + { + "bbox": [ + 143, + 545, + 244, + 647 + ], + "lines": [ + { + "bbox": [ + 143, + 545, + 244, + 647 + ], + "spans": [ + { + "bbox": [ + 143, + 545, + 244, + 647 + ], + "type": "image", + "image_path": 
"b26d4702e63657ab529e61b968bf21c875cc03721b6843d0481d2a8fd9ae83f9.jpg" + } + ] + } + ], + "index": 11, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 104, + 662, + 506, + 696 + ], + "lines": [ + { + "bbox": [ + 104, + 662, + 506, + 696 + ], + "spans": [ + { + "bbox": [ + 104, + 662, + 506, + 696 + ], + "type": "text", + "content": "Figure 9: Visualization of " + }, + { + "bbox": [ + 104, + 662, + 506, + 696 + ], + "type": "inline_equation", + "content": "(h_{\\psi},\\mu)" + }, + { + "bbox": [ + 104, + 662, + 506, + 696 + ], + "type": "text", + "content": "-ensemble diversity between all pairs of heads for DINO*UNIF. The UNIF strategy does not learn diverse " + }, + { + "bbox": [ + 104, + 662, + 506, + 696 + ], + "type": "inline_equation", + "content": "(h_{\\psi},\\mu)" + }, + { + "bbox": [ + 104, + 662, + 506, + 696 + ], + "type": "text", + "content": "-ensembles. DINO* with ViT-S/16 and 4 heads is used. Best viewed in color." + } + ] + } + ], + "index": 14, + "angle": 0, + "type": "image_caption" + } + ], + "index": 11 + }, + { + "type": "image", + "bbox": [ + 251, + 545, + 353, + 647 + ], + "blocks": [ + { + "bbox": [ + 251, + 545, + 353, + 647 + ], + "lines": [ + { + "bbox": [ + 251, + 545, + 353, + 647 + ], + "spans": [ + { + "bbox": [ + 251, + 545, + 353, + 647 + ], + "type": "image", + "image_path": "d0eeef05fd5b7cbc2af0e80f19c6c5bc29c4d802d0008e6b11d345448f6f5d6a.jpg" + } + ] + } + ], + "index": 12, + "angle": 0, + "type": "image_body" + } + ], + "index": 12 + }, + { + "type": "image", + "bbox": [ + 360, + 545, + 461, + 647 + ], + "blocks": [ + { + "bbox": [ + 360, + 545, + 461, + 647 + ], + "lines": [ + { + "bbox": [ + 360, + 545, + 461, + 647 + ], + "spans": [ + { + "bbox": [ + 360, + 545, + 461, + 647 + ], + "type": "image", + "image_path": "800c2c5d5ceee656fe26bbec9d5a4eec6a5751cf5cc5f19a9e7a56734071c9b8.jpg" + } + ] + } + ], + "index": 13, + "angle": 0, + "type": "image_body" + } + ], + "index": 13 + } + ], + "discarded_blocks": [ + { + 
"bbox": [ + 105, + 26, + 293, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "spans": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "type": "text", + "content": "Published as a conference paper at ICLR 2023" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "spans": [ + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "type": "text", + "content": "25" + } + ] + } + ], + "index": 15 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 24 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 143, + 113, + 246, + 233 + ], + "blocks": [ + { + "bbox": [ + 143, + 113, + 246, + 233 + ], + "lines": [ + { + "bbox": [ + 143, + 113, + 246, + 233 + ], + "spans": [ + { + "bbox": [ + 143, + 113, + 246, + 233 + ], + "type": "image", + "image_path": "272be9539ff5f046a8c9e23b472a43e11da8447364537fc81791658061a50b93.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 164, + 240, + 222, + 250 + ], + "lines": [ + { + "bbox": [ + 164, + 240, + 222, + 250 + ], + "spans": [ + { + "bbox": [ + 164, + 240, + 222, + 250 + ], + "type": "text", + "content": "Head 2-Head 3" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_caption" + } + ], + "index": 1 + }, + { + "type": "image", + "bbox": [ + 251, + 113, + 353, + 233 + ], + "blocks": [ + { + "bbox": [ + 251, + 113, + 353, + 233 + ], + "lines": [ + { + "bbox": [ + 251, + 113, + 353, + 233 + ], + "spans": [ + { + "bbox": [ + 251, + 113, + 353, + 233 + ], + "type": "image", + "image_path": "be6fa8e76ae37017b8a194d4f64f1ae0bc32fecf97ff2739cc8970c7f355fe34.jpg" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 272, + 240, + 331, + 250 + ], + "lines": [ + { + "bbox": [ + 272, + 240, + 331, + 250 + ], + "spans": [ + { + "bbox": [ + 272, + 240, + 
331, + 250 + ], + "type": "text", + "content": "Head 2 - Head 4" + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_caption" + } + ], + "index": 3 + }, + { + "type": "image", + "bbox": [ + 360, + 113, + 484, + 234 + ], + "blocks": [ + { + "bbox": [ + 360, + 113, + 484, + 234 + ], + "lines": [ + { + "bbox": [ + 360, + 113, + 484, + 234 + ], + "spans": [ + { + "bbox": [ + 360, + 113, + 484, + 234 + ], + "type": "image", + "image_path": "01b47cb375d674dc6f01d1367efc025e231692d883ac3813544199c4fb5ff68f.jpg" + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 381, + 240, + 440, + 250 + ], + "lines": [ + { + "bbox": [ + 381, + 240, + 440, + 250 + ], + "spans": [ + { + "bbox": [ + 381, + 240, + 440, + 250 + ], + "type": "text", + "content": "Head 3 - Head 4" + } + ] + } + ], + "index": 6, + "angle": 0, + "type": "image_caption" + } + ], + "index": 5 + }, + { + "type": "image", + "bbox": [ + 143, + 257, + 245, + 358 + ], + "blocks": [ + { + "bbox": [ + 143, + 257, + 245, + 358 + ], + "lines": [ + { + "bbox": [ + 143, + 257, + 245, + 358 + ], + "spans": [ + { + "bbox": [ + 143, + 257, + 245, + 358 + ], + "type": "image", + "image_path": "ae5e040b53d11107d21ac0a879235838cf924b9ab2b0f5caa0a1a792481f6568.jpg" + } + ] + } + ], + "index": 7, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 104, + 373, + 506, + 407 + ], + "lines": [ + { + "bbox": [ + 104, + 373, + 506, + 407 + ], + "spans": [ + { + "bbox": [ + 104, + 373, + 506, + 407 + ], + "type": "text", + "content": "Figure 10: Visualization of " + }, + { + "bbox": [ + 104, + 373, + 506, + 407 + ], + "type": "inline_equation", + "content": "(h_{\\psi},\\mu)" + }, + { + "bbox": [ + 104, + 373, + 506, + 407 + ], + "type": "text", + "content": "-ensemble diversity between all pairs of heads for DINO\\*PROB. 
The PROB strategy learns more diverse " + }, + { + "bbox": [ + 104, + 373, + 506, + 407 + ], + "type": "inline_equation", + "content": "(h_{\\psi},\\mu)" + }, + { + "bbox": [ + 104, + 373, + 506, + 407 + ], + "type": "text", + "content": "-ensembles than UNIF. DINO\\* with ViT-S/16 and 4 heads is used. Best viewed in color." + } + ] + } + ], + "index": 10, + "angle": 0, + "type": "image_caption" + } + ], + "index": 7 + }, + { + "type": "image", + "bbox": [ + 251, + 257, + 353, + 358 + ], + "blocks": [ + { + "bbox": [ + 251, + 257, + 353, + 358 + ], + "lines": [ + { + "bbox": [ + 251, + 257, + 353, + 358 + ], + "spans": [ + { + "bbox": [ + 251, + 257, + 353, + 358 + ], + "type": "image", + "image_path": "1375ae686351af0267608f622f6a018b95da8a0139150e0e7f5197326005e972.jpg" + } + ] + } + ], + "index": 8, + "angle": 0, + "type": "image_body" + } + ], + "index": 8 + }, + { + "type": "image", + "bbox": [ + 360, + 257, + 462, + 358 + ], + "blocks": [ + { + "bbox": [ + 360, + 257, + 462, + 358 + ], + "lines": [ + { + "bbox": [ + 360, + 257, + 462, + 358 + ], + "spans": [ + { + "bbox": [ + 360, + 257, + 462, + 358 + ], + "type": "image", + "image_path": "3ebf637ad44c2d4a824c4cab904b9560dcb9dba73522fc73344666c0257ba54f.jpg" + } + ] + } + ], + "index": 9, + "angle": 0, + "type": "image_body" + } + ], + "index": 9 + }, + { + "type": "image", + "bbox": [ + 142, + 429, + 245, + 550 + ], + "blocks": [ + { + "bbox": [ + 142, + 429, + 245, + 550 + ], + "lines": [ + { + "bbox": [ + 142, + 429, + 245, + 550 + ], + "spans": [ + { + "bbox": [ + 142, + 429, + 245, + 550 + ], + "type": "image", + "image_path": "151f38dca80bdbc22fd11b7f1783b360b52b7abe6e22abdf1102b673eb0e0f96.jpg" + } + ] + } + ], + "index": 11, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 164, + 557, + 222, + 567 + ], + "lines": [ + { + "bbox": [ + 164, + 557, + 222, + 567 + ], + "spans": [ + { + "bbox": [ + 164, + 557, + 222, + 567 + ], + "type": "text", + "content": "Head 2 - Head 3" + } + ] + } + ], + 
"index": 12, + "angle": 0, + "type": "image_caption" + } + ], + "index": 11 + }, + { + "type": "image", + "bbox": [ + 251, + 430, + 353, + 550 + ], + "blocks": [ + { + "bbox": [ + 251, + 430, + 353, + 550 + ], + "lines": [ + { + "bbox": [ + 251, + 430, + 353, + 550 + ], + "spans": [ + { + "bbox": [ + 251, + 430, + 353, + 550 + ], + "type": "image", + "image_path": "08efe47fa419970fcf1c422d4cc1d375935b1f0185e4068c9e706deec3c3fc1f.jpg" + } + ] + } + ], + "index": 13, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 273, + 557, + 331, + 567 + ], + "lines": [ + { + "bbox": [ + 273, + 557, + 331, + 567 + ], + "spans": [ + { + "bbox": [ + 273, + 557, + 331, + 567 + ], + "type": "text", + "content": "Head 2 - Head 4" + } + ] + } + ], + "index": 14, + "angle": 0, + "type": "image_caption" + } + ], + "index": 13 + }, + { + "type": "image", + "bbox": [ + 360, + 430, + 484, + 551 + ], + "blocks": [ + { + "bbox": [ + 360, + 430, + 484, + 551 + ], + "lines": [ + { + "bbox": [ + 360, + 430, + 484, + 551 + ], + "spans": [ + { + "bbox": [ + 360, + 430, + 484, + 551 + ], + "type": "image", + "image_path": "b36c8b5148138c8101d06cc95e49969bf5472dfab11ef4555ab5f67274ad87ce.jpg" + } + ] + } + ], + "index": 15, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 381, + 557, + 439, + 567 + ], + "lines": [ + { + "bbox": [ + 381, + 557, + 439, + 567 + ], + "spans": [ + { + "bbox": [ + 381, + 557, + 439, + 567 + ], + "type": "text", + "content": "Head 3 - Head 4" + } + ] + } + ], + "index": 16, + "angle": 0, + "type": "image_caption" + } + ], + "index": 15 + }, + { + "type": "image", + "bbox": [ + 143, + 574, + 245, + 676 + ], + "blocks": [ + { + "bbox": [ + 143, + 574, + 245, + 676 + ], + "lines": [ + { + "bbox": [ + 143, + 574, + 245, + 676 + ], + "spans": [ + { + "bbox": [ + 143, + 574, + 245, + 676 + ], + "type": "image", + "image_path": "99012c1cf59f4ca983add61d184cc261235a1f27e6126857cea6e0510e9fe7e8.jpg" + } + ] + } + ], + "index": 17, + "angle": 0, + "type": 
"image_body" + }, + { + "bbox": [ + 104, + 690, + 506, + 724 + ], + "lines": [ + { + "bbox": [ + 104, + 690, + 506, + 724 + ], + "spans": [ + { + "bbox": [ + 104, + 690, + 506, + 724 + ], + "type": "text", + "content": "Figure 11: Visualization of " + }, + { + "bbox": [ + 104, + 690, + 506, + 724 + ], + "type": "inline_equation", + "content": "(h_{\\psi},\\mu)" + }, + { + "bbox": [ + 104, + 690, + 506, + 724 + ], + "type": "text", + "content": "-ensemble diversity between all pairs of heads for DINO*-ENT. The ENT strategy learns the most diverse " + }, + { + "bbox": [ + 104, + 690, + 506, + 724 + ], + "type": "inline_equation", + "content": "(h_{\\psi},\\mu)" + }, + { + "bbox": [ + 104, + 690, + 506, + 724 + ], + "type": "text", + "content": "-ensembles. DINO* with ViT-S/16 and 4 heads is used. Best viewed in color." + } + ] + } + ], + "index": 20, + "angle": 0, + "type": "image_caption" + } + ], + "index": 17 + }, + { + "type": "image", + "bbox": [ + 251, + 574, + 353, + 676 + ], + "blocks": [ + { + "bbox": [ + 251, + 574, + 353, + 676 + ], + "lines": [ + { + "bbox": [ + 251, + 574, + 353, + 676 + ], + "spans": [ + { + "bbox": [ + 251, + 574, + 353, + 676 + ], + "type": "image", + "image_path": "bec4ce2bf9a81d5b0de88b89f5799727a134240fd9cf66377c3e800caf989a13.jpg" + } + ] + } + ], + "index": 18, + "angle": 0, + "type": "image_body" + } + ], + "index": 18 + }, + { + "type": "image", + "bbox": [ + 360, + 574, + 462, + 676 + ], + "blocks": [ + { + "bbox": [ + 360, + 574, + 462, + 676 + ], + "lines": [ + { + "bbox": [ + 360, + 574, + 462, + 676 + ], + "spans": [ + { + "bbox": [ + 360, + 574, + 462, + 676 + ], + "type": "image", + "image_path": "4151f52eebf1277b2204f9bf6d0a923d161832e267af427000acb645d8f04a57.jpg" + } + ] + } + ], + "index": 19, + "angle": 0, + "type": "image_body" + } + ], + "index": 19 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 26, + 293, + 38 
+ ], + "spans": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "type": "text", + "content": "Published as a conference paper at ICLR 2023" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 299, + 750, + 312, + 760 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 299, + 750, + 312, + 760 + ], + "spans": [ + { + "bbox": [ + 299, + 750, + 312, + 760 + ], + "type": "text", + "content": "26" + } + ] + } + ], + "index": 21 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 25 + }, + { + "para_blocks": [ + { + "bbox": [ + 105, + 81, + 182, + 94 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 81, + 182, + 94 + ], + "spans": [ + { + "bbox": [ + 105, + 81, + 182, + 94 + ], + "type": "text", + "content": "D ANALYSIS" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 105, + 107, + 194, + 118 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 107, + 194, + 118 + ], + "spans": [ + { + "bbox": [ + 105, + 107, + 194, + 118 + ], + "type": "text", + "content": "D.1 DERIVATIONS" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 104, + 128, + 504, + 150 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 128, + 504, + 150 + ], + "spans": [ + { + "bbox": [ + 104, + 128, + 504, + 150 + ], + "type": "text", + "content": "In this subsection, we provide derivations for some non-trivial losses that we explore within our framework." 
+ } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 105, + 156, + 340, + 168 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 156, + 340, + 168 + ], + "spans": [ + { + "bbox": [ + 105, + 156, + 340, + 168 + ], + "type": "text", + "content": "Recall that our weighted cross-entropy loss is of the form," + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 197, + 174, + 504, + 236 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 197, + 174, + 504, + 236 + ], + "spans": [ + { + "bbox": [ + 197, + 174, + 504, + 236 + ], + "type": "interline_equation", + "content": "\\begin{array}{l} \\mathcal {L} _ {n} (\\theta) = \\frac {1}{n} \\sum_ {x \\in \\mathcal {D} _ {n}} \\sum_ {i, j \\in [ m ]} H ^ {\\times} \\left[ w _ {i j Y} \\odot t _ {i} (Y | x), s (Y | \\theta_ {j}, x) \\right] (15) \\\\ = \\frac {1}{n} \\sum_ {x \\in \\mathcal {D} _ {n}} \\sum_ {i, j \\in [ m ]} \\sum_ {y \\in \\mathcal {Y}} w _ {i j y} t _ {i} (y | x) \\log s (y | \\theta_ {j}, x) (16) \\\\ \\end{array}", + "image_path": "df04c8a64c0aeba76e5992c254b9ae667d330149a83bc3f2a5433ba1129c4027.jpg" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 168, + 238, + 504, + 259 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 168, + 238, + 504, + 259 + ], + "spans": [ + { + "bbox": [ + 168, + 238, + 504, + 259 + ], + "type": "interline_equation", + "content": "w _ {i j y} = \\operatorname {s o f t m a x} \\left(\\left\\{\\frac {1}{\\gamma} f _ {i j y} (\\operatorname {s t o p g r a d} (\\theta), x): i, j \\in [ m ] \\right\\}\\right). 
\\tag {17}", + "image_path": "bcbc22dec89755456abad632c35d0d42887c70f0b86da02ade0753825bc702b0.jpg" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 105, + 270, + 215, + 281 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 270, + 215, + 281 + ], + "spans": [ + { + "bbox": [ + 105, + 270, + 215, + 281 + ], + "type": "text", + "content": "Furthermore, observe that," + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 117, + 287, + 504, + 317 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 117, + 287, + 504, + 317 + ], + "spans": [ + { + "bbox": [ + 117, + 287, + 504, + 317 + ], + "type": "interline_equation", + "content": "\\nabla_ {\\theta} \\sum_ {i, j \\in [ m ]} \\mathsf {H} ^ {\\times} \\left[ w _ {i j Y} \\odot t _ {i} (Y | x), s (Y | \\theta_ {j}, x) \\right] = \\sum_ {i, j \\in [ m ]} \\int_ {\\mathcal {Y}} w _ {i j y} t _ {i} (y | x) \\nabla_ {\\theta} \\log s (y | \\theta_ {j}, x) \\mathrm {d} y. \\tag {18}", + "image_path": "b4a1925d631d4e243e1ba45032b50963297460913589259bae22419db2b381f2.jpg" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 104, + 323, + 504, + 346 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 323, + 504, + 346 + ], + "spans": [ + { + "bbox": [ + 104, + 323, + 504, + 346 + ], + "type": "text", + "content": "This indicates that the proposed weighted ensemble SSL loss is simply a reweighted log-likelihood loss. We use this fact in our derivation of probability weighting (PROB) loss." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 104, + 357, + 504, + 383 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 357, + 504, + 383 + ], + "spans": [ + { + "bbox": [ + 104, + 357, + 504, + 383 + ], + "type": "text", + "content": "Uniform weighting (UNIF) Our UNIF strategy in Eq. 
(6) uses " + }, + { + "bbox": [ + 104, + 357, + 504, + 383 + ], + "type": "inline_equation", + "content": "f_{ijy} = \\log \\delta (i - j)" + }, + { + "bbox": [ + 104, + 357, + 504, + 383 + ], + "type": "text", + "content": " which gives " + }, + { + "bbox": [ + 104, + 357, + 504, + 383 + ], + "type": "inline_equation", + "content": "w_{ijy} = \\frac{1}{m}\\delta (i - j)" + }, + { + "bbox": [ + 104, + 357, + 504, + 383 + ], + "type": "text", + "content": " (for any choice of " + }, + { + "bbox": [ + 104, + 357, + 504, + 383 + ], + "type": "inline_equation", + "content": "\\gamma" + }, + { + "bbox": [ + 104, + 357, + 504, + 383 + ], + "type": "text", + "content": "), thus the loss," + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 176, + 388, + 504, + 451 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 176, + 388, + 504, + 451 + ], + "spans": [ + { + "bbox": [ + 176, + 388, + 504, + 451 + ], + "type": "interline_equation", + "content": "\\begin{array}{l} \\mathcal {L} _ {n} ^ {\\mathrm {U N I F}} (\\theta) = \\frac {1}{n} \\sum_ {x \\in \\mathcal {D} _ {n}} \\sum_ {i, j \\in [ m ]} \\sum_ {y \\in \\mathcal {Y}} \\frac {1}{m} \\delta (i - j) t _ {i} (y | x) \\log s (y | \\theta_ {j}, x) (19) \\\\ = \\frac {1}{n} \\sum_ {x \\in \\mathcal {D} _ {n}} \\frac {1}{m} \\sum_ {i \\in [ m ]} \\mathrm {H} ^ {\\times} \\left[ t _ {i} (Y | x), s (Y | \\theta_ {i}, x) \\right] (20) \\\\ \\end{array}", + "image_path": "c3d26d2d3214e1d60e2c8163796a1320d44fa99231ef492ec0f9fb215bfa8d41.jpg" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 104, + 456, + 393, + 468 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 456, + 393, + 468 + ], + "spans": [ + { + "bbox": [ + 104, + 456, + 393, + 468 + ], + "type": "text", + "content": "This loss assigns equal weights to " + }, + { + "bbox": [ + 104, + 456, + 393, + 468 + ], + "type": "inline_equation", + "content": "m" + }, + { + "bbox": [ + 104, + 456, + 393, + 468 + ], + 
"type": "text", + "content": " matched student/teacher pairs." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 104, + 473, + 506, + 498 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 473, + 506, + 498 + ], + "spans": [ + { + "bbox": [ + 104, + 473, + 506, + 498 + ], + "type": "text", + "content": "A straightforward generalization is to assign equal weights to all possible pairs " + }, + { + "bbox": [ + 104, + 473, + 506, + 498 + ], + "type": "inline_equation", + "content": "(m^2)" + }, + { + "bbox": [ + 104, + 473, + 506, + 498 + ], + "type": "text", + "content": " of student/teacher with " + }, + { + "bbox": [ + 104, + 473, + 506, + 498 + ], + "type": "inline_equation", + "content": "f_{ijy} = 0" + }, + { + "bbox": [ + 104, + 473, + 506, + 498 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 104, + 473, + 506, + 498 + ], + "type": "inline_equation", + "content": "w_{ijy} = \\frac{1}{m^2}" + }, + { + "bbox": [ + 104, + 473, + 506, + 498 + ], + "type": "text", + "content": ", which gives the UNIF-ALL loss," + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 181, + 502, + 504, + 533 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 181, + 502, + 504, + 533 + ], + "spans": [ + { + "bbox": [ + 181, + 502, + 504, + 533 + ], + "type": "interline_equation", + "content": "\\mathcal {L} _ {n} ^ {\\text {U N I F - A L L}} (\\theta) = \\frac {1}{n} \\sum_ {x \\in \\mathcal {D} _ {n}} \\frac {1}{m ^ {2}} \\sum_ {i, j \\in [ m ]} \\mathrm {H} ^ {\\times} \\left[ t _ {i} (Y | x), s (Y | \\theta_ {j}, x) \\right], \\tag {21}", + "image_path": "883f661a547ea7257918ad5e653de5ca813939eca91bb9068fc4b08363f9fbba.jpg" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 105, + 544, + 425, + 557 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 544, + 425, + 557 + ], + "spans": [ + { + "bbox": [ + 105, + 544, + 425, + 557 + ], + "type": "text", + "content": "Probability
weighting (PROB) Recall our PROB loss in Eq. (7) has the form," + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 171, + 563, + 504, + 601 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 171, + 563, + 504, + 601 + ], + "spans": [ + { + "bbox": [ + 171, + 563, + 504, + 601 + ], + "type": "interline_equation", + "content": "\\mathcal {L} _ {n} ^ {\\mathrm {P R O B}} (\\theta) = \\frac {1}{n} \\sum_ {x \\in \\mathcal {D} _ {n}} \\mathrm {H} ^ {\\times} \\left[ \\frac {1}{m} \\sum_ {i \\in [ m ]} t _ {i} (Y | x), \\frac {1}{m} \\sum_ {j \\in [ m ]} s (Y | \\theta_ {j}, x) \\right]. \\tag {22}", + "image_path": "74666ea0932ed664cd342954ff757c336333a8256762dbedfdf1d5eaf8b344c5.jpg" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 104, + 607, + 504, + 630 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 607, + 504, + 630 + ], + "spans": [ + { + "bbox": [ + 104, + 607, + 504, + 630 + ], + "type": "text", + "content": "We derive its equivalence with our general loss with " + }, + { + "bbox": [ + 104, + 607, + 504, + 630 + ], + "type": "inline_equation", + "content": "f_{ijy} = \\log s(y|\\theta_j,x)" + }, + { + "bbox": [ + 104, + 607, + 504, + 630 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 104, + 607, + 504, + 630 + ], + "type": "inline_equation", + "content": "\\gamma = 1" + }, + { + "bbox": [ + 104, + 607, + 504, + 630 + ], + "type": "text", + "content": " in terms of the gradients," + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 152, + 635, + 504, + 734 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 152, + 635, + 504, + 734 + ], + "spans": [ + { + "bbox": [ + 152, + 635, + 504, + 734 + ], + "type": "interline_equation", + "content": "\\begin{array}{l} \\nabla_ {\\theta} \\mathcal {L} _ {n} ^ {\\mathrm {P R O B}} (\\theta) = \\nabla_ {\\theta} \\frac {1}{m} \\sum_ {i \\in [ m ]} \\int_ {\\mathcal {Y}} t _ {i} (y | x) \\log \\frac {1}{m} \\sum_ {j \\in [ m ]}
s (y | \\theta_ {j}, x) \\mathrm {d} y (23) \\\\ = \\frac {1}{m} \\sum_ {i \\in [ m ]} \\int_ {\\mathcal {Y}} t _ {i} (y | x) \\nabla_ {\\theta} \\log \\frac {1}{m} \\sum_ {j \\in [ m ]} s (y | \\theta_ {j}, x) d y (24) \\\\ = \\frac {1}{m} \\sum_ {i \\in [ m ]} \\int_ {\\mathcal {Y}} t _ {i} (y | x) \\frac {\\frac {1}{m} \\sum_ {j \\in [ m ]} \\nabla_ {\\theta} s (y | \\theta_ {j} , x)}{\\frac {1}{m} \\sum_ {j \\in [ m ]} s (y | \\theta_ {j} , x)} d y (25) \\\\ \\end{array}", + "image_path": "a881c8a5ec70cf49ebd5af07bca5ea33555e582a5b8449da6dcefeacb0a39b7d.jpg" + } + ] + } + ], + "index": 18 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "spans": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "type": "text", + "content": "Published as a conference paper at ICLR 2023" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 299, + 750, + 311, + 761 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 299, + 750, + 311, + 761 + ], + "spans": [ + { + "bbox": [ + 299, + 750, + 311, + 761 + ], + "type": "text", + "content": "27" + } + ] + } + ], + "index": 19 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 26 + }, + { + "para_blocks": [ + { + "bbox": [ + 203, + 79, + 504, + 178 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 203, + 79, + 504, + 178 + ], + "spans": [ + { + "bbox": [ + 203, + 79, + 504, + 178 + ], + "type": "interline_equation", + "content": "\\begin{array}{l} = \\frac {1}{m} \\sum_ {i \\in [ m ]} \\int_ {\\mathcal {Y}} t _ {i} (y | x) \\frac {\\frac {1}{m} \\sum_ {j \\in [ m ]} s (y | \\theta_ {j} , x) \\nabla_ {\\theta} \\log s (y | \\theta_ {j} , x)}{\\frac {1}{m} \\sum_ {j ^ {\\prime} \\in [ m ]} s (y | \\theta_ {j ^ {\\prime}} , x)} d y (26) \\\\ = \\frac {1}{m} \\sum_ {i, j \\in [ m ]} \\int_ {\\mathcal {Y}} t _ {i} (y | x) \\frac {s (y | \\theta_ {j} , 
x)}{\\sum_ {j ^ {\\prime} \\in [ m ]} s (y | \\theta_ {j ^ {\\prime}} , x)} \\nabla_ {\\theta} \\log s (y | \\theta_ {j}, x) d y (27) \\\\ = \\nabla_ {\\theta} \\frac {1}{m} \\sum_ {i, j \\in [ m ]} \\mathsf {H} ^ {\\times} \\left[ w _ {i j Y} \\odot t _ {i} (Y | x), s (Y | \\theta_ {j}, x) \\right] (28) \\\\ \\end{array}", + "image_path": "f94eeb9a2453e0465d1823fea498f92cf7c4e8b632af50d6fe2e21053560234a.jpg" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 104, + 188, + 506, + 251 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 188, + 506, + 251 + ], + "spans": [ + { + "bbox": [ + 104, + 188, + 506, + 251 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 104, + 188, + 506, + 251 + ], + "type": "inline_equation", + "content": "w_{ijy} = \\frac{s(y|\\theta_j,x)}{\\sum_{j'\\in[m]}s(y|\\theta_{j'},x)}" + }, + { + "bbox": [ + 104, + 188, + 506, + 251 + ], + "type": "text", + "content": " (or equivalently, " + }, + { + "bbox": [ + 104, + 188, + 506, + 251 + ], + "type": "inline_equation", + "content": "f_{ijy} = \\log s(y|\\theta_j,x)" + }, + { + "bbox": [ + 104, + 188, + 506, + 251 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 104, + 188, + 506, + 251 + ], + "type": "inline_equation", + "content": "\\gamma = 1" + }, + { + "bbox": [ + 104, + 188, + 506, + 251 + ], + "type": "text", + "content": "). The last equality is because " + }, + { + "bbox": [ + 104, + 188, + 506, + 251 + ], + "type": "inline_equation", + "content": "w_{ijy}" + }, + { + "bbox": [ + 104, + 188, + 506, + 251 + ], + "type": "text", + "content": " has its gradient stopped with respect to " + }, + { + "bbox": [ + 104, + 188, + 506, + 251 + ], + "type": "inline_equation", + "content": "\\theta" + }, + { + "bbox": [ + 104, + 188, + 506, + 251 + ], + "type": "text", + "content": ". This is the same analysis as done in Burda et al. (2016).
The above formulation establishes the equivalence of the gradients of the two losses, which implies the same behavior (e.g., the same optima) under gradient-based optimization, as is common practice in deep learning." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 104, + 255, + 504, + 294 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 255, + 504, + 294 + ], + "spans": [ + { + "bbox": [ + 104, + 255, + 504, + 294 + ], + "type": "text", + "content": "We also generalize this loss to several variants, which we explore in Table 12. A \"dual\" variant is to use teacher predictions " + }, + { + "bbox": [ + 104, + 255, + 504, + 294 + ], + "type": "inline_equation", + "content": "f_{ijy} = \\log t_i(y|x)" + }, + { + "bbox": [ + 104, + 255, + 504, + 294 + ], + "type": "text", + "content": " instead of student ones; this implies " + }, + { + "bbox": [ + 104, + 255, + 504, + 294 + ], + "type": "inline_equation", + "content": "w_{ijy} = \\frac{t_i(y|x)}{\\sum_{i' \\in [m]} t_{i'}(y|x)}" + }, + { + "bbox": [ + 104, + 255, + 504, + 294 + ], + "type": "text", + "content": " and the PROB-TE loss," + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 156, + 300, + 504, + 332 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 156, + 300, + 504, + 332 + ], + "spans": [ + { + "bbox": [ + 156, + 300, + 504, + 332 + ], + "type": "interline_equation", + "content": "\\mathcal {L} _ {n} ^ {\\mathrm {P R O B - T E}} (\\theta) = \\frac {1}{n} \\sum_ {x \\in \\mathcal {D} _ {n}} \\sum_ {i, j \\in [ m ]} \\sum_ {y \\in \\mathcal {Y}} \\frac {t _ {i} (y | x)}{\\sum_ {i ^ {\\prime} \\in [ m ]} t _ {i ^ {\\prime}} (y | x)} t _ {i} (y | x) \\log s (y | \\theta_ {j}, x).
\\tag {29}", + "image_path": "bf01620ed0ce2a99cb794df464cdf621d58742c507c162938e600eb4dc1bbea0.jpg" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 104, + 340, + 504, + 369 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 340, + 504, + 369 + ], + "spans": [ + { + "bbox": [ + 104, + 340, + 504, + 369 + ], + "type": "text", + "content": "Note that this simply reduces to using the weighted teacher prediction " + }, + { + "bbox": [ + 104, + 340, + 504, + 369 + ], + "type": "inline_equation", + "content": "\\frac{t_i(y|x)}{\\sum_{i' \\in [m]} t_{i'}(y|x)} t_i(y|x)" + }, + { + "bbox": [ + 104, + 340, + 504, + 369 + ], + "type": "text", + "content": " as a surrogate target shared across all students." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 104, + 373, + 504, + 397 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 373, + 504, + 397 + ], + "spans": [ + { + "bbox": [ + 104, + 373, + 504, + 397 + ], + "type": "text", + "content": "Another generalization is to use \"hard\" weighting, i.e., " + }, + { + "bbox": [ + 104, + 373, + 504, + 397 + ], + "type": "inline_equation", + "content": "\\gamma \\rightarrow 0" + }, + { + "bbox": [ + 104, + 373, + 504, + 397 + ], + "type": "text", + "content": ", which gives the PROB-MAX loss that only assigns weight to the most confident student," + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 158, + 403, + 504, + 434 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 158, + 403, + 504, + 434 + ], + "spans": [ + { + "bbox": [ + 158, + 403, + 504, + 434 + ], + "type": "interline_equation", + "content": "\\mathcal {L} _ {n} ^ {\\mathrm {P R O B - M A X}} (\\theta) = \\frac {1}{n} \\sum_ {x \\in \\mathcal {D} _ {n}} \\sum_ {i, j \\in [ m ]} \\sum_ {y \\in \\mathcal {Y}} w _ {i j y} t _ {i} (y | x) \\log s (y | \\theta_ {j}, x) \\tag {30}", + "image_path": "1d10373a9689868e14732e3718b6282ec5d8ef123b75ea10a437d80d63a22507.jpg" + } + ] +
} + ], + "index": 7 + }, + { + "bbox": [ + 160, + 435, + 504, + 454 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 160, + 435, + 504, + 454 + ], + "spans": [ + { + "bbox": [ + 160, + 435, + 504, + 454 + ], + "type": "interline_equation", + "content": "w _ {i j y} = \\delta \\left(i - i ^ {*}\\right) \\delta \\left(j - j ^ {*}\\right), \\quad \\left(i ^ {*}, j ^ {*}\\right) = \\arg \\max _ {i j} f _ {i j y}, \\forall y \\in \\mathcal {Y}. \\tag {31}", + "image_path": "c0e8f14a982bdd943d568bd0c940f7ef8c3b83c293c408e2e5757fd626f9f0d6.jpg" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 104, + 461, + 504, + 495 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 461, + 504, + 495 + ], + "spans": [ + { + "bbox": [ + 104, + 461, + 504, + 495 + ], + "type": "text", + "content": "This loss reduces to a generalization of multiple choice learning (Guzman-Rivera et al., 2012) used in multi-headed networks (Lee et al., 2015) in our ensemble SSL setup. Similarly, we can also derive its dual variant that uses the teacher predictions, which we omit here for brevity." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 104, + 508, + 504, + 545 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 508, + 504, + 545 + ], + "spans": [ + { + "bbox": [ + 104, + 508, + 504, + 545 + ], + "type": "text", + "content": "Entropy weighting (ENT) The derivation of the ENT loss in Eq. (9) is similar to that of the UNIF loss but applies entropy weights.
Recall that we use " + }, + { + "bbox": [ + 104, + 508, + 504, + 545 + ], + "type": "inline_equation", + "content": "f_{ijy} = -\\mathsf{H}[t_i(Y|x)] + \\log \\delta (i - j)" + }, + { + "bbox": [ + 104, + 508, + 504, + 545 + ], + "type": "text", + "content": ", which gives " + }, + { + "bbox": [ + 104, + 508, + 504, + 545 + ], + "type": "inline_equation", + "content": "w_{ijy} = \\mathrm{softmax}_i(\\{-\\frac{1}{\\gamma}\\mathsf{H}[t_{i'}(Y|x)]:i'\\in [m]\\})" + }, + { + "bbox": [ + 104, + 508, + 504, + 545 + ], + "type": "text", + "content": " and," + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 113, + 552, + 504, + 583 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 113, + 552, + 504, + 583 + ], + "spans": [ + { + "bbox": [ + 113, + 552, + 504, + 583 + ], + "type": "interline_equation", + "content": "\\mathcal {L} _ {n} ^ {\\mathrm {E N T}} (\\theta) = \\frac {1}{n} \\sum_ {x \\in \\mathcal {D} _ {n}} \\sum_ {i \\in [ m ]} \\operatorname {s o f t m a x} _ {i} \\left(\\left\\{- \\frac {1}{\\gamma} \\mathrm {H} \\left[ t _ {i ^ {\\prime}} (Y | x) \\right]: i ^ {\\prime} \\in [ m ] \\right\\}\\right) \\mathrm {H} ^ {\\times} \\left[ t _ {i} (Y | x), s (Y | \\theta_ {i}, x) \\right].
\\tag {32}", + "image_path": "8633d4fa8b0ca6014a9763c5ed910c95ae54d2751aaba5cc6b506abda3cd305b.jpg" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 104, + 590, + 504, + 615 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 590, + 504, + 615 + ], + "spans": [ + { + "bbox": [ + 104, + 590, + 504, + 615 + ], + "type": "text", + "content": "One can also generalize it to its dual variant, which uses the student entropies, i.e., " + }, + { + "bbox": [ + 104, + 590, + 504, + 615 + ], + "type": "inline_equation", + "content": "f_{ijy} = -\\mathsf{H}[s(Y|\\theta_j,x)] + \\log \\delta (i - j)" + }, + { + "bbox": [ + 104, + 590, + 504, + 615 + ], + "type": "text", + "content": ", which gives the ENT-ST loss," + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 111, + 620, + 504, + 662 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 111, + 620, + 504, + 662 + ], + "spans": [ + { + "bbox": [ + 111, + 620, + 504, + 662 + ], + "type": "interline_equation", + "content": "\\mathcal {L} _ {n} ^ {\\mathrm {E N T - S T}} (\\theta) = \\frac {1}{n} \\sum_ {x \\in \\mathcal {D} _ {n}} \\sum_ {i \\in [ m ]} \\operatorname {s o f t m a x} _ {i} \\left(\\left\\{- \\frac {1}{\\gamma} \\mathrm {H} [ s (Y | \\theta_ {i ^ {\\prime}}, x) ]: i ^ {\\prime} \\in [ m ] \\right\\}\\right) \\mathrm {H} ^ {\\times} \\left[ t _ {i} (Y | x), s (Y | \\theta_ {i}, x) \\right].
\\tag {33}", + "image_path": "36072b890579dd42958333fc4e11990091b8d57ece73e653d4a6ba2e314c7a39.jpg" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 105, + 677, + 239, + 689 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 677, + 239, + 689 + ], + "spans": [ + { + "bbox": [ + 105, + 677, + 239, + 689 + ], + "type": "text", + "content": "D.2 RELATING SOME LOSSES" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 104, + 698, + 506, + 733 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 698, + 506, + 733 + ], + "spans": [ + { + "bbox": [ + 104, + 698, + 506, + 733 + ], + "type": "text", + "content": "Here, we relate some losses derived above. Specifically, we relate the uniform weighting (UNIF, UNIF-ALL) and probability weighting (PROB) in Appx. D.2.1, and relate entropy weighting (ENT) and variance weighting in Appx. D.2.2." + } + ] + } + ], + "index": 15 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "spans": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "type": "text", + "content": "Published as a conference paper at ICLR 2023" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 299, + 750, + 311, + 761 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 299, + 750, + 311, + 761 + ], + "spans": [ + { + "bbox": [ + 299, + 750, + 311, + 761 + ], + "type": "text", + "content": "28" + } + ] + } + ], + "index": 16 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 27 + }, + { + "para_blocks": [ + { + "bbox": [ + 105, + 82, + 311, + 94 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 82, + 311, + 94 + ], + "spans": [ + { + "bbox": [ + 105, + 82, + 311, + 94 + ], + "type": "text", + "content": "D.2.1 UNIFORM & PROBABILITY WEIGHTING" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 104, + 101, + 506, + 168 + ], + "type": 
"text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 101, + 506, + 168 + ], + "spans": [ + { + "bbox": [ + 104, + 101, + 506, + 168 + ], + "type": "text", + "content": "We first establish the relation between UNIF and PROB using the joint convexity of unnormalized KL divergence and the fact that our weighted cross-entropy loss is a weighted unnormalized KL divergence up to some constant in " + }, + { + "bbox": [ + 104, + 101, + 506, + 168 + ], + "type": "inline_equation", + "content": "\\theta" + }, + { + "bbox": [ + 104, + 101, + 506, + 168 + ], + "type": "text", + "content": ". In particular, the joint convexity of unnormalized KL divergence can be shown by combining the facts that Csiszár " + }, + { + "bbox": [ + 104, + 101, + 506, + 168 + ], + "type": "inline_equation", + "content": "f" + }, + { + "bbox": [ + 104, + 101, + 506, + 168 + ], + "type": "text", + "content": "-divergences are jointly convex (Proposition 1 in Dragomir (2013)) and unnormalized KL divergence corresponds to the convex generator, " + }, + { + "bbox": [ + 104, + 101, + 506, + 168 + ], + "type": "inline_equation", + "content": "f(u) = u\\log u - u + 1" + }, + { + "bbox": [ + 104, + 101, + 506, + 168 + ], + "type": "text", + "content": ", as required by the proposition." 
+ } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 104, + 173, + 489, + 185 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 173, + 489, + 185 + ], + "spans": [ + { + "bbox": [ + 104, + 173, + 489, + 185 + ], + "type": "text", + "content": "First, our weighted cross-entropy loss is unnormalized KL divergence up to some constant in " + }, + { + "bbox": [ + 104, + 173, + 489, + 185 + ], + "type": "inline_equation", + "content": "\\theta" + }, + { + "bbox": [ + 104, + 173, + 489, + 185 + ], + "type": "text", + "content": ":" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 155, + 190, + 504, + 220 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 155, + 190, + 504, + 220 + ], + "spans": [ + { + "bbox": [ + 155, + 190, + 504, + 220 + ], + "type": "interline_equation", + "content": "\\mathcal {L} _ {n} ^ {\\mathrm {U N I F}} (\\theta) = \\frac {1}{n} \\sum_ {x \\in \\mathcal {D} _ {n}} \\frac {1}{m} \\sum_ {i \\in [ m ]} \\mathrm {K} \\left[ t _ {i} (Y | x), s \\left(Y \\mid \\theta_ {i}, x\\right) \\right] + \\text {c o n s t a n t} \\tag {34}", + "image_path": "22d8e6f255374e3f1813c4b1394a2ca20498fe0dab831dd2ae452b82b3d9baab.jpg" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 155, + 223, + 504, + 262 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 155, + 223, + 504, + 262 + ], + "spans": [ + { + "bbox": [ + 155, + 223, + 504, + 262 + ], + "type": "interline_equation", + "content": "\\mathcal {L} _ {n} ^ {\\mathrm {P R O B}} (\\theta) = \\frac {1}{n} \\sum_ {x \\in \\mathcal {D} _ {n}} K \\left[ \\frac {1}{m} \\sum_ {i \\in [ m ]} t _ {i} (Y | x), \\frac {1}{m} \\sum_ {j \\in [ m ]} s (Y | \\theta_ {j}, x) \\right] + \\text {c o n s t a n t} \\tag {35}", + "image_path": "987560568239edd2f9ff0d12d041d285983a18e2714b24d25b31c8a6c0d22afd.jpg" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 104, + 266, + 504, + 289 + ], + "type": "text", + "angle": 0, + "lines": [ + 
{ + "bbox": [ + 104, + 266, + 504, + 289 + ], + "spans": [ + { + "bbox": [ + 104, + 266, + 504, + 289 + ], + "type": "text", + "content": "Therefore, the joint convexity of (unnormalized) KL divergence directly implies an ordering of the losses up to some constant in " + }, + { + "bbox": [ + 104, + 266, + 504, + 289 + ], + "type": "inline_equation", + "content": "\\theta" + }, + { + "bbox": [ + 104, + 266, + 504, + 289 + ], + "type": "text", + "content": ", i.e.," + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 273, + 294, + 504, + 308 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 273, + 294, + 504, + 308 + ], + "spans": [ + { + "bbox": [ + 273, + 294, + 504, + 308 + ], + "type": "interline_equation", + "content": "\\mathcal {L} _ {n} ^ {\\mathrm {P R O B}} \\leq \\mathcal {L} _ {n} ^ {\\mathrm {U N I F}} \\tag {36}", + "image_path": "dea4103539725dcf2b78d359d63341d0601cc0b843aec1926f0f1e365216b5ae.jpg" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 104, + 314, + 506, + 347 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 314, + 506, + 347 + ], + "spans": [ + { + "bbox": [ + 104, + 314, + 506, + 347 + ], + "type": "text", + "content": "Furthermore, we can also relate PROB and UNIF-ALL using the fact that the (unnormalized) cross-entropy " + }, + { + "bbox": [ + 104, + 314, + 506, + 347 + ], + "type": "inline_equation", + "content": "\\mathsf{H}^{\\times}[p(X), q(X)]" + }, + { + "bbox": [ + 104, + 314, + 506, + 347 + ], + "type": "text", + "content": " is linear in the first argument " + }, + { + "bbox": [ + 104, + 314, + 506, + 347 + ], + "type": "inline_equation", + "content": "p" + }, + { + "bbox": [ + 104, + 314, + 506, + 347 + ], + "type": "text", + "content": " but convex in the second argument " + }, + { + "bbox": [ + 104, + 314, + 506, + 347 + ], + "type": "inline_equation", + "content": "q" + }, + { + "bbox": [ + 104, + 314, + 506, + 347 + ], + "type": "text", + "content": ", which implies,"
+ } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 265, + 353, + 504, + 368 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 265, + 353, + 504, + 368 + ], + "spans": [ + { + "bbox": [ + 265, + 353, + 504, + 368 + ], + "type": "interline_equation", + "content": "\\mathcal {L} _ {n} ^ {\\text {P R O B}} \\leq \\mathcal {L} _ {n} ^ {\\text {U N I F - A L L}} \\tag {37}", + "image_path": "40c1f06f6cb8136fd1b96a93530c5f88114226da212746cfc1e71ba7e123a1ca.jpg" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 105, + 378, + 297, + 389 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 378, + 297, + 389 + ], + "spans": [ + { + "bbox": [ + 105, + 378, + 297, + 389 + ], + "type": "text", + "content": "D.2.2 ENTROPY & VARIANCE WEIGHTING" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 104, + 397, + 453, + 411 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 397, + 453, + 411 + ], + "spans": [ + { + "bbox": [ + 104, + 397, + 453, + 411 + ], + "type": "text", + "content": "Suppose " + }, + { + "bbox": [ + 104, + 397, + 453, + 411 + ], + "type": "inline_equation", + "content": "p(X)" + }, + { + "bbox": [ + 104, + 397, + 453, + 411 + ], + "type": "text", + "content": " is a discrete distribution (normalized) on " + }, + { + "bbox": [ + 104, + 397, + 453, + 411 + ], + "type": "inline_equation", + "content": "\\mathcal{X} = [c]" + }, + { + "bbox": [ + 104, + 397, + 453, + 411 + ], + "type": "text", + "content": ". 
It can be shown that," + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 205, + 415, + 504, + 430 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 205, + 415, + 504, + 430 + ], + "spans": [ + { + "bbox": [ + 205, + 415, + 504, + 430 + ], + "type": "interline_equation", + "content": "\\mathsf {H} [ p (X) ] \\leq \\frac {1}{2} \\log \\left(\\operatorname {V a r} _ {p} [ X ] + \\frac {1}{1 2}\\right) + \\frac {1}{2} \\log (2 \\pi e) \\tag {38}", + "image_path": "25614e7c479ac2433d5a9e25186b2a1a26a8e9bd789321bc40b42e5e6f4c839c.jpg" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 104, + 435, + 506, + 499 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 435, + 506, + 499 + ], + "spans": [ + { + "bbox": [ + 104, + 435, + 506, + 499 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 104, + 435, + 506, + 499 + ], + "type": "inline_equation", + "content": "\\operatorname{Var}_p[X] = \\sum_{x \\in [c]} p(x)(x - \\mu)^2" + }, + { + "bbox": [ + 104, + 435, + 506, + 499 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 104, + 435, + 506, + 499 + ], + "type": "inline_equation", + "content": "\\mu = \\mathsf{E}_p[X] = \\sum_{x \\in [c]} p(x)x" + }, + { + "bbox": [ + 104, + 435, + 506, + 499 + ], + "type": "text", + "content": " (Theorem 9.7.1, Cover & Thomas (1999)). Note, a tighter bound (Mow, 1998) also exists but it places stronger restrictions on " + }, + { + "bbox": [ + 104, + 435, + 506, + 499 + ], + "type": "inline_equation", + "content": "p" + }, + { + "bbox": [ + 104, + 435, + 506, + 499 + ], + "type": "text", + "content": ". 
This relationship suggests that choosing weights proportional to " + }, + { + "bbox": [ + 104, + 435, + 506, + 499 + ], + "type": "inline_equation", + "content": "\\exp(-\\mathsf{H}[t_i(Y|x)])" + }, + { + "bbox": [ + 104, + 435, + 506, + 499 + ], + "type": "text", + "content": " (as in ENT) is potentially related to choosing weights proportional to " + }, + { + "bbox": [ + 104, + 435, + 506, + 499 + ], + "type": "inline_equation", + "content": "(\\operatorname{Var}_{t_i(Y|x)}[Y] + \\epsilon)^{-1/2}" + }, + { + "bbox": [ + 104, + 435, + 506, + 499 + ], + "type": "text", + "content": " where " + }, + { + "bbox": [ + 104, + 435, + 506, + 499 + ], + "type": "inline_equation", + "content": "(\\epsilon = \\frac{1}{12})" + }, + { + "bbox": [ + 104, + 435, + 506, + 499 + ], + "type": "text", + "content": "." + } + ] + } + ], + "index": 13 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "spans": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "type": "text", + "content": "Published as a conference paper at ICLR 2023" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 299, + 750, + 311, + 761 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 299, + 750, + 311, + 761 + ], + "spans": [ + { + "bbox": [ + 299, + 750, + 311, + 761 + ], + "type": "text", + "content": "29" + } + ] + } + ], + "index": 14 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 28 + } + ], + "_backend": "vlm", + "_version_name": "2.6.4" +} \ No newline at end of file