diff --git "a/2024/A Hierarchical Bayesian Model for Few-Shot Meta Learning/layout.json" "b/2024/A Hierarchical Bayesian Model for Few-Shot Meta Learning/layout.json" new file mode 100644--- /dev/null +++ "b/2024/A Hierarchical Bayesian Model for Few-Shot Meta Learning/layout.json" @@ -0,0 +1,29130 @@ +{ + "pdf_info": [ + { + "para_blocks": [ + { + "bbox": [ + 105, + 79, + 504, + 116 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 79, + 504, + 116 + ], + "spans": [ + { + "bbox": [ + 105, + 79, + 504, + 116 + ], + "type": "text", + "content": "A HIERARCHICAL BAYESIAN MODEL FOR FEW-SHOT META LEARNING" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 111, + 133, + 315, + 146 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 111, + 133, + 315, + 146 + ], + "spans": [ + { + "bbox": [ + 111, + 133, + 315, + 146 + ], + "type": "text", + "content": "Minyoung Kim1 & Timothy M. Hospedales1,2" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 112, + 147, + 264, + 169 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 112, + 147, + 264, + 169 + ], + "spans": [ + { + "bbox": [ + 112, + 147, + 264, + 169 + ], + "type": "inline_equation", + "content": "^{1}" + }, + { + "bbox": [ + 112, + 147, + 264, + 169 + ], + "type": "text", + "content": "Samsung AI Center Cambridge, UK \nmikim21@gmail.com" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 293, + 146, + 423, + 169 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 293, + 146, + 423, + 169 + ], + "spans": [ + { + "bbox": [ + 293, + 146, + 423, + 169 + ], + "type": "inline_equation", + "content": "^{2}" + }, + { + "bbox": [ + 293, + 146, + 423, + 169 + ], + "type": "text", + "content": "University of Edinburgh, UK t.hospedales@ed.ac.uk" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 276, + 198, + 335, + 209 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 276, + 198, + 335, + 209 + ], + "spans": [ + { + "bbox": [ + 
276, + 198, + 335, + 209 + ], + "type": "text", + "content": "ABSTRACT" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 140, + 213, + 471, + 411 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 140, + 213, + 471, + 411 + ], + "spans": [ + { + "bbox": [ + 140, + 213, + 471, + 411 + ], + "type": "text", + "content": "We propose a novel model parametrisation and inference algorithm in a hierarchical Bayesian model for the few-shot meta learning problem. We consider episode-wise random variables to model episode-specific generative processes, where these local random variables are governed by a higher-level global random variable. The global variable captures information shared across episodes, while controlling how much the model needs to be adapted to new episodes in a principled Bayesian manner. Within our framework, prediction on a novel episode/task can be seen as a Bayesian inference problem. For tractable training, we need to be able to relate each local episode-specific solution to the global higher-level parameters. We propose a Normal-Inverse-Wishart model, for which establishing this local-global relationship becomes feasible due to the approximate closed-form solutions for the local posterior distributions. The resulting algorithm is more attractive than MAML in that it does not maintain a costly computational graph for the sequence of gradient descent steps in an episode. Our approach is also different from existing Bayesian meta learning methods in that rather than modeling a single random variable for all episodes, it leverages a hierarchical structure that exploits the local-global relationships desirable for principled Bayesian learning with many related tasks."
+ } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 105, + 429, + 206, + 441 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 429, + 206, + 441 + ], + "spans": [ + { + "bbox": [ + 105, + 429, + 206, + 441 + ], + "type": "text", + "content": "1 INTRODUCTION" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 104, + 445, + 506, + 556 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 445, + 506, + 556 + ], + "spans": [ + { + "bbox": [ + 104, + 445, + 506, + 556 + ], + "type": "text", + "content": "Few-shot learning (FSL) aims to emulate the human ability to learn from few examples (Lake et al., 2015). It has received substantial and growing interest (Wang et al., 2020b) due to the need to alleviate the notoriously data-intensive nature of mainstream supervised deep learning. Approaches to FSL are all based on some kind of knowledge transfer from a set of plentiful source recognition problems to the sparse data target problem of interest. Existing approaches are differentiated in terms of the assumptions they make about what is task-agnostic knowledge that can be transferred from the source tasks, and what is task-specific knowledge that should be learned from the sparse target examples. For example, the seminal MAML (Finn et al., 2017) and ProtoNets (Snell et al., 2017) respectively assume that the initialisation for fine-tuning, or the feature extractor for metric-based recognition, should be transferred from source categories." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 104, + 561, + 506, + 651 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 561, + 506, + 651 + ], + "spans": [ + { + "bbox": [ + 104, + 561, + 506, + 651 + ], + "type": "text", + "content": "One of the most principled and systematic ways to model such sets of related problems is the hierarchical Bayesian model (HBM) (Gelman et al., 2003). 
The HBM paradigm is widely used in statistics, but has seen relatively little use in deep learning, due to the technical difficulty of bringing hierarchical Bayesian modelling to bear on deep learning. HBMs provide a powerful way to model a set of related problems, by assuming that each problem has its own parameters (e.g., the neural networks that recognise cat vs dog, or car vs bike), but that those problems share a common prior (the prior over such neural networks). Data-efficient learning of the target tasks is then achieved by inferring the prior based on source tasks, and using it to enhance posterior learning over the target task parameters." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 104, + 654, + 506, + 733 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 654, + 506, + 733 + ], + "spans": [ + { + "bbox": [ + 104, + 654, + 506, + 733 + ], + "type": "text", + "content": "A Bayesian learning treatment of FSL would be appealing due to the overfitting resistance provided by Bayesian Occam's razor (MacKay, 2003), as well as the ability to improve calibration of inference so that the model's confidence is reflective of its probability of correctness, a crucial property in mission-critical applications (Guo et al., 2017). However, the limited attempts that have been made to exploit these tools in deep learning have either been incomplete treatments that only model a single Bayesian layer within the neural network (Zhang et al., 2021; Gordon et al., 2019), or else fail to scale up to modern neural architectures (Finn et al., 2018; Yoon et al., 2018)." 
+ } + ] + } + ], + "index": 10 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "spans": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "type": "text", + "content": "Published as a conference paper at ICLR 2024" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 302, + 751, + 309, + 760 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 751, + 309, + 760 + ], + "spans": [ + { + "bbox": [ + 302, + 751, + 309, + 760 + ], + "type": "text", + "content": "1" + } + ] + } + ], + "index": 11 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 0 + }, + { + "para_blocks": [ + { + "bbox": [ + 104, + 82, + 506, + 182 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 82, + 506, + 182 + ], + "spans": [ + { + "bbox": [ + 104, + 82, + 506, + 182 + ], + "type": "text", + "content": "In this paper we present the first complete hierarchical Bayesian learning algorithm for few-shot deep learning. Our algorithm efficiently learns a prior over neural networks during the meta-train phase, and efficiently learns a posterior neural network during each meta-test episode. Importantly, our learning is architecture independent. It can scale up to the state-of-the-art backbones including ViTs (Dosovitskiy et al., 2021), and works smoothly with any few-shot learning architecture - spanning simple linear decoders (Finn et al., 2017; Snell et al., 2017), to those based on sophisticated set-based decoders such as FEAT (Ye et al., 2020) and CNP(Garnelo et al., 2018)/ANP(Kim et al., 2019). We show empirically that our HBM provides improved performance and calibration in all of these cases, as well as providing clear theoretical justification." 
+ } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 104, + 186, + 506, + 308 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 186, + 506, + 308 + ], + "spans": [ + { + "bbox": [ + 104, + 186, + 506, + 308 + ], + "type": "text", + "content": "Our analysis also reveals novel links between seminal FSL methods such as ProtoNet (Snell et al., 2017), MAML (Finn et al., 2017), and Reptile (Nichol et al., 2018), all of which are different special cases of our framework despite their very different appearance. Interestingly, despite its close relatedness to MAML-family algorithms, our Bayesian learner admits an efficient closed-form solution to the task-specific and task-agnostic updates that does not require maintaining the computational graph for reverse-mode backpropagation. This provides a novel solution to a famous meta-learning scalability bottleneck. In summary, our contributions include: (i) The first complete hierarchical Bayesian treatment of the few-shot deep learning problem, and associated theoretical justification. (ii) An efficient algorithmic learning solution that can scale up to modern architectures, and plug into most existing neural FSL meta-learners. (iii) Empirical results demonstrating improved accuracy and calibration performance on both classification and regression benchmarks." 
+ } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 105, + 323, + 214, + 335 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 323, + 214, + 335 + ], + "spans": [ + { + "bbox": [ + 105, + 323, + 214, + 335 + ], + "type": "text", + "content": "2 PROBLEM SETUP" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 104, + 342, + 506, + 443 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 342, + 506, + 443 + ], + "spans": [ + { + "bbox": [ + 104, + 342, + 506, + 443 + ], + "type": "text", + "content": "Let " + }, + { + "bbox": [ + 104, + 342, + 506, + 443 + ], + "type": "inline_equation", + "content": "p(\\mathcal{T})" + }, + { + "bbox": [ + 104, + 342, + 506, + 443 + ], + "type": "text", + "content": " be the (unknown) task/episode distribution, where each task " + }, + { + "bbox": [ + 104, + 342, + 506, + 443 + ], + "type": "inline_equation", + "content": "\\mathcal{T} \\sim p(\\mathcal{T})" + }, + { + "bbox": [ + 104, + 342, + 506, + 443 + ], + "type": "text", + "content": " is defined as a distribution " + }, + { + "bbox": [ + 104, + 342, + 506, + 443 + ], + "type": "inline_equation", + "content": "p_{\\mathcal{T}}(x,y)" + }, + { + "bbox": [ + 104, + 342, + 506, + 443 + ], + "type": "text", + "content": " for data " + }, + { + "bbox": [ + 104, + 342, + 506, + 443 + ], + "type": "inline_equation", + "content": "(x,y)" + }, + { + "bbox": [ + 104, + 342, + 506, + 443 + ], + "type": "text", + "content": " where " + }, + { + "bbox": [ + 104, + 342, + 506, + 443 + ], + "type": "inline_equation", + "content": "x" + }, + { + "bbox": [ + 104, + 342, + 506, + 443 + ], + "type": "text", + "content": " is input and " + }, + { + "bbox": [ + 104, + 342, + 506, + 443 + ], + "type": "inline_equation", + "content": "y" + }, + { + "bbox": [ + 104, + 342, + 506, + 443 + ], + "type": "text", + "content": " is target. 
For training, we have a large number of episodes, " + }, + { + "bbox": [ + 104, + 342, + 506, + 443 + ], + "type": "inline_equation", + "content": "\mathcal{T}_1, \mathcal{T}_2, \ldots, \mathcal{T}_N \sim p(\mathcal{T})" + }, + { + "bbox": [ + 104, + 342, + 506, + 443 + ], + "type": "text", + "content": " sampled i.i.d., but we only observe a small number of labeled samples from each episode, denoted by " + }, + { + "bbox": [ + 104, + 342, + 506, + 443 + ], + "type": "inline_equation", + "content": "D_i = \{(x_j^i, y_j^i)\}_{j=1}^{n_i} \sim p_{\mathcal{T}_i}(x,y)" + }, + { + "bbox": [ + 104, + 342, + 506, + 443 + ], + "type": "text", + "content": ", where " + }, + { + "bbox": [ + 104, + 342, + 506, + 443 + ], + "type": "inline_equation", + "content": "n_i = |D_i|" + }, + { + "bbox": [ + 104, + 342, + 506, + 443 + ], + "type": "text", + "content": " is the number of samples in " + }, + { + "bbox": [ + 104, + 342, + 506, + 443 + ], + "type": "inline_equation", + "content": "D_i" + }, + { + "bbox": [ + 104, + 342, + 506, + 443 + ], + "type": "text", + "content": ". The goal of the learner, after observing the training data " + }, + { + "bbox": [ + 104, + 342, + 506, + 443 + ], + "type": "inline_equation", + "content": "D_1, \ldots, D_N" + }, + { + "bbox": [ + 104, + 342, + 506, + 443 + ], + "type": "text", + "content": " from a large number of different tasks, is to build a predictor " + }, + { + "bbox": [ + 104, + 342, + 506, + 443 + ], + "type": "inline_equation", + "content": "p^*(y|x)" + }, + { + "bbox": [ + 104, + 342, + 506, + 443 + ], + "type": "text", + "content": " for novel unseen tasks " + }, + { + "bbox": [ + 104, + 342, + 506, + 443 + ], + "type": "inline_equation", + "content": "\mathcal{T}^* \sim p(\mathcal{T})" + }, + { + "bbox": [ + 104, + 342, + 506, + 443 + ], + "type": "text", + "content": ". 
We will often abuse notation, e.g., " + }, + { + "bbox": [ + 104, + 342, + 506, + 443 + ], + "type": "inline_equation", + "content": "i \sim \mathcal{T}" + }, + { + "bbox": [ + 104, + 342, + 506, + 443 + ], + "type": "text", + "content": " refers to the episode " + }, + { + "bbox": [ + 104, + 342, + 506, + 443 + ], + "type": "inline_equation", + "content": "i" + }, + { + "bbox": [ + 104, + 342, + 506, + 443 + ], + "type": "text", + "content": " sampled, i.e., " + }, + { + "bbox": [ + 104, + 342, + 506, + 443 + ], + "type": "inline_equation", + "content": "D_i \sim p_{\mathcal{T}_i}(x,y)" + }, + { + "bbox": [ + 104, + 342, + 506, + 443 + ], + "type": "text", + "content": " where " + }, + { + "bbox": [ + 104, + 342, + 506, + 443 + ], + "type": "inline_equation", + "content": "\mathcal{T}_i \sim p(\mathcal{T})" + }, + { + "bbox": [ + 104, + 342, + 506, + 443 + ], + "type": "text", + "content": ". At test time we are allowed some hints about the new test task " + }, + { + "bbox": [ + 104, + 342, + 506, + 443 + ], + "type": "inline_equation", + "content": "\mathcal{T}^*" + }, + { + "bbox": [ + 104, + 342, + 506, + 443 + ], + "type": "text", + "content": ", in the form of a few labeled examples from " + }, + { + "bbox": [ + 104, + 342, + 506, + 443 + ], + "type": "inline_equation", + "content": "\mathcal{T}^*" + }, + { + "bbox": [ + 104, + 342, + 506, + 443 + ], + "type": "text", + "content": ", also known as the support set, denoted by " + }, + { + "bbox": [ + 104, + 342, + 506, + 443 + ], + "type": "inline_equation", + "content": "D^* \sim p_{\mathcal{T}^*}(x,y)" + }, + { + "bbox": [ + 104, + 342, + 506, + 443 + ], + "type": "text", + "content": "." 
+ } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 104, + 447, + 326, + 514 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 447, + 326, + 514 + ], + "spans": [ + { + "bbox": [ + 104, + 447, + 326, + 514 + ], + "type": "text", + "content": "From a Bayesian perspective, the goal is to infer the posterior distribution with all training episodes and a test support set as evidence, i.e., " + }, + { + "bbox": [ + 104, + 447, + 326, + 514 + ], + "type": "inline_equation", + "content": "p(y|x, D^*, D_{1:N})" + }, + { + "bbox": [ + 104, + 447, + 326, + 514 + ], + "type": "text", + "content": ". A major computational challenge, compared to conventional Bayesian learning, is that the training episodes (evidence) may not be stored/replayed/revisited." + } + ] + } + ], + "index": 5 + }, + { + "type": "image", + "bbox": [ + 336, + 447, + 504, + 506 + ], + "blocks": [ + { + "bbox": [ + 336, + 447, + 504, + 506 + ], + "lines": [ + { + "bbox": [ + 336, + 447, + 504, + 506 + ], + "spans": [ + { + "bbox": [ + 336, + 447, + 504, + 506 + ], + "type": "image", + "image_path": "3449a8d454718023d6349df9636034fb0ae50fc171f6ec02846b37ef9f687a27.jpg" + } + ] + } + ], + "index": 6, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 330, + 506, + 504, + 551 + ], + "lines": [ + { + "bbox": [ + 330, + 506, + 504, + 551 + ], + "spans": [ + { + "bbox": [ + 330, + 506, + 504, + 551 + ], + "type": "text", + "content": "Figure 1: (a) IID episodes. (b) Individual episode. (c) FSL as Bayesian inference (grey nodes = evidence, red = target to infer). " + }, + { + "bbox": [ + 330, + 506, + 504, + 551 + ], + "type": "inline_equation", + "content": "D^{*}" + }, + { + "bbox": [ + 330, + 506, + 504, + 551 + ], + "type": "text", + "content": " = support set for test episode." 
+ } + ] + } + ], + "index": 7, + "angle": 0, + "type": "image_caption" + } + ], + "index": 6 + }, + { + "bbox": [ + 105, + 529, + 217, + 541 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 529, + 217, + 541 + ], + "spans": [ + { + "bbox": [ + 105, + 529, + 217, + 541 + ], + "type": "text", + "content": "3 MAIN APPROACH" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 104, + 548, + 506, + 626 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 548, + 506, + 626 + ], + "spans": [ + { + "bbox": [ + 104, + 548, + 506, + 626 + ], + "type": "text", + "content": "We introduce two types of latent random variables, " + }, + { + "bbox": [ + 104, + 548, + 506, + 626 + ], + "type": "inline_equation", + "content": "\\phi" + }, + { + "bbox": [ + 104, + 548, + 506, + 626 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 104, + 548, + 506, + 626 + ], + "type": "inline_equation", + "content": "\\{\\theta_i\\}_{i=1}^N" + }, + { + "bbox": [ + 104, + 548, + 506, + 626 + ], + "type": "text", + "content": ". Each episode " + }, + { + "bbox": [ + 104, + 548, + 506, + 626 + ], + "type": "inline_equation", + "content": "i" + }, + { + "bbox": [ + 104, + 548, + 506, + 626 + ], + "type": "text", + "content": " uses neural network weights " + }, + { + "bbox": [ + 104, + 548, + 506, + 626 + ], + "type": "inline_equation", + "content": "\\theta_i" + }, + { + "bbox": [ + 104, + 548, + 506, + 626 + ], + "type": "text", + "content": " for modeling the data " + }, + { + "bbox": [ + 104, + 548, + 506, + 626 + ], + "type": "inline_equation", + "content": "D_i" + }, + { + "bbox": [ + 104, + 548, + 506, + 626 + ], + "type": "text", + "content": " (" + }, + { + "bbox": [ + 104, + 548, + 506, + 626 + ], + "type": "inline_equation", + "content": "i = 1, \\dots, N" + }, + { + "bbox": [ + 104, + 548, + 506, + 626 + ], + "type": "text", + "content": "). 
Specifically, " + }, + { + "bbox": [ + 104, + 548, + 506, + 626 + ], + "type": "inline_equation", + "content": "D_i" + }, + { + "bbox": [ + 104, + 548, + 506, + 626 + ], + "type": "text", + "content": " is generated (input " + }, + { + "bbox": [ + 104, + 548, + 506, + 626 + ], + "type": "inline_equation", + "content": "x" + }, + { + "bbox": [ + 104, + 548, + 506, + 626 + ], + "type": "text", + "content": " given and only " + }, + { + "bbox": [ + 104, + 548, + 506, + 626 + ], + "type": "inline_equation", + "content": "p(y|x)" + }, + { + "bbox": [ + 104, + 548, + 506, + 626 + ], + "type": "text", + "content": " modeled) by " + }, + { + "bbox": [ + 104, + 548, + 506, + 626 + ], + "type": "inline_equation", + "content": "\\theta_i" + }, + { + "bbox": [ + 104, + 548, + 506, + 626 + ], + "type": "text", + "content": " as in the likelihood model in (1). The variable " + }, + { + "bbox": [ + 104, + 548, + 506, + 626 + ], + "type": "inline_equation", + "content": "\\phi" + }, + { + "bbox": [ + 104, + 548, + 506, + 626 + ], + "type": "text", + "content": " can be viewed as a globally shared variable that is responsible for linking the individual episode-wise parameters " + }, + { + "bbox": [ + 104, + 548, + 506, + 626 + ], + "type": "inline_equation", + "content": "\\theta_i" + }, + { + "bbox": [ + 104, + 548, + 506, + 626 + ], + "type": "text", + "content": ". We assume conditionally independent and identical priors, " + }, + { + "bbox": [ + 104, + 548, + 506, + 626 + ], + "type": "inline_equation", + "content": "p(\\{\\theta_i\\}_i|\\phi) = \\prod_i p(\\theta_i|\\phi)" + }, + { + "bbox": [ + 104, + 548, + 506, + 626 + ], + "type": "text", + "content": ". Thus the prior for the latent variables " + }, + { + "bbox": [ + 104, + 548, + 506, + 626 + ], + "type": "inline_equation", + "content": "(\\phi, \\{\\theta_i\\}_{i=1}^N)" + }, + { + "bbox": [ + 104, + 548, + 506, + 626 + ], + "type": "text", + "content": " is formed in a hierarchical manner as follows. 
(For background on Bayesian modeling, refer to (Murphy, 2022).)" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 116, + 628, + 505, + 644 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 116, + 628, + 505, + 644 + ], + "spans": [ + { + "bbox": [ + 116, + 628, + 505, + 644 + ], + "type": "interline_equation", + "content": "\\text {(P r i o r)} p \\left(\\phi , \\theta_ {1: N}\\right) = p (\\phi) \\prod_ {i = 1} ^ {N} p \\left(\\theta_ {i} \\mid \\phi\\right), \\quad \\text {(L i k e l i h o o d)} p \\left(D _ {i} \\mid \\theta_ {i}\\right) = \\prod_ {\\left(x, y\\right) \\in D _ {i}} p (y \\mid x, \\theta_ {i}) \\tag {1}", + "image_path": "756ee6bf05c1f62ba863a98e1cad1416b19868bf801d7e89b425c5a2d816eec9.jpg" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 104, + 647, + 485, + 659 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 647, + 485, + 659 + ], + "spans": [ + { + "bbox": [ + 104, + 647, + 485, + 659 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 104, + 647, + 485, + 659 + ], + "type": "inline_equation", + "content": "p(y|x,\\theta_i)" + }, + { + "bbox": [ + 104, + 647, + 485, + 659 + ], + "type": "text", + "content": " is a conventional neural network model. See the graphical model in Fig. 1(a)." 
+ } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 104, + 664, + 506, + 689 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 664, + 506, + 689 + ], + "spans": [ + { + "bbox": [ + 104, + 664, + 506, + 689 + ], + "type": "text", + "content": "Given the training data " + }, + { + "bbox": [ + 104, + 664, + 506, + 689 + ], + "type": "inline_equation", + "content": "\{D_i\}_{i = 1}^N" + }, + { + "bbox": [ + 104, + 664, + 506, + 689 + ], + "type": "text", + "content": ", the posterior is " + }, + { + "bbox": [ + 104, + 664, + 506, + 689 + ], + "type": "inline_equation", + "content": "p(\phi ,\theta_{1:N}|D_{1:N})\propto p(\phi)\prod_{i = 1}^{N}p(\theta_i|\phi)p(D_i|\theta_i)" + }, + { + "bbox": [ + 104, + 664, + 506, + 689 + ], + "type": "text", + "content": " and we approximate it with variational inference. That is, " + }, + { + "bbox": [ + 104, + 664, + 506, + 689 + ], + "type": "inline_equation", + "content": "q(\phi ,\theta_{1:N};L)\approx p(\phi ,\theta_{1:N}|D_{1:N})" + }, + { + "bbox": [ + 104, + 664, + 506, + 689 + ], + "type": "text", + "content": " where" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 215, + 691, + 505, + 707 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 215, + 691, + 505, + 707 + ], + "spans": [ + { + "bbox": [ + 215, + 691, + 505, + 707 + ], + "type": "interline_equation", + "content": "q \left(\phi , \theta_ {1: N}; L\right) := q \left(\phi ; L _ {0}\right) \cdot \prod_ {i = 1} ^ {N} q _ {i} \left(\theta_ {i}; L _ {i}\right), \tag {2}", + "image_path": "d46cde7403ddf77507de42dc5bf1ea1a1284c5c402c1ce3e31835483de686a66.jpg" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 104, + 709, + 505, + 733 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 709, + 505, + 733 + ], + "spans": [ + { + "bbox": [ + 104, + 709, + 505, + 733 + ], + "type": "text", + "content": "where the variational parameters " + }, + { + 
"bbox": [ + 104, + 709, + 505, + 733 + ], + "type": "inline_equation", + "content": "L" + }, + { + "bbox": [ + 104, + 709, + 505, + 733 + ], + "type": "text", + "content": " consists of " + }, + { + "bbox": [ + 104, + 709, + 505, + 733 + ], + "type": "inline_equation", + "content": "L_{0}" + }, + { + "bbox": [ + 104, + 709, + 505, + 733 + ], + "type": "text", + "content": " (parameters for " + }, + { + "bbox": [ + 104, + 709, + 505, + 733 + ], + "type": "inline_equation", + "content": "q(\\phi)" + }, + { + "bbox": [ + 104, + 709, + 505, + 733 + ], + "type": "text", + "content": ") and " + }, + { + "bbox": [ + 104, + 709, + 505, + 733 + ], + "type": "inline_equation", + "content": "\\{L_i\\}_{i=1}^N" + }, + { + "bbox": [ + 104, + 709, + 505, + 733 + ], + "type": "text", + "content": "’s (parameters of " + }, + { + "bbox": [ + 104, + 709, + 505, + 733 + ], + "type": "inline_equation", + "content": "q_i(\\theta_i)" + }, + { + "bbox": [ + 104, + 709, + 505, + 733 + ], + "type": "text", + "content": "'s for episode " + }, + { + "bbox": [ + 104, + 709, + 505, + 733 + ], + "type": "inline_equation", + "content": "i" + }, + { + "bbox": [ + 104, + 709, + 505, + 733 + ], + "type": "text", + "content": "). 
Note that although " + }, + { + "bbox": [ + 104, + 709, + 505, + 733 + ], + "type": "inline_equation", + "content": "\\theta_i" + }, + { + "bbox": [ + 104, + 709, + 505, + 733 + ], + "type": "text", + "content": "'s are independent across episodes under (2), they" + } + ] + } + ], + "index": 14 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "spans": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "type": "text", + "content": "Published as a conference paper at ICLR 2024" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 302, + 751, + 309, + 760 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 751, + 309, + 760 + ], + "spans": [ + { + "bbox": [ + 302, + 751, + 309, + 760 + ], + "type": "text", + "content": "2" + } + ] + } + ], + "index": 15 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 1 + }, + { + "para_blocks": [ + { + "bbox": [ + 104, + 82, + 504, + 105 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 82, + 504, + 105 + ], + "spans": [ + { + "bbox": [ + 104, + 82, + 504, + 105 + ], + "type": "text", + "content": "are differently modeled (note the subscript " + }, + { + "bbox": [ + 104, + 82, + 504, + 105 + ], + "type": "inline_equation", + "content": "i" + }, + { + "bbox": [ + 104, + 82, + 504, + 105 + ], + "type": "text", + "content": " in notation " + }, + { + "bbox": [ + 104, + 82, + 504, + 105 + ], + "type": "inline_equation", + "content": "q_{i}" + }, + { + "bbox": [ + 104, + 82, + 504, + 105 + ], + "type": "text", + "content": "), reflecting different posterior beliefs originating from heterogeneity of episodic datasets " + }, + { + "bbox": [ + 104, + 82, + 504, + 105 + ], + "type": "inline_equation", + "content": "D_{i}" + }, + { + "bbox": [ + 104, + 82, + 504, + 105 + ], + "type": "text", + "content": "." 
+ } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 104, + 110, + 506, + 133 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 110, + 506, + 133 + ], + "spans": [ + { + "bbox": [ + 104, + 110, + 506, + 133 + ], + "type": "text", + "content": "Normal-Inverse-Wishart (NIW) model. We consider NIW distributions for the prior and variational posterior. First, the prior is modeled as a conjugate form of Gaussian-NIW. With " + }, + { + "bbox": [ + 104, + 110, + 506, + 133 + ], + "type": "inline_equation", + "content": "\phi = (\mu, \Sigma)" + }, + { + "bbox": [ + 104, + 110, + 506, + 133 + ], + "type": "text", + "content": "," + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 132, + 135, + 505, + 148 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 135, + 505, + 148 + ], + "spans": [ + { + "bbox": [ + 132, + 135, + 505, + 148 + ], + "type": "interline_equation", + "content": "p (\phi) = \mathcal {N} \left(\mu ; \mu_ {0}, \lambda_ {0} ^ {- 1} \Sigma\right) \cdot \mathcal {I W} \left(\Sigma ; \Sigma_ {0}, \nu_ {0}\right), \quad p \left(\theta_ {i} | \phi\right) = \mathcal {N} \left(\theta_ {i}; \mu , \Sigma\right), i = 1, \dots , N, \tag {3}", + "image_path": "abd35dfe4063cf95a346a88cefc172cf0dd8bb58f3383ab47b76d847d945998d.jpg" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 104, + 149, + 504, + 204 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 149, + 504, + 204 + ], + "spans": [ + { + "bbox": [ + 104, + 149, + 504, + 204 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 104, + 149, + 504, + 204 + ], + "type": "inline_equation", + "content": "\Lambda = \{\mu_0,\Sigma_0,\lambda_0,\nu_0\}" + }, + { + "bbox": [ + 104, + 149, + 504, + 204 + ], + "type": "text", + "content": " are the parameters of the NIW. 
We do not need to pay attention to the choice of values for " + }, + { + "bbox": [ + 104, + 149, + 504, + 204 + ], + "type": "inline_equation", + "content": "\Lambda" + }, + { + "bbox": [ + 104, + 149, + 504, + 204 + ], + "type": "text", + "content": " since " + }, + { + "bbox": [ + 104, + 149, + 504, + 204 + ], + "type": "inline_equation", + "content": "p(\phi)" + }, + { + "bbox": [ + 104, + 149, + 504, + 204 + ], + "type": "text", + "content": " has a vanishing effect on the posterior for a large amount of evidence, as we will see shortly. Next, our choice of the variational density family for " + }, + { + "bbox": [ + 104, + 149, + 504, + 204 + ], + "type": "inline_equation", + "content": "q(\phi)" + }, + { + "bbox": [ + 104, + 149, + 504, + 204 + ], + "type": "text", + "content": " is the NIW, mainly because it admits closed-form expressions in the ELBO function due to the conjugacy, allowing efficient local episodic optimisation, as will be shown. For " + }, + { + "bbox": [ + 104, + 149, + 504, + 204 + ], + "type": "inline_equation", + "content": "q_{i}(\theta_{i})" + }, + { + "bbox": [ + 104, + 149, + 504, + 204 + ], + "type": "text", + "content": "'s we adopt Gaussians. That is," + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 147, + 206, + 505, + 220 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 147, + 206, + 505, + 220 + ], + "spans": [ + { + "bbox": [ + 147, + 206, + 505, + 220 + ], + "type": "interline_equation", + "content": "q \left(\phi ; L _ {0}\right) := \mathcal {N} \left(\mu ; m _ {0}, l _ {0} ^ {- 1} \Sigma\right) \cdot \mathcal {I W} \left(\Sigma ; V _ {0}, n _ {0}\right), \quad q _ {i} \left(\theta_ {i}; L _ {i}\right) = \mathcal {N} \left(\theta_ {i}; m _ {i}, V _ {i}\right). 
\tag {4}", + "image_path": "89e2c2b270c48c8582aee68b99860f42bc293ee5bc74e1769ec180b20a31a753.jpg" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 104, + 220, + 506, + 255 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 220, + 506, + 255 + ], + "spans": [ + { + "bbox": [ + 104, + 220, + 506, + 255 + ], + "type": "text", + "content": "So, " + }, + { + "bbox": [ + 104, + 220, + 506, + 255 + ], + "type": "inline_equation", + "content": "L_0 = \{m_0, V_0, l_0, n_0\}" + }, + { + "bbox": [ + 104, + 220, + 506, + 255 + ], + "type": "text", + "content": " with " + }, + { + "bbox": [ + 104, + 220, + 506, + 255 + ], + "type": "inline_equation", + "content": "V_0" + }, + { + "bbox": [ + 104, + 220, + 506, + 255 + ], + "type": "text", + "content": " restricted to be diagonal, and " + }, + { + "bbox": [ + 104, + 220, + 506, + 255 + ], + "type": "inline_equation", + "content": "L_i = \{m_i, V_i\}" + }, + { + "bbox": [ + 104, + 220, + 506, + 255 + ], + "type": "text", + "content": ". Learning (variational inference) amounts to finding " + }, + { + "bbox": [ + 104, + 220, + 506, + 255 + ], + "type": "inline_equation", + "content": "L_0" + }, + { + "bbox": [ + 104, + 220, + 506, + 255 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 104, + 220, + 506, + 255 + ], + "type": "inline_equation", + "content": "\{L_i\}_{i=1}^N" + }, + { + "bbox": [ + 104, + 220, + 506, + 255 + ], + "type": "text", + "content": " that make the approximation " + }, + { + "bbox": [ + 104, + 220, + 506, + 255 + ], + "type": "inline_equation", + "content": "q(\phi, \theta_{1:N}; L) \approx p(\phi, \theta_{1:N}|D_{1:N})" + }, + { + "bbox": [ + 104, + 220, + 506, + 255 + ], + "type": "text", + "content": " as tight as possible." 
+ } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 104, + 258, + 505, + 281 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 258, + 505, + 281 + ], + "spans": [ + { + "bbox": [ + 104, + 258, + 505, + 281 + ], + "type": "text", + "content": "Variational inference. The negative marginal log-likelihood (NMLL) has the following upper bound (Appendix B.1 for derivations):" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 116, + 283, + 505, + 302 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 116, + 283, + 505, + 302 + ], + "spans": [ + { + "bbox": [ + 116, + 283, + 505, + 302 + ], + "type": "interline_equation", + "content": "- \\log p \\left(D _ {1: N}\\right) \\leq \\operatorname {K L} \\left(q (\\phi) \\| p (\\phi)\\right) + \\sum_ {i = 1} ^ {N} \\left(\\mathbb {E} _ {q _ {i} \\left(\\theta_ {i}\\right)} \\left[ l _ {i} \\left(\\theta_ {i}\\right) \\right] + \\mathbb {E} _ {q (\\phi)} \\left[ \\operatorname {K L} \\left(q _ {i} \\left(\\theta_ {i}\\right) \\| p \\left(\\theta_ {i} \\mid \\phi\\right)\\right) \\right]\\right) \\tag {5}", + "image_path": "73af6572ed9dfc9c299a732cd03b17e71d16931e8ee4d7d9e33e72a0e88d4d00.jpg" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 104, + 304, + 506, + 327 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 304, + 506, + 327 + ], + "spans": [ + { + "bbox": [ + 104, + 304, + 506, + 327 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 104, + 304, + 506, + 327 + ], + "type": "inline_equation", + "content": "l_{i}(\\theta_{i}) = -\\log p(D_{i}|\\theta_{i})" + }, + { + "bbox": [ + 104, + 304, + 506, + 327 + ], + "type": "text", + "content": " is the negative training log-likelihood of " + }, + { + "bbox": [ + 104, + 304, + 506, + 327 + ], + "type": "inline_equation", + "content": "\\theta_{i}" + }, + { + "bbox": [ + 104, + 304, + 506, + 327 + ], + "type": "text", + "content": " in episode " + }, + { + "bbox": [ + 104, + 
304, + 506, + 327 + ], + "type": "inline_equation", + "content": "i" + }, + { + "bbox": [ + 104, + 304, + 506, + 327 + ], + "type": "text", + "content": ". By dividing both sides by " + }, + { + "bbox": [ + 104, + 304, + 506, + 327 + ], + "type": "inline_equation", + "content": "N" + }, + { + "bbox": [ + 104, + 304, + 506, + 327 + ], + "type": "text", + "content": ", the LHS naturally becomes the effective episode-averaged NMLL " + }, + { + "bbox": [ + 104, + 304, + 506, + 327 + ], + "type": "inline_equation", + "content": "-\\frac{1}{N}\\log p(D_{1:N})" + }, + { + "bbox": [ + 104, + 304, + 506, + 327 + ], + "type": "text", + "content": "." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 104, + 327, + 505, + 352 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 327, + 505, + 352 + ], + "spans": [ + { + "bbox": [ + 104, + 327, + 505, + 352 + ], + "type": "text", + "content": "The first KL term in the RHS, " + }, + { + "bbox": [ + 104, + 327, + 505, + 352 + ], + "type": "inline_equation", + "content": "\\frac{1}{N}\\mathrm{KL}(q(\\phi)||p(\\phi))" + }, + { + "bbox": [ + 104, + 327, + 505, + 352 + ], + "type": "text", + "content": " diminishes for large " + }, + { + "bbox": [ + 104, + 327, + 505, + 352 + ], + "type": "inline_equation", + "content": "N" + }, + { + "bbox": [ + 104, + 327, + 505, + 352 + ], + "type": "text", + "content": ". 
Using " + }, + { + "bbox": [ + 104, + 327, + 505, + 352 + ], + "type": "inline_equation", + "content": "\\frac{1}{N}\\sum_{i=1}^{N}f_i \\approx \\mathbb{E}_{i\\sim \\mathcal{T}}[f_i]" + }, + { + "bbox": [ + 104, + 327, + 505, + 352 + ], + "type": "text", + "content": " for any expression " + }, + { + "bbox": [ + 104, + 327, + 505, + 352 + ], + "type": "inline_equation", + "content": "f_i" + }, + { + "bbox": [ + 104, + 327, + 505, + 352 + ], + "type": "text", + "content": ", the ELBO learning (approximately) reduces to the following:" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 156, + 354, + 505, + 377 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 156, + 354, + 505, + 377 + ], + "spans": [ + { + "bbox": [ + 156, + 354, + 505, + 377 + ], + "type": "interline_equation", + "content": "\\min _ {L _ {0}, \\{L _ {i} \\} _ {i = 1} ^ {N}} \\mathbb {E} _ {i \\sim \\mathcal {T}} \\left[ \\mathbb {E} _ {q _ {i} \\left(\\theta_ {i}; L _ {i}\\right)} \\left[ l _ {i} \\left(\\theta_ {i}\\right) \\right] + \\mathbb {E} _ {q (\\phi ; L _ {0})} \\left[ \\mathrm {K L} \\left(q _ {i} \\left(\\theta_ {i}; L _ {i}\\right) | | p \\left(\\theta_ {i} \\mid \\phi\\right)\\right) \\right] \\right]. \\tag {6}", + "image_path": "169e9fe107fa7be0ecdbe93bc2c97cf82141063f58a23b8dfe711d01668bed69.jpg" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 104, + 384, + 504, + 462 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 384, + 504, + 462 + ], + "spans": [ + { + "bbox": [ + 104, + 384, + 504, + 462 + ], + "type": "text", + "content": "Local episodic optimisation (whose solution as a function of global parameters " + }, + { + "bbox": [ + 104, + 384, + 504, + 462 + ], + "type": "inline_equation", + "content": "L_{0}" + }, + { + "bbox": [ + 104, + 384, + 504, + 462 + ], + "type": "text", + "content": "). 
Note that (6) is challenging due to a large number of optimisation variables " + }, + { + "bbox": [ + 104, + 384, + 504, + 462 + ], + "type": "inline_equation", + "content": "\\{L_i\\}_{i=1}^N" + }, + { + "bbox": [ + 104, + 384, + 504, + 462 + ], + "type": "text", + "content": " and the nature of episode sampling " + }, + { + "bbox": [ + 104, + 384, + 504, + 462 + ], + "type": "inline_equation", + "content": "i \\sim \\mathcal{T}" + }, + { + "bbox": [ + 104, + 384, + 504, + 462 + ], + "type": "text", + "content": ". Applying conventional SGD would simply fail since each " + }, + { + "bbox": [ + 104, + 384, + 504, + 462 + ], + "type": "inline_equation", + "content": "L_{i}" + }, + { + "bbox": [ + 104, + 384, + 504, + 462 + ], + "type": "text", + "content": " will never be updated more than once. Instead, we tackle it by finding the optimal solutions for " + }, + { + "bbox": [ + 104, + 384, + 504, + 462 + ], + "type": "inline_equation", + "content": "L_{i}" + }, + { + "bbox": [ + 104, + 384, + 504, + 462 + ], + "type": "text", + "content": "'s for fixed " + }, + { + "bbox": [ + 104, + 384, + 504, + 462 + ], + "type": "inline_equation", + "content": "L_{0}" + }, + { + "bbox": [ + 104, + 384, + 504, + 462 + ], + "type": "text", + "content": ", thus effectively representing the optimal solutions as functions of " + }, + { + "bbox": [ + 104, + 384, + 504, + 462 + ], + "type": "inline_equation", + "content": "L_{0}" + }, + { + "bbox": [ + 104, + 384, + 504, + 462 + ], + "type": "text", + "content": ", namely " + }, + { + "bbox": [ + 104, + 384, + 504, + 462 + ], + "type": "inline_equation", + "content": "\\{L_{i}^{*}(L_{0})\\}_{i=1}^{N}" + }, + { + "bbox": [ + 104, + 384, + 504, + 462 + ], + "type": "text", + "content": ". 
Plugging the optimal " + }, + { + "bbox": [ + 104, + 384, + 504, + 462 + ], + "type": "inline_equation", + "content": "L_{i}^{*}(L_{0})" + }, + { + "bbox": [ + 104, + 384, + 504, + 462 + ], + "type": "text", + "content": "’s back to (6) leads to the optimisation problem over " + }, + { + "bbox": [ + 104, + 384, + 504, + 462 + ], + "type": "inline_equation", + "content": "L_{0}" + }, + { + "bbox": [ + 104, + 384, + 504, + 462 + ], + "type": "text", + "content": " alone. The idea is just like solving: " + }, + { + "bbox": [ + 104, + 384, + 504, + 462 + ], + "type": "inline_equation", + "content": "\\min_{x,y} f(x,y) = \\min_{x} f(x,y^{*}(x))" + }, + { + "bbox": [ + 104, + 384, + 504, + 462 + ], + "type": "text", + "content": " where " + }, + { + "bbox": [ + 104, + 384, + 504, + 462 + ], + "type": "inline_equation", + "content": "y^{*}(x) = \\arg \\min_{y} f(x,y)" + }, + { + "bbox": [ + 104, + 384, + 504, + 462 + ], + "type": "text", + "content": " with " + }, + { + "bbox": [ + 104, + 384, + 504, + 462 + ], + "type": "inline_equation", + "content": "x" + }, + { + "bbox": [ + 104, + 384, + 504, + 462 + ], + "type": "text", + "content": " fixed." 
+ } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 104, + 466, + 504, + 489 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 466, + 504, + 489 + ], + "spans": [ + { + "bbox": [ + 104, + 466, + 504, + 489 + ], + "type": "text", + "content": "Note that when we fix " + }, + { + "bbox": [ + 104, + 466, + 504, + 489 + ], + "type": "inline_equation", + "content": "L_0" + }, + { + "bbox": [ + 104, + 466, + 504, + 489 + ], + "type": "text", + "content": " (i.e., fix " + }, + { + "bbox": [ + 104, + 466, + 504, + 489 + ], + "type": "inline_equation", + "content": "q(\\phi)" + }, + { + "bbox": [ + 104, + 466, + 504, + 489 + ], + "type": "text", + "content": "), the objective (6) is completely separable over " + }, + { + "bbox": [ + 104, + 466, + 504, + 489 + ], + "type": "inline_equation", + "content": "i" + }, + { + "bbox": [ + 104, + 466, + 504, + 489 + ], + "type": "text", + "content": ", and we can optimise each episode " + }, + { + "bbox": [ + 104, + 466, + 504, + 489 + ], + "type": "inline_equation", + "content": "i" + }, + { + "bbox": [ + 104, + 466, + 504, + 489 + ], + "type": "text", + "content": " independently. 
More specifically, for each " + }, + { + "bbox": [ + 104, + 466, + 504, + 489 + ], + "type": "inline_equation", + "content": "i \\geq 1" + }, + { + "bbox": [ + 104, + 466, + 504, + 489 + ], + "type": "text", + "content": "," + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 195, + 490, + 505, + 508 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 195, + 490, + 505, + 508 + ], + "spans": [ + { + "bbox": [ + 195, + 490, + 505, + 508 + ], + "type": "interline_equation", + "content": "\\min _ {L _ {i}} \\mathbb {E} _ {q _ {i} \\left(\\theta_ {i}; L _ {i}\\right)} \\left[ l _ {i} \\left(\\theta_ {i}\\right) \\right] + \\mathbb {E} _ {\\phi} \\left[ \\mathrm {K L} \\left(q _ {i} \\left(\\theta_ {i}; L _ {i}\\right) | | p \\left(\\theta_ {i} \\mid \\phi\\right)\\right) \\right] \\tag {7}", + "image_path": "6022d645c5d6dfa5329e402a7253e800deff67b0f375958f2c167fa01e02d770.jpg" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 104, + 509, + 505, + 532 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 509, + 505, + 532 + ], + "spans": [ + { + "bbox": [ + 104, + 509, + 505, + 532 + ], + "type": "text", + "content": "As the expected KL term in (7) admits a closed form due to NIw-Gaussian conjugacy (Appendix B.2 for derivations), we can reduce (7) to the following optimisation for " + }, + { + "bbox": [ + 104, + 509, + 505, + 532 + ], + "type": "inline_equation", + "content": "L_{i} = (m_{i},V_{i})" + }, + { + "bbox": [ + 104, + 509, + 505, + 532 + ], + "type": "text", + "content": ":" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 112, + 533, + 505, + 559 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 112, + 533, + 505, + 559 + ], + "spans": [ + { + "bbox": [ + 112, + 533, + 505, + 559 + ], + "type": "interline_equation", + "content": "L _ {i} ^ {*} (L _ {0}) := \\arg \\min _ {m _ {i}, V _ {i}} \\left(\\mathbb {E} _ {\\mathcal {N} \\left(\\theta_ {i}; m _ {i}, V _ 
{i}\\right)} \\left[ l _ {i} \\left(\\theta_ {i}\\right) \\right] - \\frac {\\log | V _ {i} |}{2} + \\frac {n _ {0}}{2} \\left((m _ {i} - m _ {0}) ^ {2} / V _ {0} + \\operatorname {T r} \\left(V _ {i} / V _ {0}\\right)\\right)\\right) \\tag {8}", + "image_path": "6f011b9217ac044465948f4b078afdd9af2ee815e364918372daa83cb40d5224.jpg" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 104, + 560, + 477, + 573 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 560, + 477, + 573 + ], + "spans": [ + { + "bbox": [ + 104, + 560, + 477, + 573 + ], + "type": "text", + "content": "with " + }, + { + "bbox": [ + 104, + 560, + 477, + 573 + ], + "type": "inline_equation", + "content": "L_0 = \\{m_0, V_0, l_0, n_0\\}" + }, + { + "bbox": [ + 104, + 560, + 477, + 573 + ], + "type": "text", + "content": " fixed. Here " + }, + { + "bbox": [ + 104, + 560, + 477, + 573 + ], + "type": "inline_equation", + "content": "(m_i - m_0)^2" + }, + { + "bbox": [ + 104, + 560, + 477, + 573 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 104, + 560, + 477, + 573 + ], + "type": "inline_equation", + "content": "\\cdot / V_0" + }, + { + "bbox": [ + 104, + 560, + 477, + 573 + ], + "type": "text", + "content": " are all elementwise operations." + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 104, + 577, + 506, + 611 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 577, + 506, + 611 + ], + "spans": [ + { + "bbox": [ + 104, + 577, + 506, + 611 + ], + "type": "text", + "content": "Quadratic approximation of episodic loss via SGLD. 
To find the closed-form solution " + }, + { + "bbox": [ + 104, + 577, + 506, + 611 + ], + "type": "inline_equation", + "content": "L_{i}^{*}(L_{0})" + }, + { + "bbox": [ + 104, + 577, + 506, + 611 + ], + "type": "text", + "content": " in (8), we make quadratic approximation of " + }, + { + "bbox": [ + 104, + 577, + 506, + 611 + ], + "type": "inline_equation", + "content": "l_{i}(\\theta_{i}) = -\\log p(D_{i}|\\theta_{i})" + }, + { + "bbox": [ + 104, + 577, + 506, + 611 + ], + "type": "text", + "content": ". In general, " + }, + { + "bbox": [ + 104, + 577, + 506, + 611 + ], + "type": "inline_equation", + "content": "-\\log p(D_i|\\theta)" + }, + { + "bbox": [ + 104, + 577, + 506, + 611 + ], + "type": "text", + "content": ", as a function of " + }, + { + "bbox": [ + 104, + 577, + 506, + 611 + ], + "type": "inline_equation", + "content": "\\theta" + }, + { + "bbox": [ + 104, + 577, + 506, + 611 + ], + "type": "text", + "content": ", can be written as:" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 204, + 612, + 505, + 634 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 204, + 612, + 505, + 634 + ], + "spans": [ + { + "bbox": [ + 204, + 612, + 505, + 634 + ], + "type": "interline_equation", + "content": "- \\log p \\left(D _ {i} | \\theta\\right) \\approx \\frac {1}{2} \\left(\\theta - \\bar {m} _ {i}\\right) ^ {\\top} \\bar {A} _ {i} \\left(\\theta - \\bar {m} _ {i}\\right) + \\text {c o n s t .}, \\tag {9}", + "image_path": "4e476a691076671fb5486e222cd1ce20b0dbda759590bcbddc59c6ed849fcac9.jpg" + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 104, + 635, + 506, + 715 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 635, + 506, + 715 + ], + "spans": [ + { + "bbox": [ + 104, + 635, + 506, + 715 + ], + "type": "text", + "content": "for some " + }, + { + "bbox": [ + 104, + 635, + 506, + 715 + ], + "type": "inline_equation", + "content": "(\\overline{m}_i,\\overline{A}_i)" + }, + { + "bbox": [ + 104, 
+ 635, + 506, + 715 + ], + "type": "text", + "content": " that are constant with respect to " + }, + { + "bbox": [ + 104, + 635, + 506, + 715 + ], + "type": "inline_equation", + "content": "\\theta" + }, + { + "bbox": [ + 104, + 635, + 506, + 715 + ], + "type": "text", + "content": ". One may attempt to obtain " + }, + { + "bbox": [ + 104, + 635, + 506, + 715 + ], + "type": "inline_equation", + "content": "(\\overline{m}_i,\\overline{A}_i)" + }, + { + "bbox": [ + 104, + 635, + 506, + 715 + ], + "type": "text", + "content": " via Laplace approximation (e.g., the minimiser of " + }, + { + "bbox": [ + 104, + 635, + 506, + 715 + ], + "type": "inline_equation", + "content": "-\\log p(D_i|\\theta)" + }, + { + "bbox": [ + 104, + 635, + 506, + 715 + ], + "type": "text", + "content": " for " + }, + { + "bbox": [ + 104, + 635, + 506, + 715 + ], + "type": "inline_equation", + "content": "\\overline{m}_i" + }, + { + "bbox": [ + 104, + 635, + 506, + 715 + ], + "type": "text", + "content": " and the Hessian at the minimiser for " + }, + { + "bbox": [ + 104, + 635, + 506, + 715 + ], + "type": "inline_equation", + "content": "\\overline{A}_i" + }, + { + "bbox": [ + 104, + 635, + 506, + 715 + ], + "type": "text", + "content": "). However, this involves computationally intensive Hessian computation. 
Instead, using the fact that the log-posterior " + }, + { + "bbox": [ + 104, + 635, + 506, + 715 + ], + "type": "inline_equation", + "content": "\\log p(\\theta |D_i)" + }, + { + "bbox": [ + 104, + 635, + 506, + 715 + ], + "type": "text", + "content": " equals (up to a constant) " + }, + { + "bbox": [ + 104, + 635, + 506, + 715 + ], + "type": "inline_equation", + "content": "\\log p(D_i|\\theta)" + }, + { + "bbox": [ + 104, + 635, + 506, + 715 + ], + "type": "text", + "content": " when we use the uninformative prior " + }, + { + "bbox": [ + 104, + 635, + 506, + 715 + ], + "type": "inline_equation", + "content": "p(\\theta)\\propto 1" + }, + { + "bbox": [ + 104, + 635, + 506, + 715 + ], + "type": "text", + "content": ", we can obtain samples from the posterior " + }, + { + "bbox": [ + 104, + 635, + 506, + 715 + ], + "type": "inline_equation", + "content": "p(\\theta |D_i)" + }, + { + "bbox": [ + 104, + 635, + 506, + 715 + ], + "type": "text", + "content": " using MCMC sampling, specifically stochastic gradient Langevin dynamics (SGLD) (Welling & Teh, 2011), and estimate the sample mean and precision, which become " + }, + { + "bbox": [ + 104, + 635, + 506, + 715 + ], + "type": "inline_equation", + "content": "\\overline{m}_i" + }, + { + "bbox": [ + 104, + 635, + 506, + 715 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 104, + 635, + 506, + 715 + ], + "type": "inline_equation", + "content": "\\overline{A}_i" + }, + { + "bbox": [ + 104, + 635, + 506, + 715 + ], + "type": "text", + "content": ", respectively¹. 
Note that this amounts to performing" + } + ] + } + ], + "index": 20 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "spans": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "type": "text", + "content": "Published as a conference paper at ICLR 2024" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 117, + 720, + 498, + 732 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 117, + 720, + 498, + 732 + ], + "spans": [ + { + "bbox": [ + 117, + 720, + 498, + 732 + ], + "type": "text", + "content": "Similar approaches include the stochastic weight averaging (Izmailov et al., 2018; Maddox et al., 2019)." + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 302, + 751, + 308, + 760 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 751, + 308, + 760 + ], + "spans": [ + { + "bbox": [ + 302, + 751, + 308, + 760 + ], + "type": "text", + "content": "3" + } + ] + } + ], + "index": 22 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 2 + }, + { + "para_blocks": [ + { + "bbox": [ + 104, + 82, + 504, + 115 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 82, + 504, + 115 + ], + "spans": [ + { + "bbox": [ + 104, + 82, + 504, + 115 + ], + "type": "text", + "content": "several SGD iterations (skipping a few initial for burn-in), and unlike MAML (Finn et al., 2017) no computation graph needs to be maintained since " + }, + { + "bbox": [ + 104, + 82, + 504, + 115 + ], + "type": "inline_equation", + "content": "(\\overline{m}_i,\\overline{A}_i)" + }, + { + "bbox": [ + 104, + 82, + 504, + 115 + ], + "type": "text", + "content": " are constant. 
Once we have " + }, + { + "bbox": [ + 104, + 82, + 504, + 115 + ], + "type": "inline_equation", + "content": "(\\overline{m}_i,\\overline{A}_i)" + }, + { + "bbox": [ + 104, + 82, + 504, + 115 + ], + "type": "text", + "content": ", the optimisation (8) admits the closed-form solution (Appendix B.4 for derivations)," + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 138, + 116, + 505, + 130 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 116, + 505, + 130 + ], + "spans": [ + { + "bbox": [ + 138, + 116, + 505, + 130 + ], + "type": "interline_equation", + "content": "m _ {i} ^ {*} (L _ {0}) = (\\bar {A} _ {i} + n _ {0} / V _ {0}) ^ {- 1} (\\bar {A} _ {i} \\bar {m} _ {i} + n _ {0} m _ {0} / V _ {0}), \\quad V _ {i} ^ {*} (L _ {0}) = (\\bar {A} _ {i} + n _ {0} / V _ {0}) ^ {- 1}. \\tag {10}", + "image_path": "cd4f222dbd3bc362ce5bc4997e99ebc80037ea538b02e1b4dce09e1517290ac6.jpg" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 104, + 131, + 350, + 143 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 131, + 350, + 143 + ], + "spans": [ + { + "bbox": [ + 104, + 131, + 350, + 143 + ], + "type": "text", + "content": "Computation in (10) is cheap since all matrices are diagonal." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 104, + 147, + 505, + 159 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 147, + 505, + 159 + ], + "spans": [ + { + "bbox": [ + 104, + 147, + 505, + 159 + ], + "type": "text", + "content": "Final optimisation. 
Plugging (10) back to (6), the final optimisation is (Appendix B.5 for details):" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 115, + 160, + 509, + 214 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 115, + 160, + 509, + 214 + ], + "spans": [ + { + "bbox": [ + 115, + 160, + 509, + 214 + ], + "type": "interline_equation", + "content": "\\begin{array}{l} \\min _ {L _ {0}} \\mathbb {E} _ {i \\sim \\mathcal {T}} \\left[ f _ {i} (L _ {0}) + \\frac {1}{2} g _ {i} (L _ {0}) + \\frac {d}{2 l _ {0}} \\right] \\mathrm {s . t .} f _ {i} (L _ {0}) = \\mathbb {E} _ {\\epsilon \\sim \\mathcal {N} (0, I)} \\Big [ l _ {i} \\Big (m _ {i} ^ {*} (L _ {0}) + V _ {i} ^ {*} (L _ {0}) ^ {1 / 2} \\epsilon \\Big) \\Big ], \\\\ g _ {i} \\left(L _ {0}\\right) = \\log \\frac {\\left| V _ {0} \\right|}{\\left| V _ {i} ^ {*} \\left(L _ {0}\\right) \\right|} + n _ {0} \\operatorname {T r} \\left(V _ {i} ^ {*} \\left(L _ {0}\\right) / V _ {0}\\right) + n _ {0} \\left(m _ {i} ^ {*} \\left(L _ {0}\\right) - m _ {0}\\right) ^ {2} / V _ {0} - \\psi_ {d} \\left(\\frac {n _ {0}}{2}\\right), \\tag {11} \\\\ \\end{array}", + "image_path": "a656c666c36fc1adbd029131c33817e03d1e758daf8ac547bb8662a42dcd550e.jpg" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 104, + 216, + 272, + 307 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 216, + 272, + 307 + ], + "spans": [ + { + "bbox": [ + 104, + 216, + 272, + 307 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 104, + 216, + 272, + 307 + ], + "type": "inline_equation", + "content": "\\psi_d(\\cdot)" + }, + { + "bbox": [ + 104, + 216, + 272, + 307 + ], + "type": "text", + "content": " is the multivariate digamma function and " + }, + { + "bbox": [ + 104, + 216, + 272, + 307 + ], + "type": "inline_equation", + "content": "d = \\dim (\\theta)" + }, + { + "bbox": [ + 104, + 216, + 272, + 307 + ], + "type": "text", + "content": ". 
As " + }, + { + "bbox": [ + 104, + 216, + 272, + 307 + ], + "type": "inline_equation", + "content": "l_{0}" + }, + { + "bbox": [ + 104, + 216, + 272, + 307 + ], + "type": "text", + "content": " only appears in the term " + }, + { + "bbox": [ + 104, + 216, + 272, + 307 + ], + "type": "inline_equation", + "content": "\\frac{d}{2l_0}" + }, + { + "bbox": [ + 104, + 216, + 272, + 307 + ], + "type": "text", + "content": ", the optimal value is " + }, + { + "bbox": [ + 104, + 216, + 272, + 307 + ], + "type": "inline_equation", + "content": "l_0^* = \\infty" + }, + { + "bbox": [ + 104, + 216, + 272, + 307 + ], + "type": "text", + "content": ". We use SGD to solve (11), repeating the two steps: i) Sample " + }, + { + "bbox": [ + 104, + 216, + 272, + 307 + ], + "type": "inline_equation", + "content": "i\\sim \\mathcal{T}" + }, + { + "bbox": [ + 104, + 216, + 272, + 307 + ], + "type": "text", + "content": "; ii) " + }, + { + "bbox": [ + 104, + 216, + 272, + 307 + ], + "type": "inline_equation", + "content": "L_{0}\\gets L_{0} - \\eta \\nabla_{L_{0}}\\big(f_{i}(L_{0}) + \\frac{1}{2} g_{i}(L_{0})\\big)" + }, + { + "bbox": [ + 104, + 216, + 272, + 307 + ], + "type": "text", + "content": ". Note that " + }, + { + "bbox": [ + 104, + 216, + 272, + 307 + ], + "type": "inline_equation", + "content": "\\nabla_{L_0}\\left(f_i(L_0) + \\frac{1}{2} g_i(L_0)\\right)" + }, + { + "bbox": [ + 104, + 216, + 272, + 307 + ], + "type": "text", + "content": " is an unbiased stochastic estimate for the gra" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 278, + 220, + 487, + 232 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 278, + 220, + 487, + 232 + ], + "spans": [ + { + "bbox": [ + 278, + 220, + 487, + 232 + ], + "type": "text", + "content": "Algorithm 1 Our few-shot meta learning algorithm." 
+ } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 286, + 233, + 482, + 244 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 286, + 233, + 482, + 244 + ], + "spans": [ + { + "bbox": [ + 286, + 233, + 482, + 244 + ], + "type": "text", + "content": "Initialise: " + }, + { + "bbox": [ + 286, + 233, + 482, + 244 + ], + "type": "inline_equation", + "content": "L_0 = \\{m_0, V_0, n_0\\}" + }, + { + "bbox": [ + 286, + 233, + 482, + 244 + ], + "type": "text", + "content": " of " + }, + { + "bbox": [ + 286, + 233, + 482, + 244 + ], + "type": "inline_equation", + "content": "q(\\phi; L_0)" + }, + { + "bbox": [ + 286, + 233, + 482, + 244 + ], + "type": "text", + "content": " randomly." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 286, + 245, + 390, + 254 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 286, + 245, + 390, + 254 + ], + "spans": [ + { + "bbox": [ + 286, + 245, + 390, + 254 + ], + "type": "text", + "content": "for episode " + }, + { + "bbox": [ + 286, + 245, + 390, + 254 + ], + "type": "inline_equation", + "content": "i = 1,2,\\ldots" + }, + { + "bbox": [ + 286, + 245, + 390, + 254 + ], + "type": "text", + "content": " do" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 297, + 254, + 505, + 285 + ], + "type": "list", + "angle": 0, + "index": 13, + "blocks": [ + { + "bbox": [ + 297, + 254, + 492, + 264 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 297, + 254, + 492, + 264 + ], + "spans": [ + { + "bbox": [ + 297, + 254, + 492, + 264 + ], + "type": "text", + "content": "Perform SGLD iterations on " + }, + { + "bbox": [ + 297, + 254, + 492, + 264 + ], + "type": "inline_equation", + "content": "D_{i}" + }, + { + "bbox": [ + 297, + 254, + 492, + 264 + ], + "type": "text", + "content": " to estimate " + }, + { + "bbox": [ + 297, + 254, + 492, + 264 + ], + "type": "inline_equation", + "content": "(\\overline{m}_i, \\overline{A}_i)" + }, + { + "bbox": [ + 297, + 254, + 492, + 264 + ], + "type": 
"text", + "content": "." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 297, + 264, + 485, + 274 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 297, + 264, + 485, + 274 + ], + "spans": [ + { + "bbox": [ + 297, + 264, + 485, + 274 + ], + "type": "text", + "content": "Compute the episodic minimiser " + }, + { + "bbox": [ + 297, + 264, + 485, + 274 + ], + "type": "inline_equation", + "content": "L_{i}^{*}(L_{0})" + }, + { + "bbox": [ + 297, + 264, + 485, + 274 + ], + "type": "text", + "content": " from (10)." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 297, + 274, + 505, + 285 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 297, + 274, + 505, + 285 + ], + "spans": [ + { + "bbox": [ + 297, + 274, + 505, + 285 + ], + "type": "text", + "content": "Update " + }, + { + "bbox": [ + 297, + 274, + 505, + 285 + ], + "type": "inline_equation", + "content": "L_{0}" + }, + { + "bbox": [ + 297, + 274, + 505, + 285 + ], + "type": "text", + "content": " by the gradient of " + }, + { + "bbox": [ + 297, + 274, + 505, + 285 + ], + "type": "inline_equation", + "content": "f_{i}(L_{0}) + \\frac{1}{2} g_{i}(L_{0})" + }, + { + "bbox": [ + 297, + 274, + 505, + 285 + ], + "type": "text", + "content": " as in (11)." 
+ } + ] + } + ], + "index": 12 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 287, + 285, + 317, + 293 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 287, + 285, + 317, + 293 + ], + "spans": [ + { + "bbox": [ + 287, + 285, + 317, + 293 + ], + "type": "text", + "content": "end for" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 287, + 293, + 367, + 304 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 287, + 293, + 367, + 304 + ], + "spans": [ + { + "bbox": [ + 287, + 293, + 367, + 304 + ], + "type": "text", + "content": "Output: Learned " + }, + { + "bbox": [ + 287, + 293, + 367, + 304 + ], + "type": "inline_equation", + "content": "L_0" + }, + { + "bbox": [ + 287, + 293, + 367, + 304 + ], + "type": "text", + "content": "." + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 104, + 307, + 504, + 352 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 307, + 504, + 352 + ], + "spans": [ + { + "bbox": [ + 104, + 307, + 504, + 352 + ], + "type": "text", + "content": "dient of the objective " + }, + { + "bbox": [ + 104, + 307, + 504, + 352 + ], + "type": "inline_equation", + "content": "\\mathbb{E}_{i\\sim \\mathcal{T}}[\\dots ]" + }, + { + "bbox": [ + 104, + 307, + 504, + 352 + ], + "type": "text", + "content": " in (11). Furthermore, our learning algorithm above (pseudocode in Alg. 1) is fully compatible with the online/batch episode sampling nature of typical FSL. After training, we obtain the learned " + }, + { + "bbox": [ + 104, + 307, + 504, + 352 + ], + "type": "inline_equation", + "content": "L_{0}" + }, + { + "bbox": [ + 104, + 307, + 504, + 352 + ], + "type": "text", + "content": ", and the posterior " + }, + { + "bbox": [ + 104, + 307, + 504, + 352 + ], + "type": "inline_equation", + "content": "q(\\phi ;L_0)" + }, + { + "bbox": [ + 104, + 307, + 504, + 352 + ], + "type": "text", + "content": " will be used at the meta test time, where we show in Sec. 
3.2 that this can be seen as Bayesian inference as well." + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 104, + 357, + 506, + 435 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 357, + 506, + 435 + ], + "spans": [ + { + "bbox": [ + 104, + 357, + 506, + 435 + ], + "type": "text", + "content": "We emphasise that our framework is completely flexible in the choice of the backbone " + }, + { + "bbox": [ + 104, + 357, + 506, + 435 + ], + "type": "inline_equation", + "content": "p(y|x,\\theta)" + }, + { + "bbox": [ + 104, + 357, + 506, + 435 + ], + "type": "text", + "content": ". It could be the popular instance-based network comprised of a feature extractor and a prediction head where the latter can be either a conventional learnable readout head or the parameter-free one like the nearest centroid classifier (NCC) in ProtoNet (Snell et al., 2017), i.e., " + }, + { + "bbox": [ + 104, + 357, + 506, + 435 + ], + "type": "inline_equation", + "content": "p(D|\\theta) = p(Q|S,\\theta)" + }, + { + "bbox": [ + 104, + 357, + 506, + 435 + ], + "type": "text", + "content": " where " + }, + { + "bbox": [ + 104, + 357, + 506, + 435 + ], + "type": "inline_equation", + "content": "D = S\\cup Q" + }, + { + "bbox": [ + 104, + 357, + 506, + 435 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 104, + 357, + 506, + 435 + ], + "type": "inline_equation", + "content": "p(y|x,S,\\theta)" + }, + { + "bbox": [ + 104, + 357, + 506, + 435 + ], + "type": "text", + "content": " is the NCC prediction with support " + }, + { + "bbox": [ + 104, + 357, + 506, + 435 + ], + "type": "inline_equation", + "content": "S" + }, + { + "bbox": [ + 104, + 357, + 506, + 435 + ], + "type": "text", + "content": ". 
We can also adopt the set-based networks (Ye et al., 2020; Garnelo et al., 2018; Kim et al., 2019) where " + }, + { + "bbox": [ + 104, + 357, + 506, + 435 + ], + "type": "inline_equation", + "content": "p(y|x,S,\\theta)" + }, + { + "bbox": [ + 104, + 357, + 506, + 435 + ], + "type": "text", + "content": " itself is modeled by a neural net " + }, + { + "bbox": [ + 104, + 357, + 506, + 435 + ], + "type": "inline_equation", + "content": "y = G(x,S;\\theta)" + }, + { + "bbox": [ + 104, + 357, + 506, + 435 + ], + "type": "text", + "content": " with input " + }, + { + "bbox": [ + 104, + 357, + 506, + 435 + ], + "type": "inline_equation", + "content": "(x,S)" + }, + { + "bbox": [ + 104, + 357, + 506, + 435 + ], + "type": "text", + "content": "." + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 105, + 447, + 206, + 457 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 447, + 206, + 457 + ], + "spans": [ + { + "bbox": [ + 105, + 447, + 206, + 457 + ], + "type": "text", + "content": "3.1 INTERPRETATION" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 104, + 462, + 506, + 475 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 462, + 506, + 475 + ], + "spans": [ + { + "bbox": [ + 104, + 462, + 506, + 475 + ], + "type": "text", + "content": "We show that our framework unifies seemingly unrelated seminal FSL algorithms into one perspective." + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 104, + 479, + 506, + 558 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 479, + 506, + 558 + ], + "spans": [ + { + "bbox": [ + 104, + 479, + 506, + 558 + ], + "type": "text", + "content": "MAML (Finn et al., 2017) as a special case. Suppose we have spiky variational densities, " + }, + { + "bbox": [ + 104, + 479, + 506, + 558 + ], + "type": "inline_equation", + "content": "V_{i} \\rightarrow 0" + }, + { + "bbox": [ + 104, + 479, + 506, + 558 + ], + "type": "text", + "content": " (constant). 
The local episodic optimisation (8) reduces to: " + }, + { + "bbox": [ + 104, + 479, + 506, + 558 + ], + "type": "inline_equation", + "content": "\\arg \\min_{m_i} l_i(\\theta_i) + R(m_i)" + }, + { + "bbox": [ + 104, + 479, + 506, + 558 + ], + "type": "text", + "content": " where " + }, + { + "bbox": [ + 104, + 479, + 506, + 558 + ], + "type": "inline_equation", + "content": "R(m_i)" + }, + { + "bbox": [ + 104, + 479, + 506, + 558 + ], + "type": "text", + "content": " is the quadratic penalty of " + }, + { + "bbox": [ + 104, + 479, + 506, + 558 + ], + "type": "inline_equation", + "content": "m_i" + }, + { + "bbox": [ + 104, + 479, + 506, + 558 + ], + "type": "text", + "content": " deviating from " + }, + { + "bbox": [ + 104, + 479, + 506, + 558 + ], + "type": "inline_equation", + "content": "m_0" + }, + { + "bbox": [ + 104, + 479, + 506, + 558 + ], + "type": "text", + "content": ". One reasonable solution is to perform a few gradient steps with loss " + }, + { + "bbox": [ + 104, + 479, + 506, + 558 + ], + "type": "inline_equation", + "content": "l_i" + }, + { + "bbox": [ + 104, + 479, + 506, + 558 + ], + "type": "text", + "content": ", starting from " + }, + { + "bbox": [ + 104, + 479, + 506, + 558 + ], + "type": "inline_equation", + "content": "m_0" + }, + { + "bbox": [ + 104, + 479, + 506, + 558 + ], + "type": "text", + "content": " to have small penalty (" + }, + { + "bbox": [ + 104, + 479, + 506, + 558 + ], + "type": "inline_equation", + "content": "R = 0" + }, + { + "bbox": [ + 104, + 479, + 506, + 558 + ], + "type": "text", + "content": " initially). 
That is, " + }, + { + "bbox": [ + 104, + 479, + 506, + 558 + ], + "type": "inline_equation", + "content": "m_i \\gets m_0" + }, + { + "bbox": [ + 104, + 479, + 506, + 558 + ], + "type": "text", + "content": " and a few steps of " + }, + { + "bbox": [ + 104, + 479, + 506, + 558 + ], + "type": "inline_equation", + "content": "m_i \\gets m_i - \\alpha \\nabla l_i(m_i)" + }, + { + "bbox": [ + 104, + 479, + 506, + 558 + ], + "type": "text", + "content": " to return " + }, + { + "bbox": [ + 104, + 479, + 506, + 558 + ], + "type": "inline_equation", + "content": "m_i^*(L_0)" + }, + { + "bbox": [ + 104, + 479, + 506, + 558 + ], + "type": "text", + "content": ". Plugging this into (11) and disregarding the " + }, + { + "bbox": [ + 104, + 479, + 506, + 558 + ], + "type": "inline_equation", + "content": "g_i" + }, + { + "bbox": [ + 104, + 479, + 506, + 558 + ], + "type": "text", + "content": " term leads to the MAML algorithm. Obviously, the main drawback is that " + }, + { + "bbox": [ + 104, + 479, + 506, + 558 + ], + "type": "inline_equation", + "content": "m_i^*(L_0)" + }, + { + "bbox": [ + 104, + 479, + 506, + 558 + ], + "type": "text", + "content": " is a function of " + }, + { + "bbox": [ + 104, + 479, + 506, + 558 + ], + "type": "inline_equation", + "content": "m_0 \\in L_0" + }, + { + "bbox": [ + 104, + 479, + 506, + 558 + ], + "type": "text", + "content": " via a full computation graph of SGD steps, compared to our lightweight closed forms (10)."
With " + }, + { + "bbox": [ + 104, + 562, + 506, + 608 + ], + "type": "inline_equation", + "content": "V_{i} \\to 0" + }, + { + "bbox": [ + 104, + 562, + 506, + 608 + ], + "type": "text", + "content": ", if we ignore the negative log-likelihood term in (8), then the optimal solution becomes " + }, + { + "bbox": [ + 104, + 562, + 506, + 608 + ], + "type": "inline_equation", + "content": "m_{i}^{*}(L_{0}) = m_{0}" + }, + { + "bbox": [ + 104, + 562, + 506, + 608 + ], + "type": "text", + "content": ". If we remove the " + }, + { + "bbox": [ + 104, + 562, + 506, + 608 + ], + "type": "inline_equation", + "content": "g_{i}" + }, + { + "bbox": [ + 104, + 562, + 506, + 608 + ], + "type": "text", + "content": " term, we can solve (11) by simple gradient descent with " + }, + { + "bbox": [ + 104, + 562, + 506, + 608 + ], + "type": "inline_equation", + "content": "\\nabla_{m_0}(-\\log p(D_i|m_0))" + }, + { + "bbox": [ + 104, + 562, + 506, + 608 + ], + "type": "text", + "content": ". We then adopt the NCC head and regard " + }, + { + "bbox": [ + 104, + 562, + 506, + 608 + ], + "type": "inline_equation", + "content": "m_{0}" + }, + { + "bbox": [ + 104, + 562, + 506, + 608 + ], + "type": "text", + "content": " as sole feature extractor parameters, which becomes exactly the ProtoNet update." + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 104, + 612, + 506, + 683 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 612, + 506, + 683 + ], + "spans": [ + { + "bbox": [ + 104, + 612, + 506, + 683 + ], + "type": "text", + "content": "Reptile (Nichol et al., 2018) as a special case. 
Instead, if we ignore all penalty terms in (8) and follow our quadratic approximation (9) with " + }, + { + "bbox": [ + 104, + 612, + 506, + 683 + ], + "type": "inline_equation", + "content": "V_{i} \\to 0" + }, + { + "bbox": [ + 104, + 612, + 506, + 683 + ], + "type": "text", + "content": ", then " + }, + { + "bbox": [ + 104, + 612, + 506, + 683 + ], + "type": "inline_equation", + "content": "m_{i}^{*}(L_{0}) = \\overline{m}_{i}" + }, + { + "bbox": [ + 104, + 612, + 506, + 683 + ], + "type": "text", + "content": ". It is constant with respect to " + }, + { + "bbox": [ + 104, + 612, + 506, + 683 + ], + "type": "inline_equation", + "content": "L_{0} = (m_{0}, V_{0}, n_{0})" + }, + { + "bbox": [ + 104, + 612, + 506, + 683 + ], + "type": "text", + "content": ", and makes the optimisation (11) very simple: the optimal " + }, + { + "bbox": [ + 104, + 612, + 506, + 683 + ], + "type": "inline_equation", + "content": "m_{0}" + }, + { + "bbox": [ + 104, + 612, + 506, + 683 + ], + "type": "text", + "content": " is the average of " + }, + { + "bbox": [ + 104, + 612, + 506, + 683 + ], + "type": "inline_equation", + "content": "\\overline{m}_{i}" + }, + { + "bbox": [ + 104, + 612, + 506, + 683 + ], + "type": "text", + "content": " for all tasks " + }, + { + "bbox": [ + 104, + 612, + 506, + 683 + ], + "type": "inline_equation", + "content": "i" + }, + { + "bbox": [ + 104, + 612, + 506, + 683 + ], + "type": "text", + "content": ", i.e., " + }, + { + "bbox": [ + 104, + 612, + 506, + 683 + ], + "type": "inline_equation", + "content": "m_{0}^{*} = \\mathbb{E}_{i \\sim \\mathcal{T}}[\\overline{m}_{i}]" + }, + { + "bbox": [ + 104, + 612, + 506, + 683 + ], + "type": "text", + "content": " (we ignore " + }, + { + "bbox": [ + 104, + 612, + 506, + 683 + ], + "type": "inline_equation", + "content": "V_{0}" + }, + { + "bbox": [ + 104, + 612, + 506, + 683 + ], + "type": "text", + "content": " here). 
Note that Reptile ultimately finds the exponential smoothing of " + }, + { + "bbox": [ + 104, + 612, + 506, + 683 + ], + "type": "inline_equation", + "content": "m_{i}^{(k)}" + }, + { + "bbox": [ + 104, + 612, + 506, + 683 + ], + "type": "text", + "content": " over " + }, + { + "bbox": [ + 104, + 612, + 506, + 683 + ], + "type": "inline_equation", + "content": "i \\sim \\mathcal{T}" + }, + { + "bbox": [ + 104, + 612, + 506, + 683 + ], + "type": "text", + "content": " where " + }, + { + "bbox": [ + 104, + 612, + 506, + 683 + ], + "type": "inline_equation", + "content": "m_{i}^{(k)}" + }, + { + "bbox": [ + 104, + 612, + 506, + 683 + ], + "type": "text", + "content": " is the iterate after " + }, + { + "bbox": [ + 104, + 612, + 506, + 683 + ], + "type": "inline_equation", + "content": "k" + }, + { + "bbox": [ + 104, + 612, + 506, + 683 + ], + "type": "text", + "content": " SGD steps for task " + }, + { + "bbox": [ + 104, + 612, + 506, + 683 + ], + "type": "inline_equation", + "content": "i" + }, + { + "bbox": [ + 104, + 612, + 506, + 683 + ], + "type": "text", + "content": ". This can be seen as an online/running estimate of " + }, + { + "bbox": [ + 104, + 612, + 506, + 683 + ], + "type": "inline_equation", + "content": "\\mathbb{E}_{i \\sim \\mathcal{T}}[\\overline{m}_{i}]" + }, + { + "bbox": [ + 104, + 612, + 506, + 683 + ], + "type": "text", + "content": "." 
+ } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 105, + 694, + 350, + 704 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 694, + 350, + 704 + ], + "spans": [ + { + "bbox": [ + 105, + 694, + 350, + 704 + ], + "type": "text", + "content": "3.2 META TEST PREDICTION AS BAYESIAN INFERENCE" + } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 104, + 709, + 505, + 733 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 709, + 505, + 733 + ], + "spans": [ + { + "bbox": [ + 104, + 709, + 505, + 733 + ], + "type": "text", + "content": "At meta test time, we need to be able to predict the target " + }, + { + "bbox": [ + 104, + 709, + 505, + 733 + ], + "type": "inline_equation", + "content": "y^{*}" + }, + { + "bbox": [ + 104, + 709, + 505, + 733 + ], + "type": "text", + "content": " of a novel test input " + }, + { + "bbox": [ + 104, + 709, + 505, + 733 + ], + "type": "inline_equation", + "content": "x^{*} \\sim \\mathcal{T}^{*}" + }, + { + "bbox": [ + 104, + 709, + 505, + 733 + ], + "type": "text", + "content": " sampled from the unknown distribution " + }, + { + "bbox": [ + 104, + 709, + 505, + 733 + ], + "type": "inline_equation", + "content": "\\mathcal{T}^{*} \\sim p(\\mathcal{T})" + }, + { + "bbox": [ + 104, + 709, + 505, + 733 + ], + "type": "text", + "content": ". In FSL, we have the test support data " + }, + { + "bbox": [ + 104, + 709, + 505, + 733 + ], + "type": "inline_equation", + "content": "D^{*} = \\{(x,y)\\} \\sim \\mathcal{T}^{*}" + }, + { + "bbox": [ + 104, + 709, + 505, + 733 + ], + "type": "text", + "content": ". 
The" + } + ] + } + ], + "index": 24 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "spans": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "type": "text", + "content": "Published as a conference paper at ICLR 2024" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 302, + 751, + 309, + 760 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 751, + 309, + 760 + ], + "spans": [ + { + "bbox": [ + 302, + 751, + 309, + 760 + ], + "type": "text", + "content": "4" + } + ] + } + ], + "index": 25 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 3 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 116, + 67, + 491, + 106 + ], + "blocks": [ + { + "bbox": [ + 110, + 54, + 499, + 66 + ], + "lines": [ + { + "bbox": [ + 110, + 54, + 499, + 66 + ], + "spans": [ + { + "bbox": [ + 110, + 54, + 499, + 66 + ], + "type": "text", + "content": "Table 1: Three competing Bayesian models for the toy experiment. (Fig. 3 for graphical models)" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 116, + 67, + 491, + 106 + ], + "lines": [ + { + "bbox": [ + 116, + 67, + 491, + 106 + ], + "spans": [ + { + "bbox": [ + 116, + 67, + 491, + 106 + ], + "type": "table", + "html": "
<table><tr><th>Model I</th><th>Model II</th><th>Model III (Ours)</th></tr>
<tr><td>y = θ<sub>i</sub><sup>T</sup>x + β<sub>i</sub> + ε<sub>y</sub>, p(θ<sub>i</sub>,β<sub>i</sub>) = N(μ,σ<sup>2</sup>) ∀i</td><td>y = θ<sup>T</sup>x + β + ε<sub>y</sub>, p(θ,β) = N(μ,σ<sup>2</sup>)</td><td>y = θ<sub>i</sub><sup>T</sup>x + β<sub>i</sub> + ε<sub>y</sub>, p(φ) = N(m,V), p(θ<sub>i</sub>,β<sub>i</sub>|φ) = N(φ,σ<sup>2</sup>) ∀i</td></tr></table>
", + "image_path": "6c98eebadf082b681a41e41ac0e52a03018773e45cb5aa5fd43126afd5f35f62.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "table_body" + } + ], + "index": 2 + }, + { + "bbox": [ + 104, + 117, + 506, + 174 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 117, + 506, + 174 + ], + "spans": [ + { + "bbox": [ + 104, + 117, + 506, + 174 + ], + "type": "text", + "content": "test-time prediction can be seen as a posterior inference problem with additional evidence of the support data " + }, + { + "bbox": [ + 104, + 117, + 506, + 174 + ], + "type": "inline_equation", + "content": "D^{*}" + }, + { + "bbox": [ + 104, + 117, + 506, + 174 + ], + "type": "text", + "content": " (Fig. 1(c)). More specifically, " + }, + { + "bbox": [ + 104, + 117, + 506, + 174 + ], + "type": "inline_equation", + "content": "p(y^{*}|x^{*},D^{*},D_{1:N}) = \\int p(y^{*}|x^{*},\\theta)p(\\theta |D^{*},D_{1:N})d\\theta" + }, + { + "bbox": [ + 104, + 117, + 506, + 174 + ], + "type": "text", + "content": ". So, it boils down to " + }, + { + "bbox": [ + 104, + 117, + 506, + 174 + ], + "type": "inline_equation", + "content": "p(\\theta |D^{*},D_{1:N})" + }, + { + "bbox": [ + 104, + 117, + 506, + 174 + ], + "type": "text", + "content": ", the posterior given both the test support data " + }, + { + "bbox": [ + 104, + 117, + 506, + 174 + ], + "type": "inline_equation", + "content": "D^{*}" + }, + { + "bbox": [ + 104, + 117, + 506, + 174 + ], + "type": "text", + "content": " and the entire training data " + }, + { + "bbox": [ + 104, + 117, + 506, + 174 + ], + "type": "inline_equation", + "content": "D_{1:N}" + }, + { + "bbox": [ + 104, + 117, + 506, + 174 + ], + "type": "text", + "content": ". Under our hierarchical model, exploiting conditional independence (Fig. 
1(c)), we can link it to our trained " + }, + { + "bbox": [ + 104, + 117, + 506, + 174 + ], + "type": "inline_equation", + "content": "q(\\phi)" + }, + { + "bbox": [ + 104, + 117, + 506, + 174 + ], + "type": "text", + "content": " as:" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 112, + 177, + 505, + 202 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 112, + 177, + 505, + 202 + ], + "spans": [ + { + "bbox": [ + 112, + 177, + 505, + 202 + ], + "type": "interline_equation", + "content": "p \\left(\\theta \\mid D ^ {*}, D _ {1: N}\\right) \\approx \\int p \\left(\\theta \\mid D ^ {*}, \\phi\\right) p \\left(\\phi \\mid D _ {1: N}\\right) d \\phi \\approx \\int p \\left(\\theta \\mid D ^ {*}, \\phi\\right) q (\\phi) d \\phi \\approx p \\left(\\theta \\mid D ^ {*}, \\phi^ {*}\\right), \\tag {12}", + "image_path": "2f5959ccc0f769ad55af78310ad6fee0e85a64c54b5b74f0ca85b2b1e5d33d69.jpg" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 104, + 205, + 504, + 251 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 205, + 504, + 251 + ], + "spans": [ + { + "bbox": [ + 104, + 205, + 504, + 251 + ], + "type": "text", + "content": "where in the first approximation in (12) we disregard the impact of " + }, + { + "bbox": [ + 104, + 205, + 504, + 251 + ], + "type": "inline_equation", + "content": "D^{*}" + }, + { + "bbox": [ + 104, + 205, + 504, + 251 + ], + "type": "text", + "content": " on the higher-level " + }, + { + "bbox": [ + 104, + 205, + 504, + 251 + ], + "type": "inline_equation", + "content": "\\phi" + }, + { + "bbox": [ + 104, + 205, + 504, + 251 + ], + "type": "text", + "content": " given the joint evidence, i.e., " + }, + { + "bbox": [ + 104, + 205, + 504, + 251 + ], + "type": "inline_equation", + "content": "p(\\phi |D^{*},D_{1:N})\\approx p(\\phi |D_{1:N})" + }, + { + "bbox": [ + 104, + 205, + 504, + 251 + ], + "type": "text", + "content": ", due to dominance of " + }, + { + "bbox": [ + 104, + 205, + 
504, + 251 + ], + "type": "inline_equation", + "content": "D_{1:N}" + }, + { + "bbox": [ + 104, + 205, + 504, + 251 + ], + "type": "text", + "content": " compared to smaller " + }, + { + "bbox": [ + 104, + 205, + 504, + 251 + ], + "type": "inline_equation", + "content": "D^{*}" + }, + { + "bbox": [ + 104, + 205, + 504, + 251 + ], + "type": "text", + "content": ". We use the delta function approximation in the last part of (12) with the mode " + }, + { + "bbox": [ + 104, + 205, + 504, + 251 + ], + "type": "inline_equation", + "content": "\\phi^*" + }, + { + "bbox": [ + 104, + 205, + 504, + 251 + ], + "type": "text", + "content": " of " + }, + { + "bbox": [ + 104, + 205, + 504, + 251 + ], + "type": "inline_equation", + "content": "q(\\phi)" + }, + { + "bbox": [ + 104, + 205, + 504, + 251 + ], + "type": "text", + "content": ", where " + }, + { + "bbox": [ + 104, + 205, + 504, + 251 + ], + "type": "inline_equation", + "content": "\\phi^{*} = (\\mu^{*},\\Sigma^{*})" + }, + { + "bbox": [ + 104, + 205, + 504, + 251 + ], + "type": "text", + "content": " has a closed form " + }, + { + "bbox": [ + 104, + 205, + 504, + 251 + ], + "type": "inline_equation", + "content": "\\mu^{*} = m_{0},\\Sigma^{*} = V_{0} / (n_{0} + d + 2)" + }, + { + "bbox": [ + 104, + 205, + 504, + 251 + ], + "type": "text", + "content": "." 
+ } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 104, + 255, + 504, + 300 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 255, + 504, + 300 + ], + "spans": [ + { + "bbox": [ + 104, + 255, + 504, + 300 + ], + "type": "text", + "content": "Next, since " + }, + { + "bbox": [ + 104, + 255, + 504, + 300 + ], + "type": "inline_equation", + "content": "p(\\theta |D^{*},\\phi^{*})" + }, + { + "bbox": [ + 104, + 255, + 504, + 300 + ], + "type": "text", + "content": " involves difficult marginalisation " + }, + { + "bbox": [ + 104, + 255, + 504, + 300 + ], + "type": "inline_equation", + "content": "p(D^{*}|\\phi^{*}) = \\int p(D^{*}|\\theta)p(\\theta |\\phi^{*})d\\theta" + }, + { + "bbox": [ + 104, + 255, + 504, + 300 + ], + "type": "text", + "content": ", we adopt variational inference, introducing a tractable variational distribution " + }, + { + "bbox": [ + 104, + 255, + 504, + 300 + ], + "type": "inline_equation", + "content": "v(\\theta)\\approx p(\\theta |D^{*},\\phi^{*})" + }, + { + "bbox": [ + 104, + 255, + 504, + 300 + ], + "type": "text", + "content": ". 
With the Gaussian family as in the training time (4), i.e., " + }, + { + "bbox": [ + 104, + 255, + 504, + 300 + ], + "type": "inline_equation", + "content": "v(\\theta) = \\mathcal{N}(\\theta ;m,V)" + }, + { + "bbox": [ + 104, + 255, + 504, + 300 + ], + "type": "text", + "content": " where " + }, + { + "bbox": [ + 104, + 255, + 504, + 300 + ], + "type": "inline_equation", + "content": "(m,V)" + }, + { + "bbox": [ + 104, + 255, + 504, + 300 + ], + "type": "text", + "content": " are the variational parameters optimised by ELBO optimisation," + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 124, + 304, + 505, + 323 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 124, + 304, + 505, + 323 + ], + "spans": [ + { + "bbox": [ + 124, + 304, + 505, + 323 + ], + "type": "interline_equation", + "content": "\\min _ {m, V} \\mathbb {E} _ {v (\\theta)} [ - \\log p (D ^ {*} | \\theta) ] + \\mathrm {K L} (v (\\theta) \\| p (\\theta | \\phi^ {*})) \\text {w h e r e} \\phi^ {*} = \\left(m _ {0}, V _ {0} / \\left(n _ {0} + d + 2\\right)\\right). \\tag {13}", + "image_path": "6cd6bf16d1daae3b1450d14177638d9cae3237110e295e570da9a7e161125d27.jpg" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 104, + 327, + 506, + 420 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 327, + 506, + 420 + ], + "spans": [ + { + "bbox": [ + 104, + 327, + 506, + 420 + ], + "type": "text", + "content": "The detailed derivations for (13) can be found in Appendix B.6. Once we have the optimised model " + }, + { + "bbox": [ + 104, + 327, + 506, + 420 + ], + "type": "inline_equation", + "content": "v" + }, + { + "bbox": [ + 104, + 327, + 506, + 420 + ], + "type": "text", + "content": ", our predictive distribution can be approximated by the Monte-Carlo average. 
" + }, + { + "bbox": [ + 104, + 327, + 506, + 420 + ], + "type": "inline_equation", + "content": "p(y^{*}|x^{*},D^{*},D_{1:N})\\approx (1 / M_S)\\sum_{s = 1}^{M_S}p(y^* |x^*,\\theta^{(s)})" + }, + { + "bbox": [ + 104, + 327, + 506, + 420 + ], + "type": "text", + "content": ", where " + }, + { + "bbox": [ + 104, + 327, + 506, + 420 + ], + "type": "inline_equation", + "content": "\\theta^{(s)}\\sim v(\\theta)" + }, + { + "bbox": [ + 104, + 327, + 506, + 420 + ], + "type": "text", + "content": " for " + }, + { + "bbox": [ + 104, + 327, + 506, + 420 + ], + "type": "inline_equation", + "content": "s = 1,\\dots ,M_S" + }, + { + "bbox": [ + 104, + 327, + 506, + 420 + ], + "type": "text", + "content": " samples, which simply requires feed-forwarding " + }, + { + "bbox": [ + 104, + 327, + 506, + 420 + ], + "type": "inline_equation", + "content": "x^{*}" + }, + { + "bbox": [ + 104, + 327, + 506, + 420 + ], + "type": "text", + "content": " through the sampled networks " + }, + { + "bbox": [ + 104, + 327, + 506, + 420 + ], + "type": "inline_equation", + "content": "\\theta^{(s)}" + }, + { + "bbox": [ + 104, + 327, + 506, + 420 + ], + "type": "text", + "content": " and averaging. Our meta-test algorithm is also summarised in Alg. 2 (Appendix). Note that we have test-time backbone update as per (13), which can make the final " + }, + { + "bbox": [ + 104, + 327, + 506, + 420 + ], + "type": "inline_equation", + "content": "m" + }, + { + "bbox": [ + 104, + 327, + 506, + 420 + ], + "type": "text", + "content": " deviate from the learned mean " + }, + { + "bbox": [ + 104, + 327, + 506, + 420 + ], + "type": "inline_equation", + "content": "m_0" + }, + { + "bbox": [ + 104, + 327, + 506, + 420 + ], + "type": "text", + "content": ". 
Alternatively, if we drop the first term in (13), the optimal " + }, + { + "bbox": [ + 104, + 327, + 506, + 420 + ], + "type": "inline_equation", + "content": "v(\\theta)" + }, + { + "bbox": [ + 104, + 327, + 506, + 420 + ], + "type": "text", + "content": " equals " + }, + { + "bbox": [ + 104, + 327, + 506, + 420 + ], + "type": "inline_equation", + "content": "p(\\theta |\\phi^{*}) = \\mathcal{N}(\\theta ;m_0,V_0 / (n_0 + d + 2))" + }, + { + "bbox": [ + 104, + 327, + 506, + 420 + ], + "type": "text", + "content": ". This can be seen as using the learned model " + }, + { + "bbox": [ + 104, + 327, + 506, + 420 + ], + "type": "inline_equation", + "content": "m_0" + }, + { + "bbox": [ + 104, + 327, + 506, + 420 + ], + "type": "text", + "content": " with some small random perturbation as a test-time backbone " + }, + { + "bbox": [ + 104, + 327, + 506, + 420 + ], + "type": "inline_equation", + "content": "\\theta" + }, + { + "bbox": [ + 104, + 327, + 506, + 420 + ], + "type": "text", + "content": "." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 104, + 435, + 441, + 449 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 435, + 441, + 449 + ], + "spans": [ + { + "bbox": [ + 104, + 435, + 441, + 449 + ], + "type": "text", + "content": "4 TOY EXPERIMENT: WHY HIERARCHICAL BAYESIAN MODEL?" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 104, + 455, + 504, + 567 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 455, + 504, + 567 + ], + "spans": [ + { + "bbox": [ + 104, + 455, + 504, + 567 + ], + "type": "text", + "content": "To demonstrate why our hierarchical Bayesian modelling is effective for few-shot meta learning problems, we devise a simple toy synthetic experiment as a proof of concept. We consider a multi-task (Bayesian) linear regression problem. 
The data pairs " + }, + { + "bbox": [ + 104, + 455, + 504, + 567 + ], + "type": "inline_equation", + "content": "(x\\in \\mathbb{R}^2,y\\in \\mathbb{R})" + }, + { + "bbox": [ + 104, + 455, + 504, + 567 + ], + "type": "text", + "content": " for each episode " + }, + { + "bbox": [ + 104, + 455, + 504, + 567 + ], + "type": "inline_equation", + "content": "i" + }, + { + "bbox": [ + 104, + 455, + 504, + 567 + ], + "type": "text", + "content": " are generated by the following process: " + }, + { + "bbox": [ + 104, + 455, + 504, + 567 + ], + "type": "inline_equation", + "content": "y = (w_{\\mathrm{shared}} + \\epsilon_w)^{\\top}x + b_{j(i)} + \\epsilon_y" + }, + { + "bbox": [ + 104, + 455, + 504, + 567 + ], + "type": "text", + "content": " where " + }, + { + "bbox": [ + 104, + 455, + 504, + 567 + ], + "type": "inline_equation", + "content": "w_{\\mathrm{shared}}" + }, + { + "bbox": [ + 104, + 455, + 504, + 567 + ], + "type": "text", + "content": " is the episode-agnostic shared weight vector " + }, + { + "bbox": [ + 104, + 455, + 504, + 567 + ], + "type": "inline_equation", + "content": "\\forall i" + }, + { + "bbox": [ + 104, + 455, + 504, + 567 + ], + "type": "text", + "content": ", and we have episode-dependent intercept " + }, + { + "bbox": [ + 104, + 455, + 504, + 567 + ], + "type": "inline_equation", + "content": "b_{j(i)}" + }, + { + "bbox": [ + 104, + 455, + 504, + 567 + ], + "type": "text", + "content": " - among the three candidates " + }, + { + "bbox": [ + 104, + 455, + 504, + 567 + ], + "type": "inline_equation", + "content": "\\{b_1,b_2,b_3\\}" + }, + { + "bbox": [ + 104, + 455, + 504, + 567 + ], + "type": "text", + "content": ", we select " + }, + { + "bbox": [ + 104, + 455, + 504, + 567 + ], + "type": "inline_equation", + "content": "j(i)\\sim \\{1,2,3\\}" + }, + { + "bbox": [ + 104, + 455, + 504, + 567 + ], + "type": "text", + "content": " uniformly at random for each episode " + }, + { + "bbox": [ + 104, + 455, + 504, + 567 + ], + "type": 
"inline_equation", + "content": "i" + }, + { + "bbox": [ + 104, + 455, + 504, + 567 + ], + "type": "text", + "content": ". Please refer to Appendix C for full details and derivations. In this way we ensure that the resulting episodes are not only related to one another through the shared weight vector, but also differentiated by potentially different intercepts. We sample " + }, + { + "bbox": [ + 104, + 455, + 504, + 567 + ], + "type": "inline_equation", + "content": "N = 40" + }, + { + "bbox": [ + 104, + 455, + 504, + 567 + ], + "type": "text", + "content": " episodes for training and 10 episodes for test. Each training episode has " + }, + { + "bbox": [ + 104, + 455, + 504, + 567 + ], + "type": "inline_equation", + "content": "|D_i| = 3" + }, + { + "bbox": [ + 104, + 455, + 504, + 567 + ], + "type": "text", + "content": " samples, and the support set at test time also has " + }, + { + "bbox": [ + 104, + 455, + 504, + 567 + ], + "type": "inline_equation", + "content": "|D_{*}| = 3" + }, + { + "bbox": [ + 104, + 455, + 504, + 567 + ], + "type": "text", + "content": " labeled samples." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 104, + 571, + 506, + 661 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 571, + 506, + 661 + ], + "spans": [ + { + "bbox": [ + 104, + 571, + 506, + 661 + ], + "type": "text", + "content": "Three competing models. We consider three Bayesian models with different degrees of flexibility and regularisation as outlined in Table 1. Model I has episode-wise parameters " + }, + { + "bbox": [ + 104, + 571, + 506, + 661 + ], + "type": "inline_equation", + "content": "(\\theta_i, \\beta_i)" + }, + { + "bbox": [ + 104, + 571, + 506, + 661 + ], + "type": "text", + "content": ", thus highly flexible. However, these parameters are all independent across episodes, hence lacking regularisation. 
Model II is a conventional (non-hierarchical) Bayesian model where a single parameter set " + }, + { + "bbox": [ + 104, + 571, + 506, + 661 + ], + "type": "inline_equation", + "content": "(\\theta, \\beta)" + }, + { + "bbox": [ + 104, + 571, + 506, + 661 + ], + "type": "text", + "content": " is shared across episodes, thus heavily regularised but lacking flexibility. Model III is our hierarchical Bayesian model, which balances flexibility and regularisation - episode-wise " + }, + { + "bbox": [ + 104, + 571, + 506, + 661 + ], + "type": "inline_equation", + "content": "(\\theta_i, \\beta_i)" + }, + { + "bbox": [ + 104, + 571, + 506, + 661 + ], + "type": "text", + "content": "'s allow high flexibility, but unlike Model I, the higher-level variable " + }, + { + "bbox": [ + 104, + 571, + 506, + 661 + ], + "type": "inline_equation", + "content": "\\phi" + }, + { + "bbox": [ + 104, + 571, + 506, + 661 + ], + "type": "text", + "content": " regularises the episode-specific parameters and captures the inter-episodic shared information." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 104, + 665, + 506, + 733 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 665, + 506, + 733 + ], + "spans": [ + { + "bbox": [ + 104, + 665, + 506, + 733 + ], + "type": "text", + "content": "Results. 
After training (learning " + }, + { + "bbox": [ + 104, + 665, + 506, + 733 + ], + "type": "inline_equation", + "content": "\\mu" + }, + { + "bbox": [ + 104, + 665, + 506, + 733 + ], + "type": "text", + "content": " for Models I and II; learning " + }, + { + "bbox": [ + 104, + 665, + 506, + 733 + ], + "type": "inline_equation", + "content": "(m,V)" + }, + { + "bbox": [ + 104, + 665, + 506, + 733 + ], + "type": "text", + "content": " for our Model III), at test time, for each of 10 test episodes, we obtain the posterior means of the weights and intercept parameters " + }, + { + "bbox": [ + 104, + 665, + 506, + 733 + ], + "type": "inline_equation", + "content": "\\mathbb{E}[\\theta ,\\beta |D_{*},D_{1:N}]" + }, + { + "bbox": [ + 104, + 665, + 506, + 733 + ], + "type": "text", + "content": " for the three models. They all admit closed-form solutions as detailed in Appendix C, and we predict the outputs of " + }, + { + "bbox": [ + 104, + 665, + 506, + 733 + ], + "type": "inline_equation", + "content": "\\sim 50" + }, + { + "bbox": [ + 104, + 665, + 506, + 733 + ], + "type": "text", + "content": " unseen test inputs. The mean absolute errors (MAE) averaged over 10 test episodes are: Model " + }, + { + "bbox": [ + 104, + 665, + 506, + 733 + ], + "type": "inline_equation", + "content": "\\mathbf{I} = 2.87" + }, + { + "bbox": [ + 104, + 665, + 506, + 733 + ], + "type": "text", + "content": ", Model " + }, + { + "bbox": [ + 104, + 665, + 506, + 733 + ], + "type": "inline_equation", + "content": "\\mathbf{II} = 3.13" + }, + { + "bbox": [ + 104, + 665, + 506, + 733 + ], + "type": "text", + "content": ", and Model III (ours) " + }, + { + "bbox": [ + 104, + 665, + 506, + 733 + ], + "type": "inline_equation", + "content": "= 1.28" + }, + { + "bbox": [ + 104, + 665, + 506, + 733 + ], + "type": "text", + "content": ", clearly showing the superiority of our model over the competing methods. Fig. 
2 visualises" + } + ] + } + ], + "index": 12 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "spans": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "type": "text", + "content": "Published as a conference paper at ICLR 2024" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 302, + 751, + 309, + 760 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 751, + 309, + 760 + ], + "spans": [ + { + "bbox": [ + 302, + 751, + 309, + 760 + ], + "type": "text", + "content": "5" + } + ] + } + ], + "index": 13 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 4 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 106, + 45, + 236, + 144 + ], + "blocks": [ + { + "bbox": [ + 106, + 45, + 236, + 144 + ], + "lines": [ + { + "bbox": [ + 106, + 45, + 236, + 144 + ], + "spans": [ + { + "bbox": [ + 106, + 45, + 236, + 144 + ], + "type": "image", + "image_path": "55eef7be1fab734ffd0c61b433fecdbf00db1cf6e4e216f8d00619fb9f3d83e1.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 143, + 147, + 212, + 159 + ], + "lines": [ + { + "bbox": [ + 143, + 147, + 212, + 159 + ], + "spans": [ + { + "bbox": [ + 143, + 147, + 212, + 159 + ], + "type": "text", + "content": "(a) Weight dim-1" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_caption" + } + ], + "index": 1 + }, + { + "type": "image", + "bbox": [ + 240, + 45, + 370, + 144 + ], + "blocks": [ + { + "bbox": [ + 240, + 45, + 370, + 144 + ], + "lines": [ + { + "bbox": [ + 240, + 45, + 370, + 144 + ], + "spans": [ + { + "bbox": [ + 240, + 45, + 370, + 144 + ], + "type": "image", + "image_path": "cd5ba6eb9edb30d638f66b8bd767426c90d81efe9b861dc78d894c476d5c5fae.jpg" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 269, + 147, + 340, + 159 + ], + "lines": [ + { + "bbox": [ 
+ 269, + 147, + 340, + 159 + ], + "spans": [ + { + "bbox": [ + 269, + 147, + 340, + 159 + ], + "type": "text", + "content": "(b) Weight dim-2" + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 104, + 160, + 504, + 217 + ], + "lines": [ + { + "bbox": [ + 104, + 160, + 504, + 217 + ], + "spans": [ + { + "bbox": [ + 104, + 160, + 504, + 217 + ], + "type": "text", + "content": "Figure 2: Toy experiments. Visualisation of the learned posterior means compared to the true values (blue-circled). (a) weight dim-1 (" + }, + { + "bbox": [ + 104, + 160, + 504, + 217 + ], + "type": "inline_equation", + "content": "\\theta[0]" + }, + { + "bbox": [ + 104, + 160, + 504, + 217 + ], + "type": "text", + "content": " vs. " + }, + { + "bbox": [ + 104, + 160, + 504, + 217 + ], + "type": "inline_equation", + "content": "w_{\\mathrm{shared}}[0]" + }, + { + "bbox": [ + 104, + 160, + 504, + 217 + ], + "type": "text", + "content": "), (b) weight dim-2 (" + }, + { + "bbox": [ + 104, + 160, + 504, + 217 + ], + "type": "inline_equation", + "content": "\\theta[1]" + }, + { + "bbox": [ + 104, + 160, + 504, + 217 + ], + "type": "text", + "content": " vs. " + }, + { + "bbox": [ + 104, + 160, + 504, + 217 + ], + "type": "inline_equation", + "content": "w_{\\mathrm{shared}}[1]" + }, + { + "bbox": [ + 104, + 160, + 504, + 217 + ], + "type": "text", + "content": ") and (c) intercept (" + }, + { + "bbox": [ + 104, + 160, + 504, + 217 + ], + "type": "inline_equation", + "content": "\\beta" + }, + { + "bbox": [ + 104, + 160, + 504, + 217 + ], + "type": "text", + "content": " vs. " + }, + { + "bbox": [ + 104, + 160, + 504, + 217 + ], + "type": "inline_equation", + "content": "b_{j(*)}" + }, + { + "bbox": [ + 104, + 160, + 504, + 217 + ], + "type": "text", + "content": "). 
In each plot, the X-axis shows the indices of the true intercepts sampled, that is, " + }, + { + "bbox": [ + 104, + 160, + 504, + 217 + ], + "type": "inline_equation", + "content": "j(*) \\in \\{1, 2, 3\\}" + }, + { + "bbox": [ + 104, + 160, + 504, + 217 + ], + "type": "text", + "content": ", for 10 test episodes. In the titles we also report the distances (errors) between the true values and the posterior means for the three methods, averaged over 10 episodes." + } + ] + } + ], + "index": 7, + "angle": 0, + "type": "image_caption" + } + ], + "index": 3 + }, + { + "type": "image", + "bbox": [ + 373, + 45, + 503, + 144 + ], + "blocks": [ + { + "bbox": [ + 373, + 45, + 503, + 144 + ], + "lines": [ + { + "bbox": [ + 373, + 45, + 503, + 144 + ], + "spans": [ + { + "bbox": [ + 373, + 45, + 503, + 144 + ], + "type": "image", + "image_path": "d2fd552f47bc51f8ab035176664e7bf88c1d3c23f66df53e278d296b7f87a585.jpg" + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 415, + 147, + 466, + 159 + ], + "lines": [ + { + "bbox": [ + 415, + 147, + 466, + 159 + ], + "spans": [ + { + "bbox": [ + 415, + 147, + 466, + 159 + ], + "type": "text", + "content": "(c) Intercept" + } + ] + } + ], + "index": 6, + "angle": 0, + "type": "image_caption" + } + ], + "index": 5 + }, + { + "bbox": [ + 104, + 228, + 506, + 350 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 228, + 506, + 350 + ], + "spans": [ + { + "bbox": [ + 104, + 228, + 506, + 350 + ], + "type": "text", + "content": "the results. First, Model II's posterior means rarely change over test episodes, meaning the impact of test support data " + }, + { + "bbox": [ + 104, + 228, + 506, + 350 + ], + "type": "inline_equation", + "content": "D_{*}" + }, + { + "bbox": [ + 104, + 228, + 506, + 350 + ], + "type": "text", + "content": " is limited. 
This behavior is expected since the model imposes too much regularisation with little flexibility, and the test prediction is dominated by the mean model obtained from training data " + }, + { + "bbox": [ + 104, + 228, + 506, + 350 + ], + "type": "inline_equation", + "content": "D_{1:N}" + }, + { + "bbox": [ + 104, + 228, + 506, + 350 + ], + "type": "text", + "content": ". Model I exhibits highly sensitive predictions over test episodes, which mainly originates from too little regularisation: the posterior is too sensitive to the current episode's support data, making it vulnerable to overfitting, especially when the support set is small, as is typical in few-shot learning. The model fails to capture useful shared information " + }, + { + "bbox": [ + 104, + 228, + 506, + 350 + ], + "type": "inline_equation", + "content": "w_{\\mathrm{shared}}" + }, + { + "bbox": [ + 104, + 228, + 506, + 350 + ], + "type": "text", + "content": ". Our Model III balances between these two extremes, imposing a proper amount of regularisation while retaining adequate flexibility. Our posterior estimate best extracts the shared episode-agnostic information (the weight parameters fluctuate the least over test episodes) and captures the episode-specific features most accurately (the estimated intercepts align best with the true values)."
+ } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 105, + 365, + 441, + 377 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 365, + 441, + 377 + ], + "spans": [ + { + "bbox": [ + 105, + 365, + 441, + 377 + ], + "type": "text", + "content": "5 THEORETICAL ANALYSIS: GENERALISATION ERROR BOUNDS" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 104, + 385, + 504, + 452 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 385, + 504, + 452 + ], + "spans": [ + { + "bbox": [ + 104, + 385, + 504, + 452 + ], + "type": "text", + "content": "We offer two theorems for the generalisation error of the proposed model. The first theorem relates the generalisation error to the ultimate ELBO loss (6) that we minimised in our algorithm, and we utilise the recent PAC-Bayes- " + }, + { + "bbox": [ + 104, + 385, + 504, + 452 + ], + "type": "inline_equation", + "content": "\\lambda" + }, + { + "bbox": [ + 104, + 385, + 504, + 452 + ], + "type": "text", + "content": " bound (Thiemann et al., 2017; Rivasplata et al., 2019). The second theorem is based on the recent regression analysis technique (Pati et al., 2018; Bai et al., 2020). Without loss of generality we assume " + }, + { + "bbox": [ + 104, + 385, + 504, + 452 + ], + "type": "inline_equation", + "content": "|D_{i}| = n" + }, + { + "bbox": [ + 104, + 385, + 504, + 452 + ], + "type": "text", + "content": " for all episodes " + }, + { + "bbox": [ + 104, + 385, + 504, + 452 + ], + "type": "inline_equation", + "content": "i" + }, + { + "bbox": [ + 104, + 385, + 504, + 452 + ], + "type": "text", + "content": ". We let " + }, + { + "bbox": [ + 104, + 385, + 504, + 452 + ], + "type": "inline_equation", + "content": "(q^{*}(\\phi),\\{q_{i}^{*}(\\theta_{i})\\}_{i = 1}^{N})" + }, + { + "bbox": [ + 104, + 385, + 504, + 452 + ], + "type": "text", + "content": " be the optimal solution of (6). We leave technical details and the proofs for both theorems in Appendix A." 
+ } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 104, + 454, + 506, + 488 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 454, + 506, + 488 + ], + "spans": [ + { + "bbox": [ + 104, + 454, + 506, + 488 + ], + "type": "text", + "content": "Theorem 5.1 (PAC-Bayes-" + }, + { + "bbox": [ + 104, + 454, + 506, + 488 + ], + "type": "inline_equation", + "content": "\\lambda" + }, + { + "bbox": [ + 104, + 454, + 506, + 488 + ], + "type": "text", + "content": " bound). Let " + }, + { + "bbox": [ + 104, + 454, + 506, + 488 + ], + "type": "inline_equation", + "content": "R_{i}(\\theta)" + }, + { + "bbox": [ + 104, + 454, + 506, + 488 + ], + "type": "text", + "content": " be the generalisation error of model " + }, + { + "bbox": [ + 104, + 454, + 506, + 488 + ], + "type": "inline_equation", + "content": "\\theta" + }, + { + "bbox": [ + 104, + 454, + 506, + 488 + ], + "type": "text", + "content": " for the task " + }, + { + "bbox": [ + 104, + 454, + 506, + 488 + ], + "type": "inline_equation", + "content": "i" + }, + { + "bbox": [ + 104, + 454, + 506, + 488 + ], + "type": "text", + "content": ", more specifically, " + }, + { + "bbox": [ + 104, + 454, + 506, + 488 + ], + "type": "inline_equation", + "content": "R_{i}(\\theta) = \\mathbb{E}_{(x,y)\\sim \\mathcal{T}_{i}}[-\\log p(y|x,\\theta)]" + }, + { + "bbox": [ + 104, + 454, + 506, + 488 + ], + "type": "text", + "content": ". 
As the number of training episodes " + }, + { + "bbox": [ + 104, + 454, + 506, + 488 + ], + "type": "inline_equation", + "content": "N\to \infty" + }, + { + "bbox": [ + 104, + 454, + 506, + 488 + ], + "type": "text", + "content": ", the following holds with probability at least " + }, + { + "bbox": [ + 104, + 454, + 506, + 488 + ], + "type": "inline_equation", + "content": "1 - \delta" + }, + { + "bbox": [ + 104, + 454, + 506, + 488 + ], + "type": "text", + "content": " for arbitrarily small " + }, + { + "bbox": [ + 104, + 454, + 506, + 488 + ], + "type": "inline_equation", + "content": "\delta >0" + }, + { + "bbox": [ + 104, + 454, + 506, + 488 + ], + "type": "text", + "content": ":" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 167, + 492, + 504, + 515 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 167, + 492, + 504, + 515 + ], + "spans": [ + { + "bbox": [ + 167, + 492, + 504, + 515 + ], + "type": "interline_equation", + "content": "\mathbb{E}_{i \sim \mathcal{T}} \mathbb{E}_{q_{i}^{*}(\theta_{i})} [R_{i}(\theta_{i})] \leq \frac{2 \epsilon^{*}}{n} \quad \text{where } \epsilon^{*} = \text{the optimal value of (6)}. \tag{14}", + "image_path": "cfaebd4fd3ef3a7f3e53ff2c008be7c89f8d13aabbbcfc455ff1b640e13a0080.jpg" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 104, + 518, + 504, + 554 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 518, + 504, + 554 + ], + "spans": [ + { + "bbox": [ + 104, + 518, + 504, + 554 + ], + "type": "text", + "content": "Theorem 5.2 (Bound derived from regression analysis). 
Let " + }, + { + "bbox": [ + 104, + 518, + 504, + 554 + ], + "type": "inline_equation", + "content": "d_H^2(P_{\\theta_i}, P^i)" + }, + { + "bbox": [ + 104, + 518, + 504, + 554 + ], + "type": "text", + "content": " be the expected squared Hellinger distance between the true distribution " + }, + { + "bbox": [ + 104, + 518, + 504, + 554 + ], + "type": "inline_equation", + "content": "P^i(y|x)" + }, + { + "bbox": [ + 104, + 518, + 504, + 554 + ], + "type": "text", + "content": " and model's " + }, + { + "bbox": [ + 104, + 518, + 504, + 554 + ], + "type": "inline_equation", + "content": "P_{\\theta_i}(y|x)" + }, + { + "bbox": [ + 104, + 518, + 504, + 554 + ], + "type": "text", + "content": " for task " + }, + { + "bbox": [ + 104, + 518, + 504, + 554 + ], + "type": "inline_equation", + "content": "i" + }, + { + "bbox": [ + 104, + 518, + 504, + 554 + ], + "type": "text", + "content": ". As the number of training episodes " + }, + { + "bbox": [ + 104, + 518, + 504, + 554 + ], + "type": "inline_equation", + "content": "N \\to \\infty" + }, + { + "bbox": [ + 104, + 518, + 504, + 554 + ], + "type": "text", + "content": ", the following holds with high probability:" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 197, + 558, + 504, + 581 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 197, + 558, + 504, + 581 + ], + "spans": [ + { + "bbox": [ + 197, + 558, + 504, + 581 + ], + "type": "interline_equation", + "content": "\\mathbb {E} _ {i \\sim \\mathcal {T}} \\mathbb {E} _ {q _ {i} ^ {*} (\\theta_ {i})} \\left[ d _ {H} ^ {2} \\left(P _ {\\theta_ {i}}, P ^ {i}\\right) \\right] \\leq O \\left(\\frac {1}{n} + \\epsilon_ {n} ^ {2} + r _ {n}\\right) + \\lambda^ {*}, \\tag {15}", + "image_path": "185fdf1b915a9229a869031bcf25b6a7178cb716a49fa398d25c824704e7450f.jpg" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 104, + 585, + 504, + 608 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 585, + 504, + 608 + ], + 
"spans": [ + { + "bbox": [ + 104, + 585, + 504, + 608 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 104, + 585, + 504, + 608 + ], + "type": "inline_equation", + "content": "\\lambda^{*} = \\mathbb{E}_{i\\sim \\mathcal{T}}[\\lambda_{i}^{*}]" + }, + { + "bbox": [ + 104, + 585, + 504, + 608 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 104, + 585, + 504, + 608 + ], + "type": "inline_equation", + "content": "\\lambda_{i}^{*} = \\min_{\\theta \\in \\Theta}||\\mathbb{E}_{\\theta}[y|\\cdot ] - \\mathbb{E}^{i}[y|\\cdot ]||_{\\infty}^{2}" + }, + { + "bbox": [ + 104, + 585, + 504, + 608 + ], + "type": "text", + "content": " is the lowest possible regression error within " + }, + { + "bbox": [ + 104, + 585, + 504, + 608 + ], + "type": "inline_equation", + "content": "\\Theta" + }, + { + "bbox": [ + 104, + 585, + 504, + 608 + ], + "type": "text", + "content": ", and " + }, + { + "bbox": [ + 104, + 585, + 504, + 608 + ], + "type": "inline_equation", + "content": "r_n,\\epsilon_n" + }, + { + "bbox": [ + 104, + 585, + 504, + 608 + ], + "type": "text", + "content": " are decreasing sequences vanishing to 0 as " + }, + { + "bbox": [ + 104, + 585, + 504, + 608 + ], + "type": "inline_equation", + "content": "n" + }, + { + "bbox": [ + 104, + 585, + 504, + 608 + ], + "type": "text", + "content": " increases." 
+ } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 105, + 624, + 212, + 635 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 624, + 212, + 635 + ], + "spans": [ + { + "bbox": [ + 105, + 624, + 212, + 635 + ], + "type": "text", + "content": "6 RELATED WORK" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 104, + 643, + 506, + 732 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 643, + 506, + 732 + ], + "spans": [ + { + "bbox": [ + 104, + 643, + 506, + 732 + ], + "type": "text", + "content": "Due to limited space it is infeasible to review all general FSL and meta learning algorithms here. We refer the readers to (Hospedales et al., 2022; Wang et al., 2020a), the excellent comprehensive surveys on the latest techniques. We rather focus on discussing recent Bayesian approaches and their relation to ours. Although several Bayesian FSL approaches have been proposed before, most of them dealt with only a small fraction of the network weights (e.g., a readout head alone) as random variables (Garnelo et al., 2018; Kim et al., 2019; Requeima et al., 2019; Gordon et al., 2019; Patacchiola et al., 2020; Zhang et al., 2021). This considerably limits the benefits from uncertainty modeling of full network parameters." 
+ } + ] + } + ], + "index": 17 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "spans": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "type": "text", + "content": "Published as a conference paper at ICLR 2024" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 302, + 751, + 308, + 760 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 751, + 308, + 760 + ], + "spans": [ + { + "bbox": [ + 302, + 751, + 308, + 760 + ], + "type": "text", + "content": "6" + } + ] + } + ], + "index": 18 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 5 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 106, + 60, + 504, + 178 + ], + "blocks": [ + { + "bbox": [ + 106, + 41, + 504, + 60 + ], + "lines": [ + { + "bbox": [ + 106, + 41, + 504, + 60 + ], + "spans": [ + { + "bbox": [ + 106, + 41, + 504, + 60 + ], + "type": "text", + "content": "Table 2: Classification accuracies with standard backbones on miniImageNet and tieredImageNet. (a) miniImageNet (b) tieredImageNet" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 106, + 60, + 504, + 178 + ], + "lines": [ + { + "bbox": [ + 106, + 60, + 504, + 178 + ], + "spans": [ + { + "bbox": [ + 106, + 60, + 504, + 178 + ], + "type": "table", + "html": "
<table><tr><th>Model</th><th>Backbone</th><th>1-Shot</th><th>5-Shot</th></tr>
<tr><td>AM3 (Xing et al., 2019)</td><td>ResNet12</td><td>65.21±0.49</td><td>75.20±0.36</td></tr>
<tr><td>RelationNet2 (Zhang et al., 2020)</td><td>ResNet12</td><td>63.92±0.98</td><td>77.15±0.59</td></tr>
<tr><td>MetaOpt (Lee et al., 2019)</td><td>ResNet12</td><td>64.09±0.62</td><td>80.00±0.45</td></tr>
<tr><td>SimpleShot (Wang et al., 2019)</td><td>ResNet18</td><td>62.85±0.20</td><td>80.02±0.14</td></tr>
<tr><td>S2M2 (Mangla et al., 2020)</td><td>ResNet18</td><td>64.06±0.18</td><td>80.58±0.12</td></tr>
<tr><td>MetaQDA (Zhang et al., 2021)</td><td>ResNet18</td><td>65.12±0.66</td><td>80.98±0.75</td></tr>
<tr><td>NIW-Meta (Ours)</td><td>ResNet18</td><td>65.49±0.56</td><td>81.71±0.17</td></tr>
<tr><td>SimpleShot</td><td>WRN28-10</td><td>63.50±0.20</td><td>80.33±0.14</td></tr>
<tr><td>S2M2</td><td>WRN28-10</td><td>64.93±0.18</td><td>83.18±0.22</td></tr>
<tr><td>MetaQDA</td><td>WRN28-10</td><td>67.83±0.64</td><td>84.28±0.69</td></tr>
<tr><td>NIW-Meta (Ours)</td><td>WRN28-10</td><td>68.54±0.26</td><td>84.81±0.28</td></tr></table>
", + "image_path": "2ac84358521c6f158198c2411c525d020b6e865ffe74ddaf87f9c29332753396.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "table_body" + } + ], + "index": 2 + }, + { + "bbox": [ + 104, + 186, + 506, + 397 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 186, + 506, + 397 + ], + "spans": [ + { + "bbox": [ + 104, + 186, + 506, + 397 + ], + "type": "text", + "content": "Bayesian approaches to MAML (Finn et al., 2018; Yoon et al., 2018; Ravi & Beatson, 2019; Nguyen et al., 2020) are popular probabilistic extensions of the gradient-based adaptation in MAML (Finn et al., 2017) with known theoretical support (Chen & Chen, 2022). However, they are weak in several aspects to be considered as principled Bayesian methods. For instance, Probabilistic MAML (PMAML) (Finn et al., 2018; Grant et al., 2018) has a similar hierarchical graphical model structure as ours, but their learning algorithm considerably deviates from the original variational inference objective. 
Unlike the original derivation of the KL term measuring the divergence between the posterior and prior on the task-specific variable " + }, + { + "bbox": [ + 104, + 186, + 506, + 397 + ], + "type": "inline_equation", + "content": "\\theta_{i}" + }, + { + "bbox": [ + 104, + 186, + 506, + 397 + ], + "type": "text", + "content": ", namely " + }, + { + "bbox": [ + 104, + 186, + 506, + 397 + ], + "type": "inline_equation", + "content": "\\mathbb{E}_{q(\\phi)}[\\mathrm{KL}(q_i(\\theta_i|\\phi)||p(\\theta_i|\\phi))]" + }, + { + "bbox": [ + 104, + 186, + 506, + 397 + ], + "type": "text", + "content": " as in (5), in PMAML they measure the divergence on the global variable " + }, + { + "bbox": [ + 104, + 186, + 506, + 397 + ], + "type": "inline_equation", + "content": "\\phi" + }, + { + "bbox": [ + 104, + 186, + 506, + 397 + ], + "type": "text", + "content": ", aiming to align the two adapted models, one from the support data only " + }, + { + "bbox": [ + 104, + 186, + 506, + 397 + ], + "type": "inline_equation", + "content": "q(\\phi |S_i)" + }, + { + "bbox": [ + 104, + 186, + 506, + 397 + ], + "type": "text", + "content": " and the other from both support and query " + }, + { + "bbox": [ + 104, + 186, + 506, + 397 + ], + "type": "inline_equation", + "content": "q(\\phi |S_i,Q_i)" + }, + { + "bbox": [ + 104, + 186, + 506, + 397 + ], + "type": "text", + "content": ". VAMPIRE (Nguyen et al., 2020) incorporates uncertainty modeling to MAML by extending MAML's point estimate to a distributional one that is learned by variational inference. However, it inherits all computational overheads from MAML, hindering scalability. 
The BMAML (Yoon et al., 2018) is not a hierarchical Bayesian model, but aims to replace MAML's gradient-based deterministic adaptation steps by the stochastic counterpart using the samples (called particles) from " + }, + { + "bbox": [ + 104, + 186, + 506, + 397 + ], + "type": "inline_equation", + "content": "p(\\theta_i|S_i)" + }, + { + "bbox": [ + 104, + 186, + 506, + 397 + ], + "type": "text", + "content": ", thus adopting stochastic ensemble-based adaptation steps. If a single particle is used, it reduces exactly to MAML. Thus existing Bayesian approaches are not directly related to our hierarchical Bayesian perspective. A related but different line of research studied Bayesian neural processes (Volpp et al., 2023; 2021; Qi Wang, 2020; Garnelo et al., 2018) by treating the support set embedding as random variates." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 105, + 405, + 194, + 417 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 405, + 194, + 417 + ], + "spans": [ + { + "bbox": [ + 105, + 405, + 194, + 417 + ], + "type": "text", + "content": "7 EVALUATION" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 105, + 422, + 251, + 433 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 422, + 251, + 433 + ], + "spans": [ + { + "bbox": [ + 105, + 422, + 251, + 433 + ], + "type": "text", + "content": "7.1 FEW-SHOT CLASSIFICATION" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 104, + 434, + 504, + 491 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 434, + 504, + 491 + ], + "spans": [ + { + "bbox": [ + 104, + 434, + 504, + 491 + ], + "type": "text", + "content": "Standard benchmarks with ResNet backbones. For standard benchmark comparison using the popular ResNet backbones, ResNet-18 (He et al., 2016) and WideResNet (Zagoruyko & Komodakis, 2016), we test our method on: miniImagenet and tieredImageNet (Table 2). 
We follow the standard protocols (details of experimental settings in Appendix D). Our NIW-Meta exhibits consistent improvement over the SOTAs for different settings in support set size and backbones." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 105, + 495, + 230, + 604 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 495, + 230, + 604 + ], + "spans": [ + { + "bbox": [ + 105, + 495, + 230, + 604 + ], + "type": "text", + "content": "Large-scale ViT backbones. We also test our method on large-scale (pretrained) ViT backbones DINO-small (DINO/s) and DINO-base (DINO/b) (Caron et al., 2021), similarly to the setup in (Hu et al., 2022). In Table 3 we report results on the three benchmarks: miniImagenet, CIFAR" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 104, + 605, + 504, + 672 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 605, + 504, + 672 + ], + "spans": [ + { + "bbox": [ + 104, + 605, + 504, + 672 + ], + "type": "text", + "content": "FS, and tieredImageNet. Our NIW-Meta adopts the same NCC head as ProtoNet after the ViT feature extractor. As claimed in (Hu et al., 2022), using the pretrained feature extractor and further finetuning it significantly boosts the performance of few-shot learning algorithms including ours. Among the competing methods, our approach yields the best accuracy in most cases. In particular, compared to the shallow Bayesian MetaQDA (Zhang et al., 2021), treating all network weights as random variates in our model turns out to be more effective than treating the readout parameters alone." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 105, + 677, + 251, + 688 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 677, + 251, + 688 + ], + "spans": [ + { + "bbox": [ + 105, + 677, + 251, + 688 + ], + "type": "text", + "content": "Set-based adaptation backbones." 
+ } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 105, + 688, + 251, + 731 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 688, + 251, + 731 + ], + "spans": [ + { + "bbox": [ + 105, + 688, + 251, + 731 + ], + "type": "text", + "content": "We also conduct experiments using the set-based adaptation architecture called FEAT introduced in (Ye et al., 2020). The network is tailored for" + } + ] + } + ], + "index": 10 + }, + { + "type": "table", + "bbox": [ + 233, + 508, + 506, + 602 + ], + "blocks": [ + { + "bbox": [ + 262, + 497, + 500, + 507 + ], + "lines": [ + { + "bbox": [ + 262, + 497, + 500, + 507 + ], + "spans": [ + { + "bbox": [ + 262, + 497, + 500, + 507 + ], + "type": "text", + "content": "3: Classification accuracies with large-scale ViT backbones." + } + ] + } + ], + "index": 11, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 233, + 508, + 506, + 602 + ], + "lines": [ + { + "bbox": [ + 233, + 508, + 506, + 602 + ], + "spans": [ + { + "bbox": [ + 233, + 508, + 506, + 602 + ], + "type": "table", + "html": "
<table><tr><th rowspan="2">Model</th><th rowspan="2">Backbone / Pretrain</th><th colspan="2">miniImageNet</th><th colspan="2">CIFAR-FS</th><th colspan="2">tieredImageNet</th></tr>
<tr><th>1-shot</th><th>5-shot</th><th>1-shot</th><th>5-shot</th><th>1-shot</th><th>5-shot</th></tr>
<tr><td>ProtoNet</td><td>DINO/s</td><td>93.1±0.12</td><td>98.0±0.14</td><td>81.1±0.29</td><td>92.5±0.13</td><td>89.0±0.11</td><td>95.8±0.09</td></tr>
<tr><td>MetaOpt</td><td>DINO/s</td><td>92.2±0.22</td><td>97.8±0.16</td><td>70.2±0.22</td><td>84.1±0.27</td><td>87.5±0.25</td><td>94.7±0.20</td></tr>
<tr><td>MetaQDA</td><td>DINO/s</td><td>92.0±0.31</td><td>97.0±0.18</td><td>77.2±0.34</td><td>90.1±0.18</td><td>87.8±0.27</td><td>95.6±0.16</td></tr>
<tr><td>NIW-Meta</td><td>DINO/s</td><td>93.4±0.17</td><td>98.2±0.15</td><td>82.8±0.26</td><td>92.9±0.11</td><td>89.3±0.16</td><td>96.0±0.14</td></tr>
<tr><td>ProtoNet</td><td>DINO/b</td><td>95.3±0.13</td><td>98.4±0.12</td><td>84.3±0.19</td><td>92.2±0.13</td><td>91.2±0.15</td><td>96.5±0.10</td></tr>
<tr><td>MetaOpt</td><td>DINO/b</td><td>94.4±0.19</td><td>98.4±0.16</td><td>72.0±0.29</td><td>86.2±0.18</td><td>89.5±0.27</td><td>95.7±0.15</td></tr>
<tr><td>MetaQDA</td><td>DINO/b</td><td>94.7±0.21</td><td>98.7±0.14</td><td>80.9±0.31</td><td>93.8±0.15</td><td>89.7±0.21</td><td>96.5±0.07</td></tr>
<tr><td>NIW-Meta</td><td>DINO/b</td><td>95.5±0.15</td><td>98.7±0.12</td><td>84.7±0.13</td><td>93.2±0.17</td><td>91.4±0.21</td><td>96.7±0.11</td></tr></table>
", + "image_path": "92ff87bf46326d365537af143c5c35416352cf961d1c044ec4cb3788df29fd3c.jpg" + } + ] + } + ], + "index": 12, + "angle": 0, + "type": "table_body" + } + ], + "index": 12 + }, + { + "type": "table", + "bbox": [ + 255, + 687, + 504, + 727 + ], + "blocks": [ + { + "bbox": [ + 262, + 677, + 491, + 687 + ], + "lines": [ + { + "bbox": [ + 262, + 677, + 491, + 687 + ], + "spans": [ + { + "bbox": [ + 262, + 677, + 491, + 687 + ], + "type": "text", + "content": "Table 4:FEAT vs. our method.Classification accuracies." + } + ] + } + ], + "index": 13, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 255, + 687, + 504, + 727 + ], + "lines": [ + { + "bbox": [ + 255, + 687, + 504, + 727 + ], + "spans": [ + { + "bbox": [ + 255, + 687, + 504, + 727 + ], + "type": "table", + "html": "
<table><tr><th rowspan="2">Model</th><th colspan="2">miniImageNet</th><th colspan="2">tieredImageNet</th></tr>
<tr><th>1-shot</th><th>5-shot</th><th>1-shot</th><th>5-shot</th></tr>
<tr><td>FEAT</td><td>66.78</td><td>82.05</td><td>70.80±0.23</td><td>84.79±0.16</td></tr>
<tr><td>NIW-Meta (Ours)</td><td>66.91±0.10</td><td>82.28±0.15</td><td>70.93±0.27</td><td>85.20±0.19</td></tr></table>
", + "image_path": "2d6768db5d10f1d58c83c9552d50726dc6d5014a92109f54dc9a52f7dae7cab2.jpg" + } + ] + } + ], + "index": 14, + "angle": 0, + "type": "table_body" + } + ], + "index": 14 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "spans": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "type": "text", + "content": "Published as a conference paper at ICLR 2024" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 302, + 751, + 308, + 760 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 751, + 308, + 760 + ], + "spans": [ + { + "bbox": [ + 302, + 751, + 308, + 760 + ], + "type": "text", + "content": "7" + } + ] + } + ], + "index": 15 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 6 + }, + { + "para_blocks": [ + { + "bbox": [ + 104, + 81, + 506, + 127 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 81, + 506, + 127 + ], + "spans": [ + { + "bbox": [ + 104, + 81, + 506, + 127 + ], + "type": "text", + "content": "few-shot adaptation, namely " + }, + { + "bbox": [ + 104, + 81, + 506, + 127 + ], + "type": "inline_equation", + "content": "y^{Q} = G(x^{Q}, S; \\theta)" + }, + { + "bbox": [ + 104, + 81, + 506, + 127 + ], + "type": "text", + "content": " where the network " + }, + { + "bbox": [ + 104, + 81, + 506, + 127 + ], + "type": "inline_equation", + "content": "G" + }, + { + "bbox": [ + 104, + 81, + 506, + 127 + ], + "type": "text", + "content": " takes the entire support set " + }, + { + "bbox": [ + 104, + 81, + 506, + 127 + ], + "type": "inline_equation", + "content": "S" + }, + { + "bbox": [ + 104, + 81, + 506, + 127 + ], + "type": "text", + "content": " and query image " + }, + { + "bbox": [ + 104, + 81, + 506, + 127 + ], + "type": "inline_equation", + "content": "x^{Q}" + }, + { + "bbox": [ + 104, + 81, + 506, + 127 + ], + "type": "text", + "content": " as input. 
Note that our NIW-Meta can incorporate any network architecture, even the set-based one like FEAT. As shown in Table 4, the Bayesian treatment leads to further improvement over (Ye et al., 2020) with this set-based architecture." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 104, + 131, + 278, + 266 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 131, + 278, + 266 + ], + "spans": [ + { + "bbox": [ + 104, + 131, + 278, + 266 + ], + "type": "text", + "content": "Error calibration. Bayesian models are known to be better calibrated than deterministic counterparts. We measure the expected calibration errors (ECE) (Guo et al., 2017) to judge how well the prediction accuracy and the prediction confidence are aligned - " + }, + { + "bbox": [ + 104, + 131, + 278, + 266 + ], + "type": "inline_equation", + "content": "ECE = \\sum_{b=1}^{B} \\frac{N_b}{N} |acc(b) - conf(b)|" + }, + { + "bbox": [ + 104, + 131, + 278, + 266 + ], + "type": "text", + "content": " where we partition test instances into " + }, + { + "bbox": [ + 104, + 131, + 278, + 266 + ], + "type": "inline_equation", + "content": "B" + }, + { + "bbox": [ + 104, + 131, + 278, + 266 + ], + "type": "text", + "content": " bins along the model's prediction confidence scores, and " + }, + { + "bbox": [ + 104, + 131, + 278, + 266 + ], + "type": "inline_equation", + "content": "conf(b)" + }, + { + "bbox": [ + 104, + 131, + 278, + 266 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 104, + 131, + 278, + 266 + ], + "type": "inline_equation", + "content": "acc(b)" + }, + { + "bbox": [ + 104, + 131, + 278, + 266 + ], + "type": "text", + "content": " are the average confidence and accuracy for the " + }, + { + "bbox": [ + 104, + 131, + 278, + 266 + ], + "type": "inline_equation", + "content": "b" + }, + { + "bbox": [ + 104, + 131, + 278, + 266 + ], + "type": "text", + "content": "-th bin, respectively. 
The results on miniImageNet" + } + ] + } + ], + "index": 2 + }, + { + "type": "table", + "bbox": [ + 281, + 153, + 506, + 264 + ], + "blocks": [ + { + "bbox": [ + 280, + 131, + 504, + 153 + ], + "lines": [ + { + "bbox": [ + 280, + 131, + 504, + 153 + ], + "spans": [ + { + "bbox": [ + 280, + 131, + 504, + 153 + ], + "type": "text", + "content": "Table 5: ECEs on miniImageNet. \"ECE+TS\" indicates extra tuning of the temperature hyperparameter." + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 281, + 153, + 506, + 264 + ], + "lines": [ + { + "bbox": [ + 281, + 153, + 506, + 264 + ], + "spans": [ + { + "bbox": [ + 281, + 153, + 506, + 264 + ], + "type": "table", + "html": "
<table><tr><th rowspan="2">Model</th><th rowspan="2">Backbone</th><th colspan="2">ECE</th><th colspan="2">ECE+TS</th></tr>
<tr><th>1-shot</th><th>5-shot</th><th>1-shot</th><th>5-shot</th></tr>
<tr><td>Linear classifier</td><td>Conv-4</td><td>8.54</td><td>7.48</td><td>3.56</td><td>2.88</td></tr>
<tr><td>SimpleShot</td><td>Conv-4</td><td>33.45</td><td>45.81</td><td>3.82</td><td>3.35</td></tr>
<tr><td>MetaQDA-MAP</td><td>Conv-4</td><td>8.03</td><td>5.27</td><td>2.75</td><td>0.89</td></tr>
<tr><td>MetaQDA-FB</td><td>Conv-4</td><td>4.32</td><td>2.92</td><td>2.33</td><td>0.45</td></tr>
<tr><td>NIW-Meta (Ours)</td><td>Conv-4</td><td>2.68</td><td>1.88</td><td>1.47</td><td>0.32</td></tr>
<tr><td>SimpleShot</td><td>WRN-28-10</td><td>39.56</td><td>55.68</td><td>4.05</td><td>1.80</td></tr>
<tr><td>S2M2+Linear</td><td>WRN-28-10</td><td>33.23</td><td>36.84</td><td>4.93</td><td>2.31</td></tr>
<tr><td>MetaQDA-MAP</td><td>WRN-28-10</td><td>31.17</td><td>17.37</td><td>3.94</td><td>0.94</td></tr>
<tr><td>MetaQDA-FB</td><td>WRN-28-10</td><td>30.68</td><td>15.86</td><td>2.71</td><td>0.74</td></tr>
<tr><td>NIW-Meta (Ours)</td><td>WRN-28-10</td><td>10.79</td><td>7.11</td><td>2.03</td><td>0.65</td></tr></table>
", + "image_path": "969daa3004e5f5fd18b74d12b1c3d5607cd555f34498fdb4150d11c67fca1daf.jpg" + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "table_body" + } + ], + "index": 4 + }, + { + "bbox": [ + 104, + 266, + 505, + 300 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 266, + 505, + 300 + ], + "spans": [ + { + "bbox": [ + 104, + 266, + 505, + 300 + ], + "type": "text", + "content": "are shown in Table 5. We used 20 bins and optionally performed the temperature search on validation sets, similarly as (Zhang et al., 2021). Again, Bayesian inference of whole network weights in our NIw-Meta leads to a far better calibrated model than the shallow Meta-QDA (Zhang et al., 2021)." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 105, + 312, + 235, + 323 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 312, + 235, + 323 + ], + "spans": [ + { + "bbox": [ + 105, + 312, + 235, + 323 + ], + "type": "text", + "content": "7.2 FEW-SHOT REGRESSION" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 104, + 327, + 506, + 350 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 327, + 506, + 350 + ], + "spans": [ + { + "bbox": [ + 104, + 327, + 506, + 350 + ], + "type": "text", + "content": "Sine-Line dataset (Finn et al., 2018). The " + }, + { + "bbox": [ + 104, + 327, + 506, + 350 + ], + "type": "inline_equation", + "content": "1D(x,y)" + }, + { + "bbox": [ + 104, + 327, + 506, + 350 + ], + "type": "text", + "content": " data pairs are generated by randomly selecting either linear or sine curves with different scales/slopes/frequencies/phases. 
For" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 104, + 350, + 342, + 437 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 350, + 342, + 437 + ], + "spans": [ + { + "bbox": [ + 104, + 350, + 342, + 437 + ], + "type": "text", + "content": "the episodic few-shot learning setup, we follow the standard protocol: each episode is comprised of " + }, + { + "bbox": [ + 104, + 350, + 342, + 437 + ], + "type": "inline_equation", + "content": "k = 5" + }, + { + "bbox": [ + 104, + 350, + 342, + 437 + ], + "type": "text", + "content": "-shot support and 45 query samples randomly drawn from a random curve (regarded as a task). To deal with real-valued targets, we adopt the so-called RidgeNet, which has a parameter-free readout head derived from the support data via (closed-form) estimation of the linear coefficient matrix using the ridge regression (the L2 regularisation coefficient " + }, + { + "bbox": [ + 104, + 350, + 342, + 437 + ], + "type": "inline_equation", + "content": "\\lambda = 0.1" + }, + { + "bbox": [ + 104, + 350, + 342, + 437 + ], + "type": "text", + "content": ")." + } + ] + } + ], + "index": 8 + }, + { + "type": "table", + "bbox": [ + 346, + 376, + 506, + 435 + ], + "blocks": [ + { + "bbox": [ + 343, + 355, + 506, + 376 + ], + "lines": [ + { + "bbox": [ + 343, + 355, + 506, + 376 + ], + "spans": [ + { + "bbox": [ + 343, + 355, + 506, + 376 + ], + "type": "text", + "content": "Table 6: Sine-Line results. PMAML w/ 5 inner steps incurred numerical errors." + } + ] + } + ], + "index": 9, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 346, + 376, + 506, + 435 + ], + "lines": [ + { + "bbox": [ + 346, + 376, + 506, + 435 + ], + "spans": [ + { + "bbox": [ + 346, + 376, + 506, + 435 + ], + "type": "table", + "html": "
<tr><th>Model</th><th>Mean squared error</th><th>R-ECE</th></tr>
<tr><td>RidgeNet</td><td>0.8210</td><td>N/A</td></tr>
<tr><td>MAML (1-step)</td><td>0.8206</td><td>N/A</td></tr>
<tr><td>MAML (5-step)</td><td>0.8309</td><td>N/A</td></tr>
<tr><td>PMAML (1-step)</td><td>0.9160</td><td>0.2666</td></tr>
<tr><td>NIW-Meta (Ours)</td><td>0.7822</td><td>0.1728</td></tr>
", + "image_path": "f494562275e752ba5edfcd842bbf1700f2413195e33e951010ab77d581022910.jpg" + } + ] + } + ], + "index": 10, + "angle": 0, + "type": "table_body" + } + ], + "index": 10 + }, + { + "bbox": [ + 104, + 437, + 506, + 525 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 437, + 506, + 525 + ], + "spans": [ + { + "bbox": [ + 104, + 437, + 506, + 525 + ], + "type": "text", + "content": "It is analogous to the ProtoNet (Snell et al., 2017) in classification which has a parameter-free head derived from NCC on support data. A similar model was introduced in (Bertinetto et al., 2019) but mainly repurposed for classification. We find that RidgeNet leads to much more accurate prediction than the conventional trainable linear head. For instance, the test errors are: " + }, + { + "bbox": [ + 104, + 437, + 506, + 525 + ], + "type": "inline_equation", + "content": "\\text{RidgeNet} = 0.82" + }, + { + "bbox": [ + 104, + 437, + 506, + 525 + ], + "type": "text", + "content": " vs. MAML with linear head " + }, + { + "bbox": [ + 104, + 437, + 506, + 525 + ], + "type": "inline_equation", + "content": "= 1.86" + }, + { + "bbox": [ + 104, + 437, + 506, + 525 + ], + "type": "text", + "content": ". Furthermore, we adopt the ridge head in other models as well, such as MAML, PMAML (Finn et al., 2018), and our NIW-Meta. See Table 6 for the mean squared errors contrasting our NIW-Meta against competing meta learning methods. The table also contains the regression-ECE (R-ECE) calibration errors2. Clearly our model is calibrated the best." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 104, + 531, + 506, + 631 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 531, + 506, + 631 + ], + "spans": [ + { + "bbox": [ + 104, + 531, + 506, + 631 + ], + "type": "text", + "content": "Object pose estimation on ShapeNet datasets. 
We consider the recent few-shot regression benchmarks (Gao et al., 2022; Yin et al., 2020), which introduce four datasets for object pose estimation: Pascal-1D, ShapeNet-1D, ShapeNet-2D and Distractor. In all datasets3, the main goal is to estimate the pose (position in pixels and/or rotation angle) of the target object in an image. Each episode is formed by: i) randomly selecting a target object from a pool of objects spanning different categories, and ii) rendering that object in images at several random poses (position/rotation) to generate data instances. There are 
Except for Pascal-1D, some object categories are reserved solely for
absolute difference; thus it equals 0 in the ideal case 
Mean squared errors in rotation angle differences (Pascal-1D and ShapeNet-1D), quaternion differences " + }, + { + "bbox": [ + 105, + 46, + 504, + 79 + ], + "type": "inline_equation", + "content": "\\times 10^{-2}" + }, + { + "bbox": [ + 105, + 46, + 504, + 79 + ], + "type": "text", + "content": " (ShapeNet-2D) and pixel errors (Distractor). The dataset-wise different augmentation schemes are shown in the parentheses." + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 106, + 80, + 504, + 188 + ], + "lines": [ + { + "bbox": [ + 106, + 80, + 504, + 188 + ], + "spans": [ + { + "bbox": [ + 106, + 80, + 504, + 188 + ], + "type": "table", + "html": "
<tr><th rowspan="2">Model</th><th rowspan="2">Pascal-1D (TA)</th><th colspan="2">ShapeNet-1D (TA+DA)</th><th colspan="2">ShapeNet-2D (TA+DA+DR)</th><th colspan="2">Distractor (DA)</th></tr>
<tr><th>Intra-category</th><th>Cross-category</th><th>Intra-category</th><th>Cross-category</th><th>Intra-category</th><th>Cross-category</th></tr>
<tr><td>MAML</td><td>1.02 ± 0.06</td><td>17.96</td><td>18.79</td><td>-</td><td>-</td><td>-</td><td>-</td></tr>
<tr><td>CNP (Garnelo et al., 2018)</td><td>1.98 ± 0.22</td><td>7.66 ± 0.18</td><td>8.66 ± 0.19</td><td>14.20 ± 0.06</td><td>13.56 ± 0.28</td><td>2.45</td><td>3.75</td></tr>
<tr><td>CNP+BA (Volpp et al., 2021)</td><td>-</td><td>-</td><td>-</td><td>14.16 ± 0.08</td><td>13.56 ± 0.18</td><td>2.44</td><td>3.97</td></tr>
<tr><td>CNP+FCL (Gao et al., 2022)</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>2.00</td><td>3.05</td></tr>
<tr><td>ANP (Kim et al., 2019)</td><td>1.36 ± 0.25</td><td>5.81 ± 0.23</td><td>6.23 ± 0.12</td><td>14.12 ± 0.14</td><td>13.59 ± 0.10</td><td>2.65</td><td>4.08</td></tr>
<tr><td>ANP+FCL (Gao et al., 2022)</td><td>-</td><td>-</td><td>-</td><td>14.01 ± 0.09</td><td>13.32 ± 0.18</td><td>-</td><td>-</td></tr>
<tr><td>NIW-Meta w/ C+R</td><td>0.89 ± 0.06</td><td>5.62 ± 0.38</td><td>6.57 ± 0.39</td><td>21.25 ± 0.76</td><td>20.82 ± 0.43</td><td>8.90 ± 0.26</td><td>17.31 ± 0.38</td></tr>
<tr><td>NIW-Meta w/ CNP</td><td>0.94 ± 0.15</td><td>5.74 ± 0.17</td><td>6.91 ± 0.18</td><td>13.86 ± 0.20</td><td>13.04 ± 0.13</td><td>1.80 ± 0.01</td><td>2.94 ± 0.14</td></tr>
<tr><td>NIW-Meta w/ ANP</td><td>0.95 ± 0.09</td><td>5.47 ± 0.12</td><td>6.06 ± 0.18</td><td>13.74 ± 0.30</td><td>12.95 ± 0.48</td><td>3.10 ± 0.48</td><td>5.20 ± 0.88</td></tr>
", + "image_path": "02f1e8c0bbb2fa3f3b61e290071c0a6f86a3f2af9bacd2087ce93d24076fd925.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "table_body" + } + ], + "index": 2 + }, + { + "bbox": [ + 105, + 192, + 504, + 214 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 192, + 504, + 214 + ], + "spans": [ + { + "bbox": [ + 105, + 192, + 504, + 214 + ], + "type": "text", + "content": "meta testing and not revealed during training, thus yielding two different test scenarios: intra-category and cross-category, in which the test object categories are seen and unseen, respectively." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 104, + 219, + 506, + 319 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 219, + 506, + 319 + ], + "spans": [ + { + "bbox": [ + 104, + 219, + 506, + 319 + ], + "type": "text", + "content": "In (Gao et al., 2022), they test different augmentation strategies in their baselines: conventional data augmentation on input images (denoted by DA), task augmentation (TA) (Rajendran et al., 2020) which adds random noise to the target labels to help reducing the memorisation issue (Yin et al., 2020), and domain randomisation (DR) (Tobin et al., 2017) which randomly generates background images during training. Among several possible combinations reported in (Gao et al., 2022), we follow the strategies that perform the best. For the target error metrics (e.g., position Euclidean distances in pixels for Distractor, rotation angle differences for ShapeNet-1D), we follow the metrics used in (Gao et al., 2022). For instance, the quaternion metric may sound reasonable in ShapeNet-2D due to the non-uniform, non-symmetric structures that reside in the target space (3D rotation angles)." 
+ } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 104, + 324, + 504, + 425 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 324, + 504, + 425 + ], + "spans": [ + { + "bbox": [ + 104, + 324, + 504, + 425 + ], + "type": "text", + "content": "The results are summarised in Table 7. In (Gao et al., 2022), they have shown that the set-based backbone networks, especially the Conditional Neural Process (CNP) (Garnelo et al., 2018) and Attentive Neural Process (ANP) (Kim et al., 2019) outperform the conventional architectures of the conv-net feature extractor with the linear head that are adapted by MAML (Finn et al., 2017) (except for the Pascal-1D case). Motivated by this, we adopt the same set-based CNP/ANP architectures within our NIW-Meta. In addition, we also test the ridge-head model with the conv-net feature extractor (denoted by " + }, + { + "bbox": [ + 104, + 324, + 504, + 425 + ], + "type": "inline_equation", + "content": "\\mathbf{C} + \\mathbf{R}" + }, + { + "bbox": [ + 104, + 324, + 504, + 425 + ], + "type": "text", + "content": "). Two additional competing models contrasted here are: the Bayesian context aggregation in CNP (" + }, + { + "bbox": [ + 104, + 324, + 504, + 425 + ], + "type": "inline_equation", + "content": "CNP + BA" + }, + { + "bbox": [ + 104, + 324, + 504, + 425 + ], + "type": "text", + "content": ") (Volpp et al., 2021) and the use of the functional contrastive learning loss as extra regularisation (FCL) (Gao et al., 2022)." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 104, + 429, + 506, + 529 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 429, + 506, + 529 + ], + "spans": [ + { + "bbox": [ + 104, + 429, + 506, + 529 + ], + "type": "text", + "content": "For Pascal-1D and ShapeNet-1D, there is a dataset regime where MAML clearly outperforms (Pascal-1D) and underperforms (ShapeNet-1D) the CNP/ANP architectures. 
Promisingly, our NIW-Meta consistently performs best on both datasets, regardless of the choice of architecture: not just CNP/ANP but also the conv-net feature extractor with a ridge head 
MAML suffers from heavy memory use and computational overhead because it must keep track of a large computational graph across the inner gradient-descent steps. Our NIW-Meta instead performs a more efficient local episodic optimisation linked to the global parameters, without storing the full optimisation trace. Please see Appendix E for all details.
+ } + ] + } + ], + "index": 9 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "spans": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "type": "text", + "content": "Published as a conference paper at ICLR 2024" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 302, + 751, + 309, + 760 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 302, + 751, + 309, + 760 + ], + "spans": [ + { + "bbox": [ + 302, + 751, + 309, + 760 + ], + "type": "text", + "content": "9" + } + ] + } + ], + "index": 10 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 8 + }, + { + "para_blocks": [ + { + "bbox": [ + 107, + 81, + 176, + 93 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 81, + 176, + 93 + ], + "spans": [ + { + "bbox": [ + 107, + 81, + 176, + 93 + ], + "type": "text", + "content": "REFERENCES" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 105, + 99, + 506, + 732 + ], + "type": "list", + "angle": 0, + "index": 21, + "blocks": [ + { + "bbox": [ + 105, + 99, + 506, + 133 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 99, + 506, + 133 + ], + "spans": [ + { + "bbox": [ + 105, + 99, + 506, + 133 + ], + "type": "text", + "content": "Jincheng Bai, Qifan Song, and Guang Cheng. Efficient Variational Inference for Sparse Deep Learning with Theoretical Guarantee. In Advances in Neural Information Processing Systems, 2020." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 107, + 140, + 506, + 163 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 140, + 506, + 163 + ], + "spans": [ + { + "bbox": [ + 107, + 140, + 506, + 163 + ], + "type": "text", + "content": "Luca Bertinetto, Joao F Henriques, Philip HS Torr, and Andrea Vedaldi. Meta-learning with differentiable closed-form solvers. 
In International Conference on Learning Representations, 2019." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 105, + 170, + 447, + 182 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 170, + 447, + 182 + ], + "spans": [ + { + "bbox": [ + 105, + 170, + 447, + 182 + ], + "type": "text", + "content": "Christopher M. Bishop. Pattern Recognition and Machine Learning. Springer, 2006." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 107, + 189, + 506, + 212 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 189, + 506, + 212 + ], + "spans": [ + { + "bbox": [ + 107, + 189, + 506, + 212 + ], + "type": "text", + "content": "S. Boucheron, G. Lugosi, and P. Massart. Concentration Inequalities: A Nonasymptotic Theory of Independence. Oxford University Press, 2013." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 107, + 218, + 506, + 241 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 218, + 506, + 241 + ], + "spans": [ + { + "bbox": [ + 107, + 218, + 506, + 241 + ], + "type": "text", + "content": "Michael Braun and Jon McAuliffe. Variational inference for large-scale models of discrete choice. arXiv preprint arXiv:0712.2526, 2008." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 107, + 247, + 504, + 282 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 247, + 504, + 282 + ], + "spans": [ + { + "bbox": [ + 107, + 247, + 504, + 282 + ], + "type": "text", + "content": "Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jégou, Julien Mairal, Piotr Bojanowski, and Armand Joulin. Emerging properties in self-supervised vision transformers. In Proceedings of the International Conference on Computer Vision (ICCV), 2021." 
+ } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 107, + 289, + 504, + 312 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 289, + 504, + 312 + ], + "spans": [ + { + "bbox": [ + 107, + 289, + 504, + 312 + ], + "type": "text", + "content": "Lisha Chen and Tianyi Chen. Is Bayesian Model-Agnostic Meta Learning Better than Model-Agnostic Meta Learning, Provably?, 2022. AI and Statistics (AISTATS)." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 107, + 318, + 506, + 341 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 318, + 506, + 341 + ], + "spans": [ + { + "bbox": [ + 107, + 318, + 506, + 341 + ], + "type": "text", + "content": "Peng Cui, Wenbo Hu, , and Jun Zhu. Calibrated reliable regression using maximum mean discrepancy. In Advances in Neural Information Processing Systems, 2020." + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 107, + 348, + 506, + 392 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 348, + 506, + 392 + ], + "spans": [ + { + "bbox": [ + 107, + 348, + 506, + 392 + ], + "type": "text", + "content": "Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale. ICLR, 2021." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 107, + 399, + 504, + 422 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 399, + 504, + 422 + ], + "spans": [ + { + "bbox": [ + 107, + 399, + 504, + 422 + ], + "type": "text", + "content": "Chelsea Finn, Pieter Abbeel, and Sergey Levine. Model-agnostic meta-learning for fast adaptation of deep networks. In International Conference on Machine Learning, 2017." 
+ } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 107, + 429, + 504, + 452 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 429, + 504, + 452 + ], + "spans": [ + { + "bbox": [ + 107, + 429, + 504, + 452 + ], + "type": "text", + "content": "Chelsea Finn, Kelvin Xu, and Sergey Levine. Probabilistic Model-Agnostic Meta-Learning. In Advances in Neural Information Processing Systems, 2018." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 107, + 458, + 506, + 492 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 458, + 506, + 492 + ], + "spans": [ + { + "bbox": [ + 107, + 458, + 506, + 492 + ], + "type": "text", + "content": "Ning Gao, Hanna Ziesche, Ngo Anh Vien, Michael Volpp, and Gerhard Neumann. What matters for meta-learning vision regression tasks? In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 14776-14786, June 2022." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 107, + 498, + 506, + 533 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 498, + 506, + 533 + ], + "spans": [ + { + "bbox": [ + 107, + 498, + 506, + 533 + ], + "type": "text", + "content": "Marta Garnelo, Dan Rosenbaum, Chris J. Maddison, Tiago Ramalho, David Saxton, Murray Shanahan, Yee Whye Teh, Danilo J. Rezende, and S. M. Ali Eslami. Conditional Neural Processes. In International Conference on Machine Learning, 2018." + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 107, + 540, + 504, + 562 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 540, + 504, + 562 + ], + "spans": [ + { + "bbox": [ + 107, + 540, + 504, + 562 + ], + "type": "text", + "content": "Andrew Gelman, John B. Carlin, Hal S. Stern, and Donald B. Rubin. Bayesian Data Analysis. Texts in statistical science. Chapman & Hall / CRC, 2nd edition, 2003." 
+ } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 107, + 569, + 506, + 603 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 569, + 506, + 603 + ], + "spans": [ + { + "bbox": [ + 107, + 569, + 506, + 603 + ], + "type": "text", + "content": "Jonathan Gordon, John Bronskill, Matthias Bauer, Sebastian Nowozin, and Richard Turner. Metalearning probabilistic inference for prediction. In International Conference on Learning Representations, 2019. URL https://openreview.net/forum?id=HkxxStoC5F7." + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 107, + 609, + 504, + 632 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 609, + 504, + 632 + ], + "spans": [ + { + "bbox": [ + 107, + 609, + 504, + 632 + ], + "type": "text", + "content": "Erin Grant, Chelsea Finn, Sergey Levine, Trevor Darrell, and Tom Griffiths. Recasting gradient-based meta-learning as hierarchical bayes. In ICLR, 2018." + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 107, + 639, + 506, + 673 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 639, + 506, + 673 + ], + "spans": [ + { + "bbox": [ + 107, + 639, + 506, + 673 + ], + "type": "text", + "content": "Edward Grefenstette, Brandon Amos, Denis Yarats, Phu Mon Htut, Artem Molchanov, Franziska Meier, Douwe Kiela, Kyunghyun Cho, and Soumith Chintala. Generalized inner loop meta-learning. arXiv preprint arXiv:1910.01727, 2019." + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 107, + 680, + 504, + 703 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 680, + 504, + 703 + ], + "spans": [ + { + "bbox": [ + 107, + 680, + 504, + 703 + ], + "type": "text", + "content": "Chuan Guo, Geoff Pleiss, Yu Sun, and Kilian Q Weinberger. On calibration of modern neural networks. In International Conference on Machine Learning, 2017." 
+ } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 107, + 709, + 504, + 732 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 709, + 504, + 732 + ], + "spans": [ + { + "bbox": [ + 107, + 709, + 504, + 732 + ], + "type": "text", + "content": "K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2016." + } + ] + } + ], + "index": 20 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "spans": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "type": "text", + "content": "Published as a conference paper at ICLR 2024" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 299, + 750, + 311, + 761 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 299, + 750, + 311, + 761 + ], + "spans": [ + { + "bbox": [ + 299, + 750, + 311, + 761 + ], + "type": "text", + "content": "10" + } + ] + } + ], + "index": 22 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 9 + }, + { + "para_blocks": [ + { + "bbox": [ + 105, + 81, + 506, + 732 + ], + "type": "list", + "angle": 0, + "index": 20, + "blocks": [ + { + "bbox": [ + 107, + 81, + 506, + 116 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 107, + 81, + 506, + 116 + ], + "spans": [ + { + "bbox": [ + 107, + 81, + 506, + 116 + ], + "type": "text", + "content": "Timothy Hesperales, Antreas Antoniou, Paul Micaelli, and Amos Storkey. Meta-learning in neural networks: A survey. IEEE Transactions on Pattern Analysis and Machine Intelligence, 44: 5149-5169, 2022." 
+ } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 105, + 122, + 506, + 156 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 122, + 506, + 156 + ], + "spans": [ + { + "bbox": [ + 105, + 122, + 506, + 156 + ], + "type": "text", + "content": "Shell Xu Hu, Da Li, Jan Stuhmer, Minyoung Kim, and Timothy M. Hospedales. Pushing the limits of simple pipelines for few-shot learning: External data and fine-tuning make a difference. In CVPR, 2022." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 105, + 163, + 506, + 198 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 163, + 506, + 198 + ], + "spans": [ + { + "bbox": [ + 105, + 163, + 506, + 198 + ], + "type": "text", + "content": "Pavel Izmailov, Dmitrii Podoprikhin, Timur Garipov, Dmitry Vetrov, and Andrew Gordon Wilson. Averaging weights leads to wider optima and better generalization. In Uncertainty in Artificial Intelligence, 2018." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 105, + 204, + 504, + 239 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 204, + 504, + 239 + ], + "spans": [ + { + "bbox": [ + 105, + 204, + 504, + 239 + ], + "type": "text", + "content": "Hyunjik Kim, Andriy Mnih, Jonathan Schwarz, Marta Garnelo, Ali Eslami, Dan Rosenbaum, Oriol Vinyals, and Yee Whye Teh. Attentive Neural Processes. In International Conference on Learning Representations, 2019." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 105, + 245, + 504, + 269 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 245, + 504, + 269 + ], + "spans": [ + { + "bbox": [ + 105, + 245, + 504, + 269 + ], + "type": "text", + "content": "Brenden M. Lake, Ruslan Salakhutdinov, and Joshua B. Tenenbaum. Human-level concept learning through probabilistic program induction. Science, 2015." 
+ } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 105, + 274, + 504, + 299 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 274, + 504, + 299 + ], + "spans": [ + { + "bbox": [ + 105, + 274, + 504, + 299 + ], + "type": "text", + "content": "John Langford and Rich Caruana. (Not) Bounding the True Error. In Advances in Neural Information Processing Systems, 2001." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 105, + 304, + 504, + 340 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 304, + 504, + 340 + ], + "spans": [ + { + "bbox": [ + 105, + 304, + 504, + 340 + ], + "type": "text", + "content": "Kwonjoon Lee, Subhransu Maji, Avinash Ravichandran, and Stefano Soatto. Meta-learning with differentiable convex optimization. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 105, + 346, + 504, + 369 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 346, + 504, + 369 + ], + "spans": [ + { + "bbox": [ + 105, + 346, + 504, + 369 + ], + "type": "text", + "content": "David MacKay. Information Theory, Inference, and Learning Algorithms. Cambridge University Press, 2003." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 105, + 375, + 506, + 410 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 375, + 506, + 410 + ], + "spans": [ + { + "bbox": [ + 105, + 375, + 506, + 410 + ], + "type": "text", + "content": "Wesley Maddox, Timur Garipov, Pavel Izmailov, Dmitry Vetrov, and Andrew Gordon Wilson. A Simple Baseline for Bayesian Uncertainty in Deep Learning. arXiv preprint arXiv:1902.02476, 2019." 
+ } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 105, + 416, + 506, + 452 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 416, + 506, + 452 + ], + "spans": [ + { + "bbox": [ + 105, + 416, + 506, + 452 + ], + "type": "text", + "content": "Puneet Mangla, Nupur Kumari, Abhishek Sinha, Mayank Singh, Balaji Krishnamurthy, and Vineeth N Balasubramanian. Charting the right manifold: Manifold mixup for few-shot learning. In IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2020." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 105, + 457, + 486, + 471 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 457, + 486, + 471 + ], + "spans": [ + { + "bbox": [ + 105, + 457, + 486, + 471 + ], + "type": "text", + "content": "Andreas Maurer. A Note on the PAC Bayesian Theorem. arXiv preprint arXiv:math/0411099, 2004." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 105, + 477, + 462, + 490 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 477, + 462, + 490 + ], + "spans": [ + { + "bbox": [ + 105, + 477, + 462, + 490 + ], + "type": "text", + "content": "David McAllester. Some PAC-Bayesian theorems. Machine Learning, 37:355-363, 1999." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 105, + 495, + 504, + 520 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 495, + 504, + 520 + ], + "spans": [ + { + "bbox": [ + 105, + 495, + 504, + 520 + ], + "type": "text", + "content": "Kevin P. Murphy. Probabilistic Machine Learning: An Introduction. MIT Press, 2022. URL: probml.ai." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 105, + 526, + 504, + 561 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 526, + 504, + 561 + ], + "spans": [ + { + "bbox": [ + 105, + 526, + 504, + 561 + ], + "type": "text", + "content": "Cuong Nguyen, Thanh-Toan Do, and Gustavo Carneiro. 
Uncertainty in model-agnostic meta-learning using variational inference. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3090-3100, 2020." + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 105, + 567, + 504, + 590 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 567, + 504, + 590 + ], + "spans": [ + { + "bbox": [ + 105, + 567, + 504, + 590 + ], + "type": "text", + "content": "Alex Nichol, Joshua Achiam, and John Schulman. On First-Order Meta-Learning Algorithms. In arXiv preprint arXiv:1803.02999, 2018." + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 105, + 597, + 506, + 632 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 597, + 506, + 632 + ], + "spans": [ + { + "bbox": [ + 105, + 597, + 506, + 632 + ], + "type": "text", + "content": "Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. Automatic differentiation in pytorch. 2017." + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 105, + 638, + 506, + 672 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 638, + 506, + 672 + ], + "spans": [ + { + "bbox": [ + 105, + 638, + 506, + 672 + ], + "type": "text", + "content": "Massimiliano Patacchiola, Jack Turner, Elliot J. Crowley, and Amos Storkey. Bayesian meta-learning for the few-shot setting via deep kernels. In Advances in Neural Information Processing Systems, 2020." + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 105, + 678, + 504, + 702 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 678, + 504, + 702 + ], + "spans": [ + { + "bbox": [ + 105, + 678, + 504, + 702 + ], + "type": "text", + "content": "D. Pati, A. Bhattacharya, and Y. Yang. On the Statistical Optimality of Variational Bayes, 2018. AI and Statistics (AISTATS)." 
+ } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 105, + 708, + 504, + 732 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 708, + 504, + 732 + ], + "spans": [ + { + "bbox": [ + 105, + 708, + 504, + 732 + ], + "type": "text", + "content": "Qi Wang and Herke van Hoof. Doubly Stochastic Variational Inference for Neural Processes with Hierarchical Latent Variables. In International Conference on Machine Learning, 2020." + } + ] + } + ], + "index": 19 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "spans": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "type": "text", + "content": "Published as a conference paper at ICLR 2024" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 299, + 750, + 310, + 761 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 299, + 750, + 310, + 761 + ], + "spans": [ + { + "bbox": [ + 299, + 750, + 310, + 761 + ], + "type": "text", + "content": "11" + } + ] + } + ], + "index": 21 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 10 + }, + { + "para_blocks": [ + { + "bbox": [ + 105, + 81, + 505, + 732 + ], + "type": "list", + "angle": 0, + "index": 20, + "blocks": [ + { + "bbox": [ + 105, + 81, + 505, + 106 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 81, + 505, + 106 + ], + "spans": [ + { + "bbox": [ + 105, + 81, + 505, + 106 + ], + "type": "text", + "content": "Janarthanan Rajendran, Alex Irpan, and Eric Jang. Meta-Learning Requires Meta-Augmentation. In Advances in Neural Information Processing Systems, 2020." 
+ } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 105, + 111, + 505, + 135 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 111, + 505, + 135 + ], + "spans": [ + { + "bbox": [ + 105, + 111, + 505, + 135 + ], + "type": "text", + "content": "Sachin Ravi and Alex Beatson. Amortized Bayesian meta-learning. In International Conference on Learning Representations, 2019." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 105, + 140, + 505, + 175 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 140, + 505, + 175 + ], + "spans": [ + { + "bbox": [ + 105, + 140, + 505, + 175 + ], + "type": "text", + "content": "James Requeima, Jonathan Gordon, John Bronskill, Sebastian Nowozin, and Richard E. Turner. Fast and Flexible Multi-Task Classification Using Conditional Neural Adaptive Processes. In Advances in Neural Information Processing Systems, 2019." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 105, + 180, + 505, + 205 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 180, + 505, + 205 + ], + "spans": [ + { + "bbox": [ + 105, + 180, + 505, + 205 + ], + "type": "text", + "content": "Omar Rivasplata, Vikram M Tankasali, and Csaba Szepesvari. PAC-Bayes with Backprop. arXiv preprint arXiv:1908.07380, 2019." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 105, + 210, + 505, + 245 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 210, + 505, + 245 + ], + "spans": [ + { + "bbox": [ + 105, + 210, + 505, + 245 + ], + "type": "text", + "content": "Andrei A. Rusu, Dushyant Rao, Jakub Sygnowski, Oriol Vinyals, Razvan Pascanu, Simon Osindero, and Raia Hadsell. Meta-learning with latent embedding optimization. In International Conference on Learning Representations, 2019." 
+ } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 105, + 250, + 505, + 274 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 250, + 505, + 274 + ], + "spans": [ + { + "bbox": [ + 105, + 250, + 505, + 274 + ], + "type": "text", + "content": "Matthias Seeger. PAC-Bayesian Generalization Error Bounds for Gaussian Process Classification. Journal of Machine Learning Research, 3:233-269, 2002." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 105, + 280, + 505, + 304 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 280, + 505, + 304 + ], + "spans": [ + { + "bbox": [ + 105, + 280, + 505, + 304 + ], + "type": "text", + "content": "Jake Snell, Kevin Swersky, and Richard S. Zemel. Prototypical networks for few-shot learning. CoRR, abs/1703.05175, 2017. URL http://arxiv.org/abs/1703.05175." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 105, + 309, + 505, + 344 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 309, + 505, + 344 + ], + "spans": [ + { + "bbox": [ + 105, + 309, + 505, + 344 + ], + "type": "text", + "content": "Matthew Tancik, Ben Mildenhall, Terrance Wang, Divi Schmidt, Pratul P. Srinivasan, Jonathan T. Barron, and Ren Ng. Learned Initializations for Optimizing Coordinate-Based Neural Representations. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 105, + 350, + 505, + 373 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 350, + 505, + 373 + ], + "spans": [ + { + "bbox": [ + 105, + 350, + 505, + 373 + ], + "type": "text", + "content": "Niklas Thiemann, Christian Igel, Olivier Wintenberger, and Yevgeny Seldin. A strongly quasiconvex PAC-Bayesian bound. In International Conference on Algorithmic Learning Theory, 2017." 
+ } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 105, + 379, + 505, + 423 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 379, + 505, + 423 + ], + "spans": [ + { + "bbox": [ + 105, + 379, + 505, + 423 + ], + "type": "text", + "content": "Josh Tobin, Rachel Fong, Alex Ray, Jonas Schneider, Wojciech Zaremba, and Pieter Abbeel. Domain randomization for transferring deep neural networks from simulation to the real world. In 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 23-30, 2017. doi: 10.1109/IROS.2017.8202133." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 105, + 430, + 505, + 464 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 430, + 505, + 464 + ], + "spans": [ + { + "bbox": [ + 105, + 430, + 505, + 464 + ], + "type": "text", + "content": "Kevin Tran, Willie Neiswanger, Junwoong Yoon, Qingyang Zhang, Eric Xing, and Zachary W Ulissi. Methods for comparing uncertainty quantifications for material property predictions. Machine Learning: Science and Technology, 1(2):025006, 2020." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 105, + 470, + 505, + 505 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 470, + 505, + 505 + ], + "spans": [ + { + "bbox": [ + 105, + 470, + 505, + 505 + ], + "type": "text", + "content": "Michael Volpp, Fabian Flürenbrock, Lukas Grossberger, Christian Daniel, and Gerhard Neumann. Bayesian Context Aggregation for Neural Processes. In International Conference on Learning Representations, 2021." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 105, + 511, + 505, + 545 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 511, + 505, + 545 + ], + "spans": [ + { + "bbox": [ + 105, + 511, + 505, + 545 + ], + "type": "text", + "content": "Michael Volpp, Philipp Dahlinger, Philipp Becker, Christian Daniel, and Gerhard Neumann. 
Accurate Bayesian Meta-Learning by Accurate Task Posterior Inference. In International Conference on Learning Representations, 2023." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 105, + 551, + 505, + 574 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 551, + 505, + 574 + ], + "spans": [ + { + "bbox": [ + 105, + 551, + 505, + 574 + ], + "type": "text", + "content": "Yan Wang, Wei-Lun Chao, Kilian Q. Weinberger, and Laurens van der Maaten. Simpleshot: Revisiting nearest-neighbor classification for few-shot learning. In arXiv preprint arXiv:1911.04623, 2019." + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 105, + 580, + 505, + 604 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 580, + 505, + 604 + ], + "spans": [ + { + "bbox": [ + 105, + 580, + 505, + 604 + ], + "type": "text", + "content": "Yaqing Wang, Quanming Yao, James T. Kwok, and Lionel M. Ni. Generalizing from a few examples: A survey on few-shot learning. ACM Computing Surveys, 53(3):1-34, 2020a." + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 105, + 609, + 505, + 633 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 609, + 505, + 633 + ], + "spans": [ + { + "bbox": [ + 105, + 609, + 505, + 633 + ], + "type": "text", + "content": "Yaqing Wang, Quanming Yao, James T Kwok, and Lionel M Ni. Generalizing from a few examples: A survey on few-shot learning. ACM Computing Surveys (CSUR), 53(3):1-34, 2020b." + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 105, + 639, + 505, + 663 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 639, + 505, + 663 + ], + "spans": [ + { + "bbox": [ + 105, + 639, + 505, + 663 + ], + "type": "text", + "content": "Max Welling and Yee Whye Teh. Bayesian Learning via Stochastic Gradient Langevin Dynamics. In International Conference on Machine Learning, 2011." 
+ } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 105, + 669, + 505, + 692 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 669, + 505, + 692 + ], + "spans": [ + { + "bbox": [ + 105, + 669, + 505, + 692 + ], + "type": "text", + "content": "Chen Xing, Negar Rostamzadeh, Boris Oreshkin, and Pedro O. Pinheiro. Adaptive cross-modal few-shot learning. In Advances in Neural Information Processing Systems, 2019." + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 105, + 698, + 505, + 732 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 698, + 505, + 732 + ], + "spans": [ + { + "bbox": [ + 105, + 698, + 505, + 732 + ], + "type": "text", + "content": "Han-Jia Ye, Hexiang Hu, De-Chuan Zhan, and Fei Sha. Few-shot learning via embedding adaptation with set-to-set functions. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 8808-8817, 2020." + } + ] + } + ], + "index": 19 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "spans": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "type": "text", + "content": "Published as a conference paper at ICLR 2024" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 299, + 750, + 310, + 760 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 299, + 750, + 310, + 760 + ], + "spans": [ + { + "bbox": [ + 299, + 750, + 310, + 760 + ], + "type": "text", + "content": "12" + } + ] + } + ], + "index": 21 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 11 + }, + { + "para_blocks": [ + { + "bbox": [ + 105, + 81, + 507, + 298 + ], + "type": "list", + "angle": 0, + "index": 7, + "blocks": [ + { + "bbox": [ + 105, + 81, + 505, + 106 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 81, + 505, + 106 + ], + "spans": [ + { + "bbox": [ + 105, + 
81, + 505, + 106 + ], + "type": "text", + "content": "Mingzhang Yin, George Tucker, Mingyuan Zhou, Sergey Levine, and Chelsea Finn. Meta-Learning without Memorization. In International Conference on Learning Representations, 2020." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 105, + 111, + 507, + 146 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 111, + 507, + 146 + ], + "spans": [ + { + "bbox": [ + 105, + 111, + 507, + 146 + ], + "type": "text", + "content": "Jaesik Yoon, Taesup Kim, Ousmane Dia, Sungwoong Kim, Yoshua Bengio, and Sungjin Ahn. Bayesian Model-Agnostic Meta-Learning. In Advances in Neural Information Processing Systems, 2018." + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 105, + 152, + 507, + 186 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 152, + 507, + 186 + ], + "spans": [ + { + "bbox": [ + 105, + 152, + 507, + 186 + ], + "type": "text", + "content": "Sung Whan Yoon, Jun Seo, and Jaekyun Moon. TapNet: Neural network augmented with task-adaptive projection for few-shot learning. In International Conference on Machine Learning, 2019." + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 105, + 193, + 507, + 217 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 193, + 507, + 217 + ], + "spans": [ + { + "bbox": [ + 105, + 193, + 507, + 217 + ], + "type": "text", + "content": "S. Zagoruyko and N. Komodakis. Wide residual networks. In arXiv preprint arXiv:1605.07146, 2016." + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 105, + 223, + 507, + 257 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 223, + 507, + 257 + ], + "spans": [ + { + "bbox": [ + 105, + 223, + 507, + 257 + ], + "type": "text", + "content": "Xueting Zhang, Yuting Qiang, Flood Sung, Yongxin Yang, and Timothy M. Hospedales. RelationNet2: Deep comparison columns for few-shot learning. 
In International Joint Conference on Neural Networks (IJCNN), 2020." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 105, + 264, + 507, + 298 + ], + "type": "ref_text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 264, + 507, + 298 + ], + "spans": [ + { + "bbox": [ + 105, + 264, + 507, + 298 + ], + "type": "text", + "content": "Xueting Zhang, Debin Meng, Henry Gouk, and Timothy Hospedales. Shallow Bayesian Meta Learning for Real-World Few-Shot Recognition. In International Conference on Computer Vision, 2021." + } + ] + } + ], + "index": 6 + } + ], + "sub_type": "ref_text" + } + ], + "discarded_blocks": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "spans": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "type": "text", + "content": "Published as a conference paper at ICLR 2024" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 300, + 750, + 310, + 760 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 300, + 750, + 310, + 760 + ], + "spans": [ + { + "bbox": [ + 300, + 750, + 310, + 760 + ], + "type": "text", + "content": "13" + } + ] + } + ], + "index": 8 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 12 + }, + { + "para_blocks": [ + { + "bbox": [ + 261, + 79, + 350, + 101 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 261, + 79, + 350, + 101 + ], + "spans": [ + { + "bbox": [ + 261, + 79, + 350, + 101 + ], + "type": "text", + "content": "Appendix" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 105, + 118, + 199, + 131 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 118, + 199, + 131 + ], + "spans": [ + { + "bbox": [ + 105, + 118, + 199, + 131 + ], + "type": "text", + "content": "Table of Contents" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 132, + 137, + 356, + 176 + ], + "type": "list", + "angle": 0, + "index": 6, + "blocks": [ + { + "bbox": [ + 132, + 
137, + 337, + 148 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 137, + 337, + 148 + ], + "spans": [ + { + "bbox": [ + 132, + 137, + 337, + 148 + ], + "type": "text", + "content": "- Proofs for Generalisation Error Bounds (Sec. A)" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 149, + 151, + 330, + 163 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 149, + 151, + 330, + 163 + ], + "spans": [ + { + "bbox": [ + 149, + 151, + 330, + 163 + ], + "type": "text", + "content": "- Proof for PAC-Bayes-" + }, + { + "bbox": [ + 149, + 151, + 330, + 163 + ], + "type": "inline_equation", + "content": "\\lambda" + }, + { + "bbox": [ + 149, + 151, + 330, + 163 + ], + "type": "text", + "content": " Bound (Sec. A.1)" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 149, + 164, + 356, + 176 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 149, + 164, + 356, + 176 + ], + "spans": [ + { + "bbox": [ + 149, + 164, + 356, + 176 + ], + "type": "text", + "content": "- Proof for Regression Analysis Bound (Sec. A.2)" + } + ] + } + ], + "index": 5 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 132, + 179, + 477, + 265 + ], + "type": "list", + "angle": 0, + "index": 13, + "blocks": [ + { + "bbox": [ + 132, + 179, + 261, + 190 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 179, + 261, + 190 + ], + "spans": [ + { + "bbox": [ + 132, + 179, + 261, + 190 + ], + "type": "text", + "content": "- Detailed Derivations (Sec. B)" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 132, + 194, + 477, + 206 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 194, + 477, + 206 + ], + "spans": [ + { + "bbox": [ + 132, + 194, + 477, + 206 + ], + "type": "text", + "content": "- Toy Experiment: Why Hierarchical Bayesian Model? (A Detailed Version) (Sec. 
C)" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 132, + 209, + 381, + 220 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 209, + 381, + 220 + ], + "spans": [ + { + "bbox": [ + 132, + 209, + 381, + 220 + ], + "type": "text", + "content": "- Implementation Details and Experimental Settings (Sec. D)" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 132, + 224, + 442, + 236 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 224, + 442, + 236 + ], + "spans": [ + { + "bbox": [ + 132, + 224, + 442, + 236 + ], + "type": "text", + "content": "- Computational Complexity, Running Time and Memory Footprint (Sec. E)" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 132, + 238, + 426, + 251 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 238, + 426, + 251 + ], + "spans": [ + { + "bbox": [ + 132, + 238, + 426, + 251 + ], + "type": "text", + "content": "- Training Stability and Impact of Number of Training Episodes (Sec. F)" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 132, + 253, + 272, + 265 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 132, + 253, + 272, + 265 + ], + "spans": [ + { + "bbox": [ + 132, + 253, + 272, + 265 + ], + "type": "text", + "content": "- Additional Discussions (Sec. G)" + } + ] + } + ], + "index": 12 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 149, + 268, + 389, + 305 + ], + "type": "list", + "angle": 0, + "index": 17, + "blocks": [ + { + "bbox": [ + 149, + 268, + 376, + 280 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 149, + 268, + 376, + 280 + ], + "spans": [ + { + "bbox": [ + 149, + 268, + 376, + 280 + ], + "type": "text", + "content": "- Comparison to Bayesian Neural Processes (Sec. 
G.1)" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 149, + 281, + 389, + 293 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 149, + 281, + 389, + 293 + ], + "spans": [ + { + "bbox": [ + 149, + 281, + 389, + 293 + ], + "type": "text", + "content": "- Justification of Model and Algorithm Choices (Sec. G.2)" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 149, + 293, + 326, + 305 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 149, + 293, + 326, + 305 + ], + "spans": [ + { + "bbox": [ + 149, + 293, + 326, + 305 + ], + "type": "text", + "content": "- Limitations and Future Works (Sec. G.3)" + } + ] + } + ], + "index": 16 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 105, + 343, + 374, + 355 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 343, + 374, + 355 + ], + "spans": [ + { + "bbox": [ + 105, + 343, + 374, + 355 + ], + "type": "text", + "content": "A PROOFS FOR GENERALISATION ERROR BOUNDS" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 104, + 368, + 504, + 446 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 368, + 504, + 446 + ], + "spans": [ + { + "bbox": [ + 104, + 368, + 504, + 446 + ], + "type": "text", + "content": "We prove the two theorems Theorem 5.1 and Theorem 5.2 in the main paper that upper-bound the generalisation error of the model that is averaged over the learned posterior " + }, + { + "bbox": [ + 104, + 368, + 504, + 446 + ], + "type": "inline_equation", + "content": "q(\\phi, \\theta_{1:N})" + }, + { + "bbox": [ + 104, + 368, + 504, + 446 + ], + "type": "text", + "content": ". 
Without loss of generality we assume " + }, + { + "bbox": [ + 104, + 368, + 504, + 446 + ], + "type": "inline_equation", + "content": "|D_i| = n" + }, + { + "bbox": [ + 104, + 368, + 504, + 446 + ], + "type": "text", + "content": " for all episodes " + }, + { + "bbox": [ + 104, + 368, + 504, + 446 + ], + "type": "inline_equation", + "content": "i" + }, + { + "bbox": [ + 104, + 368, + 504, + 446 + ], + "type": "text", + "content": ". We let " + }, + { + "bbox": [ + 104, + 368, + 504, + 446 + ], + "type": "inline_equation", + "content": "(q^*(\\phi), \\{q_i^*(\\theta_i)\\}_{i=1}^N)" + }, + { + "bbox": [ + 104, + 368, + 504, + 446 + ], + "type": "text", + "content": " be the optimal solution of (6). In these theorems we often make the assumption of " + }, + { + "bbox": [ + 104, + 368, + 504, + 446 + ], + "type": "inline_equation", + "content": "N \\to \\infty" + }, + { + "bbox": [ + 104, + 368, + 504, + 446 + ], + "type": "text", + "content": ", i.e., that the number of training episodes tends to infinity, mainly for mathematical convenience. This assumption may not hold in practical situations; however, the theorems can still be applied with approximate guarantees, and we provide several justifications for this in Sec. F."
+ } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 105, + 459, + 282, + 469 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 459, + 282, + 469 + ], + "spans": [ + { + "bbox": [ + 105, + 459, + 282, + 469 + ], + "type": "text", + "content": "A.1 PROOF FOR PAC-BAYES- " + }, + { + "bbox": [ + 105, + 459, + 282, + 469 + ], + "type": "inline_equation", + "content": "\\lambda" + }, + { + "bbox": [ + 105, + 459, + 282, + 469 + ], + "type": "text", + "content": " BOUND" + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 104, + 479, + 504, + 502 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 479, + 504, + 502 + ], + "spans": [ + { + "bbox": [ + 104, + 479, + 504, + 502 + ], + "type": "text", + "content": "First, Theorem 5.1, reiterated below as Theorem A.1, relates the generalisation error to the ultimate ELBO loss (6) that we minimised in our algorithm." + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 104, + 505, + 506, + 550 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 505, + 506, + 550 + ], + "spans": [ + { + "bbox": [ + 104, + 505, + 506, + 550 + ], + "type": "text", + "content": "Theorem A.1 (PAC-Bayes-" + }, + { + "bbox": [ + 104, + 505, + 506, + 550 + ], + "type": "inline_equation", + "content": "\\lambda" + }, + { + "bbox": [ + 104, + 505, + 506, + 550 + ], + "type": "text", + "content": " bound). 
Let " + }, + { + "bbox": [ + 104, + 505, + 506, + 550 + ], + "type": "inline_equation", + "content": "R_{i}(\\theta)" + }, + { + "bbox": [ + 104, + 505, + 506, + 550 + ], + "type": "text", + "content": " be the generalisation error of model " + }, + { + "bbox": [ + 104, + 505, + 506, + 550 + ], + "type": "inline_equation", + "content": "\\theta" + }, + { + "bbox": [ + 104, + 505, + 506, + 550 + ], + "type": "text", + "content": " for the task " + }, + { + "bbox": [ + 104, + 505, + 506, + 550 + ], + "type": "inline_equation", + "content": "i" + }, + { + "bbox": [ + 104, + 505, + 506, + 550 + ], + "type": "text", + "content": ", more specifically, " + }, + { + "bbox": [ + 104, + 505, + 506, + 550 + ], + "type": "inline_equation", + "content": "R_{i}(\\theta) = \\mathbb{E}_{(x,y)\\sim \\mathcal{T}_{i}}[-\\log p(y|x,\\theta)]" + }, + { + "bbox": [ + 104, + 505, + 506, + 550 + ], + "type": "text", + "content": " with the assumption of [0, 1]-bounded errors. As the number of training episodes " + }, + { + "bbox": [ + 104, + 505, + 506, + 550 + ], + "type": "inline_equation", + "content": "N\\to \\infty" + }, + { + "bbox": [ + 104, + 505, + 506, + 550 + ], + "type": "text", + "content": ", the following holds with probability at least " + }, + { + "bbox": [ + 104, + 505, + 506, + 550 + ], + "type": "inline_equation", + "content": "1 - \\delta" + }, + { + "bbox": [ + 104, + 505, + 506, + 550 + ], + "type": "text", + "content": " for arbitrary small " + }, + { + "bbox": [ + 104, + 505, + 506, + 550 + ], + "type": "inline_equation", + "content": "\\delta >0" + }, + { + "bbox": [ + 104, + 505, + 506, + 550 + ], + "type": "text", + "content": ":" + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 242, + 554, + 505, + 578 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 242, + 554, + 505, + 578 + ], + "spans": [ + { + "bbox": [ + 242, + 554, + 505, + 578 + ], + "type": "interline_equation", + "content": "\\mathbb {E} _ {i \\sim \\mathcal {T}} 
\\mathbb {E} _ {q _ {i} ^ {*} (\\theta_ {i})} [ R _ {i} (\\theta_ {i}) ] \\leq \\frac {2 \\epsilon^ {*}}{n}, \\tag {16}", + "image_path": "085610ba40dac6b392bd84dd0c223e983725dfc59bb9bcc62cb3ce86deeab244.jpg" + } + ] + } + ], + "index": 23 + }, + { + "bbox": [ + 105, + 582, + 251, + 594 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 582, + 251, + 594 + ], + "spans": [ + { + "bbox": [ + 105, + 582, + 251, + 594 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 105, + 582, + 251, + 594 + ], + "type": "inline_equation", + "content": "\\epsilon^{*}" + }, + { + "bbox": [ + 105, + 582, + 251, + 594 + ], + "type": "text", + "content": " is the optimal value of (6)." + } + ] + } + ], + "index": 24 + }, + { + "bbox": [ + 104, + 605, + 506, + 650 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 605, + 506, + 650 + ], + "spans": [ + { + "bbox": [ + 104, + 605, + 506, + 650 + ], + "type": "text", + "content": "Proof. We utilise the recent PAC-Bayes-" + }, + { + "bbox": [ + 104, + 605, + 506, + 650 + ], + "type": "inline_equation", + "content": "\\lambda" + }, + { + "bbox": [ + 104, + 605, + 506, + 650 + ], + "type": "text", + "content": " bound (Thiemann et al., 2017; Rivasplata et al., 2019), a variant of the traditional PAC-Bayes bounds (McAllester, 1999; Langford & Caruana, 2001; Seeger, 2002; Maurer, 2004). 
It states that for any " + }, + { + "bbox": [ + 104, + 605, + 506, + 650 + ], + "type": "inline_equation", + "content": "\\lambda \\in (0,2)" + }, + { + "bbox": [ + 104, + 605, + 506, + 650 + ], + "type": "text", + "content": ", the following holds with probability at least " + }, + { + "bbox": [ + 104, + 605, + 506, + 650 + ], + "type": "inline_equation", + "content": "1 - \\delta" + }, + { + "bbox": [ + 104, + 605, + 506, + 650 + ], + "type": "text", + "content": ":" + } + ] + } + ], + "index": 25 + }, + { + "bbox": [ + 121, + 654, + 505, + 681 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 121, + 654, + 505, + 681 + ], + "spans": [ + { + "bbox": [ + 121, + 654, + 505, + 681 + ], + "type": "interline_equation", + "content": "\\mathbb {E} _ {q (\\beta)} [ R (\\beta) ] \\leq \\frac {1}{1 - \\lambda / 2} \\mathbb {E} _ {q (\\beta)} [ \\hat {R} _ {m} (\\beta) ] + \\frac {1}{\\lambda (1 - \\lambda / 2)} \\frac {\\operatorname {K L} (q (\\beta) | | p (\\beta)) + \\log (2 \\sqrt {m} / \\delta)}{m}, \\tag {17}", + "image_path": "52d6715637c1dde98ec36c920a96094e08682852936d180cd131ea98e8e55778.jpg" + } + ] + } + ], + "index": 26 + }, + { + "bbox": [ + 104, + 685, + 506, + 733 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 685, + 506, + 733 + ], + "spans": [ + { + "bbox": [ + 104, + 685, + 506, + 733 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 104, + 685, + 506, + 733 + ], + "type": "inline_equation", + "content": "\\beta" + }, + { + "bbox": [ + 104, + 685, + 506, + 733 + ], + "type": "text", + "content": " represents all model parameters (random variables), " + }, + { + "bbox": [ + 104, + 685, + 506, + 733 + ], + "type": "inline_equation", + "content": "R(\\beta)" + }, + { + "bbox": [ + 104, + 685, + 506, + 733 + ], + "type": "text", + "content": " is the generalisation error/loss for a given model " + }, + { + "bbox": [ + 104, + 685, + 506, + 733 + ], + "type": "inline_equation", + 
"content": "\\beta" + }, + { + "bbox": [ + 104, + 685, + 506, + 733 + ], + "type": "text", + "content": ", and " + }, + { + "bbox": [ + 104, + 685, + 506, + 733 + ], + "type": "inline_equation", + "content": "\\hat{R}_m(\\beta)" + }, + { + "bbox": [ + 104, + 685, + 506, + 733 + ], + "type": "text", + "content": " is the empirical error/loss on the training data of size " + }, + { + "bbox": [ + 104, + 685, + 506, + 733 + ], + "type": "inline_equation", + "content": "m" + }, + { + "bbox": [ + 104, + 685, + 506, + 733 + ], + "type": "text", + "content": ". It holds for any data-independent (e.g., prior) distribution " + }, + { + "bbox": [ + 104, + 685, + 506, + 733 + ], + "type": "inline_equation", + "content": "p(\\beta)" + }, + { + "bbox": [ + 104, + 685, + 506, + 733 + ], + "type": "text", + "content": " and any distribution (possibly data-dependent, e.g., posterior) " + }, + { + "bbox": [ + 104, + 685, + 506, + 733 + ], + "type": "inline_equation", + "content": "q(\\beta)" + }, + { + "bbox": [ + 104, + 685, + 506, + 733 + ], + "type": "text", + "content": "." 
+ } + ] + } + ], + "index": 27 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "spans": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "type": "text", + "content": "Published as a conference paper at ICLR 2024" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 299, + 750, + 310, + 760 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 299, + 750, + 310, + 760 + ], + "spans": [ + { + "bbox": [ + 299, + 750, + 310, + 760 + ], + "type": "text", + "content": "14" + } + ] + } + ], + "index": 28 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 13 + }, + { + "para_blocks": [ + { + "bbox": [ + 104, + 82, + 378, + 95 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 82, + 378, + 95 + ], + "spans": [ + { + "bbox": [ + 104, + 82, + 378, + 95 + ], + "type": "text", + "content": "Now, with " + }, + { + "bbox": [ + 104, + 82, + 378, + 95 + ], + "type": "inline_equation", + "content": "N\\to \\infty" + }, + { + "bbox": [ + 104, + 82, + 378, + 95 + ], + "type": "text", + "content": " , we rewrite (6) in an equivalent form as follows:" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 119, + 97, + 505, + 159 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 119, + 97, + 505, + 159 + ], + "spans": [ + { + "bbox": [ + 119, + 97, + 505, + 159 + ], + "type": "interline_equation", + "content": "\\begin{array}{l} \\min _ {L _ {0}, \\left\\{L _ {i} \\right\\} _ {i = 1} ^ {\\infty}} Q \\left(L _ {0}, \\left\\{L _ {i} \\right\\} _ {i = 1} ^ {\\infty}\\right) := \\frac {1}{N} \\left(\\mathbb {E} _ {q (\\phi ; L _ {0}) \\prod_ {i} q _ {i} \\left(\\theta_ {i}; L _ {i}\\right)} \\left[ \\sum_ {i} l _ {i} \\left(\\theta_ {i}\\right) \\right] + \\right. \\tag {18} \\\\ \\left. 
\\operatorname {K L} \\left(q (\\phi ; L _ {0}) \\prod_ {i} q _ {i} \\left(\\theta_ {i}; L _ {i}\\right) | | p (\\phi) \\prod_ {i} p \\left(\\theta_ {i} \\mid \\phi\\right)\\right)\\right) \\Bigg | _ {N \\rightarrow \\infty} \\\\ \\end{array}", + "image_path": "893a0b92b2c4f4654ee404a22a843788ee78376e63ca2d5f3b2cb3b54655da97.jpg" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 104, + 160, + 504, + 183 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 160, + 504, + 183 + ], + "spans": [ + { + "bbox": [ + 104, + 160, + 504, + 183 + ], + "type": "text", + "content": "Then we set " + }, + { + "bbox": [ + 104, + 160, + 504, + 183 + ], + "type": "inline_equation", + "content": "\\beta \\coloneqq \\{\\phi, \\theta_{1:N}\\}" + }, + { + "bbox": [ + 104, + 160, + 504, + 183 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 104, + 160, + 504, + 183 + ], + "type": "inline_equation", + "content": "q(\\beta) \\coloneqq q(\\phi) \\prod_i q_i(\\theta_i)" + }, + { + "bbox": [ + 104, + 160, + 504, + 183 + ], + "type": "text", + "content": ", and " + }, + { + "bbox": [ + 104, + 160, + 504, + 183 + ], + "type": "inline_equation", + "content": "p(\\beta) \\coloneqq p(\\phi) \\prod_i p(\\theta_i|\\phi)" + }, + { + "bbox": [ + 104, + 160, + 504, + 183 + ], + "type": "text", + "content": ". 
We also define the generalisation loss and the empirical loss as follows:" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 184, + 186, + 505, + 219 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 184, + 186, + 505, + 219 + ], + "spans": [ + { + "bbox": [ + 184, + 186, + 505, + 219 + ], + "type": "interline_equation", + "content": "R (\\beta) := \\frac {1}{N} \\sum_ {i = 1} ^ {N} \\mathbb {E} _ {(x, y) \\sim \\mathcal {T} _ {i}} [ - \\log p (y | x, \\theta_ {i}) ] = \\frac {1}{N} \\sum_ {i = 1} ^ {N} R _ {i} (\\theta_ {i}) \\tag {19}", + "image_path": "cb4fd5c3d167f0f137bc94fea2ffea446ce2d6069068c7f875d346434a4ddedf.jpg" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 114, + 221, + 505, + 255 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 114, + 221, + 505, + 255 + ], + "spans": [ + { + "bbox": [ + 114, + 221, + 505, + 255 + ], + "type": "interline_equation", + "content": "\\hat {R} _ {m} (\\beta) := \\frac {1}{N} \\sum_ {i = 1} ^ {N} \\mathbb {E} _ {(x, y) \\sim D _ {i}} [ - \\log p (y | x, \\theta_ {i}) ] = \\frac {1}{n} \\frac {1}{N} \\sum_ {i = 1} ^ {N} - \\log p \\left(D _ {i} \\mid \\theta_ {i}\\right) = \\frac {1}{n} \\frac {1}{N} \\sum_ {i = 1} ^ {N} l _ {i} \\left(\\theta_ {i}\\right) \\tag {20}", + "image_path": "3fc854f65f687d33d51a0075cd77dfb4893f8d5be143873e7c130cf811465d4c.jpg" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 104, + 256, + 503, + 269 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 256, + 503, + 269 + ], + "spans": [ + { + "bbox": [ + 104, + 256, + 503, + 269 + ], + "type": "text", + "content": "Note that the empirical data size " + }, + { + "bbox": [ + 104, + 256, + 503, + 269 + ], + "type": "inline_equation", + "content": "m = nN" + }, + { + "bbox": [ + 104, + 256, + 503, + 269 + ], + "type": "text", + "content": " in our case. 
Plugging these into (17) with " + }, + { + "bbox": [ + 104, + 256, + 503, + 269 + ], + "type": "inline_equation", + "content": "\\lambda = 1" + }, + { + "bbox": [ + 104, + 256, + 503, + 269 + ], + "type": "text", + "content": " leads to:" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 115, + 271, + 505, + 335 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 115, + 271, + 505, + 335 + ], + "spans": [ + { + "bbox": [ + 115, + 271, + 505, + 335 + ], + "type": "interline_equation", + "content": "\\begin{array}{l} \\frac {1}{N} \\sum_ {i = 1} ^ {N} \\mathbb {E} _ {q _ {i} (\\theta_ {i})} [ R _ {i} (\\theta_ {i}) ] \\leq \\\\ 2 \\left(\\frac {1}{n} \\frac {1}{N} \\sum_ {i = 1} ^ {N} \\mathbb {E} _ {q _ {i} (\\theta_ {i})} [ l _ {i} (\\theta_ {i}) ] + \\frac {\\mathrm {K L} (q (\\phi) \\prod_ {i} q _ {i} (\\theta_ {i}) | | p (\\phi) \\prod_ {i} p (\\theta_ {i} | \\phi)) + \\log (2 \\sqrt {n N} / \\delta)}{n N}\\right) \\tag {21} \\\\ \\end{array}", + "image_path": "2dfe29f2d42e970354cbe982d7f8e5f3e6bfaff52686e405d79b540de29c7411.jpg" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 104, + 336, + 504, + 366 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 336, + 504, + 366 + ], + "spans": [ + { + "bbox": [ + 104, + 336, + 504, + 366 + ], + "type": "text", + "content": "Taking " + }, + { + "bbox": [ + 104, + 336, + 504, + 366 + ], + "type": "inline_equation", + "content": "N\\to \\infty" + }, + { + "bbox": [ + 104, + 336, + 504, + 366 + ], + "type": "text", + "content": " in (21) makes i) the LHS become " + }, + { + "bbox": [ + 104, + 336, + 504, + 366 + ], + "type": "inline_equation", + "content": "\\mathbb{E}_{i\\sim \\mathcal{T}}\\mathbb{E}_{q_i(\\theta_i)}[R_i(\\theta_i)]" + }, + { + "bbox": [ + 104, + 336, + 504, + 366 + ], + "type": "text", + "content": ", ii) the complexity term " + }, + { + "bbox": [ + 104, + 336, + 504, + 366 + ], + "type": "inline_equation", + "content": "\\frac{\\log(2\\sqrt{nN} 
/ \\delta)}{nN}" + }, + { + "bbox": [ + 104, + 336, + 504, + 366 + ], + "type": "text", + "content": " in the RHS vanish, and iii) the RHS converge to " + }, + { + "bbox": [ + 104, + 336, + 504, + 366 + ], + "type": "inline_equation", + "content": "\\frac{2}{n} Q(L_0,\\{L_i\\}_{i = 1}^{\\infty})" + }, + { + "bbox": [ + 104, + 336, + 504, + 366 + ], + "type": "text", + "content": ". That is," + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 214, + 368, + 505, + 392 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 214, + 368, + 505, + 392 + ], + "spans": [ + { + "bbox": [ + 214, + 368, + 505, + 392 + ], + "type": "interline_equation", + "content": "\\mathbb {E} _ {i \\sim \\mathcal {T}} \\mathbb {E} _ {q _ {i} \\left(\\theta_ {i}\\right)} \\left[ R _ {i} \\left(\\theta_ {i}\\right) \\right] \\leq \\frac {2}{n} Q \\left(L _ {0}, \\left\\{L _ {i} \\right\\} _ {i = 1} ^ {\\infty}\\right). \\tag {22}", + "image_path": "4398ff00dbebf08c6939276689bd8a19fc3a8b9ca8308090c0795e37254e26cb.jpg" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 104, + 392, + 455, + 405 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 392, + 455, + 405 + ], + "spans": [ + { + "bbox": [ + 104, + 392, + 455, + 405 + ], + "type": "text", + "content": "Since (22) holds for any " + }, + { + "bbox": [ + 104, + 392, + 455, + 405 + ], + "type": "inline_equation", + "content": "q" + }, + { + "bbox": [ + 104, + 392, + 455, + 405 + ], + "type": "text", + "content": ", we take the minimiser " + }, + { + "bbox": [ + 104, + 392, + 455, + 405 + ], + "type": "inline_equation", + "content": "q^{*}" + }, + { + "bbox": [ + 104, + 392, + 455, + 405 + ], + "type": "text", + "content": " of (6), which completes the proof." 
+ } + ] + } + ], + "index": 10 + }, + { + "type": "image", + "bbox": [ + 494, + 393, + 504, + 401 + ], + "blocks": [ + { + "bbox": [ + 494, + 393, + 504, + 401 + ], + "lines": [ + { + "bbox": [ + 494, + 393, + 504, + 401 + ], + "spans": [ + { + "bbox": [ + 494, + 393, + 504, + 401 + ], + "type": "image", + "image_path": "b1b9d9223d04c26ca918aa40eae8961e5be70c45a309f64450429e64e82034ef.jpg" + } + ] + } + ], + "index": 11, + "angle": 0, + "type": "image_body" + } + ], + "index": 11 + }, + { + "bbox": [ + 105, + 417, + 321, + 428 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 417, + 321, + 428 + ], + "spans": [ + { + "bbox": [ + 105, + 417, + 321, + 428 + ], + "type": "text", + "content": "A.2 PROOF FOR REGRESSION ANALYSIS BOUND" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 104, + 437, + 506, + 515 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 437, + 506, + 515 + ], + "spans": [ + { + "bbox": [ + 104, + 437, + 506, + 515 + ], + "type": "text", + "content": "Theorem 5.2, reiterated below as Theorem A.2 in a more detailed form, is based on the recent regression analysis techniques (Pati et al., 2018; Bai et al., 2020). Before we prove the theorem, we formally state some core assumptions and notations. 
Let " + }, + { + "bbox": [ + 104, + 437, + 506, + 515 + ], + "type": "inline_equation", + "content": "P^i(x,y)" + }, + { + "bbox": [ + 104, + 437, + 506, + 515 + ], + "type": "text", + "content": " be the true data distribution for episode/task " + }, + { + "bbox": [ + 104, + 437, + 506, + 515 + ], + "type": "inline_equation", + "content": "i" + }, + { + "bbox": [ + 104, + 437, + 506, + 515 + ], + "type": "text", + "content": " where " + }, + { + "bbox": [ + 104, + 437, + 506, + 515 + ], + "type": "inline_equation", + "content": "i = 1,\\dots,N" + }, + { + "bbox": [ + 104, + 437, + 506, + 515 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 104, + 437, + 506, + 515 + ], + "type": "inline_equation", + "content": "N\\to \\infty" + }, + { + "bbox": [ + 104, + 437, + 506, + 515 + ], + "type": "text", + "content": ". We consider regression-based data modeling, assuming that the target " + }, + { + "bbox": [ + 104, + 437, + 506, + 515 + ], + "type": "inline_equation", + "content": "y" + }, + { + "bbox": [ + 104, + 437, + 506, + 515 + ], + "type": "text", + "content": " is real vector-valued " + }, + { + "bbox": [ + 104, + 437, + 506, + 515 + ], + "type": "inline_equation", + "content": "(y\\in \\mathbb{R}^{S_y})" + }, + { + "bbox": [ + 104, + 437, + 506, + 515 + ], + "type": "text", + "content": ". 
Also it is assumed that there exists a true regression function " + }, + { + "bbox": [ + 104, + 437, + 506, + 515 + ], + "type": "inline_equation", + "content": "f^i:\\mathbb{R}^{S_x}\\rightarrow \\mathbb{R}^{S_y}" + }, + { + "bbox": [ + 104, + 437, + 506, + 515 + ], + "type": "text", + "content": " for each " + }, + { + "bbox": [ + 104, + 437, + 506, + 515 + ], + "type": "inline_equation", + "content": "i" + }, + { + "bbox": [ + 104, + 437, + 506, + 515 + ], + "type": "text", + "content": ", more formally " + }, + { + "bbox": [ + 104, + 437, + 506, + 515 + ], + "type": "inline_equation", + "content": "P^i (y|x) = \\mathcal{N}(y;f^i (x),\\sigma_\\epsilon^2 I)" + }, + { + "bbox": [ + 104, + 437, + 506, + 515 + ], + "type": "text", + "content": ", where " + }, + { + "bbox": [ + 104, + 437, + 506, + 515 + ], + "type": "inline_equation", + "content": "\\sigma_{\\epsilon}^{2}" + }, + { + "bbox": [ + 104, + 437, + 506, + 515 + ], + "type": "text", + "content": " is constant Gaussian output noise variance." 
+ } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 104, + 520, + 506, + 609 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 520, + 506, + 609 + ], + "spans": [ + { + "bbox": [ + 104, + 520, + 506, + 609 + ], + "type": "text", + "content": "For easier analysis we assume that the backbone network is an MLP with " + }, + { + "bbox": [ + 104, + 520, + 506, + 609 + ], + "type": "inline_equation", + "content": "L" + }, + { + "bbox": [ + 104, + 520, + 506, + 609 + ], + "type": "text", + "content": " width- " + }, + { + "bbox": [ + 104, + 520, + 506, + 609 + ], + "type": "inline_equation", + "content": "M" + }, + { + "bbox": [ + 104, + 520, + 506, + 609 + ], + "type": "text", + "content": " hidden layers, and all activation functions " + }, + { + "bbox": [ + 104, + 520, + 506, + 609 + ], + "type": "inline_equation", + "content": "\\sigma (\\cdot)" + }, + { + "bbox": [ + 104, + 520, + 506, + 609 + ], + "type": "text", + "content": " are Lipschitz continuous with constant 1. We consider the bounded parameter space, " + }, + { + "bbox": [ + 104, + 520, + 506, + 609 + ], + "type": "inline_equation", + "content": "\\theta \\in \\Theta = \\{\\theta \\in \\mathbb{R}^G:||\\theta ||_\\infty \\leq B\\}" + }, + { + "bbox": [ + 104, + 520, + 506, + 609 + ], + "type": "text", + "content": ", where " + }, + { + "bbox": [ + 104, + 520, + 506, + 609 + ], + "type": "inline_equation", + "content": "G = \\dim (\\theta)" + }, + { + "bbox": [ + 104, + 520, + 506, + 609 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 104, + 520, + 506, + 609 + ], + "type": "inline_equation", + "content": "B" + }, + { + "bbox": [ + 104, + 520, + 506, + 609 + ], + "type": "text", + "content": " is the maximal norm bound. 
Then the prediction (regression) function " + }, + { + "bbox": [ + 104, + 520, + 506, + 609 + ], + "type": "inline_equation", + "content": "f_{\\theta}: \\mathbb{R}^{S_x} \\to \\mathbb{R}^{S_y}" + }, + { + "bbox": [ + 104, + 520, + 506, + 609 + ], + "type": "text", + "content": " is induced from " + }, + { + "bbox": [ + 104, + 520, + 506, + 609 + ], + "type": "inline_equation", + "content": "\\theta" + }, + { + "bbox": [ + 104, + 520, + 506, + 609 + ], + "type": "text", + "content": " as: " + }, + { + "bbox": [ + 104, + 520, + 506, + 609 + ], + "type": "inline_equation", + "content": "P_{\\theta}(y|x) = \\mathcal{N}(y;f_{\\theta}(x),\\sigma_{\\epsilon}^{2}I)" + }, + { + "bbox": [ + 104, + 520, + 506, + 609 + ], + "type": "text", + "content": ", where the true noise variance is assumed to be known. The expressions " + }, + { + "bbox": [ + 104, + 520, + 506, + 609 + ], + "type": "inline_equation", + "content": "\\mathbb{E}_{\\theta}[\\cdot ]" + }, + { + "bbox": [ + 104, + 520, + 506, + 609 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 104, + 520, + 506, + 609 + ], + "type": "inline_equation", + "content": "\\mathbb{E}^i [\\cdot ]" + }, + { + "bbox": [ + 104, + 520, + 506, + 609 + ], + "type": "text", + "content": " refer to the expectations with respect to model's " + }, + { + "bbox": [ + 104, + 520, + 506, + 609 + ], + "type": "inline_equation", + "content": "P_{\\theta}" + }, + { + "bbox": [ + 104, + 520, + 506, + 609 + ], + "type": "text", + "content": " and the true " + }, + { + "bbox": [ + 104, + 520, + 506, + 609 + ], + "type": "inline_equation", + "content": "P^i" + }, + { + "bbox": [ + 104, + 520, + 506, + 609 + ], + "type": "text", + "content": ", respectively. 
The generalisation error measure that we consider is the expected squared Hellinger distance between the true " + }, + { + "bbox": [ + 104, + 520, + 506, + 609 + ], + "type": "inline_equation", + "content": "P^i" + }, + { + "bbox": [ + 104, + 520, + 506, + 609 + ], + "type": "text", + "content": " and the model " + }, + { + "bbox": [ + 104, + 520, + 506, + 609 + ], + "type": "inline_equation", + "content": "P_{\\theta}" + }, + { + "bbox": [ + 104, + 520, + 506, + 609 + ], + "type": "text", + "content": ", more specifically," + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 111, + 611, + 505, + 654 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 111, + 611, + 505, + 654 + ], + "spans": [ + { + "bbox": [ + 111, + 611, + 505, + 654 + ], + "type": "interline_equation", + "content": "d ^ {2} \\left(P _ {\\theta}, P ^ {i}\\right) = \\mathbb {E} _ {x \\sim P ^ {i} (x)} \\left[ H ^ {2} \\left(P _ {\\theta} (y | x), P ^ {i} (y | x)\\right) \\right] = \\mathbb {E} _ {x \\sim P ^ {i} (x)} \\left[ 1 - \\exp \\left(- \\frac {\\left| \\left| f _ {\\theta} (x) - f ^ {i} (x) \\right| \\right| _ {2} ^ {2}}{8 \\sigma_ {\\epsilon} ^ {2}}\\right) \\right]. \\tag {23}", + "image_path": "7aa063cc301d955918e68f019c72779e5cc523a008c9fe2bdfa87d9375df415c.jpg" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 104, + 658, + 216, + 668 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 658, + 216, + 668 + ], + "spans": [ + { + "bbox": [ + 104, + 658, + 216, + 668 + ], + "type": "text", + "content": "Now we state our theorem." + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 104, + 670, + 506, + 705 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 670, + 506, + 705 + ], + "spans": [ + { + "bbox": [ + 104, + 670, + 506, + 705 + ], + "type": "text", + "content": "Theorem A.2 (Bound derived from regression analysis). 
Let " + }, + { + "bbox": [ + 104, + 670, + 506, + 705 + ], + "type": "inline_equation", + "content": "d^{2}(P_{\\theta_{i}}, P^{i})" + }, + { + "bbox": [ + 104, + 670, + 506, + 705 + ], + "type": "text", + "content": " be the expected squared Hellinger distance between the true distribution " + }, + { + "bbox": [ + 104, + 670, + 506, + 705 + ], + "type": "inline_equation", + "content": "P^{i}(y|x)" + }, + { + "bbox": [ + 104, + 670, + 506, + 705 + ], + "type": "text", + "content": " and model's " + }, + { + "bbox": [ + 104, + 670, + 506, + 705 + ], + "type": "inline_equation", + "content": "P_{\\theta_{i}}(y|x)" + }, + { + "bbox": [ + 104, + 670, + 506, + 705 + ], + "type": "text", + "content": " for task/episode " + }, + { + "bbox": [ + 104, + 670, + 506, + 705 + ], + "type": "inline_equation", + "content": "i" + }, + { + "bbox": [ + 104, + 670, + 506, + 705 + ], + "type": "text", + "content": ". Then the following holds with high probability:" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 184, + 708, + 505, + 731 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 184, + 708, + 505, + 731 + ], + "spans": [ + { + "bbox": [ + 184, + 708, + 505, + 731 + ], + "type": "interline_equation", + "content": "\\mathbb {E} _ {i \\sim \\mathcal {T}} \\mathbb {E} _ {q _ {i} ^ {*} (\\theta_ {i})} [ d ^ {2} (P _ {\\theta_ {i}}, P ^ {i}) ] \\leq \\frac {C _ {0}}{n} + C _ {1} \\epsilon_ {n} ^ {2} + C _ {2} \\left(r _ {n} + \\lambda^ {*}\\right), \\tag {24}", + "image_path": "8cdeb97b66ed6e18bcfbe4b1bc0c45a842c866190f8b5d9edbcc82424f448211.jpg" + } + ] + } + ], + "index": 18 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "spans": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "type": "text", + "content": "Published as a conference paper at ICLR 2024" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 299, + 750, + 
310, + 760 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 299, + 750, + 310, + 760 + ], + "spans": [ + { + "bbox": [ + 299, + 750, + 310, + 760 + ], + "type": "text", + "content": "15" + } + ] + } + ], + "index": 19 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 14 + }, + { + "para_blocks": [ + { + "bbox": [ + 104, + 81, + 506, + 144 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 81, + 506, + 144 + ], + "spans": [ + { + "bbox": [ + 104, + 81, + 506, + 144 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 104, + 81, + 506, + 144 + ], + "type": "inline_equation", + "content": "C_{\\bullet} > 0" + }, + { + "bbox": [ + 104, + 81, + 506, + 144 + ], + "type": "text", + "content": " are some constants, " + }, + { + "bbox": [ + 104, + 81, + 506, + 144 + ], + "type": "inline_equation", + "content": "\\lambda^{*} = \\mathbb{E}_{i\\sim \\mathcal{T}}[\\lambda_i^* ]" + }, + { + "bbox": [ + 104, + 81, + 506, + 144 + ], + "type": "text", + "content": " with " + }, + { + "bbox": [ + 104, + 81, + 506, + 144 + ], + "type": "inline_equation", + "content": "\\lambda_{i}^{*} = \\min_{\\theta \\in \\Theta}\\max_{x}||\\mathbb{E}_{\\theta}[y|x] - \\mathbb{E}^{i}[y|x]||^{2}" + }, + { + "bbox": [ + 104, + 81, + 506, + 144 + ], + "type": "text", + "content": " is the lowest possible regression error within the underlying network " + }, + { + "bbox": [ + 104, + 81, + 506, + 144 + ], + "type": "inline_equation", + "content": "\\Theta" + }, + { + "bbox": [ + 104, + 81, + 506, + 144 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 104, + 81, + 506, + 144 + ], + "type": "inline_equation", + "content": "r_n = \\frac{G}{n}\\Bigg((L + 1)\\log M + \\log \\left(S_x\\sqrt{\\frac{n}{G}}\\right)\\Bigg)" + }, + { + "bbox": [ + 104, + 81, + 506, + 144 + ], + "type": "text", + "content": ", and " + }, + { + "bbox": [ + 104, + 81, + 506, + 144 + ], + "type": "inline_equation", + "content": "\\epsilon_{n} = \\sqrt{r_{n}}\\log^{\\delta}(n)" + }, + { + "bbox": [ + 104, + 81, + 506, + 144 + ], + "type": "text", + "content": " for constant " + }, + { + "bbox": [ + 104, + 81, + 506, + 144 + ], + "type": "inline_equation", + "content": "\\delta >1" + }, + { + "bbox": [ + 104, + 81, + 506, + 144 + ], + "type": "text", + "content": "." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 104, + 158, + 504, + 192 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 158, + 504, + 192 + ], + "spans": [ + { + "bbox": [ + 104, + 158, + 504, + 192 + ], + "type": "text", + "content": "Proof. We utilise the Donsker-Varadhan (DV) theorem (Boucheron et al., 2013) to relate the variational ELBO objective function to the Hellinger distance. The DV theorem says that the following inequality holds for any distributions " + }, + { + "bbox": [ + 104, + 158, + 504, + 192 + ], + "type": "inline_equation", + "content": "p, q" + }, + { + "bbox": [ + 104, + 158, + 504, + 192 + ], + "type": "text", + "content": " and any (bounded) function " + }, + { + "bbox": [ + 104, + 158, + 504, + 192 + ], + "type": "inline_equation", + "content": "h(z)" + }, + { + "bbox": [ + 104, + 158, + 504, + 192 + ], + "type": "text", + "content": ":" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 201, + 198, + 505, + 218 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 201, + 198, + 505, + 218 + ], + "spans": [ + { + "bbox": [ + 201, + 198, + 505, + 218 + ], + "type": "interline_equation", + "content": "\\log \\mathbb {E} _ {p (z)} [ e ^ {h (z)} ] = \\max _ {q} \\left(\\mathbb {E} _ {q (z)} [ h (z) ] - \\mathrm {K L} (q \\| p)\\right). 
\\tag {25}", + "image_path": "61dbe83144cc15dac89a75ee68aff2120e6164d89b55a8bd15169b9be8de7638.jpg" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 104, + 224, + 426, + 238 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 224, + 426, + 238 + ], + "spans": [ + { + "bbox": [ + 104, + 224, + 426, + 238 + ], + "type": "text", + "content": "In our case, we define: " + }, + { + "bbox": [ + 104, + 224, + 426, + 238 + ], + "type": "inline_equation", + "content": "p(z) \\coloneqq p(\\theta_i|\\phi)" + }, + { + "bbox": [ + 104, + 224, + 426, + 238 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 104, + 224, + 426, + 238 + ], + "type": "inline_equation", + "content": "q(z) \\coloneqq q_i(\\theta_i)" + }, + { + "bbox": [ + 104, + 224, + 426, + 238 + ], + "type": "text", + "content": ", " + }, + { + "bbox": [ + 104, + 224, + 426, + 238 + ], + "type": "inline_equation", + "content": "h(z) \\coloneqq \\log \\eta_i(\\theta_i)" + }, + { + "bbox": [ + 104, + 224, + 426, + 238 + ], + "type": "text", + "content": " with" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 197, + 243, + 505, + 259 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 197, + 243, + 505, + 259 + ], + "spans": [ + { + "bbox": [ + 197, + 243, + 505, + 259 + ], + "type": "interline_equation", + "content": "\\eta_ {i} \\left(\\theta_ {i}\\right) := \\exp \\left(\\rho \\left(P _ {\\theta_ {i}} \\left(D _ {i}\\right), P ^ {i} \\left(D _ {i}\\right)\\right) + n d ^ {2} \\left(P _ {\\theta_ {i}}, P ^ {i}\\right)\\right) \\tag {26}", + "image_path": "cab5d19ed4a953501f93918cb5162c2a4a454a2761682708e6478ad99125b254.jpg" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 104, + 266, + 504, + 294 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 266, + 504, + 294 + ], + "spans": [ + { + "bbox": [ + 104, + 266, + 504, + 294 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 104, + 266, + 504, + 294 + 
], + "type": "inline_equation", + "content": "\\rho(P_{\\theta_i}(D_i), P^i(D_i)) := \\log \\frac{P_{\\theta_i}(D_i)}{P^i(D_i)}" + }, + { + "bbox": [ + 104, + 266, + 504, + 294 + ], + "type": "text", + "content": " is the log-ratio. Note that " + }, + { + "bbox": [ + 104, + 266, + 504, + 294 + ], + "type": "inline_equation", + "content": "P(D_i) = P(Y_i | X_i)" + }, + { + "bbox": [ + 104, + 266, + 504, + 294 + ], + "type": "text", + "content": ". Plugging these into (25) leads to the following inequality which holds for any " + }, + { + "bbox": [ + 104, + 266, + 504, + 294 + ], + "type": "inline_equation", + "content": "\\phi" + }, + { + "bbox": [ + 104, + 266, + 504, + 294 + ], + "type": "text", + "content": ":" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 155, + 299, + 505, + 329 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 155, + 299, + 505, + 329 + ], + "spans": [ + { + "bbox": [ + 155, + 299, + 505, + 329 + ], + "type": "interline_equation", + "content": "\\begin{array}{l} n \\cdot \\mathbb {E} _ {q _ {i} (\\theta_ {i})} [ d ^ {2} (P _ {\\theta_ {i}}, P ^ {i}) ] \\leq \\mathbb {E} _ {q _ {i} (\\theta_ {i})} [ - \\rho (P _ {\\theta_ {i}} (D _ {i}), P ^ {i} (D _ {i})) ] + \\\\ \\operatorname {K L} \\left(q _ {i} \\left(\\theta_ {i}\\right) \\mid \\mid p \\left(\\theta_ {i} \\mid \\phi\\right)\\right) + \\log \\mathbb {E} _ {p \\left(\\theta_ {i} \\mid \\phi\\right)} \\left[ \\eta_ {i} \\left(\\theta_ {i}\\right) \\right]. 
\\tag {27} \\\\ \\end{array}", + "image_path": "7ec25037ce72fd1db9c9b9400221733eab5040489108b3e36b9128fd481872e7.jpg" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 104, + 335, + 343, + 347 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 335, + 343, + 347 + ], + "spans": [ + { + "bbox": [ + 104, + 335, + 343, + 347 + ], + "type": "text", + "content": "We take the expectation with respect to " + }, + { + "bbox": [ + 104, + 335, + 343, + 347 + ], + "type": "inline_equation", + "content": "q(\\phi)" + }, + { + "bbox": [ + 104, + 335, + 343, + 347 + ], + "type": "text", + "content": ", which yields:" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 117, + 354, + 505, + 385 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 117, + 354, + 505, + 385 + ], + "spans": [ + { + "bbox": [ + 117, + 354, + 505, + 385 + ], + "type": "interline_equation", + "content": "\\begin{array}{l} n \\cdot \\mathbb {E} _ {q _ {i} (\\theta_ {i})} [ d ^ {2} (P _ {\\theta_ {i}}, P ^ {i}) ] \\leq \\mathbb {E} _ {q _ {i} (\\theta_ {i})} [ - \\rho (P _ {\\theta_ {i}} (D _ {i}), P ^ {i} (D _ {i})) ] + \\\\ \\mathbb {E} _ {q (\\phi)} \\left[ \\mathrm {K L} \\left(q _ {i} \\left(\\theta_ {i}\\right) | | p \\left(\\theta_ {i} | \\phi\\right)\\right) \\right] + \\mathbb {E} _ {q (\\phi)} \\left[ \\log \\mathbb {E} _ {p \\left(\\theta_ {i} | \\phi\\right)} \\left[ \\eta_ {i} \\left(\\theta_ {i}\\right) \\right] \\right]. 
\\tag {28} \\\\ \\end{array}", + "image_path": "b56d38aa113bdc7f10c52a440583d83aecd58cf9ce8cab1025030265da662517.jpg" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 104, + 390, + 506, + 428 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 390, + 506, + 428 + ], + "spans": [ + { + "bbox": [ + 104, + 390, + 506, + 428 + ], + "type": "text", + "content": "From the regression theorem (Pati et al., 2018) (Theorem 3.1 therein), it is known that " + }, + { + "bbox": [ + 104, + 390, + 506, + 428 + ], + "type": "inline_equation", + "content": "\\mathbb{E}_{s(\\theta)}[\\eta (\\theta)]\\leq" + }, + { + "bbox": [ + 104, + 390, + 506, + 428 + ], + "type": "inline_equation", + "content": "e^{Cn\\epsilon_n^2}" + }, + { + "bbox": [ + 104, + 390, + 506, + 428 + ], + "type": "text", + "content": " for any distribution " + }, + { + "bbox": [ + 104, + 390, + 506, + 428 + ], + "type": "inline_equation", + "content": "s(\\theta)" + }, + { + "bbox": [ + 104, + 390, + 506, + 428 + ], + "type": "text", + "content": " with high probability. We apply this result to the last term of (28). 
Summing it over " + }, + { + "bbox": [ + 104, + 390, + 506, + 428 + ], + "type": "inline_equation", + "content": "i = 1,\\ldots ,N" + }, + { + "bbox": [ + 104, + 390, + 506, + 428 + ], + "type": "text", + "content": " leads to:" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 146, + 434, + 505, + 502 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 146, + 434, + 505, + 502 + ], + "spans": [ + { + "bbox": [ + 146, + 434, + 505, + 502 + ], + "type": "interline_equation", + "content": "\\begin{array}{l} n \\cdot \\sum_ {i = 1} ^ {N} \\mathbb {E} _ {q _ {i} (\\theta_ {i})} [ d ^ {2} (P _ {\\theta_ {i}}, P ^ {i}) ] \\leq \\sum_ {i = 1} ^ {N} \\mathbb {E} _ {q _ {i} (\\theta_ {i})} [ - \\rho (P _ {\\theta_ {i}} (D _ {i}), P ^ {i} (D _ {i})) ] + \\\\ \\sum_ {i = 1} ^ {N} \\mathbb {E} _ {q (\\phi)} \\left[ \\mathrm {K L} \\left(q _ {i} \\left(\\theta_ {i}\\right) | | p \\left(\\theta_ {i} \\mid \\phi\\right)\\right) \\right] + N C n \\epsilon_ {n} ^ {2}. \\tag {29} \\\\ \\end{array}", + "image_path": "b4bb611aa58809ac87be6274fc176fbc2dadc3c3f3894d199a5715d8a721015e.jpg" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 105, + 508, + 350, + 521 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 508, + 350, + 521 + ], + "spans": [ + { + "bbox": [ + 105, + 508, + 350, + 521 + ], + "type": "text", + "content": "By dividing both sides by " + }, + { + "bbox": [ + 105, + 508, + 350, + 521 + ], + "type": "inline_equation", + "content": "N" + }, + { + "bbox": [ + 105, + 508, + 350, + 521 + ], + "type": "text", + "content": " and sending " + }, + { + "bbox": [ + 105, + 508, + 350, + 521 + ], + "type": "inline_equation", + "content": "N\\to \\infty" + }, + { + "bbox": [ + 105, + 508, + 350, + 521 + ], + "type": "text", + "content": ", we have:" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 110, + 525, + 505, + 579 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 110, + 525, + 
505, + 579 + ], + "spans": [ + { + "bbox": [ + 110, + 525, + 505, + 579 + ], + "type": "interline_equation", + "content": "\\begin{array}{l} n \\cdot \\mathbb {E} _ {i \\sim \\mathcal {T}} \\mathbb {E} _ {q _ {i} (\\theta_ {i})} [ d ^ {2} (P _ {\\theta_ {i}}, P ^ {i}) ] \\leq \\\\ \\underbrace {\\mathbb {E} _ {i \\sim \\mathcal {T}} \\left[ \\mathbb {E} _ {q _ {i} (\\theta_ {i})} \\left[ - \\rho \\left(P _ {\\theta_ {i}} \\left(D _ {i}\\right) , P ^ {i} \\left(D _ {i}\\right)\\right) \\right] + \\mathbb {E} _ {q (\\phi)} \\left[ \\mathrm {K L} \\left(q _ {i} \\left(\\theta_ {i}\\right) | | p \\left(\\theta_ {i} \\mid \\phi\\right)\\right) \\right] \\right]} _ {= - \\operatorname {E L B O} (q) + \\log P ^ {i} (D _ {i})} + C n \\epsilon_ {n} ^ {2}. \\tag {30} \\\\ \\end{array}", + "image_path": "9aaebbe7f48fcc756b87674e03d291709a7ad3520bf31a80f8b4c42e7c33faec.jpg" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 104, + 586, + 504, + 610 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 586, + 504, + 610 + ], + "spans": [ + { + "bbox": [ + 104, + 586, + 504, + 610 + ], + "type": "text", + "content": "As indicated, the right hand side is composed of " + }, + { + "bbox": [ + 104, + 586, + 504, + 610 + ], + "type": "inline_equation", + "content": "-\\mathrm{ELBO}(q)" + }, + { + "bbox": [ + 104, + 586, + 504, + 610 + ], + "type": "text", + "content": " (the objective function of (6)), the constant " + }, + { + "bbox": [ + 104, + 586, + 504, + 610 + ], + "type": "inline_equation", + "content": "\\log P^i(D_i)" + }, + { + "bbox": [ + 104, + 586, + 504, + 610 + ], + "type": "text", + "content": ", and the complexity term " + }, + { + "bbox": [ + 104, + 586, + 504, + 610 + ], + "type": "inline_equation", + "content": "Cn\\epsilon_n^2" + }, + { + "bbox": [ + 104, + 586, + 504, + 610 + ], + "type": "text", + "content": "." 
+ } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 104, + 613, + 504, + 636 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 613, + 504, + 636 + ], + "spans": [ + { + "bbox": [ + 104, + 613, + 504, + 636 + ], + "type": "text", + "content": "The next step is to plug in the optimal " + }, + { + "bbox": [ + 104, + 613, + 504, + 636 + ], + "type": "inline_equation", + "content": "q^{*}" + }, + { + "bbox": [ + 104, + 613, + 504, + 636 + ], + "type": "text", + "content": " to have a meaningful upper bound. To this end, we introduce/define " + }, + { + "bbox": [ + 104, + 613, + 504, + 636 + ], + "type": "inline_equation", + "content": "\\tilde{q}_i(\\theta_i)" + }, + { + "bbox": [ + 104, + 613, + 504, + 636 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 104, + 613, + 504, + 636 + ], + "type": "inline_equation", + "content": "\\tilde{q} (\\phi)" + }, + { + "bbox": [ + 104, + 613, + 504, + 636 + ], + "type": "text", + "content": " as follows:" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 126, + 643, + 505, + 663 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 126, + 643, + 505, + 663 + ], + "spans": [ + { + "bbox": [ + 126, + 643, + 505, + 663 + ], + "type": "interline_equation", + "content": "\\tilde {q} _ {i} \\left(\\theta_ {i}\\right) = \\mathcal {N} \\left(\\theta_ {i}; \\theta_ {i} ^ {*}, \\sigma_ {n} ^ {2} I\\right), \\quad \\tilde {q} (\\phi) = \\underset {q (\\phi)} {\\arg \\min } \\mathbb {E} _ {i \\sim \\mathcal {T}} \\mathbb {E} _ {q (\\phi)} [ \\mathrm {K L} \\left(\\tilde {q} _ {i} \\left(\\theta_ {i}\\right) | | p \\left(\\theta_ {i} \\mid \\phi\\right)\\right) ], \\quad \\text {where} \\tag {31}", + "image_path": "dd2f5804bcd50679a78164be26cf80b43206c3814f9533c063f73bcb77606be9.jpg" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 195, + 666, + 505, +
689 + ], + "spans": [ + { + "bbox": [ + 195, + 666, + 505, + 689 + ], + "type": "interline_equation", + "content": "\\theta_ {i} ^ {*} = \\arg \\min _ {\\theta \\in \\Theta} \\max _ {x \\in \\mathbb {R} ^ {S _ {x}}} | | f _ {\\theta} (x) - f ^ {i} (x) | | ^ {2}, \\sigma_ {n} ^ {2} = \\frac {G}{8 n} A, \\tag {32}", + "image_path": "098b59b5751593a6fff0d7096362e9fd080d3499ade37828216f2553b6be47e1.jpg" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 109, + 692, + 505, + 729 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 109, + 692, + 505, + 729 + ], + "spans": [ + { + "bbox": [ + 109, + 692, + 505, + 729 + ], + "type": "interline_equation", + "content": "A ^ {- 1} = \\log \\left(3 S _ {x} M\\right) \\cdot (2 B M) ^ {2 (L + 1)} \\cdot \\left(\\left(S _ {x} + 1 + \\frac {1}{B M - 1}\\right) ^ {2} + \\frac {1}{(2 B M) ^ {2} - 1} + \\frac {2}{(2 B M - 1) ^ {2}}\\right). \\tag {33}", + "image_path": "175430e3a77faa5655bb9027ac2b15f35458d834bcd4e2160ad889331f849964.jpg" + } + ] + } + ], + "index": 18 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "spans": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "type": "text", + "content": "Published as a conference paper at ICLR 2024" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 299, + 751, + 310, + 760 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 299, + 751, + 310, + 760 + ], + "spans": [ + { + "bbox": [ + 299, + 751, + 310, + 760 + ], + "type": "text", + "content": "16" + } + ] + } + ], + "index": 19 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 15 + }, + { + "para_blocks": [ + { + "bbox": [ + 104, + 81, + 504, + 106 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 81, + 504, + 106 + ], + "spans": [ + { + "bbox": [ + 104, + 81, + 504, + 106 + ], + "type": "text", + "content": "Since " + 
}, + { + "bbox": [ + 104, + 81, + 504, + 106 + ], + "type": "inline_equation", + "content": "\\left(\\{q_i^*(\\theta_i)\\}_{i=1}^N, q^*(\\phi)\\right)" + }, + { + "bbox": [ + 104, + 81, + 504, + 106 + ], + "type": "text", + "content": " is the minimiser of the negative ELBO (6), we clearly have " + }, + { + "bbox": [ + 104, + 81, + 504, + 106 + ], + "type": "inline_equation", + "content": "-\\mathrm{ELBO}(q^*) \\leq -\\mathrm{ELBO}(\\tilde{q})" + }, + { + "bbox": [ + 104, + 81, + 504, + 106 + ], + "type": "text", + "content": ". We plug " + }, + { + "bbox": [ + 104, + 81, + 504, + 106 + ], + "type": "inline_equation", + "content": "q^*" + }, + { + "bbox": [ + 104, + 81, + 504, + 106 + ], + "type": "text", + "content": " into (30) and apply this ELBO inequality to have:" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 148, + 110, + 505, + 143 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 148, + 110, + 505, + 143 + ], + "spans": [ + { + "bbox": [ + 148, + 110, + 505, + 143 + ], + "type": "interline_equation", + "content": "\\begin{array}{l} n \\cdot \\mathbb {E} _ {i \\sim \\mathcal {T}} \\mathbb {E} _ {q _ {i} ^ {*} (\\theta_ {i})} [ d ^ {2} (P _ {\\theta_ {i}}, P ^ {i}) ] \\leq \\mathbb {E} _ {i \\sim \\mathcal {T}} \\mathbb {E} _ {\\tilde {q} _ {i} (\\theta_ {i})} [ - \\rho (P _ {\\theta_ {i}} (D _ {i}), P ^ {i} (D _ {i})) ] + \\\\ \\mathbb {E} _ {i \\sim \\mathcal {T}} \\mathbb {E} _ {\\tilde {q} (\\phi)} \\left[ \\mathrm {K L} \\left(\\tilde {q} _ {i} \\left(\\theta_ {i}\\right) | | p \\left(\\theta_ {i} \\mid \\phi\\right)\\right) \\right] + C n \\epsilon_ {n} ^ {2}. 
\\tag {34} \\\\ \\end{array}", + "image_path": "6e1e2faa2c182916469e502f1f19562e6e3727e52a3a0a8589376bc66017227a.jpg" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 104, + 149, + 504, + 184 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 149, + 504, + 184 + ], + "spans": [ + { + "bbox": [ + 104, + 149, + 504, + 184 + ], + "type": "text", + "content": "The second term of the right hand side of (34) is constant (independent of " + }, + { + "bbox": [ + 104, + 149, + 504, + 184 + ], + "type": "inline_equation", + "content": "n" + }, + { + "bbox": [ + 104, + 149, + 504, + 184 + ], + "type": "text", + "content": ") and denoted by " + }, + { + "bbox": [ + 104, + 149, + 504, + 184 + ], + "type": "inline_equation", + "content": "\\tilde{C}" + }, + { + "bbox": [ + 104, + 149, + 504, + 184 + ], + "type": "text", + "content": ". For the first term of the right hand side, we use the following fact from the proof of Lemma 4.1 in (Bai et al., 2020), which says that with high probability," + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 203, + 190, + 505, + 205 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 203, + 190, + 505, + 205 + ], + "spans": [ + { + "bbox": [ + 203, + 190, + 505, + 205 + ], + "type": "interline_equation", + "content": "\\mathbb {E} _ {\\tilde {q} _ {i} \\left(\\theta_ {i}\\right)} \\left[ - \\rho \\left(P _ {\\theta_ {i}} \\left(D _ {i}\\right), P ^ {i} \\left(D _ {i}\\right)\\right) \\right] \\leq C ^ {\\prime} n \\left(r _ {n} + \\lambda_ {i} ^ {*}\\right), \\tag {35}", + "image_path": "6fa55d0c992c4ffdeccd845478a6f4cbf76fa70751ea2f6879c2999547fb1fcf.jpg" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 104, + 210, + 412, + 222 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 210, + 412, + 222 + ], + "spans": [ + { + "bbox": [ + 104, + 210, + 412, + 222 + ], + "type": "text", + "content": "for some constant " + }, + { + "bbox": [ + 104, + 210, + 412, + 
222 + ], + "type": "inline_equation", + "content": "C' > 0" + }, + { + "bbox": [ + 104, + 210, + 412, + 222 + ], + "type": "text", + "content": ". Using this bound, (34) can be written as follows:" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 156, + 228, + 505, + 248 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 156, + 228, + 505, + 248 + ], + "spans": [ + { + "bbox": [ + 156, + 228, + 505, + 248 + ], + "type": "interline_equation", + "content": "n \\cdot \\mathbb {E} _ {i \\sim \\mathcal {T}} \\mathbb {E} _ {q _ {i} ^ {*} (\\theta_ {i})} [ d ^ {2} (P _ {\\theta_ {i}}, P ^ {i}) ] \\leq \\tilde {C} + C ^ {\\prime} n \\left(r _ {n} + \\mathbb {E} _ {i \\sim \\mathcal {T}} \\left[ \\lambda_ {i} ^ {*} \\right]\\right) + C n \\epsilon_ {n} ^ {2}. \\tag {36}", + "image_path": "492329469d4535150a141e64f68066b4c0fe23a1a897547cdfae4148ca0c0009.jpg" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 104, + 254, + 304, + 266 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 254, + 304, + 266 + ], + "spans": [ + { + "bbox": [ + 104, + 254, + 304, + 266 + ], + "type": "text", + "content": "The proof is completed by dividing both sides by " + }, + { + "bbox": [ + 104, + 254, + 304, + 266 + ], + "type": "inline_equation", + "content": "n" + }, + { + "bbox": [ + 104, + 254, + 304, + 266 + ], + "type": "text", + "content": "."
+ } + ] + } + ], + "index": 7 + }, + { + "type": "image", + "bbox": [ + 494, + 254, + 504, + 264 + ], + "blocks": [ + { + "bbox": [ + 494, + 254, + 504, + 264 + ], + "lines": [ + { + "bbox": [ + 494, + 254, + 504, + 264 + ], + "spans": [ + { + "bbox": [ + 494, + 254, + 504, + 264 + ], + "type": "image", + "image_path": "d066f0a262db04133c0a93658e28409b628a479d92a85378045f1c1e18708df0.jpg" + } + ] + } + ], + "index": 8, + "angle": 0, + "type": "image_body" + } + ], + "index": 8 + }, + { + "bbox": [ + 105, + 281, + 255, + 294 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 281, + 255, + 294 + ], + "spans": [ + { + "bbox": [ + 105, + 281, + 255, + 294 + ], + "type": "text", + "content": "B DETAILED DERIVATIONS" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 105, + 307, + 253, + 319 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 307, + 253, + 319 + ], + "spans": [ + { + "bbox": [ + 105, + 307, + 253, + 319 + ], + "type": "text", + "content": "B.1 ELBO DERIVATION FOR (5)" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 104, + 327, + 504, + 351 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 327, + 504, + 351 + ], + "spans": [ + { + "bbox": [ + 104, + 327, + 504, + 351 + ], + "type": "text", + "content": "We derive the upper bound of the negative marginal log-likelihood for our Bayesian FSL model, that is, we derive (5) in the main paper."
+ } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 111, + 356, + 505, + 403 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 111, + 356, + 505, + 403 + ], + "spans": [ + { + "bbox": [ + 111, + 356, + 505, + 403 + ], + "type": "interline_equation", + "content": "\\begin{array}{l} \\operatorname {K L} \\left(q (\\phi , \\theta_ {1: N}) | | p (\\phi , \\theta_ {1: N} | D _ {1: N})\\right) = \\mathbb {E} _ {q} \\left[ \\log \\frac {q (\\phi) \\cdot \\prod_ {i} q _ {i} (\\theta_ {i}) \\cdot p (D _ {1 : N})}{p (\\phi) \\cdot \\prod_ {i} p (\\theta_ {i} | \\phi) \\cdot \\prod_ {i} p (D _ {i} | \\theta_ {i})} \\right] \\tag {37} \\\\ = \\log p \\left(D _ {1: N}\\right) + \\\\ \\end{array}", + "image_path": "cbddccd7cb73b26d5b1201ef5e9ab9a57374eff16b9e10db5693cc24fb8909af.jpg" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 156, + 406, + 505, + 455 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 156, + 406, + 505, + 455 + ], + "spans": [ + { + "bbox": [ + 156, + 406, + 505, + 455 + ], + "type": "interline_equation", + "content": "\\underbrace {\\operatorname {K L} \\left(q (\\phi) \\mid \\mid p (\\phi)\\right) + \\sum_ {i = 1} ^ {N} \\left(\\mathbb {E} _ {q _ {i} \\left(\\theta_ {i}\\right)} \\left[ - \\log p \\left(D _ {i} \\mid \\theta_ {i}\\right) \\right] + \\mathbb {E} _ {q (\\phi)} \\left[ \\mathrm {K L} \\left(q _ {i} \\left(\\theta_ {i}\\right) \\mid \\mid p \\left(\\theta_ {i} \\mid \\phi\\right)\\right) \\right]\\right)} _ {=: \\mathcal {L} (L)}. 
\\tag {38}", + "image_path": "d74730fbe9a8b030ec50b84ccaf4f0d5bcc58b7c0c408f3cc429b85a80aaaaeb.jpg" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 104, + 461, + 504, + 486 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 461, + 504, + 486 + ], + "spans": [ + { + "bbox": [ + 104, + 461, + 504, + 486 + ], + "type": "text", + "content": "Since the KL divergence is non-negative, " + }, + { + "bbox": [ + 104, + 461, + 504, + 486 + ], + "type": "inline_equation", + "content": "-\\mathcal{L}(L)" + }, + { + "bbox": [ + 104, + 461, + 504, + 486 + ], + "type": "text", + "content": " must be a lower bound of the data log-likelihood " + }, + { + "bbox": [ + 104, + 461, + 504, + 486 + ], + "type": "inline_equation", + "content": "\\log p(D_{1:N})" + }, + { + "bbox": [ + 104, + 461, + 504, + 486 + ], + "type": "text", + "content": ", rendering " + }, + { + "bbox": [ + 104, + 461, + 504, + 486 + ], + "type": "inline_equation", + "content": "\\mathcal{L}(L)" + }, + { + "bbox": [ + 104, + 461, + 504, + 486 + ], + "type": "text", + "content": " an upper bound of " + }, + { + "bbox": [ + 104, + 461, + 504, + 486 + ], + "type": "inline_equation", + "content": "-\\log p(D_{1:N})" + }, + { + "bbox": [ + 104, + 461, + 504, + 486 + ], + "type": "text", + "content": "."
+ } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 105, + 498, + 359, + 513 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 498, + 359, + 513 + ], + "spans": [ + { + "bbox": [ + 105, + 498, + 359, + 513 + ], + "type": "text", + "content": "B.2 DERIVATION FOR " + }, + { + "bbox": [ + 105, + 498, + 359, + 513 + ], + "type": "inline_equation", + "content": "\\mathbb{E}_{q(\\phi)}\\big[\\mathrm{KL}(q_i(\\theta_i)||p(\\theta_i|\\phi))\\big]" + }, + { + "bbox": [ + 105, + 498, + 359, + 513 + ], + "type": "text", + "content": " IN (6-7)" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 104, + 521, + 504, + 555 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 521, + 504, + 555 + ], + "spans": [ + { + "bbox": [ + 104, + 521, + 504, + 555 + ], + "type": "text", + "content": "We will derive the full closed-form formula for " + }, + { + "bbox": [ + 104, + 521, + 504, + 555 + ], + "type": "inline_equation", + "content": "\\mathbb{E}_{q(\\phi)}\\big[\\mathrm{KL}(q_i(\\theta_i)||p(\\theta_i|\\phi))\\big]" + }, + { + "bbox": [ + 104, + 521, + 504, + 555 + ], + "type": "text", + "content": ", which not only leads to equivalence between (7) and (8), but is also used in deriving (11). 
In a nutshell, the formula that we will prove is as follows:" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 105, + 560, + 505, + 604 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 560, + 505, + 604 + ], + "spans": [ + { + "bbox": [ + 105, + 560, + 505, + 604 + ], + "type": "interline_equation", + "content": "\\begin{array}{l} \\mathbb {E} _ {q (\\phi)} \\left[ \\mathrm {K L} \\left(q _ {i} \\left(\\theta_ {i}\\right) | | p \\left(\\theta_ {i} \\mid \\phi\\right)\\right) \\right] = \\tag {39} \\\\ \\frac {1}{2} \\bigg (- d \\log (2 e) + \\log \\frac {| V _ {0} |}{| V _ {i} |} - \\psi_ {d} \\Big (\\frac {n _ {0}}{2} \\Big) + \\frac {d}{l _ {0}} + n _ {0} \\big (m _ {i} - m _ {0} \\big) ^ {\\top} V _ {0} ^ {- 1} \\big (m _ {i} - m _ {0} \\big) + n _ {0} \\mathrm {T r} \\big (V _ {i} V _ {0} ^ {- 1} \\big) \\bigg), \\\\ \\end{array}", + "image_path": "73b6496bff97824bf3f9de0f801a23eeae189f0219af99f112de7dd9ee06d2c7.jpg" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 104, + 610, + 504, + 635 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 610, + 504, + 635 + ], + "spans": [ + { + "bbox": [ + 104, + 610, + 504, + 635 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 104, + 610, + 504, + 635 + ], + "type": "inline_equation", + "content": "\\psi_d(a) = \\sum_{j=1}^d \\psi(a + (1 - j)/2)" + }, + { + "bbox": [ + 104, + 610, + 504, + 635 + ], + "type": "text", + "content": " is the multivariate digamma function, and " + }, + { + "bbox": [ + 104, + 610, + 504, + 635 + ], + "type": "inline_equation", + "content": "\\psi(\\cdot)" + }, + { + "bbox": [ + 104, + 610, + 504, + 635 + ], + "type": "text", + "content": " is the digamma function." 
+ } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 104, + 640, + 310, + 652 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 640, + 310, + 652 + ], + "spans": [ + { + "bbox": [ + 104, + 640, + 310, + 652 + ], + "type": "text", + "content": "We begin with the definition of the KL divergence," + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 159, + 658, + 505, + 673 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 159, + 658, + 505, + 673 + ], + "spans": [ + { + "bbox": [ + 159, + 658, + 505, + 673 + ], + "type": "interline_equation", + "content": "\\mathbb {E} _ {q (\\phi)} \\left[ \\mathrm {K L} \\left(q _ {i} \\left(\\theta_ {i}\\right) | | p \\left(\\theta_ {i} \\mid \\phi\\right)\\right) \\right] = - \\mathbb {H} \\left(q _ {i} \\left(\\theta_ {i}\\right)\\right) + \\mathbb {E} _ {q (\\phi) q _ {i} \\left(\\theta_ {i}\\right)} \\left[ - \\log p \\left(\\theta_ {i} \\mid \\phi\\right) \\right], \\tag {40}", + "image_path": "68fc8ef6b6fa4e06ec1f0f3f72927759e91f45c990ce80ded90f507b17f1b17a.jpg" + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 104, + 677, + 504, + 702 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 677, + 504, + 702 + ], + "spans": [ + { + "bbox": [ + 104, + 677, + 504, + 702 + ], + "type": "text", + "content": "where the first term is the negative entropy which admits a closed form due to Gaussian " + }, + { + "bbox": [ + 104, + 677, + 504, + 702 + ], + "type": "inline_equation", + "content": "q_{i}(\\theta_{i}) = \\mathcal{N}(\\theta_{i};m_{i},V_{i})" + }, + { + "bbox": [ + 104, + 677, + 504, + 702 + ], + "type": "text", + "content": "," + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 220, + 707, + 505, + 731 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 220, + 707, + 505, + 731 + ], + "spans": [ + { + "bbox": [ + 220, + 707, + 505, + 731 + ], + "type": "interline_equation", + "content": "- \\mathbb {H} 
\\left(q _ {i} \\left(\\theta_ {i}\\right)\\right) = - \\frac {d}{2} \\log (2 \\pi e) - \\frac {1}{2} \\log | V _ {i} |. \\tag {41}", + "image_path": "9a63dc3d04b460bc3dbf3e1bdcf6f10277e7c22e04603c1936c489a3b5f98ad3.jpg" + } + ] + } + ], + "index": 22 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "spans": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "type": "text", + "content": "Published as a conference paper at ICLR 2024" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 299, + 750, + 310, + 760 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 299, + 750, + 310, + 760 + ], + "spans": [ + { + "bbox": [ + 299, + 750, + 310, + 760 + ], + "type": "text", + "content": "17" + } + ] + } + ], + "index": 23 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 16 + }, + { + "para_blocks": [ + { + "bbox": [ + 104, + 82, + 432, + 95 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 82, + 432, + 95 + ], + "spans": [ + { + "bbox": [ + 104, + 82, + 432, + 95 + ], + "type": "text", + "content": "Next we expand the second term of (40) using " + }, + { + "bbox": [ + 104, + 82, + 432, + 95 + ], + "type": "inline_equation", + "content": "p(\\theta_i|\\phi) = \\mathcal{N}(\\theta_i;\\mu ,\\Sigma)" + }, + { + "bbox": [ + 104, + 82, + 432, + 95 + ], + "type": "text", + "content": " as follows:" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 105, + 99, + 505, + 148 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 99, + 505, + 148 + ], + "spans": [ + { + "bbox": [ + 105, + 99, + 505, + 148 + ], + "type": "interline_equation", + "content": "\\mathbb {E} _ {q (\\phi) q _ {i} \\left(\\theta_ {i}\\right)} [ - \\log p \\left(\\theta_ {i} \\mid \\phi\\right) ] = \\underbrace {\\frac {1}{2} \\mathbb {E} _ {q (\\phi)} \\left[ \\log | \\Sigma | \\right]} _ {=: T 
_ {1}} + \\underbrace {\\frac {1}{2} \\mathbb {E} _ {q (\\phi) q _ {i} \\left(\\theta_ {i}\\right)} \\left[ \\left(\\theta_ {i} - \\mu\\right) ^ {\\top} \\Sigma^ {- 1} \\left(\\theta_ {i} - \\mu\\right) \\right]} _ {=: T _ {2}} + \\frac {d}{2} \\log (2 \\pi). \\tag {42}", + "image_path": "68a77d1d6b1098959701105cc55bbac894257eb1fcbc84d1dc47790273dee59f.jpg" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 105, + 154, + 406, + 166 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 154, + 406, + 166 + ], + "spans": [ + { + "bbox": [ + 105, + 154, + 406, + 166 + ], + "type": "text", + "content": "Using the following facts from (Bishop, 2006; Braun & McAuliffe, 2008):" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 195, + 171, + 505, + 184 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 195, + 171, + 505, + 184 + ], + "spans": [ + { + "bbox": [ + 195, + 171, + 505, + 184 + ], + "type": "interline_equation", + "content": "\\mathbb {E} _ {\\mathcal {I W} (\\Sigma ; \\Psi , \\nu)} \\log | \\Sigma | = - d \\log 2 + \\log | \\Psi | - \\psi_ {d} (\\nu / 2) \\tag {43}", + "image_path": "f058f06ae58092c39de7ba4bfb6e3a7b97de835dfeea63f3e3c1cc4ac9c94876.jpg" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 206, + 186, + 505, + 201 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 206, + 186, + 505, + 201 + ], + "spans": [ + { + "bbox": [ + 206, + 186, + 505, + 201 + ], + "type": "interline_equation", + "content": "\\mathbb {E} _ {\\mathcal {I W} (\\Sigma ; \\Psi , \\nu)} \\Sigma^ {- 1} = \\nu \\Psi^ {- 1}, \\tag {44}", + "image_path": "810c8a1499c51f8030a29f0dee33e15df3bf79c0c40e63ccd0785397b5af9304.jpg" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 104, + 206, + 506, + 220 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 206, + 506, + 220 + ], + "spans": [ + { + "bbox": [ + 104, + 206, + 506, + 220 + ], + "type": "text", + "content": "we can 
derive the two terms " + }, + { + "bbox": [ + 104, + 206, + 506, + 220 + ], + "type": "inline_equation", + "content": "T_{1}" + }, + { + "bbox": [ + 104, + 206, + 506, + 220 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 104, + 206, + 506, + 220 + ], + "type": "inline_equation", + "content": "T_{2}" + }, + { + "bbox": [ + 104, + 206, + 506, + 220 + ], + "type": "text", + "content": " as follows (Recall: " + }, + { + "bbox": [ + 104, + 206, + 506, + 220 + ], + "type": "inline_equation", + "content": "q(\\phi) = \\mathcal{N}(\\mu ;m_0,l_0^{-1}\\Sigma)\\cdot \\mathcal{I}\\mathcal{W}(\\Sigma ;V_0,n_0))" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 115, + 224, + 505, + 251 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 115, + 224, + 505, + 251 + ], + "spans": [ + { + "bbox": [ + 115, + 224, + 505, + 251 + ], + "type": "interline_equation", + "content": "\\left(T _ {1} =\\right) \\frac {1}{2} \\mathbb {E} _ {q (\\phi)} [ \\log | \\Sigma | ] = \\frac {1}{2} \\left(- d \\log 2 + \\log | V _ {0} | - \\psi_ {d} \\left(\\frac {n _ {0}}{2}\\right)\\right) \\tag {45}", + "image_path": "f69cbb57095cafe6498f2faeb7fd178b7822db403383172f80f235a235a19cc4.jpg" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 115, + 252, + 505, + 423 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 115, + 252, + 505, + 423 + ], + "spans": [ + { + "bbox": [ + 115, + 252, + 505, + 423 + ], + "type": "interline_equation", + "content": "\\begin{array}{l} \\left(T _ {2} =\\right) \\frac {1}{2} \\mathbb {E} _ {q (\\phi) q _ {i} \\left(\\theta_ {i}\\right)} \\left[ \\left(\\theta_ {i} - \\mu\\right) ^ {\\top} \\Sigma^ {- 1} \\left(\\theta_ {i} - \\mu\\right) \\right] = \\frac {1}{2} \\mathbb {E} _ {q (\\phi) q _ {i} \\left(\\theta_ {i}\\right)} \\operatorname {T r} \\left(\\left(\\theta_ {i} - \\mu\\right) \\left(\\theta_ {i} - \\mu\\right) ^ {\\top} \\Sigma^ {- 1}\\right) (46) \\\\ = \\frac {1}{2} 
\\operatorname {T r} \\left(\\mathbb {E} _ {q (\\phi)} \\left[ \\mathbb {E} _ {q _ {i} \\left(\\theta_ {i}\\right)} \\left[ \\left(\\theta_ {i} - \\mu\\right) \\left(\\theta_ {i} - \\mu\\right) ^ {\\top} \\right] \\Sigma^ {- 1} \\right]\\right) (47) \\\\ = \\frac {1}{2} \\operatorname {T r} \\left(\\mathbb {E} _ {q (\\phi)} \\left[ \\left(m _ {i} m _ {i} ^ {\\top} - \\mu m _ {i} ^ {\\top} - m _ {i} \\mu^ {\\top} + \\mu \\mu^ {\\top} + V _ {i}\\right) \\Sigma^ {- 1} \\right]\\right) (48) \\\\ = \\frac {1}{2} \\operatorname {T r} \\left(\\mathbb {E} _ {\\mathcal {I W} (\\Sigma ; V _ {0}, n _ {0})} \\left[ \\mathbb {E} _ {\\mathcal {N} (\\mu ; m _ {0}, l _ {0} ^ {- 1} \\Sigma)} \\left[ m _ {i} m _ {i} ^ {\\top} - \\mu m _ {i} ^ {\\top} - m _ {i} \\mu^ {\\top} + \\mu \\mu^ {\\top} + V _ {i} \\right] \\Sigma^ {- 1} \\right]\\right) (49) \\\\ = \\frac {1}{2} \\operatorname {T r} \\left(\\mathbb {E} _ {\\mathcal {I W} (\\Sigma ; V _ {0}, n _ {0})} \\left[ \\left(m _ {i} m _ {i} ^ {\\top} - m _ {0} m _ {i} ^ {\\top} - m _ {i} m _ {0} ^ {\\top} + m _ {0} m _ {0} ^ {\\top} + l _ {0} ^ {- 1} \\Sigma + V _ {i}\\right) \\Sigma^ {- 1} \\right]\\right) (50) \\\\ = \\frac {1}{2} \\operatorname {T r} \\left(\\frac {1}{l _ {0}} I + \\left((m _ {i} - m _ {0}) (m _ {i} - m _ {0}) ^ {\\top} + V _ {i}\\right) n _ {0} V _ {0} ^ {- 1}\\right) (51) \\\\ = \\frac {1}{2} \\left(\\frac {d}{l _ {0}} + n _ {0} (m _ {i} - m _ {0}) ^ {\\top} V _ {0} ^ {- 1} (m _ {i} - m _ {0}) + n _ {0} \\operatorname {T r} \\left(V _ {i} V _ {0} ^ {- 1}\\right)\\right) (52) \\\\ \\end{array}", + "image_path": "289eefd6a270bd25ab7c4a14dafec27357ec485c3483518c83616968d15295bd.jpg" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 105, + 428, + 331, + 440 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 428, + 331, + 440 + ], + "spans": [ + { + "bbox": [ + 105, + 428, + 331, + 440 + ], + "type": "text", + "content": "Combining all the above results yields the formula (39)." 
+ } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 105, + 453, + 265, + 464 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 453, + 265, + 464 + ], + "spans": [ + { + "bbox": [ + 105, + 453, + 265, + 464 + ], + "type": "text", + "content": "B.3 DERIVATION FOR (8) FROM (7)" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 104, + 473, + 504, + 496 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 473, + 504, + 496 + ], + "spans": [ + { + "bbox": [ + 104, + 473, + 504, + 496 + ], + "type": "text", + "content": "Using the result (39), we can easily show that the local episodic optimisation (7) in the main paper ((53) below) reduces to (8) ((54) below)." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 190, + 500, + 505, + 518 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 190, + 500, + 505, + 518 + ], + "spans": [ + { + "bbox": [ + 190, + 500, + 505, + 518 + ], + "type": "interline_equation", + "content": "\\min _ {L _ {i}} \\mathbb {E} _ {q _ {i} \\left(\\theta_ {i}; L _ {i}\\right)} \\left[ l _ {i} \\left(\\theta_ {i}\\right) \\right] + \\mathbb {E} _ {q (\\phi)} \\left[ \\mathrm {K L} \\left(q _ {i} \\left(\\theta_ {i}; L _ {i}\\right) | | p \\left(\\theta_ {i} \\mid \\phi\\right)\\right) \\right] \\tag {53}", + "image_path": "ee9bbe9f3cb752a1000d6f483119e45cee1e7de3f8eedc2d9397e97221ad0c6a.jpg" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 113, + 521, + 505, + 544 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 113, + 521, + 505, + 544 + ], + "spans": [ + { + "bbox": [ + 113, + 521, + 505, + 544 + ], + "type": "interline_equation", + "content": "\\min _ {m _ {i}, V _ {i}} \\mathbb {E} _ {\\mathcal {N} \\left(\\theta_ {i}; m _ {i}, V _ {i}\\right)} \\left[ l _ {i} \\left(\\theta_ {i}\\right) \\right] - \\frac {1}{2} \\log | V _ {i} | + \\frac {n _ {0}}{2} \\left(m _ {i} - m _ {0}\\right) ^ {\\top} V _ {0} ^ {- 1} \\left(m _ {i} 
- m _ {0}\\right) + \\frac {n _ {0}}{2} \\operatorname {T r} \\left(V _ {i} V _ {0} ^ {- 1}\\right) \\tag {54}", + "image_path": "50195efa7344a7c6efc43509cc24a826d4313d92e92c949ce0961189afce05a6.jpg" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 104, + 550, + 506, + 574 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 550, + 506, + 574 + ], + "spans": [ + { + "bbox": [ + 104, + 550, + 506, + 574 + ], + "type": "text", + "content": "Recall that the optimisation is with respect to " + }, + { + "bbox": [ + 104, + 550, + 506, + 574 + ], + "type": "inline_equation", + "content": "L_{i} = (m_{i},V_{i})" + }, + { + "bbox": [ + 104, + 550, + 506, + 574 + ], + "type": "text", + "content": " with " + }, + { + "bbox": [ + 104, + 550, + 506, + 574 + ], + "type": "inline_equation", + "content": "L_0 = \\{m_0,V_0,l_0,n_0\\}" + }, + { + "bbox": [ + 104, + 550, + 506, + 574 + ], + "type": "text", + "content": " fixed. Plugging (39) into (53) and removing the terms that do not depend on " + }, + { + "bbox": [ + 104, + 550, + 506, + 574 + ], + "type": "inline_equation", + "content": "(m_i,V_i)" + }, + { + "bbox": [ + 104, + 550, + 506, + 574 + ], + "type": "text", + "content": " leads to (54)."
+ } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 105, + 586, + 228, + 597 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 586, + 228, + 597 + ], + "spans": [ + { + "bbox": [ + 105, + 586, + 228, + 597 + ], + "type": "text", + "content": "B.4 DERIVATION FOR (10)" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 104, + 606, + 505, + 640 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 606, + 505, + 640 + ], + "spans": [ + { + "bbox": [ + 104, + 606, + 505, + 640 + ], + "type": "text", + "content": "For the quadratic approximation of " + }, + { + "bbox": [ + 104, + 606, + 505, + 640 + ], + "type": "inline_equation", + "content": "l_{i}(\\theta_{i}) = -\\log p(D_{i}|\\theta_{i}) \\approx \\frac{1}{2} (\\theta_{i} - \\overline{m}_{i})^{\\top}\\overline{A}_{i}(\\theta_{i} - \\overline{m}_{i}) + \\mathrm{const.}" + }, + { + "bbox": [ + 104, + 606, + 505, + 640 + ], + "type": "text", + "content": ", here we show that the minimiser of (8) ((54) above) can be obtained by the closed-form formula (10) ((55) below)." + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 123, + 644, + 505, + 659 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 123, + 644, + 505, + 659 + ], + "spans": [ + { + "bbox": [ + 123, + 644, + 505, + 659 + ], + "type": "interline_equation", + "content": "m _ {i} ^ {*} (L _ {0}) = (\\bar {A} _ {i} + n _ {0} V _ {0} ^ {- 1}) ^ {- 1} (\\bar {A} _ {i} \\bar {m} _ {i} + n _ {0} V _ {0} ^ {- 1} m _ {0}), \\quad V _ {i} ^ {*} (L _ {0}) = (\\bar {A} _ {i} + n _ {0} V _ {0} ^ {- 1}) ^ {- 1}. 
\\tag {55}", + "image_path": "b7bc1ab1d9694f8959a900a72708f56745abda4b573dbb8ab22b3332932c4364.jpg" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 104, + 663, + 504, + 685 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 663, + 504, + 685 + ], + "spans": [ + { + "bbox": [ + 104, + 663, + 504, + 685 + ], + "type": "text", + "content": "By replacing " + }, + { + "bbox": [ + 104, + 663, + 504, + 685 + ], + "type": "inline_equation", + "content": "l_{i}(\\theta_{i})" + }, + { + "bbox": [ + 104, + 663, + 504, + 685 + ], + "type": "text", + "content": " by the quadratic approximation, the expected loss term in (8) or (54) can be written as follows:" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 136, + 689, + 505, + 735 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 136, + 689, + 505, + 735 + ], + "spans": [ + { + "bbox": [ + 136, + 689, + 505, + 735 + ], + "type": "interline_equation", + "content": "\\begin{array}{l} \\mathbb {E} _ {\\mathcal {N} \\left(\\theta_ {i}; m _ {i}, V _ {i}\\right)} \\left[ l _ {i} \\left(\\theta_ {i}\\right) \\right] \\approx \\mathbb {E} _ {\\mathcal {N} \\left(\\theta_ {i}; m _ {i}, V _ {i}\\right)} \\left[ \\frac {1}{2} \\left(\\theta_ {i} - \\bar {m} _ {i}\\right) ^ {\\top} \\bar {A} _ {i} \\left(\\theta_ {i} - \\bar {m} _ {i}\\right) \\right] + \\text {const.} (56) \\\\ = \\frac {1}{2} \\left(\\operatorname {Tr} \\left(\\mathbb {E} \\left[ \\theta_ {i} \\theta_ {i} ^ {\\top} \\right] \\bar {A} _ {i}\\right) - \\bar {m} _ {i} ^ {\\top} \\bar {A} _ {i} m _ {i} - m _ {i} ^ {\\top} \\bar {A} _ {i} \\bar {m} _ {i} + \\bar {m} _ {i} ^ {\\top} \\bar {A} _ {i} \\bar {m} _ {i}\\right) + \\text {const.} (57) \\\\ \\end{array}", + "image_path": "113374ffbccb3a04a278f912ff781f8927d23ded5ddb30b2dd645cc28317d6ca.jpg" + } + ] + } + ], + "index": 19 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + 
"bbox": [ + 105, + 26, + 293, + 38 + ], + "spans": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "type": "text", + "content": "Published as a conference paper at ICLR 2024" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 299, + 750, + 310, + 760 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 299, + 750, + 310, + 760 + ], + "spans": [ + { + "bbox": [ + 299, + 750, + 310, + 760 + ], + "type": "text", + "content": "18" + } + ] + } + ], + "index": 20 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 17 + }, + { + "para_blocks": [ + { + "type": "code", + "bbox": [ + 114, + 95, + 397, + 167 + ], + "blocks": [ + { + "bbox": [ + 106, + 82, + 285, + 94 + ], + "lines": [ + { + "bbox": [ + 106, + 82, + 285, + 94 + ], + "spans": [ + { + "bbox": [ + 106, + 82, + 285, + 94 + ], + "type": "text", + "content": "Algorithm 2 Meta-test prediction algorithm." + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "code_caption" + }, + { + "bbox": [ + 114, + 95, + 397, + 167 + ], + "lines": [ + { + "bbox": [ + 114, + 95, + 397, + 167 + ], + "spans": [ + { + "bbox": [ + 114, + 95, + 397, + 167 + ], + "type": "text", + "content": "Input: Test support data " + }, + { + "bbox": [ + 114, + 95, + 397, + 167 + ], + "type": "inline_equation", + "content": "D^{*}" + }, + { + "bbox": [ + 114, + 95, + 397, + 167 + ], + "type": "text", + "content": " and learned " + }, + { + "bbox": [ + 114, + 95, + 397, + 167 + ], + "type": "inline_equation", + "content": "q(\\phi ;L_0)" + }, + { + "bbox": [ + 114, + 95, + 397, + 167 + ], + "type": "text", + "content": " where " + }, + { + "bbox": [ + 114, + 95, + 397, + 167 + ], + "type": "inline_equation", + "content": "L_{0} = \\{m_{0},V_{0},n_{0}\\}" + }, + { + "bbox": [ + 114, + 95, + 397, + 167 + ], + "type": "inline_equation", + "content": "M_V =" + }, + { + "bbox": [ + 114, + 95, + 397, + 167 + ], + "type": "text", + "content": " number of test-time variational inference steps. 
" + }, + { + "bbox": [ + 114, + 95, + 397, + 167 + ], + "type": "inline_equation", + "content": "M_S =" + }, + { + "bbox": [ + 114, + 95, + 397, + 167 + ], + "type": "text", + "content": " number of test-time model samples. \nCompute the mode " + }, + { + "bbox": [ + 114, + 95, + 397, + 167 + ], + "type": "inline_equation", + "content": "\\phi^{*} = (\\mu^{*} = m_{0},\\Sigma^{*} = V_{0} / (n_{0} + d + 2))" + }, + { + "bbox": [ + 114, + 95, + 397, + 167 + ], + "type": "text", + "content": " \nInitialise " + }, + { + "bbox": [ + 114, + 95, + 397, + 167 + ], + "type": "inline_equation", + "content": "(m,V)" + }, + { + "bbox": [ + 114, + 95, + 397, + 167 + ], + "type": "text", + "content": " with " + }, + { + "bbox": [ + 114, + 95, + 397, + 167 + ], + "type": "inline_equation", + "content": "(\\mu^{*},\\Sigma^{*})" + }, + { + "bbox": [ + 114, + 95, + 397, + 167 + ], + "type": "text", + "content": " \nfor " + }, + { + "bbox": [ + 114, + 95, + 397, + 167 + ], + "type": "inline_equation", + "content": "i = 1,\\dots ,M_V" + }, + { + "bbox": [ + 114, + 95, + 397, + 167 + ], + "type": "text", + "content": " do Take a gradient descent update for " + }, + { + "bbox": [ + 114, + 95, + 397, + 167 + ], + "type": "inline_equation", + "content": "(m,V)" + }, + { + "bbox": [ + 114, + 95, + 397, + 167 + ], + "type": "text", + "content": " with the objective in (64)." 
+ } + ] + } + ], + "index": 2, + "angle": 0, + "type": "code_body" + } + ], + "index": 2, + "sub_type": "algorithm" + }, + { + "bbox": [ + 115, + 167, + 144, + 175 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 115, + 167, + 144, + 175 + ], + "spans": [ + { + "bbox": [ + 115, + 167, + 144, + 175 + ], + "type": "text", + "content": "end for" + } + ] + } + ], + "index": 3 + }, + { + "type": "code", + "bbox": [ + 115, + 175, + 290, + 186 + ], + "blocks": [ + { + "bbox": [ + 115, + 175, + 290, + 186 + ], + "lines": [ + { + "bbox": [ + 115, + 175, + 290, + 186 + ], + "spans": [ + { + "bbox": [ + 115, + 175, + 290, + 186 + ], + "type": "text", + "content": "Sample " + }, + { + "bbox": [ + 115, + 175, + 290, + 186 + ], + "type": "inline_equation", + "content": "\\theta^{(s)}\\sim \\mathcal{N}(\\theta ;m,V)" + }, + { + "bbox": [ + 115, + 175, + 290, + 186 + ], + "type": "text", + "content": " for " + }, + { + "bbox": [ + 115, + 175, + 290, + 186 + ], + "type": "inline_equation", + "content": "s = 1,\\dots ,M_S" + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "code_body" + } + ], + "index": 4, + "sub_type": "algorithm" + }, + { + "type": "code", + "bbox": [ + 115, + 186, + 472, + 200 + ], + "blocks": [ + { + "bbox": [ + 115, + 186, + 472, + 200 + ], + "lines": [ + { + "bbox": [ + 115, + 186, + 472, + 200 + ], + "spans": [ + { + "bbox": [ + 115, + 186, + 472, + 200 + ], + "type": "text", + "content": "Output: Sample-averaged predictive distribution, " + }, + { + "bbox": [ + 115, + 186, + 472, + 200 + ], + "type": "inline_equation", + "content": "p(y^{*}|x^{*},D^{*},D_{1:N})\\approx \\frac{1}{M_S}\\sum_{s = 1}^{M_S}p(y^{*}|x^{*},\\theta^{(s)})" + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "code_body" + } + ], + "index": 5, + "sub_type": "algorithm" + }, + { + "bbox": [ + 160, + 219, + 505, + 266 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 160, + 219, + 505, + 266 + ], + "spans": [ + { + "bbox": [
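The last steps of Alg. 2 (sampling theta^(s) ~ N(theta; m, V) and averaging the per-sample predictives) can be sketched as follows. This is a minimal sketch, assuming numpy, a toy linear-softmax classifier, and arbitrary placeholder values for the learned L_0 = (m_0, V_0, n_0); the M_V variational refinement steps on the test support set are skipped here, so (m, V) stays at the mode (mu*, Sigma*).

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_classes, M_S = 5, 3, 200          # feature dim, classes, model samples

# Hypothetical learned top-level parameters L_0 = (m_0, V_0, n_0).
dim = n_classes * d                     # dimension of theta (toy classifier head)
m_0 = rng.standard_normal(dim)
V_0 = np.eye(dim)
n_0 = 10.0

# Mode of q(phi; L_0): phi* = (mu* = m_0, Sigma* = V_0 / (n_0 + dim + 2)).
mu_star = m_0
Sigma_star = V_0 / (n_0 + dim + 2)

# Initialise (m, V) at (mu*, Sigma*); the M_V gradient steps that refine
# (m, V) on the test support data are omitted in this sketch.
m, V = mu_star, Sigma_star

# Sample theta^(s) ~ N(m, V) and average the per-sample predictives.
L = np.linalg.cholesky(V)
x_star = rng.standard_normal(d)                      # a query input
probs = np.zeros(n_classes)
for _ in range(M_S):
    theta = m + L @ rng.standard_normal(dim)
    logits = theta.reshape(n_classes, d) @ x_star    # toy linear classifier
    p = np.exp(logits - logits.max())                # stable softmax
    probs += p / p.sum()
probs /= M_S               # (1/M_S) * sum_s p(y*|x*, theta^(s))
print(probs, probs.sum())
```

Since every per-sample softmax is a valid distribution, the sample average is one as well.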
+ 160, + 219, + 505, + 266 + ], + "type": "interline_equation", + "content": "\\begin{array}{l} = \\frac {1}{2} \\left(\\operatorname {Tr} \\left(V _ {i} \\bar {A} _ {i}\\right) + m _ {i} ^ {\\top} \\bar {A} _ {i} m _ {i} - \\bar {m} _ {i} ^ {\\top} \\bar {A} _ {i} m _ {i} - m _ {i} ^ {\\top} \\bar {A} _ {i} \\bar {m} _ {i} + \\bar {m} _ {i} ^ {\\top} \\bar {A} _ {i} \\bar {m} _ {i}\\right) + \\text {const.} (58) \\\\ = \\frac {1}{2} \\left(\\operatorname {Tr} \\left(V _ {i} \\bar {A} _ {i}\\right) + \\left(m _ {i} - \\bar {m} _ {i}\\right) ^ {\\top} \\bar {A} _ {i} \\left(m _ {i} - \\bar {m} _ {i}\\right)\\right) + \\text {const.} (59) \\\\ \\end{array}", + "image_path": "cb469fedd082ea722c665707ef126bd91a866b28e30803008f383ff7cbe64d93.jpg" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 104, + 268, + 504, + 289 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 268, + 504, + 289 + ], + "spans": [ + { + "bbox": [ + 104, + 268, + 504, + 289 + ], + "type": "text", + "content": "After plugging this back into (54), we take the derivatives of the objective with respect to " + }, + { + "bbox": [ + 104, + 268, + 504, + 289 + ], + "type": "inline_equation", + "content": "m_{i}" + }, + { + "bbox": [ + 104, + 268, + 504, + 289 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 104, + 268, + 504, + 289 + ], + "type": "inline_equation", + "content": "V_{i}" + }, + { + "bbox": [ + 104, + 268, + 504, + 289 + ], + "type": "text", + "content": " and set them to 0:" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 196, + 293, + 505, + 308 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 196, + 293, + 505, + 308 + ], + "spans": [ + { + "bbox": [ + 196, + 293, + 505, + 308 + ], + "type": "interline_equation", + "content": "\\nabla_ {m _ {i}} (\\cdot) = \\bar {A} _ {i} \\left(m _ {i} - \\bar {m} _ {i}\\right) + n _ {0} V _ {0} ^ {- 1} \\left(m _ {i} - m _ {0}\\right) = 0 \\tag {60}",
"image_path": "23f24b4f6ceb79a2d4efeb2ad907203e220c2f44ba79ebaf028ff2beab346459.jpg" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 218, + 310, + 505, + 332 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 218, + 310, + 505, + 332 + ], + "spans": [ + { + "bbox": [ + 218, + 310, + 505, + 332 + ], + "type": "interline_equation", + "content": "\\nabla_ {V _ {i}} (\\cdot) = \\frac {1}{2} \\left(\\bar {A} _ {i} - V _ {i} ^ {- 1} + n _ {0} V _ {0} ^ {- 1}\\right) = 0 \\tag {61}", + "image_path": "de15fc8719be798d5c09cfec6e1343a86ab1d2904d85b2f053a82be0e5128355.jpg" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 105, + 335, + 247, + 346 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 335, + 247, + 346 + ], + "spans": [ + { + "bbox": [ + 105, + 335, + 247, + 346 + ], + "type": "text", + "content": "The solution becomes (10) or (55)." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 105, + 359, + 228, + 370 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 359, + 228, + 370 + ], + "spans": [ + { + "bbox": [ + 105, + 359, + 228, + 370 + ], + "type": "text", + "content": "B.5 DERIVATION FOR (11)" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 104, + 380, + 504, + 403 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 380, + 504, + 403 + ], + "spans": [ + { + "bbox": [ + 104, + 380, + 504, + 403 + ], + "type": "text", + "content": "Plugging (10) (equivalently (55)) together with (39) into (6) straightforwardly yields our final optimisation problem (11) in the main paper. 
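As a numerical sanity check, the closed-form solution (55) can be verified against the stationarity conditions (60) and (61). This is a minimal sketch assuming numpy and arbitrary toy values for the per-episode quantities A_bar (for the curvature), m_bar, m_0, V_0, and n_0:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4
# Arbitrary toy values standing in for the quantities in (54)-(55).
B = rng.standard_normal((d, d))
A_bar = B @ B.T + d * np.eye(d)        # curvature of the quadratic loss (PSD)
m_bar = rng.standard_normal(d)         # centre of the quadratic approximation
m0 = rng.standard_normal(d)            # prior mean m_0
C = rng.standard_normal((d, d))
V0 = C @ C.T + d * np.eye(d)           # prior scale V_0 (PSD)
n0 = 3.0

V0_inv = np.linalg.inv(V0)
# Closed-form minimiser (55): precision is A_bar + n_0 V_0^{-1}.
P = A_bar + n0 * V0_inv
m_star = np.linalg.solve(P, A_bar @ m_bar + n0 * V0_inv @ m0)
V_star = np.linalg.inv(P)

# Stationarity conditions (60) and (61) hold at (m*, V*).
g_m = A_bar @ (m_star - m_bar) + n0 * V0_inv @ (m_star - m0)
g_V = A_bar - np.linalg.inv(V_star) + n0 * V0_inv
print(np.abs(g_m).max(), np.abs(g_V).max())
```

Both residuals vanish up to floating-point error, since (55) is exactly the point where (60) and (61) are zero.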
It is reiterated below:" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 115, + 406, + 507, + 459 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 115, + 406, + 507, + 459 + ], + "spans": [ + { + "bbox": [ + 115, + 406, + 507, + 459 + ], + "type": "interline_equation", + "content": "\\begin{array}{l} \\min _ {L _ {0}} \\mathbb {E} _ {i \\sim \\mathcal {T}} \\Big [ f _ {i} (L _ {0}) + \\frac {1}{2} g _ {i} (L _ {0}) + \\frac {d}{2 l _ {0}} \\Big ] \\quad \\text {s.t.} \\quad f _ {i} (L _ {0}) = \\mathbb {E} _ {\\epsilon \\sim \\mathcal {N} (0, I)} \\Big [ l _ {i} \\Big (m _ {i} ^ {*} (L _ {0}) + V _ {i} ^ {*} (L _ {0}) ^ {1 / 2} \\epsilon \\Big) \\Big ], \\\\ g _ {i} \\left(L _ {0}\\right) = \\log \\frac {\\left| V _ {0} \\right|}{\\left| V _ {i} ^ {*} \\left(L _ {0}\\right) \\right|} + n _ {0} \\operatorname {Tr} \\left(V _ {i} ^ {*} \\left(L _ {0}\\right) V _ {0} ^ {- 1}\\right) + n _ {0} \\left(m _ {i} ^ {*} \\left(L _ {0}\\right) - m _ {0}\\right) ^ {\\top} V _ {0} ^ {- 1} \\left(m _ {i} ^ {*} \\left(L _ {0}\\right) - m _ {0}\\right) - \\psi_ {d} \\left(\\frac {n _ {0}}{2}\\right), \\tag {62} \\\\ \\end{array}", + "image_path": "35f596341200af7e68fbda184511258100b0b4f8e9718e82028e76e84b65aa79.jpg" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 105, + 469, + 369, + 481 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 469, + 369, + 481 + ], + "spans": [ + { + "bbox": [ + 105, + 469, + 369, + 481 + ], + "type": "text", + "content": "B.6 FORMULAS FOR TEST-TIME ELBO OPTIMISATION (13)" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 104, + 490, + 504, + 525 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 490, + 504, + 525 + ], + "spans": [ + { + "bbox": [ + 104, + 490, + 504, + 525 + ], + "type": "text", + "content": "We provide formulas for the test-time ELBO in (13) ((63) below). 
For the test-time variational density " + }, + { + "bbox": [ + 104, + 490, + 504, + 525 + ], + "type": "inline_equation", + "content": "v(\\theta) = \\mathcal{N}(\\theta ;m,V)" + }, + { + "bbox": [ + 104, + 490, + 504, + 525 + ], + "type": "text", + "content": " to approximate " + }, + { + "bbox": [ + 104, + 490, + 504, + 525 + ], + "type": "inline_equation", + "content": "p(\\theta |D^{*},\\phi^{*})" + }, + { + "bbox": [ + 104, + 490, + 504, + 525 + ], + "type": "text", + "content": " for test support data " + }, + { + "bbox": [ + 104, + 490, + 504, + 525 + ], + "type": "inline_equation", + "content": "D^{*}" + }, + { + "bbox": [ + 104, + 490, + 504, + 525 + ], + "type": "text", + "content": " and learned " + }, + { + "bbox": [ + 104, + 490, + 504, + 525 + ], + "type": "inline_equation", + "content": "\\phi^{*} = (\\mu^{*} = m_{0},\\Sigma^{*} = V_{0} / (n_{0} + d + 2))" + }, + { + "bbox": [ + 104, + 490, + 504, + 525 + ], + "type": "text", + "content": ", we had" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 205, + 527, + 505, + 545 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 205, + 527, + 505, + 545 + ], + "spans": [ + { + "bbox": [ + 205, + 527, + 505, + 545 + ], + "type": "interline_equation", + "content": "\\min _ {m, V} \\mathbb {E} _ {v (\\theta)} [ - \\log p (D ^ {*} | \\theta) ] + \\operatorname {K L} (v (\\theta) | | p (\\theta | \\phi^ {*})). 
\\tag {63}", + "image_path": "ae91d6d32699c1cd60e0de0f6ddf7649a88b8f58e11e4ec17d162f5dd807b8a9.jpg" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 104, + 549, + 504, + 571 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 549, + 504, + 571 + ], + "spans": [ + { + "bbox": [ + 104, + 549, + 504, + 571 + ], + "type": "text", + "content": "Using the closed-form Gaussian KL divergence and the reparametrised sampling trick, we can express (63) as:" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 167, + 575, + 505, + 641 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 167, + 575, + 505, + 641 + ], + "spans": [ + { + "bbox": [ + 167, + 575, + 505, + 641 + ], + "type": "interline_equation", + "content": "\\begin{array}{l} \\min _ {m, V} \\left\\{\\mathbb {E} _ {\\epsilon \\sim \\mathcal {N} (0, I)} \\left[ - \\log p \\left(D ^ {*} | m + V ^ {1 / 2} \\epsilon\\right) \\right] - \\frac {1}{2} \\log | V | + \\right. \\\\ \\left. \\frac {n _ {0} + d + 2}{2} \\left(\\operatorname {Tr} \\left(V _ {0} ^ {- 1} V\\right) + (m - m _ {0}) ^ {\\top} V _ {0} ^ {- 1} (m - m _ {0})\\right) \\right\\}. \\tag {64} \\\\ \\end{array}", + "image_path": "d2d6a13b72bb5e1003917a98da98d5a9c13a125d2dfb673f6de74f68cfcba310.jpg" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 105, + 643, + 437, + 656 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 643, + 437, + 656 + ], + "spans": [ + { + "bbox": [ + 105, + 643, + 437, + 656 + ], + "type": "text", + "content": "Also, our meta-test prediction algorithm is summarised as pseudocode in Alg. 2." + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 104, + 670, + 460, + 697 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 670, + 460, + 697 + ], + "spans": [ + { + "bbox": [ + 104, + 670, + 460, + 697 + ], + "type": "text", + "content": "C TOY EXPERIMENT: WHY HIERARCHICAL BAYESIAN MODEL? 
(A DETAILED VERSION)" + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 104, + 709, + 504, + 733 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 709, + 504, + 733 + ], + "spans": [ + { + "bbox": [ + 104, + 709, + 504, + 733 + ], + "type": "text", + "content": "To demonstrate why our hierarchical Bayesian modelling is effective for few-shot meta learning problems, we devise a simple toy synthetic experiment as a proof of concept." + } + ] + } + ], + "index": 21 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "spans": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "type": "text", + "content": "Published as a conference paper at ICLR 2024" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 299, + 750, + 310, + 760 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 299, + 750, + 310, + 760 + ], + "spans": [ + { + "bbox": [ + 299, + 750, + 310, + 760 + ], + "type": "text", + "content": "19" + } + ] + } + ], + "index": 22 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 18 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 127, + 71, + 147, + 119 + ], + "blocks": [ + { + "bbox": [ + 127, + 71, + 147, + 119 + ], + "lines": [ + { + "bbox": [ + 127, + 71, + 147, + 119 + ], + "spans": [ + { + "bbox": [ + 127, + 71, + 147, + 119 + ], + "type": "image", + "image_path": "fa9e4308be15abbbd29b9f5e904640c3571920f54a145802c7fffb40c1461184.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 151, + 122, + 198, + 133 + ], + "lines": [ + { + "bbox": [ + 151, + 122, + 198, + 133 + ], + "spans": [ + { + "bbox": [ + 151, + 122, + 198, + 133 + ], + "type": "text", + "content": "(a) Model I" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_caption" + } + ], + "index": 1 + }, + { + "type": "image", + "bbox": [ + 154, + 72, + 197, 
+ 119 + ], + "blocks": [ + { + "bbox": [ + 154, + 72, + 197, + 119 + ], + "lines": [ + { + "bbox": [ + 154, + 72, + 197, + 119 + ], + "spans": [ + { + "bbox": [ + 154, + 72, + 197, + 119 + ], + "type": "image", + "image_path": "e020804862e0cf1206fc8f035fac6424255107df2a83fa7b9ac256a07db6e0dd.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 104, + 138, + 504, + 161 + ], + "lines": [ + { + "bbox": [ + 104, + 138, + 504, + 161 + ], + "spans": [ + { + "bbox": [ + 104, + 138, + 504, + 161 + ], + "type": "text", + "content": "Figure 3: (Toy experiment) Graphical models for the three competing Bayesian models. Here " + }, + { + "bbox": [ + 104, + 138, + 504, + 161 + ], + "type": "inline_equation", + "content": "\\Theta = [\\theta, \\beta]" + }, + { + "bbox": [ + 104, + 138, + 504, + 161 + ], + "type": "text", + "content": " is the concatenation of the weight and intercept random variables." + } + ] + } + ], + "index": 9, + "angle": 0, + "type": "image_caption" + } + ], + "index": 2 + }, + { + "type": "image", + "bbox": [ + 204, + 72, + 222, + 119 + ], + "blocks": [ + { + "bbox": [ + 204, + 72, + 222, + 119 + ], + "lines": [ + { + "bbox": [ + 204, + 72, + 222, + 119 + ], + "spans": [ + { + "bbox": [ + 204, + 72, + 222, + 119 + ], + "type": "image", + "image_path": "52a83a0a043bc829c481d5730ce19cccd2c6b52ecca8cd2ef9608f0b9d1d4b50.jpg" + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_body" + } + ], + "index": 4 + }, + { + "type": "image", + "bbox": [ + 258, + 72, + 351, + 120 + ], + "blocks": [ + { + "bbox": [ + 258, + 72, + 351, + 120 + ], + "lines": [ + { + "bbox": [ + 258, + 72, + 351, + 120 + ], + "spans": [ + { + "bbox": [ + 258, + 72, + 351, + 120 + ], + "type": "image", + "image_path": "417cdb65dd4e8c22d4d863bbd0001f88a7b9e969b729636a3d12af0656edb872.jpg" + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 281, + 122, + 332, + 133 + ], + "lines": [ + { + "bbox": [ + 281, 
+ 122, + 332, + 133 + ], + "spans": [ + { + "bbox": [ + 281, + 122, + 332, + 133 + ], + "type": "text", + "content": "(b) Model II" + } + ] + } + ], + "index": 6, + "angle": 0, + "type": "image_caption" + } + ], + "index": 5 + }, + { + "type": "image", + "bbox": [ + 388, + 56, + 480, + 121 + ], + "blocks": [ + { + "bbox": [ + 388, + 56, + 480, + 121 + ], + "lines": [ + { + "bbox": [ + 388, + 56, + 480, + 121 + ], + "spans": [ + { + "bbox": [ + 388, + 56, + 480, + 121 + ], + "type": "image", + "image_path": "8107665720e5024af4328a8d6ba144eff4d3a475b6bec9568b0a6fecfa64bde0.jpg" + } + ] + } + ], + "index": 7, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 392, + 122, + 474, + 133 + ], + "lines": [ + { + "bbox": [ + 392, + 122, + 474, + 133 + ], + "spans": [ + { + "bbox": [ + 392, + 122, + 474, + 133 + ], + "type": "text", + "content": "(c) Model III (Ours)" + } + ] + } + ], + "index": 8, + "angle": 0, + "type": "image_caption" + } + ], + "index": 7 + }, + { + "bbox": [ + 104, + 180, + 506, + 269 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 180, + 506, + 269 + ], + "spans": [ + { + "bbox": [ + 104, + 180, + 506, + 269 + ], + "type": "text", + "content": "The problem that we consider is basically a multi-task (Bayesian) linear regression problem. First, we generate the multi-task/episodic data by the following process: The input-output data pairs " + }, + { + "bbox": [ + 104, + 180, + 506, + 269 + ], + "type": "inline_equation", + "content": "(x\\in \\mathbb{R}^2,y\\in \\mathbb{R})" + }, + { + "bbox": [ + 104, + 180, + 506, + 269 + ], + "type": "text", + "content": " are generated from a linear model with a shared normal vector and episode-specific intercepts. 
More specifically, let " + }, + { + "bbox": [ + 104, + 180, + 506, + 269 + ], + "type": "inline_equation", + "content": "w_{\\mathrm{shared}}\\in \\mathbb{R}^2" + }, + { + "bbox": [ + 104, + 180, + 506, + 269 + ], + "type": "text", + "content": " be the episode-agnostic shared weight vector, and " + }, + { + "bbox": [ + 104, + 180, + 506, + 269 + ], + "type": "inline_equation", + "content": "\\{b_1,b_2,b_3\\}" + }, + { + "bbox": [ + 104, + 180, + 506, + 269 + ], + "type": "inline_equation", + "content": "(b_{j}\\in \\mathbb{R})" + }, + { + "bbox": [ + 104, + 180, + 506, + 269 + ], + "type": "text", + "content": " be the three candidate intercepts among which each episode can take one randomly. The actual values of the true parameters are: " + }, + { + "bbox": [ + 104, + 180, + 506, + 269 + ], + "type": "inline_equation", + "content": "w_{\\mathrm{shared}} = [-5.4282,4.9867],b_{1} = 1.4149,b_{2} = -7.5315,b_{3} = -2.8930." + }, + { + "bbox": [ + 104, + 180, + 506, + 269 + ], + "type": "text", + "content": " The true data distribution " + }, + { + "bbox": [ + 104, + 180, + 506, + 269 + ], + "type": "inline_equation", + "content": "\\mathcal{T}_i" + }, + { + "bbox": [ + 104, + 180, + 506, + 269 + ], + "type": "text", + "content": " for the episode " + }, + { + "bbox": [ + 104, + 180, + 506, + 269 + ], + "type": "inline_equation", + "content": "i(= 1,2,\\dots ,N)" + }, + { + "bbox": [ + 104, + 180, + 506, + 269 + ], + "type": "text", + "content": " is defined by the following linear process:" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 128, + 277, + 457, + 304 + ], + "type": "list", + "angle": 0, + "index": 13, + "blocks": [ + { + "bbox": [ + 128, + 277, + 457, + 289 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 128, + 277, + 457, + 289 + ], + "spans": [ + { + "bbox": [ + 128, + 277, + 457, + 289 + ], + "type": "text", + "content": "1. 
Sample the intercept ID for this episode, " + }, + { + "bbox": [ + 128, + 277, + 457, + 289 + ], + "type": "inline_equation", + "content": "j(i) \\sim \\{1, 2, 3\\}" + }, + { + "bbox": [ + 128, + 277, + 457, + 289 + ], + "type": "text", + "content": " uniformly at random." + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 128, + 292, + 400, + 304 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 128, + 292, + 400, + 304 + ], + "spans": [ + { + "bbox": [ + 128, + 292, + 400, + 304 + ], + "type": "text", + "content": "2. Repeat the following to collect " + }, + { + "bbox": [ + 128, + 292, + 400, + 304 + ], + "type": "inline_equation", + "content": "(x,y)" + }, + { + "bbox": [ + 128, + 292, + 400, + 304 + ], + "type": "text", + "content": " pairs (so that " + }, + { + "bbox": [ + 128, + 292, + 400, + 304 + ], + "type": "inline_equation", + "content": "(x,y)\\sim \\mathcal{T}_i" + }, + { + "bbox": [ + 128, + 292, + 400, + 304 + ], + "type": "text", + "content": "):" + } + ] + } + ], + "index": 12 + } + ], + "sub_type": "text" + }, + { + "bbox": [ + 250, + 308, + 504, + 323 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 250, + 308, + 504, + 323 + ], + "spans": [ + { + "bbox": [ + 250, + 308, + 504, + 323 + ], + "type": "interline_equation", + "content": "y = \\left(w _ {\\text {shared}} + \\epsilon_ {w}\\right) ^ {\\top} x + b _ {j (i)} + \\epsilon_ {y}, \\tag {65}", + "image_path": "6513d8d00b470151b7bd3056af4866cf71a4c0b693058a51b7b1a67a4ba46718.jpg" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 140, + 327, + 398, + 340 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 140, + 327, + 398, + 340 + ], + "spans": [ + { + "bbox": [ + 140, + 327, + 398, + 340 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 140, + 327, + 398, + 340 + ], + "type": "inline_equation", + "content": "x\\sim \\mathcal{N}(0,I)," + }, + { + "bbox": [ + 140, + 327, + 398, + 340 + ], + "type": 
"inline_equation", + "content": "\\epsilon_w\\sim \\mathcal{N}(0,10^{-4}I)" + }, + { + "bbox": [ + 140, + 327, + 398, + 340 + ], + "type": "text", + "content": ", and " + }, + { + "bbox": [ + 140, + 327, + 398, + 340 + ], + "type": "inline_equation", + "content": "\\epsilon_y\\sim \\mathcal{N}(0,10^{-4})" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 104, + 348, + 506, + 426 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 348, + 506, + 426 + ], + "spans": [ + { + "bbox": [ + 104, + 348, + 506, + 426 + ], + "type": "text", + "content": "In this way we ensure that the resulting episodes are not only related to one another through the shared weight vector " + }, + { + "bbox": [ + 104, + 348, + 506, + 426 + ], + "type": "inline_equation", + "content": "w_{\\mathrm{shared}}" + }, + { + "bbox": [ + 104, + 348, + 506, + 426 + ], + "type": "text", + "content": ", but also differentiated by potentially different intercepts. We generate 50 episodes where " + }, + { + "bbox": [ + 104, + 348, + 506, + 426 + ], + "type": "inline_equation", + "content": "N = 40" + }, + { + "bbox": [ + 104, + 348, + 506, + 426 + ], + "type": "text", + "content": " episodes are used for training and the remaining 10 episodes serve as test data. 
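The data-generating process above (steps 1-2 together with (65)) can be sketched as follows, assuming numpy; the true parameter values are taken from the text, and the noise standard deviations are sqrt(10^-4) = 10^-2:

```python
import numpy as np

rng = np.random.default_rng(0)

# True parameters from the text.
w_shared = np.array([-5.4282, 4.9867])
b = [1.4149, -7.5315, -2.8930]

def sample_episode(n_samples=3):
    """One episode from T_i: pick an intercept ID, then draw (x, y) via (65)."""
    j = rng.integers(3)                                   # step 1: intercept ID
    X = rng.standard_normal((n_samples, 2))               # x ~ N(0, I)
    eps_w = rng.normal(0.0, 1e-2, size=(n_samples, 2))    # eps_w ~ N(0, 1e-4 I)
    eps_y = rng.normal(0.0, 1e-2, size=n_samples)         # eps_y ~ N(0, 1e-4)
    y = ((w_shared + eps_w) * X).sum(axis=1) + b[j] + eps_y
    return X, y, j

# 40 training episodes with three (x, y) pairs each, plus 10 test episodes.
train = [sample_episode(3) for _ in range(40)]
test = [sample_episode(3) for _ in range(10)]
print(len(train), len(test), train[0][0].shape)
```

(At test time the paper also draws about 50 further query samples per test episode; only the three-sample support sets are sketched here.)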
For each training episode " + }, + { + "bbox": [ + 104, + 348, + 506, + 426 + ], + "type": "inline_equation", + "content": "i" + }, + { + "bbox": [ + 104, + 348, + 506, + 426 + ], + "type": "text", + "content": ", we have three " + }, + { + "bbox": [ + 104, + 348, + 506, + 426 + ], + "type": "inline_equation", + "content": "(x, y)" + }, + { + "bbox": [ + 104, + 348, + 506, + 426 + ], + "type": "text", + "content": " samples as an episodic training set " + }, + { + "bbox": [ + 104, + 348, + 506, + 426 + ], + "type": "inline_equation", + "content": "D_i" + }, + { + "bbox": [ + 104, + 348, + 506, + 426 + ], + "type": "text", + "content": " (all available to a training algorithm, where we make no distinction between support and query sets). At test time, we take three samples as a (labeled) support set " + }, + { + "bbox": [ + 104, + 348, + 506, + 426 + ], + "type": "inline_equation", + "content": "D_*" + }, + { + "bbox": [ + 104, + 348, + 506, + 426 + ], + "type": "text", + "content": " (* denotes each of the 10 test episodes), and test performance is measured on about 50 unseen samples from the same distribution " + }, + { + "bbox": [ + 104, + 348, + 506, + 426 + ], + "type": "inline_equation", + "content": "\\mathcal{T}_*" + }, + { + "bbox": [ + 104, + 348, + 506, + 426 + ], + "type": "text", + "content": "." + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 104, + 430, + 504, + 509 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 430, + 504, + 509 + ], + "spans": [ + { + "bbox": [ + 104, + 430, + 504, + 509 + ], + "type": "text", + "content": "Three competing Bayesian methods. We consider three Bayesian models which exhibit different degrees of flexibility and regularisation. The first one is highly flexible, modelling each individual episode independently with its own parameters, and thus lacks regularisation. 
The second case is a conventional (non-hierarchical) Bayesian model with a single parameter set shared across all episodes; it is thus heavily regularised, at the cost of flexibility. Finally, our hierarchical Bayesian model imposes balanced flexibility and regularisation by introducing the higher-level variable " + }, + { + "bbox": [ + 104, + 430, + 504, + 509 + ], + "type": "inline_equation", + "content": "\\phi" + }, + { + "bbox": [ + 104, + 430, + 504, + 509 + ], + "type": "text", + "content": " that captures the inter-episode shared information." + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 129, + 517, + 504, + 539 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 129, + 517, + 504, + 539 + ], + "spans": [ + { + "bbox": [ + 129, + 517, + 504, + 539 + ], + "type": "text", + "content": "1. Model I: This model has episode-wise parameters that are all independent, with minimal regularisation. More formally," + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 281, + 544, + 504, + 559 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 281, + 544, + 504, + 559 + ], + "spans": [ + { + "bbox": [ + 281, + 544, + 504, + 559 + ], + "type": "interline_equation", + "content": "y = \\theta_ {i} ^ {\\top} x + \\beta_ {i} + \\epsilon_ {y} \\tag {66}", + "image_path": "3e9dac665230f3a3ff6904c4522e385ef2ad35b3569e59e094aeab25f1cc73c7.jpg" + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 139, + 562, + 505, + 628 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 139, + 562, + 505, + 628 + ], + "spans": [ + { + "bbox": [ + 139, + 562, + 505, + 628 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 139, + 562, + 505, + 628 + ], + "type": "inline_equation", + "content": "(\\theta_{i},\\beta_{i})" + }, + { + "bbox": [ + 139, + 562, + 505, + 628 + ], + "type": "text", + "content": " are the parameters for episode " + }, + { + "bbox": [ + 139, + 562, + 505, + 628 + ], + "type": 
"inline_equation", + "content": "i" + }, + { + "bbox": [ + 139, + 562, + 505, + 628 + ], + "type": "text", + "content": ". We place the prior " + }, + { + "bbox": [ + 139, + 562, + 505, + 628 + ], + "type": "inline_equation", + "content": "p(\\theta_i,\\beta_i) = \\mathcal{N}(\\mu ,10^{-4}I)" + }, + { + "bbox": [ + 139, + 562, + 505, + 628 + ], + "type": "text", + "content": " with the model parameter " + }, + { + "bbox": [ + 139, + 562, + 505, + 628 + ], + "type": "inline_equation", + "content": "\\mu" + }, + { + "bbox": [ + 139, + 562, + 505, + 628 + ], + "type": "text", + "content": " shared over episodes. The training amounts to learning the parameter " + }, + { + "bbox": [ + 139, + 562, + 505, + 628 + ], + "type": "inline_equation", + "content": "\\mu \\in \\mathbb{R}^3" + }, + { + "bbox": [ + 139, + 562, + 505, + 628 + ], + "type": "text", + "content": " via marginal likelihood maximisation (i.e., " + }, + { + "bbox": [ + 139, + 562, + 505, + 628 + ], + "type": "inline_equation", + "content": "\\max_{\\mu}\\log p(D_1,\\ldots ,D_N|\\mu))" + }, + { + "bbox": [ + 139, + 562, + 505, + 628 + ], + "type": "text", + "content": ". At test time we do inference " + }, + { + "bbox": [ + 139, + 562, + 505, + 628 + ], + "type": "inline_equation", + "content": "p(\\theta_{*},\\beta_{*}|D_{*},D_{1:N})" + }, + { + "bbox": [ + 139, + 562, + 505, + 628 + ], + "type": "text", + "content": " which boils down to " + }, + { + "bbox": [ + 139, + 562, + 505, + 628 + ], + "type": "inline_equation", + "content": "p(\\theta_{*},\\beta_{*}|D_{*})" + }, + { + "bbox": [ + 139, + 562, + 505, + 628 + ], + "type": "text", + "content": " due to the cross-episode independence assumption. The graphical model diagram for the model is shown in Fig. 3(a)." 
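Model I's test-time inference p(theta_*, beta_* | D_*) is a standard conjugate Bayesian linear regression update under the Gaussian prior and Gaussian observation noise. A minimal sketch assuming numpy, a placeholder learned prior mean mu = 0, and a hypothetical three-sample support set generated with the toy parameters from the text:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical test support set D_* of three (x, y) pairs (noiseless for clarity).
X = rng.standard_normal((3, 2))
y = X @ np.array([-5.4282, 4.9867]) + 1.4149

Phi = np.hstack([X, np.ones((3, 1))])   # design matrix for Theta = [theta, beta]
mu = np.zeros(3)                        # learned prior mean (placeholder value)
s2_prior, s2_noise = 1e-4, 1e-4         # prior / observation variances

# Conjugate Gaussian posterior p(theta_*, beta_* | D_*) = N(mean, inv(Lam)):
#   Lam  = Phi^T Phi / s2_noise + I / s2_prior
#   mean = Lam^{-1} (Phi^T y / s2_noise + mu / s2_prior)
Lam = Phi.T @ Phi / s2_noise + np.eye(3) / s2_prior
mean = np.linalg.solve(Lam, Phi.T @ y / s2_noise + mu / s2_prior)
cov = np.linalg.inv(Lam)
print(mean, np.diag(cov))
```

Because each episode is independent under Model I, only the three support pairs enter this update, which is exactly the simplification p(theta_*, beta_* | D_*, D_{1:N}) = p(theta_*, beta_* | D_*) noted above.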
+ } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 128, + 632, + 506, + 654 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 128, + 632, + 506, + 654 + ], + "spans": [ + { + "bbox": [ + 128, + 632, + 506, + 654 + ], + "type": "text", + "content": "2. Model II: Rather than introducing episode-specific variables, this model has a single set of variables shared across all episodes. More specifically," + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 282, + 658, + 504, + 673 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 282, + 658, + 504, + 673 + ], + "spans": [ + { + "bbox": [ + 282, + 658, + 504, + 673 + ], + "type": "interline_equation", + "content": "y = \\theta^ {\\top} x + \\beta + \\epsilon_ {y} \\tag {67}", + "image_path": "d2db7f2a31ccc9aa49fe41fbed75b6481544339feca428d8d8d878d1a932103d.jpg" + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 139, + 676, + 505, + 733 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 139, + 676, + 505, + 733 + ], + "spans": [ + { + "bbox": [ + 139, + 676, + 505, + 733 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 139, + 676, + 505, + 733 + ], + "type": "inline_equation", + "content": "(\\theta, \\beta)" + }, + { + "bbox": [ + 139, + 676, + 505, + 733 + ], + "type": "text", + "content": " are the episode-agnostic parameters, endowed with the prior " + }, + { + "bbox": [ + 139, + 676, + 505, + 733 + ], + "type": "inline_equation", + "content": "p(\\theta, \\beta) = \\mathcal{N}(\\mu, 10^{-4}I)" + }, + { + "bbox": [ + 139, + 676, + 505, + 733 + ], + "type": "text", + "content": ". Due to this parameter sharing, this model is highly regularised but at the expense of significantly reduced flexibility. 
Once " + }, + { + "bbox": [ + 139, + 676, + 505, + 733 + ], + "type": "inline_equation", + "content": "\\mu" + }, + { + "bbox": [ + 139, + 676, + 505, + 733 + ], + "type": "text", + "content": " is trained, the inference at test time is done by " + }, + { + "bbox": [ + 139, + 676, + 505, + 733 + ], + "type": "inline_equation", + "content": "p(\\theta, \\beta | D_{*}, D_{1:N})" + }, + { + "bbox": [ + 139, + 676, + 505, + 733 + ], + "type": "text", + "content": " which is not simplified further and has to take into account all training and test data. The graphical model diagram is shown in Fig. 3(b)." + } + ] + } + ], + "index": 23 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "spans": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "type": "text", + "content": "Published as a conference paper at ICLR 2024" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 299, + 750, + 312, + 760 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 299, + 750, + 312, + 760 + ], + "spans": [ + { + "bbox": [ + 299, + 750, + 312, + 760 + ], + "type": "text", + "content": "20" + } + ] + } + ], + "index": 24 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 19 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 106, + 51, + 236, + 149 + ], + "blocks": [ + { + "bbox": [ + 106, + 51, + 236, + 149 + ], + "lines": [ + { + "bbox": [ + 106, + 51, + 236, + 149 + ], + "spans": [ + { + "bbox": [ + 106, + 51, + 236, + 149 + ], + "type": "image", + "image_path": "1972ddf90a5f4637cfff7b884059219f7897adfe733c48293a5898e96339aa2e.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 143, + 152, + 212, + 163 + ], + "lines": [ + { + "bbox": [ + 143, + 152, + 212, + 163 + ], + "spans": [ + { + "bbox": [ + 143, + 152, + 212, + 163 + ], + "type": "text", + "content": "(a) Weight dim-1" + } + ] + } + 
], + "index": 2, + "angle": 0, + "type": "image_caption" + } + ], + "index": 1 + }, + { + "type": "image", + "bbox": [ + 239, + 51, + 370, + 148 + ], + "blocks": [ + { + "bbox": [ + 239, + 51, + 370, + 148 + ], + "lines": [ + { + "bbox": [ + 239, + 51, + 370, + 148 + ], + "spans": [ + { + "bbox": [ + 239, + 51, + 370, + 148 + ], + "type": "image", + "image_path": "f3c4f89f9c7cad7353b9f556116086487a244cfd494f9557620bc7ab71412e0d.jpg" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 269, + 152, + 340, + 164 + ], + "lines": [ + { + "bbox": [ + 269, + 152, + 340, + 164 + ], + "spans": [ + { + "bbox": [ + 269, + 152, + 340, + 164 + ], + "type": "text", + "content": "(b) Weight dim-2" + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_caption" + }, + { + "bbox": [ + 104, + 167, + 504, + 225 + ], + "lines": [ + { + "bbox": [ + 104, + 167, + 504, + 225 + ], + "spans": [ + { + "bbox": [ + 104, + 167, + 504, + 225 + ], + "type": "text", + "content": "Figure 4: Toy experiments. Visualisation of the learned posterior means compared to the true values (blue-circled). (a) weight dim-1 (" + }, + { + "bbox": [ + 104, + 167, + 504, + 225 + ], + "type": "inline_equation", + "content": "\\theta[0]" + }, + { + "bbox": [ + 104, + 167, + 504, + 225 + ], + "type": "text", + "content": " vs. " + }, + { + "bbox": [ + 104, + 167, + 504, + 225 + ], + "type": "inline_equation", + "content": "w_{\\mathrm{shared}}[0]" + }, + { + "bbox": [ + 104, + 167, + 504, + 225 + ], + "type": "text", + "content": "), (b) weight dim-2 (" + }, + { + "bbox": [ + 104, + 167, + 504, + 225 + ], + "type": "inline_equation", + "content": "\\theta[1]" + }, + { + "bbox": [ + 104, + 167, + 504, + 225 + ], + "type": "text", + "content": " vs. 
" + }, + { + "bbox": [ + 104, + 167, + 504, + 225 + ], + "type": "inline_equation", + "content": "w_{\\mathrm{shared}}[1]" + }, + { + "bbox": [ + 104, + 167, + 504, + 225 + ], + "type": "text", + "content": ") and (c) intercept (" + }, + { + "bbox": [ + 104, + 167, + 504, + 225 + ], + "type": "inline_equation", + "content": "\\beta" + }, + { + "bbox": [ + 104, + 167, + 504, + 225 + ], + "type": "text", + "content": " vs. " + }, + { + "bbox": [ + 104, + 167, + 504, + 225 + ], + "type": "inline_equation", + "content": "b_{j(*)}" + }, + { + "bbox": [ + 104, + 167, + 504, + 225 + ], + "type": "text", + "content": "). In each plot, the X-axis shows the indices of the true intercepts sampled, that is, " + }, + { + "bbox": [ + 104, + 167, + 504, + 225 + ], + "type": "inline_equation", + "content": "j(*) \\in \\{1, 2, 3\\}" + }, + { + "bbox": [ + 104, + 167, + 504, + 225 + ], + "type": "text", + "content": ", for 10 test episodes. In the titles we also report the distances (errors) between the true values and the posterior means for the three methods, averaged over 10 episodes." 
+ } + ] + } + ], + "index": 7, + "angle": 0, + "type": "image_caption" + } + ], + "index": 3 + }, + { + "type": "image", + "bbox": [ + 373, + 51, + 503, + 148 + ], + "blocks": [ + { + "bbox": [ + 373, + 51, + 503, + 148 + ], + "lines": [ + { + "bbox": [ + 373, + 51, + 503, + 148 + ], + "spans": [ + { + "bbox": [ + 373, + 51, + 503, + 148 + ], + "type": "image", + "image_path": "6965c63b6c3a8c068014813c2ac10060d70dce035ad16a4b46dc8e15a1e44772.jpg" + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 415, + 152, + 466, + 164 + ], + "lines": [ + { + "bbox": [ + 415, + 152, + 466, + 164 + ], + "spans": [ + { + "bbox": [ + 415, + 152, + 466, + 164 + ], + "type": "text", + "content": "(c) Intercept" + } + ] + } + ], + "index": 6, + "angle": 0, + "type": "image_caption" + } + ], + "index": 5 + }, + { + "bbox": [ + 128, + 238, + 504, + 282 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 128, + 238, + 504, + 282 + ], + "spans": [ + { + "bbox": [ + 128, + 238, + 504, + 282 + ], + "type": "text", + "content": "3. Model III (Ours): This is our hierarchical model where each episode has its own parameters (like Model I) but there is a globally governing variable " + }, + { + "bbox": [ + 128, + 238, + 504, + 282 + ], + "type": "inline_equation", + "content": "\\phi" + }, + { + "bbox": [ + 128, + 238, + 504, + 282 + ], + "type": "text", + "content": " to regularise the episode-wise parameters. 
That is, we have the same regression form as (66) with episode-wise " + }, + { + "bbox": [ + 128, + 238, + 504, + 282 + ], + "type": "inline_equation", + "content": "(\\theta_{i},\\beta_{i})" + }, + { + "bbox": [ + 128, + 238, + 504, + 282 + ], + "type": "text", + "content": " parameters, but our prior distributions are defined hierarchically as:" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 220, + 285, + 504, + 298 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 220, + 285, + 504, + 298 + ], + "spans": [ + { + "bbox": [ + 220, + 285, + 504, + 298 + ], + "type": "interline_equation", + "content": "p (\\phi) = \\mathcal {N} (m, V), \\quad p \\left(\\theta_ {i}, \\beta_ {i} \\mid \\phi\\right) = \\mathcal {N} \\left(\\phi , 10 ^ {- 4} I\\right), \\tag {68}", + "image_path": "583a05c6ad3295d7a3f6f002914b7a29adfd04f4d56f93b25b408465b54f0529.jpg" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 140, + 300, + 504, + 346 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 140, + 300, + 504, + 346 + ], + "spans": [ + { + "bbox": [ + 140, + 300, + 504, + 346 + ], + "type": "text", + "content": "where " + }, + { + "bbox": [ + 140, + 300, + 504, + 346 + ], + "type": "inline_equation", + "content": "(m,V)" + }, + { + "bbox": [ + 140, + 300, + 504, + 346 + ], + "type": "text", + "content": " are the model parameters to be learned. 
At test time, we infer " + }, + { + "bbox": [ + 140, + 300, + 504, + 346 + ], + "type": "inline_equation", + "content": "p(\\theta_{*},\\beta_{*}|D_{*},D_{1:N})" + }, + { + "bbox": [ + 140, + 300, + 504, + 346 + ], + "type": "text", + "content": "; however, unlike in Model II, each training data " + }, + { + "bbox": [ + 140, + 300, + 504, + 346 + ], + "type": "inline_equation", + "content": "D_{i}" + }, + { + "bbox": [ + 140, + 300, + 504, + 346 + ], + "type": "text", + "content": " and the test data " + }, + { + "bbox": [ + 140, + 300, + 504, + 346 + ], + "type": "inline_equation", + "content": "D_{*}" + }, + { + "bbox": [ + 140, + 300, + 504, + 346 + ], + "type": "text", + "content": " do not have equal, symmetric contributions. Note that the training data affect the posterior only indirectly, through the higher-level variable " + }, + { + "bbox": [ + 140, + 300, + 504, + 346 + ], + "type": "inline_equation", + "content": "\\phi" + }, + { + "bbox": [ + 140, + 300, + 504, + 346 + ], + "type": "text", + "content": " as follows:" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 197, + 349, + 504, + 373 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 197, + 349, + 504, + 373 + ], + "spans": [ + { + "bbox": [ + 197, + 349, + 504, + 373 + ], + "type": "interline_equation", + "content": "p \\left(\\theta_ {*}, \\beta_ {*} \\mid D _ {*}, D _ {1: N}\\right) = \\int p \\left(\\theta_ {*}, \\beta_ {*} \\mid \\phi , D _ {*}\\right) p \\left(\\phi \\mid D _ {*}, D _ {1: N}\\right) d \\phi . 
\\tag {69}", + "image_path": "d2cb3e603cb1f53be9919ba664b55e4601b4881f5252b775f6ae5eea5ea2fc72.jpg" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 140, + 376, + 504, + 409 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 140, + 376, + 504, + 409 + ], + "spans": [ + { + "bbox": [ + 140, + 376, + 504, + 409 + ], + "type": "text", + "content": "That is, while our model is as flexible as Model I due to the episode-wise parameters, their impacts on the test prediction are controlled/regularised in a very sensible manner. The graphical model diagram is shown in Fig. 3(c)." + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 104, + 417, + 504, + 450 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 417, + 504, + 450 + ], + "spans": [ + { + "bbox": [ + 104, + 417, + 504, + 450 + ], + "type": "text", + "content": "Note that all the posterior inferences of the above three models can be done in closed form due to the linear-Gaussian properties. The details of the posterior distributions and derivations as well as the model training can be found in Sec. C.1." + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 104, + 456, + 506, + 677 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 456, + 506, + 677 + ], + "spans": [ + { + "bbox": [ + 104, + 456, + 506, + 677 + ], + "type": "text", + "content": "Results. For the 10 test episodes, we obtain the posterior means of the weights and intercept parameters " + }, + { + "bbox": [ + 104, + 456, + 506, + 677 + ], + "type": "inline_equation", + "content": "\\mathbb{E}[\\theta ,\\beta |D_{*},D_{1:N}]" + }, + { + "bbox": [ + 104, + 456, + 506, + 677 + ], + "type": "text", + "content": " for the three models via the closed-form solutions as detailed in Sec. C.1. 
The test support set size is " + }, + { + "bbox": [ + 104, + 456, + 506, + 677 + ], + "type": "inline_equation", + "content": "|D_{*}| = 3" + }, + { + "bbox": [ + 104, + 456, + 506, + 677 + ], + "type": "text", + "content": ". With these posterior means, we predict the outputs of about 50 unseen test inputs. The mean absolute errors (MAE) of the three models averaged over 10 test episodes are: Model I = 2.87, Model II = 3.13, and Model III (ours) = 1.28, clearly showing the superiority of our model over the competing methods. In Fig. 4 we also visualise the posterior means to check how much they deviate from the true weights and intercept, namely the difference between the posterior mean of " + }, + { + "bbox": [ + 104, + 456, + 506, + 677 + ], + "type": "inline_equation", + "content": "(\\theta ,\\beta)" + }, + { + "bbox": [ + 104, + 456, + 506, + 677 + ], + "type": "text", + "content": " and the true " + }, + { + "bbox": [ + 104, + 456, + 506, + 677 + ], + "type": "inline_equation", + "content": "(w_{\\mathrm{shared}},b_{j(*)})" + }, + { + "bbox": [ + 104, + 456, + 506, + 677 + ], + "type": "text", + "content": ". First, we see that Model II's posterior means rarely change over the test episodes; in other words, the impact of the test support data " + }, + { + "bbox": [ + 104, + 456, + 506, + 677 + ], + "type": "inline_equation", + "content": "D_{*}" + }, + { + "bbox": [ + 104, + 456, + 506, + 677 + ], + "type": "text", + "content": " is diminished. This behaviour is expected since the model imposes too much regularisation with little flexibility, and the test prediction is dominated by the mean model obtained from training data " + }, + { + "bbox": [ + 104, + 456, + 506, + 677 + ], + "type": "inline_equation", + "content": "D_{1:N}" + }, + { + "bbox": [ + 104, + 456, + 506, + 677 + ], + "type": "text", + "content": ". Secondly, Model I exhibits highly sensitive predictions over the test episodes, which mainly originates from little regularisation. 
In other words, the posterior is affected too sensitively by the current episode's support data, thus being vulnerable to overfitting, especially when the support data size is small, as is typical in few-shot learning. The model failed to capture useful shared information, in this case " + }, + { + "bbox": [ + 104, + 456, + 506, + 677 + ], + "type": "inline_equation", + "content": "w_{\\mathrm{shared}}" + }, + { + "bbox": [ + 104, + 456, + 506, + 677 + ], + "type": "text", + "content": ", from diverse training episodes. On the other hand, our Model III strikes a balance between the above two extremes, imposing a proper amount of regularisation while endowing adequate flexibility. Our posterior estimation best extracts the shared episode-agnostic information (the weight parameters fluctuate less over the test episodes), and at the same time, captures the episode-specific features most accurately (the estimated intercepts are aligned well with the true values)." + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 104, + 689, + 432, + 700 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 689, + 432, + 700 + ], + "spans": [ + { + "bbox": [ + 104, + 689, + 432, + 700 + ], + "type": "text", + "content": "C.1 DERIVATIONS FOR TRAINING AND POSTERIORS IN TOY EXPERIMENTS" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 104, + 709, + 504, + 733 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 709, + 504, + 733 + ], + "spans": [ + { + "bbox": [ + 104, + 709, + 504, + 733 + ], + "type": "text", + "content": "For notational convenience, we let " + }, + { + "bbox": [ + 104, + 709, + 504, + 733 + ], + "type": "inline_equation", + "content": "\\Theta = [\\theta ,\\beta ]\\in \\mathbb{R}^3" + }, + { + "bbox": [ + 104, + 709, + 504, + 733 + ], + "type": "text", + "content": " be the concatenated random variables (subscripts can be applied accordingly). 
The training data " + }, + { + "bbox": [ + 104, + 709, + 504, + 733 + ], + "type": "inline_equation", + "content": "D_{i}" + }, + { + "bbox": [ + 104, + 709, + 504, + 733 + ], + "type": "text", + "content": " for each episode " + }, + { + "bbox": [ + 104, + 709, + 504, + 733 + ], + "type": "inline_equation", + "content": "i" + }, + { + "bbox": [ + 104, + 709, + 504, + 733 + ], + "type": "text", + "content": " consists of the inputs" + } + ] + } + ], + "index": 16 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "spans": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "type": "text", + "content": "Published as a conference paper at ICLR 2024" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 299, + 750, + 310, + 760 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 299, + 750, + 310, + 760 + ], + "spans": [ + { + "bbox": [ + 299, + 750, + 310, + 760 + ], + "type": "text", + "content": "21" + } + ] + } + ], + "index": 17 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 20 + }, + { + "para_blocks": [ + { + "bbox": [ + 104, + 82, + 506, + 118 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 82, + 506, + 118 + ], + "spans": [ + { + "bbox": [ + 104, + 82, + 506, + 118 + ], + "type": "text", + "content": "with the constant 1 appended, denoted as " + }, + { + "bbox": [ + 104, + 82, + 506, + 118 + ], + "type": "inline_equation", + "content": "X_{i} \\in \\mathbb{R}^{n \\times 3}" + }, + { + "bbox": [ + 104, + 82, + 506, + 118 + ], + "type": "text", + "content": ", and the outputs " + }, + { + "bbox": [ + 104, + 82, + 506, + 118 + ], + "type": "inline_equation", + "content": "Y_{i} \\in \\mathbb{R}^{n}" + }, + { + "bbox": [ + 104, + 82, + 506, + 118 + ], + "type": "text", + "content": ". 
With this notation, the linear regression model can be succinctly written as " + }, + { + "bbox": [ + 104, + 82, + 506, + 118 + ], + "type": "inline_equation", + "content": "Y_{i} = X_{i}\\Theta_{i}" + }, + { + "bbox": [ + 104, + 82, + 506, + 118 + ], + "type": "text", + "content": ". Similarly, the test-time support data " + }, + { + "bbox": [ + 104, + 82, + 506, + 118 + ], + "type": "inline_equation", + "content": "D_{*}" + }, + { + "bbox": [ + 104, + 82, + 506, + 118 + ], + "type": "text", + "content": " is decomposed into " + }, + { + "bbox": [ + 104, + 82, + 506, + 118 + ], + "type": "inline_equation", + "content": "(X_{*}, Y_{*})" + }, + { + "bbox": [ + 104, + 82, + 506, + 118 + ], + "type": "text", + "content": ". The noise standard deviation is denoted by " + }, + { + "bbox": [ + 104, + 82, + 506, + 118 + ], + "type": "inline_equation", + "content": "\\sigma = 10^{-2}" + }, + { + "bbox": [ + 104, + 82, + 506, + 118 + ], + "type": "text", + "content": "." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 105, + 127, + 182, + 138 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 127, + 182, + 138 + ], + "spans": [ + { + "bbox": [ + 105, + 127, + 182, + 138 + ], + "type": "text", + "content": "C.1.1 MODEL I" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 104, + 147, + 405, + 159 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 147, + 405, + 159 + ], + "spans": [ + { + "bbox": [ + 104, + 147, + 405, + 159 + ], + "type": "text", + "content": "Model I (the episode-wise independent model) can be formally written as:" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 171, + 163, + 505, + 177 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 171, + 163, + 505, + 177 + ], + "spans": [ + { + "bbox": [ + 171, + 163, + 505, + 177 + ], + "type": "interline_equation", + "content": "p \\left(\\Theta_ {i}\\right) = \\mathcal {N} \\left(\\Theta_ {i}; \\mu , \\sigma^ {2} I\\right), 
\\quad p \\left(D _ {i} \\mid \\Theta_ {i}\\right) = \\mathcal {N} \\left(Y _ {i}; X _ {i} \\Theta_ {i}, \\sigma^ {2} I\\right), \\quad \\forall i. \\tag {70}", + "image_path": "f2ec502e3393c32e827baa424fd7bc862bff3646e43d4685abc8bc7276bff99f.jpg" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 104, + 182, + 504, + 205 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 182, + 504, + 205 + ], + "spans": [ + { + "bbox": [ + 104, + 182, + 504, + 205 + ], + "type": "text", + "content": "The posterior " + }, + { + "bbox": [ + 104, + 182, + 504, + 205 + ], + "type": "inline_equation", + "content": "p(\\Theta_1, \\ldots, \\Theta_N | D_1, \\ldots, D_N)" + }, + { + "bbox": [ + 104, + 182, + 504, + 205 + ], + "type": "text", + "content": " is fully factorised over " + }, + { + "bbox": [ + 104, + 182, + 504, + 205 + ], + "type": "inline_equation", + "content": "i" + }, + { + "bbox": [ + 104, + 182, + 504, + 205 + ], + "type": "text", + "content": ", and we can deal with individual terms " + }, + { + "bbox": [ + 104, + 182, + 504, + 205 + ], + "type": "inline_equation", + "content": "p(\\Theta_i | D_i)" + }, + { + "bbox": [ + 104, + 182, + 504, + 205 + ], + "type": "text", + "content": " where" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 164, + 210, + 505, + 225 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 164, + 210, + 505, + 225 + ], + "spans": [ + { + "bbox": [ + 164, + 210, + 505, + 225 + ], + "type": "interline_equation", + "content": "p (\\Theta_ {i} | D _ {i}) \\propto p (\\Theta_ {i}) \\cdot p (D _ {i} | \\Theta_ {i}) = \\mathcal {N} (\\Theta_ {i}; \\mu , \\sigma^ {2} I) \\cdot \\mathcal {N} (Y _ {i}; X _ {i} \\Theta_ {i}, \\sigma^ {2} I). 
\\tag {71}", + "image_path": "a2e5cd1afbd143ae3677f4f5bf31c2f3c71fe56b7e0670108d790d9e9ca6f370.jpg" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 104, + 229, + 405, + 241 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 229, + 405, + 241 + ], + "spans": [ + { + "bbox": [ + 104, + 229, + 405, + 241 + ], + "type": "text", + "content": "Due to the product-of-Gaussians form, we have the closed-form posterior:" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 119, + 246, + 505, + 270 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 119, + 246, + 505, + 270 + ], + "spans": [ + { + "bbox": [ + 119, + 246, + 505, + 270 + ], + "type": "interline_equation", + "content": "p \\left(\\Theta_ {i} \\mid D _ {i}\\right) = \\mathcal {N} \\left(\\Theta_ {i}; A _ {i} ^ {- 1} b _ {i}, A _ {i} ^ {- 1}\\right) \\text{ where } A _ {i} = \\frac {1}{\\sigma^ {2}} \\left(I + X _ {i} ^ {\\top} X _ {i}\\right), b _ {i} = \\frac {1}{\\sigma^ {2}} \\left(\\mu + X _ {i} ^ {\\top} Y _ {i}\\right). 
\\tag {72}", + "image_path": "4a3c183ed49eaf00c1586606cef51bb7823e1796a0142379798bbcb48c2b46f8.jpg" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 104, + 274, + 504, + 297 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 274, + 504, + 297 + ], + "spans": [ + { + "bbox": [ + 104, + 274, + 504, + 297 + ], + "type": "text", + "content": "The training amounts to maximising the data (log-)likelihood, " + }, + { + "bbox": [ + 104, + 274, + 504, + 297 + ], + "type": "inline_equation", + "content": "\\max_{\\mu}\\log p(D_1,\\ldots ,D_N|\\mu)" + }, + { + "bbox": [ + 104, + 274, + 504, + 297 + ], + "type": "text", + "content": " where the objective is fully decomposed over " + }, + { + "bbox": [ + 104, + 274, + 504, + 297 + ], + "type": "inline_equation", + "content": "i" + }, + { + "bbox": [ + 104, + 274, + 504, + 297 + ], + "type": "text", + "content": " as follows:" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 162, + 302, + 505, + 335 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 162, + 302, + 505, + 335 + ], + "spans": [ + { + "bbox": [ + 162, + 302, + 505, + 335 + ], + "type": "interline_equation", + "content": "\\log p \\left(D _ {1}, \\dots , D _ {N} \\mid \\mu\\right) = \\sum_ {i = 1} ^ {N} \\log p \\left(D _ {i}\\right) \\quad \\text{where} \\tag {73}", + "image_path": "6d360e9c037cf6987f09d90769200abb5e42a770f1dbc08f7096b818c6f9413c.jpg" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 162, + 336, + 505, + 364 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 162, + 336, + 505, + 364 + ], + "spans": [ + { + "bbox": [ + 162, + 336, + 505, + 364 + ], + "type": "interline_equation", + "content": "\\begin{array}{l} \\log p \\left(D _ {i}\\right) = \\log p \\left(\\Theta_ {i}, D _ {i}\\right) - \\log p \\left(\\Theta_ {i} \\mid D _ {i}\\right) \\quad (\\text{for any } \\Theta_ {i}) \\quad (74) \\\\ = \\log p \\left(\\Theta_ {i}\\right) + \\log p \\left(D _ 
{i} \\mid \\Theta_ {i}\\right) - \\log p \\left(\\Theta_ {i} \\mid D _ {i}\\right) \\quad (\\text {f o r a n y} \\Theta_ {i}). (75) \\\\ \\end{array}", + "image_path": "da1efe4b9052611d15b0e96fa1bea4ff110cf9508c6db478e22c51958c9f0f7d.jpg" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 104, + 369, + 504, + 414 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 369, + 504, + 414 + ], + "spans": [ + { + "bbox": [ + 104, + 369, + 504, + 414 + ], + "type": "text", + "content": "Using the Gaussian posterior form (72), we can easily evaluate " + }, + { + "bbox": [ + 104, + 369, + 504, + 414 + ], + "type": "inline_equation", + "content": "\\log p(D_1, \\ldots, D_N | \\mu)" + }, + { + "bbox": [ + 104, + 369, + 504, + 414 + ], + "type": "text", + "content": " and also optimise it with respect to " + }, + { + "bbox": [ + 104, + 369, + 504, + 414 + ], + "type": "inline_equation", + "content": "\\mu" + }, + { + "bbox": [ + 104, + 369, + 504, + 414 + ], + "type": "text", + "content": ". 
At test time, the posterior of " + }, + { + "bbox": [ + 104, + 369, + 504, + 414 + ], + "type": "inline_equation", + "content": "\\Theta_*" + }, + { + "bbox": [ + 104, + 369, + 504, + 414 + ], + "type": "text", + "content": " given all the training data and the test support data, " + }, + { + "bbox": [ + 104, + 369, + 504, + 414 + ], + "type": "inline_equation", + "content": "p(\\Theta_* | D_*, D_1, \\ldots, D_N)" + }, + { + "bbox": [ + 104, + 369, + 504, + 414 + ], + "type": "text", + "content": " equals " + }, + { + "bbox": [ + 104, + 369, + 504, + 414 + ], + "type": "inline_equation", + "content": "p(\\Theta_* | D_*)" + }, + { + "bbox": [ + 104, + 369, + 504, + 414 + ], + "type": "text", + "content": " due to the independence assumption, and admits the same Gaussian form as (72) with the test support data " + }, + { + "bbox": [ + 104, + 369, + 504, + 414 + ], + "type": "inline_equation", + "content": "(X_*, Y_*)" + }, + { + "bbox": [ + 104, + 369, + 504, + 414 + ], + "type": "text", + "content": " in the place of " + }, + { + "bbox": [ + 104, + 369, + 504, + 414 + ], + "type": "inline_equation", + "content": "(X_i, Y_i)" + }, + { + "bbox": [ + 104, + 369, + 504, + 414 + ], + "type": "text", + "content": "." 
+ } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 105, + 425, + 186, + 436 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 425, + 186, + 436 + ], + "spans": [ + { + "bbox": [ + 105, + 425, + 186, + 436 + ], + "type": "text", + "content": "C.1.2 MODEL II" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 104, + 445, + 395, + 456 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 445, + 395, + 456 + ], + "spans": [ + { + "bbox": [ + 104, + 445, + 395, + 456 + ], + "type": "text", + "content": "Model II (the shared model across episodes) can be formally written as:" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 189, + 460, + 505, + 475 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 189, + 460, + 505, + 475 + ], + "spans": [ + { + "bbox": [ + 189, + 460, + 505, + 475 + ], + "type": "interline_equation", + "content": "p (\\Theta) = \\mathcal {N} (\\Theta ; \\mu , \\sigma^ {2} I), \\quad p (D _ {i} | \\Theta) = \\mathcal {N} (Y _ {i}; X _ {i} \\Theta , \\sigma^ {2} I). 
\\tag {76}", + "image_path": "3075fe475f12c891fdb3f79fb0b2ba0e3a2f869d030268fb7f8ef9783a5bd9bd.jpg" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 104, + 479, + 345, + 491 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 479, + 345, + 491 + ], + "spans": [ + { + "bbox": [ + 104, + 479, + 345, + 491 + ], + "type": "text", + "content": "The posterior " + }, + { + "bbox": [ + 104, + 479, + 345, + 491 + ], + "type": "inline_equation", + "content": "p(\\Theta |D_1,\\dots ,D_N)" + }, + { + "bbox": [ + 104, + 479, + 345, + 491 + ], + "type": "text", + "content": " can be derived as follows:" + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 138, + 497, + 505, + 530 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 138, + 497, + 505, + 530 + ], + "spans": [ + { + "bbox": [ + 138, + 497, + 505, + 530 + ], + "type": "interline_equation", + "content": "p (\\Theta | D _ {1}, \\dots , D _ {N}) \\propto p (\\Theta) \\cdot \\prod_ {i = 1} ^ {N} p (D _ {i} | \\Theta) = \\mathcal {N} (\\Theta ; \\mu , \\sigma^ {2} I) \\cdot \\prod_ {i = 1} ^ {N} \\mathcal {N} \\left(Y _ {i}; X _ {i} \\Theta , \\sigma^ {2} I\\right). 
\\tag {77}", + "image_path": "9877f903589a1e042c53108b18eababc6d7a9fcad1f8af4294913d0fb8921538.jpg" + } + ] + } + ], + "index": 17 + }, + { + "bbox": [ + 104, + 535, + 430, + 547 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 535, + 430, + 547 + ], + "spans": [ + { + "bbox": [ + 104, + 535, + 430, + 547 + ], + "type": "text", + "content": "Again, due to the product-of-Gaussians form, we have the closed-form posterior:" + } + ] + } + ], + "index": 18 + }, + { + "bbox": [ + 179, + 551, + 376, + 566 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 179, + 551, + 376, + 566 + ], + "spans": [ + { + "bbox": [ + 179, + 551, + 376, + 566 + ], + "type": "interline_equation", + "content": "p (\\Theta | D _ {1}, \\ldots , D _ {N}) = \\mathcal {N} (\\Theta ; A ^ {- 1} b, A ^ {- 1}) \\text{ where }", + "image_path": "b0a4fa7b2daef96a903c86b0ec0fdc13361e8ff9b8c1a9bcc723b541ed9cef6b.jpg" + } + ] + } + ], + "index": 19 + }, + { + "bbox": [ + 200, + 567, + 505, + 600 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 200, + 567, + 505, + 600 + ], + "spans": [ + { + "bbox": [ + 200, + 567, + 505, + 600 + ], + "type": "interline_equation", + "content": "A = \\frac {1}{\\sigma^ {2}} \\left(I + \\sum_ {i = 1} ^ {N} X _ {i} ^ {\\top} X _ {i}\\right), b = \\frac {1}{\\sigma^ {2}} \\left(\\mu + \\sum_ {i = 1} ^ {N} X _ {i} ^ {\\top} Y _ {i}\\right). 
\\tag {78}", + "image_path": "56ff3bdddf46e4d00133885e932aeaf70c1078bfb78a0ce031cf80b22fa70ea2.jpg" + } + ] + } + ], + "index": 20 + }, + { + "bbox": [ + 104, + 605, + 504, + 628 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 605, + 504, + 628 + ], + "spans": [ + { + "bbox": [ + 104, + 605, + 504, + 628 + ], + "type": "text", + "content": "Likewise, the training amounts to maximising the data (log-)likelihood, " + }, + { + "bbox": [ + 104, + 605, + 504, + 628 + ], + "type": "inline_equation", + "content": "\\max_{\\mu}\\log p(D_1,\\ldots ,D_N|\\mu)" + }, + { + "bbox": [ + 104, + 605, + 504, + 628 + ], + "type": "text", + "content": " where the objective becomes:" + } + ] + } + ], + "index": 21 + }, + { + "bbox": [ + 119, + 633, + 505, + 681 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 119, + 633, + 505, + 681 + ], + "spans": [ + { + "bbox": [ + 119, + 633, + 505, + 681 + ], + "type": "interline_equation", + "content": "\\begin{array}{l} \\log p \\left(D _ {1}, \\dots , D _ {N} \\mid \\mu\\right) = \\log p (\\Theta , D _ {1}, \\dots , D _ {N}) - \\log p (\\Theta | D _ {1}, \\dots , D _ {N}) \\quad (\\text{for any } \\Theta) \\quad (79) \\\\ = \\log p (\\Theta) + \\sum_ {i = 1} ^ {N} \\log p \\left(D _ {i} \\mid \\Theta\\right) - \\log p (\\Theta \\mid D _ {1}, \\dots , D _ {N}) \\quad (\\text{for any } \\Theta). 
(80) \\\\ \\end{array}", + "image_path": "0007bf23472e1252490fdd63412d201bda4d29833e4b22ae511d56a644a90f99.jpg" + } + ] + } + ], + "index": 22 + }, + { + "bbox": [ + 104, + 688, + 505, + 733 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 688, + 505, + 733 + ], + "spans": [ + { + "bbox": [ + 104, + 688, + 505, + 733 + ], + "type": "text", + "content": "Again, using the Gaussian posterior form (78), we can easily evaluate " + }, + { + "bbox": [ + 104, + 688, + 505, + 733 + ], + "type": "inline_equation", + "content": "\\log p(D_1,\\dots ,D_N|\\mu)" + }, + { + "bbox": [ + 104, + 688, + 505, + 733 + ], + "type": "text", + "content": " and also optimise it with respect to " + }, + { + "bbox": [ + 104, + 688, + 505, + 733 + ], + "type": "inline_equation", + "content": "\\mu" + }, + { + "bbox": [ + 104, + 688, + 505, + 733 + ], + "type": "text", + "content": ". At test time, the posterior of " + }, + { + "bbox": [ + 104, + 688, + 505, + 733 + ], + "type": "inline_equation", + "content": "\\Theta_{*}" + }, + { + "bbox": [ + 104, + 688, + 505, + 733 + ], + "type": "text", + "content": " given all the training data and the test support data, " + }, + { + "bbox": [ + 104, + 688, + 505, + 733 + ], + "type": "inline_equation", + "content": "p(\\Theta_{*}|D_{*},D_{1},\\ldots ,D_{N})" + }, + { + "bbox": [ + 104, + 688, + 505, + 733 + ], + "type": "text", + "content": " admits a Gaussian form, derived similarly to (78) with the test support data statistics " + }, + { + "bbox": [ + 104, + 688, + 505, + 733 + ], + "type": "inline_equation", + "content": "X_{*}^{\\top}X_{*}" + }, + { + "bbox": [ + 104, + 688, + 505, + 733 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 104, + 688, + 505, + 733 + ], + "type": "inline_equation", + "content": "X_{*}^{\\top}Y_{*}" + }, + { + "bbox": [ + 104, + 688, + 505, + 733 + ], + "type": "text", + "content": " added to the training statistics."
+ } + ] + } + ], + "index": 23 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "spans": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "type": "text", + "content": "Published as a conference paper at ICLR 2024" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 299, + 750, + 312, + 761 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 299, + 750, + 312, + 761 + ], + "spans": [ + { + "bbox": [ + 299, + 750, + 312, + 761 + ], + "type": "text", + "content": "22" + } + ] + } + ], + "index": 24 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 21 + }, + { + "para_blocks": [ + { + "bbox": [ + 105, + 82, + 224, + 94 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 82, + 224, + 94 + ], + "spans": [ + { + "bbox": [ + 105, + 82, + 224, + 94 + ], + "type": "text", + "content": "C.1.3 MODEL III (OURS)" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 104, + 101, + 412, + 114 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 101, + 412, + 114 + ], + "spans": [ + { + "bbox": [ + 104, + 101, + 412, + 114 + ], + "type": "text", + "content": "Our Model III (the hierarchical Bayesian model) can be formally written as:" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 167, + 116, + 505, + 129 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 167, + 116, + 505, + 129 + ], + "spans": [ + { + "bbox": [ + 167, + 116, + 505, + 129 + ], + "type": "interline_equation", + "content": "p (\\phi) = \\mathcal {N} (\\phi ; m, V), \\tag {81}", + "image_path": "47e05841ad6139d9af4c48538a2f755b486a0cc02e8aba1d148ebff29705c290.jpg" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 167, + 131, + 505, + 145 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 167, + 131, + 505, + 145 + ], + "spans": [ + { + "bbox": [ + 
167, + 131, + 505, + 145 + ], + "type": "interline_equation", + "content": "p \\left(\\Theta_ {i} | \\phi\\right) = \\mathcal {N} \\left(\\Theta_ {i}; \\phi , \\sigma^ {2} I\\right), \\quad p \\left(D _ {i} \\mid \\Theta_ {i}\\right) = \\mathcal {N} \\left(Y _ {i}; X _ {i} \\Theta_ {i}, \\sigma^ {2} I\\right), \\quad \\forall i. \\tag {82}", + "image_path": "56abad4f4c8d06fdbfad439044df70746a479f222c9a95928749678090f34ddb.jpg" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 104, + 148, + 343, + 159 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 148, + 343, + 159 + ], + "spans": [ + { + "bbox": [ + 104, + 148, + 343, + 159 + ], + "type": "text", + "content": "The posterior " + }, + { + "bbox": [ + 104, + 148, + 343, + 159 + ], + "type": "inline_equation", + "content": "p(\\phi |D_1,\\dots ,D_N)" + }, + { + "bbox": [ + 104, + 148, + 343, + 159 + ], + "type": "text", + "content": " can be derived as follows:" + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 121, + 163, + 505, + 327 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 121, + 163, + 505, + 327 + ], + "spans": [ + { + "bbox": [ + 121, + 163, + 505, + 327 + ], + "type": "interline_equation", + "content": "\\begin{array}{l} p \\left(\\phi \\mid D _ {1}, \\dots , D _ {N}\\right) \\propto p (\\phi) \\cdot \\int p \\left(\\Theta_ {1}, \\dots , \\Theta_ {N} \\mid \\phi\\right) \\cdot p \\left(D _ {1}, \\dots , D _ {N} \\mid \\Theta_ {1}, \\dots , \\Theta_ {N}\\right) d \\Theta_ {1: N} (83) \\\\ = p (\\phi) \\cdot \\int \\prod_ {i = 1} ^ {N} \\left(p \\left(\\Theta_ {i} | \\phi\\right) \\cdot p \\left(D _ {i} \\mid \\Theta_ {i}\\right)\\right) d \\Theta_ {1: N} (84) \\\\ = p (\\phi) \\cdot \\prod_ {i = 1} ^ {N} \\int p \\left(\\Theta_ {i} \\mid \\phi\\right) \\cdot p \\left(D _ {i} \\mid \\Theta_ {i}\\right) d \\Theta_ {i} (85) \\\\ = \\mathcal {N} (\\phi ; m, V) \\cdot \\prod_ {i = 1} ^ {N} \\int \\mathcal {N} \\left(\\Theta_ {i}; \\phi , 
\\sigma^ {2} I\\right) \\cdot \\mathcal {N} \\left(Y _ {i}; X _ {i} \\Theta_ {i}, \\sigma^ {2} I\\right) d \\Theta_ {i} (86) \\\\ = \\mathcal {N} (\\phi ; m, V) \\cdot \\prod_ {i = 1} ^ {N} \\mathcal {N} \\left(Y _ {i}; X _ {i} \\phi , \\sigma^ {2} \\left(I + X _ {i} X _ {i} ^ {\\top}\\right)\\right), (87) \\\\ \\end{array}", + "image_path": "6fdbb43247d41eb5d1a923defa79e70b4a8547f740511de8a0274f77fe077dad.jpg" + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 104, + 330, + 504, + 353 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 330, + 504, + 353 + ], + "spans": [ + { + "bbox": [ + 104, + 330, + 504, + 353 + ], + "type": "text", + "content": "where we use the property of the product of Gaussians for the derivation from (86) to (87). Now, in (87), due to the product-of-Gaussians form, we have the closed-form posterior:" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 111, + 354, + 304, + 368 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 111, + 354, + 304, + 368 + ], + "spans": [ + { + "bbox": [ + 111, + 354, + 304, + 368 + ], + "type": "inline_equation", + "content": "p(\\phi |D_1,\\ldots ,D_N) = \\mathcal{N}(\\phi ;A^{-1}b,A^{-1})" + }, + { + "bbox": [ + 111, + 354, + 304, + 368 + ], + "type": "text", + "content": " where" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 127, + 371, + 505, + 403 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 127, + 371, + 505, + 403 + ], + "spans": [ + { + "bbox": [ + 127, + 371, + 505, + 403 + ], + "type": "interline_equation", + "content": "A = V ^ {- 1} + \\frac {1}{\\sigma^ {2}} \\sum_ {i = 1} ^ {N} X _ {i} ^ {\\top} \\left(X _ {i} X _ {i} ^ {\\top} + I\\right) ^ {- 1} X _ {i}, b = V ^ {- 1} m + \\frac {1}{\\sigma^ {2}} \\sum_ {i = 1} ^ {N} X _ {i} ^ {\\top} \\left(X _ {i} X _ {i} ^ {\\top} + I\\right) ^ {- 1} Y _ {i}. 
\\tag {88}", + "image_path": "d36db6f52f8986af4d73007ccaa769a1b7726c5f641eaddeadd4b5f5f85fa379.jpg" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 104, + 407, + 504, + 430 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 407, + 504, + 430 + ], + "spans": [ + { + "bbox": [ + 104, + 407, + 504, + 430 + ], + "type": "text", + "content": "The training amounts to maximising the data (log-)likelihood, " + }, + { + "bbox": [ + 104, + 407, + 504, + 430 + ], + "type": "inline_equation", + "content": "\\max_{m,V}\\log p(D_1,\\ldots ,D_N|m,V)" + }, + { + "bbox": [ + 104, + 407, + 504, + 430 + ], + "type": "text", + "content": " where the objective becomes:" + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 115, + 432, + 505, + 460 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 115, + 432, + 505, + 460 + ], + "spans": [ + { + "bbox": [ + 115, + 432, + 505, + 460 + ], + "type": "interline_equation", + "content": "\\begin{array}{l} \\log p \\left(D _ {1}, \\dots , D _ {N} \\mid m, V\\right) = \\log p (\\phi , D _ {1}, \\dots , D _ {N}) - \\log p (\\phi \\mid D _ {1}, \\dots , D _ {N}) \\quad (\\text {for any } \\phi) (89) \\\\ = \\log p (\\phi) + \\log p \\left(D _ {1}, \\dots , D _ {N} \\mid \\phi\\right) - \\log p \\left(\\phi \\mid D _ {1}, \\dots , D _ {N}\\right) \\quad (\\text {for any } \\phi), (90) \\\\ \\end{array}", + "image_path": "d339f60cf30a927bc72598abe43e2c0f9461206c3388fbc104ca6b25240efc6b.jpg" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 104, + 462, + 470, + 474 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 462, + 470, + 474 + ], + "spans": [ + { + "bbox": [ + 104, + 462, + 470, + 474 + ], + "type": "text", + "content": "and the " + }, + { + "bbox": [ + 104, + 462, + 470, + 474 + ], + "type": "inline_equation", + "content": "\\phi" + }, + { + "bbox": [ + 104, + 462, + 470, + 474 + ], + "type": "text", + "content": "-conditioned data log-likelihood 
" + }, + { + "bbox": [ + 104, + 462, + 470, + 474 + ], + "type": "inline_equation", + "content": "\\log p(D_1, \\ldots, D_N|\\phi)" + }, + { + "bbox": [ + 104, + 462, + 470, + 474 + ], + "type": "text", + "content": " can be derived as follows:" + } + ] + } + ], + "index": 12 + }, + { + "bbox": [ + 171, + 478, + 505, + 615 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 171, + 478, + 505, + 615 + ], + "spans": [ + { + "bbox": [ + 171, + 478, + 505, + 615 + ], + "type": "interline_equation", + "content": "\\begin{array}{l} \\log p \\left(D _ {1}, \\dots , D _ {N} \\mid \\phi\\right) = \\log \\int \\prod_ {i = 1} ^ {N} \\left(p \\left(\\Theta_ {i} \\mid \\phi\\right) \\cdot p \\left(D _ {i} \\mid \\Theta_ {i}\\right)\\right) d \\Theta_ {1: N} (91) \\\\ = \\log \\prod_ {i = 1} ^ {N} \\int p (\\Theta_ {i} | \\phi) \\cdot p (D _ {i} | \\Theta_ {i}) d \\Theta_ {i} (92) \\\\ = \\sum_ {i = 1} ^ {N} \\log \\int p (\\Theta_ {i} | \\phi) \\cdot p (D _ {i} | \\Theta_ {i}) d \\Theta_ {i} (93) \\\\ = \\sum_ {i = 1} ^ {N} \\log \\mathcal {N} \\left(Y _ {i}; X _ {i} \\phi , \\sigma^ {2} \\left(I + X _ {i} X _ {i} ^ {\\top}\\right)\\right). 
(94) \\\\ \\end{array}", + "image_path": "e45284e9461ff4048ad2cbd7cde55d69037e48b6b188d9efd33c4fc262a6a218.jpg" + } + ] + } + ], + "index": 13 + }, + { + "bbox": [ + 104, + 618, + 506, + 653 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 618, + 506, + 653 + ], + "spans": [ + { + "bbox": [ + 104, + 618, + 506, + 653 + ], + "type": "text", + "content": "So we can easily evaluate " + }, + { + "bbox": [ + 104, + 618, + 506, + 653 + ], + "type": "inline_equation", + "content": "\\log p(D_1, \\ldots, D_N | m, V)" + }, + { + "bbox": [ + 104, + 618, + 506, + 653 + ], + "type": "text", + "content": " and also optimise it with respect to " + }, + { + "bbox": [ + 104, + 618, + 506, + 653 + ], + "type": "inline_equation", + "content": "(m, V)" + }, + { + "bbox": [ + 104, + 618, + 506, + 653 + ], + "type": "text", + "content": ". At test time, the posterior of " + }, + { + "bbox": [ + 104, + 618, + 506, + 653 + ], + "type": "inline_equation", + "content": "\\Theta_*" + }, + { + "bbox": [ + 104, + 618, + 506, + 653 + ], + "type": "text", + "content": " given all the training data and the test support data, " + }, + { + "bbox": [ + 104, + 618, + 506, + 653 + ], + "type": "inline_equation", + "content": "p(\\Theta_* | D_*, D_1, \\ldots, D_N)" + }, + { + "bbox": [ + 104, + 618, + 506, + 653 + ], + "type": "text", + "content": " can be derived as follows. 
We start with:" + } + ] + } + ], + "index": 14 + }, + { + "bbox": [ + 159, + 655, + 505, + 680 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 159, + 655, + 505, + 680 + ], + "spans": [ + { + "bbox": [ + 159, + 655, + 505, + 680 + ], + "type": "interline_equation", + "content": "p \\left(\\Theta_ {*} \\mid D _ {*}, D _ {1}, \\dots , D _ {N}\\right) = \\int p \\left(\\Theta_ {*} \\mid \\phi , D _ {*}\\right) \\cdot p \\left(\\phi \\mid D _ {*}, D _ {1}, \\dots , D _ {N}\\right) d \\phi , \\tag {95}", + "image_path": "c896d78816433be51bf8461881874259274a8796ab41fb9819df3021a1d82612.jpg" + } + ] + } + ], + "index": 15 + }, + { + "bbox": [ + 104, + 683, + 504, + 717 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 683, + 504, + 717 + ], + "spans": [ + { + "bbox": [ + 104, + 683, + 504, + 717 + ], + "type": "text", + "content": "and the first term in the integration, " + }, + { + "bbox": [ + 104, + 683, + 504, + 717 + ], + "type": "inline_equation", + "content": "p(\\Theta_{*}|\\phi ,D_{*})" + }, + { + "bbox": [ + 104, + 683, + 504, + 717 + ], + "type": "text", + "content": ", being proportional to the product of two Gaussians " + }, + { + "bbox": [ + 104, + 683, + 504, + 717 + ], + "type": "inline_equation", + "content": "p(\\Theta_{*}|\\phi ,D_{*})\\propto p(D_{*}|\\Theta_{*})\\cdot p(\\Theta_{*}|\\phi) = \\mathcal{N}(Y_{*};X_{*}\\Theta_{*},\\sigma^{2}I)\\cdot \\mathcal{N}(\\Theta_{*};\\phi ,\\sigma^{2}I)" + }, + { + "bbox": [ + 104, + 683, + 504, + 717 + ], + "type": "text", + "content": ", admits a Gaussian form (from the Gaussian properties)," + } + ] + } + ], + "index": 16 + }, + { + "bbox": [ + 153, + 719, + 505, + 734 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 153, + 719, + 505, + 734 + ], + "spans": [ + { + "bbox": [ + 153, + 719, + 505, + 734 + ], + "type": "interline_equation", + "content": "p \\left(\\Theta_ {*} \\mid \\phi , D _ {*}\\right) = \\mathcal {N} 
\\left(\\Theta_ {*}; C \\phi + d, E\\right), \\quad \\text {where} \\tag {96}", + "image_path": "10eb3bf51ac7ef108393eb06d411afd4fe85b92c8f4e50cea6bf0095215612dc.jpg" + } + ] + } + ], + "index": 17 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "spans": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "type": "text", + "content": "Published as a conference paper at ICLR 2024" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 299, + 750, + 310, + 760 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 299, + 750, + 310, + 760 + ], + "spans": [ + { + "bbox": [ + 299, + 750, + 310, + 760 + ], + "type": "text", + "content": "23" + } + ] + } + ], + "index": 18 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 22 + }, + { + "para_blocks": [ + { + "bbox": [ + 183, + 80, + 505, + 95 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 183, + 80, + 505, + 95 + ], + "spans": [ + { + "bbox": [ + 183, + 80, + 505, + 95 + ], + "type": "interline_equation", + "content": "C = I - K X _ {*}, \\quad d = K Y _ {*}, \\quad E = \\sigma^ {2} C, \\quad K = X _ {*} ^ {\\top} \\left(X _ {*} X _ {*} ^ {\\top} + I\\right) ^ {- 1}. 
\\tag {97}", + "image_path": "128a0ca462a1bc194177de0022e2e68b1640001dab791c462378768687efc5ba.jpg" + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 104, + 100, + 506, + 144 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 100, + 506, + 144 + ], + "spans": [ + { + "bbox": [ + 104, + 100, + 506, + 144 + ], + "type": "text", + "content": "The second term of (95) becomes a Gaussian, following a derivation similar to (88) with the test support data " + }, + { + "bbox": [ + 104, + 100, + 506, + 144 + ], + "type": "inline_equation", + "content": "X_{*}" + }, + { + "bbox": [ + 104, + 100, + 506, + 144 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 104, + 100, + 506, + 144 + ], + "type": "inline_equation", + "content": "Y_{*}" + }, + { + "bbox": [ + 104, + 100, + 506, + 144 + ], + "type": "text", + "content": " included. Consequently, we let " + }, + { + "bbox": [ + 104, + 100, + 506, + 144 + ], + "type": "inline_equation", + "content": "p(\\phi |D_{*},D_{1},\\ldots ,D_{N}) = \\mathcal{N}(\\phi ;A_{*}^{-1}b_{*},A_{*}^{-1})" + }, + { + "bbox": [ + 104, + 100, + 506, + 144 + ], + "type": "text", + "content": ". Finally, (95) is the marginalisation of the product of two Gaussians, which admits the following closed form:" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 170, + 149, + 506, + 165 + ], + "type": "interline_equation", + "angle": 0, + "lines": [ + { + "bbox": [ + 170, + 149, + 506, + 165 + ], + "spans": [ + { + "bbox": [ + 170, + 149, + 506, + 165 + ], + "type": "interline_equation", + "content": "p \\left(\\Theta_ {*} \\mid D _ {*}, D _ {1}, \\dots , D _ {N}\\right) = \\mathcal {N} \\left(\\Theta_ {*}; C A _ {*} ^ {- 1} b _ {*} + d, C A _ {*} ^ {- 1} C ^ {\\top} + E\\right). 
\\tag {98}", + "image_path": "e48b75d41a55ff6604bf39335eea18159476ff35c84444e3c59b04d82c2f814a.jpg" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 105, + 178, + 434, + 192 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 178, + 434, + 192 + ], + "spans": [ + { + "bbox": [ + 105, + 178, + 434, + 192 + ], + "type": "text", + "content": "D IMPLEMENTATION DETAILS AND EXPERIMENTAL SETTINGS" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 104, + 203, + 506, + 324 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 203, + 506, + 324 + ], + "spans": [ + { + "bbox": [ + 104, + 203, + 506, + 324 + ], + "type": "text", + "content": "We implement our NIW-Meta using PyTorch (Paszke et al., 2017) and the Higher (Grefenstette et al., 2019)" + }, + { + "bbox": [ + 104, + 203, + 506, + 324 + ], + "type": "inline_equation", + "content": "^4" + }, + { + "bbox": [ + 104, + 203, + 506, + 324 + ], + "type": "text", + "content": " library. The latter makes backpropagation through functional network weights in PyTorch modules straightforward to implement. Code for the synthetic SineLine regression dataset and the large-scale ViT is also attached in the Supplementary Material to aid understanding of our algorithm. For all few-shot classification experiments, we use the ProtoNet-like parameter-free NCC head in our NIW-Meta. Some important implementation details on the SGLD iterations for the quadratic approximation of the local episodic optimisation: we take either 3 steps without burn-in (for the large-scale ViT backbones) or 5 steps with 2 burn-in steps (for the smaller ConvNet, ResNet-18, and CNP backbones). Before starting SGLD iterations, the network is initialised with the current model parameters " + }, + { + "bbox": [ + 104, + 203, + 506, + 324 + ], + "type": "inline_equation", + "content": "m_0" + }, + { + "bbox": [ + 104, + 203, + 506, + 324 + ], + "type": "text", + "content": ". 
For reliable variance estimation of " + }, + { + "bbox": [ + 104, + 203, + 506, + 324 + ], + "type": "inline_equation", + "content": "\\overline{A}_i" + }, + { + "bbox": [ + 104, + 203, + 506, + 324 + ], + "type": "text", + "content": ", a small regulariser is added to the diagonal entries of the variances." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 104, + 330, + 506, + 419 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 330, + 506, + 419 + ], + "spans": [ + { + "bbox": [ + 104, + 330, + 506, + 419 + ], + "type": "text", + "content": "For the standard benchmarks with ConvNet/ResNet backbones, we follow the standard protocols of (Wang et al., 2019; Mangla et al., 2020; Zhang et al., 2021): With 64/16/20 and 391/97/160 train/validation/test class splits for miniImageNet and tieredImageNet datasets, respectively, the images are resized to 84×84 pixels. We initialise the " + }, + { + "bbox": [ + 104, + 330, + 506, + 419 + ], + "type": "inline_equation", + "content": "m_0" + }, + { + "bbox": [ + 104, + 330, + 506, + 419 + ], + "type": "text", + "content": " parameters from the pretrained models: checkpoints from (Wang et al., 2019) for Conv-4 and ResNet-18 and checkpoints from (Mangla et al., 2020) for WRN-28-10. With the stochastic gradient descent (SGD) optimizer, we set momentum 0.9, weight decay 0.0001, and initial learning rate 0.01 for miniImageNet and 0.001 for tieredImageNet. For the learning rate schedule, we reduce the learning rate by a factor of 0.1 at epoch 70." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 104, + 423, + 506, + 491 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 423, + 506, + 491 + ], + "spans": [ + { + "bbox": [ + 104, + 423, + 506, + 491 + ], + "type": "text", + "content": "For the large-scale ViT backbones, we utilise the code base from (Hu et al., 2022). 
We use the self-supervised pretrained checkpoints from (Caron et al., 2021) to initialise the " + }, + { + "bbox": [ + 104, + 423, + 506, + 491 + ], + "type": "inline_equation", + "content": "m_0" + }, + { + "bbox": [ + 104, + 423, + 506, + 491 + ], + "type": "text", + "content": " parameters. The CIFAR-FS dataset is formed by splitting the original CIFAR-100 into 64/16/20 train/validation/test classes. For training, we run 100 epochs, each epoch comprised of 2000 episodes. We follow the same warm-up plus cosine annealing learning rate scheduling as (Hu et al., 2022). For test evaluation, we have 600 episodes from the test splits." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 104, + 495, + 507, + 617 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 495, + 507, + 617 + ], + "spans": [ + { + "bbox": [ + 104, + 495, + 507, + 617 + ], + "type": "text", + "content": "For the few-shot regression experiments with ShapeNet datasets, we basically follow all experimental settings and CNP/ANP network architectures from (Gao et al., 2022). For instance, in the ShapeNet-1D dataset, we run our algorithm for " + }, + { + "bbox": [ + 104, + 495, + 507, + 617 + ], + "type": "inline_equation", + "content": "500K" + }, + { + "bbox": [ + 104, + 495, + 507, + 617 + ], + "type": "text", + "content": " iterations with learning rate " + }, + { + "bbox": [ + 104, + 495, + 507, + 617 + ], + "type": "inline_equation", + "content": "10^{-4}" + }, + { + "bbox": [ + 104, + 495, + 507, + 617 + ], + "type": "text", + "content": " where each batch iteration consists of 10 episodes. The CNP backbone, for instance, in the Distractor dataset case, has a ResNet image encoder and a linear target encoder, where the concatenated instance-wise embeddings then go through a three-layer fully connected network followed by max pooling. The decoder has a similar architecture and converts the support set embedding and a query image into a target label. 
For the conv-net plus ridge-regression head backbone " + }, + { + "bbox": [ + 104, + 495, + 507, + 617 + ], + "type": "inline_equation", + "content": "(\\mathrm{C} + \\mathrm{R})" + }, + { + "bbox": [ + 104, + 495, + 507, + 617 + ], + "type": "text", + "content": " tested for our method, the conv-net feature extractors are formed by taking the encoder parts of the CNP architectures in (Gao et al., 2022) while discarding the pooling operations and decoders. Also, the ridge-regression L2 regularisation coefficient is set to " + }, + { + "bbox": [ + 104, + 495, + 507, + 617 + ], + "type": "inline_equation", + "content": "\\lambda = 1.0" + }, + { + "bbox": [ + 104, + 495, + 507, + 617 + ], + "type": "text", + "content": " for all datasets." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 105, + 632, + 505, + 645 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 632, + 505, + 645 + ], + "spans": [ + { + "bbox": [ + 105, + 632, + 505, + 645 + ], + "type": "text", + "content": "E COMPUTATIONAL COMPLEXITY, RUNNING TIME AND MEMORY FOOTPRINT" + } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 104, + 657, + 506, + 715 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 657, + 506, + 715 + ], + "spans": [ + { + "bbox": [ + 104, + 657, + 506, + 715 + ], + "type": "text", + "content": "Although we have introduced a principled Bayesian model/framework for FSL with solid theoretical support, the extra steps in our training/test algorithms may appear more complicated than simple feed-forward workflows (e.g., ProtoNet (Snell et al., 2017)). To address this concern, we have analysed the time complexity of the proposed algorithm contrasted with ProtoNet (Snell et al., 2017). For fair comparison, our approach adopts the same NCC head on top of the feature space as ProtoNet."
+ } + ] + } + ], + "index": 10 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "spans": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "type": "text", + "content": "Published as a conference paper at ICLR 2024" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 116, + 720, + 351, + 732 + ], + "type": "page_footnote", + "angle": 0, + "lines": [ + { + "bbox": [ + 116, + 720, + 351, + 732 + ], + "spans": [ + { + "bbox": [ + 116, + 720, + 351, + 732 + ], + "type": "text", + "content": "4https://github.com/facebookresearch/higher" + } + ] + } + ], + "index": 11 + }, + { + "bbox": [ + 299, + 750, + 312, + 760 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 299, + 750, + 312, + 760 + ], + "spans": [ + { + "bbox": [ + 299, + 750, + 312, + 760 + ], + "type": "text", + "content": "24" + } + ] + } + ], + "index": 12 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 23 + }, + { + "para_blocks": [ + { + "bbox": [ + 104, + 89, + 506, + 156 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 89, + 506, + 156 + ], + "spans": [ + { + "bbox": [ + 104, + 89, + 506, + 156 + ], + "type": "text", + "content": "Table 8: (Per-episode) Time complexity of our NIW-Meta vs. ProtoNet. 
We denote by " + }, + { + "bbox": [ + 104, + 89, + 506, + 156 + ], + "type": "inline_equation", + "content": "F_{D}" + }, + { + "bbox": [ + 104, + 89, + 506, + 156 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 104, + 89, + 506, + 156 + ], + "type": "inline_equation", + "content": "B_{D}" + }, + { + "bbox": [ + 104, + 89, + 506, + 156 + ], + "type": "text", + "content": " the forward-pass and backpropagation times with data " + }, + { + "bbox": [ + 104, + 89, + 506, + 156 + ], + "type": "inline_equation", + "content": "D =" + }, + { + "bbox": [ + 104, + 89, + 506, + 156 + ], + "type": "text", + "content": " Support or Query. In our algorithm, " + }, + { + "bbox": [ + 104, + 89, + 506, + 156 + ], + "type": "inline_equation", + "content": "M_{L}, M_{V}," + }, + { + "bbox": [ + 104, + 89, + 506, + 156 + ], + "type": "text", + "content": " and " + }, + { + "bbox": [ + 104, + 89, + 506, + 156 + ], + "type": "inline_equation", + "content": "M_{S}" + }, + { + "bbox": [ + 104, + 89, + 506, + 156 + ], + "type": "text", + "content": " indicate the numbers of SGLD iterations, test-time variational inference steps for (13) or (63,64), and the number of test-time model samples " + }, + { + "bbox": [ + 104, + 89, + 506, + 156 + ], + "type": "inline_equation", + "content": "\\theta^{(s)}" + }, + { + "bbox": [ + 104, + 89, + 506, + 156 + ], + "type": "text", + "content": ", respectively. The costs required for reparametrised sampling in model space and regulariser computation in (11) or (62) are denoted by " + }, + { + "bbox": [ + 104, + 89, + 506, + 156 + ], + "type": "inline_equation", + "content": "O(d)" + }, + { + "bbox": [ + 104, + 89, + 506, + 156 + ], + "type": "text", + "content": " where " + }, + { + "bbox": [ + 104, + 89, + 506, + 156 + ], + "type": "inline_equation", + "content": "d =" + }, + { + "bbox": [ + 104, + 89, + 506, + 156 + ], + "type": "text", + "content": " number of backbone parameters." 
+ } + ] + } + ], + "index": 1 + }, + { + "type": "table", + "bbox": [ + 156, + 157, + 451, + 210 + ], + "blocks": [ + { + "bbox": [ + 156, + 157, + 451, + 210 + ], + "lines": [ + { + "bbox": [ + 156, + 157, + 451, + 210 + ], + "spans": [ + { + "bbox": [ + 156, + 157, + 451, + 210 + ], + "type": "table", + "html": "
Training timeTest time
NIW-Meta (Ours)(FS+FQ+BQ)·(ML+1)+O(d)(FS+BS)·MV+(FS+FQ)·MS+O(d)
ProtoNetFS+FQ+BQFS+FQ
", + "image_path": "ce14b8f279f69034432e5a3ebdfed2faaf60316bc1bb8c2441e5502b482527d2.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "table_body" + } + ], + "index": 2 + }, + { + "type": "table", + "bbox": [ + 157, + 265, + 449, + 319 + ], + "blocks": [ + { + "bbox": [ + 104, + 241, + 504, + 264 + ], + "lines": [ + { + "bbox": [ + 104, + 241, + 504, + 264 + ], + "spans": [ + { + "bbox": [ + 104, + 241, + 504, + 264 + ], + "type": "text", + "content": "Table 9: Effect of the finite number of episodes " + }, + { + "bbox": [ + 104, + 241, + 504, + 264 + ], + "type": "inline_equation", + "content": "(N)" + }, + { + "bbox": [ + 104, + 241, + 504, + 264 + ], + "type": "text", + "content": " on the generalization error gap in terms of the sample complexity gap " + }, + { + "bbox": [ + 104, + 241, + 504, + 264 + ], + "type": "inline_equation", + "content": "\\log (2\\sqrt{nN} /\\delta) / (nN)" + }, + { + "bbox": [ + 104, + 241, + 504, + 264 + ], + "type": "text", + "content": " in (21). Here " + }, + { + "bbox": [ + 104, + 241, + 504, + 264 + ], + "type": "inline_equation", + "content": "\\delta = 0.001" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 157, + 265, + 449, + 319 + ], + "lines": [ + { + "bbox": [ + 157, + 265, + 449, + 319 + ], + "spans": [ + { + "bbox": [ + 157, + 265, + 449, + 319 + ], + "type": "table", + "html": "
N (#episodes)n (#shots × #ways)Sample complexity error gap (↓)
1031×5 / 5×52.4×10-3/ 5.0×10-4
1041×5 / 5×53.0×10-4/ 5.5×10-5
1051×5 / 5×52.8×10-5/ 6.0×10-6
", + "image_path": "e96a5584932b4535797f582df31372cb03b604c491d73c865ad0eb6d3425c745.jpg" + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "table_body" + } + ], + "index": 4 + }, + { + "bbox": [ + 104, + 352, + 504, + 386 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 352, + 504, + 386 + ], + "spans": [ + { + "bbox": [ + 104, + 352, + 504, + 386 + ], + "type": "text", + "content": "The computational complexity is summarised in Table 8. Despite the seemingly increased complexity in the training/test algorithms, our method incurs only constant-factor overhead compared to the minimal-cost ProtoNet." + } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 104, + 391, + 506, + 449 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 391, + 506, + 449 + ], + "spans": [ + { + "bbox": [ + 104, + 391, + 506, + 449 + ], + "type": "text", + "content": "As we claimed in the main paper, one of the main drawbacks of MAML (Finn et al., 2017) is the computational overhead of keeping track of a large computational graph for inner gradient descent steps. Unlike MAML, our NIW-Meta has a much more efficient episodic optimisation strategy, i.e., our local episodic optimisation only computes the (constant) first/second-order moment statistics of the episodic loss function without storing the full optimisation trace." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 104, + 452, + 506, + 552 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 452, + 506, + 552 + ], + "spans": [ + { + "bbox": [ + 104, + 452, + 506, + 552 + ], + "type": "text", + "content": "To verify this, we measure and compare the memory footprints and running times of MAML and NIW-Meta on two real-world classification/regression datasets: miniImageNet 1-shot with the ResNet-18 backbone and ShapeNet-1D with the ConvNet backbone. The results in Fig. 
5 show that NIW-Meta has a far lower memory requirement than MAML (even smaller than 1-inner-step MAML), while MAML suffers from heavy memory usage that increases nearly linearly with the number of inner steps. The running times of our NIW-Meta are not prohibitively longer than MAML's; the main computational bottleneck is the SGLD iterations for the quadratic approximation in the local episodic optimisation. We tested two scenarios with 2 and 5 SGLD iterations, and we achieve nearly the same (or even better) training speed as 1-inner-step MAML." + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 104, + 582, + 498, + 594 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 582, + 498, + 594 + ], + "spans": [ + { + "bbox": [ + 104, + 582, + 498, + 594 + ], + "type": "text", + "content": "F TRAINING STABILITY AND IMPACT OF NUMBER OF TRAINING EPISODES" + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 104, + 616, + 504, + 694 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 616, + 504, + 694 + ], + "spans": [ + { + "bbox": [ + 104, + 616, + 504, + 694 + ], + "type": "text", + "content": "In our theoretical analysis of the generalisation error (Sec. 5 in the main paper and our proofs in Sec. A), we regard the number of training episodes " + }, + { + "bbox": [ + 104, + 616, + 504, + 694 + ], + "type": "inline_equation", + "content": "N" + }, + { + "bbox": [ + 104, + 616, + 504, + 694 + ], + "type": "text", + "content": " as infinite. 
In practice, " + }, + { + "bbox": [ + 104, + 616, + 504, + 694 + ], + "type": "inline_equation", + "content": "N" + }, + { + "bbox": [ + 104, + 616, + 504, + 694 + ], + "type": "text", + "content": " is finite but large enough (" + }, + { + "bbox": [ + 104, + 616, + 504, + 694 + ], + "type": "inline_equation", + "content": "N \\sim 100K" + }, + { + "bbox": [ + 104, + 616, + 504, + 694 + ], + "type": "text", + "content": " in typical FSL), and we simply treat " + }, + { + "bbox": [ + 104, + 616, + 504, + 694 + ], + "type": "inline_equation", + "content": "N" + }, + { + "bbox": [ + 104, + 616, + 504, + 694 + ], + "type": "text", + "content": " as infinite for mathematical convenience (e.g., to have the first KL term in (5) vanish; reduction to the task population mean from (29) to (30)). To see the effect of finite " + }, + { + "bbox": [ + 104, + 616, + 504, + 694 + ], + "type": "inline_equation", + "content": "N" + }, + { + "bbox": [ + 104, + 616, + 504, + 694 + ], + "type": "text", + "content": " on the generalisation performance, we list several typical " + }, + { + "bbox": [ + 104, + 616, + 504, + 694 + ], + "type": "inline_equation", + "content": "N" + }, + { + "bbox": [ + 104, + 616, + 504, + 694 + ], + "type": "text", + "content": " values and the corresponding generalisation (sample complexity) error gaps (21) in Table 9. We see that even for relatively small " + }, + { + "bbox": [ + 104, + 616, + 504, + 694 + ], + "type": "inline_equation", + "content": "N" + }, + { + "bbox": [ + 104, + 616, + 504, + 694 + ], + "type": "text", + "content": ", the error gaps are small or negligible."
+ } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 104, + 698, + 505, + 733 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 698, + 505, + 733 + ], + "spans": [ + { + "bbox": [ + 104, + 698, + 505, + 733 + ], + "type": "text", + "content": "In addition, we investigate the impact of the number of training episodes " + }, + { + "bbox": [ + 104, + 698, + 505, + 733 + ], + "type": "inline_equation", + "content": "N" + }, + { + "bbox": [ + 104, + 698, + 505, + 733 + ], + "type": "text", + "content": " on training stability. We illustrate it in Fig. 6, which shows that our method works stably even for a small number of initial episodes, converging as fast as ProtoNet while achieving far better generalisation performance." + } + ] + } + ], + "index": 10 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "spans": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "type": "text", + "content": "Published as a conference paper at ICLR 2024" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "spans": [ + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "type": "text", + "content": "25" + } + ] + } + ], + "index": 11 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 24 + }, + { + "para_blocks": [ + { + "type": "image", + "bbox": [ + 157, + 81, + 296, + 194 + ], + "blocks": [ + { + "bbox": [ + 157, + 81, + 296, + 194 + ], + "lines": [ + { + "bbox": [ + 157, + 81, + 296, + 194 + ], + "spans": [ + { + "bbox": [ + 157, + 81, + 296, + 194 + ], + "type": "image", + "image_path": "8dac19283819c61ed663553b296bf7f3c8677d880a59dc49f903b45685833642.jpg" + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 247, + 196, + 359, + 207 + ], + "lines": [ + { + "bbox": [ + 
247, + 196, + 359, + 207 + ], + "spans": [ + { + "bbox": [ + 247, + 196, + 359, + 207 + ], + "type": "text", + "content": "(a) GPU memory footprints" + } + ] + } + ], + "index": 3, + "angle": 0, + "type": "image_caption" + } + ], + "index": 1 + }, + { + "type": "image", + "bbox": [ + 310, + 81, + 449, + 194 + ], + "blocks": [ + { + "bbox": [ + 310, + 81, + 449, + 194 + ], + "lines": [ + { + "bbox": [ + 310, + 81, + 449, + 194 + ], + "spans": [ + { + "bbox": [ + 310, + 81, + 449, + 194 + ], + "type": "image", + "image_path": "801401669573a7f6f5a170fcb71abb3d0319d4a8a9cf95f964b96fb373e58119.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "image_body" + } + ], + "index": 2 + }, + { + "type": "image", + "bbox": [ + 161, + 212, + 296, + 320 + ], + "blocks": [ + { + "bbox": [ + 161, + 212, + 296, + 320 + ], + "lines": [ + { + "bbox": [ + 161, + 212, + 296, + 320 + ], + "spans": [ + { + "bbox": [ + 161, + 212, + 296, + 320 + ], + "type": "image", + "image_path": "7871a90d0c1b6f2c9d6276f9b0a67d4191d8ada3ef35f00bac17b9d576e2f5df.jpg" + } + ] + } + ], + "index": 4, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 244, + 321, + 365, + 333 + ], + "lines": [ + { + "bbox": [ + 244, + 321, + 365, + 333 + ], + "spans": [ + { + "bbox": [ + 244, + 321, + 365, + 333 + ], + "type": "text", + "content": "(b) Per-episode training times" + } + ] + } + ], + "index": 6, + "angle": 0, + "type": "image_caption" + } + ], + "index": 4 + }, + { + "type": "image", + "bbox": [ + 315, + 212, + 449, + 320 + ], + "blocks": [ + { + "bbox": [ + 315, + 212, + 449, + 320 + ], + "lines": [ + { + "bbox": [ + 315, + 212, + 449, + 320 + ], + "spans": [ + { + "bbox": [ + 315, + 212, + 449, + 320 + ], + "type": "image", + "image_path": "6062ff8cbd1c55780eb147cbe151a5ad9e13cf57d1da72ca04ac08d1f58175a4.jpg" + } + ] + } + ], + "index": 5, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 104, + 336, + 506, + 415 + ], + "lines": [ + { + "bbox": [ + 104, + 336, + 506, + 415 + ], + 
"spans": [ + { + "bbox": [ + 104, + 336, + 506, + 415 + ], + "type": "text", + "content": "Figure 5: Computational complexity of MAML (Finn et al., 2017) and our NIW-Meta. (a) GPU memory footprints (in MB) for a single batch. (b) Per-episode training times (in milliseconds). For our NIW-Meta models, the time for the burn-in steps (2 steps in this case) is also included. That is, NIW-Meta (#SGLD=2) runs " + }, + { + "bbox": [ + 104, + 336, + 506, + 415 + ], + "type": "inline_equation", + "content": "2 + 2" + }, + { + "bbox": [ + 104, + 336, + 506, + 415 + ], + "type": "text", + "content": " SGLD iterations, and NIW-Meta (#SGLD=5) runs " + }, + { + "bbox": [ + 104, + 336, + 506, + 415 + ], + "type": "inline_equation", + "content": "5 + 2" + }, + { + "bbox": [ + 104, + 336, + 506, + 415 + ], + "type": "text", + "content": ", respectively, compared to MAML with " + }, + { + "bbox": [ + 104, + 336, + 506, + 415 + ], + "type": "inline_equation", + "content": "1 \\sim 5" + }, + { + "bbox": [ + 104, + 336, + 506, + 415 + ], + "type": "text", + "content": " inner iterations. We use the ResNet-18 backbone for miniImageNet 1-shot classification and the ConvNet backbone for ShapeNet-1D regression (10 episodes per batch)."
+ } + ] + } + ], + "index": 7, + "angle": 0, + "type": "image_caption" + } + ], + "index": 5 + }, + { + "type": "image", + "bbox": [ + 105, + 454, + 298, + 596 + ], + "blocks": [ + { + "bbox": [ + 105, + 454, + 298, + 596 + ], + "lines": [ + { + "bbox": [ + 105, + 454, + 298, + 596 + ], + "spans": [ + { + "bbox": [ + 105, + 454, + 298, + 596 + ], + "type": "image", + "image_path": "a1a0399bacaeb7e604f0a03225a6f0bb9bd0ca8402661d79304b33ba18be3dc0.jpg" + } + ] + } + ], + "index": 8, + "angle": 0, + "type": "image_body" + }, + { + "bbox": [ + 104, + 604, + 506, + 682 + ], + "lines": [ + { + "bbox": [ + 104, + 604, + 506, + 682 + ], + "spans": [ + { + "bbox": [ + 104, + 604, + 506, + 682 + ], + "type": "text", + "content": "Figure 6: CIFAR 1-shot learning with ViT (DINO/s). (Left) Training losses vs. training episodes (2000 episodes per epoch). We plot the training loss of our NIW-Meta (i.e., (11)) in blue, and superimpose the training cross-entropy loss in red, where the latter is comparable to ProtoNet's training CE loss in magenta. We see that our NIW-Meta training is quite stable, with convergence as fast as ProtoNet's. (Right) Performance on the validation set as the number of training episodes increases. The validation losses of our NIW-Meta (blue) and ProtoNet (cyan) are comparable, while we also compare the validation accuracy of our NIW-Meta (red) against ProtoNet (magenta)."
+ } + ] + } + ], + "index": 10, + "angle": 0, + "type": "image_caption" + } + ], + "index": 8 + }, + { + "type": "image", + "bbox": [ + 312, + 454, + 504, + 596 + ], + "blocks": [ + { + "bbox": [ + 312, + 454, + 504, + 596 + ], + "lines": [ + { + "bbox": [ + 312, + 454, + 504, + 596 + ], + "spans": [ + { + "bbox": [ + 312, + 454, + 504, + 596 + ], + "type": "image", + "image_path": "8258d98950b19f146a7e363a966ced9efed61e5744fb88f3e73e9f3a1e11ffc6.jpg" + } + ] + } + ], + "index": 9, + "angle": 0, + "type": "image_body" + } + ], + "index": 9 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 106, + 26, + 293, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 106, + 26, + 293, + 38 + ], + "spans": [ + { + "bbox": [ + 106, + 26, + 293, + 38 + ], + "type": "text", + "content": "Published as a conference paper at ICLR 2024" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "spans": [ + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "type": "text", + "content": "26" + } + ] + } + ], + "index": 11 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 25 + }, + { + "para_blocks": [ + { + "type": "table", + "bbox": [ + 143, + 111, + 462, + 157 + ], + "blocks": [ + { + "bbox": [ + 104, + 89, + 504, + 111 + ], + "lines": [ + { + "bbox": [ + 104, + 89, + 504, + 111 + ], + "spans": [ + { + "bbox": [ + 104, + 89, + 504, + 111 + ], + "type": "text", + "content": "Table 10: Predictive log-marginal-likelihood scores (LMLHD) on the Sine-Line dataset. Higher is better, indicating better uncertainty quantification and generalisation." + } + ] + } + ], + "index": 1, + "angle": 0, + "type": "table_caption" + }, + { + "bbox": [ + 143, + 111, + 462, + 157 + ], + "lines": [ + { + "bbox": [ + 143, + 111, + 462, + 157 + ], + "spans": [ + { + "bbox": [ + 143, + 111, + 462, + 157 + ], + "type": "table", + "html": "
<table><tr><td>Context size</td><td>1</td><td>2</td><td>4</td><td>6</td><td>8</td><td>12</td><td>16</td></tr>
<tr><td>(Volpp et al., 2023)</td><td>-18.38</td><td>-15.98</td><td>-13.69</td><td>-12.74</td><td>-11.75</td><td>-11.14</td><td>-10.23</td></tr>
<tr><td>NIW-Meta (Ours)</td><td>-17.02</td><td>-14.74</td><td>-12.60</td><td>-8.14</td><td>-3.51</td><td>-1.55</td><td>-1.33</td></tr></table>
", + "image_path": "633476ef4c35b51f3be2a196983df79453fa5c44b9ea9843ee0d1b79a4950521.jpg" + } + ] + } + ], + "index": 2, + "angle": 0, + "type": "table_body" + } + ], + "index": 2 + }, + { + "bbox": [ + 105, + 176, + 269, + 188 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 176, + 269, + 188 + ], + "spans": [ + { + "bbox": [ + 105, + 176, + 269, + 188 + ], + "type": "text", + "content": "G ADDITIONAL DISCUSSIONS" + } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 105, + 201, + 344, + 213 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 201, + 344, + 213 + ], + "spans": [ + { + "bbox": [ + 105, + 201, + 344, + 213 + ], + "type": "text", + "content": "G.1 COMPARISON TO BAYESIAN NEURAL PROCESSES" + } + ] + } + ], + "index": 4 + }, + { + "bbox": [ + 104, + 222, + 506, + 323 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 222, + 506, + 323 + ], + "spans": [ + { + "bbox": [ + 104, + 222, + 506, + 323 + ], + "type": "text", + "content": "An interesting question concerns the relative merits of a complete Bayesian treatment with restricted Gaussian forms, as in our model, versus a shallow Bayesian treatment with rich GMM-like variational forms, as in (Volpp et al., 2023). We do not know which is definitively better; further theoretical and empirical study is needed in this regard. However, our paper provides some supporting experimental evidence: the comparison with MetaQDA (Zhang et al., 2021), a shallow Bayesian approach that places a prior distribution only on the model head while freezing the feature extractor (Tables 2, 3, 5). As shown, our complete Bayesian treatment outperformed it in both test generalisation performance and uncertainty calibration. This is one piece of evidence that a complete Bayesian treatment can be more promising than a shallow one."
+ } + ] + } + ], + "index": 5 + }, + { + "bbox": [ + 104, + 327, + 506, + 437 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 327, + 506, + 437 + ], + "spans": [ + { + "bbox": [ + 104, + 327, + 506, + 437 + ], + "type": "text", + "content": "For the experimental demonstration, we conducted additional experiments on the Sine-Line dataset to report the predictive marginal test likelihood score, which is directly comparable to shallow Bayesian embedding models, especially the Bayesian Neural Process model of (Volpp et al., 2023). The predictive log-marginal-likelihood scores (LMLHD) are shown in Table 10. We match the experimental settings of (Volpp et al., 2023) so that the results are comparable. More specifically, we test on 256 tasks, each of which consists of 64 samples with a varying number of context samples (1, 2, 4, 6, 8, 12 and 16). The number of posterior samples used to compute/approximate the predictive marginal likelihood is 1024. As shown, our NIW-Meta has higher scores, especially for larger context sizes, implying that capturing uncertainty in the full model parameters is important for generalisation capability." + } + ] + } + ], + "index": 6 + }, + { + "bbox": [ + 105, + 451, + 366, + 462 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 451, + 366, + 462 + ], + "spans": [ + { + "bbox": [ + 105, + 451, + 366, + 462 + ], + "type": "text", + "content": "G.2 JUSTIFICATION OF MODEL AND ALGORITHM CHOICES" + } + ] + } + ], + "index": 7 + }, + { + "bbox": [ + 104, + 472, + 506, + 550 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 472, + 506, + 550 + ], + "spans": [ + { + "bbox": [ + 104, + 472, + 506, + 550 + ], + "type": "text", + "content": "Motivation for distributional estimate " + }, + { + "bbox": [ + 104, + 472, + 506, + 550 + ], + "type": "inline_equation", + "content": "q(\\phi)" + }, + { + "bbox": [ + 104, + 472, + 506, + 550 + ], + "type": "text", + "content": ". 
If " + }, + { + "bbox": [ + 104, + 472, + 506, + 550 + ], + "type": "inline_equation", + "content": "\\phi" + }, + { + "bbox": [ + 104, + 472, + 506, + 550 + ], + "type": "text", + "content": " were directly linked to the observed data " + }, + { + "bbox": [ + 104, + 472, + 506, + 550 + ], + "type": "inline_equation", + "content": "D_{i}" + }, + { + "bbox": [ + 104, + 472, + 506, + 550 + ], + "type": "text", + "content": "'s in our graphical model Fig. 1, then " + }, + { + "bbox": [ + 104, + 472, + 506, + 550 + ], + "type": "inline_equation", + "content": "\\phi" + }, + { + "bbox": [ + 104, + 472, + 506, + 550 + ], + "type": "text", + "content": " would tend to be determined deterministically as the number of data points " + }, + { + "bbox": [ + 104, + 472, + 506, + 550 + ], + "type": "inline_equation", + "content": "N" + }, + { + "bbox": [ + 104, + 472, + 506, + 550 + ], + "type": "text", + "content": " becomes large. However, " + }, + { + "bbox": [ + 104, + 472, + 506, + 550 + ], + "type": "inline_equation", + "content": "\\phi" + }, + { + "bbox": [ + 104, + 472, + 506, + 550 + ], + "type": "text", + "content": " is linked to latent variables " + }, + { + "bbox": [ + 104, + 472, + 506, + 550 + ], + "type": "inline_equation", + "content": "\\theta_{i}" + }, + { + "bbox": [ + 104, + 472, + 506, + 550 + ], + "type": "text", + "content": "'s, so the belief on " + }, + { + "bbox": [ + 104, + 472, + 506, + 550 + ], + "type": "inline_equation", + "content": "\\phi" + }, + { + "bbox": [ + 104, + 472, + 506, + 550 + ], + "type": "text", + "content": " also needs to capture and accumulate the uncertainty in " + }, + { + "bbox": [ + 104, + 472, + 506, + 550 + ], + "type": "inline_equation", + "content": "\\theta_{i}" + }, + { + "bbox": [ + 104, + 472, + 506, + 550 + ], + "type": "text", + "content": "'s, which amounts to marginalising out the " + }, + { + "bbox": [ + 104, + 472, + 506, + 550 + ], + "type": "inline_equation", + "content": "\\theta_{i}" + }, + { + "bbox": [ + 104, + 
472, + 506, + 550 + ], + "type": "text", + "content": " variables. So, it may not be appropriate to treat the posterior for " + }, + { + "bbox": [ + 104, + 472, + 506, + 550 + ], + "type": "inline_equation", + "content": "\\phi" + }, + { + "bbox": [ + 104, + 472, + 506, + 550 + ], + "type": "text", + "content": " as a delta function (zero uncertainty), and it is better to follow the Bayesian inference principle, i.e., let the posterior be computed from the observed evidence." + } + ] + } + ], + "index": 8 + }, + { + "bbox": [ + 104, + 555, + 504, + 599 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 555, + 504, + 599 + ], + "spans": [ + { + "bbox": [ + 104, + 555, + 504, + 599 + ], + "type": "text", + "content": "Why distributional estimate " + }, + { + "bbox": [ + 104, + 555, + 504, + 599 + ], + "type": "inline_equation", + "content": "q(\\phi)" + }, + { + "bbox": [ + 104, + 555, + 504, + 599 + ], + "type": "text", + "content": " if we only use the mode of " + }, + { + "bbox": [ + 104, + 555, + 504, + 599 + ], + "type": "inline_equation", + "content": "q(\\phi)" + }, + { + "bbox": [ + 104, + 555, + 504, + 599 + ], + "type": "text", + "content": " at meta-test time. Using the mode of " + }, + { + "bbox": [ + 104, + 555, + 504, + 599 + ], + "type": "inline_equation", + "content": "q(\\phi)" + }, + { + "bbox": [ + 104, + 555, + 504, + 599 + ], + "type": "text", + "content": " is only for practical convenience and simplicity. Although we used the mode, the distributional form " + }, + { + "bbox": [ + 104, + 555, + 504, + 599 + ], + "type": "inline_equation", + "content": "q(\\phi)" + }, + { + "bbox": [ + 104, + 555, + 504, + 599 + ], + "type": "text", + "content": " would take uncertainty into account during its optimisation, and thus lead to a different solution from the deterministic one."
+ } + ] + } + ], + "index": 9 + }, + { + "bbox": [ + 104, + 605, + 507, + 683 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 605, + 507, + 683 + ], + "spans": [ + { + "bbox": [ + 104, + 605, + 507, + 683 + ], + "type": "text", + "content": "Why not the same inference approach for meta testing as for meta training. For meta testing, we could attempt to solve an optimisation problem similar to (7) or (8) for meta training with " + }, + { + "bbox": [ + 104, + 605, + 507, + 683 + ], + "type": "inline_equation", + "content": "L_{0}" + }, + { + "bbox": [ + 104, + 605, + 507, + 683 + ], + "type": "text", + "content": " fixed, which also results in the same meta-test solution as our derivation in Sec. 3.2. This can also be verified by inspecting the similarity between (13) and (7) or (8). However, this approach requires that " + }, + { + "bbox": [ + 104, + 605, + 507, + 683 + ], + "type": "inline_equation", + "content": "L_{0}" + }, + { + "bbox": [ + 104, + 605, + 507, + 683 + ], + "type": "text", + "content": " be fixed at the trained value, which cannot be justified from a purely optimisation-based perspective. In Sec. 3.2, we aimed to derive the meta-test optimisation problem from the Bayesian perspective from the outset, which offers us a reasonable justification for why " + }, + { + "bbox": [ + 104, + 605, + 507, + 683 + ], + "type": "inline_equation", + "content": "L_{0}" + }, + { + "bbox": [ + 104, + 605, + 507, + 683 + ], + "type": "text", + "content": " can be fixed." + } + ] + } + ], + "index": 10 + }, + { + "bbox": [ + 104, + 688, + 506, + 733 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 688, + 506, + 733 + ], + "spans": [ + { + "bbox": [ + 104, + 688, + 506, + 733 + ], + "type": "text", + "content": "Quality of the quadratic approximation. 
In (9), we made the quadratic approximation to the negative log-likelihood, " + }, + { + "bbox": [ + 104, + 688, + 506, + 733 + ], + "type": "inline_equation", + "content": "-\\log p(D_i|\\theta) \\approx \\frac{1}{2} (\\theta - \\overline{m}_i)^\\top \\overline{A}_i(\\theta - \\overline{m}_i) + \\text{const}" + }, + { + "bbox": [ + 104, + 688, + 506, + 733 + ], + "type": "text", + "content": ". To see the quality of this approximation, we empirically evaluated the true value (the left-hand side of the approximation) and the quadratic approximation (the right-hand side) on the miniImageNet 5-shot dataset with" + } + ] + } + ], + "index": 11 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "spans": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "type": "text", + "content": "Published as a conference paper at ICLR 2024" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "spans": [ + { + "bbox": [ + 299, + 750, + 311, + 760 + ], + "type": "text", + "content": "27" + } + ] + } + ], + "index": 12 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 26 + }, + { + "para_blocks": [ + { + "bbox": [ + 104, + 82, + 504, + 127 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 82, + 504, + 127 + ], + "spans": [ + { + "bbox": [ + 104, + 82, + 504, + 127 + ], + "type": "text", + "content": "the ResNet-18 backbone. 
We take 10 random perturbations of " + }, + { + "bbox": [ + 104, + 82, + 504, + 127 + ], + "type": "inline_equation", + "content": "\\theta" + }, + { + "bbox": [ + 104, + 82, + 504, + 127 + ], + "type": "text", + "content": " around the mode/mean " + }, + { + "bbox": [ + 104, + 82, + 504, + 127 + ], + "type": "inline_equation", + "content": "\\overline{m}_i" + }, + { + "bbox": [ + 104, + 82, + 504, + 127 + ], + "type": "text", + "content": " with perturbation radius 0.1. The relative error of the quadratic approximation is " + }, + { + "bbox": [ + 104, + 82, + 504, + 127 + ], + "type": "inline_equation", + "content": "0.0050 \\pm 0.0004" + }, + { + "bbox": [ + 104, + 82, + 504, + 127 + ], + "type": "text", + "content": ". This shows that the negative log-likelihood is well approximated by our quadratic function in the vicinity of the mode/mean." + } + ] + } + ], + "index": 1 + }, + { + "bbox": [ + 105, + 140, + 286, + 152 + ], + "type": "title", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 140, + 286, + 152 + ], + "spans": [ + { + "bbox": [ + 105, + 140, + 286, + 152 + ], + "type": "text", + "content": "G.3 LIMITATIONS AND FUTURE WORK" + } + ] + } + ], + "index": 2 + }, + { + "bbox": [ + 104, + 160, + 506, + 228 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 160, + 506, + 228 + ], + "spans": [ + { + "bbox": [ + 104, + 160, + 506, + 228 + ], + "type": "text", + "content": "Limitations. 1) Our NIW-Meta introduces some extra hyperparameters (e.g., the number of SGLD iterations and the number of burn-in steps). These are currently set empirically; a more rigorous study of how to select them automatically is needed. 2) Although it is empirically verified that our quadratic episodic loss optimisation is effective, more theoretical analysis of the quality of this approximation, as well as its impact on the final results, is needed."
+ } + ] + } + ], + "index": 3 + }, + { + "bbox": [ + 104, + 232, + 507, + 277 + ], + "type": "text", + "angle": 0, + "lines": [ + { + "bbox": [ + 104, + 232, + 507, + 277 + ], + "spans": [ + { + "bbox": [ + 104, + 232, + 507, + 277 + ], + "type": "text", + "content": "Future work. We have conducted quite an extensive evaluation on popular few-shot classification and regression benchmarks. However, we would like to evaluate our approach on new, emerging applications of few-shot learning, such as efficient learning of implicit neural representations like NeRF, e.g., (Tancik et al., 2021)." + } + ] + } + ], + "index": 4 + } + ], + "discarded_blocks": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "type": "header", + "angle": 0, + "lines": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "spans": [ + { + "bbox": [ + 105, + 26, + 293, + 38 + ], + "type": "text", + "content": "Published as a conference paper at ICLR 2024" + } + ] + } + ], + "index": 0 + }, + { + "bbox": [ + 299, + 750, + 311, + 761 + ], + "type": "page_number", + "angle": 0, + "lines": [ + { + "bbox": [ + 299, + 750, + 311, + 761 + ], + "spans": [ + { + "bbox": [ + 299, + 750, + 311, + 761 + ], + "type": "text", + "content": "28" + } + ] + } + ], + "index": 5 + } + ], + "page_size": [ + 612, + 792 + ], + "page_idx": 27 + } + ], + "_backend": "vlm", + "_version_name": "2.6.4" +} \ No newline at end of file