{
"File Number": "1099",
"Title": "Functional Indirection Neural Estimator for Better Out-of-distribution Generalization",
"6 Limitations": "Operating in functional spaces requires FINE’s memory to store several trainable weight matrices to infer the data-specific weights for the backbone. Moreover, each layer of the backbone requires its own memory, thus FINE may need a large number of parameters for very deep backbones. This could be addressed by parameter-sharing across layers and limiting the rank of the weight matrices.\nIt remains to design the backbone architecture optimally. We have used the NICE architecture as backbone for invertible transformations and a 2-layer MLP backbone for non-invertible cases. We further investigated our model’s performances with larger number of training points (up to 1M) and observed that FINE with the NICE backbone peaks at 100,000 training points with around 85% test accuracy, while FINE with MLP backbone continues to improve and achieves around 95% test accuracy at 1M training points. This calls for further theoretical analysis to guide the architectural design of the backbone network.\nFinally, testing FINE on IQ tasks in the visual space may limit its potential on IQ tasks involving other modalities. For example, one may consider a textual IQ problem: “if abc→ abd, then mmnnpp→?”. It is worth to emphasize that the concepts of analogy-making and indirection in functional spaces are indeed general and thus the idea of FINE should be applicable to various scenarios.",
"Reviewer Comment": "Reviewer_2: Strengths\nThe question of generalization to OOD is fundamental to intelligence problems.\nThe idea of indirection is very interesting and the authors propose a very nice implementation of this idea.\nFigure 4 is interesting in showing clusters in weight space. It would be interesting to show whether other models reveal the same property or whether this is specific to the proposed architecture.\nWeaknesses\nThe term OOD is often used in a very loose manner. This study is a good example. To really define OOD, one should define D (i.e., the distribution). What I understand the authors are doing is selecting some “classes” in Omniglot and testing on other classes. Or selecting some “classes” in CIFAR and testing on other classes. This seems pretty standard. But the question is how much OOD is really being tested here. Imagine that your training class is letter “i” and your test class is letter “l”. Those letters are really similar. Sure, they are different “classes”. The issue is how similar the training set is to the test set. Can the authors train with Omniglot classes and test with CIFAR classes or vice versa? That would be impressive OOD!\nThe word generalization here only applies to testing with somewhat different images. The hard challenge in IQ tests is to generalize to novel rules.\nQuestions:\nThe paper starts with the assertion that humans can generalize to OOD. I am curious about the evidence for this. Sure, everybody says this sort of thing. But what kind of real quantitative evidence do the authors have for this statement?\nOn line 85, the authors assert that the models are required to recognize the objects and figure out the relation between them. What is the evidence for any of this? Sure, from introspection, humans may reason about the task in this way, but this does not mean that this is what the models are required to do. This is an example of anthropomorphizing. (In contrast, lines 92/93 are trivially true. Yes, models must use the training data, there is no magic! )\nThe conclusions are similarly based on introspection only. The authors state that FINE reliably figures out the hidden relational pattern in each IQ task and is able to solve new tasks but the authors do not show any of this. The authors show that the model works better than a few other models when considering somewhat different images between the training set and the test set, which is pretty cool in and of itself. But there is nothing less and nothing more than that here. Other than somewhat different images, where does the paper show \"new tasks\" or \"discovering relational pattern\"?\nLimitations:\nThe authors spell out some limitations mostly to highlight other interesting problems that could be addressed. The key challenges mentioned above are not discussed.\nEthics Flag: No\nSoundness: 2 fair\nPresentation: 4 excellent\nContribution: 2 fair\n\nReviewer_3: Strength:\nThe proposed methodology of achieving functional indirection is novel and interesting.\nThe paper is well written and easy to follow.\nThe method is shown to achieve good performance on OOD generalization across unseen categories of the datasets used for training.\nWeaknesses: My main concerns revolve around the evaluation of method.\nThe paper goal is to solve OOD abstract reasoning and analogy making. However, the method is not evaluated on the known abstract reasoning benchmark of PGMs. 
Instead, the authors propose similar but less complex tasks for evaluation.\nThe newly introduced datasets test generalization only for the unseen characters of the training dataset but not for unseen rules that could be extrapolated or interpolated. See my related comment in the questions section.\nThe proposed method, FINE, implicitly selects the weights based on the transformation (hidden rule) of the input-output pair. This is highly similar to [1], where a mixture of experts (networks) explicitly competes to explain image transformations on MNIST and Omniglot. [1] also showed huge benefits of mechanism-specific function selection for OOD generalization. I believe a comparison, or even an explanation of the similarities between the methods, would be good to have in the paper.\nSince this framework focuses on the generalization of image transformation mechanisms across unseen classes (of Omniglot and CIFAR100), I would encourage the authors to test the FINE model trained with Omniglot transformations on MNIST data with transformations, as was shown in [1].\n[1] https://arxiv.org/abs/1712.00961 [2] https://arxiv.org/abs/1807.04225\nQuestions:\nWhy create a new set of IQ tasks when a similar benchmark of PGMs studying different forms of rule-based OOD generalization already exists? It fulfils the criterion of providing hints of the hidden rule with few images and then using that to infer the predictions.\nWhy are tasks restricted to identifying image transformations only? Would the method not be able to infer complex relational reasoning tasks as demonstrated in PGMs?\nHow is the basis of network weights restricted such that it only spans a limited set of functions that can be used by the backbone?\nWouldn’t the limited basis of network weights restrict generalization to only observed rules and their combinations thereof? How would you scale to unseen rules?\nSome references are missing from the related work section, e.g., Neural Interpreters [3], which uses an attention mechanism to recompose functional modules for each input-output pair and tests the method on abstract reasoning tasks. Similarly, Switch Transformers [4] switch modules based on relevance to inputs.\nThe model and the evaluation share similarities with the meta-learning framework, where a few input-output pairs are provided to adapt the network weights (e.g., in MAML) and the resulting model is used to make inferences on unknown inputs coming from the same classes. Analogous to the FINE datasets, the classes would be the hidden rule that the input-output pairs share. Curious to know what the authors think about this, and would it be possible to run a small experiment with Omniglot?\nWhy is a similarity metric chosen to identify the correct output $y'$ instead of using normal cross-entropy?\nDo the newly introduced datasets test generalization only for unseen characters or also unseen rules? For example, the model could be trained to detect character rotations from 0 to 90 degrees and tested on characters rotated from 90 to 270 degrees.\n[3] https://arxiv.org/abs/2110.06399 [4] https://arxiv.org/abs/2101.03961\nLimitations:\nThe main limitations seem to be around experimental evaluation.
I would be happy to increase my score if the authors address those concerns.\nThere doesn't seem to be any obvious negative societal impact.\nEthics Flag: No\nSoundness: 4 excellent\nPresentation: 4 excellent\nContribution: 3 good\n\nReviewer_4: Strengths:\nUnlike previous work that performs indirection and analogy-making in the data spaces, this paper proposes a mechanism for OOD in functional spaces.\nThe weights of the backbone can be determined on-the-fly using an (input, output) query to retrieve from a memory.\nWeaknesses:\nThe proposed method may need a large memory to store trainable weights for very deep backbones.\nThe authors tested the proposed method only in functional spaces related to geometric transformations. It's not clear whether the proposed method would work for other transformations.\nQuestions:\nIt's not clear whether the proposed method would work for other transformations.\nLimitations:\nYes, the authors have addressed the limitations of their work.\nEthics Flag: No\nSoundness: 3 good\nPresentation: 3 good\nContribution: 3 good\n\nReviewer_5: Strengths:\nThe paper takes inspiration from features of human intelligence and combines them in a novel way.\nIt proposes a formal framework that uses this technique to mitigate the lack of OOD generalization in neural networks. It shows that the proposed method can abstract image-level transformations in the latent space and that the abstraction allows the model to generalize transformations to novel images.\nThe paper proposes a novel use case in visual reasoning that highlights the new framework’s advantage. The method is compared to several relevant baselines.\nThe paper is clearly written overall. The code is provided for reproducing the results.\nThe authors discuss certain limitations of their approach.\nWeaknesses:\nTo learn the proposed task, the model has to identify and build the transformation. Thus, the OOD samples should be different functions, not only different input images (for example, can the model learn rotations of angles between 0 and 90 degrees and extrapolate to angles of 90-180 degrees?)\nTo provide a fair comparison to baselines, all models need to be adapted to this framework. The training setup and hyperparameters used for training these models are not explained in the main paper or the supplementary material. This information is crucial for explaining their performance.\nThe paper doesn’t discuss other applications for this framework. OOD generalization is an active research topic in ML and image classification is among the main applications (adversarial examples, domain shifts and noise corruptions are OOD examples). How can this framework benefit OOD generalization in this task?\nAlthough the use of NICE layers is intuitively motivated, it is not a necessary building block for FINE. Its use is motivated by the reversibility of certain transformations, which is a design choice in the dataset. Since most baselines (excluding the hypernetwork) are not equipped with NICE layers, it would be a fair comparison to provide results on all models without NICE layers.\nThe choice of using 2 NICE layers instead of one layer is not fully explained.\nThe $y_t$ vectors used in the analogy step are output from $\gamma_t(y)$. The motivation and significance of this choice are not explained.\nIn the first 2 paragraphs of subsection 3.1, H1 considers a unique output for each input while H2 considers the possibility of several outputs based on the transformation.
The second hypothesis is not different from H1 if the target transformation is supplied with the input. This is the choice made indirectly in the dataset by providing an input-output pair as a hint for the transformation. It would be more concise to write this section without using hypotheses. It’s simpler to explain FINE’s advantage when the task requires learning many functions with a single backbone.\nQuestions:\nThese questions are directly related to the weaknesses mentioned above.\nThe paper should clarify that the OOD generalization claim concerns the input images that undergo the transformations, not the transformations that are learned by the model. Evaluations on unseen transformations would be a valuable addition to the paper.\nThe training setup for baseline models should be described in the supplementary material. Ideally, all models should be capable of inferring the transformation.\nAs mentioned above, it would be a fair comparison to provide results on all models without NICE layers.\nCan this framework be adapted and evaluated on OOD generalization in image classification or other tasks in ML?\nThe first 2 paragraphs of section 3.1 could be reformulated for clarity. As mentioned above, it would be more concise to write this section without using hypotheses. It’s simpler to explain FINE’s advantage when the task requires learning many functions with a single backbone.\nLimitations:\nThe authors address certain limitations of the paper but do not address negative societal impact.\nEthics Flag: No\nSoundness: 2 fair\nPresentation: 3 good\nContribution: 3 good",
"abstractText": "The capacity to achieve out-of-distribution (OOD) generalization is a hallmark of human intelligence and yet remains out of reach for machines. This remarkable capability has been attributed to our abilities to make conceptual abstraction and analogy, and to a mechanism known as indirection, which binds two representations and uses one representation to refer to the other. Inspired by these mechanisms, we hypothesize that OOD generalization may be achieved by performing analogymaking and indirection in the functional space instead of the data space as in current methods. To realize this, we design FINE (Functional Indirection Neural Estimator), a neural framework that learns to compose functions that map data input to output on-the-fly. FINE consists of a backbone network and a trainable semantic memory of basis weight matrices. Upon seeing a new input-output data pair, FINE dynamically constructs the backbone weights by mixing the basis weights. The mixing coefficients are indirectly computed through querying a separate corresponding semantic memory using the data pair. We demonstrate empirically that FINE can strongly improve out-of-distribution generalization on IQ tasks that involve geometric transformations. In particular, we train FINE and competing models on IQ tasks using images from the MNIST, Omniglot and CIFAR100 datasets and test on tasks with unseen image classes from one or different datasets and unseen transformation rules. FINE not only achieves the best performance on all tasks but also is able to adapt to small-scale data scenarios.",
"1 Introduction": "Every computer science problem can be solved with a higher level of indirection.\n—Andrew Koenig, Butler Lampson, David J. Wheeler\nGeneralizing to new circumstances is a hallmark of intelligence [16, 4, 11]. In some Intelligence Quotient (IQ) tests–a popular benchmark for human intelligence–one must leverage their prior experience to identify the hidden abstract rules out of a concrete example (e.g., a transformation of an image) and then apply the rules to the next (e.g., a new set of images of totally different appearance). These tasks necessitate several key capabilities, including conceptual abstraction and analogy-making [22]. Abstraction allows us to extend a concept to novel situations. It is also driven by analogy-making, which maps the current situation to the previous experience stored in the memory. Indeed, analogy-making has been argued to be one of the most important abilities of human cognition, or even further, “a concept is a package of analogies” [12]. The ability for humans to traverse seamlessly across concrete and abstraction levels suggests another mechanism known as indirection to bind two representations and use one representation to refer to the other [16, 20].\n36th Conference on Neural Information Processing Systems (NeurIPS 2022).\nSeveral deep learning models have successfully utilized analogy and indirection. The Transformer [30] and RelationNet [28] learn analogies between data, through self-attention or pairwise functions. The ESBN [33] goes further by incorporating the indirection mechanism to bind an entity to a symbol and then reason on the symbols; and this has proved to be efficient on tasks involving abstract rules, similar to those aforementioned IQ tests. However, a common drawback of these approaches is that they operate on the data space, and thus are susceptible to out-of-distribution samples.\nIn this paper, we propose to perform analogy-making and indirection in functional spaces instead. We aim to learn to compose a functional mapping from a given input to an output on-the-fly. By doing so, we achieve two clear advantages. First, since the class of possible mappings is often restricted, it may not require a large amount of training data to learn the distribution of functions. Second, more importantly, since this approach performs indirection in functional spaces, it avoids bindings between numerous entities and symbols in data spaces, thus may help improve the out-of-distribution generalization capability.\nTo this end, we introduce a new class of problems that requires functional analogy-making and indirection, which are deemed to be challenging for current data-oriented approaches. The tasks are similar to popular IQ tasks in which the model is given hints about the hidden rules, then it has to predict the missing answer following the rules. One reasonable approach is that models should be able to compare the current task to what they saw previously to identify the rules between appearing entities, and thus has to search on functional spaces instead of data spaces. More concretely, we construct the IQ tasks by applying geometric transformations to images from MNIST dataset, handwritten Omniglot dataset and real-image CIFAR100 dataset, where the training set and test set contain disjoint image classes from the same or different datasets, and possibly disjoint transformation rules .\nSecond, we present a novel framework named Functional Indirection Neural Estimator (FINE) to solve this class of problems (see Fig. 
1 for the overall architecture of FINE). FINE consists of (a) a neural backbone to approximate the functions and (b) a trainable key-value memory module that stores the basis of the network weights that spans the space of possible functions defined by the backbone. The weight basis memories allow FINE to perform analogy-making and indirection in the function space. More concretely, when a new IQ task arrives, FINE first (1) takes the hint images to make analogies with value memories, then (2) performs indirection to bind value memories with key memories and finally (3) computes the approximated functions based on key memories. Throughout\na comprehensive suite of experiments, we demonstrate that FINE outperforms the competing methods and adapts more effectively in small data scenarios.",
"2 Tasks": "For concreteness, we will focus on Intelligence Quotient (IQ) tests, which have been widely accepted as one of the reliable benchmarks to measure human intelligence [26]. We will study a popular class of IQ tasks that provides hints following some hidden rules and requires the player to choose among given choices to fill in a placeholder so that the filled-in entity obeys the rules of the current task. In order to succeed in these tasks, the player must be able to figure out the hidden rules and perform analogy-making to select the correct choice. Moreover, once figuring out the rules for the current task, a human player can almost always solve tasks with similar rules regardless of the appearing entities given in the tasks. This remarkable ability of out-of-distribution generalization indicates that humans treat objects and relations abstractly instead of relying on the raw sensory data.\nWe aim to solve IQ tasks that involve geometric transformations (e.g., see Fig. 2 for an example), which include affine transformations (translation, rotation, shear, scale, and reflection), non-linear transformations (fisheye, horizontal wave) and syntactic transformations (black-white, swap). Details of transformations are given in Supplementary. In a task, the models are given 3 images x, y and x′, where y is the result of x after applying a geometric transformation. The models are then asked to select y′ among 4 choices y1, y2, y3, y4 so that (x′, y′) follows the same rule as (x, y) (i.e. if y = f(x) then y′ = f(x′) for transformation f ). The 4 choices include (i) one with correct object/image and correct transformation (which is the solution), (ii) one with correct object/image and incorrect transformation, (iii) one with incorrect object/image and correct transformation, and (iv) one with both incorrect object/image and transformation.\nInspired by human capability, a reasonable approach to solve these tasks is that models should be able to figure out the transformation (or relation) between objects/images and apply the transformation to novel objects/images. The datasets can be classified into two main categories: single-transformation datasets and multi-transformation datasets. Single-transformation datasets are ones that only include a particular transformation, e.g. rotation. Note that the individual transformations of the same type vary, e.g., rotations by different angles. Multi-transformation datasets, on the other hand, consist of several transformation types. To test the generalization capability of the models, we build testing sets including classes of images that have never been seen during training (see Section 4.1 and Section 4.2), or even more challenging tasks including unseen rules and unseen datasets (see Section 4.3). Models must be able to leverage knowledge and memory gained from the training dataset to solve a new task.",
"3.1 Functional Hypothesis": "Let X and Y be the data input and output spaces, respectively. Denote by (Xtrain,Ytrain) ⊂ (X ,Y) the training set, and (Xtest,Ytest) ⊂ (X ,Y) the non-overlapping test set. Classical ML assumes that Xtrain and Xtest are drawn from the same distribution. Under this hypothesis, it is reasonable (for frequentists) to find a function f : X → Y in the functional space F that fits both the train and the test sets (i.e., the discrepancy between f(x) and corresponding y is small for all (x, y) in (Xtrain,Ytrain)\nand (Xtest,Ytest)). However, when Xtest is drawn from a different data distribution from that of Xtrain, it has been widely reported that current deep learning models fail drastically [2, 8, 13, 32, 33]. This is because the models are inferred exclusively from the observed data distribution. Moreover, it could be the case that relations between x and y in testing samples are unseen during training, which raises questions on the feasibility of learning a single function when dealing with out-of-distribution tasks. A natural solution for this problem would be to train the model to learn the functions adaptively. Formally speaking, the model will learn a function composer φ : X × Y → F that maps each pair of (x, y) ∈ X × Y (where y is the associated output of x) to a function φx,y in F so that φx,y approximates the true relation between x and y. As discussed, training models this way leads to two clear advantages: (1) it can help to handle the cases when there are multiple (and possibly disjoint) relations between inputs and outputs within the training and testing datasets; and (2) models are less dependent on data and thus can achieve more stable results on different training and testing sets. Empirical evidence for these points will be given in Section 4.\nSince there are an infinite number of functions that can map an input to an output with any given degree of precision, the key is to design φ so that it can map a given input-output pair to a “good enough” function. We humans may draw analogies between the current situation and our experiences and then work out the most suitable options [12]. For example, we can recognize a math problem in exam to be similar to a previous exercise in class with different variable names. We may find the presented idea in a paper related to that in another paper we read before, as someone even says “new ideas are just re-distribution of old ideas”. All these examples illustrate that analogy-making is a powerful strategy in human thinking process. Inspired by this mechanism, we equip the function composer φ with a semantic memory to store past knowledge, on which analogy-making is performed. The memory also plays the role of constraining the searching region for φ, so that φ only looks for functions in the subspaces spanned by the memories instead of the whole functional space F . Further analysis is given in Supplementary.\nA remaining question is how to design the memory. Let us be inspired again by human thinking process: When we see an animal, we compare its face, legs or tail with things we know and finally conclude it’s a “dog”. Here “dog” is an abstract concept bound with primary characteristics (e.g. face, legs, tail, etc.); once a new entity arrives, we compare its properties with these characteristics, and if they are similar, we utilize the indirection mechanism to infer it is indeed the concept we are considering. In our case, the concepts are functions, or more specifically, the geometric transformations. 
We thus maintain a key-value memory structure, in which the keys represent abstract concepts and values the associated characteristics of the concepts. A new input-output pair matches with some values, and the indirection enables us to compute the functions based on corresponding keys.\nNote that we have swapped the role of keys and values of the traditional memory (where the query is matched against the key, not value). This is to emphasize that we perform indirection to map concrete functional values to abstract functional keys.",
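The function composer described here is a higher-order mapping: it consumes a data pair and returns a function. A minimal type sketch (our own illustration, not an interface from the paper) makes the signatures explicit:

```python
from typing import Protocol
import torch

class ComposedFunction(Protocol):
    """phi_{x,y} in F: the inferred mapping, applicable to new inputs."""
    def __call__(self, x_new: torch.Tensor) -> torch.Tensor: ...

class FunctionComposer(Protocol):
    """phi : X x Y -> F, mapping an input-output pair to a concrete function."""
    def __call__(self, x: torch.Tensor, y: torch.Tensor) -> ComposedFunction: ...
```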
"3.2 Functional Indirection Neural Estimator": "In this section, we present the Functional Indirection Neural Estimator (FINE), an neural architecture to realize the general idea laid out in Section 3.1. FINE learns to compose a function mapping an input embedding x to an output embedding y on-the-fly. The function is drawn from a parametric family specified by a backbone neural network. Coupled with the backbone is a function composer φ, which is trained to compute the parameters of the backbone. More specifically, φ maps a data input-output pair to the weights of the neural networks.\nFINE solves the proposed IQ tasks as follows: (1) Given two images x, y from the hint, FINE uses φ to produce the function transforming x to y; denoted by φx,y. (2) Then, we feed the third image x′ to φx,y and get output y∗ = φx,y(x′). (3) We define a similarity metric to compare y∗ with given choices. The choice with the closest distance from y∗ is model’s answer. The function composer and the similarity metric are specified as follows.\nEncoder: Images are encoded by a trainable encoder (see Fig. 1). To effectively solve IQ tasks introduced in Section 2, we use the p4-CNN [7] encoder to serve as an inductive bias for geometric transformations. Throughout this paper, we refer to the images by their embeddings.",
"The functional memory": "Let us focus a weight matrix Wt ∈ Rd in t×d out t at the t-th layer of the backbone network. In practice, Wt belongs to the huge dint × doutt -dimensional real matrix spaceMdint ,doutt . To reduce the complexity of Wt, we assume that Wt only belongs to a s-dimensional subspace ofMdint ,doutt , where s d in t d out t . This subspace has a basis of s matrices which are trainable and stored in FINE’s memory, and Wt will be written as a linear combination of these matrices.\nDenote by Mt the memory for the t-th backbone layer. Mt includes two sub-memories: the key memoryM keyt = {M key t,1 , . . . ,M key t,s } and the corresponding value memoryM valuet = {M valuet,1 , . . . ,M valuet,s }. Elements of M keyt and M value t are trainable matrices of size d in t × doutt . We further let xt ∈ Rd in t and yt ∈ Rd out t be the associated input and output, respectively, where xt is output of the (t− 1)-th layer and yt is the pseudo-output computed by a trainable 1-layer neural network yt = γt(y).\nWith this design, we control the complexity of functional hypothesis space by either constraining the form of the backbone or the capacity of the functional memory.",
"Memory reading": "By the virtue of simplicity, we aim to find a simple query that can demonstrate the relation between xt and yt. Although the exact relation may be non-linear, we found that a query induced from linear relation is enough to efficiently read from memory. Formally, we want to find a query W qt such that W qt xt = yt. The best-approximated solution is W q t = ytx + t , where x + t is the pseudo-inverse of xt and can be efficiently approximated by the iterative Ben-Israel and Cohen algorithm [3]. This way of query computing requires no parameter as opposed to other methods, where the input is often fed into a trainable neural network to compute the query.\nWith the query in hand, the next step is to perform analogy-making. In FINE, the queryW qt represents for the current situation and the value memory M valuet consists of past experiences. The concrete query interacts with value memories to measure how close the current situation is to each of the experiences. The similarities are computed as dot products between the query and value memories and normalized by a factor of √ dint d out t :\nat = fconcat(M valuet ) > · flatten(W qt )√ dint d out t , (1)\nwhere the flatten operator flattens the matrix W qt of size d in t × doutt into a vector of size dint doutt , while the fconcat operator first flattens all matrices in M valuet , then concatenates them together to form the value matrix of size (dint d out t ) × s. The resulting at is a s-dimensional vector measuring the similarities between the query and the entries in the value memory. Here we omit the softmax operator as in usual attention to allow the similarities with more freedom. The same idea is shared in the ESBN [33], where the softmax similarities are scaled by a sigmoidal factor.\nFinally, the value memories are bound with their associated key memories via indirection. This can be understood as moving forward from the concrete space of data and value memories to the abstract space of functions and key memories. With the key memories and the similarity vector at, the weight Wt of current backbone layer can be computed as the linear combination of key memories:\nWt = reshape ( fconcat(M keyt ) · at ) , (2)\nwhere the reshape operator reshapes the vector of size dint d out t to a matrix of size d in t ×doutt . Since the softmax is omitted when calculating the similarities, Wt is not constrained to be in the convex hull of the key memories and indeed can lie anywhere in the subspace spanned by those key memories.",
"Memory update": "The key and value memories are updated using gradient descent:\nM key/valuet,i ←M key/value t,i − λ\n∂L\n∂M key/valuet,i , ∀i = 1, 2, . . . , s, (3)\nwhere λ > 0 is the learning rate and L is the loss of the training step.",
"The similarity metric": "After determining φx,y, the model is given a new input x′ and being asked to select the correct associated output y′ among 4 choices y′1, y ′ 2, y ′ 3, y ′ 4 so that (x\n′, y′) follows the same transformation rule as (x, y). This problem can be cast as finding the choice that is the most similar with y∗ = φx,y(x\n′). We consider the weighted Euclidean metric that measures the distance between two vectors u, v ∈ Rd:\nη(u, v) = d∑ i=1 αi(ui − vi)2,\nwhere {αi}di=1 ≥ 0 are trainable scalars, i.e., each component of u and v contributes with different importance. Finally, the probability to pick a choice is computed as:\np (y′i | x′) = exp(−η(y′i, y∗))\n4∑ j=1 exp(− η(y′j , y∗)) , for i = 1, 2, 3, 4.",
"4 Experiments": "We conduct experiments to show the out-of-distribution generalization capability of FINE when performing tasks introduced in Section 2. For non-invertible transformations (e.g. reflection), we use a simple 2-layer MLP as the backbone. For invertible transformations, we use the NICE architecture [9] as backbone to serve as an inductive bias for invertibility. Since in each NICE layer, only half of the input is transformed, we use the same memory for two consecutive layers, i.e., M2t = M2t+1. To balance between the backbone complexity and computational cost, we use 4 NICE layers in all experiments.\nWe compare FINE with three major classes of models: (a) models that make analogies in the data space, including Transformer [30], PrediNet [29] and RelationNet [28]; (b) models that leverage indirection to bind feature vectors with associated symbols and reason on the symbols, including the ESBN [33]; and (c) models that aim to learn a mapping from data to the functional space, including the HyperNetworks [14]. For HyperNetworks, we still use the NICE backbone and just apply their fast-weight generation method for fair comparisons. Except for FINE and HyperNetworks, all models are trained with context normalization [32], which has been proved to be effective in improving the generalization ability.\nDatasets & implementation: We generate data for IQ tasks described in Section 2 using images from Omniglot dataset [5], which includes 1,623 handwritten characters, and real-image CIFAR100 dataset [17]. If not specified, models are trained with p4-CNN encoder [7]. Experiments are conducted using PyTorch on a single GPU with Adam optimizer. Reported results are averages of 10 runs.",
"4.1 Results on Omniglot Dataset": "We use 100 characters for training and other 800 characters for testing. The train and test set size is 10,000 and 20,000, respectively. For FINE, we use 4 NICE layers with 48 memories for each pair\nof NICE layers. Experimental results for single-transformation tasks are shown in Table 1. Overall, FINE dominates others with large margins. For example, the gap to the runner-up on the Rotation task is nearly 13%. With test accuracy over 75% on all tasks, FINE shows a strong capability of out-of-distribution generalization.\nWe further conduct multi-affine-transformation experiments. In this case, the training set includes multiple types of affine transformations, while other settings are similar to the single-transformation case. Results are also reported in Table 1. FINE continues to outperform other models. This is because only FINE explicitly assumes the existence of multiple good functions that represent the transformation from input to output data. We note that although HyperNet also makes a similar assumption by generating data-specific weights, it does not utilize analogy-making and indirection and thus, fails to generalize to unseen images.",
"4.2 Results on CIFAR100 Dataset": "For CIFAR100 dataset, we follow similar settings as in experiments on the Omniglot dataset, except that we use 50 classes for training and 50 remaining classes for testing. We also conduct experiments on single-transformation and multi-affine-transformation tasks. Results for single-transformation tasks are reported in Table 2. Again, FINE outperforms all other models on all tasks, especially on Reflection where the gap is nearly 30%. Although FINE does not show good performance on Swap task as in Omniglot experiments, it is still slightly better than other models. On the remaining tasks, FINE achieves test accuracy of more than 80%.\nFor the multi-affine-transformation task, we report the performances of the models when trained with different encoder architectures, including the p4-CNN, 3-layer ResNet and 2-layer MLP, in Table 3. The results show two superior characteristics of FINE: first, FINE is consistently better than other models across different encoders; second, FINE is more stable with small standard deviations. This empirical result supports the functional hypothesis stated in Section 3.1, where we suggest that focusing on functions distribution instead of data distribution can boost model’s generalization capability and stability.",
"4.3 More Extreme OOD Tasks": "Previous tasks only include unseen classes of objects during testing. In this section, we further test FINE and related models on more challenging OOD tasks: tasks with unseen rules during training and even ones with images from unseen datasets. The training and testing sets are either images from CIFAR100, Omniglot or MNIST datasets, while the hidden rules are either translation, rotation or shear. For translation, training problems consist of translation vectors (a, b) with |a|, |b| ≤ 3,\nand models are tested with either |a| > 3 or |b| > 3; for rotation, model are trained with angles α ≤ 180◦ and tested with α > 180◦; for shear, training angles (α, β) are ones with |α|, |β| ≤ 30◦, while testing ones are either |α| > 30◦ or |β| > 30◦. All tasks have 5,000 data points for training and 10,000 for testing. We report results of FINE with NICE backbone and related models in Table 4. As expected, FINE continues to outperform other models on most of the tasks, even on extreme OOD tasks with unseen datasets and unseen rules where performances of all models drop significantly. This demonstrates FINE is capable of effectively learning the basis weights, which are stored in the memory, to represent novel rules.",
"Clustering on functional spaces": "We study how the transformations are distributed in the functional space. We use the FINE model trained on multi-affine transformation tasks. For an input-output pair (x, y), we flatten and concatenate all weights of NICE layers to form a vector representing φx,y . We then use the UMAP [21] to project φx,y’s vectors onto the 2D plane. Results are shown in Fig. 3. It is interesting to see that shears with the same horizontal or vertical angles are positioned closely; and scales are separated into 2 “big” clusters, one for smaller scale and one for bigger scale. In contrast, reflection representations seem not to be clustered properly, which is worth further investigation in future work.\nNumber of memories and backbone layers\nWe do an ablation study to see the effect of number of memories and backbone layers in FINE. For the limit case with 0 memory, we assign the query matrices to be the weights for NICE layers without\nthe analogy-making and indirection process. For the limit case with 0 NICE layer, we replace the NICE backbone by a 2-layer MLP.\nResults are shown in Fig. 4(a). Overall, we can observe clear improvements when we increase the number of memories or number of NICE layers. More interestingly, the more number of memories is, the more stable the results will be. Increasing the number of memories is equivalent to enlarging the range of φ, and increasing the number of NICE layers is equivalent to enlarging the hypothesis space F . Enlarging F may help FINE approach the true functions while still being sufficiently constrained by the number of memories, thus still being able to maintain its generalization capability.\nNumber of training data points\nWe train models on multi-affine-transformation task with training sets of different sizes. Results are reported in Fig. 4(b). FINE can adapt with small datasets of sizes 100 or 1000 and achieve fair accuracy (47.8% and 64.4%, respectively). Moreover, FINE achieves average test accuracy of 81.1% on training set of size 10,000, which is quite close to the 85.3% accuracy on 100,000 data points training sets. This shows FINE may obtain a near-optimal solution even with small number of training data points. In contrast, ESBN and Transformer need 100,000 training data points to achieve 80% or higher test accuracy, while only achieving roughly 70% test accuracy on smaller datasets.",
"5 Related work": "In recent years, there has been a strong interest in designing models that are capable of generalizing systematically. The Module Networks [1], which dynamically compose neural networks out of trainable modules, have been shown to possess some degree of systematic generalization [2]. Parascandolo et al. [24] proposed a method to train multiple competing experts to explain image transformations on MNIST and Omniglot datasets, yet the transformations are simpler than ones in our FINE dataset and it is not clear whether the proposed method can deal with unseen transformations. The Neural Interpreter [25] uses attention mechanism to recompose functional modules for each input-output pair and test their method on abstract reasoning tasks, yet tends to require a large amount of data to learn. Switch Transformer [10] mitigates the communication and computational cost in mixture of experts models. Recently, ESBN [33], which utilizes the indirection mechanism, shows a great promise on OOD tasks. Both methods achieve their degree of systematic generalization by injecting symbolic biases into the models. Our model FINE follows the same strategy, but it performs analogy-making and indirection in functional spaces instead of data spaces, and this has proved to boost both the performance and stability.\nIQ tests are powerful testbeds for visual and abstract reasoning. Inspired by Raven’s Progressive Matrices (RPM), the RAVEN dataset [34] was proposed as a testbed for visual reasoning models. However, this dataset does not focus on testing the ability of OOD generalization. Webb et al. [33] propose a series of IQ tasks with Unicode characters to show the effectiveness of indirection in tasks involving abstract rules, however these tasks are relatively simple since they only require the models to understand the same-different relation. The ARC dataset [6] aims to serve as a benchmark for general intelligence and includes various psychometric IQ tasks in the form of grid structures. In this paper, we propose IQ tasks involving geometric transformations as introduced in Section 2. These tasks are not only flexible so that we can include images from different datasets or create/combine numerous transformations, but also challenging to test OOD generalization abilities of models.\nThe weight composition feature of FINE links back to the concept of fast weights [19], the idea of computing data-specific network weights on-the-fly. HyperNetworks [14] stylizes this idea by computing the fast weights using a separate trainable (slow weight) network. The Meta-learned Neural Memory [23] uses the pseudo-target technique (which we also leverage in FINE) and appropriately updates the short-term memory once new input arrives. The Neural Stored-Program Memory (NSM) [18] proposes a hybrid approach between slow-weight and fast-weight to compute network weights on-the-fly based on slow-weight key and value memories. However, NSM only performs on sequential learning tasks, while FINE aims to solve OOD IQ tasks requiring abstract cognition. Moreover, FINE computes the query based on the input and the (pseudo-) output, while NSM’s query is computed based on the input only. Memories have been to be versatile in meta-learning and few-shot learning [15, 27, 31] due to the ability to rapidly store past examples and adapt to new situations. In our case, an IQ task can be thought of as a one-shot learning scenario in which FINE has to make use of the long-term key-value memory to adapt to the current task.",
"7 Conclusion": "To study the out-of-distribution (OOD) generalization capability of models, we have proposed IQ tasks that require models to rapidly recognize the hidden rules of geometric transformations between a pair of images and transfer the rules to a new pair of different image classes. Such tasks would necessitate human-like abilities for conceptual abstraction, analogy-making, and utilizing indirection. We put forward a hypothesis that these mechanisms should be performed in the functional space instead of data space as in current deep learning models. To realize the hypothesis, we then proposed FINE, a memory-augmented neural architecture that learns to compose functions mapping an input to an output on-the-fly. The memory has two trainable components: the value sub-memory and the binding key sub-memory, where the keys are basis weight matrices that span the space of functions. For an IQ task, when given a hint in the form of an input-output pair, and FINE estimates the analogy between the pair and the values as mixing coefficients. These coefficients are then used to mix the binding keys via indirection to generate the weights of the backbone neural net which computes the intended function. For a test input of different class, the function is used to estimate the most compatible output, thus solving the IO task. Through an extensive suite of experiments using images from the Omniglot and CIFAR100 datasets to construct the IQ tasks, FINE is found to be reliable in figuring out the hidden relational pattern in each IQ task and thus is able to solve new tasks, even with unseen image classes. Importantly, FINE outperforms other models in all experiments, and can generalize well in small data regimes.\nFuture works will include making FINE robust against OOD in transformations without catastrophic forgetting when new transformations are continually introduced.",
"Checklist": "1. For all authors... (a) Do the main claims made in the abstract and introduction accurately reflect the paper’s\ncontributions and scope? [Yes] Our main contribution is to perform analogy-making and indirection on functional spaces for better OOD generalization.\n(b) Did you describe the limitations of your work? [Yes] see Section 6 (c) Did you discuss any potential negative societal impacts of your work? [No] (d) Have you read the ethics review guidelines and ensured that your paper conforms to\nthem? [Yes] 2. If you are including theoretical results...\n(a) Did you state the full set of assumptions of all theoretical results? [N/A] (b) Did you include complete proofs of all theoretical results? [N/A]\n3. If you ran experiments... (a) Did you include the code, data, and instructions needed to reproduce the main experi-\nmental results (either in the supplemental material or as a URL)? [Yes] see Supplementary\n(b) Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)? [Yes] see Section 4\n(c) Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)? [Yes] see Section 4 and Appendix C\n(d) Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? [Yes] see Section 4\n4. If you are using existing assets (e.g., code, data, models) or curating/releasing new assets... (a) If your work uses existing assets, did you cite the creators? [Yes] We use codes of [33]\nto run our experiments for the RelationNet, PrediNet, Transformer and ESBN. (b) Did you mention the license of the assets? [Yes] see Supplementary (c) Did you include any new assets either in the supplemental material or as a URL? [Yes]\nWe include codes for our tasks and models. (d) Did you discuss whether and how consent was obtained from people whose data you’re\nusing/curating? [Yes] see Supplementary (e) Did you discuss whether the data you are using/curating contains personally identifiable\ninformation or offensive content? [Yes] see Supplementary 5. If you used crowdsourcing or conducted research with human subjects...\n(a) Did you include the full text of instructions given to participants and screenshots, if applicable? [N/A]\n(b) Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable? [N/A]\n(c) Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation? [N/A]",
"Reviewer Summary": "Reviewer_2: The problem of generalization to out-of-distribution (OOD) samples is key to most problems in AI. The authors address this challenge in the context of IQ-like tasks by introducing a method called FINE, for Functional indirection neural estimator. The authors are inspired by the idea of indirection, trying to connect two different representations and using one to learn or interpret the other. The authors show that the proposed architecture does well in those IQ tasks when considering images that are different from those in the training set upon applying the same rules as those in the training set.\n\nReviewer_3: This paper introduces a mechanism for functional indirection (FINE) in neural networks to achieve OOD generalization in abstract reasoning tasks. FINE proposes to dynamically select weights for a neural network backbone for a particular data input-output pair and use those weights to make prediction for an input that share similar hidden rule. The weights are selected from pre-defined key-value memory which is comprised of the weights that spans the space of possible functions. FINE is like a constrained form of hypernetworks where weights of the main (backbone) network are determined by another network. In this paper, this role is played by function composer\nϕ\nwhich finds optimal weights and their arrangement for the main network using a limited basis of weights in the memory.\nThe paper further introduces a new abstract reasoning dataset based on Omniglot and CIFAR100 for evaluation.\n\nReviewer_4: This paper addresses the out-of-distribution generalization of deep learning models in IQ visual tasks involving extracting geometric transformation between a pair of images and applying the extracted transformation to a new image. It presents a memory-augmented neural architecture and on-the-fly model parameter retrieval from the memory to achieve OOD generalization in functional spaces.\n\nReviewer_5: This paper introduces FINE, a method for achieving out of distribution generalization through analogy making and indirection in the space of functions. FINE makes an analogy from a given input and output example to infer the function that ties them then uses indirection to approximate the function by composing a set of functions saved in memory. The paper introduces a visual reasoning dataset for evaluating out of distribution generalization. The dataset is based on standard vision benchmarks, CIFAR100 and Omniglot. Each sample consists of an input-output pair used as a cue for the transformation, an input image and 4 choices. FINE performs better than several comparable models over all functions and function combinations in terms of accuracy and sample efficiency. Ablation experiments highlight the importance of the number of layers and the memory size."
}