_id: string (length 4–10) · text: string (length 0–18.4k) · title: string (length 0–8.56k)
d3300937
Supervised learning depends on annotated examples, which are taken to be the ground truth. But these labels often come from noisy crowdsourcing platforms, like Amazon Mechanical Turk. Practitioners typically collect multiple labels per example and aggregate the results to mitigate noise (the classic crowdsourcing problem). Given a fixed annotation budget and unlimited unlabeled data, redundant annotation comes at the expense of fewer labeled examples. This raises two fundamental questions: (1) How can we best learn from noisy workers? (2) How should we allocate our labeling budget to maximize the performance of a classifier? We propose a new algorithm for jointly modeling labels and worker quality from noisy crowdsourced data. The alternating minimization proceeds in rounds, estimating worker quality from disagreement with the current model and then updating the model by optimizing a loss function that accounts for the current estimate of worker quality. Unlike previous approaches, our algorithm can estimate worker quality even with only one annotation per example. We establish a generalization error bound for models learned with our algorithm and show theoretically that it is better to label many examples once (rather than fewer examples multiple times) when worker quality is above a threshold. Experiments conducted on both ImageNet (with simulated noisy workers) and MS-COCO (using the real crowdsourced labels) confirm our algorithm's benefits.
Learning From Noisy Singly-labeled Data
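A minimal Python sketch of the alternating-minimization idea described in the entry above, assuming one noisy label per example and caller-supplied `train_model` / `predict_proba` helpers (both hypothetical); re-weighting examples by estimated worker quality is a simplification that stands in for the paper's quality-aware loss.

```python
import numpy as np

def alternating_minimization(X, y_noisy, worker_ids, train_model, predict_proba, n_rounds=5):
    """Alternate between (a) estimating per-worker quality from agreement with the
    current model and (b) refitting the model on quality-weighted examples."""
    n_workers = int(worker_ids.max()) + 1
    quality = np.full(n_workers, 0.8)            # prior belief in worker accuracy
    weights = np.ones(len(y_noisy))
    model = None
    for _ in range(n_rounds):
        # (b) fit the classifier on noisy labels, weighted by estimated worker quality
        model = train_model(X, y_noisy, sample_weight=weights)
        # (a) re-estimate each worker's quality from agreement with model predictions
        preds = predict_proba(model, X).argmax(axis=1)
        for w in range(n_workers):
            mask = worker_ids == w
            if mask.any():
                quality[w] = (preds[mask] == y_noisy[mask]).mean()
        weights = quality[worker_ids]            # down-weight labels from unreliable workers
    return model, quality
```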
d53216170
Summarization of long sequences into a concise statement is a core problem in natural language processing, requiring non-trivial understanding of the input. Based on the promising results of graph neural networks on highly structured data, we develop a framework to extend existing sequence encoders with a graph component that can reason about long-distance relationships in weakly structured data such as text. In an extensive evaluation, we show that the resulting hybrid sequence-graph models outperform both pure sequence models as well as pure graph models on a range of summarization tasks.
STRUCTURED NEURAL SUMMARIZATION
d220424510
Despite significant advances, continual learning models still suffer from catastrophic forgetting when exposed to incrementally available data from non-stationary distributions. Rehearsal approaches alleviate the problem by maintaining and replaying a small episodic memory of previous samples, often implemented as an array of independent memory slots. In this work, we propose to augment such an array with a learnable random graph that captures pairwise similarities between its samples, and use it not only to learn new tasks but also to guard against forgetting. Empirical results on several benchmark datasets show that our model consistently outperforms recently proposed baselines for task-free continual learning.
Graph-Based Continual Learning
d174802369
Continual learning aims to learn new tasks without forgetting previously learned ones. This is especially challenging when one cannot access data from previous tasks and when the model has a fixed capacity. Current regularization-based continual learning algorithms need an external representation and extra computation to measure the parameters' importance. In contrast, we propose Uncertainty-guided Continual Bayesian Neural Networks (UCB), where the learning rate adapts according to the uncertainty defined in the probability distribution of the weights in networks. Uncertainty is a natural way to identify what to remember and what to change as we continually learn, and thus mitigate catastrophic forgetting. We also show a variant of our model, which uses uncertainty for weight pruning and retains task performance after pruning by saving binary masks per task. We evaluate our UCB approach extensively on diverse object classification datasets with short and long sequences of tasks and report superior or on-par performance compared to existing approaches. Additionally, we show that our model does not necessarily need task information at test time, i.e. it does not presume knowledge of which task a sample belongs to.
UNCERTAINTY-GUIDED CONTINUAL LEARNING WITH BAYESIAN NEURAL NETWORKS
d260704206
The legality of training language models (LMs) on copyrighted or otherwise restricted data is under intense debate. However, as we show, model performance significantly degrades if trained only on low-risk text (e.g., out-of-copyright books or government documents), due to its limited size and domain coverage. We present SILO, a new language model that manages this risk-performance tradeoff during inference. SILO is built by (1) training a parametric LM on the OPEN LICENSE CORPUS (OLC), a new corpus we curate with 228B tokens of public domain and permissively licensed text, and (2) augmenting it with a more general and easily modifiable nonparametric datastore (e.g., containing copyrighted books or news) that is only queried during inference. The datastore allows use of high-risk data without training on it, supports sentence-level data attribution, and enables data producers to opt out from the model by removing content from the store. These capabilities can foster compliance with data-use regulations such as the fair use doctrine in the United States and the GDPR in the European Union. Our experiments show that the parametric LM struggles on domains not covered by OLC. However, access to the datastore greatly improves out-of-domain performance, closing 90% of the performance gap with an LM trained on the Pile, a more diverse corpus with mostly high-risk text. We also analyze which nonparametric approach works best, where the remaining errors lie, and how performance scales with datastore size. Our results suggest that it is possible to build high quality language models while mitigating their legal risk. We release all models, data, and code publicly at https://github.com/kernelmachine/silo-lm.
SILO LANGUAGE MODELS: ISOLATING LEGAL RISK IN A NONPARAMETRIC DATASTORE
d252715596
We address the problem of safe reinforcement learning from pixel observations. Inherent challenges in such settings are (1) a trade-off between reward optimization and adhering to safety constraints, (2) partial observability, and (3) high-dimensional observations. We formalize the problem in a constrained, partially observable Markov decision process framework, where an agent obtains distinct reward and safety signals. To address the curse of dimensionality, we employ a novel safety critic using the stochastic latent actor-critic (SLAC) approach. The latent variable model predicts rewards and safety violations, and we use the safety critic to train safe policies. Using well-known benchmark environments, we demonstrate competitive performance over existing approaches with respect to computational requirements, final reward return, and satisfying the safety constraints.
SAFE REINFORCEMENT LEARNING FROM PIXELS USING A STOCHASTIC LATENT REPRESENTATION
d221112239
Quantum computing-based machine learning mainly focuses on quantum computing hardware that is experimentally challenging to realize due to requiring quantum gates that operate at very low temperature. Instead, we demonstrate the existence of a lower performance and much lower effort island on the accuracy-vs-qubits graph that may well be experimentally accessible with room temperature optics. This high temperature "quantum computing toy model" is nevertheless interesting to study as it allows rather accessible explanations of key concepts in quantum computing, in particular interference, entanglement, and the measurement process. We specifically study the problem of classifying an example from the MNIST and Fashion-MNIST datasets, subject to the constraint that we have to make a prediction after the detection of the very first photon that passed a coherently illuminated filter showing the example. Whereas a classical set-up in which a photon is detected after falling on one of the 28 × 28 image pixels is limited to a (maximum likelihood estimation) accuracy of 21.27% for MNIST, respectively 18.27% for Fashion-MNIST, we show that the theoretically achievable accuracy when exploiting interference by optically transforming the quantum state of the photon is at least 41.27% for MNIST, respectively 36.14% for Fashion-MNIST. We show in detail how to train the corresponding transformation with TensorFlow and also explain how this example can serve as a teaching tool for the measurement process in quantum mechanics.
Single-Photon Image Classification
d4737664
The ability of algorithms to evolve or learn (compositional) communication protocols has traditionally been studied in the language evolution literature through the use of emergent communication tasks. Here we scale up this research by using contemporary deep learning methods and by training reinforcement-learning neural network agents on referential communication games. We extend previous work, in which agents were trained in symbolic environments, by developing agents which are able to learn from raw pixel data, a more challenging and realistic input representation. We find that the degree of structure found in the input data affects the nature of the emerged protocols, and thereby corroborate the hypothesis that structured compositional language is most likely to emerge when agents perceive the world as being structured.
EMERGENCE OF LINGUISTIC COMMUNICATION FROM REFERENTIAL GAMES WITH SYMBOLIC AND PIXEL INPUT
d247084493
Image denoising and artefact removal are complex inverse problems admitting multiple valid solutions. Unsupervised diversity restoration, that is, obtaining a diverse set of possible restorations given a corrupted image, is important for ambiguity removal in many applications such as microscopy where paired data for supervised training are often unobtainable. In real world applications, imaging noise and artefacts are typically hard to model, leading to unsatisfactory performance of existing unsupervised approaches. This work presents an interpretable approach for unsupervised and diverse image restoration. To this end, we introduce a capable architecture called HIERARCHICAL DIVNOISING (HDN) based on a hierarchical Variational Autoencoder. We show that HDN learns an interpretable multi-scale representation of artefacts and we leverage this interpretability to remove imaging artefacts commonly occurring in microscopy data. Our method achieves state-of-the-art results on twelve benchmark image denoising datasets while providing access to a whole distribution of sensibly restored solutions. Additionally, we demonstrate on three real microscopy datasets that HDN removes artefacts without supervision, being the first method capable of doing so while generating multiple plausible restorations all consistent with the given corrupted image. Recently, the first of these drawbacks was addressed by DIVNOISING (DN) (Prakash et al., 2021), which proposed a convolutional Variational Autoencoder (VAE) architecture for unsupervised denoising and generates diverse denoised solutions, giving users access to samples from a distribution of sensible denoising results. But DN exhibits poor performance on harder (visually more complex and varied) datasets, e.g. diverse sets of natural images.
INTERPRETABLE UNSUPERVISED DIVERSITY DENOISING AND ARTEFACT REMOVAL
d247594371
We propose a family of adaptive integer compression operators for distributed Stochastic Gradient Descent (SGD) that do not communicate a single float. This is achieved by multiplying floating-point vectors with a number known to every device and then rounding to integers. In contrast to the prior work on integer compression for SwitchML by Sapio et al. (2021), our IntSGD method is provably convergent and computationally cheaper as it estimates the scaling of vectors adaptively. Our theory shows that the iteration complexity of IntSGD matches that of SGD up to constant factors for both convex and non-convex, smooth and nonsmooth functions, with and without overparameterization. Moreover, our algorithm can also be tailored for the popular all-reduce primitive and shows promising empirical performance.
INTSGD: ADAPTIVE FLOATLESS COMPRESSION OF STOCHASTIC GRADIENTS
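A small numpy sketch of the integer-only communication idea from the abstract above: every worker scales its gradient by a value all workers can compute, stochastically rounds to integers, and only those integers are (all-)reduced; the adaptive scale rule below is illustrative, not the paper's exact estimator.

```python
import numpy as np

def int_compress(v, scale, rng):
    """Unbiased stochastic rounding of scale * v to int64."""
    x = scale * v
    low = np.floor(x)
    return (low + (rng.random(x.shape) < (x - low))).astype(np.int64)

def intsgd_round(worker_grads, norm_est, beta=0.9, bits=8, eps=1e-8, rng=None):
    """One communication round: all workers use the same scale, exchange only
    integers, and decode the averaged gradient locally."""
    rng = rng or np.random.default_rng(0)
    # adaptive scale from a running estimate of gradient magnitude (illustrative rule)
    norm_est = beta * norm_est + (1 - beta) * np.mean([np.abs(g).max() for g in worker_grads])
    scale = (2 ** bits) / (norm_est + eps)
    ints = [int_compress(g, scale, rng) for g in worker_grads]
    summed = np.sum(ints, axis=0)                      # what an integer all-reduce computes
    avg_grad = summed / (scale * len(worker_grads))    # decode back to floats
    return avg_grad, norm_est
```

Each call returns the decoded average gradient together with the updated magnitude estimate to carry into the next round.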
d209475822
Most research on lifelong learning applies to images or games, but not language. We present LAMOL, a simple yet effective method for lifelong language learning (LLL) based on language modeling. LAMOL replays pseudo-samples of previous tasks while requiring no extra memory or model capacity. Specifically, LAMOL is a language model that simultaneously learns to solve the tasks and generate training samples. When the model is trained for a new task, it generates pseudo-samples of previous tasks for training alongside data for the new task. The results show that LAMOL prevents catastrophic forgetting without any sign of intransigence and can perform five very different language tasks sequentially with only one model. Overall, LAMOL outperforms previous methods by a considerable margin and is only 2-3% worse than multitasking, which is usually considered the LLL upper bound. The source code is available at
LAMOL: LANGUAGE MODELING FOR LIFELONG LANGUAGE LEARNING
d88522730
In this paper we approach two relevant deep learning topics: (i) handling graph-structured input data and (ii) better understanding and analysis of deep networks and related learning algorithms. With this in mind we focus on the topological classification of reachability in a particular subset of planar graphs (Mazes). Doing so, we are able to model the topology of data while staying in Euclidean space, thus allowing its processing with standard CNN architectures. We suggest a suitable architecture for this problem and show that it can express a perfect solution to the classification task. The shape of the cost function around this solution is also derived and, remarkably, does not depend on the size of the maze in the large maze limit. Responsible for this behavior are rare events in the dataset which strongly regulate the shape of the cost function near this global minimum. We further identify an obstacle to learning in the form of poorly performing local minima in which the network chooses to ignore some of the inputs. We further support our claims with training experiments and numerical analysis of the cost function on networks with up to 128 layers.
CRITICAL PERCOLATION AS A FRAMEWORK TO ANALYZE THE TRAINING OF DEEP NETWORKS
d238419044
This paper follows up on a recent work of Neu et al. (2021) and presents some new information-theoretic upper bounds for the generalization error of machine learning models, such as neural networks, trained with SGD. We apply these bounds to analyzing the generalization behaviour of linear and two-layer ReLU networks. Experimental study of these bounds provides some insights on the SGD training of neural networks. They also point to a new and simple regularization scheme which we show performs comparably to the current state of the art.
ON THE GENERALIZATION OF MODELS TRAINED WITH SGD: INFORMATION-THEORETIC BOUNDS AND IMPLICATIONS
d52922363
We analyze speed of convergence to global optimum for gradient descent training a deep linear neural network (parameterized as x ↦ W_N W_{N−1} ⋯ W_1 x) by minimizing the ℓ2 loss over whitened data. Convergence at a linear rate is guaranteed when the following hold: (i) dimensions of hidden layers are at least the minimum of the input and output dimensions; (ii) weight matrices at initialization are approximately balanced; and (iii) the initial loss is smaller than the loss of any rank-deficient solution. The assumptions on initialization (conditions (ii) and (iii)) are necessary, in the sense that violating any one of them may lead to convergence failure. Moreover, in the important case of output dimension 1, i.e. scalar regression, they are met, and thus convergence to global optimum holds, with constant probability under a random initialization scheme. Our results significantly extend previous analyses, e.g., of deep linear residual networks (Bartlett et al., 2018).
A CONVERGENCE ANALYSIS OF GRADIENT DESCENT FOR DEEP LINEAR NEURAL NETWORKS
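A toy numpy illustration of the setting analyzed above: a deep linear network x ↦ W_N ⋯ W_1 x trained by plain gradient descent on the ℓ2 loss over (roughly) whitened data, starting from a small, near-balanced random initialization; dimensions, step size, and iteration count are arbitrary choices, not values from the paper.

```python
import numpy as np

def chain(mats, dim):
    """Product of a list of matrices (identity if the list is empty)."""
    out = np.eye(dim)
    for M in mats:
        out = out @ M
    return out

rng = np.random.default_rng(0)
N, d_in, d_hid, d_out = 3, 5, 5, 1                 # scalar regression; hidden dims >= min(d_in, d_out)
dims = [d_in] + [d_hid] * (N - 1) + [d_out]

X = rng.standard_normal((200, d_in))               # inputs (approximately whitened)
Y = X @ rng.standard_normal((d_in, d_out))         # targets from a ground-truth linear map

# small i.i.d. weights give an approximately balanced initialization
Ws = [0.1 * rng.standard_normal((dims[j], dims[j + 1])) for j in range(N)]

lr = 0.05
for _ in range(20000):
    err = X @ chain(Ws, d_in) - Y                  # residual of the squared loss
    grads = []
    for j in range(N):
        left = chain(Ws[:j], d_in)                 # product of factors on the input side
        right = chain(Ws[j + 1:], dims[j + 1])     # product of factors on the output side
        grads.append(left.T @ X.T @ err @ right.T / len(X))
    for j in range(N):
        Ws[j] -= lr * grads[j]

print("final squared loss:", 0.5 * np.mean((X @ chain(Ws, d_in) - Y) ** 2))
```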
d3323727
Deep neural networks (DNNs) continue to make significant advances, solving tasks from image classification to translation or reinforcement learning. One aspect of the field receiving considerable attention is efficiently executing deep models in resource-constrained environments, such as mobile or embedded devices. This paper focuses on this problem, and proposes two new compression methods, which jointly leverage weight quantization and distillation of larger networks, called "teachers," into compressed "student" networks. The first method we propose is called quantized distillation and leverages distillation during the training process, by incorporating distillation loss, expressed with respect to the teacher network, into the training of a smaller student network whose weights are quantized to a limited set of levels. The second method, differentiable quantization, optimizes the location of quantization points through stochastic gradient descent, to better fit the behavior of the teacher model. We validate both methods through experiments on convolutional and recurrent architectures. We show that quantized shallow students can reach similar accuracy levels to state-of-the-art full-precision teacher models, while providing up to order of magnitude compression, and inference speedup that is almost linear in the depth reduction. In sum, our results enable DNNs for resource-constrained environments to leverage architecture and accuracy advances developed on more powerful devices. Existing compression approaches combine quantization, weight sharing, and careful coding of network weights to reduce the size of state-of-the-art deep models by orders of magnitude, while at the same time speeding up inference. Both these research directions are extremely active, and have been shown to yield significant compression and accuracy improvements, which can be crucial when making such models available on embedded devices or phones. However, the literature on compressing deep networks focuses almost exclusively on finding good compression schemes for a given model, without significantly altering the structure of the model. On the other hand, recent parallel work (Ba & Caruana, 2013; Hinton et al., 2015) introduces the process of distillation, which can be used for transferring the behaviour of a given model to any other structure. This can be used for compression, e.g. to obtain compact representations of ensembles (Hinton et al., 2015). However, the size of the student model needs to be large enough to allow learning to succeed. A model that is too shallow, too narrow, or which misses necessary units, can result in considerable loss of accuracy (Urban et al., 2016). In this work, we examine whether distillation and quantization can be jointly leveraged for better compression. We start from the intuition that 1) the existence of highly-accurate, full-precision teacher models should be leveraged to improve the performance of quantized models, while 2) quantizing a model can provide better compression than a distillation process attempting the same space gains by purely decreasing the number of layers or layer width. While our approach is very natural, interesting research questions arise when these two ideas are combined. Contribution:
We present two methods that compound compression in depth, by distilling a shallower student network with accuracy similar to a deeper teacher network, with compression in width, by quantizing the weights of the student to a limited set of integer levels and using fewer weights per layer. The basic idea is that quantized models can leverage distillation loss (Hinton et al., 2015), the weighted average between the correct targets (represented by the labels) and soft targets (represented by the teacher's outputs).
MODEL COMPRESSION VIA DISTILLATION AND QUANTIZATION
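A PyTorch-style sketch of the two ingredients discussed in the entry above: a distillation loss that averages hard-label cross-entropy with KL to the teacher's temperature-softened outputs, plus a uniform weight quantizer with a straight-through gradient; the quantizer is a generic stand-in rather than the paper's exact differentiable quantization.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    """Weighted average of soft-target KL (teacher) and hard-target CE (labels)."""
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=-1),
                    F.softmax(teacher_logits / T, dim=-1),
                    reduction="batchmean") * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

def quantize_weights(w, bits=4):
    """Uniform quantization to 2**bits levels with a straight-through estimator."""
    levels = 2 ** bits - 1
    w_min, w_max = w.min(), w.max()
    scale = (w_max - w_min).clamp(min=1e-8) / levels
    q = torch.round((w - w_min) / scale) * scale + w_min
    return w + (q - w).detach()     # forward: quantized values; backward: identity
```

In a quantized-distillation setup, such a quantizer would be applied to the student's weights in the forward pass while the distillation loss drives the parameter updates.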
d257280442
What is an image and how to extract latent features? Convolutional Networks (ConvNets) consider an image as organized pixels in a rectangular shape and extract features via convolutional operations in local regions; Vision Transformers (ViTs) treat an image as a sequence of patches and extract features via an attention mechanism in a global range. In this work, we introduce a straightforward and promising paradigm for visual representation, which is called Context Clusters. Context clusters (CoCs) view an image as a set of unorganized points and extract features via a simplified clustering algorithm. In detail, each point includes the raw feature (e.g., color) and positional information (e.g., coordinates), and a simplified clustering algorithm is employed to group and extract deep features hierarchically. Our CoCs are convolution- and attention-free, and only rely on a clustering algorithm for spatial interaction. Owing to the simple design, we show that CoCs endow gratifying interpretability via the visualization of the clustering process. Our CoCs aim at providing a new perspective on images and visual representation, which may enjoy broad applications in different domains and exhibit profound insights. Even though we are not targeting SOTA performance, CoCs still achieve comparable or even better results than ConvNets or ViTs on several benchmarks. Recent works (e.g., Tolstikhin et al., 2021) have demonstrated that a pure MLP-based design can also achieve similar performance. Besides, using a graph network as the feature extractor has also proven feasible (Han et al., 2022). Hence, we expect a new paradigm of feature extraction that can provide some novel insights instead of incremental performance improvements.
Image as Set of Points
d219721263
We present Wasserstein Embedding for Graph Learning (WEGL), a novel and fast framework for embedding entire graphs in a vector space, in which various machine learning models are applicable for graph-level prediction tasks. We leverage new insights on defining similarity between graphs as a function of the similarity between their node embedding distributions. Specifically, we use the Wasserstein distance to measure the dissimilarity between node embeddings of different graphs. Different from prior work, we avoid pairwise calculation of distances between graphs and reduce the computational complexity from quadratic to linear in the number of graphs. WEGL calculates Monge maps from a reference distribution to each node embedding and, based on these maps, creates a fixed-sized vector representation of the graph. We evaluate our new graph embedding approach on various benchmark graph-property prediction tasks, showing state-of-the-art classification performance, while having superior computational efficiency.
Wasserstein Embedding for Graph Learning
d256900870
Data augmentation is one of the most prevalent tools in deep learning, underpinning many recent advances, including those from classification, generative models, and representation learning. The standard approach to data augmentation combines simple transformations like rotations and flips to generate new images from existing ones. However, these new images lack diversity along key semantic axes present in the data. Current augmentations cannot alter the high-level semantic attributes, such as animal species present in a scene, to enhance the diversity of data. We address the lack of diversity in data augmentation with image-to-image transformations parameterized by pre-trained text-to-image diffusion models. Our method edits images to change their semantics using an off-the-shelf diffusion model, and generalizes to novel visual concepts from a few labelled examples. We evaluate our approach on few-shot image classification tasks, and on a real-world weed recognition task, and observe an improvement in accuracy in tested domains.
Effective Data Augmentation With Diffusion Models
d239009938
In the Mixup training paradigm, a model is trained using convex combinations of data points and their associated labels. Despite seeing very few true data points during training, models trained using Mixup seem to still minimize the original empirical risk and exhibit better generalization and robustness on various tasks when compared to standard training. In this paper, we investigate how these benefits of Mixup training rely on properties of the data in the context of classification. For minimizing the original empirical risk, we compute a closed form for the Mixup-optimal classification, which allows us to construct a simple dataset on which minimizing the Mixup loss can provably lead to learning a classifier that does not minimize the empirical loss on the data. On the other hand, we also give sufficient conditions for Mixup training to also minimize the original empirical risk. For generalization, we characterize the margin of a Mixup classifier, and use this to understand why the decision boundary of a Mixup classifier can adapt better to the full structure of the training data when compared to standard training. In contrast, we also show that, for a large class of linear models and linearly separable datasets, Mixup training leads to learning the same classifier as standard training. Having defined ℓ_mix as above, we may write the component of the full Mixup cross-entropy loss corresponding to mixing points from classes i and j as: J_mix^{i,j}(g, P_X, P_f) = ∫_{X_i × X_j × [0,1]} ℓ_mix(g, s, t, λ) d(P_X × P_X × P_f)(s, t, λ).
TOWARDS UNDERSTANDING THE DATA DEPENDENCY OF MIXUP-STYLE TRAINING
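A short PyTorch sketch of the standard Mixup training step analyzed above: inputs are mixed with a Beta-distributed coefficient λ, and the loss is the same convex combination of the cross-entropies with respect to the two original labels.

```python
import torch
import torch.nn.functional as F

def mixup_batch(x, y, alpha=1.0):
    """Return mixed inputs, both original label sets, and the mixing coefficient."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(x.size(0))
    x_mix = lam * x + (1 - lam) * x[perm]
    return x_mix, y, y[perm], lam

def mixup_loss(logits, y_a, y_b, lam):
    """Convex combination of cross-entropies w.r.t. the two original labels."""
    return lam * F.cross_entropy(logits, y_a) + (1 - lam) * F.cross_entropy(logits, y_b)

# usage inside a training loop (model, x, y assumed to exist):
# x_mix, y_a, y_b, lam = mixup_batch(x, y)
# loss = mixup_loss(model(x_mix), y_a, y_b, lam)
```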
d238354201
Light-weight convolutional neural networks (CNNs) are the de-facto choice for mobile vision tasks. Their spatial inductive biases allow them to learn representations with fewer parameters across different vision tasks. However, these networks are spatially local. To learn global representations, self-attention-based vision transformers (ViTs) have been adopted. Unlike CNNs, ViTs are heavyweight. In this paper, we ask the following question: is it possible to combine the strengths of CNNs and ViTs to build a light-weight and low latency network for mobile vision tasks? Towards this end, we introduce MobileViT, a light-weight and general-purpose vision transformer for mobile devices. MobileViT presents a different perspective for the global processing of information with transformers. Our results show that MobileViT significantly outperforms CNN- and ViT-based networks across different tasks and datasets. On the ImageNet-1k dataset, MobileViT achieves top-1 accuracy of 78.4% with about 6 million parameters, which is 3.2% and 6.2% more accurate than MobileNetv3 (CNN-based) and DeiT (ViT-based) for a similar number of parameters. On the MS-COCO object detection task, MobileViT is 5.7% more accurate than MobileNetv3 for a similar number of parameters. Our source code is open-source and available at: https://github.com/apple/ml-cvnets.
MOBILEVIT: LIGHT-WEIGHT, GENERAL-PURPOSE, AND MOBILE-FRIENDLY VISION TRANSFORMER
d263608332
Training AI models that generalize across tasks and domains has long been among the open problems driving AI research. The emergence of Foundation Models made it easier to obtain expert models for a given task, but the heterogeneity of data that may be encountered at test time often means that any single expert is insufficient. We consider the Fusion of Experts (FoE) problem of fusing outputs of expert models with complementary knowledge of the data distribution and formulate it as an instance of supervised learning. Our method is applicable to both discriminative and generative tasks and leads to significant performance improvements in image and text classification, text summarization, multiple-choice QA, and automatic evaluation of generated text. We also extend our method to the "frugal" setting where it is desired to reduce the number of expert model evaluations at test time.
FUSING MODELS WITH COMPLEMENTARY EXPERTISE
d49868626
Gradient-based meta-learning techniques are both widely applicable and proficient at solving challenging few-shot learning and fast adaptation problems. However, they have practical difficulties when operating on high-dimensional parameter spaces in extreme low-data regimes. We show that it is possible to bypass these limitations by learning a data-dependent latent generative representation of model parameters, and performing gradient-based meta-learning in this low-dimensional latent space. The resulting approach, latent embedding optimization (LEO), decouples the gradient-based adaptation procedure from the underlying high-dimensional space of model parameters. Our evaluation shows that LEO can achieve state-of-the-art performance on the competitive miniImageNet and tieredImageNet few-shot classification tasks. Further analysis indicates LEO is able to capture uncertainty in the data, and can perform adaptation more effectively by optimizing in latent space.
META-LEARNING WITH LATENT EMBEDDING OPTIMIZATION
d44095973
Solving tasks in Reinforcement Learning is no easy feat. As the goal of the agent is to maximize the accumulated reward, it often learns to exploit loopholes and misspecifications in the reward signal, resulting in unwanted behavior. While constraints may solve this issue, there is no closed form solution for general constraints. In this work, we present a novel multi-timescale approach for constrained policy optimization, called 'Reward Constrained Policy Optimization' (RCPO), which uses an alternative penalty signal to guide the policy towards a constraint-satisfying one. We prove the convergence of our approach and provide empirical evidence of its ability to train constraint-satisfying policies.
Reward Constrained Policy Optimization
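A minimal sketch of the penalty mechanism behind the approach described above: a Lagrange-style multiplier λ is raised when the observed constraint cost exceeds its limit, and the policy is trained on the penalized reward r − λ·c. The plain dual-ascent update shown here is a simplification of the paper's multi-timescale scheme.

```python
def penalized_reward(reward, cost, lam):
    """Shape the reward with the current penalty coefficient."""
    return reward - lam * cost

def update_multiplier(lam, avg_episode_cost, cost_limit, lr_lambda=0.01):
    """Dual ascent on the constraint: raise lambda when the constraint is violated,
    lower it (but never below zero) when the policy is safely within the limit."""
    lam = lam + lr_lambda * (avg_episode_cost - cost_limit)
    return max(lam, 0.0)
```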
d53802740
This paper introduces a new framework for data efficient and versatile learning. Specifically: 1) We develop ML-PIP, a general framework for Meta-Learning approximate Probabilistic Inference for Prediction. ML-PIP extends existing probabilistic interpretations of meta-learning to cover a broad class of methods. 2) We introduce VERSA, an instance of the framework employing a flexible and versatile amortization network that takes few-shot learning datasets as inputs, with arbitrary numbers of shots, and outputs a distribution over task-specific parameters in a single forward pass. VERSA substitutes optimization at test time with forward passes through inference networks, amortizing the cost of inference and relieving the need for second derivatives during training. 3) We evaluate VERSA on benchmark datasets where the method sets new state-of-the-art results, handles arbitrary numbers of shots, and for classification, arbitrary numbers of classes at train and test time. The power of the approach is then demonstrated through a challenging few-shot ShapeNet view reconstruction task.
META-LEARNING PROBABILISTIC INFERENCE FOR PREDICTION
d246431014
Methods that combine local and global features have recently shown excellent performance on multiple challenging deep image retrieval benchmarks, but their use of local features raises at least two issues. First, these local features simply boil down to the localized map activations of a neural network, and hence can be extremely redundant. Second, they are typically trained with a global loss that only acts on top of an aggregation of local features; by contrast, testing is based on local feature matching, which creates a discrepancy between training and testing. In this paper, we propose a novel architecture for deep image retrieval, based solely on mid-level features that we call Super-features. These Super-features are constructed by an iterative attention module and constitute an ordered set in which each element focuses on a localized and discriminant image pattern. For training, they require only image labels. A contrastive loss operates directly at the level of Super-features and focuses on those that match across images. A second complementary loss encourages diversity. Experiments on common landmark retrieval benchmarks validate that Super-features substantially outperform state-of-the-art methods when using the same number of features, and only require a significantly smaller memory footprint to match their performance. Code and models are available at: https://github.com/naver/FIRe.
LEARNING SUPER-FEATURES FOR IMAGE RETRIEVAL
d236635379
A central goal of machine learning is the development of systems that can solve many problems in as many data domains as possible. Current architectures, however, cannot be applied beyond a small set of stereotyped settings, as they bake in domain & task assumptions or scale poorly to large inputs or outputs. In this work, we propose Perceiver IO, a general-purpose architecture that handles data from arbitrary settings while scaling linearly with the size of inputs and outputs. Our model augments the Perceiver with a flexible querying mechanism that enables outputs of various sizes and semantics, doing away with the need for task-specific architecture engineering. The same architecture achieves strong results on tasks spanning natural language and visual understanding, multi-task and multi-modal reasoning, and StarCraft II. As highlights, Perceiver IO outperforms a Transformer-based BERT baseline on the GLUE language benchmark despite removing input tokenization and achieves state-of-the-art performance on Sintel optical flow estimation with no explicit mechanisms for multiscale correspondence. Is the development of problem-specific models for each new set of inputs and outputs unavoidable? Life would be drastically simpler if a single neural network architecture could handle a wide variety of both input modalities and output tasks. In this work, we propose such an architecture, with the ultimate goal of building a network that can easily integrate and transform arbitrary information for arbitrary tasks. Our starting point is the Perceiver (Jaegle et al., 2021), an architecture which has demonstrated a remarkable ability to handle data from many modalities with no changes to the network architecture. The Perceiver uses attention to map inputs of a wide range of modalities to a fixed-size latent space that is further processed by a deep, fully attentional network. This process decouples the bulk of the network's processing from the size and modality-specific details of the input, allowing it to scale to large and multimodal data. But the Perceiver can only handle simple output spaces like classification. Much of the complexity of real-world tasks comes from the variety, size, and structure of their outputs.
PERCEIVER IO: A GENERAL ARCHITECTURE FOR STRUCTURED INPUTS & OUTPUTS
d238215172
It is widely believed that the implicit regularization of SGD is fundamental to the impressive generalization behavior we observe in neural networks. In this work, we demonstrate that non-stochastic full-batch training can achieve comparably strong performance to SGD on CIFAR-10 using modern architectures. To this end, we show that the implicit regularization of SGD can be completely replaced with explicit regularization even when comparing against a strong and well-researched baseline. Our observations indicate that the perceived difficulty of full-batch training may be the result of its optimization properties and the disproportionate time and effort spent by the ML community tuning optimizers and hyperparameters for small-batch training. Stochastic gradient descent (SGD) is the backbone of optimization for neural networks, going back at least as far as LeCun et al. (1998a), and SGD is the de-facto tool for optimizing the parameters of modern neural networks (Krizhevsky et al., 2012; He et al., 2015a; Brown et al., 2020). A central reason for the success of stochastic gradient descent is its efficiency in the face of large datasets: a noisy estimate of the loss function gradient is generally sufficient to improve the parameters of a neural network and can be computed much faster than a full gradient over the entire training set. At the same time, folk wisdom dictates that small-batch SGD is not only faster but also has a unique bias towards good loss function minima that cannot be replicated with full batch gradient descent. Some even believe that stochastic sampling is the fundamental force behind the success of neural networks. These popular beliefs are linked to various properties of SGD, such as its gradient noise, fast escape from saddle points, and its uncanny ability to avoid sub-optimal local minima (Hendrik, 2017; LeCun, 2018). It is common to under-saturate compute capabilities and retain small batch sizes, even if enough compute is available to reap these apparent benefits. These properties are also attributed in varying degrees to all mini-batched first-order optimizers, such as Adam (Kingma & Ba, 2015) and others (Schmidt et al., 2020). But why does stochastic mini-batching really aid generalization? In this work, we set out to isolate mechanisms which underlie the benefits of SGD and use these mechanisms to replicate the empirical benefits of SGD without stochasticity. In this way, we provide a counterexample to the hypothesis that stochastic mini-batching, which leads to noisy estimates of the gradient of the loss function, is fundamental for the strong generalization success of over-parameterized neural networks. We show that a standard ResNet-18 can be trained with batch size 50K (the entire training dataset) and still achieve 95.68% (±0.09) validation accuracy on CIFAR-10, which is comparable to the same network trained with a strong SGD baseline, provided data augmentation is used for both methods (see Fig. 1). We then extend these findings to train without (random) data augmentations.
STOCHASTIC TRAINING IS NOT NECESSARY FOR GENERALIZATION
d3526391
We consider the problem of detecting out-of-distribution images in neural networks. We propose ODIN, a simple and effective method that does not require any change to a pre-trained neural network. Our method is based on the observation that using temperature scaling and adding small perturbations to the input can separate the softmax score distributions between in- and out-of-distribution images, allowing for more effective detection. We show in a series of experiments that ODIN is compatible with diverse network architectures and datasets. It consistently outperforms the baseline approach (Hendrycks & Gimpel, 2017) by a large margin, establishing a new state-of-the-art performance on this task. For example, ODIN reduces the false positive rate from the baseline 34.7% to 4.3% on the DenseNet (applied to CIFAR-10 and Tiny-ImageNet) when the true positive rate is 95%. A seemingly straightforward approach of detecting out-of-distribution images is to enlarge the training set of both in- and out-of-distribution examples. However, the number of out-of-distribution examples can be infinitely many, making the re-training approach computationally expensive and intractable. Moreover, to ensure that a neural network accurately classifies in-distribution samples into correct classes while correctly detecting out-of-distribution samples, one might need to employ exceedingly large neural network architectures, which further complicates the training process. Hendrycks & Gimpel (2017) proposed a baseline method to detect out-of-distribution examples without further re-training networks. The method is based on an observation that a well-trained neural network tends to assign higher softmax scores to in-distribution examples than out-of-distribution examples. In this paper, we go further. We observe that after using temperature scaling in the softmax function (Hinton et al., 2015; Pereyra et al., 2017) and adding small controlled perturbations to inputs, the softmax score distributions of in- and out-of-distribution images become even more separable.
ENHANCING THE RELIABILITY OF OUT-OF-DISTRIBUTION IMAGE DETECTION IN NEURAL NETWORKS
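A PyTorch sketch of the two ODIN ingredients described above, temperature-scaled softmax scoring and a small input perturbation that increases the predicted class' score; ε and T are hyperparameters, and thresholding the returned score yields the detector.

```python
import torch
import torch.nn.functional as F

def odin_score(model, x, temperature=1000.0, epsilon=0.0014):
    """Max temperature-scaled softmax score on a slightly perturbed input;
    in-distribution images tend to receive higher scores than out-of-distribution ones."""
    x = x.detach().clone().requires_grad_(True)
    logits = model(x) / temperature
    pred = logits.argmax(dim=-1)
    # the input gradient of the cross-entropy points away from higher confidence,
    # so stepping against it increases the predicted class' softmax score
    F.cross_entropy(logits, pred).backward()
    x_pert = (x - epsilon * x.grad.sign()).detach()
    with torch.no_grad():
        probs = F.softmax(model(x_pert) / temperature, dim=-1)
    return probs.max(dim=-1).values
```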
d52935027
Simulation is a useful tool in situations where training data for machine learning models is costly to annotate or even hard to acquire. In this work, we propose a reinforcement learning-based method for automatically adjusting the parameters of any (non-differentiable) simulator, thereby controlling the distribution of synthesized data in order to maximize the accuracy of a model trained on that data. In contrast to prior art that hand-crafts these simulation parameters or adjusts only parts of the available parameters, our approach fully controls the simulator with the actual underlying goal of maximizing accuracy, rather than mimicking the real data distribution or randomly generating a large volume of data. We find that our approach (i) quickly converges to the optimal simulation parameters in controlled experiments and (ii) can indeed discover good sets of parameters for an image rendering simulator in actual computer vision applications.
LEARNING TO SIMULATE
d264172668
Large Language Models (LLMs) inherently encode a wealth of knowledge within their parameters through pre-training on extensive corpora. While prior research has delved into operations on these parameters to manipulate the underlying implicit knowledge (encompassing detection, editing, and merging), there remains an ambiguous understanding regarding their transferability across models with varying scales. In this paper, we seek to empirically investigate knowledge transfer from larger to smaller models through a parametric perspective. To achieve this, we employ sensitivity-based techniques to extract and align knowledge-specific parameters between different LLMs. Moreover, the LoRA module is used as the intermediary mechanism for injecting the extracted knowledge into smaller models. Evaluations across four benchmarks validate the efficacy of our proposed method. Our findings highlight the critical factors contributing to the process of parametric knowledge transfer, underscoring the transferability of model parameters across LLMs of different scales. We release code and data at
SEEKING NEURAL NUGGETS: KNOWLEDGE TRANSFER IN LARGE LANGUAGE MODELS FROM A PARAMETRIC PERSPECTIVE
d258418258
Sharpness-aware minimization (SAM), which searches for flat minima by min-max optimization, has been shown to be useful in improving model generalization. However, since each SAM update requires computing two gradients, its computational cost and training time are both doubled compared to standard empirical risk minimization (ERM). Recent state-of-the-art methods reduce the fraction of SAM updates and thus accelerate SAM by switching between SAM and ERM updates randomly or periodically. In this paper, we design an adaptive policy to employ SAM based on the loss landscape geometry. Two efficient algorithms, AE-SAM and AE-LookSAM, are proposed. We theoretically show that AE-SAM has the same convergence rate as SAM. Experimental results on various datasets and architectures demonstrate the efficiency and effectiveness of the adaptive policy. To characterize the loss landscape geometry, we track the squared stochastic gradient norm and model it by a normal distribution, whose parameters are estimated by exponential moving average. Experimental results on standard benchmark datasets demonstrate the superiority of the proposed policy. Our contributions are summarized as follows: (i) We propose an adaptive policy to use SAM or ERM updates based on the loss landscape geometry. (ii) We propose an efficient algorithm, called AE-SAM (Adaptive policy to Employ SAM), to reduce the fraction of SAM updates. We also theoretically study its convergence rate. (iii) The proposed policy is general and can be combined with any SAM variant. In this paper, we integrate it with LookSAM (Liu et al., 2022) and propose AE-LookSAM. (iv) Experimental results on various network architectures and datasets (with and without label noise) verify the superiority of AE-SAM and AE-LookSAM over existing baselines.
AN ADAPTIVE POLICY TO EMPLOY SHARPNESS-AWARE MINIMIZATION
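A PyTorch-style sketch of the adaptive switching idea from the entry above: track the squared stochastic gradient norm with exponential moving averages of its mean and variance, and pay for the extra SAM ascent step only when the current value looks atypically large (a proxy for a sharp region). The thresholding rule and constants are illustrative, not the paper's exact criterion.

```python
import torch

def adaptive_sam_step(model, loss_fn, batch, optimizer, ema, rho=0.05, beta=0.9, k=1.0):
    """One training step that pays for a SAM update only when the squared gradient
    norm looks atypically large relative to its running (EMA) statistics."""
    x, y = batch
    optimizer.zero_grad()
    loss_fn(model(x), y).backward()
    params = [p for p in model.parameters() if p.grad is not None]

    sq_norm = sum((p.grad.detach() ** 2).sum().item() for p in params)
    ema["mean"] = beta * ema["mean"] + (1 - beta) * sq_norm
    ema["var"] = beta * ema["var"] + (1 - beta) * (sq_norm - ema["mean"]) ** 2
    sharp = sq_norm > ema["mean"] + k * ema["var"] ** 0.5

    if sharp:
        # SAM: perturb weights towards the (approximate) worst case, recompute gradients there
        with torch.no_grad():
            scale = rho / (sq_norm ** 0.5 + 1e-12)
            eps = [p.grad.detach() * scale for p in params]
            for p, e in zip(params, eps):
                p.add_(e)
        optimizer.zero_grad()
        loss_fn(model(x), y).backward()      # gradient at the perturbed point
        with torch.no_grad():
            for p, e in zip(params, eps):
                p.sub_(e)                    # restore the original weights
    optimizer.step()                         # ERM step, or SAM step if `sharp`
    return sharp
```

Here `ema` is a plain dict such as `{"mean": 0.0, "var": 0.0}` carried across steps, and `rho` is the SAM neighborhood radius.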
d86840468
We study the problem of learning to map, in an unsupervised way, between domains A and B, such that the samples b ∈ B contain all the information that exists in samples a ∈ A and some additional information. For example, ignoring occlusions, B can be people with glasses, A people without, and the glasses, would be the added information. When mapping a sample a from the first domain to the other domain, the missing information is replicated from an independent reference sample b ∈ B. Thus, in the above example, we can create, for every person without glasses a version with the glasses observed in any face image. Our solution employs a single two-pathway encoder and a single decoder for both domains. The common part of the two domains and the separate part are encoded as two vectors, and the separate part is fixed at zero for domain A. The loss terms are minimal and involve reconstruction losses for the two domains and a domain confusion term. Our analysis shows that under mild assumptions, this architecture, which is much simpler than the literature guided-translation methods, is enough to ensure disentanglement between the two domains. We present convincing results in a few visual domains, such as no-glasses to glasses, adding facial hair based on a reference image, etc.
EMERGING DISENTANGLEMENT IN AUTO-ENCODER BASED UNSUPERVISED IMAGE CONTENT TRANSFER
d247476014
Monocular 3D object detection is one of the most challenging tasks in 3D scene understanding. Due to the ill-posed nature of monocular imagery, existing monocular 3D detection methods highly rely on training with the manually annotated 3D box labels on the LiDAR point clouds. This annotation process is very laborious and expensive. To dispense with the reliance on 3D box labels, in this paper we explore the weakly supervised monocular 3D detection. Specifically, we first detect 2D boxes on the image. Then, we adopt the generated 2D boxes to select corresponding RoI LiDAR points as the weak supervision. Eventually, we adopt a network to predict 3D boxes which can tightly align with associated RoI LiDAR points. This network is learned by minimizing our newly-proposed 3D alignment loss between the 3D box estimates and the corresponding RoI LiDAR points. We will illustrate the potential challenges of the above learning problem and resolve these challenges by introducing several effective designs into our method. Codes will be available at https://github.com/SPengLiang/WeakM3D.
WEAKM3D: TOWARDS WEAKLY SUPERVISED MONOCULAR 3D OBJECT DETECTION
d261076339
It is now possible to reconstruct dynamic human motion and shape from a sparse set of cameras using Neural Radiance Fields (NeRF) driven by an underlying skeleton. However, a challenge remains to model the deformation of cloth and skin in relation to skeleton pose. Unlike existing avatar models that are learned implicitly or rely on a proxy surface, our approach is motivated by the observation that different poses necessitate unique frequency assignments. Neglecting this distinction yields noisy artifacts in smooth areas or blurs fine-grained texture and shape details in sharp regions. We develop a two-branch neural network that is adaptive and explicit in the frequency domain. The first branch is a graph neural network that models correlations among body parts locally, taking skeleton pose as input. The second branch combines these correlation features to a set of global frequencies and then modulates the feature encoding. Our experiments demonstrate that our network outperforms state-of-the-art methods in terms of preserving details and generalization capabilities.
POSE MODULATED AVATARS FROM VIDEO
d249954052
One of the main challenges for feature representation in deep learning-based classification is the design of appropriate loss functions that exhibit strong discriminative power. The classical softmax loss does not explicitly encourage discriminative learning of features. A popular direction of research is to incorporate margins in well-established losses in order to enforce extra intra-class compactness and inter-class separability, which, however, were developed through heuristic means, as opposed to rigorous mathematical principles. In this work, we attempt to address this limitation by formulating the principled optimization objective as learning towards the largest margins. Specifically, we firstly define the class margin as the measure of inter-class separability, and the sample margin as the measure of intra-class compactness. Accordingly, to encourage discriminative representation of features, the loss function should promote the largest possible margins for both classes and samples. Furthermore, we derive a generalized margin softmax loss to draw general conclusions for the existing margin-based losses. Not only does this principled framework offer new perspectives to understand and interpret existing margin-based losses, but it also provides new insights that can guide the design of new tools, including sample margin regularization and largest margin softmax loss for the class-balanced case, and zero-centroid regularization for the class-imbalanced case. Experimental results demonstrate the effectiveness of our strategy on a variety of tasks, including visual classification, imbalanced classification, person re-identification, and face verification.
LEARNING TOWARDS THE LARGEST MARGINS
d250089240
Molecular representation pretraining is critical in various applications for drug and material discovery due to the limited number of labeled molecules, and most existing work focuses on pretraining on 2D molecular graphs. However, the power of pretraining on 3D geometric structures has been less explored. This is owing to the difficulty of finding a sufficient proxy task that can empower the pretraining to effectively extract essential features from the geometric structures. Motivated by the dynamic nature of 3D molecules, where the continuous motion of a molecule in the 3D Euclidean space forms a smooth potential energy surface, we propose GeoSSL, a 3D coordinate denoising pretraining framework to model such an energy landscape. Further by leveraging an SE(3)-invariant score matching method, we propose GeoSSL-DDM in which the coordinate denoising proxy task is effectively boiled down to denoising the pairwise atomic distances in a molecule. Our comprehensive experiments confirm the effectiveness and robustness of our proposed method.
MOLECULAR GEOMETRY PRETRAINING WITH SE(3)-INVARIANT DENOISING DISTANCE MATCHING
d263620365
Semi-supervised learning (SSL) has witnessed great progress with various improvements in the self-training framework with pseudo labeling. The main challenge is how to distinguish high-quality pseudo labels against the confirmation bias. However, existing pseudo-label selection strategies are limited to pre-defined schemes or complex hand-crafted policies specially designed for classification, failing to achieve high-quality labels, fast convergence, and task versatility simultaneously. To these ends, we propose a Semi-supervised Reward framework (SemiReward) that predicts reward scores to evaluate and select high-quality pseudo labels, which is pluggable into mainstream SSL methods in wide task types and scenarios. To mitigate confirmation bias, SemiReward is trained online in two stages with a generator model and subsampling strategy. With classification and regression tasks on 13 standard SSL benchmarks of three modalities, extensive experiments verify that SemiReward achieves significant performance gains and faster convergence speeds upon Pseudo Label, FlexMatch, and Free/SoftMatch.
SEMIREWARD: A GENERAL REWARD MODEL FOR SEMI-SUPERVISED LEARNING
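A compact PyTorch sketch of the filtering interface suggested above: a small reward model scores (feature, pseudo-label) pairs, and only pairs whose score clears a threshold enter the pseudo-labeled training pool. The reward head, feature dimensions, and threshold are placeholders, and the two-stage online training of the reward model is omitted.

```python
import torch
import torch.nn as nn

class RewardModel(nn.Module):
    """Tiny placeholder reward head scoring (feature, pseudo-label) pairs."""
    def __init__(self, feat_dim, num_classes, hidden=128):
        super().__init__()
        self.label_emb = nn.Embedding(num_classes, hidden)
        self.net = nn.Sequential(nn.Linear(feat_dim + hidden, hidden),
                                 nn.ReLU(),
                                 nn.Linear(hidden, 1))

    def forward(self, feats, pseudo_labels):
        z = torch.cat([feats, self.label_emb(pseudo_labels)], dim=-1)
        return torch.sigmoid(self.net(z)).squeeze(-1)   # reward score in [0, 1]

def filter_pseudo_labels(reward_model, feats, pseudo_labels, threshold=0.7):
    """Keep only pseudo-labeled examples whose predicted reward clears the threshold."""
    with torch.no_grad():
        scores = reward_model(feats, pseudo_labels)
    keep = scores >= threshold
    return feats[keep], pseudo_labels[keep], scores
```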
d221139573
Generative modeling has recently shown great promise in computer vision, but its success is often limited to separate tasks. In this paper, motivated by multi-task learning of shareable feature representations, we consider a novel problem of learning a shared generative model across various tasks. We instantiate it on the illustrative dual-task of joint few-shot recognition and novel-view synthesis: given only one or few images of a novel object from arbitrary views with only category annotation, we aim to simultaneously learn an object classifier and generate images of the object from new viewpoints. To this end, we propose bowtie networks that jointly learn 3D geometric and semantic representations with feedback in the loop. Experimental evaluation on challenging fine-grained recognition datasets demonstrates that our synthesized images are realistic from multiple viewpoints and significantly improve recognition performance as ways of data augmentation, especially in the low-data regime. We further show that our approach is flexible and can be easily extended to incorporate other tasks, such as style-guided synthesis.
Bowtie Networks: Generative Modeling for Joint Few-Shot Recognition and Novel-View Synthesis
d238531318
Transformers are transforming the landscape of computer vision, especially for recognition tasks. Detection transformers are the first fully end-to-end learning systems for object detection, while vision transformers are the first fully transformer-based architecture for image classification. In this paper, we integrate Vision and Detection Transformers (ViDT) to build an effective and efficient object detector. ViDT introduces a reconfigured attention module to extend the recent Swin Transformer to be a standalone object detector, followed by a computationally efficient transformer decoder that exploits multi-scale features and auxiliary techniques essential to boost the detection performance without much increase in computational load. Extensive evaluation results on the Microsoft COCO benchmark dataset demonstrate that ViDT obtains the best AP and latency trade-off among existing fully transformer-based object detectors, and achieves 49.2 AP owing to its high scalability for large models. We will release the code and trained models at https://github.com/naver-ai/vidt.
VIDT: AN EFFICIENT AND EFFECTIVE FULLY TRANSFORMER-BASED OBJECT DETECTOR
d238857129
Graph neural networks (GNNs) and label propagation represent two interrelated modeling strategies designed to exploit graph structure in tasks such as node property prediction. The former is typically based on stacked message-passing layers that share neighborhood information to transform node features into predictive embeddings. In contrast, the latter involves spreading label information to unlabeled nodes via a parameter-free diffusion process, but operates independently of the node features. Given, then, that the material difference is merely whether features or labels are smoothed across the graph, it is natural to consider combinations of the two for improving performance. In this regard, it has recently been proposed to use a randomly-selected portion of the training labels as GNN inputs, concatenated with the original node features for making predictions on the remaining labels. This so-called label trick accommodates the parallel use of features and labels, and is foundational to many of the top-ranking submissions on the Open Graph Benchmark (OGB) leaderboard. And yet despite its widespread adoption, thus far there has been little attempt to carefully unpack exactly what statistical properties the label trick introduces into the training pipeline, intended or otherwise. To this end, we prove that under certain simplifying assumptions, the stochastic label trick can be reduced to an interpretable, deterministic training objective composed of two factors. The first is a data-fitting term that naturally resolves potential label leakage issues, while the second serves as a regularization factor conditioned on graph structure that adapts to graph size and connectivity. Later, we leverage this perspective to motivate a broader range of label trick use cases, and provide experiments to verify the efficacy of these extensions.
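A minimal sketch of the label trick described above: a random subset of the training labels is revealed as one-hot input features concatenated onto the node features, and the remaining training nodes serve as prediction targets. The function and parameter names are illustrative and independent of any particular GNN implementation.

```python
import numpy as np

def label_trick_inputs(node_feats, labels, train_idx, num_classes, keep_frac=0.5, rng=None):
    """Concatenate a random subset of training labels (one-hot) onto node features.

    The remaining training nodes get an all-zero label channel and are used as targets.
    """
    rng = rng or np.random.default_rng()
    perm = rng.permutation(train_idx)
    n_keep = int(len(perm) * keep_frac)
    input_idx, target_idx = perm[:n_keep], perm[n_keep:]

    label_channel = np.zeros((node_feats.shape[0], num_classes))
    label_channel[input_idx, labels[input_idx]] = 1.0   # reveal these labels as features

    return np.concatenate([node_feats, label_channel], axis=1), target_idx

# Toy usage: 6 nodes, 3 features, 2 classes, first 4 nodes labeled for training.
x = np.random.randn(6, 3)
y = np.array([0, 1, 0, 1, 1, 0])
aug_x, targets = label_trick_inputs(x, y, train_idx=np.arange(4), num_classes=2)
print(aug_x.shape, targets)
```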
WHY PROPAGATE ALONE? PARALLEL USE OF LABELS AND FEATURES ON GRAPHS
d239050028
Noise-contrastive estimation (NCE) is a statistically consistent method for learning unnormalized probabilistic models. It has been empirically observed that the choice of the noise distribution is crucial for NCE's performance. However, such observations have never been made formal or quantitative. In fact, it is not even clear whether the difficulties arising from a poorly chosen noise distribution are statistical or algorithmic in nature. In this work, we formally pinpoint reasons for NCE's poor performance when an inappropriate noise distribution is used. Namely, we prove these challenges arise due to an ill-behaved (more precisely, flat) loss landscape. To address this, we introduce a variant of NCE called eNCE which uses an exponential loss and for which normalized gradient descent addresses the landscape issues provably when the target and noise distributions are in a given exponential family.
Analyzing and Improving the Optimization Landscape of Noise-Contrastive Estimation
d222291282
Gradient estimation in models with discrete latent variables is a challenging problem, because the simplest unbiased estimators tend to have high variance. To counteract this, modern estimators either introduce bias, rely on multiple function evaluations, or use learned, input-dependent baselines. Thus, there is a need for estimators that require minimal tuning, are computationally cheap, and have low mean squared error. In this paper, we show that the variance of the straight-through variant of the popular Gumbel-Softmax estimator can be reduced through Rao-Blackwellization without increasing the number of function evaluations. This provably reduces the mean squared error. We empirically demonstrate that this leads to variance reduction, faster convergence, and generally improved performance in two unsupervised latent variable models.
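To make the estimator being improved here concrete, below is a minimal NumPy sketch of a single Gumbel-Softmax sample; only the forward pass is shown, with the straight-through behavior (hard one-hot forward, soft gradients backward) and the role of Rao-Blackwellization noted in comments. The temperature and helper names are illustrative.

```python
import numpy as np

def gumbel_softmax_sample(logits, tau=1.0, rng=None):
    """Draw a relaxed one-hot sample from a categorical distribution (forward pass only)."""
    rng = rng or np.random.default_rng()
    gumbel = -np.log(-np.log(rng.uniform(size=logits.shape)))
    z = logits + gumbel
    z = np.exp((z - z.max()) / tau)
    soft = z / z.sum()                          # relaxed (differentiable) sample
    hard = np.eye(len(logits))[soft.argmax()]   # discrete one-hot used in the forward pass
    # Straight-through variant: forward uses `hard`, backward pretends the output was `soft`,
    # so gradients flow through the tempered softmax. Rao-Blackwellization then averages the
    # estimator over the Gumbel noise consistent with the sampled category, reducing variance.
    return hard, soft

h, s = gumbel_softmax_sample(np.array([1.0, 0.5, -0.2]), tau=0.5)
print(h, s.round(3))
```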
RAO-BLACKWELLIZING THE STRAIGHT-THROUGH GUMBEL-SOFTMAX GRADIENT ESTIMATOR
d251320513
We extend conformal prediction to control the expected value of any monotone loss function. The algorithm generalizes split conformal prediction together with its coverage guarantee. Like conformal prediction, the conformal risk control procedure is tight up to an O(1/n) factor. We also introduce extensions of the idea to distribution shift, quantile risk control, multiple and adversarial risk control, and expectations of U-statistics. Worked examples from computer vision and natural language processing demonstrate the usage of our algorithm to bound the false negative rate, graph distance, and token-level F1-score.
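A small sketch of the two ingredients the abstract refers to, under simplifying assumptions: the split conformal quantile rule that the procedure generalizes, and a threshold search for a monotone loss bounded in [0, 1] (the finite-sample correction shown uses that bound). Function names and the toy scores are illustrative.

```python
import numpy as np

def split_conformal_threshold(cal_scores, alpha=0.1):
    """Conformal quantile of calibration nonconformity scores.

    Prediction sets {y : score(x, y) <= qhat} then cover with probability >= 1 - alpha.
    """
    n = len(cal_scores)
    level = np.ceil((n + 1) * (1 - alpha)) / n
    return np.quantile(cal_scores, min(level, 1.0), method="higher")

def conformal_risk_lambda(lambdas, loss_fn, alpha=0.1):
    """Smallest lambda whose corrected calibration risk stays below alpha.

    loss_fn(lam) returns the n calibration losses at threshold lam and is assumed
    monotone non-increasing in lam, with losses bounded in [0, 1].
    """
    for lam in np.sort(lambdas):
        losses = loss_fn(lam)
        n = len(losses)
        if (losses.mean() * n + 1.0) / (n + 1) <= alpha:   # finite-sample correction
            return lam
    return np.sort(lambdas)[-1]

# Toy usage of the quantile rule on absolute residuals.
rng = np.random.default_rng(1)
print(split_conformal_threshold(np.abs(rng.normal(size=200)), alpha=0.1))
```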
Conformal Risk Control
d257280401
We present a general framework for evaluating image counterfactuals. The power and flexibility of deep generative models make them valuable tools for learning mechanisms in structural causal models. However, their flexibility makes counterfactual identifiability impossible in the general case. Motivated by these issues, we revisit Pearl's axiomatic definition of counterfactuals to determine the necessary constraints of any counterfactual inference model: composition, reversibility, and effectiveness. We frame counterfactuals as functions of an input variable, its parents, and counterfactual parents and use the axiomatic constraints to restrict the set of functions that could represent the counterfactual, thus deriving distance metrics between the approximate and ideal functions. We demonstrate how these metrics can be used to compare and choose between different approximate counterfactual inference models and to provide insight into a model's shortcomings and trade-offs.
MEASURING AXIOMATIC SOUNDNESS OF COUNTERFACTUAL IMAGE MODELS
d2263947
We propose a method to optimize the representation and distinguishability of samples from two probability distributions, by maximizing the estimated power of a statistical test based on the maximum mean discrepancy (MMD). This optimized MMD is applied to the setting of unsupervised learning by generative adversarial networks (GAN), in which a model attempts to generate realistic samples, and a discriminator attempts to tell these apart from data samples. In this context, the MMD may be used in two roles: first, as a discriminator, either directly on the samples, or on features of the samples. Second, the MMD can be used to evaluate the performance of a generative model, by testing the model's samples against a reference data set. In the latter role, the optimized MMD is particularly helpful, as it gives an interpretable indication of how the model and data distributions differ, even in cases where individual model samples are not easily distinguished either by eye or by classifier.
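Since the quantity being optimized here is the MMD itself, a short sketch of the standard unbiased squared-MMD estimator with an RBF kernel may help; the bandwidth and sample sizes are illustrative, and no kernel optimization is shown.

```python
import numpy as np

def rbf_kernel(a, b, bandwidth=1.0):
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * bandwidth ** 2))

def mmd2_unbiased(x, y, bandwidth=1.0):
    """Unbiased estimate of squared MMD between samples x ~ P and y ~ Q."""
    kxx = rbf_kernel(x, x, bandwidth)
    kyy = rbf_kernel(y, y, bandwidth)
    kxy = rbf_kernel(x, y, bandwidth)
    m, n = len(x), len(y)
    term_x = (kxx.sum() - np.trace(kxx)) / (m * (m - 1))   # drop diagonal for unbiasedness
    term_y = (kyy.sum() - np.trace(kyy)) / (n * (n - 1))
    return term_x + term_y - 2 * kxy.mean()

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=(100, 2))   # "data" samples
y = rng.normal(0.5, 1.0, size=(100, 2))   # "model" samples
print(mmd2_unbiased(x, y, bandwidth=1.0))
```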
GENERATIVE MODELS AND MODEL CRITICISM VIA OPTIMIZED MAXIMUM MEAN DISCREPANCY
d235606384
In this paper, we design a novel Bregman gradient policy optimization framework for reinforcement learning based on Bregman divergences and momentum techniques. Specifically, we propose a Bregman gradient policy optimization (BGPO) algorithm based on the basic momentum technique and mirror descent iteration. Meanwhile, we further propose an accelerated Bregman gradient policy optimization (VR-BGPO) algorithm based on the variance reduction technique. Moreover, we provide a convergence analysis framework for our Bregman gradient policy optimization under the nonconvex setting. We prove that our BGPO achieves a sample complexity of O(ε^{-4}) for finding an ε-stationary policy while only requiring one trajectory at each iteration, and our VR-BGPO reaches the best known sample complexity of O(ε^{-3}), which also only requires one trajectory at each iteration. In particular, by using different Bregman divergences, our BGPO framework unifies many existing policy optimization algorithms, such as (variance-reduced) policy gradient and natural policy gradient algorithms. Extensive experimental results on multiple reinforcement learning tasks demonstrate the efficiency of our new algorithms.
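The building block the abstract refers to is the mirror descent iteration with a Bregman divergence; below is a generic sketch of one such step on the probability simplex with the negative-entropy mirror map (which yields exponentiated-gradient updates), not the full BGPO algorithm with momentum or variance reduction. The step size and toy gradient are illustrative.

```python
import numpy as np

def mirror_descent_step_simplex(policy, grad, step_size=0.1):
    """One mirror-descent update on the probability simplex.

    With the negative-entropy mirror map, the Bregman-proximal step reduces to
    multiplicative (exponentiated-gradient) updates followed by renormalization.
    """
    logits = np.log(policy) - step_size * grad
    logits -= logits.max()                 # numerical stability
    new_policy = np.exp(logits)
    return new_policy / new_policy.sum()

# Toy usage: push probability mass away from actions with positive loss gradient.
pi = np.array([0.25, 0.25, 0.25, 0.25])
g = np.array([1.0, 0.0, -1.0, 0.0])
print(mirror_descent_step_simplex(pi, g, step_size=0.5).round(3))
```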
BREGMAN GRADIENT POLICY OPTIMIZATION
d52077536
Prediction is arguably one of the most basic functions of an intelligent system. In general, the problem of predicting events in the future or between two waypoints is exceedingly difficult. However, most phenomena naturally pass through relatively predictable bottlenecks-while we cannot predict the precise trajectory of a robot arm between being at rest and holding an object up, we can be certain that it must have picked the object up. To exploit this, we decouple visual prediction from a rigid notion of time. While conventional approaches predict frames at regularly spaced temporal intervals, our time-agnostic predictors (TAP) are not tied to specific times so that they may instead discover predictable "bottleneck" frames no matter when they occur. We evaluate our approach for future and intermediate frame prediction across three robotic manipulation tasks. Our predictions are not only of higher visual quality, but also correspond to coherent semantic subgoals in temporally extended tasks. Recall the bottle-tilting uncertainty profile. Fig. 1 depicts uncertainty profiles for several other prediction settings, including both forward/future prediction (given a start frame) and intermediate prediction (given start and end frames). Our time-agnostic reframing of the prediction problem targets the minima of these profiles, where prediction is intuitively easiest. We refer to these minima states as "bottlenecks." At this point, one might ask: are these "easy" bottlenecks actually useful to predict? Intuitively, bottlenecks naturally correspond to reliable subgoals-an agent hoping to solve the maze in Fig. 1(e) would do well to target its bottlenecks as subgoals. In our experiments, we evaluate the usefulness of our predictions as subgoals in simulated robotic manipulation tasks. Figure 1: (a) Over time as the bottle is tilted, the uncertainty first rises and then falls as the bottle is held steady after tilting. (b)-(e) Similar uncertainty profiles corresponding to various scenarios-a ball rolling down the side of a bowl, a car driving on a highway with an exit 100m away, an iron pellet tossed in the direction of a magnet, and intermediate frame prediction in a maze traversal given start and end states. The red asterisks along the x-axis correspond to the asterisks in the maze-these "bottleneck" states must occur in any successful traversal. Our main contributions are: (i) we reframe the video prediction problem to be time-agnostic, (ii) we propose a novel technical approach to solve this problem, (iii) we show that our approach effectively identifies "bottleneck states" across several tasks, and (iv) we show that these bottlenecks correspond to subgoals that aid in planning towards complex end goals.
TIME-AGNOSTIC PREDICTION: PREDICTING PREDICTABLE VIDEO FRAMES
d252780718
Tabular data synthesis is a long-standing research topic in machine learning. Many different methods have been proposed over the past decades, ranging from statistical methods to deep generative methods. However, it has not always been successful due to the complicated nature of real-world tabular data. In this paper, we present a new model named Score-based Tabular data Synthesis (STaSy) and its training strategy based on the paradigm of score-based generative modeling. Despite the fact that score-based generative models have resolved many issues in generative models, there still exists room for improvement in tabular data synthesis. Our proposed training strategy includes a self-paced learning technique and a fine-tuning strategy, which further increases the sampling quality and diversity by stabilizing the denoising score matching training. Furthermore, we also conduct rigorous experimental studies in terms of the generative task trilemma: sampling quality, diversity, and time. In our experiments with 15 benchmark tabular datasets and 7 baselines, our method outperforms existing methods in terms of task-dependent evaluations and diversity. Code is available at https://github.com/JayoungKim408/STaSy.
STASY: SCORE-BASED TABULAR DATA SYNTHESIS
d252693505
Output reachability and adversarial robustness are among the most relevant safety properties of neural networks. We show that in the context of Message Passing Neural Networks (MPNN), a common Graph Neural Network (GNN) model, formal verification is impossible. In particular, we show that output reachability of graph-classifier MPNN, working over graphs of unbounded size, non-trivial degree and sufficiently expressive node labels, cannot be verified formally: there is no algorithm that answers correctly (with yes or no), given an MPNN, whether there exists some valid input to the MPNN such that the corresponding output satisfies a given specification. However, we also show that output reachability and adversarial robustness of node-classifier MPNN can be verified formally when a limit on the degree of input graphs is given a priori. We discuss the implications of these results, for the purpose of obtaining a complete picture of the possibility, in principle, of formally verifying GNN, depending on the expressiveness of the involved GNN models and input-output specifications.
FUNDAMENTAL LIMITS IN FORMAL VERIFICATION OF MESSAGE-PASSING NEURAL NETWORKS
d250144478
Semi-Supervised Learning (SSL) is fundamentally a missing label problem, in which the label Missing Not At Random (MNAR) problem is more realistic and challenging, compared to the widely-adopted yet naïve Missing Completely At Random assumption where both labeled and unlabeled data share the same class distribution. Different from existing SSL solutions that overlook the role of "class" in causing the non-randomness, e.g., users are more likely to label popular classes, we explicitly incorporate "class" into SSL. Our method is three-fold: 1) We propose a Class-Aware Propensity (CAP) score that exploits the unlabeled data to train an improved classifier using the biased labeled data. 2) To encourage training on rare classes, whose models tend to be low-recall but high-precision and thus discard too many pseudo-labeled data, we propose Class-Aware Imputation (CAI) that dynamically decreases (or increases) the pseudo-label assignment threshold for rare (or frequent) classes. 3) Overall, we integrate CAP and CAI into a Class-Aware Doubly Robust (CADR) estimator for training an unbiased SSL model. Under various MNAR settings and ablations, our method not only significantly outperforms existing baselines, but also surpasses other label bias removal SSL methods.
ON NON-RANDOM MISSING LABELS IN SEMI-SUPERVISED LEARNING
d231918454
Isotropic Gaussian priors are the de facto standard for modern Bayesian neural network inference. However, it is unclear whether these priors accurately reflect our true beliefs about the weight distributions or give optimal performance. To find better priors, we study summary statistics of neural network weights in networks trained using stochastic gradient descent (SGD). We find that convolutional neural network (CNN) and ResNet weights display strong spatial correlations, while fully connected networks (FCNNs) display heavy-tailed weight distributions. We show that building these observations into priors can lead to improved performance on a variety of image classification datasets. Surprisingly, these priors mitigate the cold posterior effect in FCNNs, but slightly increase the cold posterior effect in ResNets.
BAYESIAN NEURAL NETWORK PRIORS REVISITED
d232307359
Source code (Context) and its parsed abstract syntax tree (AST; Structure) are two complementary representations of the same computer program. Traditionally, designers of machine learning models have relied predominantly either on Structure or Context. We propose a new model, which jointly learns on Context and Structure of source code. In contrast to previous approaches, our model uses only language-agnostic features, i.e., source code and features that can be computed directly from the AST. Besides obtaining state-of-the-art on monolingual code summarization on all five programming languages considered in this work, we propose the first multilingual code summarization model. We show that jointly training on non-parallel data from multiple programming languages improves results on all individual languages, where the strongest gains are on low-resource languages. Remarkably, multilingual training only from Context does not lead to the same improvements, highlighting the benefits of combining Structure and Context for representation learning on code.
LANGUAGE-AGNOSTIC REPRESENTATION LEARNING OF SOURCE CODE FROM STRUCTURE AND CONTEXT
d256274566
This paper proposes a simple method to distill and detect backdoor patterns within an image: Cognitive Distillation (CD). The idea is to extract the "minimal essence" from an input image responsible for the model's prediction. CD optimizes an input mask to extract a small pattern from the input image that can lead to the same model output (i.e., logits or deep features). The extracted pattern can help understand the cognitive mechanism of a model on clean vs. backdoor images and is thus called a Cognitive Pattern (CP). Using CD and the distilled CPs, we uncover an interesting phenomenon of backdoor attacks: despite the various forms and sizes of trigger patterns used by different attacks, the CPs of backdoor samples are all surprisingly and suspiciously small. One thus can leverage the learned mask to detect and remove backdoor examples from poisoned training datasets. We conduct extensive experiments to show that CD can robustly detect a wide range of advanced backdoor attacks. We also show that CD can potentially be applied to help detect potential biases from face datasets.
DISTILLING COGNITIVE BACKDOOR PATTERNS WITHIN AN IMAGE
d251104701
We present Generalizable NeRF Transformer (GNT), a transformer-based architecture that reconstructs Neural Radiance Fields (NeRFs) and learns to render novel views on the fly from source views. While prior works on NeRFs optimize a scene representation by inverting a handcrafted rendering equation, GNT achieves neural representation and rendering that generalizes across scenes using transformers at two stages. (1) The view transformer leverages multi-view geometry as an inductive bias for attention-based scene representation, and predicts coordinate-aligned features by aggregating information from epipolar lines on the neighboring views. (2) The ray transformer renders novel views using attention to decode the features from the view transformer along the sampled points during ray marching. Our experiments demonstrate that when optimized on a single scene, GNT can successfully reconstruct NeRF without an explicit rendering formula due to the learned ray renderer. When trained on multiple scenes, GNT consistently achieves state-of-the-art performance when transferring to unseen scenes and outperforms all other methods by ~10% on average. Our analysis of the learned attention maps to infer depth and occlusion indicates that attention enables learning a physically-grounded rendering. Our results show the promise of transformers as a universal modeling tool for graphics. Please refer to our project page for video results: https://vita-group.github.io/GNT/ In this paper, we first consider the problem of transferable novel view synthesis as a two-stage information aggregation process: the multi-view image feature fusion, followed by the sampling-based rendering integration. Our key contributions come from using transformers (Vaswani et al., 2017) for both these stages. Transformers have had resounding success in language modeling (Devlin et al., 2018) and computer vision (Dosovitskiy et al., 2020), and their "self-attention" mechanism can be thought of as a universal trainable aggregation function. In our case, for volumetric scene representation, we train a view transformer to aggregate pixel-aligned image features (Saito et al., 2019) from corresponding epipolar lines to predict coordinate-wise features. For rendering a novel view, we develop a ray transformer that composes the coordinate-wise point features along a traced ray via the attention mechanism. These two form the Generalizable NeRF Transformer (GNT). GNT simultaneously learns to represent scenes from source view images and to perform scene-adaptive ray-based rendering using the learned attention mechanism. Remarkably, GNT predicts novel views using the captured images without fitting per scene. Our promising results endorse that transformers are strong, scalable, and versatile learning backbones for graphical rendering (Tewari et al., 2020). Our key contributions are: 1. A view transformer to aggregate multi-view image features complying with epipolar geometry and to infer coordinate-aligned features. 2. A ray transformer for a learned ray-based rendering to predict target color. 3. Experiments to demonstrate that GNT's fully transformer-based architecture achieves state-of-the-art results on complex scenes and cross-scene generalization. 4. Analysis of the attention module showing that GNT learns to be depth and occlusion aware. Overall, our combined Generalizable NeRF Transformer (GNT) demonstrates that many of the inductive biases that were thought necessary for view synthesis (e.g., persistent 3D model, hard-coded rendering equation) can be replaced with attention/transformer mechanisms.
IS ATTENTION ALL THAT NERF NEEDS?
d231802365
Conventional neural architectures for sequential data present important limitations. Recurrent neural networks suffer from exploding and vanishing gradients, small effective memory horizons, and must be trained sequentially. Convolutional neural networks cannot handle sequences of unknown size and their memory horizon must be defined a priori. In this work, we show that these problems can be solved by formulating the convolutional kernels of CNNs as continuous functions. The resulting Continuous Kernel Convolution (CKConv) handles arbitrarily long sequences in a parallel manner, within a single operation, and without relying on any form of recurrence. We show that Continuous Kernel Convolutional Networks (CK-CNNs) obtain state-of-the-art results in multiple datasets, e.g., permuted MNIST, and, thanks to their continuous nature, are able to handle non-uniformly sampled datasets and irregularly-sampled data natively. CKCNNs match or perform better than neural ODEs designed for these purposes in a faster and simpler manner. Figure 1: Continuous Kernel Convolution (CKConv). CKConv views a convolutional kernel as a vector-valued continuous function ψ: R → R^{N_out × N_in} parameterized by a small neural network MLP_ψ. MLP_ψ receives a time-step and outputs the value of the convolutional kernel at that position. We sample convolutional kernels by passing a set of relative positions {Δτ_i} to MLP_ψ, and perform convolution with the sampled kernel next. Since MLP_ψ is a continuous function, CKConvs can (i) construct arbitrarily large kernels, (ii) generate kernels at different resolutions, and (iii) handle irregular data. We observe that continuous kernel parameterizations previously used to handle irregular data locally, e.g., Schütt et al. (2017); Wu et al. (2019), are not adequate to model long-term dependencies. This is due to the inability of their kernels to model long spatial complex functions (Sec. 4.2). Contrarily, CKConvs perfectly describe long complex non-linear, non-smooth functions by parameterizing their kernels as SIRENs (Sitzmann et al., 2020): implicit neural representations with Sine nonlinearities. Shallow CKCNNs match or outperform state-of-the-art approaches on several tasks comprising stress tests, continuous, discrete and irregular data, as well as resolution changes. To the best of our knowledge, we are the first to observe the potential of continuous convolutional kernels to model long-term dependencies, and to provide a useful parameterization to this end.
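A minimal sketch of the continuous-kernel idea: a tiny MLP maps relative positions to kernel values, and the sampled kernel is then used in an ordinary 1-D convolution. Random weights stand in for a trained MLP_ψ, the sine nonlinearity only gestures at the SIREN parameterization, and the sizes and position range are illustrative.

```python
import numpy as np

def continuous_kernel(positions, w1, b1, w2, b2):
    """Evaluate a small MLP (sine nonlinearity, SIREN-style) at relative positions."""
    h = np.sin(positions[:, None] * w1 + b1)   # (K, hidden)
    return h @ w2 + b2                          # (K,) kernel values

def ckconv_1d(signal, kernel_size, params):
    offsets = np.linspace(-1.0, 1.0, kernel_size)   # relative positions in [-1, 1]
    kernel = continuous_kernel(offsets, *params)
    return np.convolve(signal, kernel, mode="valid")

# Toy usage with random MLP weights standing in for a trained kernel network.
rng = np.random.default_rng(0)
params = (rng.normal(size=(1, 16)), rng.normal(size=16),
          rng.normal(size=(16,)), 0.0)
x = np.sin(np.linspace(0, 6 * np.pi, 200))
print(ckconv_1d(x, kernel_size=31, params=params).shape)
```

Because the kernel is a function of continuous positions, the same trained MLP could in principle be queried at any number of offsets, which is what lets CKConvs build arbitrarily large kernels and handle non-uniform sampling.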
CKCONV: CONTINUOUS KERNEL CONVOLUTION FOR SEQUENTIAL DATA
d260611249
Recent months have seen the emergence of a powerful new trend in which large language models (LLMs) are augmented to become autonomous language agents capable of performing objective-oriented multi-step tasks on their own, rather than merely responding to queries from human users. Most existing language agents, however, are not optimized using environment-specific rewards. Although some agents enable iterative refinement through verbal feedback, they do not reason and plan in ways that are compatible with gradient-based learning from rewards. This paper introduces a principled framework for reinforcing large language agents by learning a retrospective model, which automatically tunes the language agent prompts from environment feedback through policy gradient. Specifically, our proposed agent architecture learns from rewards across multiple environments and tasks, for fine-tuning a pre-trained language model which refines the language agent prompt by summarizing the root cause of prior failed attempts and proposing action plans. Experimental results on various tasks demonstrate that the language agents improve over time and that our approach considerably outperforms baselines that do not properly leverage gradients from the environment. This demonstrates that using policy gradient optimization to improve language agents, for which we believe our work is one of the first, seems promising and can be applied to optimize other models in the agent architecture to enhance agent performances over time. Verbal self-reflection can provide the agent with a concrete direction to improve upon, helping it learn from prior mistakes and prevent repetitive errors to perform better in the next attempt. Although the self-reflection operation enables iterative refinement, generating useful reflective feedback from a pre-trained, frozen LLM is challenging, as showcased in Fig. 1, since it requires the LLM to have a good understanding of where the agent made mistakes in a specific environment, i.e., the credit assignment problem (Sutton & Barto, 2018), as well as the ability to generate a summary containing actionable insights for improvement. The verbal reinforcement cannot be optimal if the frozen language model has not been properly fine-tuned to specialize in credit assignment problems for the tasks in given environments. Furthermore, the existing language agents do not reason and plan in ways that are compatible with differentiable, gradient-based learning from rewards by exploiting the existing abundant reinforcement learning techniques. To address these limitations, this paper introduces Retroformer, a principled framework for reinforcing language agents by learning a plug-in retrospective model, which automatically refines the language agent prompts from environment feedback through policy optimization. Specifically, our proposed agent architecture can learn from arbitrary reward information across multiple environments and tasks, for iteratively fine-tuning a pre-trained language model, which refines the language agent prompts by reflecting on failed attempts and assigning credit for actions taken by the agent based on future rewards.
RETROFORMER: RETROSPECTIVE LARGE LANGUAGE AGENTS WITH POLICY GRADIENT OPTIMIZATION
d54462139
Representations of sets are challenging to learn because operations on sets should be permutation-invariant. To this end, we propose a Permutation-Optimisation module that learns how to permute a set end-to-end. The permuted set can be further processed to learn a permutation-invariant representation of that set, avoiding a bottleneck in traditional set models. We demonstrate our model's ability to learn permutations and set representations with either explicit or implicit supervision on four datasets, on which we achieve state-of-the-art results: number sorting, image mosaics, classification from image mosaics, and visual question answering.
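Learned permutation modules are typically built on differentiable relaxations of permutation matrices; one common ingredient (not necessarily this paper's exact mechanism) is Sinkhorn normalization, which turns a score matrix into a doubly stochastic "soft permutation". The temperature and iteration count below are illustrative.

```python
import numpy as np

def sinkhorn(log_scores, n_iters=20):
    """Approximately project a score matrix onto doubly stochastic matrices.

    Alternately normalizes rows and columns; as the scores are sharpened
    (e.g. divided by a temperature), the result approaches a hard permutation.
    """
    log_p = log_scores.copy()
    for _ in range(n_iters):
        log_p -= np.log(np.exp(log_p).sum(axis=1, keepdims=True))  # row normalize
        log_p -= np.log(np.exp(log_p).sum(axis=0, keepdims=True))  # column normalize
    return np.exp(log_p)

rng = np.random.default_rng(0)
scores = rng.normal(size=(4, 4)) / 0.1        # low temperature -> near-permutation
soft_perm = sinkhorn(scores)
print(soft_perm.round(2))
print(soft_perm.sum(axis=0).round(2), soft_perm.sum(axis=1).round(2))
```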
LEARNING REPRESENTATIONS OF SETS THROUGH OPTIMIZED PERMUTATIONS
d238418995
Recent empirical advances show that training deep models with large learning rate often improves generalization performance. However, theoretical justifications on the benefits of large learning rate are highly limited, due to challenges in analysis. In this paper, we consider using Gradient Descent (GD) with a large learning rate on a homogeneous matrix factorization problem, i.e., min_{X,Y} ||A − XY||_F^2. We prove a convergence theory for constant large learning rates well beyond 2/L, where L is the largest eigenvalue of the Hessian at the initialization. Moreover, we rigorously establish an implicit bias of GD induced by such a large learning rate, termed 'balancing', meaning that magnitudes of X and Y at the limit of GD iterations will be close even if their initialization is significantly unbalanced. Numerical experiments are provided to support our theory. Problem (1) possesses several intriguing properties. Firstly, the objective function is non-convex, and critical points are either global minima or saddles (see e.g., Baldi & Hornik (1989); Li et al. (2019b); Valavi et al. (2020a); Chen et al. (2018)). Secondly, problem (1) is homogeneous in X and Y, meaning that rescaling X, Y to aX, a^{-1}Y for any a ≠ 0 will not change the objective's value. This property is shared by commonly used ReLU neural networks. A direct consequence of homogeneity is that global minima of (1) are non-isolated and can be unbounded. The curvatures at these global minima are highly dependent on the magnitudes of X, Y. When X, Y have comparable magnitudes, the largest eigenvalue of the Hessian is small, and this corresponds to a flat minimum; on the contrary, unbalanced X and Y give a sharp minimum. Last but not least, the homogeneity impairs smoothness conditions of (1), rendering the gradient not Lipschitz continuous unless X, Y are bounded. See a formal discussion in Section 2. Existing approaches for solving (1) often use explicit regularization (Ge et al., 2017; Tu et al., 2016; Cabral et al., 2013; Li et al., 2019a), or infinitesimal (or diminishing) learning rates for controlling the magnitudes of X, Y (Du et al., 2018; Ye & Du, 2021). In this paper, we go beyond the scope of the aforementioned works, and analyze GD with a large learning rate for solving (1). In particular, we allow the learning rate h to be as large as approximately 4/L (see more explanation in Section 2), where L denotes the largest eigenvalue of the Hessian at GD initialization. In connection to empirical observations, we provide positive answers to the following two questions: Does GD with large learning rate converge at least for some cases of (1)? Does a larger learning rate bias GD toward flatter minima (i.e., X, Y with comparable magnitudes)? We remark that having a learning rate h ≈ 4/L is far beyond the commonly analyzed regime in optimization. Even for a globally L-smooth objective, traditional theory requires h < 2/L for GD convergence and h = 1/L is optimal for convex functions (Boyd et al., 2004), not to mention that our problem (1) is never globally L-smooth due to homogeneity. The modified-equation approach provides a tool for probing intermediate learning rates (see Hairer et al. (2006, Chapter 9) for a general review, and Kong & Tao (2020, Appendix A) for the specific setup of GD), but the learning rate here is too large for modified equations to work (see Appendix C). In fact, besides blowing up, GD with large learning rate may have a zoology of limiting behaviors (see e.g., Appendix B for convergence to periodic orbits under our setup, and Kong & Tao (2020) for convergence to chaotic attractors). Our analyses (of convergence and balancing) leverage various mathematical tools, including a proper partition of state space and its dynamical transition (specifically invented for this problem), stability theory of discrete time dynamical systems (Alligood et al., 1996), and geometric measure theory (Federer, 2014). The rest of the paper is organized as follows: Section 2 provides the background of studying (1) and discusses related works; Section 3 presents convergence and balancing results for scalar factorization problems; Section 4 generalizes the theory to rank-1 matrix approximation; Section 5 studies problem (1) with arbitrary A and its arbitrary-rank approximation; Section 6 summarizes the paper and discusses broadly related topics and future directions.
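The balancing effect is easiest to see in the scalar case min_{x,y} (a − xy)^2; the sketch below contrasts a tiny step size (which roughly preserves the initial imbalance) with a larger one (which drives |x| and |y| toward comparable magnitudes). The target, initialization, and step sizes are illustrative choices, not values from the paper.

```python
import numpy as np

def factorize(a=1.0, x0=2.0, y0=0.1, lr=0.3, steps=500):
    """Plain GD on the scalar factorization loss (a - x*y)**2."""
    x, y = x0, y0
    for _ in range(steps):
        r = a - x * y
        x, y = x + lr * 2 * r * y, y + lr * 2 * r * x   # simultaneous gradient step
    return x, y

for lr in (0.01, 0.3):            # small vs. (relatively) large step size
    x, y = factorize(lr=lr)
    print(f"lr={lr}: xy={x * y:.4f}, |x|={abs(x):.3f}, |y|={abs(y):.3f}")
# The small step size roughly preserves the initial imbalance (x**2 - y**2),
# while the larger one ends with |x| and |y| of comparable magnitude.
```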
LARGE LEARNING RATE TAMES HOMOGENEITY: CONVERGENCE AND BALANCING EFFECT
d249062882
In the mode connectivity literature, it is widely accepted that there are common circumstances in which two neural networks, trained similarly on the same data, will maintain loss when interpolated in the weight space. In particular, transfer learning is presumed to ensure the necessary conditions for linear mode connectivity across training runs. In contrast to existing results from image classification, we find that among text classifiers (trained on MNLI, QQP, and CoLA), some pairs of finetuned models have large barriers of increasing loss on the linear paths between them. On each task, we find distinct clusters of models which are linearly connected on the test loss surface, but are disconnected from models outside the cluster: models that occupy separate basins on the surface. By measuring performance on specially-crafted diagnostic datasets, we find that these clusters correspond to different generalization strategies. For example, on MNLI, one cluster behaves like a bag of words model under domain shift, while another cluster uses syntactic heuristics. Our work demonstrates how the geometry of the loss surface can guide models towards different heuristic functions in standard finetuning settings. Using standard finetuning methods, we find that in all three tasks, models that perform similarly on the same diagnostic sets are linearly connected without barriers on the ID loss surface, but they tend to be disconnected from models with different generalization behavior. Our code and models are public. Our main contributions are: • In contrast with existing work in computer vision, we find that transfer learning can lead to different basins over different finetuning runs (Section 3). We develop a metric for model similarity based on LMC, the convexity gap (Section 4), and an accompanying method for clustering models into basins (Section 4.1). • We align the basins to specific generalization behaviors (Section 4). In NLI (Section 2.1), they correspond to a preference for either syntactic or lexical overlap heuristics. On a paraphrase task (Section 2.2), they split on behavior under word order permutation. On a linguistic acceptability task, they reveal the ability to classify unseen linguistic phenomena (Appendix A). • We find that basins trap a portion of finetuning runs, which become increasingly disconnected from the other models as they train (Section 4.2). Connections between models in the early stages of training may thus predict final heuristics.
LINEAR CONNECTIVITY REVEALS GENERALIZATION STRATEGIES
d257834100
Few datasets contain self-identified sensitive attributes, inferring attributes risks introducing additional biases, and collecting attributes can carry legal risks. Besides, categorical labels can fail to reflect the continuous nature of human phenotypic diversity, making it difficult to compare the similarity between same-labeled faces. To address these issues, we present A View From Somewhere (AVFS), a dataset of 638,180 human judgments of face similarity. We demonstrate the utility of AVFS for learning a continuous, low-dimensional embedding space aligned with human perception. Our embedding space, induced under a novel conditional framework, not only enables the accurate prediction of face similarity, but also provides a human-interpretable decomposition of the dimensions used in the human decision-making process, and the importance distinct annotators place on each dimension. We additionally show the practicality of the dimensions for collecting continuous attributes, performing classification, and comparing dataset attribute disparities. We demonstrate that the individual embedding dimensions (1) are related to concepts of gender, ethnicity, age, as well as face and hair morphology; and (2) can be used to collect continuous attributes, perform classification, and compare dataset attribute disparities. We further show that annotators are influenced by their sociocultural backgrounds, underscoring the need for diverse annotator groups to mitigate bias.
A VIEW FROM SOMEWHERE: HUMAN-CENTRIC FACE REPRESENTATIONS
d247594823
Reward-free, unsupervised discovery of skills is an attractive alternative to the bottleneck of hand-designing rewards in environments where task supervision is scarce or expensive. However, current skill pre-training methods, like many RL techniques, make a fundamental assumption -stationary environments during training. Traditional methods learn all their skills simultaneously, which makes it difficult for them to both quickly adapt to changes in the environment, and to not forget earlier skills after such adaptation. On the other hand, in an evolving or expanding environment, skill learning must be able to adapt fast to new environment situations while not forgetting previously learned skills. These two conditions make it difficult for classic skill discovery to do well in an evolving environment. In this work, we propose a new framework for skill discovery, where skills are learned one after another in an incremental fashion. This framework allows newly learned skills to adapt to new environment or agent dynamics, while the fixed old skills ensure the agent doesn't forget a learned skill. We demonstrate experimentally that in both evolving and static environments, incremental skills significantly outperform current state-of-the-art skill discovery methods on both skill quality and the ability to solve downstream tasks. Videos for learned skills and code are made public on: https://notmahi.github.io/disk.
ONE AFTER ANOTHER: LEARNING INCREMENTAL SKILLS FOR A CHANGING WORLD
d229348988
We propose a Distributional Approach for addressing Controlled Text Generation from pre-trained Language Models (LMs). This approach permits specifying, in a single formal framework, both "pointwise" and "distributional" constraints over the target LM (to our knowledge, the first model with such generality), while minimizing KL divergence from the initial LM distribution. The optimal target distribution is then uniquely determined as an explicit EBM (Energy-Based Model) representation. From that optimal representation we then train a target controlled Autoregressive LM through an adaptive distributional variant of Policy Gradient. We conduct a first set of experiments over pointwise constraints showing the advantages of our approach over a set of baselines, in terms of obtaining a controlled LM balancing constraint satisfaction with divergence from the initial LM. We then perform experiments over distributional constraints, a unique feature of our approach, demonstrating its potential as a remedy to the problem of Bias in Language Models. Through an ablation study, we show the effectiveness of our adaptive technique for obtaining faster convergence.
A DISTRIBUTIONAL APPROACH TO CONTROLLED TEXT GENERATION
d238582721
Embedding learning has found widespread applications in recommendation systems and natural language modeling, among other domains. To learn quality embeddings efficiently, adaptive learning rate algorithms have demonstrated superior empirical performance over SGD, largely accredited to their token-dependent learning rate. However, the underlying mechanism for the efficiency of token-dependent learning rate remains underexplored. We show that incorporating frequency information of tokens in the embedding learning problems leads to provably efficient algorithms, and demonstrate that common adaptive algorithms implicitly exploit the frequency information to a large extent. Specifically, we propose (Counter-based) Frequency-aware Stochastic Gradient Descent, which applies a frequency-dependent learning rate for each token, and exhibits provable speed-up compared to SGD when the token distribution is imbalanced. Empirically, we show the proposed algorithms are able to improve or match adaptive algorithms on benchmark recommendation tasks and a large-scale industrial recommendation system, closing the performance gap between SGD and adaptive algorithms, while using significantly lower memory. Our results are the first to show token-dependent learning rate provably improves convergence for non-convex embedding learning problems. Related literature. Adaptive algorithms for non-convex problems. There has been a fruitful line of research on analyzing the convergence of adaptive learning rate algorithms in the non-convex setting. These results aim to match the convergence rate of standard SGD given by O(1/√T) (Ghadimi and Lan, 2013), though often with an additional factor of log T (Ward et al., 2018; Défossez et al., 2020; Chen et al., 2018; Reddi et al., 2018), or with worse dimension dependence (Zhou et al., 2018a) for smooth problems (assumed by almost all prior works). Moreover, all existing works aim to analyze the convergence for general non-convex problems, ignoring unique data features in embedding learning problems, where adaptive algorithms are most successful. We explicitly take into account the sparsity of the stochastic gradient and the imbalancedness of the token distribution in the design and analysis of our proposed algorithms, which are the keys to better convergence properties. Adaptive algorithms and SGD. To the best of our knowledge, the study on understanding why adaptive learning rate algorithms outperform SGD is very limited. Zhang et al. (2019) argue that BERT pretraining (Devlin et al., 2018) has heavy-tailed noise, implying unbounded variance and possible non-convergence of SGD. A normalized gradient clipping method is proposed therein and converges for a family of heavy-tailed noise distributions. Our results focus on a different direction by showing that an imbalanced token distribution is an important factor that can be leveraged to design more efficient algorithms for embedding learning problems. Our result also does not rely on the noise being heavy-tailed for the convergence benefits of the proposed FA/CF-SGD to take effect. Notation: for a vector/matrix, we use ‖·‖ to denote its ℓ2-norm/Frobenius norm, and ‖·‖_2 to denote the spectral norm of a matrix.
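A sketch of the count-based, frequency-dependent learning rate idea for embedding rows, in the spirit of the counter-based algorithm described above; the particular schedule used here (base rate divided by the square root of a running count) is an assumption for illustration, not necessarily the paper's exact rule.

```python
import numpy as np

class CounterFrequencySGD:
    """SGD over an embedding table with a per-token, count-dependent learning rate."""

    def __init__(self, num_tokens, dim, base_lr=0.5, rng=None):
        rng = rng or np.random.default_rng(0)
        self.table = 0.01 * rng.normal(size=(num_tokens, dim))
        self.counts = np.zeros(num_tokens)
        self.base_lr = base_lr

    def step(self, token_ids, grads):
        """Apply sparse gradients; rarely seen tokens get larger steps than frequent ones."""
        for tok, g in zip(token_ids, grads):
            self.counts[tok] += 1
            lr = self.base_lr / np.sqrt(self.counts[tok])   # illustrative schedule
            self.table[tok] -= lr * g

# Toy usage: token 3 appears twice, token 97 once.
opt = CounterFrequencySGD(num_tokens=100, dim=8)
rng = np.random.default_rng(1)
opt.step(token_ids=[3, 3, 97], grads=rng.normal(size=(3, 8)))
print(opt.counts[[3, 97]])
```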
Frequency-aware SGD for Efficient Embedding Learning with Provable Benefits
d211010532
With the recent success and popularity of pre-trained language models (LMs) in natural language processing, there has been a rise in efforts to understand their inner workings. In line with such interest, we propose a novel method that assists us in investigating the extent to which pre-trained LMs capture the syntactic notion of constituency. Our method provides an effective way of extracting constituency trees from the pre-trained LMs without training. In addition, we report intriguing findings in the induced trees, including the fact that some pre-trained LMs outperform other approaches in correctly demarcating adverb phrases in sentences. We also examine the LMs and constituency trees from various points of view, including looking into which layer(s) of the LMs are considered to be sensitive to phrase information (§6). To summarize, our contributions in this work are as follows: • By investigating the attention distributions from Transformer-based pre-trained LMs, we show that there is evidence to suggest that several attention heads of the LMs exhibit syntactic structure akin to constituency grammar. • Inspired by the above observation, we propose a method that facilitates the derivation of constituency trees from pre-trained LMs without training. We also demonstrate that the induced trees can serve as a strong baseline for English grammar induction. • We inspect, in view of our framework, what type of syntactic knowledge the pre-trained LMs capture, discovering interesting facts, e.g., that some pre-trained LMs are more aware of adverb phrases than other approaches.
ARE PRE-TRAINED LANGUAGE MODELS AWARE OF PHRASES? SIMPLE BUT STRONG BASELINES FOR GRAMMAR INDUCTION
d232105154
Adversarial attacks expose important vulnerabilities of deep learning models, yet little attention has been paid to settings where data arrives as a stream. In this paper, we formalize the online adversarial attack problem, emphasizing two key elements found in real-world use-cases: attackers must operate under partial knowledge of the target model, and the decisions made by the attacker are irrevocable since they operate on a transient data stream. We first rigorously analyze a deterministic variant of the online threat model by drawing parallels to the well-studied k-secretary problem in theoretical computer science and propose VIRTUAL+, a simple yet practical online algorithm. Our main theoretical result shows VIRTUAL+ yields provably the best competitive ratio over all single-threshold algorithms for k < 5, extending the previous analysis of the k-secretary problem. We also introduce the stochastic k-secretary problem, effectively reducing online blackbox transfer attacks to a k-secretary problem under noise, and prove theoretical bounds on the performance of VIRTUAL+ adapted to this setting. Finally, we complement our theoretical results by conducting experiments on MNIST, CIFAR-10, and ImageNet classifiers, revealing the necessity of online algorithms in achieving near-optimal performance and also the rich interplay between attack strategies and online attack selection, enabling simple strategies like FGSM to outperform stronger adversaries. Previously studied threat models (e.g., whitebox and blackbox) implicitly assume a static setting that permits full access to instances in a target dataset at all times (Tramèr et al., 2018). However, such an assumption is unrealistic in many real-world systems. Countless real-world applications involve streaming data that arrive in an online fashion (e.g., financial markets or real-time sensor networks). Understanding the feasibility of adversarial attacks in this online setting is an essential question. As a motivating example, consider the case where the adversary launches a man-in-the-middle attack depicted in Fig. 1. Here, data is streamed between two endpoints, i.e., from sensors on an autonomous car to the actual control system. An adversary, in this example, would intercept the sensor data, potentially perturb it, and then send it to the controller. Unlike classical adversarial attacks, such a scenario presents two key challenges that are representative of all online settings.
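To make the "irrevocable decisions on a stream" setting concrete, below is the classical single-threshold secretary baseline (observe roughly the first n/e items, then accept the first item that beats them). It is only a reference point for the k = 1 case; VIRTUAL+ refines this family of single-threshold rules for k picks and is not reproduced here.

```python
import numpy as np

def single_threshold_secretary(values):
    """Classic 1-secretary rule: watch the first ~n/e items, then pick the first record."""
    n = len(values)
    cutoff = max(1, int(round(n / np.e)))
    benchmark = max(values[:cutoff])
    for i in range(cutoff, n):
        if values[i] > benchmark:        # irrevocable accept
            return i, values[i]
    return n - 1, values[-1]             # forced to take the last item

rng = np.random.default_rng(0)
stream = rng.uniform(size=50)            # e.g. attack "gains" arriving online
idx, picked = single_threshold_secretary(stream)
print(idx, picked, stream.max())
```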
ONLINE ADVERSARIAL ATTACKS
d3461974
Consider how easy it is for people to imagine what a "purple hippo" would look like, even though they do not exist. If we instead said "purple hippo with wings", they could just as easily create a different internal mental representation, to represent this more specific concept. To assess whether the person has correctly understood the concept, we can ask them to draw a few sketches, to illustrate their thoughts. We call the ability to map text descriptions of concepts to latent representations and then to images (or vice versa) visually grounded semantic imagination. We propose a latent variable model for images and attributes, based on variational auto-encoders, which can perform this task. Our method uses a novel training objective, and a novel product-of-experts inference network, which can handle partially specified (abstract) concepts in a principled and efficient way. We also propose a set of easy-to-compute evaluation metrics that capture our intuitive notions of what it means to have good imagination, namely correctness, coverage, and compositionality (the 3 C's). Finally, we perform a detailed comparison (in terms of the 3 C's) of our method with two existing joint image-attribute VAE methods (the JMVAE method of (Suzuki et al., 2017) and the bi-VCCA method of (Wang et al., 2016)) by applying them to two simple datasets based on MNIST, where it is easy to objectively evaluate performance in a controlled way.
Generative Models of Visually Grounded Imagination
d17611960
Modern convolutional networks, incorporating rectifiers and max-pooling, are neither smooth nor convex. Standard guarantees therefore do not apply. Nevertheless, methods from convex optimization such as gradient descent and Adam are widely used as building blocks for deep learning algorithms. This paper provides the first convergence guarantee applicable to modern convnets. The guarantee matches a lower bound for convex nonsmooth functions. The key technical tool is the neural Taylor approximation (a straightforward application of Taylor expansions to neural networks) and the associated Taylor loss. Experiments on a range of optimizers, layers, and tasks provide evidence that the analysis accurately captures the dynamics of neural optimization. The second half of the paper applies the Taylor approximation to isolate the main difficulty in training rectifier nets: that gradients are shattered. We investigate the hypothesis that, by exploring the space of activation configurations more thoroughly, adaptive optimizers such as RMSProp and Adam are able to converge to better solutions.
Neural Taylor Approximations: Convergence and Exploration in Rectifier Networks
d264128411
A fundamental characteristic of audio is its compositional nature. Audio-language models (ALMs) trained using a contrastive approach (e.g., CLAP) that learns a shared representation between audio and language modalities have improved performance in many downstream applications, including zero-shot audio classification, audio retrieval, etc. However, the ability of these models to effectively perform compositional reasoning remains largely unexplored and necessitates additional research. In this paper, we propose CompA, a collection of two expert-annotated benchmarks, with a majority of real-world audio samples, to evaluate compositional reasoning in ALMs. Our proposed CompA-order evaluates how well an ALM understands the order or occurrence of acoustic events in audio, and CompA-attribute evaluates attribute binding of acoustic events. An instance from either benchmark consists of two audio-caption pairs, where both audios have the same acoustic events but with different compositions. An ALM is evaluated on how well it matches the right audio to the right caption. Using this benchmark, we first show that current ALMs perform only marginally better than random chance, thereby struggling with compositional reasoning. Next, we propose CompA-CLAP, where we fine-tune CLAP using a novel learning method to improve its compositional reasoning abilities. To train CompA-CLAP, we first propose improvements to contrastive training with composition-aware hard negatives, allowing for more focused training. Next, we propose a novel modular contrastive loss that helps the model learn fine-grained compositional understanding and overcomes the acute scarcity of openly available compositional audios. CompA-CLAP significantly improves over all our baseline models on the CompA benchmark, indicating its superior compositional reasoning capabilities.
COMPA: ADDRESSING THE GAP IN COMPOSITIONAL REASONING IN AUDIO-LANGUAGE MODELS
d244527239
Rate-distortion (R-D) function, a key quantity in information theory, characterizes the fundamental limit of how much a data source can be compressed subject to a fidelity criterion, by any compression algorithm. As researchers push for ever-improving compression performance, establishing the R-D function of a given data source is not only of scientific interest, but also sheds light on the possible room for improving compression algorithms. Previous work on this problem relied on distributional assumptions on the data source (Gibson, 2017) or only applied to discrete data. By contrast, this paper makes the first attempt at an algorithm for sandwiching the R-D function of a general (not necessarily discrete) source requiring only i.i.d. data samples. We estimate R-D sandwich bounds for a variety of artificial and real-world data sources, in settings far beyond the feasibility of any known method, and shed light on the optimality of neural data compression (Ballé et al., 2021; Yang et al., 2022). Our R-D upper bound on natural images indicates theoretical room for improving state-of-the-art image compression methods by at least one dB in PSNR at various bitrates. Our data and code can be found here.
TOWARDS EMPIRICAL SANDWICH BOUNDS ON THE RATE-DISTORTION FUNCTION
d67855286
Recurrent neural networks have gained widespread use in modeling sequential data. Learning long-term dependencies using these models remains difficult though, due to exploding or vanishing gradients. In this paper, we draw connections between recurrent networks and ordinary differential equations. A special form of recurrent networks called the AntisymmetricRNN is proposed under this theoretical framework, which is able to capture long-term dependencies thanks to the stability property of its underlying differential equation. Existing approaches to improving RNN trainability often incur significant computation overhead. In comparison, AntisymmetricRNN achieves the same goal by design. We showcase the advantage of this new architecture through extensive simulations and experiments. AntisymmetricRNN exhibits much more predictable dynamics. It outperforms regular LSTM models on tasks requiring long-term memory and matches the performance on tasks where short-term dependencies dominate despite being much simpler. Some approaches advocate going beyond initialization and forcing the weight matrices to be orthogonal throughout the entire learning process. However, some of these approaches come with significant computational overhead and reportedly hinder the representation power of these models (Vorontsov et al., 2017). Moreover, orthogonal weight matrices alone do not prevent exploding and vanishing gradients, due to the nonlinear nature of deep neural networks as shown in (Pennington et al., 2017). Here we offer a new perspective on the trainability of RNNs from the dynamical system viewpoint. While exploding gradients are a manifestation of the instability of the underlying dynamical system, vanishing gradients result from a lossy system, properties that have been widely studied in the dynamical system literature (Haber & Ruthotto, 2017; Laurent & von Brecht, 2017). The main contributions of the work are: • We draw connections between RNNs and ordinary differential equation theory and design new recurrent architectures by discretizing ODEs.
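A minimal sketch of the core construction: an RNN whose effective recurrent matrix is antisymmetric (W − Wᵀ), stepped with forward Euler as a discretized ODE. The layer sizes, diffusion constant gamma, and step size eps are illustrative choices; gating and training are omitted.

```python
import numpy as np

def antisymmetric_rnn(inputs, w, v, b, eps=0.1, gamma=0.01):
    """Run a simple AntisymmetricRNN-style cell over a sequence (forward Euler steps).

    The recurrent matrix (w - w.T) has purely imaginary eigenvalues, keeping the
    underlying ODE near marginal stability; the small diffusion term gamma * I
    keeps the discrete-time iteration stable.
    """
    hidden = np.zeros(w.shape[0])
    a = (w - w.T) - gamma * np.eye(w.shape[0])
    for x in inputs:
        hidden = hidden + eps * np.tanh(a @ hidden + v @ x + b)
    return hidden

# Toy usage: a random 20-step sequence with 3-dimensional inputs and 5 hidden units.
rng = np.random.default_rng(0)
d_in, d_h = 3, 5
w, v, b = rng.normal(size=(d_h, d_h)), rng.normal(size=(d_h, d_in)), np.zeros(d_h)
seq = rng.normal(size=(20, d_in))
print(antisymmetric_rnn(seq, w, v, b).round(3))
```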
ANTISYMMETRICRNN: A DYNAMICAL SYSTEM VIEW ON RECURRENT NEURAL NETWORKS
d257365137
Generative flow networks (GFlowNets), as an emerging technique, can be used as an alternative to reinforcement learning for exploratory control tasks. GFlowNets aim to generate a distribution proportional to the rewards over terminating states, and to sample different candidates in an active learning fashion. GFlowNets need to form a DAG and compute the flow matching loss by traversing the inflows and outflows of each node in the trajectory. To date, no experiments have shown that GFlowNets can handle continuous tasks. In this paper, we propose generative continuous flow networks (CFlowNets) that can be applied to continuous control tasks. First, we present the theoretical formulation of CFlowNets. Then, a training framework for CFlowNets is proposed, including the action selection process, the flow approximation algorithm, and the continuous flow matching loss function. Afterward, we theoretically prove the error bound of the flow approximation. The error decreases rapidly as the number of flow samples increases. Finally, experimental results on continuous control tasks demonstrate the performance advantages of CFlowNets compared to many reinforcement learning methods, especially regarding exploration ability.
CFLOWNETS: CONTINUOUS CONTROL WITH GENERATIVE FLOW NETWORKS
d261582259
Diffusion models have achieved great success in image synthesis, but still face challenges in high-resolution generation. Through the lens of the discrete cosine transform, we find the main reason is that the same noise level on a higher resolution results in a higher signal-to-noise ratio in the frequency domain. In this work, we present the Relay Diffusion Model (RDM), which transfers a low-resolution image or noise into an equivalent high-resolution one for the diffusion model via blurring diffusion and block noise. Therefore, the diffusion process can continue seamlessly in any new resolution or model without restarting from pure noise or low-resolution conditioning. RDM achieves state-of-the-art FID on CelebA-HQ and sFID on ImageNet 256×256, surpassing previous works such as ADM, LDM and DiT by a large margin. All the codes and checkpoints are open-sourced at https://github.com/THUDM/RelayDiffusion. Figure 1 caption: (left) samples generated by RDM on ImageNet 256×256 and CelebA-HQ 256×256; (right) a benchmark of recent diffusion models on class-conditional ImageNet 256×256 generation without any guidance, where RDM achieves an FID of 1.87 with classifier-free guidance. Diffusion models have become leading generative models in recent years. However, challenges still exist in the training of diffusion models for high-resolution images. More specifically, there are two main obstacles: Training efficiency. Although equipped with a UNet to balance the memory and computation cost across different resolutions, diffusion models still require a large amount of resources to train on high-resolution images. One popular solution is to train the diffusion model in a latent space (usually with a 4× compression rate in resolution) and map the result back to pixels (Rombach et al., 2022), which is fast but inevitably suffers from some low-level artifacts. The cascaded method trains a series of varying-size super-resolution diffusion models, which is effective but needs a complete sampling for each stage separately. Noise schedule. Diffusion models need a noise schedule to control the amount of isotropic Gaussian noise at each step. The setting of the noise schedule has a great influence on performance, and most current models follow the linear (Ho et al., 2020) or cosine schedule. However, an ideal noise schedule should be resolution-dependent (see Figure 2 or Chen (2023)), so training high-resolution models directly with common schedules designed for resolutions of 32×32 or 64×64 pixels leads to suboptimal performance.
RELAY DIFFUSION: UNIFYING DIFFUSION PROCESS ACROSS RESOLUTIONS FOR IMAGE SYNTHESIS
d201657791
Stochastic AUC maximization has attracted increasing interest due to its better fit for imbalanced data classification. However, existing works are limited to stochastic AUC maximization with a linear predictive model, which restricts the predictive power when dealing with extremely complex data. In this paper, we consider the stochastic AUC maximization problem with a deep neural network as the predictive model. Building on the saddle-point reformulation of a surrogate AUC loss, the problem can be cast as a non-convex concave min-max problem. The main contribution of this paper is to make stochastic AUC maximization more practical for deep neural networks and big data, with theoretical insights as well. In particular, we propose to exploit the Polyak-Łojasiewicz (PL) condition, which has been proved and observed in deep learning, and which enables us to develop new stochastic algorithms with an even faster convergence rate and a more practical step-size scheme. An AdaGrad-style algorithm is also analyzed under the PL condition with an adaptive convergence rate. Our experimental results demonstrate the effectiveness of the proposed algorithms.
Stochastic AUC Maximization with Deep Neural Networks
d53717167
Deep neural networks (DNNs) have set benchmarks on a wide array of supervised learning tasks. Trained DNNs, however, often lack robustness to minor adversarial perturbations to the input, which undermines their true practicality. Recent works have increased the robustness of DNNs by fitting networks using adversarially-perturbed training samples, but the improved performance can still be far below the performance seen in non-adversarial settings. A significant portion of this gap can be attributed to the decrease in generalization performance due to adversarial training. In this work, we extend the notion of margin loss to adversarial settings and bound the generalization error for DNNs trained under several well-known gradient-based attack schemes, motivating an effective regularization scheme based on spectral normalization of the DNN's weight matrices. We also provide a computationally-efficient method for normalizing the spectral norm of convolutional layers with arbitrary stride and padding schemes in deep convolutional networks. We evaluate the power of spectral normalization extensively on combinations of datasets, network architectures, and adversarial training schemes. The code is available at
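As a concrete reference point, the sketch below shows the standard power-iteration estimate of a dense weight matrix's spectral norm and the corresponding rescaling; it is only an illustration of the regularization idea. The paper's contribution additionally covers convolutional layers with arbitrary stride and padding, which this dense-matrix sketch does not handle.

```python
import numpy as np

def spectral_norm(W, n_iters=20):
    """Estimate the largest singular value of W via power iteration."""
    u = np.random.default_rng(0).normal(size=W.shape[0])
    v = None
    for _ in range(n_iters):
        v = W.T @ u
        v /= np.linalg.norm(v) + 1e-12
        u = W @ v
        u /= np.linalg.norm(u) + 1e-12
    return float(u @ W @ v)

def spectrally_normalize(W):
    """Rescale W so that its spectral norm is approximately 1."""
    return W / spectral_norm(W)
```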
Generalizable Adversarial Training via Spectral Normalization
d220055784
In this paper, we cast fair machine learning as invariant machine learning. We first formulate a version of individual fairness that enforces invariance on certain sensitive sets. We then design a transport-based regularizer that enforces this version of individual fairness and develop an algorithm to minimize the regularizer efficiently. Our theoretical results guarantee the proposed approach trains certifiably fair ML models. Finally, in the experimental studies we demonstrate improved fairness metrics in comparison to several recent fair training procedures on three ML tasks that are susceptible to algorithmic bias.
SenSeI: Sensitive Set Invariance for Enforcing Individual Fairness
d260519400
Hyperbolic spaces, which have the capacity to embed tree structures without distortion owing to their exponential volume growth, have recently been applied to machine learning to better capture the hierarchical nature of data. In this study, we reconsider how to generalize the fundamental components of neural networks in a single hyperbolic geometry model, and propose novel methodologies to construct a multinomial logistic regression, fully-connected layers, convolutional layers, and attention mechanisms under a unified mathematical interpretation, without increasing the number of parameters. A series of experiments shows the parameter efficiency of our methods compared to a conventional hyperbolic component, as well as improved stability and performance over their Euclidean counterparts. One of the pioneering approaches is Hyperbolic Neural Networks (HNNs), which introduced an easy-to-interpret and highly analytical coordinate system of hyperbolic spaces, namely the Poincaré ball model, with a corresponding gyrovector space to smoothly convert the fundamental functions common to neural networks into valid ones in hyperbolic geometry [9]. Built upon the solid foundation of HNNs, the essential components of neural networks covering multinomial logistic regression (MLR), fully-connected (FC) layers, and recurrent neural networks have been realized in the Poincaré ball model. In addition to this formalism, methods for graphs [23], sequential classification [25], and variational autoencoders [27, 24, 32, 37] have further been constructed. Such studies have applied the Poincaré ball model as a natural and viable option in the area of deep learning. Despite such progress, however, unsolved problems and uncovered regions remain. In terms of network architectures, the current formulation of hyperbolic MLR requires almost twice the number of parameters of its Euclidean counterpart, which makes it unscalable in cases where numerous embedded entities must be classified or where large hidden dimensions are employed.
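For orientation, the gyrovector-space operations from the HNN line of work that the passage above builds on are, in the Poincaré ball of curvature −c (stated here from the prior literature, not as this paper's new components):

```latex
\[
x \oplus_c y \;=\; \frac{(1 + 2c\langle x, y\rangle + c\lVert y\rVert^2)\,x + (1 - c\lVert x\rVert^2)\,y}
                        {1 + 2c\langle x, y\rangle + c^2\lVert x\rVert^2\lVert y\rVert^2},
\qquad
\exp_0^{c}(v) \;=\; \tanh\!\big(\sqrt{c}\,\lVert v\rVert\big)\,\frac{v}{\sqrt{c}\,\lVert v\rVert}.
\]
```

The parameter-efficiency issue discussed above concerns how classifiers and layers are parameterized on top of such operations.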
Hyperbolic Neural Networks++
d264590778
Social alignment in AI systems aims to ensure that these models behave according to established societal values. However, unlike humans, who derive consensus on value judgments through social interaction, current language models (LMs) are trained to rigidly replicate their training corpus in isolation, leading to subpar generalization in unfamiliar scenarios and vulnerability to adversarial attacks. This work presents a novel training paradigm that permits LMs to learn from simulated social interactions. In comparison to existing methodologies, our approach is considerably more scalable and efficient, demonstrating superior performance in alignment benchmarks and human evaluations. This paradigm shift in the training of LMs brings us a step closer to developing AI systems that can robustly and accurately reflect societal norms and values.
TRAINING SOCIALLY ALIGNED LANGUAGE MODELS ON SIMULATED SOCIAL INTERACTIONS
d240288910
A key goal of unsupervised representation learning is "inverting" a data generating process to recover its latent properties. Existing work that provably achieves this goal relies on strong assumptions on relationships between the latent variables (e.g., independence conditional on auxiliary information). In this paper, we take a very different perspective on the problem and ask, "Can we instead identify latent properties by leveraging knowledge of the mechanisms that govern their evolution?" We provide a complete characterization of the sources of non-identifiability as we vary knowledge about a set of possible mechanisms. In particular, we prove that if we know the exact mechanisms under which the latent properties evolve, then identification can be achieved up to any equivariances that are shared by the underlying mechanisms. We generalize this characterization to settings where we only know some hypothesis class over possible mechanisms, as well as settings where the mechanisms are stochastic. We demonstrate the power of this mechanism-based perspective by showing that we can leverage our results to generalize existing identifiable representation learning results. These results suggest that by exploiting inductive biases on mechanisms, it is possible to design a range of new identifiable representation learning approaches. (A problem is identified if there exists a unique solution in the infinite data limit and no constraints on model capacity.)
Properties from Mechanisms: An Equivariance Perspective on Identifiable Representation Learning
d253080580
Bridging geometry and topology, curvature is a powerful and expressive invariant. While the utility of curvature has been theoretically and empirically confirmed in the context of manifolds and graphs, its generalization to the emerging domain of hypergraphs has remained largely unexplored. On graphs, the Ollivier-Ricci curvature measures differences between random walks via Wasserstein distances, thus grounding a geometric concept in ideas from probability theory and optimal transport. We develop ORCHID, a flexible framework generalizing Ollivier-Ricci curvature to hypergraphs, and prove that the resulting curvatures have favorable theoretical properties. Through extensive experiments on synthetic and real-world hypergraphs from different domains, we demonstrate that ORCHID curvatures are both scalable and useful for performing a variety of hypergraph tasks in practice. Structure. After providing the necessary background on graphs and hypergraphs and recalling the definition of Ollivier-Ricci curvature for graphs in Section 2, we introduce ORCHID, our framework for hypergraph ORC, and analyze the theoretical properties of ORCHID curvatures in Section 3. We assess the empirical properties and practical utility of ORCHID curvatures through extensive experiments in Section 4, and discuss limitations and potential extensions of ORCHID as well as directions for future work in Section 5. Further materials are provided in Appendices A.1 to A.5. Preliminaries: graphs and hypergraphs. Here, for a set S and a positive integer k ≤ |S|, (S choose k) denotes the set of all k-element subsets of S, and for a positive integer m, [m] denotes the set {1, . . . , m}. In multi-graphs, edges can occur multiple times, and hence E = (e_1, . . . , e_m) is an indexed family of sets, with each e_i a 2-element subset of V for all i ∈ [m]. Generalizing simple graphs, a simple hypergraph H = (V, E) is a tuple containing n nodes V and m hyperedges E ⊆ P(V) \ ∅, i.e., in contrast to edges, hyperedges can have any cardinality r ∈ [n]. In a multi-hypergraph, E = (e_1, . . . , e_m) is an indexed family of sets, with e_i ⊆ V for all i ∈ [m]. We assume that all our hypergraphs are multi-hypergraphs, and we drop the prefix hyper from hypergraph and hyperedge where it is clear from context. We denote the degree of node i, i.e., the number of edges containing i, by deg(i) = |{e ∈ E | i ∈ e}|, write i ∼ j if i is adjacent to j (i.e., there exists e ∈ E such that {i, j} ⊆ e), and use N(i) (N(e)) for the neighborhood of i (e), i.e., the set of nodes adjacent to i (edges intersecting edge e). While deg(i) = |N(i)| in simple graphs and deg(i) ≥ |N(i)| in multigraphs, these relations do not generally hold for hypergraphs. Two nodes i ≠ j are connected in H if there is a sequence of pairwise adjacent nodes leading from i to j. Every such sequence is a path in H, whose length is the cardinality of the set of edges used in the adjacency relation. We refer to the length of a shortest path connecting nodes i, j as the distance between them, denoted as d(i, j). We assume that all (hyper)graphs are connected, i.e., there exists a path between all pairs of nodes. This turns H into a metric space (H, d) with diameter diam(H) := max{d(i, j) | i, j ∈ V}. (Hyper)graphs in which all nodes have the same degree k (deg(i) = k for all i ∈ V) are called k-regular. Three properties of hypergraphs that distinguish them from graphs give rise to additional (ir)regularities.
First, hyperedges can vary in cardinality, and a hypergraph in which all hyperedges have the same cardinality r (|e| = r for all e ∈ E) is called r-uniform. Second, hyperedge intersections can have cardinality greater than 1, and we call a hypergraph s-intersecting if all nonempty edge intersections have the same cardinality s (e ∩ f ≠ ∅ ⇔ |e ∩ f| = s for all e, f ∈ E). Third, nodes can cooccur in any number of hyperedges; we call a hypergraph c-cooccurrent if each node cooccurs c times with any of its neighbors (i ∼ j ⇔ |{e ∈ E | {i, j} ⊆ e}| = c for all i, j ∈ V). Using this terminology, simple graphs are 2-uniform, 1-intersecting, 1-cooccurrent hypergraphs. The unweighted clique expansion of H is the graph G• = (V, E•), where two nodes are adjacent in G• if and only if they are adjacent in H. The weighted clique expansion of H is G• endowed with a weighting function w : E• → N, where w(e) = |{e ∈ E | {i, j} ⊆ e}| for each edge e = {i, j} ∈ E•, i.e., an edge {i, j} is weighted by how often i and j cooccur in edges from H. Both of these transformations are lossy, i.e., we cannot uniquely reconstruct H from G•. The unweighted star expansion of H is the bipartite graph G′ = (V′, E′) with V′ = V ∪ E and E′ = {{i, e} | i ∈ V, e ∈ E, i ∈ e}, and we can uniquely reconstruct H from G′ if we know which of its parts corresponds to the original node set of H.
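For orientation, the graph-level notion that ORCHID generalizes can be stated compactly; this is the standard Ollivier-Ricci curvature on graphs, not the paper's hypergraph definition itself:

```latex
\[
\kappa(i, j) \;=\; 1 - \frac{W_1\!\left(\mu_i, \mu_j\right)}{d(i, j)},
\]
```

where μ_i is a probability measure placed around node i by a (possibly lazy) random walk, W_1 is the Wasserstein-1 distance, and d(i, j) is the shortest-path distance; the hypergraph framework replaces μ_i and the underlying metric with hypergraph-aware choices.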
OLLIVIER-RICCI CURVATURE FOR HYPERGRAPHS: A UNIFIED FRAMEWORK
d244478155
Vision Transformer (ViT) is emerging as the state-of-the-art architecture for image recognition. While recent studies suggest that ViTs are more robust than their convolutional counterparts, our experiments find that ViTs trained on ImageNet are overly reliant on local textures and fail to make adequate use of shape information. ViTs thus have difficulties generalizing to out-of-distribution, real-world data. To address this deficiency, we present a simple and effective architecture modification to ViT's input layer by adding discrete tokens produced by a vector-quantized encoder. Different from the standard continuous pixel tokens, discrete tokens are invariant under small perturbations and contain less information individually, which promotes ViTs to learn global information that is invariant. Experimental results demonstrate that adding discrete representation on four architecture variants strengthens ViT robustness by up to 12% across seven ImageNet robustness benchmarks while maintaining the performance on ImageNet.
DISCRETE REPRESENTATIONS STRENGTHEN VISION TRANSFORMER ROBUSTNESS
d252682995
Temporal networks model a variety of important phenomena involving timed interactions between entities. Existing methods for machine learning on temporal networks generally exhibit at least one of two limitations. First, time is assumed to be discretized, so if the time data is continuous, the user must determine the discretization and discard precise time information. Second, edge representations can only be calculated indirectly from the nodes, which may be suboptimal for tasks like edge classification. We present a simple method that avoids both shortcomings: construct the line graph of the network, which includes a node for each interaction, and weigh the edges of this graph based on the difference in time between interactions. From this derived graph, edge representations for the original network can be computed with efficient classical methods. The simplicity of this approach facilitates explicit theoretical analysis: we can constructively show the effectiveness of our method's representations for a natural synthetic model of temporal networks. Empirical results on real-world networks demonstrate our method's efficacy and efficiency on both edge classification and temporal link prediction.
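The following sketch illustrates the construction described above: each timed interaction becomes a node of a derived line graph, and two interactions that share an endpoint are linked with a weight that decays in their time difference. The exponential decay with scale tau is an illustrative assumption; the paper's exact weighting may differ.

```python
import math
import itertools
import networkx as nx

def time_decayed_line_graph(interactions, tau=1.0):
    """Build a weighted line graph from a temporal network.

    interactions: list of (u, v, t) timed edges; each becomes a node of the
    derived graph. Two interactions sharing an endpoint are connected, with a
    weight decaying in the time difference between them.
    """
    L = nx.Graph()
    L.add_nodes_from(range(len(interactions)))
    for i, j in itertools.combinations(range(len(interactions)), 2):
        u1, v1, t1 = interactions[i]
        u2, v2, t2 = interactions[j]
        if {u1, v1} & {u2, v2}:  # the two interactions touch a common node
            L.add_edge(i, j, weight=math.exp(-abs(t1 - t2) / tau))
    return L

# toy usage: three interactions; classical embeddings can then be run on L
edges = [("a", "b", 0.0), ("b", "c", 0.5), ("a", "c", 3.0)]
L = time_decayed_line_graph(edges)
```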
Direct Embedding of Temporal Network Edges via Time-Decayed Line Graphs
d249712405
As machine learning becomes more widespread throughout society, aspects including data privacy and fairness must be carefully considered, and are crucial for deployment in highly regulated industries. Unfortunately, the application of privacy enhancing technologies can worsen unfair tendencies in models. In particular, one of the most widely used techniques for private model training, differentially private stochastic gradient descent (DPSGD), frequently intensifies disparate impact on groups within data. In this work we study the fine-grained causes of unfairness in DPSGD and identify gradient misalignment due to inequitable gradient clipping as the most significant source. This observation leads us to a new method for reducing unfairness by preventing gradient misalignment in DPSGD.
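To make the source of disparity concrete, the sketch below shows a standard (baseline) DPSGD aggregation step; it is included only to illustrate how per-example clipping can change the direction of the averaged gradient. The paper's proposed fix modifies this procedure and is not shown here.

```python
import numpy as np

def dpsgd_step(per_example_grads, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """Clip each per-example gradient to `clip_norm`, sum, add Gaussian noise, average.

    When some groups systematically produce larger gradients, clipping shrinks
    their contribution more, so the averaged direction is misaligned relative
    to the non-private gradient.
    """
    rng = rng or np.random.default_rng(0)
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    summed = np.sum(clipped, axis=0)
    noise = rng.normal(scale=noise_multiplier * clip_norm, size=summed.shape)
    return (summed + noise) / len(per_example_grads)
```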
DISPARATE IMPACT IN DIFFERENTIAL PRIVACY FROM GRADIENT MISALIGNMENT
d8257350
Recurrent neural networks have achieved excellent performance in many applications. However, on portable devices with limited resources, the models are often too large to deploy. For applications on servers with large-scale concurrent requests, the latency during inference can also be critical for costly computing resources. In this work, we address these problems by quantizing the network, both weights and activations, into multiple binary codes {−1, +1}. We formulate the quantization as an optimization problem. Under the key observation that once the quantization coefficients are fixed the binary codes can be derived efficiently by a binary search tree, alternating minimization is then applied. We test the quantization on two well-known RNNs, i.e., long short-term memory (LSTM) and gated recurrent units (GRU), on language models. Compared with the full-precision counterpart, with 2-bit quantization we can achieve ∼16× memory saving and ∼6× real inference acceleration on CPUs, with only a reasonable loss in accuracy. With 3-bit quantization, we can achieve almost no loss in accuracy or even surpass the original model, with ∼10.5× memory saving and ∼3× real inference acceleration. Both results beat existing quantization works by large margins. We extend our alternating quantization to image classification tasks. In both RNNs and feedforward neural networks, the method also achieves excellent performance.
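A small sketch of the alternating-minimization idea for multi-bit quantization follows: fix the binary codes and solve a least-squares problem for the real coefficients, then fix the coefficients and re-select the best sign pattern per entry. For simplicity this sketch enumerates all 2^k sign patterns instead of using the binary-search-tree step mentioned above, so it is an illustrative variant rather than the exact procedure.

```python
import numpy as np
from itertools import product

def alternating_quantize(w, k=2, n_iters=10):
    """Approximate w ~ sum_j alpha_j * b_j with b_j in {-1,+1}^n and real alpha_j."""
    n = w.size
    B = np.empty((n, k))
    alpha = np.empty(k)
    r = w.copy()
    for j in range(k):                      # greedy initialization: peel off one bit at a time
        B[:, j] = np.sign(r)
        B[B[:, j] == 0, j] = 1.0
        alpha[j] = np.mean(np.abs(r))
        r = r - alpha[j] * B[:, j]
    combos = np.array(list(product([-1.0, 1.0], repeat=k)))   # all 2^k candidate sign patterns
    for _ in range(n_iters):
        alpha, *_ = np.linalg.lstsq(B, w, rcond=None)          # fix codes, fit coefficients
        vals = combos @ alpha                                   # fix coefficients, pick best codes
        idx = np.argmin(np.abs(w[:, None] - vals[None, :]), axis=1)
        B = combos[idx]
    return alpha, B

alpha, B = alternating_quantize(np.random.default_rng(0).normal(size=1000), k=2)
```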
ALTERNATING MULTI-BIT QUANTIZATION FOR RECURRENT NEURAL NETWORKS
d3708505
A deep fully-connected neural network with an i.i.d. prior over its parameters is equivalent to a Gaussian process (GP) in the limit of infinite network width. This correspondence enables exact Bayesian inference for neural networks on regression tasks by means of straightforward matrix computations. For single hidden-layer networks, the covariance function of this GP has long been known. Recently, kernel functions for multi-layer random neural networks have been developed, but only outside of a Bayesian framework. As such, previous work has not identified the correspondence between using these kernels as the covariance function for a GP and performing fully Bayesian prediction with a deep neural network. In this work, we derive this correspondence and develop a computationally efficient pipeline to compute the covariance functions. We then use the resulting GP to perform Bayesian inference for deep neural networks on MNIST and CIFAR-10. We find that the GP-based predictions are competitive and can outperform neural networks trained with stochastic gradient descent. We observe that the trained neural network accuracy approaches that of the corresponding GP-based computation with increasing layer width, and that the GP uncertainty is strongly correlated with prediction error. We connect our observations to the recent development of signal propagation in random neural networks. Throughout, we assume the conditions on the parameter distributions and nonlinearities are such that the central limit theorem holds; for instance, that the weight variance is scaled inversely proportional to the layer width.
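For reference, the covariance function of the corresponding GP is commonly written as the following layer-wise recursion (stated here from the general literature, with bias and weight variances σ_b², σ_w² and nonlinearity φ); for specific nonlinearities such as ReLU the expectation admits a closed form:

```latex
\[
K^{0}(x, x') \;=\; \sigma_b^{2} + \sigma_w^{2}\,\frac{x \cdot x'}{d_{\mathrm{in}}},
\qquad
K^{\ell}(x, x') \;=\; \sigma_b^{2} + \sigma_w^{2}\;
\mathbb{E}_{z \sim \mathcal{GP}\left(0,\, K^{\ell-1}\right)}\!\left[\phi\big(z(x)\big)\,\phi\big(z(x')\big)\right].
\]
```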
DEEP NEURAL NETWORKS AS GAUSSIAN PROCESSES
d263830446
The optimal transport problem for measures supported on non-Euclidean spaces has recently gained ample interest in diverse applications involving representation learning. In this paper, we focus on circular probability measures, i.e., probability measures supported on the unit circle, and introduce a new computationally efficient metric for these measures, denoted as Linear Circular Optimal Transport (LCOT). The proposed metric comes with an explicit linear embedding that allows one to apply Machine Learning (ML) algorithms to the embedded measures and seamlessly modify the underlying metric for the ML algorithm to LCOT. We show that the proposed metric is rooted in the Circular Optimal Transport (COT) and can be considered the linearization of the COT metric with respect to a fixed reference measure. We provide a theoretical analysis of the proposed metric and derive the computational complexities for pairwise comparison of circular probability measures. Lastly, through a set of numerical experiments, we demonstrate the benefits of LCOT in learning representations of circular measures.
LCOT: Linear circular optimal transport
d53477919
Control of complex systems involves both system identification and controller design. Deep neural networks have proven to be successful in many identification tasks, such as classification, prediction, and end-to-end system modeling. However, from the controller design perspective, these networks are difficult to work with because they are typically nonlinear and nonconvex. Therefore many systems are still optimized and controlled based on simple linear models despite their poor identification performance. In this paper we address this problem by explicitly constructing deep neural networks that are convex with respect to their inputs. We show that these input convex networks can be trained to obtain accurate models of complex physical systems. In particular, we design input convex recurrent neural networks to capture temporal behavior of dynamical systems. Then optimal controllers based on these networks can be designed by solving convex optimization problems. Results on both toy models and real-world image denoising and building energy optimization problems demonstrate the modeling accuracy and control efficiency of the proposed approach.
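A minimal sketch of the input-convex feedforward structure referred to above (from the input convex neural network literature) is given below; the recurrent, control-oriented variant described in the abstract is not shown, and the weight-list shapes in the toy usage are illustrative assumptions.

```python
import numpy as np

def icnn_forward(x, Wx_list, Wz_list, b_list):
    """Forward pass of an input convex neural network (ICNN).

    The z-path weights are forced to be non-negative (via np.abs) and the
    activation is convex and non-decreasing, so the output is convex in x.
    """
    relu = lambda v: np.maximum(v, 0.0)
    z = relu(Wx_list[0] @ x + b_list[0])               # first layer has no z-path
    for Wx, Wz, b in zip(Wx_list[1:], Wz_list, b_list[1:]):
        z = relu(np.abs(Wz) @ z + Wx @ x + b)          # non-negative z-weights preserve convexity
    return z

# toy usage: 3-dim input, two hidden layers of width 5, scalar output
rng = np.random.default_rng(0)
Wx_list = [rng.normal(size=(5, 3)), rng.normal(size=(5, 3)), rng.normal(size=(1, 3))]
Wz_list = [rng.normal(size=(5, 5)), rng.normal(size=(1, 5))]
b_list = [np.zeros(5), np.zeros(5), np.zeros(1)]
y = icnn_forward(rng.normal(size=3), Wx_list, Wz_list, b_list)
```

Because the resulting map is convex in its input, optimizing the input (e.g., a control action) against this model is a convex problem.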
Optimal Control Via Neural Networks: A Convex Approach
d211132391
Class-conditional generative models hold promise to overcome the shortcomings of their discriminative counterparts. They are a natural choice for solving discriminative tasks in a robust manner as they jointly optimize for predictive performance and accurate modeling of the input distribution. In this work, we investigate robust classification with likelihood-based generative models from a theoretical and practical perspective to see whether they can deliver on their promises. Our analysis focuses on a spectrum of robustness properties: (1) detection of worst-case outliers in the form of adversarial examples; (2) detection of average-case outliers in the form of ambiguous inputs; and (3) detection of incorrectly labeled in-distribution inputs. Our theoretical result reveals that it is impossible to guarantee detectability of adversarially-perturbed inputs even for near-optimal generative classifiers. Experimentally, we find that while we are able to train robust models for MNIST, robustness completely breaks down on CIFAR10. We relate this failure to various undesirable model properties that can be traced to the maximum likelihood training objective. Despite being a common choice in the literature, our results indicate that likelihood-based conditional generative models may be surprisingly ineffective for robust classification. In principle, jointly modeling the input and target distribution should make it easy to detect out-of-distribution inputs. These traits lend hope to the belief that good class-conditional generative models can overcome important problems faced by discriminative models.
UNDERSTANDING THE LIMITATIONS OF CONDITIONAL GENERATIVE MODELS
d231719892
Deep generative modeling has seen impressive advances in recent years, to the point where it is now commonplace to see simulated samples (e.g., images) that closely resemble real-world data. However, generation quality is generally inconsistent for any given model and can vary dramatically between samples. We introduce Discriminator Gradient flow (DGflow), a new technique that improves generated samples via the gradient flow of entropy-regularized f-divergences between the real and the generated data distributions. The gradient flow takes the form of a non-linear Fokker-Planck equation, which can be easily simulated by sampling from the equivalent McKean-Vlasov process. By refining inferior samples, our technique avoids the wasteful sample rejection used by previous methods (DRS & MH-GAN). Compared to existing works that focus on specific GAN variants, we show our refinement approach can be applied to GANs with vector-valued critics and even other deep generative models such as VAEs and normalizing flows. Empirical results on multiple synthetic, image, and text datasets demonstrate that DGflow leads to significant improvement in the quality of generated samples for a variety of generative models, outperforming the state-of-the-art Discriminator Optimal Transport (DOT) and Discriminator Driven Latent Sampling (DDLS) methods.
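A simplified sketch of the refinement loop is shown below as an Euler-Maruyama discretization for the KL-divergence instance; other f-divergences would reweight the drift term. The callable disc_grad is assumed to return the gradient of a trained critic's output with respect to the sample (e.g., obtained via autodiff), and the step sizes are illustrative.

```python
import numpy as np

def refine_samples(x, disc_grad, n_steps=25, eta=0.01, gamma=0.01, rng=None):
    """Refine generated samples x by simulating a discretized gradient flow.

    Samples drift toward regions the critic scores as more 'real', with small
    Gaussian noise coming from the entropy regularizer.
    """
    rng = rng or np.random.default_rng(0)
    for _ in range(n_steps):
        x = x + eta * disc_grad(x) + np.sqrt(2.0 * eta * gamma) * rng.normal(size=x.shape)
    return x
```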
REFINING DEEP GENERATIVE MODELS VIA DISCRIMINATOR GRADIENT FLOW
d53781800
In this work, we address the problem of musical timbre transfer, where the goal is to manipulate the timbre of a sound sample from one instrument to match another instrument while preserving other musical content, such as pitch, rhythm, and loudness. In principle, one could apply image-based style transfer techniques to a time-frequency representation of an audio signal, but this depends on having a representation that allows independent manipulation of timbre as well as high-quality waveform generation. We introduce TimbreTron, a method for musical timbre transfer which applies "image" domain style transfer to a time-frequency representation of the audio signal, and then produces a high-quality waveform using a conditional WaveNet synthesizer. We show that the Constant Q Transform (CQT) representation is particularly well-suited to convolutional architectures due to its approximate pitch equivariance. Based on human perceptual evaluations, we confirmed that TimbreTron recognizably transferred the timbre while otherwise preserving the musical content, for both monophonic and polyphonic samples. We made an accompanying demo video, which we strongly encourage you to watch before reading the paper.
TIMBRETRON: A WAVENET(CYCLEGAN(CQT(AUDIO))) PIPELINE FOR MUSICAL TIMBRE TRANSFER
d174797767
Forming perceptual groups and individuating objects in visual scenes is an essential step towards visual intelligence. This ability is thought to arise in the brain from computations implemented by bottom-up, horizontal, and top-down connections between neurons. However, the relative contributions of these connections to perceptual grouping are poorly understood. We address this question by systematically evaluating neural network architectures featuring combinations of these connections on two synthetic visual tasks, which stress low-level "gestalt" vs. high-level object cues for perceptual grouping. We show that increasing the difficulty of either task strains learning for networks that rely solely on bottom-up processing. Horizontal connections resolve this limitation on tasks with gestalt cues by supporting incremental spatial propagation of activities, whereas top-down connections rescue learning on tasks featuring object cues by propagating coarse predictions about the position of the target object. Our findings disassociate the computational roles of bottom-up, horizontal and top-down connectivity, and demonstrate how a model featuring all of these interactions can more flexibly learn to form perceptual groups. Extant theory suggests that there are two distinct types of feedback strategies: a low-level strategy of grouping visual features with neighboring features according to Gestalt laws including similarity, good continuation, etc. [13][14][15][16][17][18][19]. In contrast, an object-based strategy is mediated by expectations.
Disentangling neural mechanisms for perceptual grouping
d257050884
Active learning has demonstrated data efficiency in many fields. Existing active learning algorithms, especially in the context of batch-mode deep Bayesian active models, rely heavily on the quality of uncertainty estimations of the model, and are often challenging to scale to large batches. In this paper, we propose Batch-BALANCE, a scalable batch-mode active learning algorithm, which combines insights from decision-theoretic active learning, combinatorial information measure, and diversity sampling. At its core, Batch-BALANCE relies on a novel decision-theoretic acquisition function that facilitates differentiation among different equivalence classes. Intuitively, each equivalence class consists of hypotheses (e.g., posterior samples of deep neural networks) with similar predictions, and Batch-BALANCE adaptively adjusts the size of the equivalence classes as learning progresses. To scale up the computation of queries to large batches, we further propose an efficient batch-mode acquisition procedure, which aims to maximize a novel information measure defined through the acquisition function. We show that our algorithm can effectively handle realistic multi-class classification tasks, and achieves compelling performance on several benchmark datasets for active learning under both low- and large-batch regimes. Reference code is released at https://github.com/zhangrenyuuchicago/BALanCe.
SCALABLE BATCH-MODE DEEP BAYESIAN ACTIVE LEARNING VIA EQUIVALENCE CLASS ANNEALING
d17306137
Natural image modeling is a landmark challenge of unsupervised learning. Variational Autoencoders (VAEs) learn a useful latent representation and model global structure well but have difficulty capturing small details. PixelCNN models details very well, but lacks a latent code and is difficult to scale for capturing large structures. We present PixelVAE, a VAE model with an autoregressive decoder based on PixelCNN. Our model requires very few expensive autoregressive layers compared to PixelCNN and learns latent codes that are more compressed than a standard VAE while still capturing most non-trivial structure. Finally, we extend our model to a hierarchy of latent variables at different scales. Our model achieves state-of-the-art performance on binarized MNIST, competitive performance on 64 × 64 ImageNet, and high-quality samples on the LSUN bedrooms dataset.
PIXELVAE: A LATENT VARIABLE MODEL FOR NATURAL IMAGES
d53018855
We propose a rejection sampling scheme using the discriminator of a GAN to approximately correct errors in the GAN generator distribution. We show that under quite strict assumptions, this will allow us to recover the data distribution exactly. We then examine where those strict assumptions break down and design a practical algorithm-called Discriminator Rejection Sampling (DRS)-that can be used on real data-sets. Finally, we demonstrate the efficacy of DRS on a mixture of Gaussians and on the SAGAN model, state-of-the-art in the image generation task at the time of developing this work. On ImageNet, we train an improved baseline that increases the Inception Score from 52.52 to 62.36 and reduces the Fréchet Inception Distance from 18.65 to 14.79. We then use DRS to further improve on this baseline, improving the Inception Score to 76.08 and the FID to 13.75.
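The sketch below illustrates the idealized version of this scheme, under the assumption that the discriminator logit equals the true log density ratio log p_data(x) − log p_g(x); the practical algorithm adds corrections for imperfect discriminators and a shift term to control the acceptance rate, which are omitted here. The callables generate and disc_logit are placeholders for a trained generator and discriminator.

```python
import numpy as np

def discriminator_rejection_sample(generate, disc_logit, n_samples, d_max, rng=None):
    """Accept a generated sample x with probability exp(D(x) - D_max).

    If D(x) = log p_data(x) - log p_g(x) and d_max upper-bounds D over the
    generator's support, accepted samples are distributed as p_data.
    """
    rng = rng or np.random.default_rng(0)
    accepted = []
    while len(accepted) < n_samples:
        x = generate()
        if rng.uniform() < np.exp(disc_logit(x) - d_max):
            accepted.append(x)
    return accepted
```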
DISCRIMINATOR REJECTION SAMPLING
d252992725
Algorithmic reasoning requires capabilities which are most naturally understood through recurrent models of computation, like the Turing machine. However, Transformer models, while lacking recurrence, are able to perform such reasoning using far fewer layers than the number of reasoning steps. This raises the question: what solutions are learned by these shallow and non-recurrent models? We show that a low-depth Transformer can represent the computations of any finite-state automaton (thus, any bounded-memory algorithm), by hierarchically reparameterizing its recurrent dynamics. Our theoretical results characterize shortcut solutions, whereby a Transformer with o(T) layers can exactly replicate the computation of an automaton on an input sequence of length T. We find that polynomial-sized O(log T)-depth solutions always exist; furthermore, O(1)-depth simulators are surprisingly common, and can be understood using tools from Krohn-Rhodes theory and circuit complexity. Empirically, we find that Transformers converge to shortcut solutions with standard training, across a wide variety of automata. We further investigate the brittleness of these solutions and propose potential mitigations. Semiautomata describe the underlying dynamics of automata, which are simply semiautomata equipped with mappings from states to outputs. With unbounded state spaces, automata can represent all algorithms; however, even bounded automata form a rich class of sequence processing algorithms, containing the regular languages. Theory: shortcuts abound. To simulate a semiautomaton at length T, a T-layer Transformer can implement the same sequential solution as an RNN: let the t-th layer embed the state transition q_{t-1} → q_t. We define shortcuts as solutions which implement the same functionality with a significantly smaller depth. Definition 1 (Shortcut solution). Let A be a semiautomaton. For every T ≥ 1, let f_T be a sequence-to-sequence neural network which simulates A at length T. Then, we call this sequence {f_T}_{T≥1} a shortcut to A if the sequence of network depths grows sublinearly in T, i.e., is o(T). By this definition, shortcuts are quite general, and some are less interesting than others. For example, it is always possible to construct a constant-depth neural network which memorizes all |Σ|^T values of A_{T,q_0}. Reproducibility statement: Complete proofs of the theoretical results are provided in Appendix C, with a self-contained tutorial of relevant algebraic concepts in Appendix A.2. For the empirical results, all our datasets are derived from synthetic distributions, which are clearly described in Appendices B.1 and B.2. The architectures, implementations (with references to popular base repositories), and hyperparameters (including training procedure) are documented in Appendix B.3. Our open-source data-generating code is available from our project page.
Transformers Learn Shortcuts to Automata
d257232817
We give the first efficient algorithm for learning halfspaces in the testable learning model recently defined by Rubinfeld and Vasilyan [RV23]. In this model, a learner certifies that the accuracy of its output hypothesis is near optimal whenever the training set passes an associated test, and training sets drawn from some target distribution (e.g., the Gaussian) must pass the test. This model is more challenging than distribution-specific agnostic or Massart noise models, where the learner is allowed to fail arbitrarily if the distributional assumption does not hold. We consider the setting where the target distribution is Gaussian (or more generally any strongly log-concave distribution) in d dimensions and the noise model is either Massart or adversarial (agnostic). For Massart noise, our tester-learner runs in polynomial time and outputs a hypothesis with (information-theoretically optimal) error opt + ε for any strongly log-concave target distribution. For adversarial noise, our tester-learner obtains error O(opt) + ε in polynomial time when the target distribution is Gaussian; for strongly log-concave distributions, we obtain O(opt) + ε in quasipolynomial time. Prior work on testable learning ignores the labels in the training set and checks that the empirical moments of the covariates are close to the moments of the base distribution. Here we develop new tests of independent interest that make critical use of the labels and combine them with the moment-matching approach of [GKK23]. This enables us to simulate a variant of the algorithm of [DKTZ20a, DKTZ20b] for learning noisy halfspaces using nonconvex SGD, but in the testable learning setting. Learning halfspaces in the presence of noise is one of the most basic and well-studied problems in computational learning theory. A large body of work has obtained results for this problem under a variety of different noise models and distributional assumptions (see e.g. [BH21] for a survey). A major issue with common distributional assumptions such as Gaussianity, however, is that they can be hard or impossible to verify in the absence of any prior information. The recently defined model of testable learning [RV23] addresses this issue by replacing such assumptions with efficiently testable ones. In this model, the learner is required to work with an arbitrary input distribution D_{XY} and verify any assumptions it needs to succeed. It may choose to reject a given training set, but if it accepts, it is required to output a hypothesis with error close to opt(C, D_{XY}), the optimal error achievable over D_{XY} by any function in a concept class C. Further, whenever the training set is drawn from a distribution D_{XY} whose marginal is truly a well-behaved target distribution D* (such as the standard Gaussian), the algorithm is required to accept with high probability. Such an algorithm, or tester-learner, is then said to testably learn C with respect to target marginal D*. (See Definition 2.1 for a formal definition.) Note that unlike ordinary distribution-specific agnostic learners, a tester-learner must take some nontrivial action regardless of the input distribution. The work of [RV23, GKK23] established foundational algorithmic and statistical results for this model and showed that testable learning is in general provably harder than ordinary distribution-specific agnostic learning.
As one of their main algorithmic results, they showed tester-learners for the class of halfspaces over R^d that succeed whenever the target marginal is Gaussian (or one of a more general class of distributions), achieving error opt + ε in time and sample complexity d^{O(1/ε^2)}. This matches the running time of ordinary distribution-specific agnostic learning of halfspaces over the Gaussian using the standard approach of [KKMS08]. Their testers are simple and label-oblivious, and are based on checking whether the low-degree empirical moments of the unknown marginal match those of the target D*. These works essentially resolve the question of designing tester-learners achieving error opt + ε for halfspaces, matching known hardness results for (ordinary) agnostic learning [GGK20, DKZ20, DKPZ21]. Their running time, however, necessarily scales exponentially in 1/ε. A long line of research has sought to obtain more efficient algorithms at the cost of relaxing the optimality guarantee [ABL17, DKS18, DKTZ20a, DKTZ20b]. These works give polynomial-time algorithms achieving bounds of the form opt + ε and O(opt) + ε for the Massart and agnostic settings respectively, under structured distributions (see Section 1.1 for more discussion). The main question we consider here is whether such guarantees can be obtained in the testable learning framework.
An Efficient Tester-Learner for Halfspaces
d225062170
The COVID-19 pandemic has spread rapidly worldwide, overwhelming manual contact tracing in many countries and resulting in widespread lockdowns for emergency containment. Large-scale digital contact tracing (DCT) has emerged as a potential solution to resume economic and social activity while minimizing spread of the virus. Various DCT methods have been proposed, each making trade-offs between privacy, mobility restrictions, and public health. The most common approach, binary contact tracing (BCT), models infection as a binary event, informed only by an individual's test results, with corresponding binary recommendations that either all or none of the individual's contacts quarantine. BCT ignores the inherent uncertainty in contacts and the infection process, which could be used to tailor messaging to high-risk individuals, and prompt proactive testing or earlier warnings. It also does not make use of observations such as symptoms or pre-existing medical conditions, which could be used to make more accurate infectiousness predictions. In this paper, we use a recently-proposed COVID-19 epidemiological simulator to develop and test methods that can be deployed to a smartphone to locally and proactively predict an individual's infectiousness (risk of infecting others) based on their contact history and other information, while respecting strong privacy constraints. Predictions are used to provide personalized recommendations to the individual via an app, as well as to send anonymized messages to the individual's contacts, who use this information to better predict their own infectiousness, an approach we call proactive contact tracing (PCT). Similarly to other works, we find that compared to no tracing, all DCT methods tested are able to reduce spread of the disease and thus save lives, even at low adoption rates, strongly supporting a role for DCT methods in managing the pandemic. Further, we find a deep-learning based PCT method which improves over BCT for equivalent average mobility, suggesting PCT could help in safe re-opening and second-wave prevention.
PREDICTING INFECTIOUSNESS FOR PROACTIVE CONTACT TRACING
d52895832
Adam has been shown to be unable to converge to the optimal solution in certain cases. Researchers have recently proposed several algorithms to avoid this non-convergence issue of Adam, but their efficiency turns out to be unsatisfactory in practice. In this paper, we provide a new insight into the non-convergence issue of Adam as well as other adaptive learning rate methods. We argue that there exists an inappropriate correlation between the gradient g_t and the second moment term v_t in Adam (t is the timestep), with the result that a large gradient is likely to have a small step size while a small gradient may have a large step size. We demonstrate that such unbalanced step sizes are the fundamental cause of the non-convergence of Adam, and we further prove that decorrelating v_t and g_t leads to an unbiased step size for each gradient, thus solving the non-convergence problem of Adam. Finally, we propose AdaShift, a novel adaptive learning rate method that decorrelates v_t and g_t by temporal shifting, i.e., using the temporally shifted gradient g_{t-n} to calculate v_t. The experiment results demonstrate that AdaShift is able to address the non-convergence issue of Adam, while still maintaining a competitive performance with Adam in terms of both training speed and generalization.
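A simplified sketch of the temporal-shifting idea follows: the second moment is updated with a gradient from n steps earlier, so it is approximately independent of the current gradient used in the numerator. The spatial averaging operation over the shifted gradient and the first-moment handling from the full algorithm are omitted here, and the hyperparameters are illustrative.

```python
from collections import deque
import numpy as np

class AdaShiftSketch:
    """Adaptive step sizes with a temporally shifted second-moment estimate."""
    def __init__(self, lr=1e-3, beta2=0.999, n=10, eps=1e-8):
        self.lr, self.beta2, self.n, self.eps = lr, beta2, n, eps
        self.buffer, self.v = deque(), None

    def step(self, params, grad):
        if self.v is None:
            self.v = np.zeros_like(params)
        self.buffer.append(grad.copy())
        if len(self.buffer) <= self.n:        # not enough history yet: plain SGD step
            return params - self.lr * grad
        g_shifted = self.buffer.popleft()     # gradient from n steps earlier
        self.v = self.beta2 * self.v + (1 - self.beta2) * g_shifted ** 2
        return params - self.lr * grad / (np.sqrt(self.v) + self.eps)
```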
ADASHIFT: DECORRELATION AND CONVERGENCE OF ADAPTIVE LEARNING RATE METHODS
d251564473
The remarkable performance gains realized by large pretrained models, e.g., GPT-3, hinge on the massive amounts of data they are exposed to during training. Analogously, distilling such large models to compact models for efficient deployment also necessitates a large amount of (labeled or unlabeled) training data. In this paper, we propose the teacher-guided training (TGT) framework for training a high-quality compact model that leverages the knowledge acquired by pretrained generative models, while obviating the need to go through a large volume of data. TGT exploits the fact that the teacher has acquired a good representation of the underlying data domain, which typically corresponds to a much lower dimensional manifold than the input space. Furthermore, we can use the teacher to explore input space more efficiently through sampling or gradient-based methods; thus, making TGT especially attractive for limited data or long-tail settings. We formally capture this benefit of proposed data-domain exploration in our generalization bounds. We find that TGT can improve accuracy on several image classification benchmarks as well as a range of text classification and retrieval tasks.
Teacher Guided Training: An Efficient Framework for Knowledge Transfer
d259313791
This paper proposes a simple method to distill and detect backdoor patterns within an image: Cognitive Distillation (CD). The idea is to extract the "minimal essence" from an input image responsible for the model's prediction. CD optimizes an input mask to extract a small pattern from the input image that can lead to the same model output (i.e., logits or deep features). The extracted pattern can help understand the cognitive mechanism of a model on clean vs. backdoor images and is thus called a Cognitive Pattern (CP). Using CD and the distilled CPs, we uncover an interesting phenomenon of backdoor attacks: despite the various forms and sizes of trigger patterns used by different attacks, the CPs of backdoor samples are all surprisingly and suspiciously small. One thus can leverage the learned mask to detect and remove backdoor examples from poisoned training datasets. We conduct extensive experiments to show that CD can robustly detect a wide range of advanced backdoor attacks. We also show that CD can potentially be applied to help detect potential biases from face datasets.
DISTILLING COGNITIVE BACKDOOR PATTERNS WITHIN AN IMAGE
d220127956
Influence functions approximate the effect of training samples in test-time predictions and have a wide variety of applications in machine learning interpretability and uncertainty estimation. A commonly-used (first-order) influence function can be implemented efficiently as a post-hoc method requiring access only to the gradients and Hessian of the model. For linear models, influence functions are well-defined due to the convexity of the underlying loss function and are generally accurate even across difficult settings where model changes are fairly large such as estimating group influences. Influence functions, however, are not well-understood in the context of deep learning with non-convex loss functions. In this paper, we provide a comprehensive and large-scale empirical study of successes and failures of influence functions in neural network models trained on datasets such as Iris, MNIST, CIFAR-10 and ImageNet. Through our extensive experiments, we show that the network architecture, its depth and width, as well as the extent of model parameterization and regularization techniques have strong effects in the accuracy of influence functions. In particular, we find that (i) influence estimates are fairly accurate for shallow networks, while for deeper networks the estimates are often erroneous; (ii) for certain network architectures and datasets, training with weight-decay regularization is important to get high-quality influence estimates; and (iii) the accuracy of influence estimates can vary significantly depending on the examined test points. These results suggest that in general influence functions in deep learning are fragile and call for developing improved influence estimation methods to mitigate these issues in non-convex setups.
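For reference, the first-order influence function referred to above is commonly written as follows (stated here from the general literature rather than this paper); for non-convex deep networks the Hessian may be indefinite or ill-conditioned, which is closely related to the fragility discussed above:

```latex
\[
\mathcal{I}(z, z_{\mathrm{test}})
\;=\;
-\,\nabla_{\theta} L\big(z_{\mathrm{test}}, \hat{\theta}\big)^{\top}
H_{\hat{\theta}}^{-1}\,
\nabla_{\theta} L\big(z, \hat{\theta}\big),
\qquad
H_{\hat{\theta}} \;=\; \frac{1}{n}\sum_{i=1}^{n} \nabla_{\theta}^{2} L\big(z_i, \hat{\theta}\big).
\]
```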
Influence Functions in Deep Learning Are Fragile