_id | text | title |
|---|---|---|
d219981518 | Calibrating neural networks is of utmost importance when employing them in safety-critical applications where the downstream decision making depends on the predicted probabilities. Measuring calibration error amounts to comparing two empirical distributions. In this work, we introduce a binning-free calibration measure inspired by the classical Kolmogorov-Smirnov (KS) statistical test, in which the main idea is to compare the respective cumulative probability distributions. From this, by approximating the empirical cumulative distribution with a differentiable function via splines, we obtain a recalibration function, which maps the network outputs to actual (calibrated) class assignment probabilities. The spline fitting is performed on a held-out calibration set and the obtained recalibration function is evaluated on an unseen test set. We tested our method against existing calibration approaches on various image classification datasets, and our spline-based recalibration approach consistently outperforms existing methods on KS error as well as other commonly used calibration measures. | Published as a conference paper at ICLR 2021 CALIBRATION OF NEURAL NETWORKS USING SPLINES |
d3543617 | Deep generative models have enjoyed success in modeling continuous data. However, it remains challenging to capture representations for discrete structures with formal grammars and semantics, e.g., computer programs and molecular structures. How to generate data that is both syntactically and semantically correct remains largely an open problem. Inspired by compiler theory, where syntax and semantics checks are done via syntax-directed translation (SDT), we propose a novel syntax-directed variational autoencoder (SD-VAE) by introducing stochastic lazy attributes. This approach converts the offline SDT check into on-the-fly generated guidance for constraining the decoder. Compared to state-of-the-art methods, our approach enforces constraints on the output space so that the output is not only syntactically valid but also semantically reasonable. We evaluate the proposed model with applications in programming languages and molecules, including reconstruction and program/molecule optimization. The results demonstrate the effectiveness of incorporating syntactic and semantic constraints in discrete generative models, which is significantly better than current state-of-the-art approaches. | Published as a conference paper at ICLR 2018 SYNTAX-DIRECTED VARIATIONAL AUTOENCODER FOR STRUCTURED DATA |
d256615568 | Moiré patterns appear frequently when taking photos of digital screens, drastically degrading image quality. Despite the advances of CNNs in image demoiréing, existing networks have heavy designs, imposing a redundant computation burden on mobile devices. In this paper, we launch the first study on accelerating demoiréing networks and propose a dynamic demoiréing acceleration method (DDA) towards real-time deployment on mobile devices. Our motivation stems from a simple yet universal fact: moiré patterns are often unevenly distributed across an image, so excessive computation is wasted on non-moiré areas. We therefore reallocate computation costs in proportion to the complexity of image patches. To achieve this, we measure the complexity of an image patch with a novel moiré prior that considers both the colorfulness and the frequency information of moiré patterns. We then restore higher-complexity patches using larger networks, while lower-complexity patches are assigned smaller networks to relieve the computation burden. Finally, we train all networks in a parameter-shared supernet paradigm to avoid an additional parameter burden. Extensive experiments on several benchmarks demonstrate the efficacy of our proposed DDA. In addition, acceleration evaluated on a VIVO X80 Pro smartphone equipped with a Snapdragon 8 Gen 1 chip shows that our method drastically reduces inference time, enabling real-time image demoiréing on mobile devices. Source codes and models are released at https://github.com/zyxxmu/DDA. | REAL-TIME IMAGE DEMOIRÉING ON MOBILE DEVICES |
d249375525 | The asymptotic mean squared test error and sensitivity of the Random Features Regression model (RFR) have been recently studied. We build on this work and identify in closed form the family of Activation Functions (AFs) that minimize a combination of the test error and sensitivity of the RFR under different notions of functional parsimony. We find scenarios under which the optimal AFs are linear, saturated linear functions, or expressible in terms of Hermite polynomials. Finally, we show how using optimal AFs impacts well-established properties of the RFR model, such as its double descent curve and the dependency of its optimal regularization parameter on the observation noise level. | Published as a conference paper at ICLR 2023 OPTIMAL ACTIVATION FUNCTIONS FOR THE RANDOM FEATURES REGRESSION MODEL |
d15530352 | Dataset augmentation, the practice of applying a wide array of domain-specific transformations to synthetically expand a training set, is a standard tool in supervised learning. While effective in tasks such as visual recognition, the set of transformations must be carefully designed, implemented, and tested for every new domain, limiting its re-use and generality. In this paper, we adopt a simpler, domain-agnostic approach to dataset augmentation. We start with existing data points and apply simple transformations such as adding noise, interpolating, or extrapolating between them. Our main insight is to perform the transformation not in input space, but in a learned feature space. A rekindling of interest in unsupervised representation learning makes this technique timely and more effective. It is a simple proposal, but to date one that has not been tested empirically. Working in the space of context vectors generated by sequence-to-sequence models, we demonstrate a technique that is effective for both static and sequential data. | Workshop track - ICLR 2017 DATASET AUGMENTATION IN FEATURE SPACE |
d244478279 | The lottery ticket hypothesis has sparked the rapid development of pruning algorithms that aim to reduce the computational costs associated with deep learning during training and model deployment. Currently, such algorithms are primarily evaluated on imaging data, for which we lack ground-truth information and thus an understanding of how sparse lottery tickets could be. To fill this gap, we develop a framework that allows us to plant and hide winning tickets with desirable properties in randomly initialized neural networks. To analyze the ability of state-of-the-art pruning to identify tickets of extreme sparsity, we design and hide such tickets solving four challenging tasks. In extensive experiments, we observe similar trends as in imaging studies, indicating that our framework can provide transferable insights into realistic problems. Additionally, we can now see beyond such relative trends and highlight limitations of current pruning methods. Based on our results, we conclude that the current limitations in ticket sparsity are likely of algorithmic rather than fundamental nature. We anticipate that comparisons to planted tickets will facilitate future development of efficient pruning algorithms. The planted tasks reflect three common challenges in machine learning, and we use this experimental set-up to compare state-of-the-art pruning algorithms designed to search for lottery tickets. Our results indicate that state-of-the-art methods achieve sub-optimal sparsity levels and are not able to recover good tickets before training. The qualitative trends are consistent with previous results on image classification tasks (Tanaka et al., 2020; Frankle et al., 2021), indicating that our experimental set-up exposes pruning algorithms to realistic challenges. In addition, we improve a state-of-the-art pruning algorithm towards finding strong lottery tickets of better sparsity. Our proposed planting framework will enable the evaluation of future progress in this direction. Contributions: 1) We prove the existence of strong lottery tickets with sparse representations. 2) Inspired by the proof, we derive a framework that allows us to plant and hide strong tickets in neural networks and thus create benchmark data with known ground truth. 3) We construct sparse representations of three types of tickets that reflect typical machine learning problems. 4) We systematically evaluate state-of-the-art pruning methods that aim to discover tickets on these three problems against the ground-truth tickets and highlight key challenges. | Published as a conference paper at ICLR 2022 PLANT 'N' SEEK: CAN YOU FIND THE WINNING TICKET? |
d258437184 | Machine learning models fail to perform when facing out-of-distribution (OOD) domains, a challenge known as domain generalization (DG). In this work, we develop a novel DG training strategy, called PGrad, to learn a robust gradient direction, improving models' generalization ability on unseen domains. The proposed gradient aggregates the principal directions of a sampled roll-out optimization trajectory that measures the training dynamics across all training domains. PGrad's gradient design forces the DG training to ignore domain-dependent noise signals and updates all training domains with a robust direction covering the main components of parameter dynamics. We further improve PGrad via bijection-based computational refinement and directional plus length-based calibrations. Our theoretical proof connects PGrad to the spectral analysis of the Hessian in training neural networks. Experiments on DomainBed and WILDS benchmarks demonstrate that our approach effectively enables robust DG optimization and leads to smoothly decreasing loss curves. Empirically, PGrad achieves competitive results across seven datasets, demonstrating its efficacy across both synthetic and real-world distributional shifts. Code is available at https://github.com/QData/PGrad. Recent literature covers a wide spectrum of DG methods, including invariant representation learning, meta-learning, data augmentation, ensemble learning, and gradient manipulation (more details in Section 2.4). Despite the large body of recent DG literature, the authors of (Gulrajani & Lopez-Paz, 2021) showed that empirical risk minimization (ERM) provides a competitive baseline on many real-world DG benchmarks. ERM does not explicitly address distributional shifts during training. Instead, ERM calculates the gradient from each training domain and updates a model with the average gradient. However, one caveat of ERM is that its model update relies on this average gradient. (In the rest of this paper, we use the terms "domain" and "distribution" interchangeably.) | Published as a conference paper at ICLR 2023 PGRAD: LEARNING PRINCIPAL GRADIENTS FOR DOMAIN GENERALIZATION |
d252199584 | Partial differential equations (PDEs) see widespread use in science and engineering to describe the simulation of physical processes as scalar and vector fields interacting and coevolving over time. Due to the computationally expensive nature of their standard solution methods, neural PDE surrogates have become an active research topic for accelerating these simulations. However, current methods do not explicitly take into account the relationship between different fields and their internal components, which are often correlated. Viewing the time evolution of such correlated fields through the lens of multivector fields allows us to overcome these limitations. Multivector fields consist of scalar and vector components, as well as higher-order components such as bivectors and trivectors. Their algebraic properties, such as multiplication, addition, and other arithmetic operations, can be described by Clifford algebras. To our knowledge, this paper presents the first usage of such multivector representations together with Clifford convolutions and Clifford Fourier transforms in the context of deep learning. The resulting Clifford neural layers are universally applicable and will find direct use in the areas of fluid dynamics, weather forecasting, and the modeling of physical systems in general. We empirically evaluate the benefit of Clifford neural layers by replacing convolution and Fourier operations in common neural PDE surrogates with their Clifford counterparts on 2D Navier-Stokes and weather modeling tasks, as well as 3D Maxwell equations. For a similar parameter count, Clifford neural layers consistently improve the generalization capabilities of the tested neural PDE surrogates. Source code for our PyTorch implementation is available. | Published as a conference paper at ICLR 2023 CLIFFORD NEURAL LAYERS FOR PDE MODELING |
d235266229 | In recent years, implicit deep learning has emerged as a method to increase the effective depth of deep neural networks. While their training is memory-efficient, implicit models are still significantly slower to train than their explicit counterparts. In Deep Equilibrium Models (DEQs), training is performed as a bi-level problem, and its computational complexity is partially driven by the iterative inversion of a huge Jacobian matrix. In this paper, we propose a novel strategy to tackle this computational bottleneck, from which many bi-level problems suffer. The main idea is to use the quasi-Newton matrices from the forward pass to efficiently approximate the inverse Jacobian matrix in the direction needed for the gradient computation. We provide a theorem that motivates using our method with the original forward algorithms. In addition, by modifying these forward algorithms, we further provide theoretical guarantees that our method asymptotically estimates the true implicit gradient. We empirically study this approach and the recent Jacobian-Free method in different settings, ranging from hyperparameter optimization to large Multiscale DEQs (MDEQs) applied to CIFAR and ImageNet. Both methods significantly reduce the computational cost of the backward pass. While SHINE has a clear advantage on hyperparameter optimization problems, both methods attain similar computational performance for larger-scale problems such as MDEQs, at the cost of a limited performance drop compared to the original models. | SHINE: SHARING THE INVERSE ESTIMATE FROM THE FORWARD PASS FOR BI-LEVEL OPTIMIZATION AND IMPLICIT MODELS |
d236034533 | Recent progress in language model pre-training has achieved great success by leveraging large-scale unstructured textual data. However, it remains a challenge to apply pre-training to structured tabular data due to the absence of large-scale, high-quality tabular data. In this paper, we propose TAPEX to show that table pre-training can be achieved by learning a neural SQL executor over a synthetic corpus, which is obtained by automatically synthesizing executable SQL queries and their execution outputs. TAPEX addresses the data scarcity challenge by guiding the language model to mimic a SQL executor on the diverse, large-scale, and high-quality synthetic corpus. We evaluate TAPEX on four benchmark datasets. Experimental results demonstrate that TAPEX outperforms previous table pre-training approaches by a large margin and achieves new state-of-the-art results on all of them. This includes improvements on the weakly-supervised WikiSQL denotation accuracy to 89.5% (+2.3%), the WikiTableQuestions denotation accuracy to 57.5% (+4.8%), the SQA denotation accuracy to 74.5% (+3.5%), and the TabFact accuracy to 84.2% (+3.2%). To our knowledge, this is the first work to exploit table pre-training via synthetic executable programs and to achieve new state-of-the-art results on various downstream tasks. Our code can be found at https://github.com | Published as a conference paper at ICLR 2022 TAPEX: TABLE PRE-TRAINING VIA LEARNING A NEURAL SQL EXECUTOR |
d252118781 | We present a neural network architecture, Bispectral Neural Networks (BNNs), for learning representations that are invariant to the actions of compact commutative groups on the space over which a signal is defined. The model incorporates the ansatz of the bispectrum, an analytically defined group invariant that is complete: that is, it preserves all signal structure while removing only the variation due to group actions. Here, we demonstrate that BNNs are able to simultaneously learn groups, their irreducible representations, and corresponding equivariant and complete-invariant maps purely from the symmetries implicit in data. Further, we demonstrate that the completeness property endows these networks with strong invariance-based adversarial robustness. This work establishes Bispectral Neural Networks as a powerful computational primitive for robust invariant representation learning. Each layer computes a collection of triple products from the output of the previous layer. BNNs are trained with an objective function consisting of two terms: one that collapses all transformations of a pattern to a single point in the output (invariance), and another that prevents information collapse in the first layer (selectivity). We demonstrate that BNNs trained to separate orbit classes in augmented data learn the group, its Fourier transform, and the corresponding bispectrum purely from the symmetries implicit in the data (Section 4.1). Because the model has learned the fundamental structure of the group, we show that it generalizes to novel, out-of-distribution classes with the same group structure and facilitates downstream group-invariant classification (Section 4.2). Further, we demonstrate that the trained network inherits the completeness of the analytical model, which endows the network with strong adversarial robustness (Section 4.3). Finally, we demonstrate that the weights of the network can be used to recover the group Cayley table, the fundamental signature of a group's structure (Section 4.4). Thus, an explicit model of the group can be learned and extracted from the network weights. To our knowledge, our work is the first to demonstrate that either a bispectrum or a group Cayley table can be learned from data alone. Our results set the foundation of a new computational primitive for robust and interpretable representation learning. | Published as a conference paper at ICLR 2023 BISPECTRAL NEURAL NETWORKS |
d234742529 | We consider the problem of learning a latent k-vertex simplex K ⊂ ℝ^d, given access to A ∈ ℝ^{d×n}, which can be viewed as a data matrix with n points that are obtained by randomly perturbing latent points in the simplex K (potentially beyond K). A large class of latent variable models, such as adversarial clustering, mixed membership stochastic block models, and topic models, can be cast as learning a latent simplex. Bhattacharyya and Kannan (SODA, 2020) give an algorithm for learning such a latent simplex in time roughly O(k · nnz(A)), where nnz(A) is the number of non-zeros in A. We show that the dependence on k in the running time is unnecessary given a natural assumption about the mass of the top k singular values of A, which holds in many of these applications. Further, we show this assumption is necessary, as otherwise an algorithm for learning a latent simplex would imply an algorithmic breakthrough for spectral low-rank approximation. At a high level, Bhattacharyya and Kannan provide an adaptive algorithm that makes k matrix-vector product queries to A, where each query is a function of all queries preceding it. Since each matrix-vector product requires nnz(A) time, their overall running time appears unavoidable. Instead, we obtain a low-rank approximation to A in input-sparsity time and show that the column space thus obtained has small sin Θ (angular) distance to the right top-k singular space of A. Our algorithm then selects k points in the low-rank subspace with the largest inner product (in absolute value) with k carefully chosen random vectors. By working in the low-rank subspace, we avoid reading the entire matrix in each iteration and thus circumvent the Θ(k · nnz(A)) running time. We study the problem of learning the vertices M_{*,1}, ..., M_{*,k} of a latent k-dimensional simplex in ℝ^d using n data points generated from K and then possibly perturbed by a stochastic, deterministic, or adversarial source before being given to the algorithm. In particular, the resulting points observed as input data could be heavily perturbed, so that the initial points may no longer be discernible, or they could be outside the simplex K. Recent work of Bhattacharyya and Kannan [BK20b] unifies several stochastic models for unsupervised learning problems, including k-means clustering [CG92, GH+96, Web03, WT10, Dua20], topic models [BJ03, SG07, BL06a, Ble12, AGH+13a], mixed membership stochastic block models [ABFX08, MJG09, XFS+10, FSX09, ABEF14, LAW16, FXC16], and non-negative matrix factorization [AGH+13b, GV14, Gil20] under the problem of learning a latent simplex. In general, identifying the latent simplex can be computationally intractable. However, many special applications do not require the full generality. For example, in a mixture model like Gaussian mixtures, the data is assumed to be generated from a convex combination of density functions. Thus, it may be possible to efficiently approximately learn the latent simplex given certain distributional properties in these models. Indeed, Bhattacharyya and Kannan showed that given certain reasonable geometric assumptions that are typically satisfied for real-world instances of Latent Dirichlet Allocation, stochastic block models, and clustering, there exists an O(k · nnz(A))-time algorithm for recovering the vertices of the underlying simplex. We show that, given an additional natural assumption, we can remove the dependency on k and obtain a true input-sparsity time algorithm. We begin by defining the model along with our new assumption: Definition 1. | Learning a Latent Simplex in Input-Sparsity Time |
d218487350 | Although Neural Differential Equations have shown promise on toy problems such as MNIST, they have yet to be successfully applied to more challenging tasks. Inspired by variational methods for image restoration relying on partial differential equations, we choose to benchmark several forms of Neural DEs and backpropagation methods on single image super-resolution. The adjoint method previously proposed for gradient estimation has no theoretical stability guarantees; we find a practical case where this makes it unusable, and show that discrete sensitivity analysis has better stability. In our experiments, differential models match the performance of a state-of-the-art super-resolution model. | Neural Differential Equations for Single Image Super-Resolution |
d211259530 | The vulnerabilities of deep neural networks to adversarial examples have become a significant concern for deploying these models in sensitive domains. Devising a definitive defense against such attacks has proven challenging, and methods relying on detecting adversarial samples are only valid when the attacker is oblivious to the detection mechanism. In this paper, we first present an adversarial example detection method that provides a performance guarantee against norm-constrained adversaries. The method is based on the idea of training adversarially robust subspace detectors using asymmetrical adversarial training (AAT). The novel AAT objective presents a minimax problem similar to that of GANs; it has the same convergence property and consequently supports the learning of class-conditional distributions. We first demonstrate that the minimax problem can be reasonably solved by a PGD attack, and then use the learned class-conditional generative models to define generative detection/classification models that are both robust and more interpretable. We provide comprehensive evaluations of the above methods and demonstrate their competitive performance and compelling properties on adversarial detection and robust classification problems. | ADVERSARIAL EXAMPLE DETECTION AND CLASSIFICATION WITH ASYMMETRICAL ADVERSARIAL TRAINING |
d259108315 | Although self-supervised learning (SSL) has been widely studied as a promising technique for representation learning, it does not generalize well on long-tailed datasets because the majority classes dominate the feature space. Recent work shows that long-tailed learning performance can be boosted by sampling extra in-domain (ID) data for self-supervised training; however, large-scale ID data that can rebalance the minority classes are expensive to collect. In this paper, we propose an alternative but easy-to-use and effective solution, Contrastive with Out-of-distribution (OOD) data for Long-Tail learning (COLT), which can effectively exploit OOD data to dynamically re-balance the feature space. We empirically identify the counter-intuitive usefulness of OOD samples in SSL long-tailed learning and principally design a novel SSL method. Concretely, we first localize the 'head' and 'tail' samples by assigning a tailness score to each OOD sample based on its neighborhoods in the feature space. Then, we propose an online OOD sampling strategy to dynamically re-balance the feature space. Finally, we enforce the model to be capable of distinguishing ID and OOD samples via a distribution-level supervised contrastive loss. Extensive experiments are conducted on various datasets and several state-of-the-art SSL frameworks to verify the effectiveness of the proposed method. The results show that our method significantly improves the performance of SSL on long-tailed datasets by a large margin, and even outperforms previous work that uses external ID data. Our code is available at | ON THE EFFECTIVENESS OF OUT-OF-DISTRIBUTION DATA IN SELF-SUPERVISED LONG-TAIL LEARNING |
d485828 | We propose an approach to learn spatio-temporal features in videos from intermediate visual representations we call "percepts" using Gated-Recurrent-Unit Recurrent Networks (GRUs). Our method relies on percepts that are extracted from all levels of a deep convolutional network trained on the large ImageNet dataset. While high-level percepts contain highly discriminative information, they tend to have a low spatial resolution. Low-level percepts, on the other hand, preserve a higher spatial resolution from which we can model finer motion patterns. Using low-level percepts, however, can lead to high-dimensional video representations. To mitigate this effect and control the number of parameters, we introduce a variant of the GRU model that leverages convolution operations to enforce sparse connectivity of the model units and share parameters across the input spatial locations. We empirically validate our approach on both Human Action Recognition and Video Captioning tasks. In particular, we achieve results equivalent to the state-of-the-art on the YouTube2Text dataset using a simpler caption-decoder model and without extra 3D CNN features. | Published as a conference paper at ICLR 2016 DELVING DEEPER INTO CONVOLUTIONAL NETWORKS FOR LEARNING VIDEO REPRESENTATIONS |
d257102785 | We tackle the domain generalisation (DG) problem by posing it as a domain adaptation (DA) task, where we adversarially synthesise the worst-case 'target' domain and adapt a model to that worst-case domain, thereby improving the model's robustness. To synthesise data that is challenging yet semantics-preserving, we generate Fourier amplitude images and combine them with source-domain phase images, exploiting the widely believed conjecture from signal processing that amplitude spectra mainly determine image style, while phase data mainly captures image semantics. To synthesise a worst-case domain for adaptation, we train the classifier and the amplitude generator adversarially. Specifically, we exploit the maximum classifier discrepancy (MCD) principle from DA, which relates the target domain performance to the discrepancy of classifiers in the model hypothesis space. By Bayesian hypothesis modeling, we express the model hypothesis space effectively as a posterior distribution over classifiers given the source domains, making adversarial MCD minimisation feasible. On the DomainBed benchmark, including the large-scale DomainNet dataset, the proposed approach yields significantly improved domain generalisation performance over the state-of-the-art. | Published as a conference paper at ICLR 2023 DOMAIN GENERALISATION VIA DOMAIN ADAPTATION: AN ADVERSARIAL FOURIER AMPLITUDE APPROACH |
d1597636 | Biologically inspired, from the early HMAX model to Spatial Pyramid Matching, pooling has played an important role in visual recognition pipelines. Spatial pooling, by grouping local codes, equips these methods with a certain degree of robustness to translation and deformation while preserving important spatial information. Despite the predominance of this approach in current recognition systems, we have seen little progress toward fully adapting the pooling strategy to the task at hand. This paper proposes a model for learning task-dependent pooling schemes, including previously proposed hand-crafted pooling schemes as particular instantiations. In our work, we investigate the role of different regularization terms, showing that the smooth regularization term is crucial for achieving strong performance with the presented architecture. Finally, we propose an efficient and parallel method to train the model. Our experiments show improved performance over hand-crafted pooling schemes on the CIFAR-10 and CIFAR-100 datasets, in particular improving the state-of-the-art to 56.29% on the latter. | Learnable Pooling Regions for Image Classification |
d220128149 | Adversarial poisoning attacks distort training data in order to corrupt the test-time behavior of a classifier. A provable defense provides a certificate for each test sample, which is a lower bound on the magnitude of any adversarial distortion of the training set that can corrupt the test sample's classification. We propose two novel provable defenses against poisoning attacks: (i) Deep Partition Aggregation (DPA), a certified defense against a general poisoning threat model, defined as the insertion or deletion of a bounded number of samples into the training set -by implication, this threat model also includes arbitrary distortions to a bounded number of images and/or labels; and (ii) Semi-Supervised DPA (SS-DPA), a certified defense against label-flipping poisoning attacks. DPA is an ensemble method where base models are trained on partitions of the training set determined by a hash function. DPA is related to subset aggregation, a well-studied ensemble method in classical machine learning. DPA can also be regarded as an extension of randomized ablation (Levine and Feizi, 2020a), a smoothing-based certified defense against sparse evasion attacks, to the poisoning domain. Our defense against label-flipping poison attacks, SS-DPA, uses a semi-supervised learning algorithm as its base classifier model: each base classifier is trained using the entire unlabeled training set in addition to the labels for a partition. SS-DPA outperforms the existing certified defense for label-flipping attacks (Rosenfeld et al., 2020). SS-DPA can certify ≥ 50% of test images against 675 label flips (vs. < 200 label flips with the existing defense) on MNIST and 83 label flips on CIFAR-10. Against general poisoning attacks (no prior certified defenses), DPA can certify ≥ 50% of test images against over 500 poison image insertions on MNIST, and nine insertions on CIFAR-10. These results establish new state-of-the-art provable defenses against poison attacks. | Deep Partition Aggregation: Provable Defense against General Poisoning Attacks
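The partition-and-vote idea behind DPA in the row above can be sketched in a few lines. This is a toy illustration only: the hash-based partitioning and the vote-gap certificate follow the abstract, but the `partition` helper, the vote counts, and the string sample IDs are hypothetical, the "base models" are elided entirely, and the actual defense's tie-breaking conventions are ignored.

```python
import hashlib

# Toy sketch of Deep Partition Aggregation (DPA): training samples are
# hashed into disjoint partitions, one base model is trained per partition,
# and the ensemble predicts by majority vote.
def partition(sample_id: str, k: int) -> int:
    """Assign a training sample to one of k partitions via a hash function."""
    return int(hashlib.sha256(sample_id.encode()).hexdigest(), 16) % k

# Hypothetical votes from base models trained on the disjoint partitions.
votes = {"cat": 7, "dog": 3}
top, runner_up = sorted(votes.values(), reverse=True)[:2]

# Each inserted or deleted training sample lands in at most one partition,
# so it can change at most one base model's vote; flipping the ensemble
# prediction therefore needs at least (gap // 2) poisoned samples.
certificate = (top - runner_up) // 2
print(partition("img_00042", 50), certificate)
```

The certificate is per-test-sample: a larger vote gap on a given input means more poisoned training samples are needed to corrupt that input's classification.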
d219981806 | A key challenge in adversarial robustness is the lack of a precise mathematical characterization of human perception, used in the definition of adversarial attacks that are imperceptible to human eyes. Most current attacks and defenses try to avoid this issue by considering restrictive adversarial threat models such as those bounded by L2 or L∞ distance, spatial perturbations, etc. However, models that are robust against any of these restrictive threat models are still fragile against other threat models, i.e. they have poor generalization to unforeseen attacks. Moreover, even if a model is robust against the union of several restrictive threat models, it is still susceptible to other imperceptible adversarial examples that are not contained in any of the constituent threat models. To resolve these issues, we propose adversarial training against the set of all imperceptible adversarial examples. Since this set is intractable to compute without a human in the loop, we approximate it using deep neural networks. We call this threat model the neural perceptual threat model (NPTM); it includes adversarial examples with a bounded neural perceptual distance (a neural network-based approximation of the true perceptual distance) to natural images. Through an extensive perceptual study, we show that the neural perceptual distance correlates well with human judgements of perceptibility of adversarial examples, validating our threat model. Under the NPTM, we develop novel perceptual adversarial attacks and defenses. Because the NPTM is very broad, we find that Perceptual Adversarial Training (PAT) against a perceptual attack gives robustness against many other types of adversarial attacks. We test PAT on CIFAR-10 and ImageNet-100 against five diverse adversarial attacks: L2, L∞, spatial, recoloring, and JPEG.
We find that PAT achieves state-of-the-art robustness against the union of these five attacks-more than doubling the accuracy over the next best model-without training against any of them. That is, PAT generalizes well to unforeseen perturbation types. This is vital in sensitive applications where a particular threat model cannot be assumed, and to the best of our knowledge, PAT is the first adversarial training defense with this property. | Published as a conference paper at ICLR 2021 PERCEPTUAL ADVERSARIAL ROBUSTNESS: DEFENSE AGAINST UNSEEN THREAT MODELS |
d59316418 | Transfer learning through fine-tuning a pre-trained neural network with an extremely large dataset, such as ImageNet, can significantly accelerate training while the accuracy is frequently bottlenecked by the limited dataset size of the new target task. To solve the problem, some regularization methods, constraining the outer layer weights of the target network using the starting point as references (SPAR), have been studied. In this paper, we propose a novel regularized transfer learning framework DELTA, namely DEep Learning Transfer using Feature Map with Attention. Instead of constraining the weights of neural network, DELTA aims to preserve the outer layer outputs of the target network. Specifically, in addition to minimizing the empirical loss, DELTA aligns the outer layer outputs of two networks, through constraining a subset of feature maps that are precisely selected by attention that has been learned in a supervised learning manner. We evaluate DELTA with the state-of-the-art algorithms, including L2 and L2-SP. The experiment results show that our method outperforms these baselines with higher accuracy for new tasks. | DELTA: DEEP LEARNING TRANSFER USING FEATURE MAP WITH ATTENTION FOR CONVOLUTIONAL NETWORKS
d232240622 | Model-agnostic meta-learning (MAML) is a popular method for few-shot learning but assumes that we have access to the meta-training set. In practice, training on the meta-training set may not always be an option due to data privacy concerns, intellectual property issues, or merely lack of computing resources. In this paper, we consider the novel problem of repurposing pretrained MAML checkpoints to solve new few-shot classification tasks. Because of the potential distribution mismatch, the original MAML steps may no longer be optimal. Therefore we propose an alternative meta-testing procedure and combine MAML gradient steps with adversarial training and uncertainty-based stepsize adaptation. Our method outperforms "vanilla" MAML on same-domain and cross-domain benchmarks using both SGD and Adam optimizers and shows improved robustness to the choice of base stepsize. | Published as a conference paper at ICLR 2021 REPURPOSING PRETRAINED MODELS FOR ROBUST OUT-OF-DOMAIN FEW-SHOT LEARNING
d237291550 | With recent progress in joint modeling of visual and textual representations, Vision-Language Pretraining (VLP) has achieved impressive performance on many multimodal downstream tasks. However, the requirement for expensive annotations including clean image captions and regional labels limits the scalability of existing approaches, and complicates the pretraining procedure with the introduction of multiple dataset-specific objectives. In this work, we relax these constraints and present a minimalist pretraining framework, named Simple Visual Language Model (SimVLM). Unlike prior work, SimVLM reduces the training complexity by exploiting large-scale weak supervision, and is trained end-to-end with a single prefix language modeling objective. Without utilizing extra data or task-specific customization, the resulting model significantly outperforms previous pretraining methods and achieves new state-of-the-art results on a wide range of discriminative and generative vision-language benchmarks, including VQA (+3.74% vqa-score), NLVR2 (+1.17% accuracy), SNLI-VE (+1.37% accuracy) and image captioning tasks (+10.1% average CIDEr score). Furthermore, we demonstrate that SimVLM acquires strong generalization and transfer ability, enabling zero-shot behavior including open-ended visual question answering and cross-modality transfer. | SIMVLM: SIMPLE VISUAL LANGUAGE MODEL PRETRAINING WITH WEAK SUPERVISION
d7953396 | In recent years, a lot of attention has been devoted to efficient nearest neighbor search by means of similarity-preserving hashing. One of the drawbacks of existing hashing techniques is the intrinsic trade-off between performance and computational complexity: while longer hash codes allow for lower false positive rates, it is very difficult to increase the embedding dimensionality without incurring very high false negative rates or prohibitive computational costs. In this paper, we propose a way to overcome this limitation by enforcing the hash codes to be sparse. Sparse high-dimensional codes enjoy the low false positive rates typical of long hashes, while keeping the false negative rates similar to those of a shorter dense hashing scheme with an equal number of degrees of freedom. We use a tailored feed-forward neural network for the hashing function. Extensive experimental evaluation involving visual and multimodal data shows the benefits of the proposed method. | Sparse similarity-preserving hashing
d257050560 | Skip connections and normalisation layers form two standard architectural components that are ubiquitous for the training of Deep Neural Networks (DNNs), but whose precise roles are poorly understood. Recent approaches such as Deep Kernel Shaping have made progress towards reducing our reliance on them, using insights from wide NN kernel theory to improve signal propagation in vanilla DNNs (which we define as networks without skips or normalisation layers). However, these approaches are incompatible with the self-attention layers present in transformers, whose kernels are intrinsically more complicated to analyse and control. And so the question remains: is it possible to train deep vanilla transformers? We answer this question in the affirmative by designing several approaches that use combinations of parameter initialisations, bias matrices and location-dependent rescaling to achieve faithful signal propagation in vanilla transformers. Our methods address several intricacies specific to signal propagation in transformers, including the interaction with positional encoding and causal masking. In experiments on WikiText-103 and C4, our approaches enable deep transformers without normalisation to train at speeds matching their standard counterparts, and deep vanilla transformers to reach the same performance as standard ones after about 5 times more iterations. | Published as a conference paper at ICLR 2023 DEEP TRANSFORMERS WITHOUT SHORTCUTS: MODIFYING SELF-ATTENTION FOR FAITHFUL SIGNAL PROPAGATION |
d251067024 | Proximal splitting algorithms are well suited to solving large-scale nonsmooth optimization problems, in particular those arising in machine learning. We propose a new primal-dual algorithm, in which the dual update is randomized; equivalently, the proximity operator of one of the functions in the problem is replaced by a stochastic oracle. For instance, some randomly chosen dual variables, instead of all, are updated at each iteration. Or, the proximity operator of a function is called with some small probability only. A nonsmooth variance-reduction technique is implemented so that the algorithm finds an exact minimizer of the general problem involving smooth and nonsmooth functions, possibly composed with linear operators. We derive linear convergence results in the presence of strong convexity; these results are new even in the deterministic case, when our algorithm reverts to the recently proposed Primal-Dual Davis-Yin algorithm. Some randomized algorithms of the literature are also recovered as particular cases (e.g., Point-SAGA). But our randomization technique is general and encompasses many unbiased mechanisms beyond sampling and probabilistic updates, including compression. Since the convergence speed depends on the slowest among the primal and dual contraction mechanisms, the iteration complexity might remain the same when randomness is used. On the other hand, the computation complexity can be significantly reduced. Overall, randomness helps to obtain faster algorithms. This has long been known for stochastic-gradient-type algorithms, and our work shows that this fully applies in the more general primal-dual setting as well. | RANDPROX: PRIMAL-DUAL OPTIMIZATION ALGORITHMS WITH RANDOMIZED PROXIMAL UPDATES
d254853899 | Deep latent variable models have achieved significant empirical successes in model-based reinforcement learning (RL) due to their expressiveness in modeling complex transition dynamics. On the other hand, it remains unclear theoretically and empirically how latent variable models may facilitate learning, planning, and exploration to improve the sample efficiency of RL. In this paper, we provide a representation view of the latent variable models for state-action value functions, which allows both a tractable variational learning algorithm and effective implementation of the optimism/pessimism principle in the face of uncertainty for exploration. In particular, we propose a computationally efficient planning algorithm with UCB exploration by incorporating kernel embeddings of latent variable models. Theoretically, we establish the sample complexity of the proposed approach in the online and offline settings. Empirically, we demonstrate superior performance over current state-of-the-art algorithms across various benchmarks. We define the state value function V : S → [0, 1/(1−γ)] and the state-action value function Q : S × A → [0, 1/(1−γ)] following standard notation. It is straightforward to see that V^π_{T*,r} = E_{s∼d_0}[V^π_{T*,r}(s)], as well as the following Bellman equation: Q^π_{T*,r}(s, a) = r(s, a) + γ E_{s'∼T*(·|s,a)}[V^π_{T*,r}(s')]. We also define the discounted occupancy measure d^π_{T*} of policy π. By the definition of the discounted occupancy measure, we can see that V^π_{T*,r} = E_{(s,a)∼d^π_{T*}}[r(s, a)]; an analogous recursion follows from the Markov property. LINEAR MDP: In the tabular MDP setting, where the state space |S| is finite, there is a large body of work on sample- and computation-efficient RL algorithms (e.g., Azar et al., 2017; Jin et al., 2018). However, such methods can still be expensive when |S| becomes large or even infinite, which is quite common in real-world applications. To address this issue, we introduce function approximation into RL algorithms to alleviate the statistical and computational bottleneck. The linear MDP (Jin et al., 2020; Agarwal et al., 2020) is a promising subclass that admits special structure for such purposes. Definition 1 (Linear MDP (Jin et al., 2020; Agarwal et al., 2020)). An MDP is called a linear MDP if there exist φ* : S × A → H and µ* : S → H for some proper Hilbert space H, such that T*(s'|s, a) = ⟨φ*(s, a), µ*(s')⟩_H. The complete definition of linear MDPs requires φ* and µ* to satisfy certain normalization conditions, which we defer to Section 4 for ease of presentation. The most significant benefit of the linear MDP is that, for any policy π : S → A, Q^π_{T*,r}(s, a) is linear with respect to [r(s, a), φ*(s, a)], thanks to the following observation: Q^π_{T*,r}(s, a) = r(s, a) + γ E_{s'∼T*(·|s,a)}[V^π_{T*,r}(s')] = r(s, a) + ⟨φ*(s, a), ∫_S µ*(s') V^π_{T*,r}(s') ds'⟩_H. (1) Plenty of sample-efficient algorithms have been developed based on the linear MDP structure with known φ* (e.g., Yang & Wang, 2020; Jin et al., 2020; Yang et al., 2020). This requirement limits their practical applications. In fact, in most cases we do not have access to φ* and need to perform representation learning to obtain an estimate of φ*. However, the learning of φ relies on efficient exploration for full-coverage data, while the design of the exploration strategy relies on an accurate estimate of φ. The coupling between exploration and learning induces extra difficulty. Recently, Uehara et al. (2022) designed UCB-style exploration for iterative finite-dimensional representation updates with theoretical guarantees. The algorithm requires a computation oracle for maximum likelihood estimation (MLE) of the conditional density, which is difficult since we generally do not have a specific realization of (φ, µ) pairs that makes the constraints hold for arbitrary (s, a) pairs, and is therefore impractical for real-world applications. LATENT VARIABLE MODELS AS LINEAR MDPS: In this section, we first reveal the linear representation view of transitions with a latent variable structure. This essential connection brings several benefits for learning, planning and exploration/exploitation. More specifically, the latent variable model view provides a tractable variational learning scheme, while the linear representation view inspires a computationally efficient planning and exploration/exploitation mechanism. | Published as a conference paper at ICLR 2023 LATENT VARIABLE REPRESENTATION FOR REINFORCEMENT LEARNING
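The linear-MDP identities embedded in the row above are hard to read in extracted form; typeset with the abstract's own notation, they are:

```latex
% Linear MDP structure and the resulting linearity of Q
% (notation as in the row above)
\begin{align*}
T^{*}(s' \mid s,a) &= \big\langle \phi^{*}(s,a),\, \mu^{*}(s') \big\rangle_{\mathcal{H}}, \\
Q^{\pi}_{T^{*},r}(s,a) &= r(s,a) + \gamma\, \mathbb{E}_{s' \sim T^{*}(\cdot \mid s,a)}\!\left[ V^{\pi}_{T^{*},r}(s') \right] \\
&= r(s,a) + \Big\langle \phi^{*}(s,a),\, \int_{\mathcal{S}} \mu^{*}(s')\, V^{\pi}_{T^{*},r}(s')\, \mathrm{d}s' \Big\rangle_{\mathcal{H}}.
\end{align*}
```

The second line substitutes the factorized transition into the Bellman equation, which is exactly why Q is linear in [r(s, a), φ*(s, a)] for any policy.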
d2835189 | Neuromorphic hardware tends to pose limits on the connectivity of deep networks that one can run on them. But also generic hardware and software implementations of deep learning run more efficiently for sparse networks. Several methods exist for pruning connections of a neural network after it was trained without connectivity constraints. We present an algorithm, DEEP R, that enables us to train directly a sparsely connected neural network. DEEP R automatically rewires the network during supervised training so that connections are there where they are most needed for the task, while their total number is strictly bounded at all times. We demonstrate that DEEP R can be used to train very sparse feedforward and recurrent neural networks on standard benchmark tasks with just a minor loss in performance. DEEP R is based on a rigorous theoretical foundation that views rewiring as stochastic sampling of network configurations from a posterior. REWIRING IN DEEP NEURAL NETWORKS: Stochastic gradient descent (SGD) and its modern variants (Kingma & Ba, 2014; Tieleman & Hinton, 2012), implemented through the error backpropagation algorithm, constitute the dominant learning paradigm of contemporary deep learning applications. For a given list of network inputs X and target network outputs Y*, gradient descent iteratively moves the parameter vector θ in the direction of the negative gradient of an error function E_{X,Y*}(θ) such that a local minimum of E_{X,Y*}(θ) is eventually reached. A more general view of neural network training is provided by a probabilistic interpretation of the learning problem (Bishop, 2006; Neal, 1992). In this probabilistic learning framework, the deterministic network output is interpreted as defining a probability distribution p_N(Y | X, θ) over outputs Y for the given input X and the given network parameters θ. The goal of training is then to find parameters that maximize the likelihood p_N(Y* | X, θ) of the training targets under this model (maximum likelihood learning). Training can again be performed by gradient descent on an equivalent error function that is usually given by the negative log-likelihood E_{X,Y*}(θ) = −log p_N(Y* | X, θ). | DEEP REWIRING: TRAINING VERY SPARSE DEEP NETWORKS
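The maximum-likelihood view described in the row above can be made concrete with a minimal numerical sketch: gradient descent on the negative log-likelihood E_{X,Y*}(θ) = −log p_N(Y* | X, θ), here for a tiny logistic-regression "network". This is not the DEEP R algorithm itself (no rewiring, no sparsity constraint); the data, step size, and iteration count are illustrative, not from the paper.

```python
import numpy as np

# Synthetic linearly separable data; targets Y* come from a known weight vector.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
w_true = np.array([1.5, -2.0, 0.5])
y = (X @ w_true > 0).astype(float)

# Gradient descent on the mean negative log-likelihood of a logistic model.
theta = np.zeros(3)
lr = 0.5
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-X @ theta))   # p_N(y = 1 | x, theta)
    grad = X.T @ (p - y) / len(y)          # gradient of the mean NLL
    theta -= lr * grad                     # negative-gradient step

# Final NLL (probabilities clipped to keep the logs finite).
p = np.clip(1.0 / (1.0 + np.exp(-X @ theta)), 1e-9, 1 - 1e-9)
nll = -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))
print(round(float(nll), 4))
```

After training, the NLL is small and the learned weights point in the same direction as the generating vector, which is the maximum-likelihood behavior the passage describes.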
d256194594 | Self-supervised pretraining has been extensively studied in language and vision domains, where a unified model can be easily adapted to various downstream tasks by pretraining representations without explicit labels. When it comes to sequential decision-making tasks, however, it is difficult to properly design such a pretraining approach that can cope with both high-dimensional perceptual information and the complexity of sequential control over long interaction horizons. The challenge becomes combinatorially more complex if we want to pretrain representations amenable to a large variety of tasks. To tackle this problem, in this work, we formulate a general pretraining-finetuning pipeline for sequential decision making, under which we propose a generic pretraining framework Self-supervised Multi-task pretrAining with contRol Transformer (SMART). By systematically investigating pretraining regimes, we carefully design a Control Transformer (CT) coupled with a novel control-centric pretraining objective in a self-supervised manner. SMART encourages the representation to capture the common essential information relevant to short-term control and long-term control, which is transferable across tasks. We show by extensive experiments in DeepMind Control Suite that SMART significantly improves the learning efficiency among seen and unseen downstream tasks and domains under different learning scenarios including Imitation Learning (IL) and Reinforcement Learning (RL). Benefiting from the proposed control-centric objective, SMART is resilient to distribution shift between pretraining and finetuning, and even works well with low-quality pretraining datasets that are randomly collected. | Published as a conference paper at ICLR 2023 SMART: SELF-SUPERVISED MULTI-TASK PRETRAINING WITH CONTROL TRANSFORMERS
d222291295 | To alleviate the resource constraint for real-time point cloud applications that run on edge devices, in this paper we present BiPointNet, the first model binarization approach for efficient deep learning on point clouds. We discover that the immense performance drop of binarized models for point clouds mainly stems from two challenges: aggregation-induced feature homogenization that leads to a degradation of information entropy, and scale distortion that hinders optimization and invalidates scale-sensitive structures. With theoretical justifications and in-depth analysis, our BiPointNet introduces Entropy-Maximizing Aggregation (EMA) to modulate the distribution before aggregation for the maximum information entropy, and Layer-wise Scale Recovery (LSR) to efficiently restore feature representation capacity. Extensive experiments show that BiPointNet outperforms existing binarization methods by convincing margins, at the level even comparable with the full precision counterpart. We highlight that our techniques are generic, guaranteeing significant improvements on various fundamental tasks and mainstream backbones. Moreover, BiPointNet gives an impressive 14.7× speedup and 18.9× storage saving on real-world resource-constrained devices. | Published as a conference paper at ICLR 2021 BIPOINTNET: BINARY NEURAL NETWORK FOR POINT CLOUDS
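The "aggregation-induced feature homogenization" named in the row above can be illustrated numerically: binarize point features to {-1, +1} and apply the usual global max-pool over the point set, and every pooled channel is almost surely +1, so the pooled vector carries (near-)zero information entropy. This toy NumPy sketch shows only the failure mode, not the paper's EMA remedy; the array shapes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
feats = rng.normal(size=(1024, 64))   # 1024 points, 64 feature channels
binary = np.sign(feats)               # 1-bit features in {-1, +1}
pooled = binary.max(axis=0)           # global max-pooling over the point set
print(float((pooled == 1.0).mean()))  # fraction of channels collapsed to +1
```

With 1024 points per channel, the chance that any channel stays at -1 after max-pooling is 2^-1024, which is why the pooled representation degenerates and motivates modulating the pre-aggregation distribution.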
d257279896 | We present a deep learning approach for repairing sequential circuits against formal specifications given in linear-time temporal logic (LTL). Given a defective circuit and its formal specification, we train Transformer models to output circuits that satisfy the corresponding specification. We propose a separated hierarchical Transformer for multimodal representation learning of the formal specification and the circuit. We introduce a data generation algorithm that enables generalization to more complex specifications and out-of-distribution datasets. In addition, our proposed repair mechanism significantly improves the automated synthesis of circuits from LTL specifications with Transformers. It improves the state-of-the-art by 6.8 percentage points on held-out instances and 11.8 percentage points on an out-of-distribution dataset from the annual reactive synthesis competition. | ITERATIVE CIRCUIT REPAIR AGAINST FORMAL SPECIFICATIONS
d221819379 | Multi-Task Learning (MTL) networks have emerged as a promising method for transferring learned knowledge across different tasks. However, MTL must deal with challenges such as: overfitting to low resource tasks, catastrophic forgetting, and negative task transfer, or learning interference. Often, in Natural Language Processing (NLP), a separate model per task is needed to obtain the best performance. However, many fine-tuning approaches are both parameter inefficient, i.e., potentially involving one new model per task, and highly susceptible to losing knowledge acquired during pretraining. We propose a novel Transformer based Adapter consisting of a new conditional attention mechanism as well as a set of task-conditioned modules that facilitate weight sharing. Through this construction, we achieve more efficient parameter sharing and mitigate forgetting by keeping half of the weights of a pretrained model fixed. We also use a new multi-task data sampling strategy to mitigate the negative effects of data imbalance across tasks. Using this approach, we are able to surpass single task fine-tuning methods while being parameter and data efficient (using around 66% of the data for weight updates). Compared to other BERT Large methods on GLUE, our 8-task model surpasses other Adapter methods by 2.8% and our 24-task model outperforms by 0.7-1.0% models that use MTL and single task fine-tuning. We show that a larger variant of our single multi-task model approach performs competitively across 26 NLP tasks and yields state-of-the-art results on a number of test and development sets. Our code is publicly available at https://github.com/CAMTL/CA-MTL. | Published as a conference paper at ICLR 2021 CONDITIONALLY ADAPTIVE MULTI-TASK LEARNING: IMPROVING TRANSFER LEARNING IN NLP USING FEWER PARAMETERS & LESS DATA |
d221376381 | We construct an experimental setup in which changing the scale of initialization strongly impacts the implicit regularization induced by SGD, interpolating from good generalization performance to completely memorizing the training set while making little progress on the test set. Moreover, we find that the extent and manner in which generalization ability is affected depends on the activation and loss function used, with sin activation demonstrating extreme memorization. In the case of the homogeneous ReLU activation, we show that this behavior can be attributed to the loss function. Our empirical investigation reveals that increasing the scale of initialization correlates with misalignment of representations and gradients across examples in the same class. This insight allows us to devise an alignment measure over gradients and representations which can capture this phenomenon. We demonstrate that our alignment measure correlates with generalization of deep models trained on image classification tasks. | EXTREME MEMORIZATION VIA SCALE OF INITIALIZATION
d3875075 | We present a formal language with expressions denoting general symbol structures and queries which access information in those structures. A sequence-to-sequence network processing this language learns to encode symbol structures and query them. The learned representation (approximately) shares a simple linearity property with theoretical techniques for performing this task. | Workshop track -ICLR 2018 LEARNING AND ANALYZING VECTOR ENCODING OF SYMBOLIC REPRESENTATIONS |
d170078603 | While tasks may come with varying numbers of instances and classes in realistic settings, existing meta-learning approaches for few-shot classification assume that the number of instances per task and class is fixed. Due to this restriction, they learn to utilize the meta-knowledge equally across all tasks, even when the number of instances per task and class varies largely. Moreover, they do not consider the distributional difference in unseen tasks, on which the meta-knowledge may be less useful depending on task relatedness. To overcome these limitations, we propose a novel meta-learning model that adaptively balances the effect of the meta-learning and task-specific learning within each task. Through the learning of the balancing variables, we can decide whether to obtain a solution by relying on the meta-knowledge or task-specific learning. We formulate this objective into a Bayesian inference framework and tackle it using variational inference. We validate our Bayesian Task-Adaptive Meta-Learning (Bayesian TAML) on multiple realistic task- and class-imbalanced datasets, on which it significantly outperforms existing meta-learning approaches. Further ablation study confirms the effectiveness of each balancing component and the Bayesian learning framework. | Published as a conference paper at ICLR 2020 LEARNING TO BALANCE: BAYESIAN META-LEARNING FOR IMBALANCED AND OUT-OF-DISTRIBUTION TASKS
d11553675 | Automatic speech recognition systems usually rely on spectral-based features, such as MFCC or PLP. These features are extracted based on prior knowledge such as speech perception and/or speech production. Recently, convolutional neural networks have been shown to be able to estimate phoneme conditional probabilities in a completely data-driven manner, i.e. using directly the temporal raw speech signal as input. This system was shown to yield similar or better performance than an HMM/ANN based system on a phoneme recognition task and on a large-scale continuous speech recognition task, using fewer parameters. Motivated by these studies, we investigate the use of a simple linear classifier in the CNN-based framework. Thus, the network learns linearly separable features from raw speech. We show that such a system yields similar or better performance than an MLP based system using cepstral-based features as input. | Under review as a conference paper at ICLR 2015 LEARNING LINEARLY SEPARABLE FEATURES FOR SPEECH RECOGNITION USING CONVOLUTIONAL NEURAL NETWORKS
d6212000 | Despite their massive size, successful deep artificial neural networks can exhibit a remarkably small difference between training and test performance. Conventional wisdom attributes small generalization error either to properties of the model family, or to the regularization techniques used during training. Through extensive systematic experiments, we show how these traditional approaches fail to explain why large neural networks generalize well in practice. Specifically, our experiments establish that state-of-the-art convolutional networks for image classification trained with stochastic gradient methods easily fit a random labeling of the training data. This phenomenon is qualitatively unaffected by explicit regularization, and occurs even if we replace the true images by completely unstructured random noise. We corroborate these experimental findings with a theoretical construction showing that simple depth two neural networks already have perfect finite sample expressivity as soon as the number of parameters exceeds the number of data points as it usually does in practice. We interpret our experimental findings by comparison with traditional models. | UNDERSTANDING DEEP LEARNING REQUIRES RETHINKING GENERALIZATION
d25717172 | The driving force behind the recent success of LSTMs has been their ability to learn complex and non-linear relationships. Consequently, our inability to describe these relationships has led to LSTMs being characterized as black boxes. To this end, we introduce contextual decomposition (CD), an interpretation algorithm for analysing individual predictions made by standard LSTMs, without any changes to the underlying model. By decomposing the output of a LSTM, CD captures the contributions of combinations of words or variables to the final prediction of an LSTM. On the task of sentiment analysis with the Yelp and SST data sets, we show that CD is able to reliably identify words and phrases of contrasting sentiment, and how they are combined to yield the LSTM's final prediction. Using the phrase-level labels in SST, we also demonstrate that CD is able to successfully extract positive and negative negations from an LSTM, something which has not previously been done. | Under review as a conference paper at ICLR 2018 BEYOND WORD IMPORTANCE: CONTEXTUAL DECOMPOSITION TO EXTRACT INTERACTIONS FROM LSTMS
d209862859 | Person re-identification (re-ID) aims at identifying images of the same person across different cameras. However, domain diversities between different datasets pose an evident challenge for adapting the re-ID model trained on one dataset to another one. State-of-the-art unsupervised domain adaptation methods for person re-ID transferred the learned knowledge from the source domain by optimizing with pseudo labels created by clustering algorithms on the target domain. Although they achieved state-of-the-art performances, the inevitable label noise caused by the clustering procedure was ignored. Such noisy pseudo labels substantially hinder the model's capability to further improve feature representations on the target domain. In order to mitigate the effects of noisy pseudo labels, we propose to softly refine the pseudo labels in the target domain by proposing an unsupervised framework, Mutual Mean-Teaching (MMT), to learn better features from the target domain via off-line refined hard pseudo labels and on-line refined soft pseudo labels in an alternative training manner. In addition, the common practice is to adopt both the classification loss and the triplet loss jointly for achieving optimal performances in person re-ID models. However, the conventional triplet loss cannot work with softly refined labels. To solve this problem, a novel soft softmax-triplet loss is proposed to support learning with soft pseudo triplet labels for achieving the optimal domain adaptation performance. The proposed MMT framework achieves considerable improvements of 14.4%, 18.2%, 13.4% and 16.4% mAP on Market-to-Duke, Duke-to-Market, Market-to-MSMT and Duke-to-MSMT unsupervised domain adaptation tasks. | MUTUAL MEAN-TEACHING: PSEUDO LABEL REFINERY FOR UNSUPERVISED DOMAIN ADAPTATION ON PERSON RE-IDENTIFICATION
d252595735 | Mobile UI understanding is important for enabling various interaction tasks such as UI automation and accessibility. Previous mobile UI modeling often depends on the view hierarchy information of a screen, which directly provides the structural data of the UI, with the hope of bypassing challenging tasks of visual modeling from screen pixels. However, view hierarchies are not always available, and are often corrupted with missing object descriptions or misaligned structure information. As a result, although the use of view hierarchies could offer short-term gains, it may ultimately hinder the applicability and performance of the model. In this paper, we propose Spotlight, a vision-only approach for mobile UI understanding. Specifically, we enhance a vision-language model that only takes the screenshot of the UI and a region of interest on the screen (the focus) as the input. This general architecture of Spotlight is easily scalable and capable of performing a range of UI modeling tasks. Our experiments show that our model establishes SoTA results on several representative UI tasks and outperforms previous methods that use both screenshots and view hierarchies as inputs. Furthermore, we explore the multi-task learning and few-shot prompting capacities of the proposed models, demonstrating promising results in the multi-task learning direction. | Published as a conference paper at ICLR 2023 SPOTLIGHT: MOBILE UI UNDERSTANDING USING VISION-LANGUAGE MODELS WITH A FOCUS |
d256231061 | In real-world applications, deep learning models often run in non-stationary environments where the target data distribution continually shifts over time. There have been numerous domain adaptation (DA) methods in both online and offline modes to improve cross-domain adaptation ability. However, these DA methods typically only provide good performance after a long period of adaptation, and perform poorly on new domains before and during adaptation, in what we call the "Unfamiliar Period", especially when domain shifts happen suddenly and significantly. On the other hand, domain generalization (DG) methods have been proposed to improve the model generalization ability on unadapted domains. However, existing DG works are ineffective for continually changing domains due to severe catastrophic forgetting of learned knowledge. To overcome these limitations of DA and DG in handling the Unfamiliar Period during continual domain shift, we propose RaTP, a framework that focuses on improving models' target domain generalization (TDG) capability, while also achieving effective target domain adaptation (TDA) capability right after training on certain domains and forgetting alleviation (FA) capability on past domains. RaTP includes a training-free data augmentation module to prepare data for TDG, a novel pseudo-labeling mechanism to provide reliable supervision for TDA, and a prototype contrastive alignment algorithm to align different domains for achieving TDG, TDA and FA. Extensive experiments on Digits, PACS, and DomainNet demonstrate that RaTP significantly outperforms state-of-the-art works from Continual DA, Source-Free DA, Test-Time/Online DA, Single DG, Multiple DG and Unified DA&DG in TDG, and achieves comparable TDA and FA capabilities. * Equal contributions (ordered alphabetically); ‡ Corresponding authors. 
† Part of the work was done during an internship at Sony AI. Domain adaptation (DA) methods have been proposed to tackle continual data drifts in dynamic environments in either online or offline mode. For example, Continual DA (Liu et al., 2020; Rostami, 2021) starts from a labeled source domain and continually adapts the model to various target domains, while keeping the model performance from degrading significantly on seen domains. However, existing Continual DA works often assume that the source domain can be accessed all the time, which may be difficult to guarantee in practical scenarios, especially considering the possible limitation on memory storage and regulations on privacy or intellectual property. Source-Free DA (Qu et al., 2022) can overcome this issue and achieve target adaptation without the source domain data. In addition, Test-Time or Online DA (Iwasawa & Matsuo, 2021; Panagiotakopoulos et al., 2022) can improve the target model performance with a small training cost; however, the target domain data is only learned once by the model and the performance improvement is limited (higher improvement would require a large amount of data). With these DA methods, although the model may perform better on the new target domain after sufficient adaptation, its performance on the target domain before and during the adaptation process, i.e., in the Unfamiliar Period, is often poor. In cases where the domain shift is sudden and the duration of seeing a new target domain is short, this problem becomes even more severe. In this work, we believe that for many applications, it is very important to ensure that the model can also perform reasonably well in the Unfamiliar Period, i.e., before seeing a lot of target domain data. For instance, in environmental surveillance, having poor performance under uncommon/unfamiliar weather or lighting conditions may cause significant security and safety risks. 
In the example of lung imaging analysis for coronaviruses, being able to quickly provide good performance for detecting new variants is critical for the early containment and treatment of the disease. Domain generalization (DG) methods also solve the learning problem on multiple data domains, especially for cases where the target domain is unavailable or unknown during training. However, existing DG works are typically based on accurate supervision knowledge of the source domain data, whether it is drawn from a single domain (Wang et al., 2021c) or multiple domains (Yao et al., 2022), which may not be achievable in continually changing scenarios. Moreover, when DG is applied in scenarios with continual domain shifts, as it focuses more on the target domain, there could be severe catastrophic forgetting of domains that have been learned. There are also some works unifying DA and DG (Ghifary et al., 2016; Motiian et al., 2017; Jin et al., 2021); however, they can only be used in standard DA or DG individually, thus still suffering their limitations. Nasery et al. (2021) study the smooth temporal shifts of data distribution, but they cannot handle large domain shifts over time. Our Approach and Contribution. In this work, we focus on the study of the Continual Domain Shift Learning (CDSL) problem, in which the learning model is first trained on a labeled source domain and then faces a series of unlabeled target domains that appear continually. Our goal, in particular, is to improve model performance before and during the training stage of each previously-unseen target domain (i.e., in the Unfamiliar Period), while also maintaining good performance in the time periods after the training. 
Thus, we propose a framework called RaTP that optimizes three objectives: (1) to improve the model generalization performance on a new target domain before and during its training, namely the target domain generalization (TDG) performance; (2) to provide good model performance on a target domain right after its training, namely the target domain adaptation (TDA) performance; and (3) to maintain good performance on a trained domain after the model is trained with other domains, namely the forgetting alleviation (FA) performance. For improving TDG, RaTP includes a training-free data augmentation module that is based on Random Mixup, and this module can generate data outside of the current training domain. For TDA, RaTP includes a Top 2 Pseudo Labeling mechanism that places more emphasis on samples with a higher possibility of correct classification, which can produce more accurate pseudo labels. Finally, for optimizing the model towards TDG, TDA, and FA at the same time, RaTP includes a Prototype Contrastive Alignment algorithm. Comprehensive experiments and ablation studies on Digits, PACS, and DomainNet demonstrate that RaTP can significantly outperform state-of-the-art works in TDG, including Continual DA, Source-Free DA, Test-Time/Online DA, Single DG, Multiple DG, and Unified DA&DG. RaTP can also produce comparable performance in TDA and FA as these baselines. In summary: • We tackle an important problem in practical scenarios with continual domain shifts, i.e., to improve model performance before and during training on a new target domain, in what we call the Unfamiliar Period. We also aim to achieve good model performance after training, providing the model with capabilities of target domain adaptation and forgetting alleviation. | Published as a conference paper at ICLR 2023 DEJA VU: CONTINUAL MODEL GENERALIZATION FOR UNSEEN DOMAINS |
d246822597 | The dominant line of work in domain adaptation has focused on learning invariant representations using domain-adversarial training. In this paper, we interpret this approach from a game theoretical perspective. Defining optimal solutions in domain-adversarial training as local Nash equilibria, we show that gradient descent in domain-adversarial training can violate the asymptotic convergence guarantees of the optimizer, oftentimes hindering the transfer performance. Our analysis leads us to replace gradient descent with high-order ODE solvers (i.e., Runge-Kutta), for which we derive asymptotic convergence guarantees. This family of optimizers is significantly more stable and allows more aggressive learning rates, leading to high performance gains when used as a drop-in replacement over standard optimizers. Our experiments show that in conjunction with state-of-the-art domain-adversarial methods, we achieve up to 3.5% improvement with less than half of training iterations. Our optimizers are easy to implement, free of additional parameters, and can be plugged into any domain-adversarial framework. | DOMAIN ADVERSARIAL TRAINING: A GAME PERSPECTIVE |
d246823323 | Diffusion models have emerged as an expressive family of generative models rivaling GANs in sample quality and autoregressive models in likelihood scores. Standard diffusion models typically require hundreds of forward passes through the model to generate a single high-fidelity sample. We introduce Differentiable Diffusion Sampler Search (DDSS): a method that optimizes fast samplers for any pre-trained diffusion model by differentiating through sample quality scores. We present Generalized Gaussian Diffusion Models (GGDM), a family of flexible non-Markovian samplers for diffusion models. We show that optimizing the degrees of freedom of GGDM samplers by maximizing sample quality scores via gradient descent leads to improved sample quality. Our optimization procedure backpropagates through the sampling process using the reparametrization trick and gradient rematerialization. DDSS achieves strong results on unconditional image generation across various datasets (e.g., FID scores on LSUN church 128x128 of 11.6 with only 10 inference steps, and 4.82 with 20 steps, compared to 51.1 and 14.9 with strongest DDPM/DDIM baselines). Our method is compatible with any pre-trained diffusion model without fine-tuning or re-training required. * Work done as part of the Google AI Residency. | Published as a conference paper at ICLR 2022 LEARNING FAST SAMPLERS FOR DIFFUSION MODELS BY DIFFERENTIATING THROUGH SAMPLE QUALITY |
d10713737 | We propose a novel training algorithm for reinforcement learning which combines the strength of deep Q-learning with a constrained optimization approach to tighten optimality and encourage faster reward propagation. Our novel technique makes deep reinforcement learning more practical by drastically reducing the training time. We evaluate the performance of our approach on the 49 games of the challenging Arcade Learning Environment, and report significant improvements in both training time and accuracy. | LEARNING TO PLAY IN A DAY: FASTER DEEP REINFORCEMENT LEARNING BY OPTIMALITY TIGHTENING |
d254199184 | Auxiliary tasks improve the representations learned by deep reinforcement learning agents. Analytically, their effect is reasonably well-understood; in practice, however, their primary use remains in support of a main learning objective, rather than as a method for learning representations. This is perhaps surprising given that many auxiliary tasks are defined procedurally, and hence can be treated as an essentially infinite source of information about the environment. Based on this observation, we study the effectiveness of auxiliary tasks for learning rich representations, focusing on the setting where the number of tasks and the size of the agent's network are simultaneously increased. For this purpose, we derive a new family of auxiliary tasks based on the successor measure. These tasks are easy to implement and have appealing theoretical properties. Combined with a suitable off-policy learning rule, the result is a representation learning algorithm that can be understood as extending Mahadevan & Maggioni (2007)'s proto-value functions to deep reinforcement learning -accordingly, we call the resulting object proto-value networks. Through a series of experiments on the Arcade Learning Environment, we demonstrate that proto-value networks produce rich features that may be used to obtain performance comparable to established algorithms, using only linear approximation and a small number (~4M) of interactions with the environment's reward function. | Published as a conference paper at ICLR 2023 PROTO-VALUE NETWORKS: SCALING REPRESENTATION LEARNING WITH AUXILIARY TASKS |
d249926745 | Mini-batch SGD with momentum is a fundamental algorithm for learning large predictive models. In this paper we develop a new analytic framework to analyze noise-averaged properties of mini-batch SGD for linear models at constant learning rates, momenta and sizes of batches. Our key idea is to consider the dynamics of the second moments of model parameters for a special family of "Spectrally Expressible" approximations. This allows us to obtain an explicit expression for the generating function of the sequence of loss values. By analyzing this generating function, we find, in particular, that 1) the SGD dynamics exhibits several convergent and divergent regimes depending on the spectral distributions of the problem; 2) the convergent regimes admit explicit stability conditions, and explicit loss asymptotics in the case of power-law spectral distributions; 3) the optimal convergence rate can be achieved at negative momenta. We verify our theoretical predictions by extensive experiments with MNIST, CIFAR10 and synthetic problems, and find a good quantitative agreement. A fundamental way to characterize least squares problems is through their spectral distributions: the eigenvalues λ_k of the Hessian and the coefficients c_k of the expansion of the optimal solution w* over the Hessian eigenvectors. Then, one can estimate certain metrics of the problem through spectral expressions, i.e. explicit formulas that operate with the spectral distributions λ_k, c_k but not with other details of the solution w* or the Hessian. A simple example is the standard stability condition for full-batch gradient descent (GD): α < 2/λ_max. Various exact or approximate spectral expressions are available for full-batch GD-based algorithms (Fischer, 1996) and ridge regression (Canatar et al., 2021; Wei et al., 2022). Here, we aim at obtaining spectral expressions and associated results (stability conditions, phase structure, loss asymptotics, ...) 
for average train loss under mini-batch SGD. An important feature of spectral distributions in deep learning problems is that they often obey macroscopic laws, quite commonly a power law with a long tail of eigenvalues converging to 0 (see Cui et al. (2021); Bahri et al. (2021); Kopitkov & Indelman (2020); Velikanov & Yarotsky (2021); Atanasov et al. (2021); Basri et al. (2020) and Figs. 1, 9). The typically simple form of macroscopic laws allows one to theoretically analyze spectral expressions and obtain fine-grained results. As an illustration, consider full-batch GD for least squares regression on the MNIST dataset. Standard optimization results (Polyak, 1987) do not take into account fine spectral details and give either the non-strongly-convex bound L_GD(w_t) = O(t^{-1}) or the strongly-convex bound L_GD(w_t) ≤ L(w_0) ((λ_max − λ_min)/(λ_max + λ_min))^{2t}. Both these bounds are rather crude and poorly agree with the experimentally observed (Bordelon & Pehlevan, 2021; Velikanov & Yarotsky, 2022) loss trajectory, which can be approximately described as L(w_t) ∼ C t^{−ξ}, ξ ≈ 0.25 (cf. our Fig. 1). In contrast, fitting power laws to both the eigenvalues λ_k and the coefficients c_k and using the spectral expression L_GD(w_t) = Σ_k (1 − α λ_k)^{2t} λ_k c_k^2 allows one to accurately predict both the exponent ξ and the constant C. Accordingly, one of the purposes of the present paper is to investigate whether similar predictions can be made for mini-batch SGD under power-law spectral distributions. Outline and main contributions. We develop a new, spectrum-based analytic approach to the study of mini-batch SGD. The results obtained within this approach and its key steps are naturally divided into three parts: 1. We show that in contrast to full-batch GD, loss trajectories of mini-batch SGD cannot be determined merely from the spectral properties of the problem. 
To overcome this difficulty, we propose a natural family of Spectrally Expressible (SE) approximations for SGD dynamics that admit an analytic solution. We provide multiple justifications for these approximations, including theoretical scenarios where they are exact and empirical evidence of their accuracy for describing optimization of models on MNIST and CIFAR10. 2. To characterize SGD dynamics under the SE approximation, we derive explicit spectral expressions for the generating function of the sequence of loss values, L(z) ≡ Σ_t L(w_t) z^t, and show that it decomposes into the "signal" V(z) and "noise" U(z) generating functions. Analyzing U(z), we derive a novel stability condition of mini-batch SGD in terms of only the problem spectrum λ_k. In the practically relevant case of a large momentum parameter β ≈ 1, the stability condition simplifies to the restriction on the effective learning rate α_eff ≡ α/(1−β) < 2b/λ_crit, with some critical value λ_crit determined by the spectrum. Finally, we find the characteristic divergence time when the stability condition is violated. 3. By assuming power-law distributions for both the eigenvalues λ_k ∝ k^{−ν} and the coefficient partial sums S_k = Σ_{l≤k} λ_l c_l^2 ∝ k^{−κ}, we show that SGD exhibits distinct "signal-dominated" and "noise-dominated" convergence regimes (previously known for SGD without momenta (Varre et al., 2021)) depending on the sign of κ + 1 − 2ν. For both regimes we obtain power-law loss convergence rates and find the explicit constant in the leading term. Using these rates, we demonstrate a dynamical phase transition between the phases and find its characteristic transition time. Finally, we analyze optimal hyperparameters in both phases. In particular, we show that negative momenta can be beneficial in the "noise-dominated" phase but not in the "signal-dominated" phase. We discuss related work in Appendix A and experimental details 1 in Appendix F. 
1 Our code: https://github.com/Godofnothing/PowerLawOptimization/ | A VIEW OF MINI-BATCH SGD VIA GENERATING FUNCTIONS: CONDITIONS OF CONVERGENCE, PHASE TRANSITIONS, BENEFIT FROM NEGATIVE MOMENTA |
d10082291 | Despite recent advances, the remaining bottlenecks in deep generative models are the necessity of extensive training and difficulties with generalization from a small number of training examples. We develop a new generative model called Generative Matching Network, which is inspired by the recently proposed matching networks for one-shot learning in discriminative tasks. By conditioning on the additional input dataset, our model can instantly learn new concepts that were not available in the training data but conform to a similar generative process. The proposed framework does not explicitly restrict diversity of the conditioning data and also does not require an extensive inference procedure for training or adaptation. Our experiments on the Omniglot dataset demonstrate that Generative Matching Networks significantly improve predictive performance on the fly as more additional data is available and outperform existing state-of-the-art conditional generative models. | Fast Adaptation in Generative Models with Generative Matching Networks |
d247628080 | Deep metric learning (DML) enables learning with less supervision through its emphasis on the similarity structure of representations. There has been much work on improving generalization of DML in settings like zero-shot retrieval, but little is known about its implications for fairness. In this paper, we are the first to evaluate state-of-the-art DML methods trained on imbalanced data, and to show the negative impact these representations have on minority subgroup performance when used for downstream tasks. In this work, we first define fairness in DML through an analysis of three properties of the representation space (inter-class alignment, intra-class alignment, and uniformity) and propose finDML, the fairness in non-balanced DML benchmark, to characterize representation fairness. Utilizing finDML, we find bias in DML representations to propagate to common downstream classification tasks. Surprisingly, this bias is propagated even when training data in the downstream task is re-balanced. To address this problem, we present Partial Attribute De-correlation (PARADE) to de-correlate feature representations from sensitive attributes and reduce performance gaps between subgroups in both embedding space and downstream metrics. | IS FAIRNESS ONLY METRIC DEEP? EVALUATING AND ADDRESSING SUBGROUP GAPS IN DML |
d244714783 | DETR is the first end-to-end object detector using a transformer encoder-decoder architecture and demonstrates competitive performance but low computational efficiency on high-resolution feature maps. The subsequent work, Deformable DETR, enhances the efficiency of DETR by replacing dense attention with deformable attention, which achieves 10× faster convergence and improved performance. Deformable DETR uses multi-scale features to improve performance; however, the number of encoder tokens increases by 20× compared to DETR, and the computation cost of the encoder attention remains a bottleneck. In our preliminary experiment, we observe that the detection performance hardly deteriorates even if only a part of the encoder tokens is updated. Inspired by this observation, we propose Sparse DETR, which selectively updates only the tokens expected to be referenced by the decoder, thus helping the model effectively detect objects. In addition, we show that applying an auxiliary detection loss on the selected tokens in the encoder improves the performance while minimizing computational overhead. We validate that Sparse DETR achieves better performance than Deformable DETR even with only 10% of encoder tokens on the COCO dataset. Although only the encoder tokens are sparsified, the total computation cost decreases by 38% and the frames per second (FPS) increases by 42% compared to Deformable DETR. Code is available at https://github.com/kakaobrain/sparse-detr. * Equal contribution. † Corresponding author. ‡ Work is done during an internship at KakaoBrain. | SPARSE DETR: EFFICIENT END-TO-END OBJECT DETECTION WITH LEARNABLE SPARSITY |
d237416749 | Since training a large-scale backdoored model from scratch requires a large training dataset, several recent attacks have considered injecting backdoors into a trained clean model without altering model behaviors on the clean data. Previous work finds that backdoors can be injected into a trained clean model with Adversarial Weight Perturbation (AWP), which means the variation of parameters is small in backdoor learning. In this work, we observe an interesting phenomenon that the variations of parameters are always AWPs when tuning the trained clean model to inject backdoors. We further provide theoretical analysis to explain this phenomenon. We are the first to formulate the behavior of maintaining accuracy on clean data as the consistency of backdoored models, which includes both global consistency and instance-wise consistency. We extensively analyze the effects of AWPs on the consistency of backdoored models. In order to achieve better consistency, we propose a novel anchoring loss to anchor or freeze the model behaviors on the clean data, with a theoretical guarantee. [Figure: (a) test loss and (b) test loss (zoomed), showing the backdoored region, test minima, and init model for BadNets vs. Anchoring; (c) BadNets logit fit Y = 0.572·X + 2.342, corr = 0.908; (d) Anchoring (λ=2) logit fit Y = 0.970·X + 0.127, corr = 0.993.] | HOW TO INJECT BACKDOORS WITH BETTER CONSISTENCY: LOGIT ANCHORING ON CLEAN DATA |
d159268776 | regional laws in a comparative perspective. The selection of the two laws is based on a preliminary study which found a quite unique form of criminal provisions in each law. The analysis is also based on Articles 200 and 201 of Law No. 36/2009 and its derivative regulations as a normative measurement at the national level, with which the two regional laws must be in line. This research found quite significant differences between the two laws, especially regarding the form of action that is criminally regulated. Variation was also found in how the two laws fulfil what is demanded by the national criminal policy. | Comparative Study on Criminal Provisions on Regional Regulations Concerning Exclusive Breastfeeding |
d3463260 | We propose a distributed architecture for deep reinforcement learning at scale, that enables agents to learn effectively from orders of magnitude more data than previously possible. The algorithm decouples acting from learning: the actors interact with their own instances of the environment by selecting actions according to a shared neural network, and accumulate the resulting experience in a shared experience replay memory; the learner replays samples of experience and updates the neural network. The architecture relies on prioritized experience replay to focus only on the most significant data generated by the actors. Our architecture substantially improves the state of the art on the Arcade Learning Environment, achieving better final performance in a fraction of the wall-clock training time. | Published as a conference paper at ICLR 2018 DISTRIBUTED PRIORITIZED EXPERIENCE REPLAY |
d13805769 | Distributed representations of meaning are a natural way to encode covariance relationships between words and phrases in NLP. By overcoming data sparsity problems, as well as providing information about semantic relatedness which is not available in discrete representations, distributed representations have proven useful in many NLP tasks. Recent work has shown how compositional semantic representations can successfully be applied to a number of monolingual applications such as sentiment analysis. At the same time, there has been some initial success in work on learning shared word-level representations across languages. We combine these two approaches by proposing a method for learning distributed representations in a multilingual setup. Our model learns to assign similar embeddings to aligned sentences and dissimilar ones to sentence which are not aligned while not requiring word alignments. We show that our representations are semantically informative and apply them to a cross-lingual document classification task where we outperform the previous state of the art. Further, by employing parallel corpora of multiple language pairs we find that our model learns representations that capture semantic relationships across languages for which no parallel data was used. | Multilingual Distributed Representations without Word Alignment |
d213729382 | Deep neural networks often have millions of parameters. This can hinder their deployment to low-end devices, not only due to high memory requirements but also because of increased latency at inference. We propose a novel model compression method that generates a sparse trained model without additional overhead: by allowing (i) dynamic allocation of the sparsity pattern and (ii) incorporating feedback signal to reactivate prematurely pruned weights, we obtain a performant sparse model in one single training pass (retraining is not needed, but can further improve the performance). We evaluate our method on CIFAR-10 and ImageNet, and show that the obtained sparse models can reach the state-of-the-art performance of dense models. Moreover, their performance surpasses that of models generated by all previously proposed pruning schemes. | Published as a conference paper at ICLR 2020 DYNAMIC MODEL PRUNING WITH FEEDBACK |
d174801567 | The lottery ticket hypothesis proposes that over-parameterization of deep neural networks (DNNs) aids training by increasing the probability of a "lucky" sub-network initialization being present rather than by helping the optimization process (Frankle & Carbin, 2019). Intriguingly, this phenomenon suggests that initialization strategies for DNNs can be improved substantially, but the lottery ticket hypothesis has only previously been tested in the context of supervised learning for natural image tasks. Here, we evaluate whether "winning ticket" initializations exist in two different domains: natural language processing (NLP) and reinforcement learning (RL). For NLP, we examined both recurrent LSTM models and large-scale Transformer models (Vaswani et al., 2017). For RL, we analyzed a number of discrete-action space tasks, including both classic control and pixel control. Consistent with work in supervised image classification, we confirm that winning ticket initializations generally outperform parameter-matched random initializations, even at extreme pruning rates for both NLP and RL. Notably, we are able to find winning ticket initializations for Transformers which enable models one-third the size to achieve nearly equivalent performance. Together, these results suggest that the lottery ticket hypothesis is not restricted to supervised learning of natural images, but rather represents a broader phenomenon in DNNs. | Published as a conference paper at ICLR 2020 PLAYING THE LOTTERY WITH REWARDS AND MULTIPLE LANGUAGES: LOTTERY TICKETS IN RL AND NLP |
d8404331 | Deep neural networks are easily fooled: High confidence predictions for unrecognizable images. In | Workshop track -ICLR 2017 ROBUSTNESS TO ADVERSARIAL EXAMPLES THROUGH AN ENSEMBLE OF SPECIALISTS |
d1107124 | We propose a reparameterization of LSTM that brings the benefits of batch normalization to recurrent neural networks. Whereas previous works only apply batch normalization to the input-to-hidden transformation of RNNs, we demonstrate that it is both possible and beneficial to batch-normalize the hidden-to-hidden transition, thereby reducing internal covariate shift between time steps. We evaluate our proposal on various sequential problems such as sequence classification, language modeling and question answering. Our empirical results show that our batch-normalized LSTM consistently leads to faster convergence and improved generalization. | Published as a conference paper at ICLR 2017 RECURRENT BATCH NORMALIZATION |
d209532006 | We propose and address a novel few-shot RL problem, where a task is characterized by a subtask graph which describes a set of subtasks and their dependencies that are unknown to the agent. The agent needs to quickly adapt to the task over few episodes during the adaptation phase to maximize the return in the test phase. Instead of directly learning a meta-policy, we develop a Meta-learner with Subtask Graph Inference (MSGI), which infers the latent parameter of the task by interacting with the environment and maximizes the return given the latent parameter. To facilitate learning, we adopt an intrinsic reward inspired by upper confidence bound (UCB) that encourages efficient exploration. Our experiment results on two grid-world domains and StarCraft II environments show that the proposed method is able to accurately infer the latent task parameter, and to adapt more efficiently than existing meta RL and hierarchical RL methods. | Published as a conference paper at ICLR 2020 META REINFORCEMENT LEARNING WITH AUTONOMOUS INFERENCE OF SUBTASK DEPENDENCIES
d212877887 | The success of reinforcement learning for real world robotics has been, in many cases, limited to instrumented laboratory scenarios, often requiring arduous human effort and oversight to enable continuous learning. In this work, we discuss the elements that are needed for a robotic learning system that can continually and autonomously improve with data collected in the real world. We propose a particular instantiation of such a system, using dexterous manipulation as our case study. Subsequently, we investigate a number of challenges that come up when learning without instrumentation. In such settings, learning must be feasible without manually designed resets, using only on-board perception, and without hand-engineered reward functions. We propose simple and scalable solutions to these challenges, and then demonstrate the efficacy of our proposed system on a set of dexterous robotic manipulation tasks, providing an in-depth analysis of the challenges associated with this learning paradigm. We demonstrate that our complete system can learn without any human intervention, acquiring a variety of vision-based skills with a real-world three-fingered hand. Results and videos can be found at https://sites.google.com/view/realworld-rl/. | THE INGREDIENTS OF REAL-WORLD ROBOTIC REINFORCEMENT LEARNING
d246411523 | The implicit bias induced by the training of neural networks has become a topic of rigorous study. In the limit of gradient flow and gradient descent with appropriate step size, it has been shown that when one trains a deep linear network with logistic or exponential loss on linearly separable data, the weights converge to rank-1 matrices. In this paper, we extend this theoretical result to the last few linear layers of the much wider class of nonlinear ReLU-activated feedforward networks containing fully-connected layers and skip connections. Similar to the linear case, the proof relies on specific local training invariances, sometimes referred to as alignment, which we show to hold for submatrices where neurons are stably-activated in all training examples, and it reflects empirical results in the literature. We also show this is not true in general for the full matrix of ReLU fully-connected layers. Our proof relies on a specific decomposition of the network into a multilinear function and another ReLU network whose weights are constant under a certain parameter directional convergence. It remains open whether these invariances, which, e.g., imply the low-rank result, generalize to other, structured or local nonlinear and possibly non-homogeneous architectures, and how to even characterize these. | Published as a conference paper at ICLR 2022 TRAINING INVARIANCES AND THE LOW-RANK PHE- NOMENON: BEYOND LINEAR NETWORKS
d235391000 | While large-scale pretrained language models have obtained impressive results when fine-tuned on a wide variety of tasks, they still often suffer from overfitting in low-resource scenarios. Since such models are general-purpose feature extractors, many of these features are inevitably irrelevant for a given target task. We propose to use Variational Information Bottleneck (VIB) to suppress irrelevant features when fine-tuning on low-resource target tasks, and show that our method successfully reduces overfitting. Moreover, we show that our VIB model finds sentence representations that are more robust to biases in natural language inference datasets, and thereby obtains better generalization to out-of-domain datasets. Evaluation on seven low-resource datasets in different tasks shows that our method significantly improves transfer learning in low-resource scenarios, surpassing prior work. Moreover, it improves generalization on 13 out of 15 out-of-domain natural language inference benchmarks. Our code is publicly available at https://github.com | Published as a conference paper at ICLR 2021 VARIATIONAL INFORMATION BOTTLENECK FOR EFFECTIVE LOW-RESOURCE FINE-TUNING
d254275077 | The ability to quickly and accurately identify covariate shift at test time is a critical and often overlooked component of safe machine learning systems deployed in high-risk domains. While methods exist for detecting when predictions should not be made on out-of-distribution test examples, identifying distributional level differences between training and test time can help determine when a model should be removed from the deployment setting and retrained. In this work, we define harmful covariate shift (HCS) as a change in distribution that may weaken the generalization of a predictive model. To detect HCS, we use the discordance between an ensemble of classifiers trained to agree on training data and disagree on test data. We derive a loss function for training this ensemble and show that the disagreement rate and entropy represent powerful discriminative statistics for HCS. Empirically, we demonstrate the ability of our method to detect harmful covariate shift with statistical certainty on a variety of high-dimensional datasets. Across numerous domains and modalities, we show state-of-the-art performance compared to existing methods, particularly when the number of observed test samples is small. | A LEARNING BASED HYPOTHESIS TEST FOR HARMFUL COVARIATE SHIFT
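The two discriminative statistics named in the abstract above, ensemble disagreement rate and predictive entropy, are simple to compute once an ensemble's test predictions are in hand. A minimal sketch follows; the helper names are invented here and this is not the authors' code, only an illustration of the statistics themselves.

```python
import numpy as np

def disagreement_rate(preds):
    """Fraction of examples on which ensemble members disagree.

    preds: (n_models, n_examples) array of hard class predictions.
    """
    # An example counts as a disagreement unless all models predict the same class.
    agree = np.all(preds == preds[0], axis=0)
    return 1.0 - agree.mean()

def mean_predictive_entropy(probs):
    """Entropy of the ensemble-averaged class probabilities, averaged over examples.

    probs: (n_models, n_examples, n_classes) array of softmax outputs.
    """
    p = probs.mean(axis=0)  # average the ensemble's predicted distributions
    return float(-(p * np.log(p + 1e-12)).sum(axis=1).mean())

# Toy check: three models, four examples, two classes.
preds = np.array([[0, 1, 1, 0],
                  [0, 1, 0, 0],
                  [0, 1, 1, 1]])
print(disagreement_rate(preds))  # models disagree on examples 2 and 3 -> 0.5
```

High values of either statistic on a test batch would, in the abstract's framing, signal a potentially harmful shift.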
d211133181 | Generative models are often used to sample high-dimensional data points from a manifold with small intrinsic dimension. Existing techniques for comparing generative models focus on global data properties such as mean and covariance; in that sense, they are extrinsic and uni-scale. We develop the first, to our knowledge, intrinsic and multi-scale method for characterizing and comparing underlying data manifolds, based on comparing all data moments by lower-bounding the spectral notion of the Gromov-Wasserstein distance between manifolds. In a thorough experimental study, we demonstrate that our method effectively evaluates the quality of generative models; further, we showcase its efficacy in discerning the disentanglement process in neural networks. | Intrinsic Multi-scale Evaluation of Generative Models
d219966188 | We analyze the convergence of the averaged stochastic gradient descent for overparameterized two-layer neural networks for regression problems. It was recently found that a neural tangent kernel (NTK) plays an important role in showing the global convergence of gradient-based methods under the NTK regime, where the learning dynamics for overparameterized neural networks can be almost characterized by that for the associated reproducing kernel Hilbert space (RKHS). However, there is still room for a convergence rate analysis in the NTK regime. In this study, we show that the averaged stochastic gradient descent can achieve the minimax optimal convergence rate, with the global convergence guarantee, by exploiting the complexities of the target function and the RKHS associated with the NTK. Moreover, we show that the target function specified by the NTK of a ReLU network can be learned at the optimal convergence rate through a smooth approximation of a ReLU network under certain conditions. | OPTIMAL RATES FOR AVERAGED STOCHASTIC GRADIENT DESCENT UNDER NEURAL TANGENT KERNEL REGIME
d238407772 | We consider the problem of training a classification model with group annotated training data. Recent work has established that, if there is distribution shift across different groups, models trained using the standard empirical risk minimization (ERM) objective suffer from poor performance on minority groups and that group distributionally robust optimization (Group-DRO) objective is a better alternative. The starting point of this paper is the observation that though Group-DRO performs better than ERM on minority groups for some benchmark datasets, there are several other datasets where it performs much worse than ERM. Inspired by ideas from the closely related problem of domain generalization, this paper proposes a new and simple algorithm that explicitly encourages learning of features that are shared across various groups. The key insight behind our proposed algorithm is that while Group-DRO focuses on groups with worst regularized loss, focusing instead on groups that enable better performance even on other groups could lead to learning of shared/common features, thereby enhancing minority performance beyond what is achieved by Group-DRO. Empirically, we show that our proposed algorithm matches or achieves better performance compared to strong contemporary baselines including ERM and Group-DRO on standard benchmarks on both minority groups and across all groups. Theoretically, we show that the proposed algorithm is a descent method and finds first order stationary points of smooth nonconvex functions. Our code and datasets can be found at this URL. | Published as a conference paper at ICLR 2022 FOCUS ON THE COMMON GOOD: GROUP DISTRIBUTIONAL ROBUSTNESS FOLLOWS
d3699386 | Despite being impactful on a variety of problems and applications, the generative adversarial nets (GANs) are remarkably difficult to train. This issue has been formally analyzed in prior work, which also proposes an alternative direction to avoid the caveats in the min-max two-player training of GANs. The corresponding algorithm, called Wasserstein GAN (WGAN), hinges on the 1-Lipschitz continuity of the discriminator. In this paper, we propose a novel approach to enforcing the Lipschitz continuity in the training procedure of WGANs. Our approach seamlessly connects WGAN with one of the recent semi-supervised learning methods. As a result, it gives rise to not only better photo-realistic samples than the previous methods but also state-of-the-art semi-supervised learning results. In particular, our approach gives rise to the inception score of more than 5.0 with only 1,000 CIFAR-10 images and is the first that exceeds the accuracy of 90% on the CIFAR-10 dataset using only 4,000 labeled images, to the best of our knowledge. | Published as a conference paper at ICLR 2018 IMPROVING THE IMPROVED TRAINING OF WASSERSTEIN GANS: A CONSISTENCY TERM AND ITS DUAL EFFECT
d211146532 | Overparameterization has been shown to benefit both the optimization and generalization of neural networks, but large networks are resource hungry at both training and test time. Network pruning can reduce test-time resource requirements, but is typically applied to trained networks and therefore cannot avoid the expensive training process. We aim to prune networks at initialization, thereby saving resources at training time as well. Specifically, we argue that efficient training requires preserving the gradient flow through the network. This leads to a simple but effective pruning criterion we term Gradient Signal Preservation (GraSP). We empirically investigate the effectiveness of the proposed method with extensive experiments on CIFAR-10, CIFAR-100, Tiny-ImageNet and ImageNet, using VGGNet and ResNet architectures. Our method can prune 80% of the weights of a VGG-16 network on ImageNet at initialization, with only a 1.6% drop in top-1 accuracy. Moreover, our method achieves significantly better performance than the baseline at extreme sparsity levels. Our code is made public at: | Published as a conference paper at ICLR 2020 PICKING WINNING TICKETS BEFORE TRAINING BY PRESERVING GRADIENT FLOW
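The gradient-flow-preservation idea sketched in this abstract can be illustrated in a few lines. The sketch below is a toy reading, not the authors' implementation: it scores each weight by theta_i * [Hg]_i (Hessian-gradient product) using a finite-difference Hessian-vector product, and keeps the highest-scoring weights; the function names and the exact sign convention are this note's assumptions.

```python
import numpy as np

def grasp_scores(theta, grad_fn, eps=1e-4):
    """GraSP-style scores s_i = theta_i * [H g]_i, with the Hessian-vector
    product H g approximated by central finite differences of the gradient."""
    g = grad_fn(theta)
    hg = (grad_fn(theta + eps * g) - grad_fn(theta - eps * g)) / (2 * eps)
    return theta * hg

def prune_mask(scores, sparsity):
    """Keep the (1 - sparsity) fraction of weights with the highest scores."""
    k = int(round(len(scores) * (1 - sparsity)))
    keep = np.argsort(scores)[-k:]
    mask = np.zeros_like(scores, dtype=bool)
    mask[keep] = True
    return mask

# Toy quadratic loss L(theta) = 0.5 * theta^T A theta, so grad = A theta and H = A.
A = np.diag([4.0, 1.0, 0.25, 0.0625])
grad_fn = lambda th: A @ th
theta = np.ones(4)
scores = grasp_scores(theta, grad_fn)    # here s_i = A_ii^2, largest first
mask = prune_mask(scores, sparsity=0.5)
print(mask)  # keeps the two largest-curvature weights
```

For the quadratic toy loss the finite-difference Hessian-vector product is exact, so the scores are simply the squared diagonal curvatures.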
d68221207 | Partial differential equations (PDEs) are widely used across the physical and computational sciences. Decades of research and engineering went into designing fast iterative solution methods. Existing solvers are general purpose, but may be sub-optimal for specific classes of problems. In contrast to existing hand-crafted solutions, we propose an approach to learn a fast iterative solver tailored to a specific domain. We achieve this goal by learning to modify the updates of an existing solver using a deep neural network. Crucially, our approach is proven to preserve strong correctness and convergence guarantees. After training on a single geometry, our model generalizes to a wide variety of geometries and boundary conditions, and achieves 2-3 times speedup compared to state-of-the-art solvers. | LEARNING NEURAL PDE SOLVERS WITH CONVERGENCE GUARANTEES
d256105154 | In computer vision, it is often observed that formulating regression problems as a classification task yields better performance. We investigate this curious phenomenon and provide a derivation to show that classification, with the cross-entropy loss, outperforms regression with a mean squared error loss in its ability to learn high-entropy feature representations. Based on the analysis, we propose an ordinal entropy regularizer to encourage higher-entropy feature spaces while maintaining ordinal relationships to improve the performance of regression tasks. Experiments on synthetic and real-world regression tasks demonstrate the importance and benefits of increasing entropy for regression. Code can be found here: https://github.com/needylove/OrdinalEntropy | Published as a conference paper at ICLR 2023 IMPROVING DEEP REGRESSION WITH ORDINAL ENTROPY
d257232381 | Deep neural networks are likely to fail when the test data is corrupted in real-world deployment (e.g., blur, weather, etc.). Test-time optimization is an effective way that adapts models to generalize to corrupted data during testing, which has been shown in the image domain. However, few techniques exist for improving corruption robustness in video classification. In this work, we propose a Temporal Coherent Test-time Optimization framework (TeCo) to utilize spatiotemporal information in test-time optimization for robust video classification. To exploit information in video with self-supervised learning, TeCo minimizes the entropy of the prediction based on the global content from video clips. Meanwhile, it also feeds local content to regularize the temporal coherence at the feature level. TeCo retains the generalization ability of various video classification models and achieves significant improvements in corruption robustness across Mini Kinetics-C and Mini SSV2-C. Furthermore, TeCo sets a new baseline in video classification corruption robustness via test-time optimization. | Published as a conference paper at ICLR 2023 TEMPORAL COHERENT TEST-TIME OPTIMIZATION FOR ROBUST VIDEO CLASSIFICATION
d232269775 | Distributionally robust optimization (DRO) provides a framework for training machine learning models that are able to perform well on a collection of related data distributions (the "uncertainty set"). This is done by solving a min-max game: the model is trained to minimize its maximum expected loss among all distributions in the uncertainty set. While careful design of the uncertainty set is critical to the success of the DRO procedure, previous work has been limited to relatively simple alternatives that keep the min-max optimization problem exactly tractable, such as f -divergence balls. In this paper, we argue instead for the use of neural generative models to characterize the worst-case distribution, allowing for more flexible and problem-specific selection of the uncertainty set. However, while simple conceptually, this approach poses a number of implementation and optimization challenges. To circumvent these issues, we propose a relaxation of the KL-constrained inner maximization objective that makes the DRO problem more amenable to gradient-based optimization of large scale generative models, and develop model selection heuristics to guide hyper-parameter search. On both toy settings and realistic NLP tasks, we find that the proposed approach yields models that are more robust than comparable baselines. | Published as a conference paper at ICLR 2021 MODELING THE SECOND PLAYER IN DISTRIBUTIONALLY ROBUST OPTIMIZATION
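For contrast with the generative worst-case models argued for in this abstract, the simpler KL-ball alternative it mentions has a closed-form inner maximizer: an exponential tilting of the empirical distribution. A hedged sketch of that classical baseline (the function names and the temperature `tau` are illustrative, not from the paper):

```python
import numpy as np

def kl_dro_weights(losses, tau):
    """Worst-case distribution for KL-penalized DRO.

    max_q  E_q[loss] - tau * KL(q || uniform)  has the closed form
    q_i proportional to exp(loss_i / tau), i.e. a softmax over per-example losses.
    """
    z = np.asarray(losses) / tau
    z = z - z.max()            # subtract the max for numerical stability
    w = np.exp(z)
    return w / w.sum()

def dro_objective(losses, tau):
    """Resulting robust objective: tau * log E_uniform[exp(loss / tau)]."""
    losses = np.asarray(losses)
    return float(tau * np.log(np.mean(np.exp(losses / tau))))

losses = np.array([0.1, 0.1, 2.0])
w = kl_dro_weights(losses, tau=0.5)
# The hard example dominates the worst-case distribution.
print(w.round(3))
```

As `tau` grows, the weights approach uniform and the objective approaches the ordinary average loss; as `tau` shrinks, the objective approaches the max loss.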
d252968162 | We consider Contextual Bandits with Concave Rewards (CBCR), a multi-objective bandit problem where the desired trade-off between the rewards is defined by a known concave objective function, and the reward vector depends on an observed stochastic context. We present the first algorithm with provably vanishing regret for CBCR without restrictions on the policy space, whereas prior works were restricted to finite policy spaces or tabular representations. Our solution is based on a geometric interpretation of CBCR algorithms as optimization algorithms over the convex set of expected rewards spanned by all stochastic policies. Building on Frank-Wolfe analyses in constrained convex optimization, we derive a novel reduction from the CBCR regret to the regret of a scalar-reward bandit problem. We illustrate how to apply the reduction off-the-shelf to obtain algorithms for CBCR with both linear and general reward functions, in the case of non-combinatorial actions. Motivated by fairness in recommendation, we describe a special case of CBCR with rankings and fairness-aware objectives, leading to the first algorithm with regret guarantees for contextual combinatorial bandits with fairness of exposure. The CBCR regret is bounded by the regret on a proxy bandit task with a single (scalar) reward. This reduction shows that it is straightforward to turn any contextual (scalar-reward) bandit into an algorithm for CBCR. We prove this reduction by first re-parameterizing CBCR as an optimization problem in the space of feasible rewards, and then revealing connections between Frank-Wolfe (FW) optimization in reward space and a decision problem in action space. 
This bypasses the challenges of optimization in policy space. To illustrate how to apply the reduction, we provide two example algorithms for CBCR with non-combinatorial actions, one for linear rewards based on LinUCB (Abbasi-Yadkori et al., 2011), and one for general reward functions based on the SquareCB algorithm (Foster & Rakhlin, 2020), which uses online regression oracles. In particular, we highlight that our reduction can be used together with any exploration/exploitation principle, while previous FW approaches to BCR relied exclusively on upper confidence bounds (Agrawal & Devanur, 2014; Berthet & Perchet, 2017; Cheung, 2019). Since fairness of exposure is our main motivation for CBCR, we show how our reduction also applies to the combinatorial task of fair ranking with contextual bandits, leading to the first algorithm with regret guarantees for this problem, and we show it is computationally efficient. We compare the empirical performance of our algorithm to relevant baselines on a music recommendation task. Related work. Agrawal et al. (2016) address a restriction of CBCR to a finite set of policies, where explicit search is possible. Cheung (2019) uses FW for reinforcement learning with concave rewards, a similar problem to CBCR. However, they rely on a tabular setting where there are few enough policies to compute them explicitly. Our approach is the only one to apply to CBCR without restriction on the policy space, by removing the need for explicit representation and search of optimal policies. Our work is also related to fairness of exposure in bandits. Most previous works on this topic either do not consider rankings (Celis et al., 2018; Wang et al., 2021; Patil et al., 2020; Chen et al., 2020), or apply to combinatorial bandits without contexts (Xu et al., 2021). Both these restrictions are impractical for recommender systems. 
Mansoury et al. (2021); Jeunen & Goethals (2021) propose heuristics with experimental support that apply to both ranking and contexts in this space, but they lack theoretical guarantees. We present the first algorithm with regret guarantees for fair ranking with contextual bandits. We provide a more detailed discussion of the related work in Appendix A. MAXIMIZATION OF CONCAVE REWARDS IN CONTEXTUAL BANDITS. Notation. For any n ∈ N, we denote [n] = {1, . . . , n}. The dot product of two vectors x and y in R^n is denoted either x^T y or, in bra-ket notation, ⟨x | y⟩, depending on which one is more readable. Setting. We define a stochastic contextual bandit (Langford & Zhang, 2007) problem with D rewards. At each time step t, the environment draws a context x_t ∼ P, where x ∈ X ⊆ R^q and P is a probability measure over X. The learner chooses an action a_t ∈ A, where A ⊆ R^K is the action space, and receives a noisy multi-dimensional reward r_t ∈ R^D with expectation E[r_t | x_t, a_t] = µ(x_t) a_t, where µ : X → R^{D×K} is the matrix-valued contextual expected reward function (footnote 1). The trade-off between the D cumulative rewards is specified by a known concave function f : R^D → R ∪ {±∞}. Let Ā denote the convex hull of A and let π : X → Ā be a stationary policy (footnote 2); then the optimal value for the problem is defined as f* = sup_{π : X → Ā} f(E_{x∼P}[µ(x) π(x)]). We rely on either of the following assumptions on f. Assumption A: f is closed proper concave (footnote 3) on R^D and A is a compact subset of R^K. Moreover, there is a compact convex set K ⊆ R^D such that (Bounded rewards) for all (x, a) ∈ X × A, µ(x) a ∈ K, and for all t ∈ N*, r_t ∈ K with probability 1; and (Local Lipschitzness) f is L-Lipschitz continuous with respect to ||·||_2 on an open set containing K. Assumption B: Assumption A holds and f has C-Lipschitz-continuous gradients w.r.t. ||·||_2 on K. 
Footnote 1: the linear structure between µ(x_t) and a_t is standard in combinatorial bandits (Cesa-Bianchi & Lugosi, 2012) and reduces to the usual multi-armed bandit setting when A is the canonical basis of R^K. Footnote 2: in the multi-armed setting, stationary policies return a distribution over arms given a context vector; in the combinatorial setup, π(x) ∈ Ā is the average feature vector of a stochastic policy over A. For the benchmark, we are only interested in expected rewards, so there is no need to specify the full distribution over A. Footnote 3: this means that f is concave and upper semi-continuous, is never equal to +∞, and is finite somewhere. The most general version of our algorithm, described in Appendix D, removes the need for the smoothness assumption using smoothing techniques; we describe an example in Section 3.3. In the rest of the paper, we denote by D_K = sup_{z, z′ ∈ K} ||z − z′||_2 the diameter of K, and use C̄ = C D_K^2 / 2. We now give two examples of this problem setting, motivated by real-world applications in recommender systems, which satisfy Assumption A. Example 1 (Optimizing multiple metrics in recommender systems). Mehrotra et al. (2020) formalized the problem of optimizing D engagement metrics (e.g. clicks, streaming time) in a bandit-based recommender system. At each t, x_t represents the current user's features. The system chooses one arm among K, represented by a vector a_t in the canonical basis of R^K, which is the action space A. Each entry of the observed reward vector (r_{t,i})_{i=1}^D corresponds to a metric's value. The trade-off between the metrics is defined by the Generalized Gini Function: f(z) = √ T ), and hence not substantially change our results. A GENERAL REDUCTION-BASED APPROACH FOR CBCR. In this section we describe our general approach for CBCR. We first derive our key reduction from CBCR to a specific scalar-reward bandit problem. 
We then instantiate our algorithm to the case of linear and general reward functions for smooth objectives f. Finally, we extend to the case of non-smooth objective functions using Moreau-Yosida regularization (Rockafellar & Wets, 2009). Footnote 4: Gini(z_1, . . . , z_m) = (1/(2m)) Σ_{i=1}^m Σ_{j=1}^m |z_i − z_j| is an unnormalized Gini coefficient. | Published as a conference paper at ICLR 2023 CONTEXTUAL BANDITS WITH CONCAVE REWARDS, AND AN APPLICATION TO FAIR RANKING
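The Frank-Wolfe view in the record above (linearize the concave objective, play the best action for the resulting scalar proxy, average the rewards) can be illustrated with a tiny full-information sketch. Everything here (the names, the toy objective, the absence of contexts and reward noise) is this note's simplification, not the paper's algorithm.

```python
import numpy as np

def frank_wolfe(vertices, grad_f, steps=200):
    """Maximize a concave objective over the convex hull of `vertices`
    (rows are per-action expected-reward vectors). Each step plays the
    action with the best linearized score <grad_f(z), mu_a>, mirroring
    the reduction from concave-reward bandits to a scalar-reward bandit."""
    z = vertices.mean(axis=0)            # start from an interior point
    for t in range(1, steps + 1):
        scores = vertices @ grad_f(z)    # one-step scalar proxy rewards
        a = int(np.argmax(scores))       # greedy action for the proxy task
        gamma = 2.0 / (t + 2)            # standard Frank-Wolfe step size
        z = (1 - gamma) * z + gamma * vertices[a]
    return z

# Toy instance: two reward dimensions, three actions; the concave objective
# f(z) = sqrt(z1) + sqrt(z2) favors balanced rewards, so its gradient is used below.
vertices = np.array([[1.0, 0.0], [0.0, 1.0], [0.6, 0.6]])
grad_f = lambda z: 0.5 / np.sqrt(z)
z_star = frank_wolfe(vertices, grad_f)
print(z_star.round(3))  # converges to the balanced vertex [0.6, 0.6]
```

In the paper's bandit setting the gradient step is replaced by an exploration/exploitation rule over estimated rewards; this sketch only shows the geometry of the reduction.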
d252780995 | To obtain deterministic guarantees of adversarial robustness, specialized training methods are used. We propose SABR, a novel certified training method based on the key insight that propagating interval bounds for a small but carefully selected subset of the adversarial input region is sufficient to approximate the worst-case loss over the whole region while significantly reducing approximation errors. We show in an extensive empirical evaluation that SABR outperforms existing certified defenses in terms of both standard and certifiable accuracies across perturbation magnitudes and datasets, pointing to a new class of certified training methods promising to alleviate the robustness-accuracy trade-off. | Published as a conference paper at ICLR 2023 CERTIFIED TRAINING: SMALL BOXES ARE ALL YOU NEED
d238419059 | In contrast to single-objective optimization (SOO), multi-objective optimization (MOO) requires an optimizer to find the Pareto frontier, a subset of feasible solutions that are not dominated by other feasible solutions. In this paper, we propose LaMOO, a novel multi-objective optimizer that learns a model from observed samples to partition the search space and then focus on promising regions that are likely to contain a subset of the Pareto frontier. The partitioning is based on the dominance number, which measures "how close" a data point is to the Pareto frontier among existing samples. To account for possible partition errors due to limited samples and model mismatch, we leverage Monte Carlo Tree Search (MCTS) to exploit promising regions while exploring suboptimal regions that may turn out to contain good solutions later. Theoretically, we prove the efficacy of learning space partitioning via LaMOO under certain assumptions. Empirically, on the HyperVolume (HV) benchmark, a popular MOO metric, LaMOO substantially outperforms strong baselines on multiple real-world MOO tasks, by up to 225% in sample efficiency for neural architecture search on Nasbench201, and up to 10% for molecular design. | MULTI-OBJECTIVE OPTIMIZATION BY LEARNING SPACE PARTITIONS |
d257365443 | In Natural Language Processing (NLP), neural models can be susceptible to textual Trojan attacks. Such attacks occur when Trojan models behave normally for standard inputs but generate malicious output for inputs that contain a specific trigger. Syntactic-structure triggers, which are invisible, are becoming more popular for Trojan attacks because they are difficult to detect and defend against. However, these types of attacks require a large corpus of training data to generate poisoned samples with the necessary syntactic structures for Trojan insertion. Obtaining such data can be difficult for attackers, and the process of generating syntactic poisoned triggers and inserting Trojans can be time-consuming. This paper proposes a solution called TrojText, which aims to determine whether invisible textual Trojan attacks can be performed more efficiently and cost-effectively without training data. The proposed approach, called the Representation-Logit Trojan Insertion (RLI) algorithm, uses smaller sampled test data instead of large training data to achieve the desired attack. The paper also introduces two additional techniques, namely the accumulated gradient ranking (AGR) and Trojan Weights Pruning (TWP), to reduce the number of tuned parameters and the attack overhead. The TrojText approach was evaluated on three datasets (AG's News, SST-2, and OLID) using three NLP models (BERT, XL-Net, and DeBERTa). The experiments demonstrated that the TrojText approach achieved a 98.35% classification accuracy for test sentences in the target class on the BERT model for the AG's News dataset. The source code for TrojText is available at https://github.com/UCF-ML-Research/TrojText. | 
d231603061 | We study the multi-agent safe control problem where agents should avoid collisions with static obstacles and with each other while reaching their goals. Our core idea is to learn the multi-agent control policy jointly with learning the control barrier functions as safety certificates. We propose a novel joint-learning framework that can be implemented in a decentralized fashion, with generalization guarantees for certain function classes. Such a decentralized framework can adapt to an arbitrarily large number of agents. Building upon this framework, we further improve the scalability by incorporating neural network architectures that are invariant to the quantity and permutation of neighboring agents. In addition, we propose a new spontaneous policy refinement method to further enforce the certificate condition during testing. We provide extensive experiments to demonstrate that our method significantly outperforms other leading multi-agent control approaches in terms of maintaining safety and completing original tasks. Our approach also shows exceptional generalization capability in that the control policy can be trained with 8 agents in one scenario, while being used on other scenarios with up to 1024 agents in complex multi-agent environments and dynamics. The learned policy can be applied to an arbitrary number of agents and in scenarios that differ from the training scenarios, which resolves the fundamental scalability issue in multi-agent control. We also propose several effective techniques in Section 4 to make such a learning process even more scalable and practical, which are then validated extensively in Section 5. Experimental results are indeed promising. We study both 2D and 3D safe multi-agent control problems, each with several distinct environments and complex nonholonomic dynamics. 
Our joint-learning framework performs exceptionally well: our control policies trained on scenarios with 8 agents can be used on up to 1024 agents while maintaining low collision rates, which has notably pushed the boundary of learning-based safe multi-agent control. Speaking of which, 1024 is not the limit of our approach but rather due to the limited computational capability of our laptop used for the experiments. We also compare our approach with both leading learning-based methods (Lowe et al., 2017; Zhang & Bastani, 2019; Liu et al., 2020) and traditional planning methods (Ma et al., 2019; Fan et al., 2020). Our approach outperforms all the other approaches in terms of both completing the tasks and maintaining safety. Contributions. Our main contributions are three-fold: 1) We propose the first framework to jointly learn safe multi-agent control policies and CBF certificates, in a decentralized fashion. 2) We present several techniques that make the learning framework more effective and scalable for practical multi-agent systems, including the use of quantity-permutation invariant neural network architectures in learning to handle the permutation of neighboring agents. 3) We demonstrate via extensive experiments that our method significantly outperforms other leading methods, and has exceptional generalization capability to unseen scenarios and an arbitrary number of agents, even in quite complex multi-agent environments such as ground robots and drones. The video that demonstrates the outstanding performance of our method can be found in the supplementary material. | Published as a conference paper at ICLR 2021 LEARNING SAFE MULTI-AGENT CONTROL WITH DECENTRALIZED NEURAL BARRIER CERTIFICATES
d234334847 | Well-designed molecular representations (fingerprints) are vital to combining medicinal chemistry and deep learning. Whereas incorporating the 3D geometry of molecules (i.e., conformations) in their representations seems beneficial, current 3D algorithms are still in their infancy. In this paper, we propose a novel molecular representation algorithm which preserves 3D conformations of molecules with a Molecular Hamiltonian Network (HamNet). In HamNet, implicit positions and momenta of atoms in a molecule interact in the Hamiltonian Engine following the discretized Hamiltonian equations. These implicit coordinates are supervised with real conformations via translation- and rotation-invariant losses, and further used as inputs to the Fingerprint Generator, a message-passing neural network. Experiments show that the Hamiltonian Engine can well preserve molecular conformations, and that the fingerprints generated by HamNet achieve state-of-the-art performances on MoleculeNet, a standard molecular machine learning benchmark. | Published as a conference paper at ICLR 2021 HAMNET: CONFORMATION-GUIDED MOLECULAR REPRESENTATION WITH HAMILTONIAN NEURAL NETWORKS
d10316648 | In this work, we present a novel neural-network-based architecture for inducing compositional crosslingual word representations. Unlike previously proposed methods, our method fulfills the following three criteria: it constrains the word-level representations to be compositional, it is capable of leveraging both bilingual and monolingual data, and it is scalable to large vocabularies and large quantities of data. The key component of our approach is what we refer to as a monolingual inclusion criterion, which exploits the observation that phrases are more closely semantically related to their sub-phrases than to other randomly sampled phrases. We evaluate our method on a well-established crosslingual document classification task and achieve results that are either comparable to, or greatly improve upon, previous state-of-the-art methods. Concretely, our method reaches 92.7% and 84.4% accuracy for the English-to-German and German-to-English sub-tasks respectively. The former advances the state of the art by 0.9 percentage points of accuracy; the latter is an absolute improvement upon the previous state of the art by 7.7 percentage points of accuracy and an improvement of 33.0% in error reduction. | LEVERAGING MONOLINGUAL DATA FOR CROSSLINGUAL COMPOSITIONAL WORD REPRESENTATIONS
d210838871 | As deep neural networks (DNNs) achieve tremendous success across many application domains, researchers have explored many aspects of why they generalize well. In this paper, we provide a novel perspective on these issues using the gradient signal-to-noise ratio (GSNR) of parameters during the training process of DNNs. The GSNR of a parameter is defined as the ratio between its gradient's squared mean and variance, over the data distribution. Based on several approximations, we establish a quantitative relationship between model parameters' GSNR and the generalization gap. This relationship indicates that a larger GSNR during the training process leads to better generalization performance. Moreover, we show that, different from shallow models (e.g., logistic regression, support vector machines), the gradient descent optimization dynamics of DNNs naturally produces large GSNR during training, which is probably the key to DNNs' remarkable generalization ability. The analysis may also extend to unsupervised DNNs such as variational auto-encoders (VAEs); here we focus on analyzing the relation between GSNR and the generalization gap. | Published as a conference paper at ICLR 2020 UNDERSTANDING WHY NEURAL NETWORKS GENERALIZE WELL THROUGH GSNR OF PARAMETERS
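A minimal sketch (my notation, not the authors' code) of estimating per-parameter GSNR from per-sample gradients, following the definition in the abstract:

```python
import numpy as np

def gsnr(per_sample_grads, eps=1e-12):
    """Gradient signal-to-noise ratio per parameter.

    per_sample_grads: array of shape (n_samples, n_params); row i holds the
    gradient of the loss on sample i w.r.t. the parameters. GSNR is the
    gradient's squared mean divided by its variance over the samples.
    """
    mean = per_sample_grads.mean(axis=0)
    var = per_sample_grads.var(axis=0)
    return mean ** 2 / (var + eps)  # eps guards against zero variance

rng = np.random.default_rng(0)
# Parameter 0: consistent gradient signal across samples -> high GSNR.
# Parameter 1: zero-mean noisy gradient -> GSNR near zero.
grads = np.hstack([1.0 + 0.1 * rng.standard_normal((1000, 1)),
                   rng.standard_normal((1000, 1))])
ratios = gsnr(grads)
```

By the paper's relationship, updates to the first parameter should contribute more to generalization than updates to the second.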
d59523607 | We introduce a framework for Continual Learning (CL) based on Bayesian inference over the function space rather than the parameters of a deep neural network. This method, referred to as functional regularisation for Continual Learning, avoids forgetting a previous task by constructing and memorising an approximate posterior belief over the underlying task-specific function. To achieve this we rely on a Gaussian process obtained by treating the weights of the last layer of a neural network as random and Gaussian distributed. Then, the training algorithm sequentially encounters tasks and constructs posterior beliefs over the task-specific functions by using inducing point sparse Gaussian process methods. At each step a new task is first learnt and then a summary is constructed consisting of (i) inducing inputs -a fixed-size subset of the task inputs selected such that it optimally represents the task -and (ii) a posterior distribution over the function values at these inputs. This summary then regularises learning of future tasks, through Kullback-Leibler regularisation terms. Our method thus unites approaches focused on (pseudo-)rehearsal with those derived from a sequential Bayesian inference perspective in a principled way, leading to strong results on accepted benchmarks. | Published as a conference paper at ICLR 2020 FUNCTIONAL REGULARISATION FOR CONTINUAL LEARNING WITH GAUSSIAN PROCESSES |
d210845646 | We propose to study the problem of few-shot graph classification in graph neural networks (GNNs) to recognize unseen classes, given limited labeled graph examples. Despite several interesting GNN variants being proposed recently for node and graph classification tasks, when faced with scarce labeled examples in the few-shot setting, these GNNs exhibit significant loss in classification performance. Here, we present an approach where a probability measure is assigned to each graph based on the spectrum of the graph's normalized Laplacian. This enables us to cluster the graph base-labels associated with each graph into super-classes, where the Lp-Wasserstein distance serves as our underlying distance metric. Subsequently, a super-graph constructed based on the super-classes is fed to our proposed GNN framework, which exploits the latent inter-class relationships made explicit by the super-graph to achieve better class-label separation among the graphs. We conduct exhaustive empirical evaluations of our proposed method and show that it outperforms both the adaptation of state-of-the-art graph classification methods to the few-shot scenario and our naive baseline GNNs. Additionally, we also extend and study the behavior of our method in semi-supervised and active learning scenarios. | FEW-SHOT LEARNING ON GRAPHS VIA SUPER-CLASSES BASED ON GRAPH SPECTRAL MEASURES
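As an illustrative sketch (mine, not the authors' pipeline), a graph's spectral measure can be taken as the empirical distribution of its normalized-Laplacian eigenvalues, and two graphs can then be compared with the p = 1 Wasserstein distance (for equal-sized empirical measures this reduces to the mean absolute difference of sorted samples):

```python
import numpy as np

def laplacian_spectrum(adj):
    """Eigenvalues of the symmetric normalized Laplacian I - D^{-1/2} A D^{-1/2}."""
    deg = adj.sum(axis=1)
    d = np.where(deg > 0, deg ** -0.5, 0.0)
    lap = np.eye(len(adj)) - d[:, None] * adj * d[None, :]
    return np.linalg.eigvalsh(lap)  # sorted ascending

def w1(a, b):
    """1-Wasserstein distance between two equal-size empirical measures."""
    return np.abs(np.sort(a) - np.sort(b)).mean()

path3 = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)     # path graph P3
triangle = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], float)  # complete graph K3
# Spectra: P3 -> {0, 1, 2}, K3 -> {0, 1.5, 1.5}; their 1-Wasserstein distance is 1/3.
dist = w1(laplacian_spectrum(path3), laplacian_spectrum(triangle))
```

Clustering base-labels into super-classes then amounts to clustering under this metric.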
d252682915 | Accurate delineation of fine-scale structures is a very important yet challenging problem. Existing methods use topological information as an additional training loss, but are ultimately making pixel-wise predictions. In this paper, we propose the first deep-learning-based method to learn topological/structural representations. We use discrete Morse theory and persistent homology to construct a one-parameter family of structures as the topological/structural representation space. Furthermore, we learn a probabilistic model that can perform inference tasks in such a topological/structural representation space. Our method generates true structures rather than pixel maps, leading to better topological integrity in automatic segmentation tasks. It also facilitates semi-automatic interactive annotation/proofreading via the sampling of structures and structure-aware uncertainty. | LEARNING PROBABILISTIC TOPOLOGICAL REPRESENTATIONS USING DISCRETE MORSE THEORY
d219531522 | Non-autoregressive text-to-speech (TTS) models such as FastSpeech (Ren et al., 2019) can synthesize speech significantly faster than previous autoregressive models with comparable quality. The training of the FastSpeech model relies on an autoregressive teacher model for duration prediction (to provide more information as input) and knowledge distillation (to simplify the data distribution in output), which can ease the one-to-many mapping problem (i.e., multiple speech variations correspond to the same text) in TTS. However, FastSpeech has several disadvantages: 1) the teacher-student distillation pipeline is complicated and time-consuming, 2) the duration extracted from the teacher model is not accurate enough, and the target mel-spectrograms distilled from the teacher model suffer from information loss due to data simplification, both of which limit the voice quality. In this paper, we propose FastSpeech 2, which addresses the issues in FastSpeech and better solves the one-to-many mapping problem in TTS by 1) directly training the model with the ground-truth target instead of the simplified output from the teacher, and 2) introducing more variation information of speech (e.g., pitch, energy and more accurate duration) as conditional inputs. Specifically, we extract duration, pitch and energy from the speech waveform and directly take them as conditional inputs in training, and use predicted values in inference. We further design FastSpeech 2s, which is the first attempt to directly generate speech waveform from text in parallel, enjoying the benefit of fully end-to-end inference. Experimental results show that 1) FastSpeech 2 achieves a 3x training speed-up over FastSpeech, and FastSpeech 2s enjoys even faster inference speed; 2) FastSpeech 2 and 2s outperform FastSpeech in voice quality, and FastSpeech 2 can even surpass autoregressive models. Audio samples are available at https://speechresearch.github.io/fastspeech2/. * Authors contributed equally to this work. 
† Corresponding author. arXiv:2006.04558v8 [eess.AS] 8 Aug 2022. Autoregressive TTS models usually suffer from slow inference speed and robustness (word skipping and repeating) issues (Ren et al., 2019; Chen et al., 2020). In recent years, non-autoregressive TTS models (Ren et al., 2019; Łańcucki, 2020; Lim et al., 2020; Miao et al., 2020) have been designed to address these issues: they generate mel-spectrograms with extremely fast speed and avoid robustness issues, while achieving comparable voice quality with previous autoregressive models. Among those non-autoregressive TTS methods, FastSpeech (Ren et al., 2019) is one of the most successful models. FastSpeech designs two ways to alleviate the one-to-many mapping problem: 1) Reducing data variance on the target side by using the mel-spectrogram generated from an autoregressive teacher model as the training target (i.e., knowledge distillation). 2) Introducing the duration information (extracted from the attention map of the teacher model) to expand the text sequence to match the length of the mel-spectrogram sequence. While these designs in FastSpeech ease the learning of the one-to-many mapping problem (see Section 2.1) in TTS, they also bring several disadvantages: 1) The two-stage teacher-student training pipeline makes the training process complicated. 2) The target mel-spectrograms generated from the teacher model have some information loss compared with the ground-truth ones, since the quality of the audio synthesized from the generated mel-spectrograms is usually worse than that from the ground-truth ones. 3) The duration extracted from the attention map of the teacher model is not accurate enough. | FASTSPEECH 2: FAST AND HIGH-QUALITY END-TO-END TEXT TO SPEECH
d226964694 | Top-k predictions are used in many real-world applications such as machine learning as a service, recommender systems, and web searches. An ℓ0-norm adversarial perturbation characterizes an attack that arbitrarily modifies some features of an input such that a classifier makes an incorrect prediction for the perturbed input. ℓ0-norm adversarial perturbations are easy to interpret and can be implemented in the physical world. Therefore, certifying robustness of top-k predictions against ℓ0-norm adversarial perturbations is important. However, existing studies either focused on certifying ℓ0-norm robustness of top-1 predictions or ℓ2-norm robustness of top-k predictions. In this work, we aim to bridge the gap. Our approach is based on randomized smoothing, which builds a provably robust classifier from an arbitrary classifier via randomizing an input. Our major theoretical contribution is an almost tight ℓ0-norm certified robustness guarantee for top-k predictions. We empirically evaluate our method on CIFAR10 and ImageNet. For instance, our method can build a classifier that achieves a certified top-3 accuracy of 69.2% on ImageNet when an attacker can arbitrarily perturb 5 pixels of a testing image. arXiv:2011.07633v2 [cs.CR] 3 Jun 2022. However, most existing certified defenses focus on top-1 predictions. In many applications, top-k predictions that return the k most likely labels are more relevant. For instance, when a classifier is deployed as a cloud service (also called machine learning as a service) (Google Cloud Vision; Microsoft; Amazon AWS; Clarifai), top-k labels for a testing input are often returned to a customer for more informed decisions; in recommender systems and web searches, top-k items/webpages are recommended to a user. Despite the importance and relevance of top-k predictions, their certified robustness against adversarial perturbations is largely unexplored. 
One exception is the recent work of Jia et al. (2020), which derived a tight ℓ2-norm certified robustness guarantee for top-k predictions. Such ℓ2-norm certified robustness can be transformed into ℓ0-norm certified robustness by employing the inequality between the ℓ0-norm and the ℓ2-norm. However, the ℓ0-norm certified robustness derived from such transformations is suboptimal. | Published as a conference paper at ICLR 2022 ALMOST TIGHT L0-NORM CERTIFIED ROBUSTNESS OF TOP-k PREDICTIONS AGAINST ADVERSARIAL PERTURBATIONS
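To make the suboptimality concrete, here is a sketch (my illustration; `max_abs_change` is an assumed per-feature perturbation bound, not a quantity from the paper) of the norm-inequality conversion: modifying k features, each by at most c in magnitude, yields ‖δ‖₂ ≤ √k · c, so an ℓ2 certificate of radius R only certifies k ≤ (R/c)² arbitrarily modified features:

```python
import math

def l0_from_l2(l2_radius, max_abs_change=1.0):
    """Largest number k of features an l2 certificate of radius R still covers,
    when each modified feature changes by at most max_abs_change in magnitude:
    perturbing k features gives ||delta||_2 <= sqrt(k) * max_abs_change,
    hence the certificate holds whenever k <= (R / max_abs_change) ** 2."""
    return math.floor((l2_radius / max_abs_change) ** 2)
```

A direct ℓ0 certificate, as the paper derives, can be much larger than the radius obtained through this conversion.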
d10766488 | In this paper, we introduce a novel deep learning framework, termed Purine. In Purine, a deep network is expressed as a bipartite graph (bi-graph), which is composed of interconnected operators and data tensors. With the bi-graph abstraction, networks are easily solvable with an event-driven task dispatcher. We then demonstrate that different parallelism schemes over GPUs and/or CPUs on single or multiple PCs can be universally implemented by graph composition. This frees researchers from coding for various parallelization schemes, and the same dispatcher can be used for solving variant graphs. Scheduled by the task dispatcher, memory transfers are fully overlapped with other computations, which greatly reduces the communication overhead and helps us achieve approximately linear acceleration. | PURINE: A BI-GRAPH BASED DEEP LEARNING FRAMEWORK
d254044710 | Recent improvements in conditional generative modeling have made it possible to generate high-quality images from language descriptions alone. We investigate whether these methods can directly address the problem of sequential decision-making. We view decision-making not through the lens of reinforcement learning (RL), but rather through conditional generative modeling. To our surprise, we find that our formulation leads to policies that can outperform existing offline RL approaches across standard benchmarks. By modeling a policy as a return-conditional diffusion model, we illustrate how we may circumvent the need for dynamic programming and subsequently eliminate many of the complexities that come with traditional offline RL. We further demonstrate the advantages of modeling policies as conditional diffusion models by considering two other conditioning variables: constraints and skills. Conditioning on a single constraint or skill during training leads to behaviors at test-time that can satisfy several constraints together or demonstrate a composition of skills. Our results illustrate that conditional generative modeling is a powerful tool for decision-making. | IS CONDITIONAL GENERATIVE MODELING ALL YOU NEED FOR DECISION-MAKING?
d248239874 | We consider the offline constrained reinforcement learning (RL) problem, in which the agent aims to compute a policy that maximizes expected return while satisfying given cost constraints, learning only from a pre-collected dataset. This problem setting is appealing in many real-world scenarios, where direct interaction with the environment is costly or risky, and where the resulting policy should comply with safety constraints. However, it is challenging to compute a policy that guarantees satisfying the cost constraints in the offline RL setting, since the off-policy evaluation inherently has an estimation error. In this paper, we present an offline constrained RL algorithm that optimizes the policy in the space of the stationary distribution. Our algorithm, COptiDICE, directly estimates the stationary distribution corrections of the optimal policy with respect to returns, while constraining the cost upper bound, with the goal of yielding a cost-conservative policy for actual constraint satisfaction. Experimental results show that COptiDICE attains better policies in terms of constraint satisfaction and return-maximization, outperforming baseline algorithms. * Work done during an internship at DeepMind. | COPTIDICE: OFFLINE CONSTRAINED REINFORCEMENT LEARNING VIA STATIONARY DISTRIBUTION CORRECTION ESTIMATION
d256503890 | De novo molecular generation is an essential task for science discovery. Recently, fragment-based deep generative models have attracted much research attention due to their flexibility in generating novel molecules based on existing molecule fragments. However, the motif vocabulary, i.e., the collection of frequent fragments, is usually built upon heuristic rules, which brings difficulties to capturing common substructures from large amounts of molecules. In this work, we propose a new method, MiCaM, to generate molecules based on mined connection-aware motifs. Specifically, it leverages a data-driven algorithm to automatically discover motifs from a molecule library by iteratively merging subgraphs based on their frequency. The obtained motif vocabulary consists of not only molecular motifs (i.e., the frequent fragments), but also their connection information, indicating how the motifs are connected with each other. Based on the mined connection-aware motifs, MiCaM builds a connection-aware generator, which simultaneously picks up motifs and determines how they are connected. We test our method on distribution-learning benchmarks (i.e., generating novel molecules to resemble the distribution of a given training set) and goal-directed benchmarks (i.e., generating molecules with target properties), and achieve significant improvements over previous fragment-based baselines. Furthermore, we demonstrate that our method can effectively mine domain-specific motifs for different tasks. | Published as a conference paper at ICLR 2023 DE NOVO MOLECULAR GENERATION VIA CONNECTION-AWARE MOTIF MINING
d250408169 | Transformers have quickly shone in the computer vision world since the emergence of Vision Transformers (ViTs). The dominant role of convolutional neural networks (CNNs) seems to be challenged by increasingly effective transformer-based models. Very recently, a couple of advanced convolutional models have struck back with large kernels motivated by the local-window attention mechanism, showing appealing performance and efficiency. While one of them, i.e., RepLKNet, impressively manages to scale the kernel size to 31×31 with improved performance, the performance starts to saturate as the kernel size continues growing, compared to the scaling trend of advanced ViTs such as Swin Transformer. In this paper, we explore the possibility of training extreme convolutions larger than 31×31 and test whether the performance gap can be eliminated by strategically enlarging convolutions. This study ends up with a recipe for applying extremely large kernels from the perspective of sparsity, which can smoothly scale up kernels to 61×61 with better performance. Built on this recipe, we propose the Sparse Large Kernel Network (SLaK), a pure CNN architecture equipped with sparse factorized 51×51 kernels that can perform on par with or better than state-of-the-art hierarchical Transformers and modern ConvNet architectures like ConvNeXt and RepLKNet, on ImageNet classification as well as a wide range of downstream tasks including semantic segmentation on ADE20K, object detection on PASCAL VOC 2007, and object detection/segmentation on MS COCO. | MORE CONVNETS IN THE 2020S: SCALING UP KERNELS BEYOND 51 × 51 USING SPARSITY
d211171395 | Inductive representation learning on temporal graphs is an important step toward scalable machine learning on real-world dynamic networks. The evolving nature of temporal dynamic graphs requires handling new nodes as well as capturing temporal patterns. The node embeddings, which are now functions of time, should represent both the static node features and the evolving topological structures. Moreover, node and topological features can be temporal as well, whose patterns the node embeddings should also capture. We propose the temporal graph attention (TGAT) layer to efficiently aggregate temporal-topological neighborhood features as well as to learn the time-feature interactions. For TGAT, we use the self-attention mechanism as the building block and develop a novel functional time encoding technique based on the classical Bochner's theorem from harmonic analysis. By stacking TGAT layers, the network recognizes the node embeddings as functions of time and is able to inductively infer embeddings for both new and observed nodes as the graph evolves. The proposed approach handles both node classification and link prediction tasks, and can be naturally extended to include temporal edge features. We evaluate our method with transductive and inductive tasks under temporal settings with two benchmark datasets and one industrial dataset. Our TGAT model compares favorably to state-of-the-art baselines as well as the previous temporal graph embedding approaches. * Both authors contributed equally to this research. | Published as a conference paper at ICLR 2020 INDUCTIVE REPRESENTATION LEARNING ON TEMPORAL GRAPHS
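A sketch of the kind of functional time encoding that Bochner's theorem motivates (a fixed random-feature version; this simplified form is my illustration, whereas TGAT learns the frequencies): harmonic features whose inner product depends only on the time difference, i.e. a translation-invariant temporal kernel.

```python
import numpy as np

def time_encoding(t, omegas):
    """Map a scalar time (or array of times) t to harmonic features
    [cos(w_i t), sin(w_i t)] / sqrt(d).  Since
    cos(a)cos(b) + sin(a)sin(b) = cos(a - b), the inner product of two
    encodings depends only on the time difference t1 - t2."""
    t = np.atleast_1d(np.asarray(t, float))[..., None]
    feats = np.concatenate([np.cos(omegas * t), np.sin(omegas * t)], axis=-1)
    return feats / np.sqrt(len(omegas))

rng = np.random.default_rng(0)
omegas = rng.exponential(size=16)        # assumed frequency distribution
enc = lambda t: time_encoding(t, omegas)[0]
```

For example, `enc(5.0) @ enc(3.0)` equals `enc(10.0) @ enc(8.0)`, since both pairs are separated by the same time gap.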
d231786434 | Zero-shot learning (ZSL) aims to classify images of an unseen class based only on a few attributes describing that class, with no access to any training samples. A popular strategy is to learn a mapping between the semantic space of class attributes and the visual space of images based on the seen classes and their data. Thus, an unseen-class image can ideally be mapped to its corresponding class attributes. The key challenge is how to align the representations in the two spaces. For most ZSL settings, the attributes for each seen/unseen class are represented only by a vector, while the seen-class data provide much more information. Thus, the imbalanced supervision from the semantic and the visual space can make the learned mapping easily overfit to the seen classes. To resolve this problem, we propose the Isometric Propagation Network (IPN), which learns to strengthen the relation between classes within each space and align the class dependency in the two spaces. Specifically, IPN learns to propagate the class representations on an auto-generated graph within each space. In contrast to only aligning the resulting static representations, we regularize the two dynamic propagation procedures to be isometric in terms of the two graphs' edge weights per step by minimizing a consistency loss between them. IPN achieves state-of-the-art performance on three popular ZSL benchmarks. To evaluate the generalization capability of IPN, we further build two larger benchmarks with more diverse unseen classes, and demonstrate the advantages of IPN on them. | Published as a conference paper at ICLR 2021 ISOMETRIC PROPAGATION NETWORK FOR GENERALIZED ZERO-SHOT LEARNING
d3286670 | Deep neural networks are surprisingly efficient at solving practical tasks, but the theory behind this phenomenon is only starting to catch up with the practice. Numerous works show that depth is the key to this efficiency. A certain class of deep convolutional networks - namely those that correspond to the Hierarchical Tucker (HT) tensor decomposition - has been proven to have exponentially higher expressive power than shallow networks. That is, a shallow network of exponential width is required to realize the same score function as computed by the deep architecture. In this paper, we prove the expressive power theorem (an exponential lower bound on the width of the equivalent shallow network) for a class of recurrent neural networks - ones that correspond to the Tensor Train (TT) decomposition. This means that even processing an image patch by patch with an RNN can be exponentially more efficient than a (shallow) convolutional network with one hidden layer. Using theoretical results on the relation between the tensor decompositions, we compare the expressive powers of the HT- and TT-Networks. We also implement the recurrent TT-Networks and provide numerical evidence of their expressivity. | EXPRESSIVE POWER OF RECURRENT NEURAL NETWORKS
d259108495 | Recently, sequence learning methods have been applied to the problem of off-policy Reinforcement Learning, including the seminal work on Decision Transformers, which employs transformers for this task. Since transformers are parameter-heavy, cannot benefit from history longer than a fixed window size, and are not computed using recurrence, we set out to investigate the suitability of the S4 family of models, which are based on state-space layers and have been shown to outperform transformers, especially in modeling long-range dependencies. In this work we present two main algorithms: (i) an off-policy training procedure that works with trajectories, while still maintaining the training efficiency of the S4 model. (ii) An on-policy training procedure that is trained in a recurrent manner, benefits from long-range dependencies, and is based on a novel stable actor-critic mechanism. Our results indicate that our method outperforms multiple variants of decision transformers, as well as the other baseline methods on most tasks, while reducing the latency, number of parameters, and training time by several orders of magnitude, making our approach more suitable for real-world RL. * These authors contributed equally to this work. † Tel Aviv University ‡ Meta AI Research. arXiv:2306.05167v1 [cs.LG] 8 Jun 2023. We combine off-policy training with on-policy fine-tuning. This scheme allows us to run on-policy algorithms, while exploiting the advantages of S4. In the beginning, we trained the model in an off-policy manner on sequences, via the convolutional view. This process exploits the ability of S4 to operate extremely fast on sequences, thanks to the fact that computations can be performed with the FFT instead of several recurrent operations. Later, at the fine-tuning stage, we used an on-policy algorithm. 
While pure on-policy training is a difficult task due to the instability and randomness that arise at the beginning of training, our method starts the on-policy training at a more stable point. From the technical perspective, our method applies recurrence during the training of the S4 model. As far as we can ascertain, such a capability has not been demonstrated for S4, although it was part of the advantages of the earlier HiPPO (Gu et al., 2020) model, which has fixed (unlearned) recurrent matrices and a different parameterization, and is outperformed by S4. Furthermore, in Appendix E we show that the recurrent view of the diagonal state-space layer is unstable from both a theoretical and an empirical perspective, and we propose a method to mitigate this problem in on-policy RL. This observation provides a further theoretical explanation for why state-space layers empirically outperform RNNs. Moreover, we present a novel transfer learning technique that involves training both the recurrent and convolutional views of S4 and show its applicability to RL. We conduct experiments on multiple Mujoco (Todorov et al., 2012) benchmarks and show the advantage of our method over existing off-policy methods, including the decision transformer, and over similar on-policy methods. RELATED WORK. Classic RL methods, such as dynamic programming (Veinott, 1966; Blackwell, 1962) and Q-learning variants (Schwartz, 1993; Hasselt, 2010; Rummery & Niranjan, 1994), are often outperformed by deep RL methods, starting with the seminal deep Q-learning method (Mnih et al., 2015) and followed by thousands of follow-up contributions. 
Some of the most prominent methods are AlphaGo (Silver et al., 2016), AlphaZero (Silver et al., 2018), and Pluribus (Brown & Sandholm, 2019), which outperform humans in chess, go and shogi, and poker, respectively. Sequence Models in RL. There are many RL methods that employ recurrent neural networks (RNNs), such as vanilla RNNs (Schäfer, 2008; Li et al., 2015) or LSTMs (Bakker, 2001; 2007). Recurrent models are suitable for RL tasks for two reasons. First, these models are fast in inference, which is necessary for a system that operates and responds to the environment in real time. Second, since the agent should make decisions recursively based on the decisions made in the past, RL tasks are recursive in nature. | Published as a conference paper at ICLR 2023 DECISION S4: EFFICIENT SEQUENCE-BASED RL VIA STATE SPACE LAYERS
d234501004 | Academic trade requires juggling multiple variants of the same content published in different formats: manuscripts, presentations, posters and computational notebooks. The need to track versions to accommodate the write-review-rebut-revise life-cycle adds another layer of complexity. We propose to significantly reduce this burden by maintaining a single source document in a version-controlled environment (such as git), adding functionality to generate a collection of output formats popular in academia. To this end, we utilise various open-source tools from the Jupyter scientific computing ecosystem and operationalise selected software engineering concepts. We offer a proof-of-concept workflow that composes Jupyter Book (an online document), Jupyter Notebook (a computational narrative) and reveal.js slides from a single markdown source file. Hosted on GitHub, our approach supports change tracking and versioning, as well as a transparent review process based on the underlying code issue management infrastructure. An exhibit of our workflow can be previewed at | Published at Rethinking ML Papers -ICLR 2021 Workshop YOU ONLY WRITE THRICE: CREATING DOCUMENTS, COMPUTATIONAL NOTEBOOKS AND PRESENTATIONS FROM A SINGLE SOURCE
d16053260 | Learning compact, interpretable image representations is a very natural task which has not been solved satisfactorily even for simple classes of binary images. In this paper, we review various ways of composing parts (or experts) for binary data and argue that competitive forms of interaction are best suited to learn lowdimensional representations. We propose a new rule which discourages parts from learning similar structures and which penalizes opposing expert opinions strongly so that abstaining from voting becomes more attractive. Using a process of oversimplification and correction we show in experiments that very intuitive models can be obtained. | COMPACT PART-BASED IMAGE REPRESENTATIONS |
d253080440 | This paper focuses on computing the convex conjugate operation that arises when solving Euclidean Wasserstein-2 optimal transport problems. This conjugation, which is also referred to as the Legendre-Fenchel conjugate or c-transform, is considered difficult to compute, and in practice, Wasserstein-2 methods are limited by not being able to exactly conjugate the dual potentials in continuous space. To overcome this, the computation of the conjugate can be approximated with amortized optimization, which learns a model to predict the conjugate. I show that combining amortized approximations to the conjugate with a solver for fine-tuning significantly improves the quality of transport maps learned for the Wasserstein-2 benchmark by Korotin et al. (2021a) and is able to model many 2-dimensional couplings and flows considered in the literature. All of the baselines, methods, and solvers in this paper are available at http://github.com/facebookresearch/w2ot. Here L^1(α) is the space of measurable functions that are Lebesgue-integrable over α, and f* is the convex conjugate, or Legendre-Fenchel transform, of a function f, defined by f*(y) = sup_x ⟨y, x⟩ − f(x). | Published as a conference paper at ICLR 2023 ON AMORTIZING CONVEX CONJUGATES FOR OPTIMAL TRANSPORT |
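The convex conjugate in the abstract above, f*(y) = sup_x ⟨y, x⟩ − f(x), can be illustrated with a brute-force grid approximation of the supremum. This is a deliberately simple sketch, not the paper's amortized method; the function name `conjugate` is illustrative:

```python
import numpy as np

def conjugate(f, xs, y):
    # Brute-force Legendre-Fenchel transform: f*(y) = sup_x <y, x> - f(x),
    # approximated by a max over a fixed grid of candidate points xs.
    return np.max(y * xs - f(xs))

# f(x) = x^2 / 2 is self-conjugate: f*(y) = y^2 / 2.
xs = np.linspace(-5.0, 5.0, 10001)  # grid step 0.001
f = lambda x: 0.5 * x ** 2
print(conjugate(f, xs, 1.5))   # ≈ 0.5 * 1.5^2 = 1.125
print(conjugate(f, xs, -2.0))  # ≈ 0.5 * (-2)^2 = 2.0
```

The amortized approach in the paper replaces this exhaustive search with a learned model predicting the maximizing x, plus a fine-tuning solver.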
d249605400 | While machine learning models rapidly advance the state-of-the-art on various real-world tasks, out-of-domain (OOD) generalization remains a challenging problem given the vulnerability of these models to spurious correlations. We propose a balanced mini-batch sampling strategy to transform a biased data distribution into a spurious-free balanced distribution, based on the invariance of the underlying causal mechanisms for the data generation process. We argue that the Bayes optimal classifiers trained on such a balanced distribution are minimax optimal across a diverse enough environment space. We also provide an identifiability guarantee of the latent variable model of the proposed data generation process, when utilizing enough training environments. Experiments are conducted on DomainBed, demonstrating empirically that our method obtains the best performance across 20 baselines reported on the benchmark. (Figure 1: (a) observed distribution p(X, Y | E = e); (b) balanced distribution p(X, Y | E = e).) | Published as a conference paper at ICLR 2023 CAUSAL BALANCING FOR DOMAIN GENERALIZATION |
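The balanced mini-batch idea above can be illustrated with a deliberately simplified sketch. Note the paper balances over latent causal factors inferred from the data, not directly over observed group labels as below; `balanced_batch` and its signature are hypothetical:

```python
import random
from collections import defaultdict

def balanced_batch(examples, group_labels, batch_size, rng):
    # Simplified sketch: draw a mini-batch with equal counts per group,
    # so the sampled distribution is balanced even when the dataset is biased.
    by_group = defaultdict(list)
    for x, g in zip(examples, group_labels):
        by_group[g].append(x)
    groups = sorted(by_group)
    per_group = batch_size // len(groups)
    batch = []
    for g in groups:
        # Sample with replacement so rare groups can still fill their share.
        batch.extend(rng.choices(by_group[g], k=per_group))
    return batch

# A biased dataset: 90 examples of group 0 (indices 0-89), 10 of group 1 (90-99).
data = list(range(100))
groups = [0] * 90 + [1] * 10
batch = balanced_batch(data, groups, 20, random.Random(0))
print(len(batch))  # 20, with 10 drawn from each group
```

Sampling with replacement is what lets the minority group contribute half of every batch despite being 10% of the data.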
d249191434 | The problem of optimization on the Stiefel manifold, i.e., minimizing functions of (not necessarily square) matrices that satisfy orthogonality constraints, has been extensively studied. Yet a new approach is proposed here, based for the first time on an interplay between thoughtfully designed continuous and discrete dynamics. It leads to a gradient-based optimizer with intrinsically added momentum. This method exactly preserves the manifold structure but does not require additional operations to keep the momentum in the changing (co)tangent space, and thus has low computational cost and pleasant accuracy. Its generalization to adaptive learning rates is also demonstrated. Notable performance is observed in practical tasks. For instance, we found that placing orthogonal constraints on the attention heads of a trained-from-scratch Vision Transformer (Dosovitskiy et al., 2020) could markedly improve its performance when our optimizer is used, and it is better that each head is made orthogonal within itself but not necessarily to other heads. This optimizer also makes the useful notion of Projection Robust Wasserstein Distance (Paty and Cuturi, 2019; Lin et al., 2020) for high-dimensional optimal transport even more effective. Code: https://github.com/konglk1203/VariationalStiefelOptimizer (Footnote 1: It helped computer vision applications prior to the deep learning era as well; e.g., Liu et al., 2003.) The approach first constructs ODEs corresponding to damped mechanical systems on a constrained manifold, then designs a delicate time discretization of these ODEs, which yields optimization algorithms that precisely preserve the constraints and mimic the continuous dynamics. Our optimizer has several pleasant properties: 1) It exactly preserves the manifold structure, not only of the Stiefel manifold, but in fact of its tangent bundle.
In other words, throughout the course of optimization, the position variable remains exactly on the Stiefel manifold, and the momentum variable remains exactly in the (co)tangent space. 2) Typically, in order to maintain the manifold structure, some kind of projection/retraction/exponential-map operation is needed, and since we have both position and momentum, such an operation is needed for both variables (i.e., to maintain the cotangent bundle structure). However, our carefully designed ODE and its discretization make the structure preservation of momentum automatic, meaning that no extra operation (projection, retraction, parallel transport, etc.) is needed for the momentum variable. This not only leads to improved computational efficiency, but also serves as indirect evidence of a reduced overall (i.e., both position and momentum) local error. 3) We use a quadratically convergent iterative solver for our specific position-retraction operation, which makes it fast. 4) Due to 2) + 3), our per-iteration computational complexity, O(nm^2), has a small constant factor (see Sec. C for details). 5) Our discretization is also numerically stable, so it preserves the structure well even under low machine precision and numerous iterations, which is beneficial in machine learning contexts. 6) Because our algorithm is derived from a variational framework that unifies both Euclidean and Stiefel variables, the same hyperparameters can be used for both kinds of parameters; see Sec. 3, and note that this difference from previous milestones (e.g., Li et al. (2020)) significantly reduces tuning effort. 7) Our algorithm works for a range of Riemannian metrics, allowing extra flexibility in choosing a suitable geometry to optimize performance for a specific problem. Selected experimental tests of our optimizer (due to space) are: (1) We consider the simple problem of leading eigenvalues, which is practically important in the data sciences.
This systematically investigates algorithmic performance under different parameters. (2) We show that the elegant idea of approximating optimal transport distance in high dimensions via a good low-dimensional projection (Paty and Cuturi, 2019; Lin et al., 2020) can be made even more efficacious by our optimizer. (3) We note that the Vision Transformer (ViT) can be further improved by constraining attention heads to be orthogonal; more precisely, consider training ViT from scratch. We discover that 1) requiring each head to be orthogonal within itself improves both training and testing accuracies the most. An important recent work by Zhang et al. (2021) applied orthogonality to transformers and demonstrated improved performance on NLP tasks. It concatenates each of the W_i^Q, W_i^K, W_i^V matrices in attention across all heads, and applies an orthogonal constraint to each of the three via a regularizer. This makes each head (approximately) orthogonal, not only within itself, but also to other heads. An orthogonal constraint is also applied, via a regularizer, to each weight matrix of the feed-forward layers in their case. With our Stiefel optimizer, which is not restricted to square matrices, we can now make each head exactly and only orthogonal within itself, which leads to further improvements, at least in CV tasks. Meanwhile, 2) having orthogonality both within and across heads is found less effective than 1), but it is still better than requiring no orthogonality (i.e., vanilla ViT). No orthogonality on feed-forward layers was used in either 1) or 2). In addition, 3) to achieve these improvements, our Stiefel optimizer needs to be used; methods that do not have momentum or do not exactly preserve structure (e.g., regularizer-based) are seen to not fully exploit the benefit of orthogonality. | MOMENTUM STIEFEL OPTIMIZER, WITH APPLICATIONS TO SUITABLY-ORTHOGONAL ATTENTION, AND OPTIMAL TRANSPORT |
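For context on the constraint discussed above: the Stiefel manifold St(n, m) is the set of n×m matrices X with XᵀX = I_m. A generic way to map an arbitrary full-rank matrix onto it is a QR-based retraction. This is a standard textbook illustration, not the paper's momentum-preserving scheme (which specifically avoids such extra operations for the momentum variable):

```python
import numpy as np

def qr_retract(W):
    # Map an arbitrary n x m matrix (n >= m, full column rank) onto the
    # Stiefel manifold St(n, m) = {X : X^T X = I_m} via the thin QR factorization.
    Q, R = np.linalg.qr(W)
    # Fix column signs so the map is deterministic (equivalent to forcing
    # R to have a positive diagonal); Q stays orthonormal.
    signs = np.sign(np.diag(R))
    signs[signs == 0] = 1.0
    return Q * signs

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 3))
X = qr_retract(W)
print(np.allclose(X.T @ X, np.eye(3)))  # True: columns are orthonormal
```

Regularizer-based approaches instead penalize ‖XᵀX − I‖ and only enforce this constraint approximately, which is the gap the abstract's point 3) highlights.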
d256826892 | In deep learning, transferring information from a pretrained network to a downstream task by fine-tuning has many benefits. The choice of task head plays an important role in fine-tuning, as the pretrained and downstream tasks are usually different. Although there exist many different designs for fine-tuning, a full understanding of when and why these algorithms work has been elusive. We analyze how the choice of task head controls feature adaptation and hence influences the downstream performance. By decomposing the learning dynamics of adaptation, we find that the key aspect is the training accuracy and loss at the beginning of fine-tuning, which determines the "energy" available for the feature's adaptation. We identify a significant trend in the effect of changes in this initial energy on the resulting features after fine-tuning. Specifically, as the energy increases, the Euclidean and cosine distances between the resulting and original features increase, while their dot products (and the resulting features' norm) first increase then decrease. Inspired by this, we give several practical principles that lead to better downstream performance. We analytically prove this trend in an overparameterized linear setting, and verify its applicability to different experimental settings. | Published as a conference paper at ICLR 2023 HOW TO PREPARE YOUR TASK HEAD FOR FINETUNING |
d226282381 | We show when maximizing a properly defined f-divergence measure with respect to a classifier's predictions and the supervised labels is robust with label noise. Leveraging its variational form, we derive a nice decoupling property for a family of f-divergence measures when label noise presents, where the divergence is shown to be a linear combination of the variational difference defined on the clean distribution and a bias term introduced due to the noise. The above derivation helps us analyze the robustness of different f-divergence functions. With established robustness, this family of f-divergence functions arises as useful metrics for the problem of learning with noisy labels, which do not require the specification of the labels' noise rate. When they are possibly not robust, we propose fixes to make them so. In addition to the analytical results, we present thorough experimental evidence. Our code is available at https: Using this result, we analyze under which conditions maximizing an f-divergence measure would be robust to label noise. In particular, we demonstrate strong robustness results for Total Variation divergence, and identify conditions under which several other divergences, including Jensen-Shannon divergence and Pearson χ² divergence, are robust. The resultant f-divergence functions offer ways to learn with noisy labels without estimating the noise parameters. As mentioned above, this distinguishes our solutions from a major line of previous studies that would require such estimates. When the f-divergence functions are possibly not robust with label noise, our analysis also offers a new way to perform "loss correction". We'd like to emphasize that instead of offering one method/loss/measure, our results effectively offer a family of functions that can be used to perform this noisy training task.
Our contributions are summarized as follows: • We show a certain set of f-divergence measures that are robust with label noise (some under certain conditions). The corresponding f-divergence functions provide the community with robust learning measures that do not require knowledge of the noise rates. | Published as a conference paper at ICLR 2021 WHEN OPTIMIZING f-DIVERGENCE IS ROBUST WITH LABEL NOISE |
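Among the f-divergences discussed in the row above, Total Variation carries the strongest robustness result. Its discrete form is simple to compute; this is a generic sketch, not the paper's released code:

```python
import numpy as np

def total_variation(p, q):
    # TV(p, q) = 0.5 * sum_i |p_i - q_i| for discrete distributions p, q
    # over the same support. TV is the f-divergence with f(t) = 0.5 * |t - 1|.
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    return 0.5 * np.abs(p - q).sum()

print(total_variation([0.5, 0.5], [0.5, 0.5]))  # identical distributions -> 0.0
print(total_variation([1.0, 0.0], [0.0, 1.0]))  # disjoint supports -> 1.0 (maximal)
```

TV is bounded in [0, 1], which is one reason bounded divergences like it tend to behave better under label noise than unbounded ones.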