Column schema:
query: string, lengths 273 to 149k (paper abstract)
pos: string, lengths 18 to 667 (one-sentence summary)
idx: int64, 0 to 1.99k (example index)
task_name: string, 1 value ("scitldr")
This paper explores the scenarios under which an attacker can claim that ‘Noise and access to the softmax layer of the model is all you need’ to steal the weights of a convolutional neural network whose architecture is already known. We were able to achieve 96% test accuracy using the stolen MNIST model and 82% accurac...
Input only noise, glean the softmax outputs, steal the weights
1,700
scitldr
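The attack summarized in this entry reduces to three steps: query the victim with pure noise, record its softmax outputs, and train a copy of the known architecture to match them. A minimal pure-Python sketch, using a hypothetical 2-class linear-softmax "victim" in place of the paper's CNN (all models and numbers here are illustrative, not from the paper):

```python
import math
import random

random.seed(0)

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def predict(W, x):
    return softmax([sum(w * xi for w, xi in zip(row, x)) for row in W])

# Hypothetical victim: a fixed linear-softmax model standing in for the
# paper's CNN (the attacker is assumed to know the architecture).
W_victim = [[0.8, -0.5], [-0.3, 0.9]]

# Step 1: query the victim with pure noise and record its softmax outputs.
queries = [[random.gauss(0, 1), random.gauss(0, 1)] for _ in range(500)]
soft_labels = [predict(W_victim, x) for x in queries]

# Step 2: train a student of the same architecture to match the soft labels
# (full-batch gradient descent on cross-entropy against the victim's outputs).
W = [[0.0, 0.0], [0.0, 0.0]]
lr = 0.5
for _ in range(300):
    grad = [[0.0, 0.0], [0.0, 0.0]]
    for x, p in zip(queries, soft_labels):
        q = predict(W, x)
        for k in range(2):
            for j in range(2):
                grad[k][j] += (q[k] - p[k]) * x[j]
    for k in range(2):
        for j in range(2):
            W[k][j] -= lr * grad[k][j] / len(queries)

# Step 3: the stolen model should agree with the victim on fresh inputs.
holdout = [[random.gauss(0, 1), random.gauss(0, 1)] for _ in range(200)]
agree = sum(
    predict(W, x).index(max(predict(W, x)))
    == predict(W_victim, x).index(max(predict(W_victim, x)))
    for x in holdout
) / len(holdout)
print(agree)
```

Because the logits are shift-invariant, the student's weights need not equal the victim's exactly; only its decision boundary has to match, which is what the agreement check measures.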
We propose a new form of an autoencoding model which incorporates the best properties of variational autoencoders (VAE) and generative adversarial networks (GAN). It is known that GAN can produce very realistic samples while VAE does not suffer from the mode collapsing problem. Our model optimizes λ-Jeffreys divergence bet...
We propose a new form of an autoencoding model which incorporates the best properties of variational autoencoders (VAE) and generative adversarial networks (GAN)
1,701
scitldr
Learning-to-learn or meta-learning leverages data-driven inductive bias to increase the efficiency of learning on a novel task. This approach encounters difficulty when transfer is not mutually beneficial, for instance, when tasks are sufficiently dissimilar or change over time. Here, we use the connection between grad...
We use the connection between gradient-based meta-learning and hierarchical Bayes to learn a mixture of meta-learners that is appropriate for a heterogeneous and evolving task distribution.
1,702
scitldr
We introduce a new routing algorithm for capsule networks, in which a child capsule is routed to a parent based only on agreement between the parent's state and the child's vote. Unlike previously proposed routing algorithms, the parent's ability to reconstruct the child is not explicitly taken into account to update t...
We present a new routing method for Capsule networks, and it performs on par with ResNet-18 on CIFAR-10/CIFAR-100.
1,703
scitldr
We introduce Doc2Dial, an end-to-end framework for generating conversational data grounded in business documents via crowdsourcing. Such data can be used to train automated dialogue agents performing customer care tasks for the enterprises or organizations. In particular, the framework takes the documents as input and ...
We introduce Doc2Dial, an end-to-end framework for generating conversational data grounded in business documents via crowdsourcing, to train automated dialogue agents
1,704
scitldr
Capturing high-level structure in audio waveforms is challenging because a single second of audio spans tens of thousands of timesteps. While long-range dependencies are difficult to model directly in the time domain, we show that they can be more tractably modelled in two-dimensional time-frequency representations suc...
We introduce an autoregressive generative model for spectrograms and demonstrate applications to speech and music generation
1,705
scitldr
Deep convolutional network architectures are often assumed to guarantee generalization for small image translations and deformations. In this paper we show that modern CNNs (VGG16, ResNet50, and InceptionResNetV2) can drastically change their output when an image is translated in the image plane by a few pixels, and th...
Modern deep CNNs are not invariant to translations, scalings and other realistic image transformations, and this lack of invariance is related to the subsampling operation and the biases contained in image datasets.
1,706
scitldr
We present a real-time method for synthesizing highly complex human motions using a novel training regime we call the auto-conditioned Recurrent Neural Network (acRNN). Recently, researchers have attempted to synthesize new motion by using autoregressive techniques, but existing methods tend to freeze or diverge after ...
Synthesize complex and extended human motions using an auto-conditioned LSTM network
1,707
scitldr
Saliency methods attempt to explain a deep net's decision by assigning a score to each feature/pixel in the input, often doing this credit-assignment via the gradient of the output with respect to input. Recently Adebayo et al. questioned the validity of many of these methods since they do not pass simp...
We devise a mechanism called competition among pixels that allows (approximately) complete saliency methods to pass the sanity checks.
1,708
scitldr
Classification systems typically act in isolation, meaning they are required to implicitly memorize the characteristics of all candidate classes in order to classify. The cost of this is increased memory usage and poor sample efficiency. We propose a model which instead verifies using reference images during the classi...
Image classification via iteratively querying for a reference image from a candidate class with an RNN and using a CNN to compare it to the input image
1,709
scitldr
To reduce memory footprint and run-time latency, techniques such as neural network pruning and binarization have been explored separately. However, it is unclear how to combine the best of the two worlds to get extremely small and efficient models. In this paper, we, for the first time, define the filter-level prunin...
We define the filter-level pruning problem for binary neural networks for the first time and propose a method to solve it.
1,710
scitldr
Wide adoption of complex RNN based models is hindered by their inference performance, cost and memory requirements. To address this issue, we develop AntMan, combining structured sparsity with low-rank decomposition synergistically, to reduce model computation, size and execution time of RNNs while attaining desired ac...
Reducing computation and memory complexity of RNN models by up to 100x using sparse low-rank compression modules, trained via knowledge distillation.
1,711
scitldr
Graph-structured data such as social networks, functional brain networks, gene regulatory networks, and communications networks have brought interest in generalizing deep learning techniques to graph domains. In this paper, we are interested in designing neural networks for graphs with variable length in order to solve le...
We compare graph RNNs and graph ConvNets, and we consider the most generic class of graph ConvNets with residuality.
1,712
scitldr
Complex-valued neural networks are not a new concept; however, the use of real values has often been favoured over complex values due to difficulties in training and the accuracy of results. Existing literature ignores the number of parameters used. We compared complex- and real-valued neural networks using five activation functio...
Comparison of complex- and real-valued multi-layer perceptron with respect to the number of real-valued parameters.
1,713
scitldr
The rise of graph-structured data such as social networks, regulatory networks, citation graphs, and functional brain networks, in combination with resounding success of deep learning in various applications, has brought the interest in generalizing deep learning models to non-Euclidean domains. In this paper, we intro...
A spectral graph convolutional neural network with spectral zoom properties.
1,714
scitldr
We present FasterSeg, an automatically designed semantic segmentation network with not only state-of-the-art performance but also faster speed than current methods. Utilizing neural architecture search (NAS), FasterSeg is discovered from a novel and broader search space integrating multi-resolution branches, that has b...
We present a real-time segmentation model automatically discovered by a multi-scale NAS framework, running 30% faster than state-of-the-art models.
1,715
scitldr
This paper introduces R2D3, an agent that makes efficient use of demonstrations to solve hard exploration problems in partially observable environments with highly variable initial conditions. We also introduce a suite of eight tasks that combine these three properties, and show that R2D3 can solve several of the tasks...
We introduce R2D3, an agent that makes efficient use of demonstrations to solve hard exploration problems in partially observable environments with highly variable initial conditions.
1,716
scitldr
We investigate the learned dynamical landscape of a recurrent neural network solving a simple task requiring the interaction of two memory mechanisms: long- and short-term. Our results show that while long-term memory is implemented by asymptotic attractors, sequential recall is additionally implemented by oscillatory dyna...
We investigate how a recurrent neural network successfully learns a task combining long-term memory and sequential recall.
1,717
scitldr
The problem of exploration in reinforcement learning is well-understood in the tabular case and many sample-efficient algorithms are known. Nevertheless, it is often unclear how the algorithms in the tabular setting can be extended to tasks with large state-spaces where generalization is required. Recent promising deve...
We propose using the norm of the successor representation as an exploration bonus in reinforcement learning. In hard exploration Atari games, our deep RL algorithm matches the performance of recent pseudo-count-based methods.
1,718
scitldr
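The exploration bonus in the entry above can be illustrated in the tabular case: learn the successor representation (SR) with TD updates and reward states whose SR row has a small norm, since those are the rarely visited ones. The chain walk and constants below are illustrative, not from the paper:

```python
import math

n_states, gamma, alpha = 5, 0.9, 0.1
# M[s] approximates the expected discounted future occupancy of every
# state when starting from s (the successor representation).
M = [[0.0] * n_states for _ in range(n_states)]

def sr_update(s, s_next):
    # TD update: M[s] <- M[s] + alpha * (1_s + gamma * M[s_next] - M[s])
    for j in range(n_states):
        target = (1.0 if j == s else 0.0) + gamma * M[s_next][j]
        M[s][j] += alpha * (target - M[s][j])

def bonus(s):
    # Rarely visited states have a small ||M[s]||, hence a large bonus.
    norm = math.sqrt(sum(v * v for v in M[s]))
    return 1.0 / (norm + 1e-8)

# Walk back and forth near state 0 of a chain; state 4 stays novel.
trajectory = [0, 1, 0, 1, 2, 1, 0, 1, 2, 1, 0]
for s, s_next in zip(trajectory, trajectory[1:]):
    sr_update(s, s_next)

print(bonus(0) < bonus(4))  # visited states earn a smaller bonus
```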
Deep generative modeling using flows has gained popularity owing to tractable exact log-likelihood estimation with an efficient training and synthesis process. However, flow models suffer from the challenge of having a high-dimensional latent space, the same in dimension as the input space. An effective solution to the abov...
Data-dependent factorization of dimensions in a multi-scale architecture based on contribution to the total log-likelihood
1,719
scitldr
Understanding the flow of information in Deep Neural Networks (DNNs) is a challenging problem that has gained increasing attention over the last few years. While several methods have been proposed to explain network predictions, there have been only a few attempts to compare them from a theoretical perspective. What is m...
Four existing backpropagation-based attribution methods are fundamentally similar. How to assess it?
1,720
scitldr
We study SGD and Adam for estimating a rank one signal planted in matrix or tensor noise. The extreme simplicity of the problem setup allows us to isolate the effects of various factors: signal to noise ratio, density of critical points, stochasticity and initialization. We observe a surprising phenomenon: Adam seems t...
SGD and Adam under single spiked model for tensor PCA
1,721
scitldr
This paper presents a method to explain the knowledge encoded in a convolutional neural network (CNN) quantitatively and semantically. How to analyze the specific rationale of each prediction made by the CNN presents one of the key issues of understanding neural networks, but it is also of significant practical value in c...
This paper presents a method to explain the knowledge encoded in a convolutional neural network (CNN) quantitatively and semantically.
1,722
scitldr
We present methodology for using dynamic evaluation to improve neural sequence models. Models are adapted to recent history via a gradient descent based mechanism, causing them to assign higher probabilities to re-occurring sequential patterns. Dynamic evaluation outperforms existing adaptation approaches in our compar...
Paper presents dynamic evaluation methodology for adaptive sequence modelling
1,723
scitldr
We propose a novel, projection based way to incorporate the conditional information into the discriminator of GANs that respects the role of the conditional information in the underlying probabilistic model. This approach is in contrast with most frameworks of conditional GANs used in application today, which use the ...
We propose a novel, projection based way to incorporate the conditional information into the discriminator of GANs that respects the role of the conditional information in the underlying probabilistic model.
1,724
scitldr
Learning theory tells us that more data is better when minimizing the generalization error of identically distributed training and test sets. However, when training and test distribution differ, this distribution shift can have a significant effect. With a novel perspective on function transfer learning, we are able to...
The Frechet Distance between train and test distribution correlates with the change in performance for functions that are not invariant to the shift.
1,725
scitldr
We introduce a new procedural dynamic system that can generate a variety of shapes that often appear as curves, but technically, the figures are plots of many points. We name them spiroplots and show how this new system relates to other procedures or processes that generate figures. Spiroplots are an extremely simple p...
A new, very simple dynamic system is introduced that generates pretty patterns; properties are proved and possibilities are explored
1,726
scitldr
Unsupervised image-to-image translation aims to learn a mapping between several visual domains by using unpaired training data. Recent studies have shown remarkable success in image-to-image translation for multiple domains but they suffer from two main limitations: they are either built from several two-domain mappin...
GMM-UNIT is an image-to-image translation model that maps an image to multiple domains in a stochastic fashion.
1,727
scitldr
We present Compositional Attention Networks, a novel fully differentiable neural network architecture, designed to facilitate explicit and expressive reasoning. While many types of neural networks are effective at learning and generalizing from massive quantities of data, this model moves away from monolithic black-box...
We present a novel architecture, based on dynamic memory, attention and composition for the task of machine reasoning.
1,728
scitldr
Variational Auto-Encoders (VAEs) are designed to capture compressible information about a dataset. As a consequence the information stored in the latent space is seldom sufficient to reconstruct a particular image. To help understand the type of information stored in the latent space we train a GAN-style decoder constr...
To understand the information stored in the latent space, we train a GAN-style decoder constrained to produce images that the VAE encoder will map to the same region of latent space.
1,729
scitldr
Human brain function, as measured by functional magnetic resonance imaging (fMRI), exhibits a rich diversity. In response, understanding the individual variability of brain function and its association with behavior has become one of the major concerns in modern cognitive neuroscience. Our work is motivated by the view ...
Two novel GANs are constructed to generate high-quality 3D fMRI brain images and synthetic brain images greatly help to improve downstream classification tasks.
1,730
scitldr
Transferring representations from large-scale supervised tasks to downstream tasks has shown outstanding results in Machine Learning in both Computer Vision and Natural Language Processing (NLP). One particular example can be sequence-to-sequence models for Machine Translation (Neural Machine Translation - NMT). It is because...
Zero-shot cross-lingual transfer by using multilingual neural machine translation
1,731
scitldr
This paper addresses the scalability challenge of architecture search by formulating the task in a differentiable manner. Unlike conventional approaches of applying evolution or reinforcement learning over a discrete and non-differentiable search space, our method is based on the continuous relaxation of the architectu...
We propose a differentiable architecture search algorithm for both convolutional and recurrent networks, achieving competitive performance with the state of the art using orders of magnitude less computation resources.
1,732
scitldr
Despite considerable advances in neural language modeling, it remains an open question what the best decoding strategy is for text generation from a language model (e.g. to generate a story). The counter-intuitive empirical observation is that even though the use of likelihood as training objective leads to high qualit...
Current language generation systems either aim for high likelihood and devolve into generic repetition or miscalibrate their stochasticity—we provide evidence of both and propose a solution: Nucleus Sampling.
1,733
scitldr
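Nucleus Sampling, proposed in the entry above, samples from the smallest set of top-ranked tokens whose cumulative probability exceeds a threshold p, after renormalizing within that set. A minimal sketch (the vocabulary distribution is made up for illustration):

```python
import random

def nucleus_sample(probs, p, rng=random):
    """Top-p sampling: keep the smallest prefix of the sorted distribution
    whose cumulative mass reaches p, renormalize, and sample from it."""
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    nucleus, cum = [], 0.0
    for i in order:
        nucleus.append(i)
        cum += probs[i]
        if cum >= p:
            break
    total = sum(probs[i] for i in nucleus)
    r = rng.random() * total
    for i in nucleus:
        r -= probs[i]
        if r <= 0:
            return i
    return nucleus[-1]

# A peaked distribution: with p = 0.9 the low-probability tail is cut off.
probs = [0.5, 0.3, 0.1, 0.05, 0.03, 0.02]
rng = random.Random(0)
draws = [nucleus_sample(probs, 0.9, rng) for _ in range(1000)]
print(sorted(set(draws)))  # indices outside the nucleus never appear
```

Unlike fixed top-k truncation, the size of the sampled set adapts to how peaked the distribution is, which is the mechanism the summary credits for avoiding both generic repetition and miscalibrated randomness.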
Until very recently, inspired by a mass of research on adversarial examples for computer vision, there has been a growing interest in designing adversarial attacks for Natural Language Processing (NLP) tasks, followed by very few works on adversarial defenses for NLP. To our knowledge, there exists no defense meth...
The first word-level text adversarial defense method, and an improved genetic-based attack method against synonym-substitution-based attacks.
1,734
scitldr
A major drawback of backpropagation through time (BPTT) is the difficulty of learning long-term dependencies, coming from having to propagate credit information backwards through every single step of the forward computation. This makes BPTT both computationally impractical and biologically implausible. For this reason,...
Towards Efficient Credit Assignment in Recurrent Networks without Backpropagation Through Time
1,735
scitldr
We propose a novel adversarial learning framework in this work. Existing adversarial learning methods involve two separate networks, i.e., the structured prediction models and the discriminative models, in the training. The information captured by discriminative models complements that in the structured prediction mode...
We propose a novel adversarial learning framework for structured prediction, in which discriminative models can be used to refine structured prediction models at the inference stage.
1,736
scitldr
We propose RaPP, a new methodology for novelty detection by utilizing hidden space activation values obtained from a deep autoencoder. Precisely, RaPP compares input and its autoencoder reconstruction not only in the input space but also in the hidden spaces. We show that if we feed a reconstructed input to the same au...
A new methodology for novelty detection by utilizing hidden space activation values obtained from a deep autoencoder.
1,737
scitldr
Learning preferences of users over plan traces can be a challenging task given a large number of features and limited queries that we can ask a single user. Additionally, the preference function itself can be quite convoluted and non-linear. Our approach uses feature-directed active learning to gather the necessary inf...
Learning preferences over plan traces using active learning.
1,738
scitldr
Reinforcement learning is a promising framework for solving control problems, but its use in practical situations is hampered by the fact that reward functions are often difficult to engineer. Specifying goals and tasks for autonomous machines, such as robots, is a significant challenge: conventionally, reward function...
We ground language commands in a high-dimensional visual environment by learning language-conditioned rewards using inverse reinforcement learning.
1,739
scitldr
Many biological learning systems such as the mushroom body, hippocampus, and cerebellum are built from sparsely connected networks of neurons. For a new understanding of such networks, we study the function spaces induced by sparse random features and characterize what functions may and may not be learned. A network wi...
We advocate for random features as a theory of biological neural networks, focusing on sparsely connected networks
1,740
scitldr
We propose a new application of embedding techniques to problem retrieval in adaptive tutoring. The objective is to retrieve problems similar in mathematical concepts. There are two challenges: First, like sentences, problems helpful to tutoring are never exactly the same in terms of the underlying concepts. Instead, g...
We propose the Prob2Vec method for problem embedding used in a personalized e-learning tool in addition to a data level classification method, called negative pre-training, for cases where the training data set is imbalanced.
1,741
scitldr
We introduce a new deep convolutional neural network, CrescendoNet, by stacking simple building blocks without residual connections. Each Crescendo block contains independent convolution paths with increased depths. The numbers of convolution layers and parameters are only increased linearly in Crescendo blocks. In exp...
We introduce CrescendoNet, a deep CNN architecture by stacking simple building blocks without residual connections.
1,742
scitldr
Gaussian processes are the leading class of distributions on random functions, but they suffer from well known issues including difficulty scaling and inflexibility with respect to certain shape constraints (such as nonnegativity). Here we propose Deep Random Splines, a flexible class of random functions obtained by tr...
We combine splines with neural networks to obtain a novel distribution over functions and use it to model intensity functions of point processes.
1,743
scitldr
The recent development of Natural Language Processing (NLP) has achieved great success using large pre-trained models with hundreds of millions of parameters. However, these models suffer from the heavy model size and high latency such that we cannot directly deploy them to resource-limited mobile devices. In this pape...
We develop a task-agnostically compressed BERT, which is 4.3x smaller and 4.0x faster than BERT-BASE while achieving competitive performance on GLUE and SQuAD.
1,744
scitldr
The importance weighted autoencoder (IWAE) is a popular variational-inference method which achieves a tighter evidence bound (and hence a lower bias) than standard variational autoencoders by optimising a multi-sample objective, i.e. an objective that is expressible as an integral over $K > 1$ Monte Carlo samples. Unfo...
We show that most variants of importance-weighted autoencoders can be derived in a more principled manner as special cases of adaptive importance-sampling approaches like the reweighted-wake sleep algorithm.
1,745
scitldr
As the size and complexity of models and datasets grow, so does the need for communication-efficient variants of stochastic gradient descent that can be deployed on clusters to perform model fitting in parallel. Prior work describes two variants of data-parallel SGD that quantize and encode gradients to lessen communication costs....
NUQSGD closes the gap between the theoretical guarantees of QSGD and the empirical performance of QSGDinf.
1,746
scitldr
The impressive lifelong learning in animal brains is primarily enabled by plastic changes in synaptic connectivity. Importantly, these changes are not passive, but are actively controlled by neuromodulation, which is itself under the control of the brain. The resulting self-modifying abilities of the brain play an important ...
Neural networks can be trained to modify their own connectivity, improving their online learning performance on challenging tasks.
1,747
scitldr
Deep learning has made remarkable achievements in many fields. However, learning the parameters of neural networks usually demands a large amount of labeled data. The algorithms of deep learning, therefore, encounter difficulties when applied to supervised learning where only a small amount of data is available. This specific tas...
A simplex-based geometric method is proposed to cope with few-shot learning problems.
1,748
scitldr
Reservoir computing is a powerful tool to explain how the brain learns temporal sequences, such as movements, but existing learning schemes are either biologically implausible or too inefficient to explain animal performance. We show that a network can learn complicated sequences with a reward-modulated Hebbian learnin...
We show that a working memory input to a reservoir network makes a local reward-modulated Hebbian rule perform as well as recursive least-squares (aka FORCE)
1,749
scitldr
Convolutional architectures have recently been shown to be competitive on many sequence modelling tasks when compared to the de-facto standard of recurrent neural networks (RNNs) while providing computational and modelling advantages due to inherent parallelism. However, currently, there remains a performance gap to mo...
We combine the computational advantages of temporal convolutional architectures with the expressiveness of stochastic latent variables.
1,750
scitldr
A weak contraction mapping is a self-mapping whose range is always a subset of its domain, and which admits a unique fixed point. The iteration of a weak contraction mapping is a Cauchy sequence that yields the unique fixed point. A gradient-free optimization method as an application of weak contraction mapping is propo...
A gradient-free method is proposed for non-convex optimization problems
1,751
scitldr
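The fixed-point mechanism in the entry above can be illustrated with a classic contraction, g(x) = cos(x); note this is a standard textbook example of fixed-point iteration, not the paper's optimization method:

```python
import math

# Iterating a contraction produces a Cauchy sequence converging to the
# unique fixed point x* = cos(x*), with no gradients required.
x = 0.0
for _ in range(100):
    x = math.cos(x)

print(x)  # ~0.739085 (the Dottie number)
```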
Over the last decade, two competing control strategies have emerged for solving complex control tasks with high efficacy. Model-based control algorithms, such as model-predictive control (MPC) and trajectory optimization, peer into the gradients of underlying system dynamics in order to solve control tasks with high sa...
We propose a novel method that leverages the gradients from differentiable simulators to improve the performance of RL for robotics control
1,752
scitldr
We propose Bayesian hypernetworks: a framework for approximate Bayesian inference in neural networks. A Bayesian hypernetwork, h, is a neural network which learns to transform a simple noise distribution, p(e) = N(0,I), to a distribution q(t):= q(h(e)) over the parameters t of another neural network (the ``primary netw...
We propose Bayesian hypernetworks: a framework for approximate Bayesian inference in neural networks.
1,753
scitldr
Evolutionary-based optimization approaches have recently shown promising results in domains such as Atari and robot locomotion but less so in solving 3D tasks directly from pixels. This paper presents a method called Deep Innovation Protection (DIP) that allows training complex world models end-to-end for such 3D environments....
Deep Innovation Protection allows evolving complex world models end-to-end for 3D tasks.
1,754
scitldr
Adversarial training is one of the most popular ways to learn robust models but is usually attack-dependent and time costly. In this paper, we propose the MACER algorithm, which learns robust models without using adversarial training but performs better than all existing provable l2-defenses. Recent work shows that ran...
We propose MACER: a provable defense algorithm that trains robust models by maximizing the certified radius. It does not use adversarial training but performs better than all existing provable l2-defenses.
1,755
scitldr
Actor-critic methods solve reinforcement learning problems by updating a parameterized policy known as an actor in a direction that increases an estimate of the expected return known as a critic. However, existing actor-critic methods only use values or gradients of the critic to update the policy parameter. In this pa...
This paper proposes a novel actor-critic method that uses Hessians of a critic to update an actor.
1,756
scitldr
Deep Infomax~(DIM) is an unsupervised representation learning framework by maximizing the mutual information between the inputs and the outputs of an encoder, while probabilistic constraints are imposed on the outputs. In this paper, we propose Supervised Deep InfoMax~(SDIM), which introduces supervised probabilistic c...
Scale generative classifiers on complex datasets, and evaluate their effectiveness at rejecting illegal inputs, including out-of-distribution samples and adversarial examples.
1,757
scitldr
In this work, we describe a set of rules for the design and initialization of well-conditioned neural networks, guided by the goal of naturally balancing the diagonal blocks of the Hessian at the start of training. We show how our measure of conditioning of a block relates to another natural measure of conditi...
A theory for initialization and scaling of ReLU neural network layers
1,758
scitldr
We study the problem of model extraction in natural language processing, in which an adversary with only query access to a victim model attempts to reconstruct a local copy of that model. Assuming that both the adversary and victim model fine-tune a large pretrained language model such as BERT, we show that the advers...
Outputs of modern NLP APIs on nonsensical text provide strong signals about model internals, allowing adversaries to steal the APIs.
1,759
scitldr
We propose SEARNN, a novel training algorithm for recurrent neural networks (RNNs) inspired by the "learning to search" (L2S) approach to structured prediction. RNNs have been widely successful in structured prediction applications such as machine translation or parsing, and are commonly trained using maximum likelihoo...
We introduce SeaRNN, a novel algorithm for RNN training, inspired by the learning to search approach to structured prediction, in order to avoid the limitations of MLE training.
1,760
scitldr
Deep reinforcement learning has demonstrated increasing capabilities for continuous control problems, including agents that can move with skill and agility through their environment. An open problem in this setting is that of developing good strategies for integrating or merging policies for multiple skills, where each...
A continual learning method that uses distillation to combine expert policies and transfer learning to accelerate learning new skills.
1,761
scitldr
Deep Reinforcement Learning (DRL) has led to many recent breakthroughs on complex control tasks, such as defeating the best human player in the game of Go. However, decisions made by the DRL agent are not explainable, hindering its applicability in safety-critical settings. Viper, a recently proposed technique, constru...
Explainable reinforcement learning model using novel combination of mixture of experts with non-differentiable decision tree experts.
1,762
scitldr
We consider the problem of unconstrained minimization of a smooth objective function in $\mathbb{R}^d$ in the setting where only function evaluations are possible. We propose and analyze a stochastic zeroth-order method with heavy ball momentum. In particular, we propose SMTP, a momentum version of the stochastic three-poin...
We develop and analyze a new derivative free optimization algorithm with momentum and importance sampling with applications to continuous control.
1,763
scitldr
Using class labels to represent class similarity is a typical approach to training deep hashing systems for retrieval; samples from the same or different classes take binary 1 or 0 similarity values. This similarity does not model the full rich knowledge of semantic relations that may be present between data points. In...
We propose a new method for training deep hashing for image retrieval using only a relational distance metric between samples
1,764
scitldr
In this paper, we propose a novel approach to improve a given surface mapping through local refinement. The approach receives an established mapping between two surfaces and follows four phases: (i) inspection of the mapping and creation of a sparse set of landmarks in mismatching regions; (ii) segmentation with a low-...
We propose a novel approach to improve a given cross-surface mapping through local refinement with a new iterative method to deform the mesh in order to meet user constraints.
1,765
scitldr
Understanding the groundbreaking performance of Deep Neural Networks is one of the greatest challenges to the scientific community today. In this work, we introduce an information theoretic viewpoint on the behavior of deep networks optimization processes and their generalization abilities. By studying the Information ...
Introduce an information theoretic viewpoint on the behavior of deep networks optimization processes and their generalization abilities
1,766
scitldr
We propose and study a method for learning interpretable representations for the task of regression. Features are represented as networks of multi-type expression trees comprised of activation functions common in neural networks in addition to other elementary functions. Differentiable features are trained via gradient...
Representing the network architecture as a set of syntax trees and optimizing their structure leads to accurate and concise regression models.
1,767
scitldr
Most distributed machine learning (ML) systems store a copy of the model parameters locally on each machine to minimize network communication. In practice, in order to reduce synchronization waiting time, these copies of the model are not necessarily updated in lock-step, and can become stale. Despite much development ...
Empirical and theoretical study of the effects of staleness in non-synchronous execution on machine learning algorithms.
1,768
scitldr
System identification is the process of building a mathematical model of an unknown system from measurements of its inputs and outputs. It is a key step for model-based control, estimator design, and output prediction. This work presents an algorithm for non-linear offline system identification from partial observation...
This work presents a scalable algorithm for non-linear offline system identification from partial observations.
1,769
scitldr
Various gradient compression schemes have been proposed to mitigate the communication cost in distributed training of large scale machine learning models. Sign-based methods, such as signSGD, have recently been gaining popularity because of their simple compression rule and connection to adaptive gradient methods, lik...
General analysis of sign-based methods (e.g. signSGD) for non-convex optimization, built on intuitive bounds on success probabilities.
1,770
scitldr
Off-policy learning, the task of evaluating and improving policies using historic data collected from a logging policy, is important because on-policy evaluation is usually expensive and has adverse impacts. One of the major challenges of off-policy learning is to derive counterfactual estimators that also have low varia...
For off-policy learning with bandit feedback, we propose a new variance regularized counterfactual learning algorithm, which has both theoretical foundations and superior empirical performance.
1,771
scitldr
We outline new approaches to incorporate ideas from deep learning into wave-based least-squares imaging. The aim, and main contribution of this work, is the combination of handcrafted constraints with deep convolutional neural networks, as a way to harness their remarkable ease of generating natural images. The mathema...
We combine hard handcrafted constraints with a deep prior weak constraint to perform seismic imaging and reap information on the "posterior" distribution leveraging multiplicity in the data.
1,772
scitldr
When translating natural language questions into SQL queries to answer questions from a database, contemporary semantic parsing models struggle to generalize to unseen database schemas. The generalization challenge lies in (a) encoding the database relations in an accessible way for the semantic parser, and (b) modelin...
State of the art in complex text-to-SQL parsing by combining hard and soft relational reasoning in schema/question encoding.
1,773
scitldr
As our experience shows, humans can learn and deploy a myriad of different skills to tackle the situations they encounter daily. Neural networks, in contrast, have a fixed memory capacity that prevents them from learning more than a few sets of skills before starting to forget them. In this work, we make a step to brid...
We introduce a continual learning setup based on language modelling where no explicit task segmentation signal is given and propose a neural network model with growing long term memory to tackle it.
1,774
scitldr
We propose to tackle a time series regression problem by computing temporal evolution of a probability density function to provide a probabilistic forecast. A Recurrent Neural Network (RNN) based model is employed to learn a nonlinear operator for temporal evolution of a probability density function. We use a softmax l...
Proposed RNN-based algorithm to estimate predictive distribution in one- and multi-step forecasts in time series prediction problems
1,775
scitldr
In cognitive systems, the role of a working memory is crucial for visual reasoning and decision making. Tremendous progress has been made in understanding the mechanisms of the human/animal working memory, as well as in formulating different frameworks of artificial neural networks. In the case of humans, the visual wo...
LSTMs can more effectively model the working memory if they are learned using reinforcement learning, much like the dopamine system that modulates the memory in the prefrontal cortex
1,776
scitldr
Nonlinearity is crucial to the performance of a deep (neural) network (DN). To date there has been little progress understanding the menagerie of available nonlinearities, but recently progress has been made on understanding the role played by piecewise affine and convex nonlinearities like the ReLU and absolute va...
Reformulate deep network nonlinearities from a vector quantization perspective and bridge most known nonlinearities together.
1,777
scitldr
Engineered proteins offer the potential to solve many problems in biomedicine, energy, and materials science, but creating designs that succeed is difficult in practice. A significant aspect of this challenge is the complex coupling between protein sequence and 3D structure, and the task of finding a viable design is o...
We learn to conditionally generate protein sequences given structures with a model that captures sparse, long-range dependencies.
1,778
scitldr
We provide a novel perspective on the forward pass through a block of layers in a deep network. In particular, we show that a forward pass through a standard dropout layer followed by a linear layer and a non-linear activation is equivalent to optimizing a convex objective with a single iteration of a $\tau$-nice Proxi...
A framework that links deep network layers to stochastic optimization algorithms; can be used to improve model accuracy and inform network design.
1,779
scitldr
Deep networks run with low precision operations at inference time offer power and space advantages over high precision alternatives, but need to overcome the challenge of maintaining high accuracy as precision decreases. Here, we present a method for training such networks, Learned Step Size Quantization, that achieves...
A method for learning quantization configuration for low precision networks that achieves state of the art performance for quantized networks.
1,780
scitldr
There have been several studies recently showing that strong natural language understanding (NLU) models are prone to relying on unwanted dataset biases without learning the underlying task, resulting in models which fail to generalize to out-of-domain datasets, and are likely to perform poorly in real-world scenarios. We pr...
We propose several general debiasing strategies to address common biases seen in different datasets and obtain substantial improved out-of-domain performance in all settings.
1,781
scitldr
Reconstruction of few-view x-ray Computed Tomography (CT) data is a highly ill-posed problem. It is often used in applications that require low radiation dose in clinical CT, rapid industrial scanning, or fixed-gantry CT. Existing analytic or iterative algorithms generally produce poorly reconstructed images, severely ...
We present a CNN inference-based reconstruction algorithm to address extremely few-view CT.
1,782
scitldr
In open-domain dialogue intelligent agents should exhibit the use of knowledge, however there are few convincing demonstrations of this to date. The most popular sequence to sequence models typically “generate and hope” generic utterances that can be memorized in the weights of the model when mapping from input utteran...
We build knowledgeable conversational agents by conditioning on Wikipedia + a new supervised task.
1,783
scitldr
We formulate a new problem at the intersection of semi-supervised learning and contextual bandits, motivated by several applications including clinical trials and dialog systems. We demonstrate how contextual bandit and graph convolutional networks can be adjusted to the new problem formulation. We then take the best o...
Synthesis of GCN and LINUCB algorithms for online learning with missing feedback
1,784
scitldr
A collection of scientific papers is often accompanied by tags: keywords, topics, concepts etc., associated with each paper. Sometimes these tags are human-generated, sometimes they are machine-generated. We propose a simple measure of the consistency of the tagging of scientific papers: whether these tags are predicti...
A good tagger gives similar tags to a given paper and the papers it cites
1,785
scitldr
Recent research has intensively revealed the vulnerability of deep neural networks, especially for convolutional neural networks (CNNs) on the task of image recognition, through creating adversarial samples which "slightly" differ from legitimate samples. This vulnerability indicates that these powerful models are sen...
We propose a quantization-based method which regularizes a CNN's learned representations to be automatically aligned with a trainable concept matrix, hence effectively filtering out adversarial perturbations.
1,786
scitldr
Invariant and equivariant networks have been successfully used for learning images, sets, point clouds, and graphs. A basic challenge in developing such networks is finding the maximal collection of invariant and equivariant \emph{linear} layers. Although this question is answered for the first three examples (for popu...
The paper provides a full characterization of permutation invariant and equivariant linear layers for graph data.
1,787
scitldr
In reinforcement learning, we can learn a model of future observations and rewards, and use it to plan the agent's next actions. However, jointly modeling future observations can be computationally expensive or even intractable if the observations are high-dimensional (e.g. images). For this reason, previous works have...
Partial models do not have to generate the whole observation to remain causally correct in stochastic environments.
1,788
scitldr
In lifelong learning, the learner is presented with a sequence of tasks, incrementally building a data-driven prior which may be leveraged to speed up learning of a new task. In this work, we investigate the efficiency of current lifelong approaches, in terms of sample complexity, computational and memory cost. Towards...
An efficient lifelong learning algorithm that provides a better trade-off between accuracy and time/memory complexity compared to other algorithms.
1,789
scitldr
Reduced precision computation is one of the key areas addressing the widening 'compute gap', driven by an exponential growth in deep learning applications. In recent years, deep neural network training has largely migrated to 16-bit precision, with significant gains in performance and energy efficiency. However, attempts...
We demonstrated state-of-the-art training results using 8-bit floating point representation, across Resnet, GNMT, Transformer.
1,790
scitldr
Loss functions play a crucial role in deep metric learning thus a variety of them have been proposed. Some supervise the learning process by pairwise or tripletwise similarity constraints while others take the advantage of structured similarity information among multiple data points. In this work, we approach deep metr...
We propose instance cross entropy (ICE) which measures the difference between an estimated instance-level matching distribution and its ground-truth one.
1,791
scitldr
In model-based reinforcement learning, the agent interleaves between model learning and planning. These two components are inextricably intertwined. If the model is not able to provide sensible long-term prediction, the executed planner would exploit model flaws, which can yield catastrophic failures. This paper focuses...
Incorporating latent variables that encode future content into the model improves long-term prediction accuracy, which is critical for better planning in model-based RL.
1,792
scitldr
Detecting anomalies is of growing importance for various industrial applications and mission-critical infrastructures, including satellite systems. Although there have been several studies in detecting anomalies based on rule-based or machine learning-based approaches for satellite systems, a tensor-based decomposition...
Integrative Tensor-based Anomaly Detection (ITAD) framework for a satellite system.
1,793
scitldr
Many real world tasks exhibit rich structure that is repeated across different parts of the state space or in time. In this work we study the possibility of leveraging such repeated structure to speed up and regularize learning. We start from the KL regularized expected reward objective which introduces an additional c...
Limiting state information for the default policy can improve performance in a KL-regularized RL framework where both agent and default policy are optimized together.
1,794
scitldr
When an image classifier makes a prediction, which parts of the image are relevant and why? We can rephrase this question to ask: which parts of the image, if they were not seen by the classifier, would most change its decision? Producing an answer requires marginalizing over images that could have been seen but weren'...
We compute saliency by using a strong generative model to efficiently marginalize over plausible alternative inputs, revealing concentrated pixel areas that preserve label information.
1,795
scitldr
This paper presents the Variation Network (VarNet), a generative model providing means to manipulate the high-level attributes of a given input. The originality of our approach is that VarNet is not only capable of handling pre-defined attributes but can also learn the relevant attributes of the dataset by itself. Thes...
The Variation Network is a generative model able to learn high-level attributes without supervision that can then be used for controlled input manipulation.
1,796
scitldr
Despite alarm over the reliance of machine learning systems on so-called spurious patterns in training data, the term lacks coherent meaning in standard statistical frameworks. However, the language of causality offers clarity: spurious associations are those due to a common cause (confounding) vs direct or indirect ef...
Humans in the loop revise documents to accord with counterfactual labels; the resulting resource helps reduce reliance on spurious associations.
1,797
scitldr
Among multiple ways of interpreting a machine learning model, measuring the importance of a set of features tied to a prediction is probably one of the most intuitive ways to explain a model. In this paper, we establish the link between a set of features and a prediction with a new evaluation criterion, robustness analysi...
We propose a new objective measure for evaluating explanations based on the notion of adversarial robustness. The evaluation criterion further allows us to derive new explanations which capture pertinent features qualitatively and quantitatively.
1,798
scitldr
Generative adversarial networks (GANs) have been shown to provide an effective way to model complex distributions and have obtained impressive results on various challenging tasks. However, typical GANs require fully-observed data during training. In this paper, we present a GAN-based framework for learning from complex, high-...
This paper presents a GAN-based framework for learning the distribution from high-dimensional incomplete data.
1,799
scitldr