Columns:
- query: string, length 273 to 149k
- pos: string, length 18 to 667
- idx: int64, range 0 to 1.99k
- task_name: string, 1 distinct value ("scitldr")
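Below is a minimal sketch of how rows with this schema might be iterated, assuming the dump has been exported as JSON Lines; the file name scitldr_rows.jsonl and the load_rows helper are hypothetical, not part of the dataset itself.

```python
import json

def load_rows(path="scitldr_rows.jsonl"):
    """Yield (query, pos, idx, task_name) tuples from a JSONL export.

    Hypothetical loader: assumes one JSON object per line carrying the
    four columns listed above.
    """
    with open(path, encoding="utf-8") as f:
        for line in f:
            row = json.loads(line)
            # query: paper abstract (string, 273 to 149k chars)
            # pos:   one-sentence TLDR of the abstract (string, 18 to 667 chars)
            # idx:   integer row index (0 to 1.99k)
            # task_name: constant "scitldr" for every row
            yield row["query"], row["pos"], row["idx"], row["task_name"]

if __name__ == "__main__":
    for query, pos, idx, task in load_rows():
        print(f"[{idx}] ({task}) {pos[:80]}")
        break  # print only the first row as a smoke test
```

Each query holds an abstract (truncated in this preview) and each pos holds its TLDR-style summary, so the pairs can be consumed directly as (document, summary) examples.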
Generative Adversarial Networks (GANs) have proven to be a powerful framework for learning to draw samples from complex distributions. However, GANs are also notoriously difficult to train, with mode collapse and oscillations being common problems. We hypothesize that this is at least in part due to the evolution of the gen...
Generative Adversarial Network Training is a Continual Learning Problem.
100
scitldr
Many problems with large-scale labeled training data have been impressively solved by deep learning. However, Unseen Class Categorization (UCC) with minimal information provided about target classes is the most commonly encountered setting in industry, which remains a challenging research problem in machine learning. P...
A unified framework for both few-shot learning and zero-shot learning based on network reparameterization
101
scitldr
Proteins are ubiquitous molecules whose function in biological processes is determined by their 3D structure. Experimental identification of a protein's structure can be time-consuming, prohibitively expensive, and not always possible. Alternatively, protein folding can be modeled using computational methods, which how...
GraphQA is a graph-based method for protein Quality Assessment that improves the state-of-the-art for both hand-engineered and representation-learning approaches
102
scitldr
We study the problem of training machine learning models incrementally using active learning with access to imperfect or noisy oracles. We specifically consider the setting of batch active learning, in which multiple samples are selected as opposed to a single sample as in classical settings so as to reduce the trainin...
We address active learning in the batch setting with noisy oracles and use model uncertainty to encode the decision quality of the active learning algorithm during acquisition.
103
scitldr
Artistic style transfer is the problem of synthesizing an image with content similar to a given image and style similar to another. Although recent feed-forward neural networks can generate stylized images in real-time, these models produce a single stylization given a pair of style/content images, and the user doesn't...
Stochastic style transfer with adjustable features.
104
scitldr
Recent work has shown that deep reinforcement-learning agents can learn to follow language-like instructions from infrequent environment rewards. However, this places on environment designers the onus of designing language-conditional reward functions which may not be easily or tractably implemented as the complexity o...
We propose AGILE, a framework for training agents to perform instructions from examples of respective goal-states.
105
scitldr
We present Multitask Soft Option Learning (MSOL), a hierarchical multi-task framework based on Planning-as-Inference. MSOL extends the concept of Options, using separate variational posteriors for each task, regularized by a shared prior. The learned soft-options are temporally extended, allowing a higher-level master ...
In Hierarchical RL, we introduce the notion of a 'soft', i.e. adaptable, option and show that this helps learning in multitask settings.
106
scitldr
We propose an algorithm, guided variational autoencoder (Guided-VAE), that is able to learn a controllable generative model by performing latent representation disentanglement learning. The learning objective is achieved by providing signal to the latent encoding/embedding in VAE without changing its main backbone arch...
Learning a controllable generative model by performing latent representation disentanglement learning.
107
scitldr
Neural language models (NLMs) are generative, and they model the distribution of grammatical sentences. Trained on huge corpora, NLMs are pushing the limit of modeling accuracy. Besides, they have also been applied to supervised learning tasks that decode text, e.g., automatic speech recognition (ASR). By re-scoring the...
Enhance the language model for supervised learning tasks
108
scitldr
Conventionally, convolutional neural networks (CNNs) process different images with the same set of filters. However, the variations in images pose a challenge to this fashion. In this paper, we propose to generate sample-specific filters for convolutional layers in the forward pass. Since the filters are generated on-t...
dynamically generate filters conditioned on the input image for CNNs in each forward pass
109
scitldr
We propose a new anytime neural network which allows partial evaluation by subnetworks with different widths as well as depths. Compared to conventional anytime networks only with the depth controllability, the increased architectural diversity leads to higher resource utilization and consequent performance improvement...
We propose a new anytime neural network which allows partial evaluation by subnetworks with different widths as well as depths.
110
scitldr
We propose a new model for making generalizable and diverse retrosynthetic reaction predictions. Given a target compound, the task is to predict the likely chemical reactants to produce the target. This generative task can be framed as a sequence-to-sequence problem by using the SMILES representations of the molecules....
We propose a new model for making generalizable and diverse retrosynthetic reaction predictions.
111
scitldr
Unsupervised learning of disentangled representations is an open problem in machine learning. The Disentanglement-PyTorch library is developed to facilitate research, implementation, and testing of new variational algorithms. In this modular library, neural architectures, dimensionality of the latent space, and the tra...
Disentanglement-PyTorch is a library for variational representation learning
112
scitldr
Trust region methods, such as TRPO, are often used to stabilize policy optimization algorithms in reinforcement learning (RL). While current trust region strategies are effective for continuous control, they typically require a large amount of on-policy interaction with the environment. To address this problem, we prop...
We extend recent insights related to softmax consistency to achieve state-of-the-art results in continuous control.
113
scitldr
Deep reinforcement learning has achieved many recent successes, but our understanding of its strengths and limitations is hampered by the lack of rich environments in which we can fully characterize optimal behavior, and correspondingly diagnose individual actions against such a characterization. Here we consider a fam...
We adapt a family of combinatorial games with tunable difficulty and an optimal policy expressible as linear network, developing it as a rich environment for reinforcement learning, showing contrasts in performance with supervised learning, and analyzing multiagent learning and generalization.
114
scitldr
Adoption of deep learning in safety-critical systems raises the need for understanding what deep neural networks do not understand. Several methodologies to estimate model uncertainty have been proposed, but these methodologies constrain either how the neural network is trained or constructed. We present Outlier Detecti...
An add-on method for deep learning to detect outliers during prediction-time
115
scitldr
This work introduces a simple network for producing character aware word embeddings. Position agnostic and position aware character embeddings are combined to produce an embedding vector for each word. The learned word representations are shown to be very sparse and facilitate improved results on language modeling tasks, despi...
A fully connected architecture is used to produce word embeddings from character representations, outperforms traditional embeddings and provides insight into sparsity and dropout.
116
scitldr
Neural networks with low-precision weights and activations offer compelling efficiency advantages over their full-precision equivalents. The two most frequently discussed benefits of quantization are reduced memory consumption, and a faster forward pass when implemented with efficient bitwise operations. We propose a t...
We conduct adversarial attacks against binarized neural networks and show that we reduce the impact of the strongest attacks, while maintaining comparable accuracy in a black-box setting
117
scitldr
Unsupervised bilingual dictionary induction (UBDI) is useful for unsupervised machine translation and for cross-lingual transfer of models into low-resource languages. One approach to UBDI is to align word vector spaces in different languages using Generative adversarial networks (GANs) with linear generators, achievin...
An empirical investigation of GAN-based alignment of word vector spaces, focusing on cases where linear transformations provably exist but training is unstable.
118
scitldr
Owing to the ubiquity of computer software, software vulnerability detection (SVD) has become an important problem in the software industry and in the field of computer security. One of the most crucial issues in SVD is coping with the scarcity of labeled vulnerabilities in projects that require the laborious manual la...
Our aim in this paper is to propose a new approach for tackling the problem of transfer learning from labeled to unlabeled software projects in the context of SVD in order to resolve the mode collapsing problem faced in previous approaches.
119
scitldr
Representation learning is one of the foundations of Deep Learning and allowed important improvements on several Machine Learning tasks, such as Neural Machine Translation, Question Answering and Speech Recognition. Recent works have proposed new methods for learning representations for nodes and edges in graphs. Sever...
A faster method for generating node embeddings that employs a number of permutations over a node's immediate neighborhood as context to generate its representation.
120
scitldr
Orthogonal recurrent neural networks address the vanishing gradient problem by parameterizing the recurrent connections using an orthogonal matrix. This class of models is particularly effective at solving tasks that require the memorization of long sequences. We propose an alternative solution based on explicit memoriza...
We show how to initialize recurrent architectures with the closed-form solution of a linear autoencoder for sequences. We show the advantages of this approach compared to orthogonal RNNs.
121
scitldr
This paper improves upon the line of research that formulates named entity recognition (NER) as a sequence-labeling problem. We use so-called black-box long short-term memory (LSTM) encoders to achieve state-of-the-art results while providing insightful understanding of what the auto-regressive model learns with a parallel sel...
We provide insightful understanding of sequence-labeling NER and propose to use two types of cross structures, both of which bring theoretical and empirical improvements.
122
scitldr
Knowledge Graph Embedding (KGE) is the task of jointly learning entity and relation embeddings for a given knowledge graph. Existing methods for learning KGEs can be seen as a two-stage process where (a) entities and relations in the knowledge graph are represented using some linear algebraic structures (embeddings), a...
We present a theoretically proven generative model of knowledge graph embedding.
123
scitldr
Currently the only techniques for sharing governance of a deep learning model are homomorphic encryption and secure multiparty computation. Unfortunately, neither of these techniques is applicable to the training of large neural networks due to their large computational and communication overheads. As a scalable techni...
We study empirically how hard it is to recover missing parts of trained models
124
scitldr
This paper proposes variational domain adaptation, a unified, scalable, simple framework for learning multiple distributions through variational inference. Unlike the existing methods on domain transfer through deep generative models, such as StarGAN and UFDN, the variational domain adaptation has three advantages. Fi...
This paper proposes variational domain adaptation, a unified, scalable, simple framework for learning multiple distributions through variational inference
125
scitldr
We propose a new method to train neural networks based on a novel combination of adversarial training and provable defenses. The key idea is to model training as a procedure which includes both the verifier and the adversary. In every iteration, the verifier aims to certify the network using convex relaxation while th...
We propose a novel combination of adversarial training and provable defenses which produces a model with state-of-the-art accuracy and certified robustness on CIFAR-10.
126
scitldr
Learning tasks on source code (i.e., formal languages) have been considered recently, but most work has tried to transfer natural language methods and does not capitalize on the unique opportunities offered by code's known syntax. For example, long-range dependencies induced by using the same variable or function in di...
Programs have structure that can be represented as graphs, and graph neural networks can learn to find bugs on such graphs
127
scitldr
Overfitting is a ubiquitous problem in neural network training and usually mitigated using a holdout data set. Here we challenge this rationale and investigate criteria for overfitting without using a holdout data set. Specifically, we train a model for a fixed number of epochs multiple times with varying fractions of...
We introduce and analyze several criteria for detecting overfitting.
128
scitldr
Partially observable Markov decision processes (POMDPs) are a widely-used framework to model decision-making with uncertainty about the environment and under stochastic outcomes. In conventional POMDP models, the observations that the agent receives originate from a fixed known distribution. However, in a variety of real-...
We develop a point-based value iteration solver for POMDPs with active perception and planning tasks.
129
scitldr
Deep neural networks, in particular convolutional neural networks, have become highly effective tools for compressing images and solving inverse problems including denoising, inpainting, and reconstruction from few and noisy measurements. This success can be attributed in part to their ability to represent and generate...
We introduce an underparameterized, nonconvolutional, and simple deep neural network that can, without training, effectively represent natural images and solve image processing tasks like compression and denoising competitively.
130
scitldr
In this paper we investigate the family of functions representable by deep neural networks (DNN) with rectified linear units (ReLU). We give an algorithm to train a ReLU DNN with one hidden layer to global optimality with runtime polynomial in the data size albeit exponential in the input dimension. Further, we i...
This paper 1) characterizes functions representable by ReLU DNNs, 2) formally studies the benefit of depth in such architectures, 3) gives an algorithm to implement empirical risk minimization to global optimality for two layer ReLU nets.
131
scitldr
The backpropagation of error algorithm (BP) is often said to be impossible to implement in a real brain. The recent success of deep networks in machine learning and AI, however, has inspired a number of proposals for understanding how the brain might learn across multiple layers, and hence how it might implement or app...
Benchmarks for biologically plausible learning algorithms on complex datasets and architectures
132
scitldr
Deep neural networks (DNNs) usually contain millions, maybe billions, of parameters/weights, making both storage and computation very expensive. This has motivated a large body of work to reduce the complexity of the neural network by using sparsity-inducing regularizers. Another well-known approach for controlling the...
We have proposed using the recent GrOWL regularizer for simultaneous parameter sparsity and tying in DNN learning.
133
scitldr
Weight-sharing—the simultaneous optimization of multiple neural networks using the same parameters—has emerged as a key component of state-of-the-art neural architecture search. However, its success is poorly understood and often found to be surprising. We argue that, rather than just being an optimization trick, the w...
An analysis of the learning and optimization structures of architecture search in neural networks and beyond.
134
scitldr
Deep latent variable models have seen recent success in many data domains. Lossless compression is an application of these models which, despite having the potential to be highly useful, has yet to be implemented in a practical manner. We present 'Bits Back with ANS' (BB-ANS), a scheme to perform lossless compression w...
We do lossless compression of large image datasets using a VAE and beat existing compression algorithms.
135
scitldr
Hyperparameter tuning is arguably the most important ingredient for obtaining state-of-the-art performance in deep networks. We focus on hyperparameters that are related to the optimization algorithm, e.g. learning rates, which have a large impact on the training speed and the resulting accuracy. Typically, fixed learning rate s...
Bayesian optimization based online hyperparameter optimization.
136
scitldr
Multi-hop text-based question-answering is a current challenge in machine comprehension. This task requires to sequentially integrate facts from multiple passages to answer complex natural language questions. In this paper, we propose a novel architecture, called the Latent Question Reformulation Network (LQR-net), a m...
In this paper, we propose the Latent Question Reformulation Network (LQR-net), a multi-hop and parallel attentive network designed for question-answering tasks that require reasoning capabilities.
137
scitldr
We propose a method to automatically compute the importance of features at every observation in time series, by simulating counterfactual trajectories given previous observations. We define the importance of each observation as the change in the model output caused by replacing the observation with a generated one. Our...
Explaining Multivariate Time Series Models by finding important observations in time using Counterfactuals
138
scitldr
This paper addresses unsupervised domain adaptation, the setting where labeled training data is available on a source domain, but the goal is to have good performance on a target domain with only unlabeled data. Like much of previous work, we seek to align the learned representations of the source and target domains wh...
We use self-supervision on both domains to align them for unsupervised domain adaptation.
139
scitldr
We introduce simple, efficient algorithms for computing a MinHash of a probability distribution, suitable for both sparse and dense data, with equivalent running times to the state of the art for both cases. The collision probability of these algorithms is a new measure of the similarity of positive vectors which we in...
The minimum of a set of exponentially distributed hashes has a very useful collision probability that generalizes the Jaccard Index to probability distributions.
140
scitldr
Recently, progress has been made towards improving relational reasoning in the machine learning field. Among existing models, graph neural networks (GNNs) are one of the most effective approaches for multi-hop relational reasoning. In fact, multi-hop relational reasoning is indispensable in many natural language processing ...
A graph neural network model with parameters generated from natural language, which can perform multi-hop reasoning.
141
scitldr
Off-Policy Actor-Critic (Off-PAC) methods have proven successful in a variety of continuous control tasks. Normally, the critic’s action-value function is updated using temporal-difference, and the critic in turn provides a loss for the actor that trains it to take actions with higher expected return. In this paper, we...
We present Meta-Critic, an auxiliary critic module for off-policy actor-critic methods that can be meta-learned online during single task learning.
142
scitldr
Modern neural networks are highly overparameterized, with capacity to substantially overfit to training data. Nevertheless, these networks often generalize well in practice. It has also been observed that trained networks can often be "compressed" to much smaller representations. The purpose of this paper is to connect...
We obtain non-vacuous generalization bounds on ImageNet-scale deep neural networks by combining an original PAC-Bayes bound and an off-the-shelf neural network compression method.
143
scitldr
Adversarial examples can be defined as inputs to a model which induce a mistake -- where the model output is different than that of an oracle, perhaps in surprising or malicious ways. Original models of adversarial attacks are primarily studied in the context of classification and computer vision tasks. While several a...
We propose an alternative measure for determining effectiveness of adversarial attacks in NLP models according to a distance measure-based method like incremental L2-gain in control theory.
144
scitldr
We propose a Warped Residual Network (WarpNet) using a parallelizable warp operator for forward and backward propagation to distant layers that trains faster than the original residual neural network. We apply a perturbation theory on residual networks and decouple the interactions between residual units. The resulting warp ...
We propose the Warped Residual Network using a parallelizable warp operator for forward and backward propagation to distant layers that trains faster than the original residual neural network.
145
scitldr
A plethora of methods attempting to explain predictions of black-box models have been proposed by the Explainable Artificial Intelligence (XAI) community. Yet, measuring the quality of the generated explanations is largely unexplored, making quantitative comparisons non-trivial. In this work, we propose a suite of mult...
We propose a suite of metrics that capture desired properties of explainability algorithms and use it to objectively compare and evaluate such methods
146
scitldr
Neural networks are known to produce unexpected results on inputs that are far from the training distribution. One approach to tackle this problem is to detect the samples on which the trained network cannot answer reliably. ODIN is a recently proposed method for out-of-distribution detection that does not modify the trained ...
A recent out-of-distribution detection method helps to measure the confidence of RNN predictions for some NLP tasks
147
scitldr
Some recent work has shown separation between the expressive power of depth-2 and depth-3 neural networks. These separations are shown by constructing functions and input distributions, so that the function is well-approximable by a depth-3 neural network of polynomial size but it cannot be well-approximated under the c...
depth-2-vs-3 separation for sigmoidal neural networks over general distributions
148
scitldr
The smallest eigenvectors of the graph Laplacian are well-known to provide a succinct representation of the geometry of a weighted graph. In reinforcement learning (RL), where the weighted graph may be interpreted as the state transition process induced by a behavior policy acting on the environment, approximating the ...
We propose a scalable method to approximate the eigenvectors of the Laplacian in the reinforcement learning context and we show that the learned representations can improve the performance of an RL agent.
149
scitldr
Our work offers a new method for domain translation from semantic label maps and Computer Graphic (CG) simulation edge map images to photo-realistic images. We train a Generative Adversarial Network (GAN) in a conditional way to generate a photo-realistic version of a given CG scene. Existing architectures of GANs st...
Simulation to real images translation and video generation
150
scitldr
Deep neural networks are widely used in various domains, but the prohibitive computational complexity prevents their deployment on mobile devices. Numerous model compression algorithms have been proposed, however, it is often difficult and time-consuming to choose proper hyper-parameters to obtain an efficient compress...
We propose PocketFlow, an automated framework for model compression and acceleration, to facilitate deep learning models' deployment on mobile devices.
151
scitldr
Generative models provide a way to model structure in complex distributions and have been shown to be useful for many tasks of practical interest. However, current techniques for training generative models require access to fully-observed samples. In many settings, it is expensive or even impossible to obtain fully-obs...
How to learn GANs from noisy, distorted, partial observations
152
scitldr
Random Matrix Theory (RMT) is applied to analyze the weight matrices of Deep Neural Networks (DNNs), including both production quality, pre-trained models such as AlexNet and Inception, and smaller models trained from scratch, such as LeNet5 and a miniature-AlexNet. Empirical and theoretical results clearly indicate that the e...
See the abstract. (For the revision, the paper is identical, except for a 59-page Supplementary Material, which can serve as a stand-alone technical report version of the paper.)
153
scitldr
We introduce an attention mechanism to improve feature extraction for deep active learning (AL) in the semi-supervised setting. The proposed attention mechanism is based on recent methods to visually explain predictions made by DNNs. We apply the proposed explanation-based attention to MNIST and SVHN classification. Th...
We introduce an attention mechanism to improve feature extraction for deep active learning (AL) in the semi-supervised setting.
154
scitldr
We apply canonical forms of gradient complexes (barcodes) to explore neural network loss surfaces. We present an algorithm for calculating the objective function's barcodes of minima. Our experiments confirm two principal observations: the barcodes of minima are located in a small lower part of the range of values...
We apply canonical forms of gradient complexes (barcodes) to explore neural network loss surfaces.
155
scitldr
New types of compute hardware in development and entering the market hold the promise of revolutionizing deep learning in a manner as profound as GPUs. However, existing software frameworks and training algorithms for deep learning have yet to evolve to fully leverage the capability of the new wave of silicon. In parti...
Using asynchronous gradient updates to accelerate dynamic neural network training
156
scitldr
In cooperative multi-agent reinforcement learning (MARL), how to design a suitable reward signal to accelerate learning and stabilize convergence is a critical problem. The global reward signal assigns the same global reward to all agents without distinguishing their contributions, while the local reward signal provide...
We study the reward design problem in cooperative MARL based on packet routing environments. The experimental results remind us to design the rewards carefully, as they are really important for guiding agent behavior.
157
scitldr
Recent advances have illustrated that it is often possible to learn to solve linear inverse problems in imaging using training data that can outperform more traditional regularized least squares solutions. Along these lines, we present some extensions of the Neumann network, a recently introduced end-to-end learned arc...
Neumann networks are an end-to-end, sample-efficient learning approach to solving linear inverse problems in imaging that are compatible with the MSE optimal approach and admit an extension to patch-based learning.
158
scitldr
End-to-end task-oriented dialogue is challenging since knowledge bases are usually large, dynamic and hard to incorporate into a learning framework. We propose the global-to-local memory pointer (GLMP) networks to address this issue. In our model, a global memory encoder and a local memory decoder are proposed to share...
GLMP: Global memory encoder (context RNN, global pointer) and local memory decoder (sketch RNN, local pointer) that share external knowledge (MemNN) are proposed to strengthen response generation in task-oriented dialogue.
159
scitldr
The checkerboard phenomenon is one of the well-known visual artifacts in the computer vision field. The origins and solutions of checkerboard artifacts in the pixel space have been studied for a long time, but their effects on the gradient space have rarely been investigated. In this paper, we revisit the checkerboard ...
We propose a novel artificial checkerboard enhancer (ACE) module which guides attacks to a pre-specified pixel space and successfully defends it with a simple padding operation.
160
scitldr
Min-max formulations have attracted great attention in the ML community due to the rise of deep generative models and adversarial methods, and understanding the dynamics of (stochastic) gradient algorithms for solving such formulations has been a grand challenge. As a first step, we restrict to bilinear zero-sum games ...
We systematically analyze the convergence behaviour of popular gradient algorithms for solving bilinear games, with both simultaneous and alternating updates.
161
scitldr
Most approaches in generalized zero-shot learning rely on cross-modal mapping between an image feature space and a class embedding space or on generating artificial image features. However, learning a shared cross-modal embedding by aligning the latent spaces of modality-specific autoencoders is shown to be promising i...
We use VAEs to learn a shared latent space embedding between image features and attributes and thereby achieve state-of-the-art results in generalized zero-shot learning.
162
scitldr
Intuitively, image classification should profit from using spatial information. Recent work, however, suggests that this might be overrated in standard CNNs. In this paper, we are pushing the envelope and aim to further investigate the reliance on and necessity of spatial information. We propose and analyze three metho...
Spatial information in the last layers is not necessary for good classification accuracy.
163
scitldr
Disentangling underlying generative factors of a data distribution is important for interpretability and generalizable representations. In this paper, we introduce two novel disentangling methods. Our first method, Unlabeled Disentangling GAN (UD-GAN, unsupervised), decomposes the latent noise by generating similar/dis...
We use Siamese Networks to guide and disentangle the generation process in GANs without labeled data.
164
scitldr
We present Predicted Variables, an approach to making machine learning (ML) a first class citizen in programming languages. There is a growing divide in approaches to building systems: using human experts (e.g. programming) on the one hand, and using behavior learned from data (e.g. ML) on the other hand. PVars aim to ...
We present Predicted Variables, an approach to making machine learning a first class citizen in programming languages.
165
scitldr
Much recent research has been devoted to video prediction and generation, but mostly for short-scale time horizons. The hierarchical video prediction method by is an example of a state of the art method for long term video prediction. However, their method has limited applicability in practical settings as it requires ...
We show ways to train a hierarchical video prediction model without needing pose labels.
166
scitldr
Combining information from different sensory modalities to execute goal directed actions is a key aspect of human intelligence. Specifically, human agents are very easily able to translate the task communicated in one sensory domain (say vision) into a representation that enables them to complete this task when they ca...
In this work, we study the problem of learning representations to identify novel objects by exploring objects using tactile sensing. The key point here is that the query is provided in the image domain.
167
scitldr
Locality sensitive hashing schemes such as SimHash provide compact representations of multisets from which similarity can be estimated. However, in certain applications, we need to estimate the similarity of dynamically changing sets. In this case, we need the representation to be a homomorphism so that the hash of un...
We employ linear homomorphic compression schemes to represent the sufficient statistics of a conditional random field model of coreference and this allows us to scale inference and improve speed by an order of magnitude.
168
scitldr
Motivated by applications to unsupervised learning, we consider the problem of measuring mutual information. Recent analysis has shown that naive kNN estimators of mutual information have serious statistical limitations motivating more refined methods. In this paper we prove that serious statistical limitations are inh...
We give a theoretical analysis of the measurement and optimization of mutual information.
169
scitldr
In this paper, we propose a neural network framework called neuron hierarchical network (NHN), that evolves beyond the hierarchy in layers, and concentrates on the hierarchy of neurons. We observe mass redundancy in the weights of both handcrafted and randomly searched architectures. Inspired by the development of huma...
By breaking the layer hierarchy, we propose a 3-step approach to the construction of neuron-hierarchy networks that outperform NAS, SMASH and hierarchical representation with fewer parameters and shorter searching time.
170
scitldr
Simulation is a useful tool in situations where training data for machine learning models is costly to annotate or even hard to acquire. In this work, we propose a reinforcement learning-based method for automatically adjusting the parameters of any (non-differentiable) simulator, thereby controlling the distribution o...
We propose an algorithm that automatically adjusts parameters of a simulation engine to generate training data for a neural network such that validation accuracy is maximized.
171
scitldr
Modelling statistical relationships beyond the conditional mean is crucial in many settings. Conditional density estimation (CDE) aims to learn the full conditional probability density from data. Though highly expressive, neural network based CDE models can suffer from severe over-fitting when trained with the maximum ...
A model-agnostic regularization scheme for neural network-based conditional density estimation.
172
scitldr
Unsupervised representation learning holds the promise of exploiting large amounts of available unlabeled data to learn general representations. A promising technique for unsupervised learning is the framework of Variational Auto-encoders (VAEs). However, unsupervised representations learned by VAEs are significantly ou...
A patch-based bottleneck formulation in a VAE framework that learns unsupervised representations better suited for visual recognition.
173
scitldr
Vanishing and exploding gradients are two of the main obstacles in training deep neural networks, especially in capturing long range dependencies in recurrent neural networks (RNNs). In this paper, we present an efficient parametrization of the transition matrix of an RNN that allows us to stabilize the gradients that ...
To solve the gradient vanishing/exploding problems, we propose an efficient parametrization of the transition matrix of an RNN that loses no expressive power, converges faster, and has good generalization.
174
scitldr
Recent image style transferring methods achieved arbitrary stylization with input content and style images. To transfer the style of an arbitrary image to a content image, these methods used a feed-forward network with a lowest-scaled feature transformer or a cascade of the networks with a feature transformer of a corr...
A paper suggesting a method to transform the style of images using deep neural networks.
175
scitldr
Beyond understanding what is being discussed, human communication requires an awareness of what someone is feeling. One challenge for dialogue agents is recognizing feelings in the conversation partner and replying accordingly, a key communicative skill that is trivial for humans. Research in this area is made difficul...
We improve existing dialogue systems for responding to people sharing personal stories, incorporating emotion prediction representations and also release a new benchmark and dataset of empathetic dialogues.
176
scitldr
Granger causality is a widely-used criterion for analyzing interactions in large-scale networks. As most physical interactions are inherently nonlinear, we consider the problem of inferring the existence of pairwise Granger causality between nonlinearly interacting stochastic processes from their time series measuremen...
A new recurrent neural network architecture for detecting pairwise Granger causality between nonlinearly interacting time series.
177
scitldr
Graph convolutional networks (GCNs) are powerful deep neural networks for graph-structured data. However, GCN computes nodes' representation recursively from their neighbors, making the receptive field size grow exponentially with the number of layers. Previous attempts on reducing the receptive field size by subsampli...
A control-variate-based stochastic training algorithm for graph convolutional networks in which the receptive field can be reduced to only two neighbors per node.
178
scitldr
Low bit-width integer weights and activations are very important for efficient inference, especially with respect to lower power consumption. We propose to apply Monte Carlo methods and importance sampling to sparsify and quantize pre-trained neural networks without any retraining. We obtain sparse, low bit-width integ...
Monte Carlo methods for quantizing pre-trained models without any additional training.
179
scitldr
We propose the Information Maximization Autoencoder (IMAE), an information theoretic approach to simultaneously learn continuous and discrete representations in an unsupervised setting. Unlike the Variational Autoencoder framework, IMAE starts from a stochastic encoder that seeks to map each input data to a hybrid disc...
An information-theoretic approach for unsupervised learning of a hybrid of discrete and continuous representations.
180
scitldr
Learning rules for neural networks necessarily include some form of regularization. Most regularization techniques are conceptualized and implemented in the space of parameters. However, it is also possible to regularize in the space of functions. Here, we propose to measure networks in an $L^2$ Hilbert space, and test...
It's important to consider optimization in function space, not just parameter space. We introduce a learning rule that reduces distance traveled in function space, just like SGD limits distance traveled in parameter space.
181
scitldr
Stochastic gradient descent (SGD), which dates back to the 1950s, is one of the most popular and effective approaches for performing stochastic optimization. Research on SGD resurged recently in machine learning for optimizing convex loss functions and training nonconvex deep neural networks. The theory assumes that on...
Convergence theory for biased (but consistent) gradient estimators in stochastic optimization and application to graph convolutional networks
182
scitldr
We consider the problem of uncertainty estimation in the context of (non-Bayesian) deep neural classification. In this context, all known methods are based on extracting uncertainty signals from a trained network optimized to solve the classification problem at hand. We demonstrate that such techniques tend to introduc...
We use snapshots from the training process to improve any uncertainty estimation method of a DNN classifier.
183
scitldr
Existing public face image datasets are strongly biased toward Caucasian faces, and other races (e.g., Latino) are significantly underrepresented. The models trained from such datasets suffer from inconsistent classification accuracy, which limits the applicability of face analytic systems to non-White race groups. To ...
A new face image dataset for balanced race, gender, and age which can be used for bias measurement and mitigation
184
scitldr
Dramatic advances in generative models have resulted in near photographic quality for artificially rendered faces, animals and other objects in the natural world. In spite of such advances, a higher level understanding of vision and imagery does not arise from exhaustively modeling an object, but instead identifying higher-l...
We attempt to model the drawing process of fonts by building sequential generative models of vector graphics (SVGs), a highly structured representation of font characters.
185
scitldr
What can we learn about the functional organization of cortical microcircuits from large-scale recordings of neural activity? To obtain an explicit and interpretable model of time-dependent functional connections between neurons and to establish the dynamics of the cortical information flow, we develop 'dynamic neural r...
We develop 'dynamic neural relational inference', a variational autoencoder model that can explicitly and interpretably represent the hidden dynamic relations between neurons.
186
scitldr
DeePa is a deep learning framework that explores parallelism in all parallelizable dimensions to accelerate the training process of convolutional neural networks. DeePa optimizes parallelism at the granularity of each individual layer in the network. We present an elimination-based algorithm that finds an optimal paral...
To the best of our knowledge, DeePa is the first deep learning framework that controls and optimizes the parallelism of CNNs in all parallelizable dimensions at the granularity of each layer.
187
scitldr
One can substitute each neuron in any neural network with a kernel machine and obtain a counterpart powered by kernel machines. The new network inherits the expressive power and architecture of the original but works in a more intuitive way since each node enjoys the simple interpretation as a hyperplane (in a reproduc...
We combine kernel method with connectionist models and show that the resulting deep architectures can be trained layer-wise and have more transparent learning dynamics.
188
scitldr
Many notions of fairness may be expressed as linear constraints, and the resulting constrained objective is often optimized by transforming the problem into its Lagrangian dual with additive linear penalties. In non-convex settings, the resulting problem may be difficult to solve as the Lagrangian is not guaranteed to have a deter...
We propose a method to stochastically optimize second-order penalties and show how this may apply to training fairness-aware classifiers.
189
scitldr
Training methods for deep networks are primarily variants on stochastic gradient descent. Techniques that use (approximate) second-order information are rarely used because of the computational cost and noise associated with those approaches in deep learning contexts. However, in this paper, we show how feedforward dee...
We show that deep learning network derivatives have a low-rank structure, and this structure allows us to use second-order derivative information to calculate learning rates adaptively and in a computationally feasible manner.
190
scitldr
The worst-case training principle that minimizes the maximal adversarial loss, also known as adversarial training (AT), has shown to be a state-of-the-art approach for enhancing adversarial robustness against norm-ball bounded input perturbations. Nonetheless, min-max optimization beyond the purpose of AT has not been ...
A unified min-max optimization framework for adversarial attack and defense
191
scitldr
Most deep learning models rely on expressive high-dimensional representations to achieve good performance on tasks such as classification. However, the high dimensionality of these representations makes them difficult to interpret and prone to over-fitting. We propose a simple, intuitive and scalable dimension reductio...
dimensionality reduction for cases where examples can be represented as soft probability distributions
192
scitldr
Intrinsically motivated goal exploration algorithms enable machines to discover repertoires of policies that produce a diversity of effects in complex environments. These exploration algorithms have been shown to allow real world robots to acquire skills such as tool use in high-dimensional continuous state and action ...
We propose a novel Intrinsically Motivated Goal Exploration architecture with unsupervised learning of goal space representations, and evaluate how various implementations enable the discovery of a diversity of policies.
193
scitldr
One of the main challenges of deep learning methods is the choice of an appropriate training strategy. In particular, additional steps, such as unsupervised pre-training, have been shown to greatly improve the performances of deep structures. In this article, we propose an extra training step, called post-training, whi...
We propose an additional training step, called post-training, which computes optimal weights for the last layer of the network.
194
scitldr
Natural language processing (NLP) models often require a massive number of parameters for word embeddings, resulting in a large storage or memory footprint. Deploying neural NLP models to mobile devices requires compressing the word embeddings without any significant sacrifices in performance. For this purpose, we propose to...
Compressing the word embeddings by over 94% without hurting performance.
195
scitldr
It is important to detect anomalous inputs when deploying machine learning systems. The use of larger and more complex inputs in deep learning magnifies the difficulty of distinguishing between anomalous and in-distribution examples. At the same time, diverse image and text data are available in enormous quantities. We...
OE teaches anomaly detectors to learn heuristics for detecting unseen anomalies; experiments are in classification, density estimation, and calibration in NLP and vision settings; we do not tune on test distribution samples, unlike previous work
196
scitldr
While generative neural networks can learn to transform a specific input dataset into a specific target dataset, they require having just such a paired set of input/output datasets. For instance, to fool the discriminator, a generative adversarial network (GAN) exclusively trained to transform images of black-haired *m...
A method for learning a transformation between one pair of source/target datasets and applying it to a separate source dataset for which there is no target dataset
197
scitldr
In this paper, we propose to combine imitation and reinforcement learning via the idea of reward shaping using an oracle. We study the effectiveness of the near-optimal cost-to-go oracle on the planning horizon and demonstrate that the cost-to-go oracle shortens the learner’s planning horizon as a function of its accur...
Combining Imitation Learning and Reinforcement Learning to learn to outperform the expert
198
scitldr
Recently, Generative Adversarial Networks (GANs) have emerged as a popular alternative for modeling complex high dimensional distributions. Most of the existing works implicitly assume that the clean samples from the target distribution are easily available. However, in many applications, this assumption is violated. I...
An unsupervised learning approach for separating two structured signals from their superposition
199
scitldr