Dataset schema:
- query: string (lengths 273 to 149k)
- pos: string (lengths 18 to 667)
- idx: int64 (0 to 1.99k)
- task_name: string (1 distinct value)
This paper explores the use of self-ensembling for visual domain adaptation problems. Our technique is derived from the mean teacher variant (Tarvainen et al., 2017) of temporal ensembling (Laine et al., 2017), a technique that achieved state of the art in the area of semi-supervised learning. We introduce a number of mo...
Self-ensembling based algorithm for visual domain adaptation, state of the art results, won VisDA-2017 image classification domain adaptation challenge.
1,000
scitldr
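For context on the mean-teacher mechanism this entry builds on: the teacher network is an exponential moving average (EMA) of the student's weights. A minimal sketch, assuming PyTorch and a student/teacher pair with identical architectures (not the authors' released code):

```python
import torch

@torch.no_grad()
def update_teacher(student: torch.nn.Module, teacher: torch.nn.Module,
                   alpha: float = 0.99) -> None:
    """Mean-teacher update: teacher weights track an EMA of the student."""
    for t_p, s_p in zip(teacher.parameters(), student.parameters()):
        t_p.mul_(alpha).add_(s_p, alpha=1 - alpha)
```

The student is trained with gradients as usual; the teacher only receives EMA updates and provides the consistency targets.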
It is easy for people to imagine what a man with pink hair looks like, even if they have never seen such a person before. We call the ability to create images of novel semantic concepts visually grounded imagination. In this paper, we show how we can modify variational auto-encoders to perform this task. Our method use...
A VAE-variant which can create diverse images corresponding to novel concrete or abstract "concepts" described using attribute vectors.
1,001
scitldr
We introduce "Search with Amortized Value Estimates" (SAVE), an approach for combining model-free Q-learning with model-based Monte-Carlo Tree Search (MCTS). In SAVE, a learned prior over state-action values is used to guide MCTS, which estimates an improved set of state-action values. The new Q-estimates are then used...
We propose a model-based method called "Search with Amortized Value Estimates" (SAVE) which leverages both real and planned experience by combining Q-learning with Monte-Carlo Tree Search, achieving strong performance with very small search budgets.
1,002
scitldr
Two main families of reinforcement learning algorithms, Q-learning and policy gradients, have recently been proven to be equivalent when using a softmax relaxation on one part, and an entropic regularization on the other. We relate this to the well-known convex duality of Shannon entropy and the softmax function. Such ...
A short proof of the equivalence of soft Q-learning and policy gradients.
1,003
scitldr
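The duality this entry refers to is the standard Legendre-Fenchel pairing of log-sum-exp and negative Shannon entropy, which in RL notation reads:

```latex
\tau \log \sum_a \exp\!\big(Q(s,a)/\tau\big)
  = \max_{\pi \in \Delta(\mathcal{A})}
    \Big( \sum_a \pi(a)\, Q(s,a) + \tau\, \mathcal{H}(\pi) \Big),
```

with the maximum attained by the softmax policy \pi(a) \propto \exp(Q(s,a)/\tau). The identity is textbook convex duality; the paper's specific proof steps are in the full text.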
Computer simulation provides an automatic and safe way for training robotic control policies to achieve complex tasks such as locomotion. However, a policy trained in simulation usually does not transfer directly to the real hardware due to the differences between the two environments. Transfer learning using domain ra...
We propose a policy transfer algorithm that can overcome large and challenging discrepancies in the system dynamics such as latency, actuator modeling error, etc.
1,004
scitldr
We propose vq-wav2vec to learn discrete representations of audio segments through a wav2vec-style self-supervised context prediction task. The algorithm uses either a Gumbel softmax or online k-means clustering to quantize the dense representations. Discretization enables the direct application of algorithms from the N...
Learn how to quantize the speech signal and apply algorithms requiring discrete inputs, such as BERT, to audio data.
1,005
scitldr
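As a concrete picture of the Gumbel-softmax quantizer mentioned above (one of the paper's two options), here is a minimal straight-through sketch in PyTorch; the codebook and shapes are illustrative, not the released vq-wav2vec code:

```python
import torch
import torch.nn.functional as F

def gumbel_quantize(logits: torch.Tensor, codebook: torch.Tensor,
                    tau: float = 1.0) -> torch.Tensor:
    """Pick a discrete code per frame, differentiably.

    logits:   (batch, frames, num_codes) scores over codebook entries
    codebook: (num_codes, dim) learnable code vectors
    """
    # hard=True gives one-hot codes on the forward pass,
    # soft gradients on the backward pass (straight-through)
    one_hot = F.gumbel_softmax(logits, tau=tau, hard=True)
    return one_hot @ codebook  # (batch, frames, dim)
```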
Deep Reinforcement Learning algorithms lead to agents that can solve difficult decision making problems in complex environments. However, many difficult multi-agent competitive games, especially real-time strategy games are still considered beyond the capability of current deep reinforcement learning algorithms, althou...
We develop Hierarchical Agent with Self-play (HASP), a learning approach for obtaining hierarchically structured policies that can achieve higher performance than conventional self-play on competitive real-time strategy games.
1,006
scitldr
We introduce adaptive input representations for neural language modeling which extend the adaptive softmax of Grave et al. (2017) to input representations of variable capacity. There are several choices on how to factorize the input and output layers, and whether to model words, characters or sub-word units. We perform a systematic compar...
Variable capacity input word embeddings and SOTA on WikiText-103, Billion Word benchmarks.
1,007
scitldr
In this paper, we consider the problem of detecting objects under occlusion. Most object detectors formulate bounding box regression as a unimodal task (i.e., regressing a single set of bounding box coordinates independently). However, we observe that the bounding box borders of an occluded object can have multiple plau...
a deep multivariate mixture of Gaussians model for bounding box regression under occlusion
1,008
scitldr
Adversarial examples are a pervasive phenomenon of machine learning models where seemingly imperceptible perturbations to the input lead to misclassifications for otherwise statistically accurate models. Adversarial training, one of the most successful empirical defenses to adversarial examples, refers to training on a...
We replace the Lp ball constraint with the Voronoi cells of the training data to produce more robust models.
1,009
scitldr
Recent research efforts enable study for natural language grounded navigation in photo-realistic environments, e.g., following natural language instructions or dialog. However, existing methods tend to overfit training data in seen environments and fail to generalize well in previously unseen environments. In order to ...
We propose to learn a more generalized policy for natural language grounded navigation tasks via environment-agnostic multitask learning.
1,010
scitldr
In this paper we propose to perform model ensembling in a multiclass or a multilabel learning setting using Wasserstein (W.) barycenters. Optimal transport metrics, such as the Wasserstein distance, allow incorporating semantic side information such as word embeddings. Using W. barycenters to find the consensus between...
we propose to use Wasserstein barycenters for semantic model ensembling
1,011
scitldr
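To make the ensembling idea concrete: each model contributes a class-probability histogram, a ground cost encodes semantic distances between classes, and the consensus is the entropy-regularized Wasserstein barycenter. A toy sketch assuming the POT library (ot); the embeddings and model posteriors below are random stand-ins:

```python
import numpy as np
import ot  # POT: Python Optimal Transport (assumed installed)

rng = np.random.default_rng(0)
n_classes, n_models = 10, 3

emb = rng.normal(size=(n_classes, 50))             # toy class-label embeddings
P = rng.dirichlet(np.ones(n_classes), n_models).T  # model posteriors as columns

M = ot.dist(emb, emb)  # semantic ground cost between classes
M /= M.max()

# Entropy-regularized Wasserstein barycenter as the ensemble consensus.
consensus = ot.bregman.barycenter(P, M, reg=1e-2)
```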
While deep learning has been incredibly successful in modeling tasks with large, carefully curated labeled datasets, its application to problems with limited labeled data remains a challenge. The aim of the present work is to improve the label efficiency of large neural networks operating on audio data through a combin...
Label-efficient audio classification via multi-task learning and self-supervision
1,012
scitldr
Recent neural network and language models have begun to rely on softmax distributions with an extremely large number of categories. In this context calculating the softmax normalizing constant is prohibitively expensive. This has spurred a growing literature of efficiently computable but biased estimates of the softmax...
Propose first methods for exactly optimizing the softmax distribution using stochastic gradient with runtime independent of the number of classes or datapoints.
1,013
scitldr
Crafting adversarial examples on discrete inputs like text sequences is fundamentally different from generating such examples for continuous inputs like images. This paper tries to answer the question: under a black-box setting, can we create adversarial examples automatically to effectively fool deep learning classifi...
Use Monte Carlo Tree Search and homoglyphs to generate indistinguishable adversarial samples on text data
1,014
scitldr
We introduce a model that learns to convert simple hand drawings into graphics programs written in a subset of LaTeX. The model combines techniques from deep learning and program synthesis. We learn a convolutional neural network that proposes plausible drawing primitives that explain an image. These drawing primitiv...
Learn to convert a hand drawn sketch into a high-level program
1,015
scitldr
Adversarial examples remain an issue for contemporary neural networks. This paper draws on Background Check, a technique in model calibration, to assist two-class neural networks in detecting adversarial examples, using the one-dimensional difference between logit values as the underlying measure. This method interest...
This paper uses principles from the field of calibration in machine learning on the logits of a neural network to defend against adversarial attacks
1,016
scitldr
We present a novel multi-task training approach to learning multilingual distributed representations of text. Our system learns word and sentence embeddings jointly by training a multilingual skip-gram model together with a cross-lingual sentence similarity model. We construct sentence embeddings by processing word emb...
We jointly train a multilingual skip-gram model and a cross-lingual sentence similarity model to learn high quality multilingual text embeddings that perform well in the low resource scenario.
1,017
scitldr
Generative Adversarial Networks (GANs) are a very powerful framework for generative modeling. However, they are often hard to train, and learning of GANs often becomes unstable. Wasserstein GAN (WGAN) is a promising framework to deal with the instability problem as it has a good convergence property. One drawback of th...
We have proposed a flexible generative model that learns stably by directly minimizing the exact empirical Wasserstein distance.
1,018
scitldr
Neural Architecture Search (NAS) is an exciting new field which promises to be as much as a game-changer as Convolutional Neural Networks were in 2012. Despite many great works leading to substantial improvements on a variety of tasks, comparison between different methods is still very much an open issue. While most al...
A study of how different components in the NAS pipeline contribute to the final accuracy. Also, a benchmark of 8 methods on 5 datasets.
1,019
scitldr
Identifying analogies across domains without supervision is a key task for artificial intelligence. Recent advances in cross domain image mapping have concentrated on translating images across domains. Although the progress made is impressive, the visual fidelity often does not suffice for identifying the matching...
Finding correspondences between domains by performing matching/mapping iterations
1,020
scitldr
Effectively inferring discriminative and coherent latent topics of short texts is a critical task for many real world applications. Nevertheless, the task has been proven to be a great challenge for traditional topic models due to the data sparsity problem induced by the characteristics of short texts. Moreover, the co...
a neural sparsity-enhanced topic model based on VAE
1,021
scitldr
Neural Tangents is a library for working with infinite-width neural networks. It provides a high-level API for specifying complex and hierarchical neural network architectures. These networks can then be trained and evaluated either at finite-width as usual, or in their infinite-width limit. For the infinite-width netw...
Keras for infinite neural networks.
1,022
scitldr
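A minimal usage sketch of the library's stax-style API, based on my reading of its public interface (treat the exact function names and signatures as assumptions):

```python
import numpy as np
import neural_tangents as nt
from neural_tangents import stax

# Architecture definition yields (init_fn, apply_fn, kernel_fn).
init_fn, apply_fn, kernel_fn = stax.serial(
    stax.Dense(512), stax.Relu(), stax.Dense(1))

x_train = np.random.randn(20, 10)
y_train = np.random.randn(20, 1)
x_test = np.random.randn(5, 10)

# Closed-form predictions of the infinite-width network trained to
# convergence with gradient descent on MSE (the NTK regime).
predict_fn = nt.predict.gradient_descent_mse_ensemble(
    kernel_fn, x_train, y_train)
y_test_mean = predict_fn(x_test=x_test, get='ntk')
```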
Symbolic logic allows practitioners to build systems that perform rule-based reasoning which is interpretable and which can easily be augmented with prior knowledge. However, such systems are traditionally difficult to apply to problems involving natural language due to the large linguistic variability of language. Cur...
We introduce NLProlog, a system that performs rule-based reasoning on natural language by leveraging pretrained sentence embeddings and fine-tuning with Evolution Strategies, and apply it to two multi-hop Question Answering tasks.
1,023
scitldr
Training labels are expensive to obtain and may be of varying quality, as some may be from trusted expert labelers while others might be from heuristics or other sources of weak supervision such as crowd-sourcing. This creates a fundamental quality-versus-quantity trade-off in the learning process. Do we learn from the...
We propose Fidelity-weighted Learning, a semi-supervised teacher-student approach for training neural networks using weakly-labeled data.
1,024
scitldr
This paper is focused on investigating and demystifying an intriguing robustness phenomenon in over-parameterized neural network training. In particular we provide empirical and theoretical evidence that first order methods such as gradient descent are provably robust to noise/corruption on a constant fraction of the la...
We prove that gradient descent is robust to label corruption despite over-parameterization under a rich dataset model.
1,025
scitldr
Weight pruning has been introduced as an efficient model compression technique. Even though pruning removes a significant amount of weights in a network, the reduction in memory requirements has been limited, since conventional sparse matrix formats require a significant amount of memory to store index-related information. Moreover, com...
We present a new weight encoding scheme which enables high compression ratio and fast sparse-to-dense matrix conversion.
1,026
scitldr
The use of deep learning models as priors for compressive sensing tasks presents new potential for inexpensive seismic data acquisition. A Wasserstein generative adversarial network is appropriately designed based on a generative adversarial network architecture and trained on several historical surveys, capable ...
Improved a GAN-based pixel inpainting network for compressed seismic image recovery and proposed a non-uniform sampling survey recommendation, which can be easily applied to medical and other domains for compressive sensing techniques.
1,027
scitldr
In recent years we have seen fast progress on a number of benchmark problems in AI, with modern methods achieving near or super human performance in Go, Poker and Dota. One common aspect of all of these challenges is that they are by design adversarial or, technically speaking, zero-sum. In contrast to these settings, ...
We develop Simplified Action Decoder, a simple MARL algorithm that beats previous SOTA on Hanabi by a big margin across 2- to 5-player games.
1,028
scitldr
We present an end-to-end trainable approach for optical character recognition (OCR) on printed documents. It is based on predicting a two-dimensional character grid ('chargrid') representation of a document image as a semantic segmentation task. To identify individual character instances from the chargrid, we regard ch...
End-to-end trainable Optical Character Recognition on printed documents; we achieve state-of-the-art results, beating Tesseract4 on benchmark datasets both in terms of accuracy and runtime, using a purely computer vision based approach.
1,029
scitldr
We propose NovoGrad, an adaptive stochastic gradient descent method with layer-wise gradient normalization and decoupled weight decay. In our experiments on neural networks for image classification, speech recognition, machine translation, and language modeling, it performs on par or better than well tuned SGD with mom...
NovoGrad - an adaptive SGD method with layer-wise gradient normalization and decoupled weight decay.
1,030
scitldr
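A sketch of the update this entry describes, assuming PyTorch; `novograd_step` and its state handling are illustrative, not NVIDIA's implementation:

```python
import torch

@torch.no_grad()
def novograd_step(params, state, lr=0.01, beta1=0.95, beta2=0.98,
                  wd=1e-3, eps=1e-8):
    """One NovoGrad-style step: per-layer second moment of the gradient
    norm, normalized gradient, and decoupled weight decay."""
    for p in params:
        if p.grad is None:
            continue
        g = p.grad
        gnorm2 = g.pow(2).sum()                 # per-layer gradient norm^2
        m, v = state.get(p, (torch.zeros_like(p), None))
        v = gnorm2 if v is None else beta2 * v + (1 - beta2) * gnorm2
        m = beta1 * m + (g / (v.sqrt() + eps) + wd * p)  # decoupled decay
        state[p] = (m, v)
        p.add_(m, alpha=-lr)
```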
Most generative models of audio directly generate samples in one of two domains: time or frequency. While sufficient to express any signal, these representations are inefficient, as they do not utilize existing knowledge of how sound is generated and perceived. A third approach (vocoders/synthesizers) successfully inco...
Better audio synthesis by combining interpretable DSP with end-to-end learning.
1,031
scitldr
Spectral Graph Convolutional Networks (GCNs) are a generalization of convolutional networks to learning on graph-structured data. Applications of spectral GCNs have been successful, but limited to a few problems where the graph is fixed, such as shape correspondence and node classification. In this work, we address thi...
A novel approach to graph classification based on spectral graph convolutional networks and its extension to multigraphs with learnable relations and hierarchical structure. We show state-of-the-art results on chemical, social and image datasets.
1,032
scitldr
Deep neural networks have been recently demonstrated to be vulnerable to backdoor attacks. Specifically, by altering a small set of training examples, an adversary is able to install a backdoor that can be used during inference to fully control the model’s behavior. While the attack is very powerful, it crucially relie...
We show how to successfully perform backdoor attacks without changing training labels.
1,033
scitldr
Despite their ability to memorize large datasets, deep neural networks often achieve good generalization performance. However, the differences between the learned solutions of networks which generalize and those which do not remain unclear. Additionally, the tuning properties of single directions (defined as the activa...
We find that deep networks which generalize poorly are more reliant on single directions than those that generalize well, and evaluate the impact of dropout and batch normalization, as well as class selectivity on single direction reliance.
1,034
scitldr
Typical amortized inference in variational autoencoders is specialized for a single probabilistic query. Here we propose an inference network architecture that generalizes to unseen probabilistic queries. Instead of an encoder-decoder pair, we can train a single inference network directly from data, using a cost functi...
Instead of learning the parameters of a graphical model from data, learn an inference network that can answer the same probabilistic queries.
1,035
scitldr
A plethora of computer vision tasks, such as optical flow and image alignment, can be formulated as non-linear optimization problems. Before the resurgence of deep learning, the dominant family for solving such optimization problems was numerical optimization, e.g., Gauss-Newton (GN). More recently, several attempts wer...
We demonstrate how residual blocks can be viewed as Gauss-Newton steps; we propose a new residual block that exploits second order information.
1,036
scitldr
In competitive situations, agents may take actions to achieve their goals that unwittingly facilitate an opponent’s goals. We consider a domain where three agents operate: a user (human), an attacker (human or a software) agent and an observer (a software) agent. The user and the attacker compete to achieve different g...
We introduce a machine learning model that uses domain-independent features to estimate the criticality of the current state to cause a known undesirable state.
1,037
scitldr
Deep generative models can emulate the perceptual properties of complex image datasets, providing a latent representation of the data. However, manipulating such representation to perform meaningful and controllable transformations in the data space remains challenging without some form of supervision. While previous w...
We develop a framework to find modular internal representations in generative models and manipulate them to generate counterfactual examples.
1,038
scitldr
Catastrophic forgetting poses a grand challenge for continual learning systems, which prevents neural networks from protecting old knowledge while learning new tasks sequentially. We propose a Differentiable Hebbian Plasticity (DHP) Softmax layer which adds a fast learning plastic component to the slow weights of the s...
Hebbian plastic weights can behave as a compressed episodic memory storage in neural networks; improving their ability to alleviate catastrophic forgetting in continual learning.
1,039
scitldr
While real brain networks exhibit functional modularity, we investigate whether functional modularity also exists in Deep Neural Networks (DNN) trained through back-propagation. Under the hypothesis that DNN are also organized in task-specific modules, in this paper we seek to dissect a hidden layer into disjoint gro...
We develop an approach to parcellate a hidden layer in DNN into functionally related groups, by applying spectral coclustering on the attribution scores of hidden neurons.
1,040
scitldr
Power-efficient CNN Domain Specific Accelerator (CNN-DSA) chips are currently available for wide use in mobile devices. These chips are mainly used in computer vision applications. However, the recent work of Super Characters method for text classification and sentiment analysis tasks using two-dimensional CNN models h...
Deploy text classification and sentiment analysis applications for English and Chinese on a 300mW CNN accelerator chip for on-device application scenarios.
1,041
scitldr
Comparing the inferences of diverse candidate models is an essential part of model checking and escaping local optima. To enable efficient comparison, we introduce an amortized variational inference framework that can perform fast and reliable posterior estimation across models of the same architecture. Our Any Paramet...
We develop VAEs where the encoder takes a model parameter vector as input, so we can do rapid inference for many models
1,042
scitldr
The ability to transfer knowledge to novel environments and tasks is a sensible desideratum for general learning agents. Despite the apparent promise, transfer in RL is still an open and little-explored research area. In this paper, we take a brand-new perspective about transfer: we suggest that the ability to assign ...
Secret is a transfer method for RL based on the transfer of credit assignment.
1,043
scitldr
Recent advances in Neural Variational Inference allowed for a renaissance in latent variable models in a variety of domains involving high-dimensional data. In this paper, we introduce two generic Variational Inference frameworks for generative models of Knowledge Graphs; Latent Fact Model and Latent Information Model....
Working toward generative knowledge graph models to better estimate predictive uncertainty in knowledge inference.
1,044
scitldr
We present a deep generative model, named Monge-Ampère flow, which builds on continuous-time gradient flow arising from the Monge-Ampère equation in optimal transport theory. The generative map from the latent space to the data space follows a dynamical system, where a learnable potential function guides a compress...
A gradient flow based dynamical system for invertible generative modeling
1,045
scitldr
Graph Neural Networks (GNNs) are a class of deep models that operate on data with arbitrary topology and order-invariant structure represented as graphs. We introduce an efficient memory layer for GNNs that can learn to jointly perform graph representation learning and graph pooling. We also introduce two new networks...
We introduce an efficient memory layer that can learn representation and coarsen input graphs simultaneously without relying on message passing.
1,046
scitldr
Deep neural networks require extensive computing resources, and cannot be efficiently applied to embedded devices such as mobile phones, which seriously limits their applicability. To address this problem, we propose a novel encoding scheme by using {-1,+1} to decompose quantized neural networks (QNNs) into multi-bran...
A novel encoding scheme of using {-1, +1} to decompose QNNs into multi-branch binary networks, in which we use bitwise operations (xnor and bitcount) to achieve model compression, computational acceleration and resource savings.
1,047
scitldr
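The arithmetic behind the xnor/bitcount trick mentioned above: with {-1,+1} entries packed into bits (1 for +1, 0 for -1), a dot product reduces to counting agreeing bits. A self-contained sketch in plain Python with an illustrative packing convention (MSB-first):

```python
def binary_dot(a_bits: int, b_bits: int, n: int) -> int:
    """Dot product of two length-n {-1,+1} vectors packed as ints."""
    xnor = ~(a_bits ^ b_bits) & ((1 << n) - 1)  # 1 where signs agree
    agree = bin(xnor).count("1")
    return 2 * agree - n  # agreements minus disagreements

# (+1,-1,+1,+1) . (+1,+1,+1,-1) = 1 - 1 + 1 - 1 = 0
assert binary_dot(0b1011, 0b1110, 4) == 0
```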
We propose a novel method for incorporating conditional information into a generative adversarial network (GAN) for structured prediction tasks. This method is based on fusing features from the generated and conditional information in feature space and allows the discriminator to better capture higher-order statistics ...
We propose a novel way to incorporate conditional image information into the discriminator of GANs using feature fusion that can be used for structured prediction tasks.
1,048
scitldr
Intelligent creatures can explore their environments and learn useful skills without supervision. In this paper, we propose "Diversity is All You Need" (DIAYN), a method for learning useful skills without a reward function. Our proposed method learns skills by maximizing an information theoretic objective using a maxi...
We propose an algorithm for learning useful skills without a reward function, and show how these skills can be used to solve downstream tasks.
1,049
scitldr
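The information-theoretic objective referenced above, as given in the DIAYN paper: skills z should determine states while actions stay maximally random given the state,

```latex
\mathcal{F}(\theta) = I(S;Z) + \mathcal{H}[A \mid S] - I(A;Z \mid S),
```

which is lower-bounded with a learned skill discriminator q_\phi(z \mid s), yielding the per-step pseudo-reward

```latex
r_z(s) = \log q_\phi(z \mid s) - \log p(z).
```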
Lexical ambiguity, i.e., the presence of two or more meanings for a single word, is an inherent and challenging problem for machine translation systems. Even though the use of recurrent neural networks and attention mechanisms are expected to solve this problem, machine translation systems are not always able to correc...
The paper solves the lexical ambiguity problem caused by homonyms in neural machine translation using BERT.
1,050
scitldr
This paper focuses on the synthetic generation of human mobility data in urban areas. We present a novel and scalable application of Generative Adversarial Networks (GANs) for modeling and generating human mobility data. We leverage actual ride requests from ride sharing/hailing services from four major cities in the U...
This paper focuses on the synthetic generation of human mobility data in urban areas using GANs.
1,051
scitldr
While deep neural networks are a highly successful model class, their large memory footprint puts considerable strain on energy consumption, communication bandwidth, and storage requirements. Consequently, model size reduction has become an utmost goal in deep learning. Following the classical bits-back argument, we en...
This paper proposes an effective coding scheme for neural networks that encodes a random set of weights from a variational distribution.
1,052
scitldr
Deep neural networks provide state-of-the-art performance for image denoising, where the goal is to recover a near noise-free image from a noisy image. The underlying principle is that neural networks trained on large datasets have empirically been shown to be able to generate natural images well from a low-dimensional...
By analyzing an algorithm minimizing a non-convex loss, we show that all but a small fraction of noise can be removed from an image using a deep neural network based generative prior.
1,053
scitldr
Deep learning yields great results across many fields, from speech recognition, image classification, to translation. But for each problem, getting a deep model to work well involves research into the architecture and a long period of tuning. We present a single model that yields good results on a number of problems spanning multiple ...
Large scale multi-task architecture solves ImageNet and translation together and shows transfer learning.
1,054
scitldr
Machine learning algorithms for controlling devices will need to learn quickly, with few trials. Such a goal can be attained with concepts borrowed from continental philosophy and formalized using tools from the mathematical theory of categories. Illustrations of this approach are presented on a cyberphysical system: t...
Continental-philosophy-inspired approach to learn with few data.
1,055
scitldr
Audio signals are sampled at high temporal resolutions, and learning to synthesize audio requires capturing structure across a range of timescales. Generative adversarial networks (GANs) have seen wide success at generating images that are both locally and globally coherent, but they have seen little application to aud...
Learning to synthesize raw waveform audio with GANs
1,056
scitldr
The difficulty of obtaining sufficient labeled data for supervised learning has motivated domain adaptation, in which a classifier is trained in one domain, the source domain, but operates in another, the target domain. Reducing domain discrepancy has improved the performance, but it is hampered by the embedded features that d...
A novel domain adaptation method to align manifolds from source and target domains using label propagation for better accuracy.
1,057
scitldr
We propose a new method for training neural networks online in a bandit setting. Similar to prior work, we model the uncertainty only in the last layer of the network, treating the rest of the network as a feature extractor. This allows us to successfully balance between exploration and exploitation due to the efficien...
This paper proposes a new method for neural network learning in online bandit settings by marginalizing over the last layer
1,058
scitldr
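A minimal sketch of the last-layer strategy described here: treat the network body as a fixed feature map phi(x) and run Bayesian linear regression with Thompson sampling on top (illustrative, not the paper's exact algorithm):

```python
import numpy as np

class LastLayerThompson:
    """Bayesian linear regression over last-layer features phi(x)."""
    def __init__(self, dim: int, sigma2: float = 1.0, prior_var: float = 1.0):
        self.A = np.eye(dim) / prior_var  # posterior precision
        self.b = np.zeros(dim)
        self.sigma2 = sigma2

    def sample_weights(self) -> np.ndarray:
        cov = np.linalg.inv(self.A)
        mean = cov @ self.b
        return np.random.multivariate_normal(mean, self.sigma2 * cov)

    def update(self, phi: np.ndarray, reward: float) -> None:
        self.A += np.outer(phi, phi) / self.sigma2
        self.b += phi * reward / self.sigma2
```

Per round: sample w, pick the arm whose features maximize w^T phi, observe the reward, update. The sampled posterior drives exploration; the fixed features keep the update cheap.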
Despite a growing literature on explaining neural networks, no consensus has been reached on how to explain a neural network decision or how to evaluate an explanation. Our contributions in this paper are twofold. First, we investigate schemes to combine explanation methods and reduce model uncertainty to obtain a si...
We show in theory and in practice that combining multiple explanation methods for DNN benefits the explanation.
1,059
scitldr
We present SOSELETO (SOurce SELEction for Target Optimization), a new method for exploiting a source dataset to solve a classification problem on a target dataset. SOSELETO is based on the following simple intuition: some source examples are more informative than others for the target problem. To capture this intuition...
Learning with limited training data by exploiting "helpful" instances from a rich data source.
1,060
scitldr
We present a novel approach for training neural abstract architectures which incorporates (partial) supervision over the machine’s interpretable components. To cleanly capture the set of neural architectures to which our method applies, we introduce the concept of a differential neural computational machine (∂NCM) an...
We increase the amount of trace supervision possible to utilize when training fully differentiable neural machine architectures.
1,061
scitldr
Bayesian learning of model parameters in neural networks is important in scenarios where estimates with well-calibrated uncertainty are important. In this paper, we propose Bayesian quantized networks (BQNs), quantized neural networks (QNNs) for which we learn a posterior distribution over their discrete parameters. We...
We propose Bayesian quantized networks, for which we learn a posterior distribution over their quantized parameters.
1,062
scitldr
Injecting adversarial examples during training, known as adversarial training, can improve robustness against one-step attacks, but not for unknown iterative attacks. To address this challenge, we first show iteratively generated adversarial images easily transfer between networks trained with the same strategy. Inspir...
Cascade adversarial training + low level similarity learning improve robustness against both white box and black box attacks.
1,063
scitldr
Whereas it is believed that techniques such as Adam, batch normalization and, more recently, SELU nonlinearities "solve" the exploding gradient problem, we show that this is not the case and that in a range of popular MLP architectures, exploding gradients exist and that they limit the depth to which networks can be ...
We show that in contrast to popular wisdom, the exploding gradient problem has not been solved and that it limits the depth to which MLPs can be effectively trained. We show why gradients explode and how ResNet handles them.
1,064
scitldr
In this paper, we are interested in two seemingly different concepts: adversarial training and generative adversarial networks (GANs). In particular, we study how these techniques can improve each other. To this end, we analyze the limitation of adversarial training as a defense method, starting from questi...
We found adversarial training not only speeds up the GAN training but also increases the image quality
1,065
scitldr
We present a method to train self-binarizing neural networks, that is, networks that evolve their weights and activations during training to become binary. To obtain similar binary networks, existing methods rely on the sign activation function. This function, however, has no gradients for non-zero values, which makes ...
A method to binarize both weights and activations of a deep neural network that is efficient in computation and memory usage and performs better than the state-of-the-art.
1,066
scitldr
In many applications, the training data for a machine learning task is partitioned across multiple nodes, and aggregating this data may be infeasible due to storage, communication, or privacy constraints. In this work, we present Good-Enough Model Spaces (GEMS), a novel framework for learning a global satisficing (i.e....
We present Good-Enough Model Spaces (GEMS), a framework for learning an aggregate model over distributed nodes within a small number of communication rounds.
1,067
scitldr
We propose the Wasserstein Auto-Encoder (WAE), a new algorithm for building a generative model of the data distribution. WAE minimizes a penalized form of the Wasserstein distance between the model distribution and the target distribution, which leads to a different regularizer than the one used by the Variational Aut...
We propose a new auto-encoder based on the Wasserstein distance, which improves on the sampling properties of VAE.
1,068
scitldr
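The penalized objective this entry refers to, as given in the WAE paper (c is a reconstruction cost, Q_Z the aggregate posterior, and D_Z a divergence penalty, instantiated with either a GAN or an MMD term):

```latex
D_{\mathrm{WAE}}(P_X, P_G)
  = \inf_{Q(Z \mid X)}
    \mathbb{E}_{P_X}\, \mathbb{E}_{Q(Z \mid X)}\big[ c(X, G(Z)) \big]
    + \lambda\, \mathcal{D}_Z(Q_Z, P_Z).
```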
In this paper, we aim to develop a novel mechanism to preserve differential privacy (DP) in adversarial learning for deep neural networks, with provable robustness to adversarial examples. We leverage the sequential composition theory in DP, to establish a new connection between DP preservation and provable robustness....
Preserving Differential Privacy in Adversarial Learning with Provable Robustness to Adversarial Examples
1,069
scitldr
In high-dimensional reinforcement learning settings with sparse rewards, performing effective exploration to even obtain any reward signal is an open challenge. While model-based approaches hold promise of better exploration via planning, it is extremely difficult to learn a reliable enough Markov Decision Process (MDP...
We automatically construct and explore a small abstract Markov Decision Process, enabling us to achieve state-of-the-art results on Montezuma's Revenge, Pitfall!, and Private Eye by a significant margin.
1,070
scitldr
Most deep reinforcement learning (RL) systems are not able to learn effectively from off-policy data, especially if they cannot explore online in the environment. This is a critical shortcoming for applying RL to real-world problems where collecting data is expensive, and models must be tested offline before being depl...
We show that KL-control from a pre-trained prior can allow RL models to learn from a static batch of collected data, without the ability to explore online in the environment.
1,071
scitldr
Transfer and adaptation to new unknown environmental dynamics is a key challenge for reinforcement learning (RL). An even greater challenge is performing near-optimally in a single attempt at test time, possibly without access to dense rewards, which is not addressed by current methods that require multiple experience ...
Single episode policy transfer in a family of environments with related dynamics, via optimized probing for rapid inference of latent variables and immediate execution of a universal policy.
1,072
scitldr
Domain specific goal-oriented dialogue systems typically require modeling three types of inputs, viz., (i) the knowledge-base associated with the domain, (ii) the history of the conversation, which is a sequence of utterances and (iii) the current utterance for which the response needs to be generated. While modeling t...
We propose a Graph Convolutional Network based encoder-decoder model with sequential attention for goal-oriented dialogue systems.
1,073
scitldr
Effectively capturing graph node sequences in the form of vector embeddings is critical to many applications. We achieve this by (i) first learning vector embeddings of single graph nodes and (ii) then composing them to compactly represent node sequences. Specifically, we propose SENSE-S (Semantically Enhanced Node Seq...
Node sequence embedding mechanism that captures both graph and text properties.
1,074
scitldr
Recent evidence shows that convolutional neural networks (CNNs) are biased towards textures so that CNNs are non-robust to adversarial perturbations over textures, while traditional robust visual features like SIFT (scale-invariant feature transforms) are designed to be robust across a substantial range of affine disto...
This paper aims to leverage good properties of robust visual features like SIFT to renovate CNN architectures towards better accuracy and robustness.
1,075
scitldr
Compressed representations generalize better, which may be crucial when learning from limited or noisy labeled data. The Information Bottleneck (IB) method provides an insightful and principled approach for balancing compression and prediction in representation learning. The IB objective I(X;Z) − βI(Y;Z) employs a ...
Theory predicts the phase transition between unlearnable and learnable values of beta for the Information Bottleneck objective
1,076
scitldr
We consider a new class of data poisoning attacks on neural networks, in which the attacker takes control of a model by making small perturbations to a subset of its training data. We formulate the task of finding poisons as a bi-level optimization problem, which can be solved using methods borrowed from the met...
Generate corrupted training images that are imperceptible yet change CNN behavior on a target during any new training.
1,077
scitldr
We give a new algorithm for learning a two-layer neural network under a very general class of input distributions. Assuming there is a ground-truth two-layer network y = Aσ(Wx) + ξ, where A, W are weight matrices, ξ represents noise, and the number of neurons in the hidden layer is no larger than the input or...
We give an algorithm for learning a two-layer neural network with symmetric input distribution.
1,078
scitldr
Teachers intentionally pick the most informative examples to show their students. However, if the teacher and student are neural networks, the examples that the teacher network learns to give, although effective at teaching the student, are typically uninterpretable. We show that training the student and teacher iterat...
We show that training a student and teacher network iteratively, rather than jointly, can produce emergent, interpretable teaching strategies.
1,079
scitldr
Stochastic gradient descent (SGD), which trades off noisy gradient updates for computational efficiency, is the de-facto optimization algorithm to solve large-scale machine learning problems. SGD can make rapid learning progress by performing updates using subsampled training data, but the noisy updates also lead to sl...
Non-asymptotic analysis of SGD and SVRG, showing the strength of each algorithm in convergence speed and computational cost, in both under-parametrized and over-parametrized settings.
1,080
scitldr
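For reference, the SVRG side of the comparison reduces gradient noise with a periodic full-gradient snapshot. A minimal numpy sketch, where `grad_i(w, i)` is an assumed per-sample gradient oracle:

```python
import numpy as np

def svrg(w, grad_i, n, lr=0.1, epochs=10, inner_steps=None):
    """SVRG sketch: variance-reduced stochastic gradient steps."""
    inner_steps = inner_steps or n
    for _ in range(epochs):
        w_snap = w.copy()
        # full-gradient snapshot, the expensive but variance-killing part
        mu = np.mean([grad_i(w_snap, i) for i in range(n)], axis=0)
        for _ in range(inner_steps):
            i = np.random.randint(n)
            w = w - lr * (grad_i(w, i) - grad_i(w_snap, i) + mu)
    return w
```

The corrected direction is unbiased and its variance shrinks as w approaches the snapshot, which is exactly the convergence-speed versus per-step-cost trade-off the entry says is analyzed against SGD.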
Non-autoregressive machine translation (NAT) systems predict a sequence of output tokens in parallel, achieving substantial improvements in generation speed compared to autoregressive models. Existing NAT models usually rely on the technique of knowledge distillation, which creates the training data from a pretrained a...
We systematically examine why knowledge distillation is crucial to the training of non-autoregressive translation (NAT) models, and propose methods to further improve the distilled data to best match the capacity of an NAT model.
1,081
scitldr
Owing to their ability to both effectively integrate information over long time horizons and scale to massive amounts of data, self-attention architectures have recently shown breakthrough success in natural language processing (NLP), achieving state-of-the-art in domains such as language modeling and machine translati...
We succeed in stabilizing transformers for training in the RL setting and demonstrate a large improvement over LSTMs on DMLab-30, matching an external memory architecture.
1,082
scitldr
Knowledge distillation is an effective model compression technique in which a smaller model is trained to mimic a larger pretrained model. However in order to make these compact models suitable for real world deployment, not only do we need to reduce the performance gap but also we need to make them more robust to comm...
Inspired by trial-to-trial variability in the brain that can result from multiple noise sources, we introduce variability through noise in the knowledge distillation framework and study its effect on generalization and robustness.
1,083
scitldr
We introduce a novel end-to-end approach for learning to cluster in the absence of labeled examples. Our clustering objective is based on optimizing normalized cuts, a criterion which measures both intra-cluster similarity as well as inter-cluster dissimilarity. We define a differentiable loss function equivalent to th...
We introduce a novel end-to-end approach for learning to cluster in the absence of labeled examples. We define a differentiable loss function equivalent to the expected normalized cuts.
1,084
scitldr
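One way to see the differentiable objective described above: with soft cluster assignments Y (rows are per-node probabilities over k clusters) and an affinity matrix W, the normalized-cut relaxation below is fully differentiable (a sketch consistent with the entry, not the authors' exact loss):

```python
import torch

def soft_normalized_cut(Y: torch.Tensor, W: torch.Tensor,
                        eps: float = 1e-8) -> torch.Tensor:
    """Y: (n, k) soft assignments; W: (n, n) symmetric affinities."""
    d = W.sum(dim=1)                       # node degrees
    assoc = (Y * (W @ Y)).sum(dim=0)       # within-cluster association
    vol = (Y * d.unsqueeze(1)).sum(dim=0)  # cluster volumes
    cut = vol - assoc                      # mass leaving each cluster
    return (cut / (vol + eps)).sum()
```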
We introduce the largest (among publicly available) dataset for Cyrillic Handwritten Text Recognition and the first dataset for Cyrillic Text in the Wild Recognition, as well as suggest a method for recognizing Cyrillic Handwritten Text and Text in the Wild. Based on this approach, we develop a system that can reduce t...
We introduce several datasets for Cyrillic OCR and a method for its recognition
1,085
scitldr
Most deep learning for NLP represents each word with a single point or single-mode region in semantic space, while the existing multi-mode word embeddings cannot represent longer word sequences like phrases or sentences. We introduce a phrase representation (also applicable to sentences) where each phrase has a distinc...
We propose an unsupervised way to learn multiple embeddings for sentences and phrases
1,086
scitldr
We present a meta-learning approach for adaptive text-to-speech (TTS) with few data. During training, we learn a multi-speaker model using a shared conditional WaveNet core and independent learned embeddings for each speaker. The aim of training is not to produce a neural network with fixed weights, which is then deplo...
Sample efficient algorithms to adapt a text-to-speech model to a new voice style with state-of-the-art performance.
1,087
scitldr
This paper introduces a probabilistic framework for k-shot image classification. The goal is to generalise from an initial large-scale classification task to a separate task comprising new classes and small numbers of examples. The new approach not only leverages the feature-based representation learned by a neural net...
This paper introduces a probabilistic framework for k-shot image classification that achieves state-of-the-art results
1,088
scitldr
Building on the recent successes of distributed training of RL agents, in this paper we investigate the training of RNN-based RL agents from distributed prioritized experience replay. We study the effects of parameter lag resulting in representational drift and recurrent state staleness and empirically derive an improved tra...
Investigation on combining recurrent neural networks and experience replay leading to state-of-the-art agent on both Atari-57 and DMLab-30 using single set of hyper-parameters.
1,089
scitldr
The current state-of-the-art end-to-end semantic role labeling (SRL) model is a deep neural network architecture with no explicit linguistic features. However, prior work has shown that gold syntax trees can dramatically improve SRL, suggesting that neural network models could see great improvements from explicit model...
Our combination of multi-task learning and self-attention, training the model to attend to parents in a syntactic parse tree, achieves state-of-the-art CoNLL-2005 and CoNLL-2012 SRL results for models using predicted predicates.
1,090
scitldr
Bottleneck structures with identity (e.g., residual) connection are now emerging popular paradigms for designing deep convolutional neural networks (CNN), for processing large-scale features efficiently. In this paper, we focus on the information-preserving nature of identity connection and utilize this to enable a con...
We propose a new module that improves any ResNet-like architectures by enforcing "channel selective" behavior to convolutional layers
1,091
scitldr
Predictive models that generalize well under distributional shift are often desirable and sometimes crucial to machine learning applications. One example is the estimation of treatment effects from observational data, where a subtask is to predict the effect of a treatment on subjects that are systematically different ...
A theory and algorithmic framework for prediction under distributional shift, including causal effect estimation and domain adaptation
1,092
scitldr
Deep ensembles have been empirically shown to be a promising approach for improving accuracy, uncertainty and out-of-distribution robustness of deep learning models. While deep ensembles were theoretically motivated by the bootstrap, non-bootstrap ensembles trained with just random initialization also perform well in p...
We study deep ensembles through the lens of loss landscape and the space of predictions, demonstrating that the decorrelation power of random initializations is unmatched by subspace sampling that only explores a single mode.
1,093
scitldr
Existing deep learning approaches for learning visual features tend to extract more information than what is required for the task at hand. From a privacy preservation perspective, the input visual information is not protected from the model; enabling the model to become more intelligent than it is trained to be. Exist...
Can we trust our deep learning models? A framework to measure and improve a deep learning model's trust during training.
1,094
scitldr
Learning a policy using only observational data is challenging because the distribution of states it induces at execution time may differ from the distribution observed during training. In this work, we propose to train a policy while explicitly penalizing the mismatch between these two distributions over a fixed time ...
A model-based RL approach which uses a differentiable uncertainty penalty to learn driving policies from purely observational data.
1,095
scitldr
Dynamical system models (including RNNs) often lack the ability to adapt the sequence generation or prediction to a given context, limiting their real-world application. In this paper we show that hierarchical multi-task dynamical systems (MTDSs) provide direct user control over sequence generation, via use of a latent...
Tailoring predictions from sequence models (such as LDSs and RNNs) via an explicit latent code.
1,096
scitldr
By injecting adversarial examples into training data, adversarial training is promising for improving the robustness of deep learning models. However, most existing adversarial training approaches are based on a specific type of adversarial attack. It may not provide sufficiently representative samples from the adversa...
We propose a novel adversarial training with domain adaptation method that significantly improves the generalization ability on adversarial examples from different attacks.
1,097
scitldr
Character-level language modeling is an essential but challenging task in Natural Language Processing. Prior works have focused on identifying long-term dependencies between characters and have built deeper and wider networks for better performance. However, their models require substantial computational resources, whi...
This paper proposes a novel lightweight Transformer for character-level language modeling, utilizing group-wise operations.
1,098
scitldr
Domain adaptation tackles the problem of transferring knowledge from a label-rich source domain to an unlabeled or label-scarce target domain. Recently domain-adversarial training (DAT) has shown promising capacity to learn a domain-invariant feature space by reversing the gradient propagation of a domain classifier. H...
A stable domain-adversarial training approach for robust and comprehensive domain adaptation
1,099
scitldr