query | pos | idx | task_name
|---|---|---|---|
Due to a resource-constrained environment, network compression has become an important part of deep neural networks research. In this paper, we propose a new compression method, Inter-Layer Weight Prediction (ILWP), and a quantization method that quantizes the predicted residuals between the weights in all convolution lay... | We propose a new compression method, Inter-Layer Weight Prediction (ILWP), and a quantization method that quantizes the predicted residuals between the weights in convolution layers. | 1,800 | scitldr
There exists a plethora of techniques for inducing structured sparsity in parametric models during the optimization process, with the final goal of resource-efficient inference. However, to the best of our knowledge, none target a specific number of floating-point operations (FLOPs) as part of a single end-to-end optim... | We extend a state-of-the-art technique to directly incorporate FLOPs as part of the optimization objective, and we show that, given a desired FLOPs requirement, different neural networks are successfully trained. | 1,801 | scitldr |
Unpaired image-to-image translation among category domains has achieved remarkable success in past decades. Recent studies mainly focus on two challenges. For one thing, such translation is inherently multimodal due to variations of domain-specific information (e.g., the domain of house cat has multiple fine-grained su... | Granularity-controlled multi-domain and multimodal image-to-image translation method | 1,802 | scitldr
Recurrent Neural Networks (RNNs) are very successful at solving challenging problems with sequential data. However, this observed efficiency is not yet entirely explained by theory. It is known that a certain class of multiplicative RNNs enjoys the property of depth efficiency --- a shallow network of exponentially lar... | Analysis of expressivity and generality of recurrent neural networks with ReLU nonlinearities using Tensor-Train decomposition. | 1,803 | scitldr
While deep neural networks have proven to be a powerful tool for many recognition and classification tasks, their stability properties are still not well understood. In the past, image classifiers have been shown to be vulnerable to so-called adversarial attacks, which are created by additively perturbing the correctly... | We propose a new, efficient algorithm to construct adversarial examples by means of deformations, rather than additive perturbations. | 1,804 | scitldr |
Adversarial learning methods have been proposed for a wide range of applications, but the training of adversarial models can be notoriously unstable. Effectively balancing the performance of the generator and discriminator is critical, since a discriminator that achieves very high accuracy will produce relatively uninf... | Regularizing adversarial learning with an information bottleneck, applied to imitation learning, inverse reinforcement learning, and generative adversarial networks. | 1,805 | scitldr |
Deploying machine learning systems in the real world requires both high accuracy on clean data and robustness to naturally occurring corruptions. While architectural advances have led to improved accuracy, building robust models remains challenging, involving major changes in training procedure and datasets. Prior work... | Simple augmentation method overcomes robustness/accuracy trade-off observed in literature and opens questions about the effect of training distribution on out-of-distribution generalization. | 1,806 | scitldr |
Offset regression is a standard method for spatial localization in many vision tasks, including human pose estimation, object detection, and instance segmentation. However, if high localization accuracy is crucial for a task, convolutional neural networks with offset regression usually struggle to deliver. This can be ... | We use mixture density networks to do full conditional density estimation for spatial offset regression and apply it to the human pose estimation task. | 1,807 | scitldr
Like language, music can be represented as a sequence of discrete symbols that form a hierarchical syntax, with notes being roughly like characters and motifs of notes like words. Unlike text however, music relies heavily on repetition on multiple timescales to build structure and meaning. The Music Transformer has sho... | Visualizing the differences between regular and relative attention for Music Transformer. | 1,808 | scitldr |
We study the statistical properties of the endpoint of stochastic gradient descent (SGD). We approximate SGD as a stochastic differential equation (SDE) and consider its Boltzmann Gibbs equilibrium distribution under the assumption of isotropic variance in loss gradients. Through this analysis, we find that three fact... | Three factors (batch size, learning rate, gradient noise) change in a predictable way the properties (e.g. sharpness) of minima found by SGD. | 1,809 | scitldr
Although word analogy problems have become a standard tool for evaluating word vectors, little is known about why word vectors are so good at solving these problems. In this paper, I attempt to further our understanding of the subject, by developing a simple, but highly accurate generative approach to solve the word an... | Simple generative approach to solve the word analogy problem which yields insights into word relationships, and the problems with estimating them | 1,810 | scitldr |
Fine-tuning from pre-trained ImageNet models has become the de-facto standard for various computer vision tasks. Current practices for fine-tuning typically involve selecting an ad-hoc choice of hyper-parameters and keeping them fixed to values normally used for training from scratch. This paper re-examines several com... | This paper re-examines several common practices of setting hyper-parameters for fine-tuning. | 1,811 | scitldr |
Transfer learning through fine-tuning a pre-trained neural network with an extremely large dataset, such as ImageNet, can significantly accelerate training while the accuracy is frequently bottlenecked by the limited dataset size of the new target task. To solve the problem, some regularization methods, constraining th... | improving deep transfer learning with regularization using attention-based feature maps | 1,812 | scitldr
Neural models achieved considerable improvement for many natural language processing tasks, but they offer little transparency, and interpretability comes at a cost. In some domains, automated predictions without justifications have limited applicability. Recently, progress has been made regarding single-aspect sentime... | Neural model predicting multi-aspect sentiments and generating a probabilistic multi-dimensional mask simultaneously. Model outperforms strong baselines and generates masks that are: strong feature predictors, meaningful, and interpretable. | 1,813 | scitldr |
Neural sequence generation is commonly approached by using maximum- likelihood (ML) estimation or reinforcement learning (RL). However, it is known that they have their own shortcomings; ML presents training/testing discrepancy, whereas RL suffers from sample inefficiency. We point out that it is difficult to resolve a... | Propose new objective function for neural sequence generation which integrates ML-based and RL-based objective functions. | 1,814 | scitldr |
Capsule Networks have shown encouraging results on \textit{de facto} benchmark computer vision datasets such as MNIST, CIFAR and smallNORB. However, they are yet to be tested on tasks where the entities detected inherently have more complex internal representations and there are very few instances per class to learn from and w... | A pairwise learned capsule network that performs well on face verification tasks given limited labeled data | 1,815 | scitldr
In this work we present a new agent architecture, called Reactor, which combines multiple algorithmic and architectural contributions to produce an agent with higher sample-efficiency than Prioritized Dueling DQN and Categorical DQN, while giving better run-time performance than A3C. Our first contribution is a new p... | Reactor combines multiple algorithmic and architectural contributions to produce an agent with higher sample-efficiency than Prioritized Dueling DQN while giving better run-time performance than A3C. | 1,816 | scitldr
Hierarchical planning, in particular, Hierarchical Task Networks, was proposed as a method to describe plans by decomposition of tasks to sub-tasks until primitive tasks, actions, are obtained. Plan verification assumes a complete plan as input, and the objective is finding a task that decomposes to this plan. In plan ... | The paper describes methods to verify and recognize HTN plans by parsing of attribute grammars. | 1,817 | scitldr |
Neural architecture search (NAS), the task of finding neural architectures automatically, has recently emerged as a promising approach for unveiling better models over human-designed ones. However, most success stories are for vision tasks and have been quite limited for text, except for a small language modeling setup... | We explore neural architecture search for language tasks. Recurrent cell search is challenging for NMT, but attention mechanism search works. The result of attention search on translation is transferable to reading comprehension. | 1,818 | scitldr |
Autoencoders provide a powerful framework for learning compressed representations by encoding all of the information needed to reconstruct a data point in a latent code. In some cases, autoencoders can "interpolate": By decoding the convex combination of the latent codes for two datapoints, the autoencoder can produce ... | We propose a regularizer that improves interpolation in autoencoders and show that it also improves the learned representation for downstream tasks. | 1,819 | scitldr
We consider the problem of generating plausible and diverse video sequences, when we are only given a start and an end frame. This task is also known as inbetweening, and it belongs to the broader area of stochastic video generation, which is generally approached by means of recurrent neural networks (RNN). In this pap... | This paper presents a method for stochastically generating in-between video frames from given key frames, using direct 3D convolutions. | 1,820 | scitldr
Aligning knowledge graphs from different sources or languages, which aims to align both the entity and relation, is critical to a variety of applications such as knowledge graph construction and question answering. Existing methods of knowledge graph alignment usually rely on a large number of aligned knowledge triplet... | This paper studies weakly-supervised knowledge graph alignment with adversarial training frameworks. | 1,821 | scitldr |
Multi-view learning can provide self-supervision when different views are available of the same data. Distributional hypothesis provides another form of useful self-supervision from adjacent sentences which are plentiful in large unlabelled corpora. Motivated by the asymmetry in the two hemispheres of the human brain a... | Multi-view learning improves unsupervised sentence representation learning | 1,822 | scitldr |
There are myriad kinds of segmentation, and ultimately the "right" segmentation of a given scene is in the eye of the annotator. Standard approaches require large amounts of labeled data to learn just one particular kind of segmentation. As a first step towards relieving this annotation burden, we propose the problem ... | We propose a meta-learning approach for guiding visual segmentation tasks from varying amounts of supervision. | 1,823 | scitldr
Training generative adversarial networks requires balancing of delicate adversarial dynamics. Even with careful tuning, training may diverge or end up in a bad equilibrium with dropped modes. In this work, we introduce a new form of latent optimisation inspired by the CS-GAN and show that it improves adversarial dynami... | Latent optimisation improves adversarial training dynamics. We present both theoretical analysis and state-of-the-art image generation with ImageNet 128x128. | 1,824 | scitldr |
In this paper, we study the problem of optimizing a two-layer artificial neural network that best fits a training dataset. We look at this problem in the setting where the number of parameters is greater than the number of sampled points. We show that for a wide class of differentiable activation functions (this class ... | This paper studies theoretical properties of first-order optimal points of two-layer neural networks in the over-parameterized case | 1,825 | scitldr
We introduce the concept of channel aggregation in ConvNet architecture, a novel compact representation of CNN features useful for explicitly modeling the nonlinear channels encoding especially when the new unit is embedded inside of deep architectures for action recognition. The channel aggregation is based on multipl... | An architecture that enables CNNs trained on video sequences to converge rapidly | 1,826 | scitldr
We present a new method for black-box adversarial attack. Unlike previous methods that combined transfer-based and scored-based methods by using the gradient or initialization of a surrogate white-box model, this new method tries to learn a low-dimensional embedding using a pretrained model, and then performs efficient... | We present a new method that combines transfer-based and scored black-box adversarial attack, improving the success rate and query efficiency of black-box adversarial attack across different network architectures. | 1,827 | scitldr |
Deep neural networks (DNNs) are inspired from the human brain and the interconnection between the two has been widely studied in the literature. However, it is still an open question whether DNNs are able to make decisions like the brain. Previous work has demonstrated that DNNs, trained by matching the neural response... | Describe a neuro-AI interface technique to evaluate generative adversarial networks | 1,828 | scitldr |
While recent developments in autonomous vehicle (AV) technology highlight substantial progress, we lack tools for rigorous and scalable testing. Real-world testing, the de facto evaluation environment, places the public in danger, and, due to the rare nature of accidents, will require billions of miles in order to stat... | Using adaptive sampling methods to accelerate rare-event probability evaluation, we estimate the probability of an accident under a base distribution governing standard traffic behavior. | 1,829 | scitldr |
Many tasks in natural language understanding require learning relationships between two sequences for various tasks such as natural language inference, paraphrasing and entailment. These aforementioned tasks are similar in nature, yet they are often modeled individually. Knowledge transfer can be effective for closely ... | A dynamic bagging approach to avoiding negative transfer in neural network few-shot transfer learning | 1,830 | scitldr
Ability to quantify and predict progression of a disease is fundamental for selecting an appropriate treatment. Many clinical metrics cannot be acquired frequently either because of their cost (e.g. MRI, gait analysis) or because they are inconvenient or harmful to a patient (e.g. biopsy, x-ray). In such scenarios, in ... | A novel matrix completion based algorithm to model disease progression with events | 1,831 | scitldr |
Multilingual Neural Machine Translation (NMT) systems are capable of translating between multiple source and target languages within a single system. An important indicator of generalization within these systems is the quality of zero-shot translation - translating between language pairs that the system has never seen ... | Simple similarity constraints on top of multilingual NMT enables high quality translation between unseen language pairs for the first time. | 1,832 | scitldr |
We prove the precise scaling, at finite depth and width, for the mean and variance of the neural tangent kernel (NTK) in a randomly initialized ReLU network. The standard deviation is exponential in the ratio of network depth to width. Thus, even in the limit of infinite overparameterization, the NTK is not determinist... | The neural tangent kernel of a randomly initialized ReLU net has non-trivial fluctuations as long as depth and width are comparable. | 1,833 | scitldr
Most algorithms for representation learning and link prediction in relational data have been designed for static data. However, the data they are applied to usually evolves with time, such as friend graphs in social networks or user interactions with items in recommender systems. This is also the case for knowledge bas... | We propose new tensor decompositions and associated regularizers to obtain state of the art performances on temporal knowledge base completion. | 1,834 | scitldr |
The conventional approach to solving the recommendation problem greedily ranks individual document candidates by prediction scores. However, this method fails to optimize the slate as a whole, and hence, often struggles to capture biases caused by the page layout and document interdependencies. The slate recommendation ... | We used a CVAE-type model structure to learn to directly generate slates/whole pages for recommendation systems. | 1,835 | scitldr
Neural networks for structured data like graphs have been studied extensively in recent years. To date, the bulk of research activity has focused mainly on static graphs. However, most real-world networks are dynamic since their topology tends to change over time. Predicting the evolution of dynamic graphs is a task of... | Combining graph neural networks and the RNN graph generative model, we propose a novel architecture that is able to learn from a sequence of evolving graphs and predict the graph topology evolution for the future timesteps | 1,836 | scitldr |
In knowledge-based question answering, a fundamental problem is to relax the assumption of answerable questions from simple questions to compound questions. Traditional approaches first detect the topic entity mentioned in questions, then traverse the knowledge graph to find relations as a multi-hop path to answers, ... | We propose a learning-to-decompose agent that helps simple-question answerers answer compound questions over a knowledge graph. | 1,837 | scitldr
Energy-based models output unnormalized log-probability values given data samples. Such estimation is essential in a variety of application problems such as sample generation, denoising, sample restoration, outlier detection, Bayesian reasoning, and many more. However, standard maximum likelihood training is computation... | Learned energy-based model with score matching | 1,838 | scitldr
A restricted Boltzmann machine (RBM) learns a probabilistic distribution over its input samples and has numerous uses like dimensionality reduction, classification and generative modeling. Conventional RBMs accept vectorized data that dismisses potentially important structural information in the original tensor (multi-... | Propose a general tensor-based RBM model which compresses the model greatly while keeping a strong expressive capacity | 1,839 | scitldr
Autonomous driving is still considered as an “unsolved problem” given its inherent important variability and that many processes associated with its development like vehicle control and scene recognition remain open issues. Although reinforcement learning algorithms have achieved notable results in games and some robotic manip... | An actor-critic reinforcement learning approach with multi-step returns applied to autonomous driving with the Carla simulator. | 1,840 | scitldr
A fundamental, and still largely unanswered, question in the context of Generative Adversarial Networks (GANs) is whether GANs are actually able to capture the key characteristics of the datasets they are trained on. The current approaches to examining this issue require significant human supervision, such as visual in... | We propose new methods for evaluating and quantifying the quality of synthetic GAN distributions from the perspective of classification tasks | 1,841 | scitldr |
The goal of survival clustering is to map subjects (e.g., users in a social network, patients in a medical study) to $K$ clusters ranging from low-risk to high-risk. Existing survival methods assume the presence of clear \textit{end-of-life} signals or introduce them artificially using a pre-defined timeout. In this pa... | The goal of survival clustering is to map subjects into clusters. Without end-of-life signals, this is a challenging task. To address this task we propose a new loss function by modifying the Kuiper statistics. | 1,842 | scitldr |
Bayesian optimization (BO) is a popular methodology to tune the hyperparameters of expensive black-box functions. Despite its success, standard BO focuses on a single task at a time and is not designed to leverage information from related functions, such as tuning performance metrics of the same algorithm across multip... | We show how using semi-parametric prior estimations can speed up HPO significantly across datasets and metrics. | 1,843 | scitldr |
We propose Pure CapsNets (P-CapsNets) without routing procedures. Specifically, we make three modifications to CapsNets. First, we remove routing procedures from CapsNets based on the observation that the coupling coefficients can be learned implicitly. Second, we replace the convolutional layers in CapsNets to improve... | Routing procedures are not necessary for CapsNets | 1,844 | scitldr |
A recent line of work has studied the statistical properties of neural networks to great success from a {\it mean field theory} perspective, making and verifying very precise predictions of neural network behavior and test time performance. In this paper, we build upon these works to explore two methods for taming th... | By setting the width or the initialization variance of each layer differently, we can actually subdue gradient explosion problems in residual networks (with fully connected layers and no batchnorm). A mathematical theory is developed that not only tells you how to do it, but also surprisingly is able to predict, after ... | 1,845 | scitldr |
We explore the collaborative multi-agent setting where a team of deep reinforcement learning agents attempt to solve a shared task in partially observable environments. In this scenario, learning an effective communication protocol is key. We propose a communication protocol that allows for targeted communication, wher... | Targeted communication in multi-agent cooperative reinforcement learning | 1,846 | scitldr |
It is difficult for beginners at etching latte art to make well-balanced patterns by using two fluids with different viscosities, such as foamed milk and syrup. Even when making etching latte art while watching videos that show the procedure, it is difficult to keep the pattern balanced. Thus well-balanced etching latt... | We have developed an etching latte art support system which projects the making procedure directly onto a cappuccino to help beginners make well-balanced etching latte art. | 1,847 | scitldr
We focus on temporal self-supervision for GAN-based video generation tasks. While adversarial training successfully yields generative models for a variety of areas, temporal relationship in the generated data is much less explored. This is crucial for sequential generation tasks, e.g. video super-resolution and unpaire... | We propose temporal self-supervisions for learning stable temporal functions with GANs. | 1,848 | scitldr |
The scarcity of labeled training data often prohibits the internationalization of NLP models to multiple languages. Cross-lingual understanding has made progress in this area using language universal representations. However, most current approaches focus on the problem as one of aligning language and do not address th... | Semi-supervised Cross-lingual Document Classification | 1,849 | scitldr |
A distinct commonality between HMMs and RNNs is that they both learn hidden representations for sequential data. In addition, it has been noted that the backward computation of the Baum-Welch algorithm for HMMs is a special case of the back-propagation algorithm used for neural networks. Do these observations suggest ... | Are HMMs a special case of RNNs? We investigate a series of architectural transformations between HMMs and RNNs, both through theoretical derivations and empirical hybridization, and provide new insights. | 1,850 | scitldr
We present an information-theoretic framework for understanding trade-offs in unsupervised learning of deep latent-variables models using variational inference. This framework emphasizes the need to consider latent-variable models along two dimensions: the ability to reconstruct inputs (distortion) and the communicatio... | We provide an information theoretic and experimental analysis of state-of-the-art variational autoencoders. | 1,851 | scitldr |
Graph Neural Networks (GNNs) are an effective framework for representation learning of graphs. GNNs follow a neighborhood aggregation scheme, where the representation vector of a node is computed by recursively aggregating and transforming representation vectors of its neighboring nodes. Many GNN variants have been pro... | We develop theoretical foundations for the expressive power of GNNs and design a provably most powerful GNN. | 1,852 | scitldr |
We introduce MTLAB, a new algorithm for learning multiple related tasks with strong theoretical guarantees. Its key idea is to perform learning sequentially over the data of all tasks, without interruptions or restarts at task boundaries. Predictors for individual tasks are derived from this process by an additional on... | A new algorithm for online multi-task learning that learns without restarts at the task borders | 1,853 | scitldr |
Recent work has exhibited the surprising cross-lingual abilities of multilingual BERT (M-BERT) -- surprising since it is trained without any cross-lingual objective and with no aligned data. In this work, we provide a comprehensive study of the contribution of different components in M-BERT to its cross-lingual ability... | Cross-Lingual Ability of Multilingual BERT: An Empirical Study | 1,854 | scitldr |
We analyze the dynamics of training deep ReLU networks and their implications on generalization capability. Using a teacher-student setting, we discovered a novel relationship between the gradient received by hidden student nodes and the activations of teacher nodes for deep ReLU networks. With this relationship and th... | A theoretical framework for deep ReLU network that can explains multiple puzzling phenomena like over-parameterization, implicit regularization, lottery tickets, etc. | 1,855 | scitldr |
State-of-the-art methods for learning cross-lingual word embeddings have relied on bilingual dictionaries or parallel corpora. Recent studies showed that the need for parallel data supervision can be alleviated with character-level information. While these methods showed encouraging results, they are not on par with their sup... | Aligning languages without the Rosetta Stone: with no parallel data, we construct bilingual dictionaries using adversarial training, cross-domain local scaling, and an accurate proxy criterion for cross-validation. | 1,856 | scitldr
Questions that require counting a variety of objects in images remain a major challenge in visual question answering (VQA). The most common approaches to VQA involve either classifying answers based on fixed length representations of both the image and question or summing fractional counts estimated from each section o... | We perform counting for visual question answering; our model produces interpretable outputs by counting directly from detected objects. | 1,857 | scitldr |
Graphs are fundamental data structures required to model many important real-world data, from knowledge graphs, physical and social interactions to molecules and proteins. In this paper, we study the problem of learning generative models of graphs from a dataset of graphs of interest. After learning, these models can b... | We study the graph generation problem and propose a powerful deep generative model capable of generating arbitrary graphs. | 1,858 | scitldr |
We introduce a neural network that represents sentences by composing their words according to induced binary parse trees. We use Tree-LSTM as our composition function, applied along a tree structure found by a fully differentiable natural language chart parser. Our model simultaneously optimises both the composition fu... | Represent sentences by composing them with Tree-LSTMs according to automatically induced parse trees. | 1,859 | scitldr |
Pruning neural networks for wiring length efficiency is considered. Three techniques are proposed and experimentally tested: distance-based regularization, nested-rank pruning, and layer-by-layer bipartite matching. The first two algorithms are used in the training and pruning phases, respectively, and the third is use... | Three new algorithms with ablation studies to prune neural network to optimize for wiring length, as opposed to number of remaining weights. | 1,860 | scitldr |
Given a video and a sentence, the goal of weakly-supervised video moment retrieval is to locate the video segment which is described by the sentence without having access to temporal annotations during training. Instead, a model must learn how to identify the correct segment (i.e. moment) when only being provided with ... | Weakly-Supervised Text-Based Video Moment Retrieval | 1,861 | scitldr |
In machine learning tasks, overfitting frequently crops up when the number of samples of the target domain is insufficient, since the generalization ability of the classifier is poor in this circumstance. To solve this problem, transfer learning utilizes the knowledge of similar domains to improve the robustness of the learner. ... | How to use stacked generalization to improve the performance of existing transfer learning algorithms when limited labeled data is available. | 1,862 | scitldr
Deep learning has achieved astonishing results on many tasks with large amounts of data and generalization within the proximity of training data. For many important real-world applications, these requirements are unfeasible and additional prior knowledge on the task domain is required to overcome the resulting problems. In particula... | This paper introduces a physics prior for Deep Learning and applies the resulting network topology for model-based control. | 1,863 | scitldr
Many challenging prediction problems, from molecular optimization to program synthesis, involve creating complex structured objects as outputs. However, available training data may not be sufficient for a generative model to learn all possible complex transformations. By leveraging the idea that evaluation is easier th... | We improve generative models by proposing a meta-algorithm that filters new training data from the model's outputs. | 1,864 | scitldr |
The Boltzmann distribution is a natural model for many systems, from brains to materials and biomolecules, but is often of limited utility for fitting data because Monte Carlo algorithms are unable to simulate it in available time. This gap between the expressive capabilities and sampling practicalities of energy-based... | We use an unrolled simulator as an end-to-end differentiable model of protein structure and show it can (sometimes) hierarchically generalize to unseen fold topologies. | 1,865 | scitldr |
Progress in understanding how individual animals learn requires high-throughput standardized methods for behavioral training and ways of adapting training. During the course of training with hundreds or thousands of trials, an animal may change its underlying strategy abruptly, and capturing these changes requires real... | Automated mice training for neuroscience with online iterative latent strategy inference for behavior prediction | 1,866 | scitldr |
Recurrent neural networks (RNNs) are a powerful tool for modeling sequential data. Despite their widespread usage, understanding how RNNs solve complex problems remains elusive. Here, we characterize how popular RNN architectures perform document-level sentiment classification. Despite their theoretical capacity to imp... | We analyze recurrent networks trained on sentiment classification, and find that they all exhibit approximate line attractor dynamics when solving this task. | 1,867 | scitldr |
Parallel developments in neuroscience and deep learning have led to mutually productive exchanges, pushing our understanding of real and artificial neural networks in sensory and cognitive systems. However, this interaction between fields is less developed in the study of motor control. In this work, we develop a virtu... | We built a physical simulation of a rodent, trained it to solve a set of tasks, and analyzed the resulting networks. | 1,868 | scitldr |
We present Optimal Transport GAN (OT-GAN), a variant of generative adversarial nets minimizing a new metric measuring the distance between the generator distribution and the data distribution. This metric, which we call mini-batch energy distance, combines optimal transport in primal form with an energy distance define... | An extension of GANs combining optimal transport in primal form with an energy distance defined in an adversarially learned feature space. | 1,869 | scitldr |
We build a virtual agent for learning language in a 2D maze-like world. The agent sees images of the surrounding environment, listens to a virtual teacher, and takes actions to receive rewards. It interactively learns the teacher’s language from scratch based on two language use cases: sentence-directed navigation and ... | Training an agent in a 2D virtual world for grounded language acquisition and generalization. | 1,870 | scitldr |
Reinforcement learning algorithms, though successful, tend to over-fit to training environments, thereby hampering their application to the real-world. This paper proposes $\text{W}\text{R}^{2}\text{L}$ -- a robust reinforcement learning algorithm with significant robust performance on low and high-dimensional control ... | An RL algorithm that learns to be robust to changes in dynamics | 1,871 | scitldr |
Partially observable Markov decision processes (POMDPs) are a natural model for scenarios where one has to deal with incomplete knowledge and random events. Applications include, but are not limited to, robotics and motion planning. However, many relevant properties of POMDPs are either undecidable or very expensive to... | This paper provides a game-based abstraction scheme to compute provably sound policies for POMDPs. | 1,872 | scitldr |
In this paper we approach two relevant deep learning topics: i) tackling of graph structured input data and ii) a better understanding and analysis of deep networks and related learning algorithms. With this in mind we focus on the topological classification of reachability in a particular subset of planar graphs (Maze... | A toy dataset based on critical percolation in a planar graph provides an analytical window to the training dynamics of deep neural networks | 1,873 | scitldr |
While neural networks can be trained to map from one specific dataset to another, they usually do not learn a generalized transformation that can extrapolate accurately outside the space of training. For instance, a generative adversarial network (GAN) exclusively trained to transform images of cars from light to dark ... | We reframe the generation problem as one of editing existing points, and as a result extrapolate better than traditional GANs. | 1,874 | scitldr |
We present a representation for describing transition models in complex uncertain domains using relational rules. For any action, a rule selects a set of relevant objects and computes a distribution over properties of just those objects in the resulting state given their properties in the previous state. An iterative greedy ... | A new approach that learns a representation for describing transition models in complex uncertain domains using relational rules. | 1,875 | scitldr
Differentiable planning network architecture has been shown to be powerful in solving transfer planning tasks while possessing a simple end-to-end training feature. Many great planning architectures that have been proposed later in the literature are inspired by this design principle, in which a recursive network architecture is ... | We propose an end-to-end differentiable planning network for graphs. This can be applicable to many motion planning problems | 1,876 | scitldr
We describe techniques for training high-quality image denoising models that require only single instances of corrupted images as training data. Inspired by a recent technique that removes the need for supervision through image pairs by employing networks with a "blind spot" in the receptive field, we address two of it... | We learn high-quality denoising using only single instances of corrupted images as training data. | 1,877 | scitldr |
Reinforcement learning (RL) agents improve through trial-and-error, but when reward is sparse and the agent cannot discover successful action sequences, learning stagnates. This has been a notable problem in training deep RL agents to perform web-based tasks, such as booking flights or replying to emails, where a singl... | We solve the sparse rewards problem on web UI tasks using exploration guided by demonstrations | 1,878 | scitldr |
Nowadays deep learning is one of the main topics in almost every field. It has helped to achieve amazing results in a great number of tasks. The main problem is that this kind of learning, and consequently the neural networks that can be defined as deep, are resource intensive. They need specialized hardware to perform a computation in a rea... | Embedded architecture for deep learning on optimized devices for face detection and emotion recognition | 1,879 | scitldr
Word embedding is a useful approach to capture co-occurrence structures in a large corpus of text. In addition to the text data itself, we often have additional covariates associated with individual documents in the corpus---e.g. the demographic of the author, time and venue of publication, etc.---and we would like the... | Using the same embedding across covariates doesn't make sense, we show that a tensor decomposition algorithm learns sparse covariate-specific embeddings and naturally separable topics jointly and data-efficiently. | 1,880 | scitldr |
Deep Learning has received significant attention due to its impressive performance in many state-of-the-art learning tasks. Unfortunately, while very powerful, Deep Learning is not well understood theoretically, and in particular, only recently have results on the complexity of training deep neural networks been obtained. In th... | Using linear programming we show that the computational complexity of approximate Deep Neural Network training depends polynomially on the data size for several architectures | 1,881 | scitldr
The extended Kalman filter (EKF) is a classical signal processing algorithm which performs efficient approximate Bayesian inference in non-conjugate models by linearising the local measurement function, avoiding the need to compute intractable integrals when calculating the posterior. In some cases the EKF outperforms ... | We unify the extended Kalman filter (EKF) and the state space approach to power expectation propagation (PEP) by solving the intractable moment matching integrals in PEP via linearisation. This leads to a globally iterated extension of the EKF. | 1,882 | scitldr |
This paper explores the simplicity of learned neural networks under various settings: learned on real vs random data, varying size/architecture and using large minibatch size vs small minibatch size. The notion of simplicity used here is that of learnability i.e., how accurately can the prediction function of a neural ... | Exploring the Learnability of Learned Neural Networks | 1,883 | scitldr |
With the proliferation of models for natural language processing (NLP) tasks, it is even harder to understand the differences between models and their relative merits. Simply looking at differences between holistic metrics such as accuracy, BLEU, or F1 do not tell us \emph{why} or \emph{how} a particular method is bett... | We propose a generalized evaluation methodology to interpret model biases, dataset biases, and their correlation. | 1,884 | scitldr |
The task of visually grounded dialog involves learning goal-oriented cooperative dialog between autonomous agents who exchange information about a scene through several rounds of questions and answers. We posit that requiring agents to adhere to rules of human language while also maximizing information exchange is an i... | Social agents learn to talk to each other in natural language towards a goal | 1,885 | scitldr |
Posterior collapse in Variational Autoencoders (VAEs) arises when the variational distribution closely matches the uninformative prior for a subset of latent variables. This paper presents a simple and intuitive explanation for posterior collapse through the analysis of linear VAEs and their direct correspondence with ... | We show that posterior collapse in linear VAEs is caused entirely by marginal log-likelihood (not ELBO). Experiments on deep VAEs suggest a similar phenomenon is at play. | 1,886 | scitldr |
Transformers have achieved state-of-the-art results on a variety of natural language processing tasks. Despite good performance, Transformers are still weak in long sentence modeling where the global attention map is too dispersed to capture valuable information. In such case, the local/token features that are also significant... | This paper propose a new model which combines multi scale information for sequence to sequence learning. | 1,887 | scitldr
Training neural networks with verifiable robustness guarantees is challenging. Several existing approaches utilize linear relaxation based neural network output bounds under perturbation, but they can slow down training by a factor of hundreds depending on the underlying network architectures. Meanwhile, interval bound... | We propose a new certified adversarial training method, CROWN-IBP, that achieves state-of-the-art robustness for L_inf norm adversarial perturbations. | 1,888 | scitldr |
Network pruning is widely used for reducing the heavy inference cost of deep models in low-resource settings. A typical pruning algorithm is a three-stage pipeline, i.e., training (a large model), pruning, and fine-tuning. During pruning, according to a certain criterion, redundant weights are pruned and important weigh... | In structured network pruning, fine-tuning a pruned model only gives comparable performance with training it from scratch. | 1,889 | scitldr
Brushing techniques have a long history with the first interactive selection tools appearing in the 1990's. Since then, many additional techniques have been developed to address selection accuracy, scalability and flexibility issues. Selection is especially difficult in large datasets where many visual items tangle and... | Interactive technique to improve brushing in dense trajectory datasets by taking into account the shape of the brush. | 1,890 | scitldr |
Generative Adversarial Networks (GANs) can produce images of surprising complexity and realism, but are generally structured to sample from a single latent source ignoring the explicit spatial interaction between multiple entities that could be present in a scene. Capturing such complex interactions between different o... | We develop a novel approach to model object compositionality in images in a GAN framework. | 1,891 | scitldr |
Recent studies have highlighted adversarial examples as a ubiquitous threat to different neural network models and many downstream applications. Nonetheless, as unique data properties have inspired distinct and powerful learning principles, this paper aims to explore their potentials towards mitigating adversarial inpu... | Adversarial audio discrimination using temporal dependency | 1,892 | scitldr |
In order to alleviate the notorious mode collapse phenomenon in generative adversarial networks (GANs), we propose a novel training method of GANs in which certain fake samples can be reconsidered as real ones during the training process. This strategy can reduce the gradient value that generator receives in the region... | We propose a novel GAN training method by considering certain fake samples as real to alleviate mode collapse and stabilize training process. | 1,893 | scitldr |
We present a tool for Interactive Visual Exploration of Latent Space (IVELS) for model selection. Evaluating generative models of discrete sequences from a continuous latent space is a challenging problem, since their optimization involves multiple competing objective terms. We introduce a model-selection pipeline to c... | We present a visual tool to interactively explore the latent space of an auto-encoder for peptide sequences and their attributes. | 1,894 | scitldr |
Neural networks trained through stochastic gradient descent (SGD) have been around for more than 30 years, but they still escape our understanding. This paper takes an experimental approach, with a divide-and-conquer strategy in mind: we start by studying what happens in single neurons. While being the core building bl... | We report experiments providing strong evidence that a neuron behaves like a binary classifier during training and testing | 1,895 | scitldr |
Automatic classification of objects is one of the most important tasks in engineering and data mining applications. Although using more complex and advanced classifiers can help to improve the accuracy of classification systems, it can be done by analyzing data sets and their features for a particular problem. Feature ... | A method for enriching and combining features to improve classification accuracy | 1,896 | scitldr |
Recent improvements to Generative Adversarial Networks (GANs) have made it possible to generate realistic images in high resolution based on natural language descriptions such as image captions. Furthermore, conditional GANs allow us to control the image generation process through labels or even natural language descri... | Extend GAN architecture to obtain control over locations and identities of multiple objects within generated images. | 1,897 | scitldr |
The demand for abstractive dialog summary is growing in real-world applications. For example, customer service center or hospitals would like to summarize customer service interaction and doctor-patient interaction. However, few researchers explored abstractive summarization on dialogs due to the lack of suitable datas... | We propose a novel end-to-end model (SPNet) to incorporate semantic scaffolds for improving abstractive dialog summarization. | 1,898 | scitldr |
Knowledge bases (KB), both automatically and manually constructed, are often incomplete --- many valid facts can be inferred from the KB by synthesizing existing information. A popular approach to KB completion is to infer new relations by combinatory reasoning over the information found along other paths connecting a ... | We present a RL agent MINERVA which learns to walk on a knowledge graph and answer queries | 1,899 | scitldr |