query | pos | idx | task_name
---|---|---|---
Hierarchical Reinforcement Learning is a promising approach to long-horizon decision-making problems with sparse rewards. Unfortunately, most methods still decouple the lower-level skill acquisition process and the training of a higher level that controls the skills in a new task. Treating the skills as fixed can lead ... | We propose HiPPO, a stable Hierarchical Reinforcement Learning algorithm that can train several levels of the hierarchy simultaneously, giving good performance both in skill discovery and adaptation. | 900 | scitldr |
Learning can be framed as trying to encode the mutual information between input and output while discarding other information in the input. Since the joint distribution of input and output is unknown, so is the true mutual information. To quantify how difficult it is to learn a task, we calculate an observed mutual inf... | We take a step towards measuring learning task difficulty and demonstrate that in practice performance strongly depends on the match of the representation of the information and the model interpreting it. | 901 | scitldr
Sepsis is a life-threatening complication from infection and a leading cause of mortality in hospitals. While early detection of sepsis improves patient outcomes, there is little consensus on exact treatment guidelines, and treating septic patients remains an open problem. In this work we present a new deep reinforceme... | We combine Multi-output Gaussian processes with deep recurrent Q-networks to learn optimal treatments for sepsis and show improved performance over standard deep reinforcement learning methods. | 902 | scitldr
Unsupervised and semi-supervised learning are important problems that are especially challenging with complex data like natural images. Progress on these problems would accelerate if we had access to appropriate generative models under which to pose the associated inference tasks. Inspired by the success of Convolution... | We develop a new deep generative model for semi-supervised learning and propose a new Max-Min cross-entropy for training CNNs. | 903 | scitldr |
Deep reinforcement learning (RL) agents often fail to generalize to unseen environments (yet semantically similar to trained agents), particularly when they are trained on high-dimensional state spaces, such as images. In this paper, we propose a simple technique to improve the generalization ability of deep RL agents by... | We propose a simple randomization technique for improving generalization in deep reinforcement learning across tasks with various unseen visual patterns. | 904 | scitldr
Touch interactions with current mobile devices have limited expressiveness. Augmenting devices with additional degrees of freedom can add power to the interaction, and several augmentations have been proposed and tested. However, there is still little known about the effects of learning multiple sets of augmented inter... | Describes a study investigating interference, transfer, and retention of multiple mappings with the same set of chorded buttons | 905 | scitldr |
Neural networks are widely used in Natural Language Processing, yet despite their empirical successes, their behaviour is brittle: they are both over-sensitive to small input changes, and under-sensitive to deletions of large fractions of input text. This paper aims to tackle under-sensitivity in the context of natural... | Formal verification of a specification on a model's prediction undersensitivity using Interval Bound Propagation | 906 | scitldr |
Training with larger number of parameters while keeping fast iterations is an increasingly adopted strategy and trend for developing better performing Deep Neural Network (DNN) models. This necessitates increased memory footprint and computational requirements for training. Here we introduce a novel methodology for tra... | We propose a novel 8-bit format that eliminates the need for loss scaling, stochastic rounding, and other low precision techniques | 907 | scitldr |
Variational Auto Encoders (VAE) are capable of generating realistic images, sounds and video sequences. From a practitioner's point of view, we are usually interested in solving problems where tasks are learned sequentially, in a way that avoids revisiting all previous data at each stage. We address this problem by introd... | Novel algorithm for Incremental learning of VAE with fixed architecture | 908 | scitldr
The field of few-shot learning has recently seen substantial advancements. Most of these advancements came from casting few-shot learning as a meta-learning problem. Model Agnostic Meta Learning or MAML is currently one of the best approaches for few-shot learning via meta-learning. MAML is simple, elegant and very pow... | MAML is great, but it has many problems; we solve many of those problems and as a result we learn most hyperparameters end to end, speed up training and inference, and set a new SOTA in few-shot learning | 909 | scitldr
Community detection in graphs is of central importance in graph mining, machine learning and network science. Detecting overlapping communities is especially challenging, and remains an open problem. Motivated by the success of graph-based deep learning in other graph-related tasks, we study the applicability of this f... | Detecting overlapping communities in graphs using graph neural networks | 910 | scitldr |
Neural Architecture Search (NAS) aims to facilitate the design of deep networks for new tasks. Existing techniques rely on two stages: searching over the architecture space and validating the best architecture. NAS algorithms are currently compared solely based on their performance on the downstream task. While intuitive, this fai... | We empirically disprove a fundamental hypothesis of the widely-adopted weight sharing strategy in neural architecture search and explain why state-of-the-art NAS algorithms perform similarly to random search. | 911 | scitldr
In this paper we use the geometric properties of the optimal transport (OT) problem and the Wasserstein distances to define a prior distribution for the latent space of an auto-encoder. We introduce Sliced-Wasserstein Auto-Encoders (SWAE), that enable one to shape the distribution of the latent space into any samplable... | In this paper we use the sliced-Wasserstein distance to shape the latent distribution of an auto-encoder into any samplable prior distribution. | 912 | scitldr |
The Hamiltonian formalism plays a central role in classical and quantum physics. Hamiltonians are the main tool for modelling the continuous time evolution of systems with conserved quantities, and they come equipped with many useful properties, like time reversibility and smooth interpolation in time. These properties... | We introduce a class of generative models that reliably learn Hamiltonian dynamics from high-dimensional observations. The learnt Hamiltonian can be applied to sequence modeling or as a normalising flow. | 913 | scitldr |
Cloud Migration transforms customer's data, application and services from original IT platform to one or more cloud environments, with the goal of improving the performance of the IT system while reducing the IT management cost. The enterprise level Cloud Migration projects are generally complex, involve dynamicall... | In this short paper, we briefly introduce the advantages of using AI planning in Cloud Migration, a preliminary prototype, as well as the challenges that require attention from the planning and scheduling society. | 914 | scitldr
Unsupervised domain adaptation aims to generalize the hypothesis trained in a source domain to an unlabeled target domain. One popular approach to this problem is to learn a domain-invariant representation for both domains. In this work, we study, theoretically and empirically, the explicit effect of the embedding on g... | A general upper bound on the target domain's risk that reflects the role of embedding-complexity. | 915 | scitldr |
The domain of time-series forecasting has been extensively studied because it is of fundamental importance in many real-life applications. Weather prediction, traffic flow forecasting or sales are compelling examples of sequential phenomena. Predictive models generally make use of the relations between past and future ... | In order to forecast multivariate stationary time-series we learn embeddings containing contextual features within a RNN; we apply the framework on public transportation data | 916 | scitldr |
The ability of an agent to {\em discover} its own learning objectives has long been considered a key ingredient for artificial general intelligence. Breakthroughs in autonomous decision making and reinforcement learning have primarily been in domains where the agent's goal is outlined and clear: such as playing a game ... | We investigate a framework for discovery: curating a large collection of predictions, which are used to construct the agent’s representation in partially observable domains. | 917 | scitldr |
Estimating the location where an image was taken based solely on the contents of the image is a challenging task, even for humans, as properly labeling an image in such a fashion relies heavily on contextual information, and is not as simple as identifying a single object in the image. Thus any methods which attempt to... | A global geolocation inferencing strategy with a novel meshing strategy, demonstrating that incorporating additional information can improve the overall performance of a geolocation inference model. | 918 | scitldr
Hierarchical Bayesian methods have the potential to unify many related tasks (e.g. k-shot classification, conditional, and unconditional generation) by framing each as inference within a single generative model. We show that existing approaches for learning such models can fail on expressive generative networks such as... | Technique for learning deep generative models with shared latent variables, applied to Omniglot with a PixelCNN decoder. | 919 | scitldr |
Common-sense or background knowledge is required to understand natural language, but in most neural natural language understanding (NLU) systems, the requisite knowledge is indirectly acquired from static corpora. We develop a new reading architecture for the dynamic integration of explicit knowledge in NLU models. A new task-agn... | In this paper we present a task-agnostic reading architecture for the dynamic integration of explicit background knowledge in neural NLU models. | 920 | scitldr
Mixed-precision arithmetic combining both single- and half-precision operands in the same operation has been successfully applied to train deep neural networks. Despite the advantages of mixed-precision arithmetic in terms of reducing the need for key resources like memory bandwidth or register file size, it has a lim... | Dynamic precision technique to train deep neural networks | 921 | scitldr
We introduce the “inverse square root linear unit” (ISRLU) to speed up learning in deep neural networks. ISRLU has better performance than ELU but has many of the same benefits. ISRLU and ELU have similar curves and characteristics. Both have negative values, allowing them to push mean unit activation closer to zero, a... | We introduce the ISRLU activation function which is continuously differentiable and faster than ELU. The related ISRU replaces tanh & sigmoid. | 922 | scitldr |
A generally intelligent learner should generalize to more complex tasks than it has previously encountered, but the two common paradigms in machine learning -- either training a separate learner per task or training a single learner for all tasks -- both have difficulty with such generalization because they do not leve... | We explore the problem of compositional generalization and propose a means for endowing neural network architectures with the ability to compose themselves to solve these problems. | 923 | scitldr |
The information bottleneck method provides an information-theoretic method for representation learning, by training an encoder to retain all information which is relevant for predicting the label, while minimizing the amount of other, superfluous information in the representation. The original formulation, however, req... | We extend the information bottleneck method to the unsupervised multiview setting and show state of the art results on standard datasets | 924 | scitldr |
The biological plausibility of the backpropagation algorithm has long been doubted by neuroscientists. Two major reasons are that neurons would need to send two different types of signal in the forward and backward phases, and that pairs of neurons would need to communicate through symmetric bidirectional connections. ... | We describe a biologically plausible learning algorithm for fixed point recurrent networks without tied weights | 925 | scitldr |
In this paper we present, to the best of our knowledge, the first method to learn a generative model of 3D shapes from natural images in a fully unsupervised way. For example, we do not use any ground truth 3D or 2D annotations, stereo video, or ego-motion during the training. Our approach follows the general strategy... | We train a generative 3D model of shapes from natural images in a fully unsupervised way. | 926 | scitldr
We focus on the problem of black-box adversarial attacks, where the aim is to generate adversarial examples using information limited to loss function evaluations of input-output pairs. We use Bayesian optimization (BO) to specifically cater to scenarios involving low query budgets to develop query efficient adversaria... | We show that a relatively simple black-box adversarial attack scheme using Bayesian optimization and dimension upsampling is preferable to existing methods when the number of available queries is very low. | 927 | scitldr |
State-of-the-art deep neural networks (DNNs) typically have tens of millions of parameters, which might not fit into the upper levels of the memory hierarchy, thus increasing the inference time and energy consumption significantly, and prohibiting their use on edge devices such as mobile phones. The compression of DNN ... | We revisit the simple idea of pruning connections of DNNs through $\ell_1$ regularization achieving state-of-the-art results on multiple datasets with theoretic guarantees. | 928 | scitldr |
Autonomy and adaptation of machines requires that they be able to measure their own errors. We consider the advantages and limitations of such an approach when a machine has to measure the error in a regression task. How can a machine measure the error of regression sub-components when it does not have the ground truth... | A non-parametric method to measure the error moments of regressors without ground truth can be used with biased regressors | 929 | scitldr |
We propose a new representation, one-pixel signature, that can be used to reveal the characteristics of convolutional neural networks (CNNs). Here, each CNN classifier is associated with a signature that is created by generating, pixel-by-pixel, an adversarial value that results in the largest change to the class pred... | Convolutional neural networks characterization for backdoored classifier detection and understanding. | 930 | scitldr
We extend the learning from demonstration paradigm by providing a method for learning unknown constraints shared across tasks, using demonstrations of the tasks, their cost functions, and knowledge of the system dynamics and control constraints. Given safe demonstrations, our method uses hit-and-run sampling to obtain ... | We can learn high-dimensional constraints from demonstrations by sampling unsafe trajectories and leveraging a known constraint parameterization. | 931 | scitldr |
We propose a metric-learning framework for computing distance-preserving maps that generate low-dimensional embeddings for a certain class of manifolds. We employ Siamese networks to solve the problem of least squares multidimensional scaling for generating mappings that preserve geodesic distances on the manifold. In ... | Parametric Manifold Learning with Neural Networks in a Geometric Framework | 932 | scitldr |
Exploration is a fundamental aspect of Reinforcement Learning, typically implemented using stochastic action-selection. Exploration, however, can be more efficient if directed toward gaining new world knowledge. Visit-counters have been proven useful both in practice and in theory for directed exploration. However, a m... | We propose a generalization of visit-counters that evaluate the propagating exploratory value over trajectories, enabling efficient exploration for model-free RL | 933 | scitldr |
Deep neuroevolution and deep reinforcement learning (deep RL) algorithms are two popular approaches to policy search. The former is widely applicable and rather stable, but suffers from low sample efficiency. By contrast, the latter is more sample efficient, but the most sample efficient variants are also rather unstab... | We propose a new combination of evolution strategy and deep reinforcement learning which takes the best of both worlds | 934 | scitldr |
Recent advances in deep reinforcement learning have made significant strides in performance on applications such as Go and Atari games. However, developing practical methods to balance exploration and exploitation in complex domains remains largely unsolved. Thompson Sampling and its extension to reinforcement learning... | An Empirical Comparison of Bayesian Deep Networks for Thompson Sampling | 935 | scitldr |
Class labels have been empirically shown useful in improving the sample quality of generative adversarial nets (GANs). In this paper, we mathematically study the properties of the current variants of GANs that make use of class label information. With class aware gradient and cross-entropy decomposition, we reveal how ... | Understand how class labels help GAN training. Propose a new evaluation metric for generative models. | 936 | scitldr |
Modern neural networks are over-parametrized. In particular, each rectified linear hidden unit can be modified by a multiplicative factor by adjusting input and output weights, without changing the rest of the network. Inspired by the Sinkhorn-Knopp algorithm, we introduce a fast iterative method for minimizing the l... | Fast iterative algorithm to balance the energy of a network while staying in the same functional equivalence class | 937 | scitldr
Reinforcement learning agents are typically trained and evaluated according to their performance averaged over some distribution of environment settings. But does the distribution over environment settings contain important biases, and do these lead to agents that fail in certain cases despite high average-case perform... | We find environment settings in which SOTA agents trained on navigation tasks display extreme failures suggesting failures in generalization. | 938 | scitldr |
In some misspecified settings, the posterior distribution in Bayesian statistics may lead to inconsistent estimates. To fix this issue, it has been suggested to replace the likelihood by a pseudo-likelihood, that is the exponential of a loss function enjoying suitable robustness properties. In this paper, we build a ps... | Robust Bayesian Estimation via Maximum Mean Discrepancy | 939 | scitldr |
The standard variational lower bounds used to train latent variable models produce biased estimates of most quantities of interest. We introduce an unbiased estimator of the log marginal likelihood and its gradients for latent variable models based on randomized truncation of infinite series. If parameterized by an enc... | We create an unbiased estimator for the log probability of latent variable models, extending such models to a larger scope of applications. | 940 | scitldr |
This paper makes two contributions towards understanding how the hyperparameters of stochastic gradient descent affect the final training loss and test accuracy of neural networks. First, we argue that stochastic gradient descent exhibits two regimes with different behaviours; a noise dominated regime which typically a... | Smaller batch sizes can outperform very large batches on the test set under constant step budgets and with properly tuned learning rate schedules. | 941 | scitldr |
Counterfactual Regret Minimization (CFR) is the most successful algorithm for finding approximate Nash equilibria in imperfect information games. However, CFR's reliance on full game-tree traversals limits its scalability and generality. Therefore, the game's state- and action-space is often abstracted (i.e. simplified... | Better Deep Reinforcement Learning algorithm to approximate Counterfactual Regret Minimization | 942 | scitldr |
While generative models have shown great success in generating high-dimensional samples conditional on low-dimensional descriptors (learning e.g. stroke thickness in MNIST, hair color in CelebA, or speaker identity in Wavenet), their generation out-of-sample poses fundamental problems. The conditional variational autoe... | Generates data never seen during training from a desired condition | 943 | scitldr
We analyze speed of convergence to global optimum for gradient descent training a deep linear neural network by minimizing the L2 loss over whitened data. Convergence at a linear rate is guaranteed when the following hold: (i) dimensions of hidden layers are at least the minimum of the input and output dimensions; (ii)... | We analyze gradient descent for deep linear neural networks, providing a guarantee of convergence to global optimum at a linear rate. | 944 | scitldr |
One of the fundamental problems in supervised classification and in machine learning in general, is the modelling of non-parametric invariances that exist in data. Most prior art has focused on enforcing priors in the form of invariances to parametric nuisance transformations that are expected to be present in data. Ho... | A layer modelling local random connectomes in the cortex within deep networks capable of learning general non-parametric invariances from the data itself. | 945 | scitldr |
We propose a fully-convolutional conditional generative model, the latent transformation neural network (LTNN), capable of view synthesis using a light-weight neural network suited for real-time applications. In contrast to existing conditional generative models which incorporate conditioning information via concatenat... | We introduce an effective, general framework for incorporating conditioning information into inference-based generative models. | 946 | scitldr |
We describe three approaches to enabling an extremely computationally limited embedded scheduler to consider a small number of alternative activities based on resource availability. We consider the case where the scheduler is so computationally limited that it cannot backtrack search. The first two approaches precompil... | This paper describes three techniques to allow a non-backtracking, computationally limited scheduler to consider a small number of alternative activities based on resource availability. | 947 | scitldr |
Inspired by the modularity and the life-cycle of biological neurons, we introduce Continual Learning via Neural Pruning (CLNP), a new method aimed at lifelong learning in fixed capacity models based on the pruning of neurons of low activity. In this method, an L1 regularizer is used to promote the presence of neurons of z... | We use simple and biologically motivated modifications of standard learning techniques to achieve state of the art performance on catastrophic forgetting benchmarks. | 948 | scitldr
There has been an increasing use of neural networks for music information retrieval tasks. In this paper, we empirically investigate different ways of improving the performance of convolutional neural networks (CNNs) on spectral audio features. More specifically, we explore three aspects of CNN design: depth of the net... | Using deep learning techniques on singing voice related tasks. | 949 | scitldr |
The Softmax function is used in the final layer of nearly all existing sequence-to-sequence models for language generation. However, it is usually the slowest layer to compute which limits the vocabulary size to a subset of most frequent types; and it has a large memory footprint. We propose a general technique for rep... | Language generation using seq2seq models which produce word embeddings instead of a softmax-based distribution over the vocabulary at each step, enabling much faster training while maintaining generation quality | 950 | scitldr
We propose a framework to understand the unprecedented performance and robustness of deep neural networks using field theory. Correlations between the weights within the same layer can be described by symmetries in that layer, and networks generalize better if such symmetries are broken to reduce the redundancies of th... | Closed form results for deep learning in the layer decoupling limit applicable to Residual Networks | 951 | scitldr |
In recent years, a remarkable breakthrough has been made in the AI domain thanks to artificial deep neural networks that achieved a great success in many machine learning tasks in computer vision, natural language processing, speech recognition, malware detection and so on. However, they are highly vulnerable to easi... | In this paper, a new method we call Centered Initial Attack (CIA) is provided. It ensures by construction that the maximum perturbation is smaller than a threshold fixed beforehand, without the clipping process. | 952 | scitldr
Answering complex logical queries on large-scale incomplete knowledge graphs (KGs) is a fundamental yet challenging task. Recently, a promising approach to this problem has been to embed KG entities as well as the query into a vector space such that entities that answer the query are embedded close to the query. Howeve... | Answering a wide class of logical queries over knowledge graphs with box embeddings in vector space | 953 | scitldr |
Driven by the need for parallelizable hyperparameter optimization methods, this paper studies open loop search methods: sequences that are predetermined and can be generated before a single configuration is evaluated. Examples include grid search, uniform random search, low discrepancy sequences, and other sampling dis... | We address fully parallel hyperparameter optimization with Determinantal Point Processes. | 954 | scitldr |
Graph embedding techniques have been increasingly deployed in a multitude of different applications that involve learning on non-Euclidean data. However, existing graph embedding models either fail to incorporate node attribute information during training or suffer from node attribute noise, which compromises the accur... | A multi-level spectral approach to improving the quality and scalability of unsupervised graph embedding. | 955 | scitldr |
Deep generative models have been enjoying success in modeling continuous data. However it remains challenging to capture the representations for discrete structures with formal grammars and semantics, e.g., computer programs and molecular structures. How to generate both syntactically and semantically correct data stil... | A new generative model for discrete structured data. The proposed stochastic lazy attribute converts the offline semantic check into online guidance for stochastic decoding, which effectively addresses the constraints in syntax and semantics, and also achieves superior performance | 956 | scitldr |
Modeling informal inference in natural language is very challenging. With the recent availability of large annotated data, it has become feasible to train complex models such as neural networks to perform natural language inference (NLI), which have achieved state-of-the-art performance. Although there exist relatively... | The proposed models with external knowledge further improve the state of the art on the SNLI dataset. | 957 | scitldr
High intra-class diversity and inter-class similarity are characteristics of remote sensing scene image data sets currently posing significant difficulty for deep learning algorithms on classification tasks. To improve accuracy, post-classification methods have been proposed for smoothing of model predictions. However,... | Approach for improving prediction accuracy by learning deep features over neighboring scene images in satellite scene image analysis. | 958 | scitldr
Sparsely available data points cause numerical errors in finite differences, which hinders modeling the dynamics of physical systems. The discretization error becomes even larger when the sparse data are irregularly distributed, so that the data are defined on an unstructured grid, making it hard to build deep learning mo... | We propose physics-aware difference graph networks designed to effectively learn spatial differences to model sparsely-observed dynamics. | 959 | scitldr
Solving long-horizon sequential decision making tasks in environments with sparse rewards is a longstanding problem in reinforcement learning (RL) research. Hierarchical Reinforcement Learning (HRL) has held the promise to enhance the capabilities of RL agents via operation on different levels of temporal abstraction. ... | Learning Functionally Decomposed Hierarchies for Continuous Navigation Tasks | 960 | scitldr |
Sound correspondence patterns play a crucial role for linguistic reconstruction. Linguists use them to prove language relationship, to reconstruct proto-forms, and for classical phylogenetic reconstruction based on shared innovations. Cognate words which fail to conform with expected patterns can further point to vario... | The paper describes a new algorithm by which sound correspondence patterns for multiple languages can be inferred. | 961 | scitldr |
We describe Kernel RNN Learning (KeRNL), a reduced-rank, temporal eligibility trace-based approximation to backpropagation through time (BPTT) for training recurrent neural networks (RNNs) that gives competitive performance to BPTT on long time-dependence tasks. The approximation replaces a rank-4 gradient learning ten... | A biologically plausible learning rule for training recurrent neural networks | 962 | scitldr |
We present Deep Graph Infomax (DGI), a general approach for learning node representations within graph-structured data in an unsupervised manner. DGI relies on maximizing mutual information between patch representations and corresponding high-level summaries of graphs---both derived using established graph convolutiona... | A new method for unsupervised representation learning on graphs, relying on maximizing mutual information between local and global representations in a graph. State-of-the-art results, competitive with supervised learning. | 963 | scitldr |
In many environments only a tiny subset of all states yield high reward. In these cases, few of the interactions with the environment provide a relevant learning signal. Hence, we may want to preferentially train on those high-reward states and the probable trajectories leading to them. To this end, we advocate for the... | A backward model of previous (state, action) given the next state, i.e. P(s_t, a_t | s_{t+1}), can be used to simulate additional trajectories terminating at states of interest! Improves RL learning efficiency. | 964 | scitldr |
Prepositions are among the most frequent words. Good prepositional representation is of great syntactic and semantic interest in computational linguistics. Existing methods on preposition representation either treat prepositions as content words (e.g., word2vec and GloVe) or depend heavily on external linguistic resour... | This work is about tensor-based method for preposition representation training. | 965 | scitldr |
A promising class of generative models maps points from a simple distribution to a complex distribution through an invertible neural network. Likelihood-based training of these models requires restricting their architectures to allow cheap computation of Jacobian determinants. Alternatively, the Jacobian trace can be u... | We use continuous time dynamics to define a generative model with exact likelihoods and efficient sampling that is parameterized by unrestricted neural networks. | 966 | scitldr |
We propose a non-adversarial feature matching-based approach to train generative models. Our approach, Generative Feature Matching Networks (GFMN), leverages pretrained neural networks such as autoencoders and ConvNet classifiers to perform feature extraction. We perform an extensive number of experiments with differen... | A new non-adversarial feature matching-based approach to train generative models that achieves state-of-the-art results. | 967 | scitldr |
We propose a novel architecture for k-shot classification on the Omniglot dataset. Building on prototypical networks, we extend their architecture to what we call Gaussian prototypical networks. Prototypical networks learn a map between images and embedding vectors, and use their clustering for classification. In our m... | A novel architecture for few-shot classification capable of dealing with uncertainty. | 968 | scitldr |
We show that the output of a (residual) CNN with an appropriate prior over the weights and biases is a GP in the limit of infinitely many convolutional filters, extending similar results for dense networks. For a CNN, the equivalent kernel can be computed exactly and, unlike "deep kernels", has very few parameters: only the hy... | We show that CNNs and ResNets with appropriate priors on the parameters are Gaussian processes in the limit of infinitely many convolutional filters. | 969 | scitldr |
We present an end-to-end design methodology for efficient deep learning deployment. Unlike previous methods that separately optimize the neural network architecture, pruning policy, and quantization policy, we jointly optimize them in an end-to-end manner. To deal with the larger design space it brings, we train a quan... | We present an end-to-end design methodology for efficient deep learning deployment. | 970 | scitldr |
Distributed optimization is vital in solving large-scale machine learning problems. A widely-shared feature of distributed optimization techniques is the requirement that all nodes complete their assigned tasks in each computational epoch before the system can proceed to the next epoch. In such settings, slow nodes, ca... | Accelerate distributed optimization by exploiting stragglers. | 971 | scitldr |
Many machine learning image classifiers are vulnerable to adversarial attacks, inputs with perturbations designed to intentionally trigger misclassification. Current adversarial methods directly alter pixel colors and evaluate against pixel norm-balls: pixel perturbations smaller than a specified magnitude, according t... | Enabled by a novel differentiable renderer, we propose a new metric that has real-world implications for evaluating adversarial machine learning algorithms, resolving the lack of realism of the existing metric based on pixel norms. | 972 | scitldr |
Alternatives to recurrent neural networks, in particular, architectures based on attention or convolutions, have been gaining momentum for processing input sequences. In spite of their relevance, the computational properties of these alternatives have not yet been fully explored. We study the computational power of two... | We show that the Transformer architecture and the Neural GPU are Turing complete. | 973 | scitldr |
It is usually hard for a learning system to predict correctly on the rare events, and there is no exception for segmentation algorithms. Therefore, we hope to build an alarm system to set off alarms when the segmentation is possibly unsatisfactory. One plausible solution is to project the segmentation into a low dimens... | We use VAE to capture the shape feature for automatic segmentation evaluation | 974 | scitldr |
The linear transformations in converged deep networks show fast eigenvalue decay. The distribution of eigenvalues looks like a Heavy-tail distribution, where the vast majority of eigenvalues is small, but not actually zero, and only a few spikes of large eigenvalues exist. We use a stochastic approximator to generate h... | We investigate the eigenvalues of the linear layers in deep networks and show that the distributions develop heavy-tail behavior during training. | 975 | scitldr |
Capturing long-range feature relations has been a central issue on convolutional neural networks (CNNs). To tackle this, attempts to integrate end-to-end trainable attention modules on CNNs are widespread. The main goal of these works is to adjust feature maps considering spatial-channel correlation inside a convolution laye... | We propose a new type of end-to-end trainable attention module, which applies global weight balances among layers by utilizing co-propagating RNN with CNN. | 976 | scitldr |
We seek to auto-generate stronger input features for ML methods faced with limited training data. Biological neural nets (BNNs) excel at fast learning, implying that they extract highly informative features. In particular, the insect olfactory network learns new odors very rapidly, by means of three key elements: A com... | Features auto-generated by the bio-mimetic MothNet model significantly improve the test accuracy of standard ML methods on vectorized MNIST. The MothNet-generated features also outperform standard feature generators. | 977 | scitldr |
We present an approach for anytime predictions in deep neural networks (DNNs). For each test sample, an anytime predictor produces a coarse result quickly, and then continues to refine it until the test-time computational budget is depleted. Such predictors can address the growing computational problem of DNNs by automaticall... | By focusing more on the final predictions in anytime predictors (such as the very recent Multi-Scale-DenseNets), we make small anytime models to outperform large ones that don't have such focus. | 978 | scitldr |
Computational imaging systems jointly design computation and hardware to retrieve information which is not traditionally accessible with standard imaging systems. Recently, critical aspects such as experimental design and image priors are optimized through deep neural networks formed by the unrolled iterations of class... | We propose a memory-efficient learning procedure that exploits the reversibility of the network’s layers to enable data-driven design for large-scale computational imaging. | 979 | scitldr |
Existing multi-agent reinforcement learning (MARL) communication methods have relied on a trusted third party (TTP) to distribute reward to agents, leaving them inapplicable in peer-to-peer environments. This paper proposes reward distribution using Neuron as an Agent (NaaA) in MARL without a TTP with two key ide... | Neuron as an Agent (NaaA) enables us to train multi-agent communication without a trusted third party. | 980 | scitldr |
Increasing model size when pretraining natural language representations often results in improved performance on downstream tasks. However, at some point further model increases become harder due to GPU/TPU memory limitations, longer training times, and unexpected model degradation. To address these problems, we present two pa... | A new pretraining method that establishes new state-of-the-art results on the GLUE, RACE, and SQuAD benchmarks while having fewer parameters compared to BERT-large. | 981 | scitldr |
Structured tabular data is the most commonly used form of data in industry according to a Kaggle ML and DS Survey. Gradient Boosting Trees, Support Vector Machine, Random Forest, and Logistic Regression are typically used for classification tasks on tabular data. The recent work of Super Characters method using two-dim... | Deep learning on structured tabular data using two-dimensional word embedding with fine-tuned ImageNet pre-trained CNN model. | 982 | scitldr |
Predictive coding, within theoretical neuroscience, and variational autoencoders, within machine learning, both involve latent Gaussian models and variational inference. While these areas share a common origin, they have evolved largely independently. We outline connections and contrasts between these areas, using thei... | connections between predictive coding and VAEs + new frontiers | 983 | scitldr |
Adaptive optimization methods such as AdaGrad, RMSprop and Adam have been proposed to achieve a rapid training process with an element-wise scaling term on learning rates. Though prevailing, they are observed to generalize poorly compared with SGD or even fail to converge due to unstable and extreme learning rates. Rec... | Novel variants of optimization methods that combine the benefits of both adaptive and non-adaptive methods. | 984 | scitldr |
An important problem that arises in reinforcement learning and Monte Carlo methods is estimating quantities defined by the stationary distribution of a Markov chain. In many real-world applications, access to the underlying transition operator is limited to a fixed set of data that has already been collected, without a... | In this paper, we proposed a novel algorithm, GenDICE, for general stationary distribution correction estimation, which can handle both discounted and average off-policy evaluation on multiple behavior-agnostic samples. | 985 | scitldr |
The power of neural networks lies in their ability to generalize to unseen data, yet the underlying reasons for this phenomenon remain elusive. Numerous rigorous attempts have been made to explain generalization, but available bounds are still quite loose, and analysis does not always lead to true understanding. The go... | An intuitive empirical and visual exploration of the generalization properties of deep neural networks. | 986 | scitldr |
We argue that symmetry is an important consideration in addressing the problem of systematicity and investigate two forms of symmetry relevant to symbolic processes. We implement this approach in terms of convolution and show that it can be used to achieve effective generalisation in three toy problems: rule learning, ... | We use convolution to make neural networks behave more like symbolic systems. | 987 | scitldr |
Key equatorial climate phenomena such as QBO and ENSO have never been adequately explained as deterministic processes. This is in spite of recent research showing growing evidence of predictable behavior. This study applies the fundamental Laplace tidal equations with simplifying assumptions along the equator — i.e. no Co... | Analytical Formulation of Equatorial Standing Wave Phenomena: Application to QBO and ENSO | 988 | scitldr |
In the field of Continual Learning, the objective is to learn several tasks one after the other without access to the data from previous tasks. Several solutions have been proposed to tackle this problem but they usually assume that the user knows which of the tasks to perform at test time on a particular sample, or re... | We propose to train an Invertible Neural Network for each class to perform class-by-class Continual Learning. | 989 | scitldr |
Human-computer conversation systems have attracted much attention in Natural Language Processing. Conversation systems can be roughly divided into two categories: retrieval-based and generation-based systems. Retrieval systems search a user-issued utterance (namely a query) in a large conversational repository and retu... | A novel ensemble of retrieval-based and generation-based models for open-domain conversation systems. | 990 | scitldr |
Deep neural networks trained on a wide range of datasets demonstrate impressive transferability. Deep features appear general in that they are applicable to many datasets and tasks. Such property is in prevalent use in real-world applications. A neural network pretrained on large datasets, such as ImageNet, can signifi... | Understand transferability from the perspectives of improved generalization, optimization and the feasibility of transferability. | 991 | scitldr |
We address the following question: How redundant is the parameterisation of ReLU networks? Specifically, we consider transformations of the weight space which leave the function implemented by the network intact. Two such transformations are known for feed-forward architectures: permutation of neurons within a layer, a... | We prove that there exist ReLU networks whose parameters are almost uniquely determined by the function they implement. | 992 | scitldr |
A general problem that received considerable recent attention is how to perform multiple tasks in the same network, maximizing both efficiency and prediction accuracy. A popular approach consists of a multi-branch architecture on top of a shared backbone, jointly trained on a weighted sum of losses. However, in many ca... | We propose a top-down modulation network for multi-task learning applications with several advantages over current schemes. | 993 | scitldr |
Clustering algorithms have wide applications and play an important role in data analysis fields including time series data analysis. The performance of a clustering algorithm depends on the features extracted from the data. However, in time series analysis, there has been a problem that the conventional methods based o... | Novel time series data clustering algorithm based on dynamical system features. | 994 | scitldr |
Two important topics in deep learning both involve incorporating humans into the modeling process: Model priors transfer information from humans to a model by regularizing the model's parameters; Model attributions transfer information from a model to humans by explaining the model's behavior. Previous work has taken i... | A method for encouraging axiomatic feature attributions of a deep model to match human intuition. | 995 | scitldr |
Recurrent neural networks (RNNs) have shown excellent performance in processing sequence data. However, they are both complex and memory intensive due to their recursive nature. These limitations make RNNs difficult to embed on mobile devices requiring real-time processes with limited hardware resources. To address the... | We propose high-performance LSTMs with binary/ternary weights, that can greatly reduce implementation complexity | 996 | scitldr |
This paper addresses unsupervised few-shot object recognition, where all training images are unlabeled and do not share classes with labeled support images for few-shot recognition in testing. We use a new GAN-like deep architecture aimed at unsupervised learning of an image representation which will encode latent obje... | We address the problem of unsupervised few-shot object recognition, where all training images are unlabeled and do not share classes with test images. | 997 | scitldr |
Existing black-box attacks on deep neural networks (DNNs) so far have largely focused on transferability, where an adversarial instance generated for a locally trained model can “transfer” to attack other learning models. In this paper, we propose novel Gradient Estimation black-box attacks for adversaries with query a... | Query-based black-box attacks on deep neural networks with adversarial success rates matching white-box attacks | 998 | scitldr |
We propose a novel subgraph image representation for classification of network fragments with the target being their parent networks. The graph image representation is based on 2D image embeddings of adjacency matrices. We use this image representation in two modes. First, as the input to a machine learning algorithm. ... | We convert subgraphs into structured images and classify them using 1. deep learning and 2. transfer learning (Caffe) and achieve stunning results. | 999 | scitldr |
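Each row above pairs a truncated paper abstract (`query`) with its one-sentence TLDR (`pos`), an integer row index (`idx`), and a task identifier (`task_name`, `scitldr` for every row shown). A minimal sketch of parsing rows with this schema; the field names come from the rows above, but the JSONL file layout and the `ScitldrRow`/`parse_rows` names are assumptions for illustration:

```python
import json
from dataclasses import dataclass

@dataclass
class ScitldrRow:
    query: str      # truncated paper abstract
    pos: str        # one-sentence TLDR summary of the abstract
    idx: int        # integer row index
    task_name: str  # task identifier; "scitldr" for every row shown here

def parse_rows(jsonl_lines):
    """Parse JSONL lines into typed rows (file layout assumed, not confirmed)."""
    return [ScitldrRow(**json.loads(line)) for line in jsonl_lines]

# Hypothetical single-line example mirroring row 969 above.
sample = json.dumps({
    "query": "We show that the output of a (residual) CNN ...",
    "pos": "CNNs and ResNets with appropriate priors are Gaussian processes.",
    "idx": 969,
    "task_name": "scitldr",
})
rows = parse_rows([sample])
```

The dataclass makes the four-column structure explicit; any row missing a field or carrying an extra one fails loudly at construction time rather than silently downstream.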