Columns: source (strings, 200 to 2.98k characters) · target (strings, 18 to 668 characters)
In order to mimic the human ability to continually acquire and transfer knowledge across various tasks, a learning system needs the capability for life-long learning, effectively utilizing previously acquired skills. As such, the key challenge is to transfer and generalize the knowledge learned from one task t...
A new method uses statistical leverage score information to measure the importance of the data samples in every task and adopts the frequent-directions approach to enable life-long learning.
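For concreteness, here is a minimal sketch of the frequent-directions step referenced above (Liberty's matrix-sketching algorithm); the leverage-score weighting of samples described in the abstract is not reproduced here, and the per-row SVD is used for clarity rather than speed.

```python
import numpy as np

def frequent_directions(X, ell):
    """Frequent-directions sketch: compress the rows of X (n x d) into
    a small matrix B (ell x d) such that B^T B approximates X^T X.

    Naive per-row SVD for clarity; assumes ell <= d.
    """
    _, d = X.shape
    B = np.zeros((ell, d))
    for x in X:
        B[-1] = x                          # the last row is always free (zero)
        _, s, Vt = np.linalg.svd(B, full_matrices=False)
        delta = s[-1] ** 2                 # smallest squared singular value
        s = np.sqrt(np.maximum(s ** 2 - delta, 0.0))
        B = s[:, None] * Vt                # shrink: re-zeroes the last row
    return B
```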
Convolutional neural networks (CNNs) are inherently equivariant to translation. Efforts to embed other forms of equivariance have concentrated solely on rotation. We expand the notion of equivariance in CNNs through the Polar Transformer Network (PTN). PTN combines ideas from the Spatial Transformer Network (STN) and c...
We learn feature maps invariant to translation, and equivariant to rotation and scale.
Meta-learning allows an intelligent agent to leverage prior learning episodes as a basis for quickly improving performance on a novel task. Bayesian hierarchical modeling provides a theoretical framework for formalizing meta-learning as inference for a set of parameters that are shared across tasks. Here, we reformulat...
A specific gradient-based meta-learning algorithm, MAML, is equivalent to an inference procedure in a hierarchical Bayesian model. We use this connection to improve MAML via methods from approximate inference and curvature estimation.
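To fix notation for the MAML connection discussed above, a minimal first-order sketch of one meta-update follows; `grad_fn` is a hypothetical per-task loss gradient, and full MAML additionally differentiates through the inner step.

```python
import numpy as np

def maml_step(theta, tasks, grad_fn, inner_lr=0.01, outer_lr=0.001):
    """One first-order MAML meta-update (FOMAML sketch).

    theta   : flat parameter vector (np.ndarray)
    tasks   : list of (support, query) data pairs
    grad_fn : hypothetical callable, grad_fn(theta, data) -> dL/dtheta
    """
    outer_grad = np.zeros_like(theta)
    for support, query in tasks:
        # Inner loop: adapt to the task with one gradient step.
        adapted = theta - inner_lr * grad_fn(theta, support)
        # Outer gradient at the adapted parameters (first-order approx.).
        outer_grad += grad_fn(adapted, query)
    return theta - outer_lr * outer_grad / len(tasks)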
This work provides an automatic machine learning (AutoML) modelling architecture called Autostacker. Autostacker improves the prediction accuracy of machine learning baselines by utilizing an innovative hierarchical stacking architecture and an efficient parameter search algorithm. Neither prior domain knowledge about ...
We automate machine learning with an efficient search algorithm and an innovative stacking structure to provide better model baselines.
Surrogate models can be used to accelerate approximate Bayesian computation (ABC). In one such framework the discrepancy between simulated and observed data is modelled with a Gaussian process. So far principled strategies have been proposed only for sequential selection of the simulation locations. To address this lim...
We propose principled batch Bayesian experimental design strategies and a method for uncertainty quantification of the posterior summaries in a Gaussian process surrogate-based approximate Bayesian computation framework.
Randomly initialized first-order optimization algorithms are the method of choice for solving many high-dimensional nonconvex problems in machine learning, yet general theoretical guarantees cannot rule out convergence to critical points of poor objective value. For some highly structured nonconvex problems however, th...
We provide an efficient convergence rate for gradient descent on the complete orthogonal dictionary learning objective based on a geometric analysis.
Coding theory is a central discipline underpinning wireline and wireless modems that are the workhorses of the information age. Progress in coding theory is largely driven by individual human ingenuity with sporadic breakthroughs over the past century. In this paper we study whether it is possible to automate the disco...
We show that creatively designed and trained RNN architectures can decode well-known sequential codes and achieve close-to-optimal performance.
Adam has been shown to fail to converge to the optimal solution in certain cases. Researchers have recently proposed several algorithms to avoid this non-convergence issue, but their efficiency turns out to be unsatisfactory in practice. In this paper, we provide a new insight into the non-convergence issue of Ada...
We analyze and solve the non-convergence issue of Adam.
Most domain adaptation methods consider the problem of transferring knowledge to the target domain from a single source dataset. However, in practical applications, we typically have access to multiple sources. In this paper we propose the first approach for Multi-Source Domain Adaptation (MSDA) based on Generative Adv...
In this paper we propose a generative method for multi-source domain adaptation based on the decomposition of content, style, and domain factors.
Inferring the most likely configuration for a subset of variables of a joint distribution given the remaining ones – which we refer to as co-generation – is an important challenge that is computationally demanding for all but the simplest settings. This task has received a considerable amount of attention, particularly...
Using annealed importance sampling on the co-generation problem.
Implicit probabilistic models are models defined naturally in terms of a sampling procedure, and they often induce a likelihood function that cannot be expressed explicitly. We develop a simple method for estimating parameters in implicit models that does not require knowledge of the form of the likelihood function or any d...
We develop a new likelihood-free parameter estimation method that is equivalent to maximum likelihood under some conditions.
While most approaches to the problem of Inverse Reinforcement Learning (IRL) focus on estimating a reward function that best explains an expert agent’s policy or demonstrated behavior on a control task, it is often the case that such behavior is more succinctly represented by a simple reward combined with a set of hard...
Our method infers constraints on task execution by leveraging the principle of maximum entropy to quantify how demonstrations differ from expected, unconstrained behavior.
The success of reinforcement learning in the real world has been limited to instrumented laboratory scenarios, often requiring arduous human supervision to enable continuous learning. In this work, we discuss the required elements of a robotic system that can continually and autonomously improve with data collected in ...
A system for learning robotic tasks in the real world with reinforcement learning, without instrumentation.
We study the problem of cross-lingual voice conversion in non-parallel speech corpora and a one-shot learning setting. Most prior work requires either parallel speech corpora or a sufficient amount of training data from a target speaker. However, we convert arbitrary sentences of an arbitrary source speaker to a target speaker...
We use a Variational Autoencoder to separate style and content, and achieve voice conversion by modifying the style embedding and decoding. We use a multi-language speech corpus and investigate its effects.
We propose an end-to-end-trainable attention module for convolutional neural network (CNN) architectures built for image classification. The module takes as input the 2D feature vector maps which form the intermediate representations of the input image at different stages in the CNN pipeline, and outputs a 2D matrix of...
The paper proposes a method for forcing CNNs to leverage spatial attention in learning more object-centric representations that perform better in various respects.
Recurrent neural networks (RNNs) are effective neural networks for solving very complex supervised and unsupervised tasks. There has been significant progress with RNNs in fields such as natural language processing, speech processing, computer vision, and multiple other domains. This paper deals with RNN applications on differ...
Recurrent neural networks for cybersecurity use cases
Anatomical studies demonstrate that the brain reformats input information to generate reliable responses for performing computations. However, it remains unclear how neural circuits encode complex spatio-temporal patterns. We show that neural dynamics are strongly influenced by the phase alignment between the input and the...
Input Structuring along Chaos for Stability
Generative adversarial networks are a learning framework that rely on training a discriminator to estimate a measure of difference between a target and generated distributions. GANs, as normally formulated, rely on the generated samples being completely differentiable w.r.t. the generative parameters, and thus do not w...
We address training GANs with discrete data by formulating a policy gradient that generalizes across f-divergences
Policy gradient methods have enjoyed great success in deep reinforcement learning but suffer from high variance of gradient estimates. The high-variance problem is particularly exacerbated in problems with long horizons or high-dimensional action spaces. To mitigate this issue, we derive a bias-free action-dependent ba...
Action-dependent baselines can be bias-free and yield greater variance reduction than state-only dependent baselines for policy gradient methods.
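As background for this claim: subtracting a state-only baseline b(s) leaves the policy gradient unbiased, while an action-dependent baseline b(s,a) needs a correction term. One standard unbiased form (an assumption about the general family, not necessarily this paper's exact estimator) is:

```latex
\nabla_\theta J(\theta)
  = \mathbb{E}_{s,a}\!\left[\nabla_\theta \log \pi_\theta(a\mid s)\,
      \big(Q^{\pi}(s,a) - b(s,a)\big)\right]
  + \mathbb{E}_{s}\!\left[\nabla_\theta\, \mathbb{E}_{a\sim\pi_\theta}\big[b(s,a)\big]\right]
```

The second term vanishes when b depends on s alone, recovering the usual baseline identity.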
The cost of annotating training data has traditionally been a bottleneck for supervised learning approaches. The problem is further exacerbated when supervised learning is applied to a number of correlated tasks simultaneously since the amount of labels required scales with the number of tasks. To mitigate this concern...
We propose an active multitask learning algorithm that achieves knowledge transfer between tasks.
Detection of photo manipulation relies on subtle statistical traces, notoriously removed by aggressive lossy compression employed online. We demonstrate that end-to-end modeling of complex photo dissemination channels allows for codec optimization with explicit provenance objectives. We design a lightweight trainable l...
We learn an efficient lossy image codec that can be optimized to facilitate reliable photo manipulation detection at fractional cost in payload/quality and even at low bitrates.
Recurrent Neural Networks have long been the dominant choice for sequence modeling. However, they severely suffer from two issues: they are poor at capturing very long-term dependencies, and they cannot parallelize the sequential computation procedure. Therefore, many non-recurrent sequence models that are built on convolutio...
This paper proposes an effective generic sequence model which leverages the strengths of both RNNs and Multi-head attention.
Many tasks in natural language processing and related domains require high precision output that obeys dataset-specific constraints. This level of fine-grained control can be difficult to obtain in large-scale neural network models. In this work, we propose a structured latent-variable approach that adds discrete contr...
A structured latent-variable approach that adds discrete control states within a standard autoregressive neural paradigm to provide arbitrary grounding of internal model decisions, without sacrificing any representational power of neural models.
Suppose a deep classification model is trained with samples that need to be kept private for privacy or confidentiality reasons. In this setting, can an adversary obtain the private samples if the classification model is given to the adversary? We call this reverse engineering against the classification model the Class...
Estimation of training data distribution from trained classifier using GAN.
The goal of standard compressive sensing is to estimate an unknown vector from linear measurements under the assumption of sparsity in some basis. Recently, it has been shown that significantly fewer measurements may be required if the sparsity assumption is replaced by the assumption that the unknown vector lies near ...
We establish that the scaling laws derived in (Bora et al., 2017) are optimal or near-optimal in the absence of further assumptions.
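For context, the (Bora et al., 2017) setting referenced here recovers a signal near the range of a generator G from linear measurements y = Ax + η by solving

```latex
\hat{z} = \arg\min_{z} \;\|A\,G(z) - y\|_2^2, \qquad \hat{x} = G(\hat{z}),
```

with m = O(k log L) measurements sufficing for an L-Lipschitz generator with a k-dimensional latent space; the abstract argues this scaling cannot be substantially improved.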
Discovering and exploiting the causal structure in the environment is a crucial challenge for intelligent agents. Here we explore whether modern deep reinforcement learning can be used to train agents to perform causal reasoning. We adopt a meta-learning approach, where the agent learns a policy for conducting experime...
meta-learn a learning algorithm capable of causal reasoning
Sentiment classification is an active research area with several applications, including the analysis of political opinions and the classification of comments, movie reviews, news reviews, and product reviews. To employ rule-based sentiment classification, we require sentiment lexicons. However, manual construction of a sentiment lexicon i...
A corpus-based algorithm is developed to generate an Amharic sentiment lexicon.
Optimistic initialisation is an effective strategy for efficient exploration in reinforcement learning (RL). In the tabular case, all provably efficient model-free algorithms rely on it. However, model-free deep RL algorithms do not use optimistic initialisation despite taking inspiration from these provably efficient ...
We augment the Q-value estimates with a count-based bonus that ensures optimism during action selection and bootstrapping, even if the Q-value estimates are pessimistic.
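A typical instantiation of such a bonus (the exact form used by the paper is an assumption here) augments the estimate as

```latex
Q^{+}(s,a) \;=\; Q_\theta(s,a) \;+\; \frac{\beta}{\sqrt{N(s,a)}},
```

where N(s,a) is a visitation count and β > 0 controls the degree of optimism; Q⁺ would replace Q_θ both in the argmax for action selection and in the bootstrap target.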
Model-free deep reinforcement learning (RL) algorithms have been demonstrated on a range of challenging decision making and control tasks. However, these methods typically suffer from two major challenges: very high sample complexity and brittle convergence properties, which necessitate meticulous hyperparameter tuning...
We propose soft actor-critic, an off-policy actor-critic deep RL algorithm based on the maximum entropy reinforcement learning framework.
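The maximum-entropy objective underlying soft actor-critic is standard:

```latex
J(\pi) = \sum_{t} \mathbb{E}_{(s_t,a_t)\sim\rho_\pi}
          \big[\, r(s_t,a_t) + \alpha\, \mathcal{H}\!\big(\pi(\cdot\mid s_t)\big) \big],
```

where the temperature α trades off reward against policy entropy.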
In many settings, it is desirable to learn decision-making and control policies through learning or from expert demonstrations. The most common approaches under this framework are Behaviour Cloning (BC), and Inverse Reinforcement Learning (IRL). Recent methods for IRL have demonstrated the capacity to learn effective p...
Distribution matching through divergence minimization provides a common ground for comparing adversarial Maximum-Entropy Inverse Reinforcement Learning methods to Behaviour Cloning.
Value-based methods constitute a fundamental methodology in planning and deep reinforcement learning (RL). In this paper, we propose to exploit the underlying structures of the state-action value function, i.e., Q function, for both planning and deep RL. In particular, if the underlying system dynamics lead to some glo...
We propose a generic framework that allows for exploiting the low-rank structure in both planning and deep reinforcement learning.
Learned representations of source code enable various software developer tools, e.g., to detect bugs or to predict program properties. At the core of code representations often are word embeddings of identifier names in source code, because identifiers account for the majority of source code vocabulary and convey impor...
A benchmark to evaluate neural embeddings of identifiers in source code.
Generative adversarial nets (GANs) are widely used to learn the data sampling process and their performance may heavily depend on the loss functions, given a limited computational budget. This study revisits MMD-GAN that uses the maximum mean discrepancy (MMD) as the loss function for GAN and makes two contributions. F...
Rearranging the terms in maximum mean discrepancy yields a much better loss function for the discriminator of generative adversarial nets
Deep neural networks have shown incredible performance for inference tasks in a variety of domains. Unfortunately, most current deep networks are enormous cloud-based structures that require significant storage space, which limits scaling of deep learning as a service (DLaaS) and use for on-device augmented intelligenc...
This paper finds algorithms that directly use lossless compressed representations of deep feedforward networks, to perform inference without full decompression.
Generative adversarial networks (GANs) form a generative modeling approach known for producing appealing samples, but they are notably difficult to train. One common way to tackle this issue has been to propose new formulations of the GAN objective. Yet, surprisingly few studies have looked at optimization methods desi...
We cast GANs in the variational inequality framework and import techniques from this literature to optimize GANs better; we give algorithmic extensions and empirically test their performance for training GANs.
In order to efficiently learn with a small amount of data on new tasks, meta-learning transfers knowledge learned from previous tasks to the new ones. However, a critical challenge in meta-learning is task heterogeneity, which cannot be well handled by traditional globally shared meta-learning methods. In addition, cu...
Addressing the task-heterogeneity problem in meta-learning by introducing a meta-knowledge graph
In this paper, a deep boosting algorithm is developed to learn a more discriminative ensemble classifier by seamlessly combining a set of base deep CNNs (base experts) with diverse capabilities; e.g., these base deep CNNs are sequentially trained to recognize a set of object classes in an easy-to-hard way according ...
A deep boosting algorithm is developed to learn a more discriminative ensemble classifier by seamlessly combining a set of base deep CNNs.
We present a method for translating music across musical instruments and styles. This method is based on unsupervised training of a multi-domain wavenet autoencoder, with a shared encoder and a domain-independent latent space that is trained end-to-end on waveforms. Employing a diverse training dataset and large net ca...
An automatic method for converting music between instruments and styles
Most existing defenses against adversarial attacks only consider robustness to L_p-bounded distortions. In reality, the specific attack is rarely known in advance and adversaries are free to modify images in ways which lie outside any fixed distortion model; for example, adversarial rotations lie outside the set of L_p...
We propose several new attacks and a methodology to measure robustness against unforeseen adversarial attacks.
Deep neural networks (DNNs) have emerged as a powerful approach in recent years, solving long-standing artificial intelligence (AI) supervised and unsupervised tasks in natural language processing, speech processing, computer vision, and other domains. In this paper, we attempt to apply DNNs to three different cyber s...
Deep-Net: Deep Neural Network for Cyber Security Use Cases
In this paper, we present an approach to learn recomposable motor primitives across large-scale and diverse manipulation demonstrations. Current approaches to decomposing demonstrations into primitives often assume manually defined primitives and bypass the difficulty of discovering these primitives. On the other hand,...
We learn a space of motor primitives from unannotated robot demonstrations, and show these primitives are semantically meaningful and can be composed for new robot tasks.
Using modern deep learning models to make predictions on time series data from wearable sensors generally requires large amounts of labeled data. However, labeling these large datasets can be both cumbersome and costly. In this paper, we apply weak supervision to time series data, and programmatically label a dataset f...
We demonstrate the feasibility of a weakly supervised time series classification approach for wearable sensor data.
Learning semantic correspondence between structured data (e.g., slot-value pairs) and associated texts is a core problem for many downstream NLP applications, e.g., data-to-text generation. Recent neural generation methods require large-scale training data. However, the collected data-text pairs for training...
We propose a local-to-global alignment framework to learn semantic correspondences from noisy data-text pairs with weak supervision
Imitation learning algorithms provide a simple and straightforward approach for training control policies via standard supervised learning methods. By maximizing the likelihood of good actions provided by an expert demonstrator, supervised imitation learning can produce effective policies without the algorithmic comple...
Learning how to reach goals from scratch by using imitation learning with data relabeling
Neural networks have recently shown excellent performance on numerous classification tasks. These networks often have a large number of parameters and thus require much data to train. When the number of training data points is small, however, a network with high flexibility will quickly overfit the training data, res...
In this paper, we propose an ensemble method called InterBoost for training neural networks for small-sample classification. The method has better generalization performance than other ensemble methods and significantly reduces variance.
Interpreting neural networks is a crucial and challenging task in machine learning. In this paper, we develop a novel framework for detecting statistical interactions captured by a feedforward multilayer neural network by directly interpreting its learned weights. Depending on the desired interactions, our method can a...
We detect statistical interactions captured by a feedforward multilayer neural network by directly interpreting its learned weights.
The neural linear model is a simple adaptive Bayesian linear regression method that has recently been used in a number of problems ranging from Bayesian optimization to reinforcement learning. Despite its apparent successes in these settings, to the best of our knowledge there has been no systematic exploration of its ...
We benchmark the neural linear model on the UCI and UCI "gap" datasets.
The reproducibility of reinforcement-learning research has been highlighted as a key challenge area in the field. In this paper, we present a case study in reproducing the results of one groundbreaking algorithm, AlphaZero, a reinforcement learning system that learns how to play Go at a superhuman level given only the ...
We reproduced AlphaZero on Google Cloud Platform
Generative adversarial networks (GANs) train implicit generative models through solving minimax problems. Such minimax problems are known as nonconvex-nonconcave, for which the dynamics of first-order methods are not well understood. In this paper, we consider GANs of the type of integral probability metrics (IPMs...
We establish global convergence to optimality for IPM-based GANs where the generator is an overparametrized neural network.
We present network embedding algorithms that capture information about a node from the local distribution over node attributes around it, as observed over random walks following an approach similar to Skip-gram. Observations from neighborhoods of different sizes are either pooled (AE) or encoded distinctly in a multi-s...
We develop efficient multi-scale approximate attributed network embedding procedures with provable properties.
Few-shot classification aims to learn a classifier to recognize unseen classes during training with limited labeled examples. While significant progress has been made, the growing complexity of network designs, meta-learning algorithms, and differences in implementation details make a fair comparison difficult. In this pap...
A detailed empirical study of few-shot classification that reveals challenges in the standard evaluation setting and points to a new direction.
Temporal logics are useful for describing dynamic system behavior, and have been successfully used as a language for goal definitions during task planning. Prior works on inferring temporal logic specifications have focused on "summarizing" the input dataset -- i.e., finding specifications that are satisfied by all pla...
We present a Bayesian inference model to infer contrastive explanations (as LTL specifications) describing how two sets of plan traces differ.
This work tackles the problem of characterizing and understanding the decision boundaries of neural networks with piecewise-linear activations. We use tropical geometry, a new development in the area of algebraic geometry, to provide a characterization of the decision boundaries of a simple neural networ...
Tropical geometry can be leveraged to represent the decision boundaries of neural networks and bring to light interesting insights.
First-order methods such as stochastic gradient descent (SGD) are currently the standard algorithm for training deep neural networks. Second-order methods, despite their better convergence rate, are rarely used in practice due to the prohibitive computational cost in calculating the second-order information. In this ...
A novel Gram-Gauss-Newton method to train neural networks, inspired by neural tangent kernel and Gauss-Newton method, with fast convergence speed both theoretically and experimentally.
Recent pretrained sentence encoders achieve state of the art results on language understanding tasks, but does this mean they have implicit knowledge of syntactic structures? We introduce a grammatically annotated development set for the Corpus of Linguistic Acceptability (CoLA; Warstadt et al., 2018), which we use to ...
We investigate the implicit syntactic knowledge of sentence embeddings using a new analysis set of grammatically annotated sentences with acceptability judgments.
When considering simultaneously a finite number of tasks, multi-output learning enables one to account for the similarities of the tasks via appropriate regularizers. We propose a generalization of the classical setting to a continuum of tasks by using vector-valued RKHSs.
We propose an extension of multi-output learning to a continuum of tasks using operator-valued kernels.
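For reference, in a vector-valued RKHS with operator-valued kernel K : X × X → L(Y), hypotheses take the familiar kernel-expansion form

```latex
f(x) \;=\; \sum_{i=1}^{n} K(x, x_i)\, c_i, \qquad c_i \in Y,
```

so a continuum of tasks can be handled by letting Y itself be a function space over the task parameter.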
We analyze the joint probability distribution on the lengths of the vectors of hidden variables in different layers of a fully connected deep network, when the weights and biases are chosen randomly according to Gaussian distributions, and the input is binary-valued. We show that, if the activation function satisf...
We prove that, for activation functions satisfying some conditions, as a deep network gets wide, the lengths of the vectors of hidden variables converge to a length map.
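The length map in question is the standard mean-field recursion for random Gaussian networks, stated here as background (the paper's exact conditions on the activation φ are in the truncated text): with weight variance σ_w²/N and bias variance σ_b², the normalized squared length q^ℓ of layer ℓ evolves as

```latex
q^{\ell+1} \;=\; \sigma_w^2 \int \frac{e^{-z^2/2}}{\sqrt{2\pi}}\,
                 \phi\!\big(\sqrt{q^{\ell}}\, z\big)^2 \, dz \;+\; \sigma_b^2 .
```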
Data augmentation is one of the most effective approaches for improving the accuracy of modern machine learning models, and it is also indispensable to train a deep model for meta-learning. However, most current data augmentation implementations applied in meta-learning are the same as those used in the conventional im...
We propose a data augmentation approach for meta-learning and prove that it is valid.
In this paper, we present a general framework for distilling expectations with respect to the Bayesian posterior distribution of a deep neural network, significantly extending prior work on a method known as "Bayesian Dark Knowledge." Our generalized framework applies to the case of classification models and takes a...
A general framework for distilling Bayesian posterior expectations for deep neural networks.
Variational Autoencoders (VAEs) have proven to be powerful latent variable models. However, the form of the approximate posterior can limit the expressiveness of the model. Categorical distributions are flexible and useful building blocks, for example in neural memory layers. We introduce the Hierarchical Discrete Var...
In this paper, we introduce a discrete hierarchy of categorical latent variables that we train using the Concrete/Gumbel-Softmax relaxation and we derive an upper bound for the absolute difference between the unbiased and the biased objective.
In this paper, we propose a novel technique for improving the stochastic gradient descent (SGD) method to train deep networks, which we term \emph{PowerSGD}. The proposed PowerSGD method simply raises the stochastic gradient to a certain power $\gamma\in[0,1]$ during iterations and introduces only one additional parame...
We propose a new class of optimizers for accelerated non-convex optimization via a nonlinear gradient transformation.
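A minimal sketch of the powered update described above; the sign-preserving form sign(g)·|g|^γ is an assumption, since the abstract only states that the stochastic gradient is raised to a power γ ∈ [0, 1].

```python
import numpy as np

def powersgd_step(theta, grad, lr=0.01, gamma=0.5):
    """One update of the 'powered' SGD described in the abstract.

    The sign-preserving form sign(g) * |g|**gamma is an assumption;
    gamma = 1 recovers plain SGD.
    """
    powered = np.sign(grad) * np.abs(grad) ** gamma
    return theta - lr * powered
```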
We aim to build complex humanoid agents that integrate perception, motor control, and memory. In this work, we partly factor this problem into low-level motor control from proprioception and high-level coordination of the low-level skills informed by vision. We develop an architecture capable of surprisingly flexible, ...
Solve tasks involving vision-guided humanoid locomotion, reusing locomotion behavior from motion capture data.
The gap between the empirical success of deep learning and the lack of strong theoretical guarantees calls for studying simpler models. By observing that a ReLU neuron is a product of a linear function with a gate (the latter determines whether the neuron is active or not), where both share a jointly trained weight vec...
We propose Gated Linear Unit networks — a model that performs similarly to ReLU networks on real data while being much easier to analyze theoretically.
Machine learning systems often encounter Out-of-Distribution (OoD) errors when dealing with testing data coming from a different distribution than the one used for training. With their growing use in critical applications, it becomes important to develop systems that are able to accurately quantify their predictive uncer...
We propose an architecture search method to identify a distribution of architectures and use it to construct a Bayesian ensemble for outlier detection.
Modern applications, from Autonomous Vehicles to Video Surveillance, generate massive amounts of image data. In this work we propose a novel image outlier detection approach (IOD for short) that leverages a cutting-edge image classifier to discover outliers without using any labeled outliers. We observe that although in...
A novel approach that detects outliers from image data while preserving image classification accuracy.
This paper introduces CloudLSTM, a new branch of recurrent neural models tailored to forecasting over data streams generated by geospatial point-cloud sources. We design a Dynamic Point-cloud Convolution (D-Conv) operator as the core component of CloudLSTMs, which performs convolution directly over point-clouds and ext...
This paper introduces CloudLSTM, a new branch of recurrent neural models tailored to forecasting over data streams generated by geospatial point-cloud sources.
Knowledge Graphs (KG), composed of entities and relations, provide a structured representation of knowledge. For easy access to statistical approaches on relational data, multiple methods to embed a KG as components of R^d have been introduced. We propose TransINT, a novel and interpretable KG embedding method that iso...
We propose TransINT, a novel and interpretable KG embedding method that isomorphically preserves the implication ordering among relations in the embedding space in an explainable, robust, and geometrically coherent way.
Unsupervised domain-adaptive object detection aims to learn a robust detector under domain shift, where the training (source) domain is label-rich with bounding-box annotations, while the testing (target) domain is label-agnostic and the feature distributions between training and testing domains are dissi...
We introduce a new gradient detach based complementary objective training strategy for domain adaptive object detection.
Convolutional neural networks (CNN) have become the most successful and popular approach in many vision-related domains. While CNNs are particularly well-suited for capturing a proper hierarchy of concepts from real-world images, they are limited to domains where data is abundant. Recent attempts have looked into mitig...
We propose a novel approach for connecting task-specific networks in a multi-task learning setting based on recent residual network advances.
Zero-Shot Learning (ZSL) is a classification task where some classes, referred to as unseen classes, have no labeled training images. Instead, we only have side information (or descriptions) about seen and unseen classes, often in the form of semantic or descriptive attributes. The lack of training images from a set of classes r...
How to use cross-entropy loss for zero-shot learning with soft labeling on unseen classes: a simple and effective solution that achieves state-of-the-art performance on five ZSL benchmark datasets.
In complex tasks, such as those with large combinatorial action spaces, random exploration may be too inefficient to achieve meaningful learning progress. In this work, we use a curriculum of progressively growing action spaces to accelerate learning. We assume the environment is out of our control, but that the agent ...
Progressively growing the available action space is a great curriculum for learning agents
Recently, researchers have discovered that state-of-the-art object classifiers can be fooled easily by small perturbations in the input that are unnoticeable to human eyes. It is known that an attacker can generate strong adversarial examples if she knows the classifier parameters. Conversely, a defender can robustify the...
A game-theoretic solution to adversarial attacks and defenses.
Supervised learning with irregularly sampled time series has been a challenge for machine learning methods due to the obstacle of dealing with irregular time intervals. Some recent papers introduced recurrent neural network models that deal with irregularity, but most of them rely on complex mechanisms to achieve a ...
A novel method to create dense descriptors of time (Time Embeddings) to make simple models understand temporal structures
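A hypothetical sketch of such a dense time descriptor, in the spirit of Transformer positional encodings applied to continuous timestamps; the paper's exact Time Embedding construction may differ.

```python
import numpy as np

def time_embedding(t, dim=16, max_timescale=1e4):
    """Embed a continuous timestamp t into a dense vector of size dim.

    Assumed sinusoidal construction (dim should be even): half the
    dimensions carry sines, half carry cosines, at geometric frequencies.
    """
    half = dim // 2
    freqs = max_timescale ** (-np.arange(half) / half)  # geometric frequencies
    angles = t * freqs
    return np.concatenate([np.sin(angles), np.cos(angles)])
```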
Community detection in graphs can be solved via spectral methods or posterior inference under certain probabilistic graphical models. Focusing on random graph families such as the stochastic block model, recent research has unified both approaches and identified both statistical and computational detection thresholds i...
We propose a novel graph neural network architecture based on the non-backtracking matrix defined over the edge adjacencies and demonstrate its effectiveness in community detection tasks on graphs.
Residual networks (Resnets) have become a prominent architecture in deep learning. However, a comprehensive understanding of Resnets is still a topic of ongoing research. A recent view argues that Resnets perform iterative refinement of features. We attempt to further expose properties of this aspect. To this end, we s...
Residual connections really perform iterative inference
We develop end-to-end learned reconstructions for lensless mask-based cameras, including an experimental system for capturing aligned lensless and lensed images for training. Various reconstruction methods are explored, on a scale from classic iterative approaches (based on the physical imaging model) to deep learned...
We improve the reconstruction time and quality on an experimental mask-based lensless imager using an end-to-end learning approach which incorporates knowledge of the imaging model.
Deep learning, a rebranding of deep neural network research, has achieved remarkable success in recent years. With multiple hidden layers, deep learning models aim at computing hierarchical feature representations of the observational data. Meanwhile, due to its severe disadvantages in data consumption, com...
We introduce a new representation learning model, namely “Sample-Ensemble Genetic Evolutionary Network” (SEGEN), which can serve as an alternative approach to deep learning models.
How can we teach artificial agents to use human language flexibly to solve problems in a real-world environment? We have one example in nature of agents being able to solve this problem: human babies eventually learn to use human language to solve problems, and they are taught with an adult human-in-the-loop. Unfortuna...
We propose to use meta-learning for more efficient language learning, via a kind of 'domain randomization'.
We study the problem of fitting task-specific learning rate schedules from the perspective of hyperparameter optimization. This allows us to explicitly search for schedules that achieve good generalization. We describe the structure of the gradient of a validation error w.r.t. the learning rates, the hypergradient, a...
MARTHE: a new method to fit task-specific learning rate schedules from the perspective of hyperparameter optimization
Recent years have witnessed some exciting developments in the domain of generating images from scene-based text descriptions. These approaches have primarily focused on generating images from a static text description and are limited to generating images in a single pass. They are unable to generate an image interactiv...
Interactively generating image from incrementally growing scene graphs in multiple steps using GANs while preserving the contents of image generated in previous steps
In some important computer vision domains, such as medical or hyperspectral imaging, we care about the classification of tiny objects in large images. However, most Convolutional Neural Networks (CNNs) for image classification were developed using biased datasets that contain large objects, in mostly central image posi...
We study low- and very-low-signal-to-noise classification scenarios, where objects that correlate with class label occupy tiny proportion of the entire image (e.g. medical or hyperspectral imaging).
Recent trends of incorporating attention mechanisms in vision have led researchers to reconsider the supremacy of convolutional layers as a primary building block. Beyond helping CNNs to handle long-range dependencies, Ramachandran et al. (2019) showed that attention can completely replace convolution and achieve state...
A self-attention layer can perform convolution and often learns to do so in practice.
We introduce a “learning-based” algorithm for the low-rank decomposition problem: given an $n \times d$ matrix $A$, and a parameter $k$, compute a rank-$k$ matrix $A'$ that minimizes the approximation loss $||A- A'||_F$. The algorithm uses a training set of input matrices in order to optimize its performance. Specifica...
Learning-based algorithms can improve upon the performance of classical algorithms for the low-rank approximation problem while retaining the worst-case guarantee.
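The classical baseline referenced here is the truncated SVD, which by the Eckart-Young theorem minimizes the Frobenius-norm approximation loss over rank-k matrices:

```python
import numpy as np

def best_rank_k(A, k):
    """Classical best rank-k approximation via truncated SVD.

    By Eckart-Young, this minimizes ||A - A'||_F over rank-k matrices;
    it is the baseline a learned sketching algorithm is measured against.
    """
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vt[:k]
```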
Neural conversational models are widely used in applications like personal assistants and chat bots. These models seem to give better performance when operating at the word level. However, for fusional languages like French, Russian, and Polish, the vocabulary size sometimes becomes infeasible, since most words have lots of w...
We propose an architecture to solve the morphological agreement task.
This paper proposes the use of spectral element methods \citep{canuto_spectral_1988} for fast and accurate training of Neural Ordinary Differential Equations (ODE-Nets; \citealp{Chen2018NeuralOD}) for system identification. This is achieved by expressing their dynamics as a truncated series of Legendre polynomials. The...
This paper proposes the use of spectral element methods for fast and accurate training of Neural Ordinary Differential Equations for system identification.
Exploration in sparse reward reinforcement learning remains an open challenge. Many state-of-the-art methods use intrinsic motivation to complement the sparse extrinsic reward signal, giving the agent more opportunities to receive feedback during exploration. Commonly these signals are added as bonus rewards, which res...
A new intrinsic reward signal based on successor features and a novel way to combine extrinsic and intrinsic reward.
Meta-Reinforcement learning approaches aim to develop learning procedures that can adapt quickly to a distribution of tasks with the help of a few examples. Developing efficient exploration strategies capable of finding the most useful samples becomes critical in such settings. Existing approaches to finding efficient ...
We propose to use a separate exploration policy to collect the pre-adaptation trajectories in MAML. We also show that using a self-supervised objective in the inner loop leads to more stable training and much better performance.
The “Supersymmetric Artificial Neural Network” in deep learning (denoted $(x;\theta,\bar{\theta})^{\top}w$) espouses the importance of considering biological constraints with the aim of further generalizing backward propagation. Looking at the progression of ‘solution geometries’, going from SO(n) representations (such as Perceptron l...
Generalizing backward propagation, using formal methods from supersymmetry.
Regularization-based continual learning approaches generally prevent catastrophic forgetting by augmenting the training loss with an auxiliary objective. However in most practical optimization scenarios with noisy data and/or gradients, it is possible that stochastic gradient descent can inadvertently change critical p...
Regularizing the optimization trajectory with the Fisher information of old tasks reduces catastrophic forgetting greatly
We study the problem of generating source code in a strongly typed, Java-like programming language, given a label (for example, a set of API calls or types) carrying a small amount of information about the code that is desired. The generated programs are expected to respect a "realistic" relationship between progra...
We give a method for generating type-safe programs in a Java-like language, given a small amount of syntactic information about the desired code.
We propose an approach for sequence modeling based on autoregressive normalizing flows. Each autoregressive transform, acting across time, serves as a moving reference frame for modeling higher-level dynamics. This technique provides a simple, general-purpose method for improving sequence modeling, with connections to ...
We show how autoregressive flows can be used to improve sequential latent variable models.
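A minimal sketch of the affine autoregressive transform acting across time, as described above; `shift_fn` and `log_scale_fn` are hypothetical conditioners on the past (learned networks in the paper), and the sequence is taken to be scalar for simplicity.

```python
import numpy as np

def affine_autoregressive_transform(x, shift_fn, log_scale_fn):
    """Normalize a scalar sequence x with an affine autoregressive flow:
    y_t = (x_t - shift(x_{<t})) * exp(-log_scale(x_{<t})).

    Returns the transformed sequence and the log-determinant of the
    Jacobian, which is what a flow-based likelihood needs.
    """
    y = np.empty_like(x, dtype=float)
    log_det = 0.0
    for t in range(len(x)):
        mu = shift_fn(x[:t])
        log_s = log_scale_fn(x[:t])
        y[t] = (x[t] - mu) * np.exp(-log_s)
        log_det -= log_s            # log|det dy/dx| accumulates -log_scale
    return y, log_det
```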
It is well-known that many machine learning models are susceptible to adversarial attacks, in which an attacker evades a classifier by making small perturbations to inputs. This paper discusses how industrial copyright detection tools, which serve a central role on the web, are susceptible to adversarial attacks. We di...
Adversarial examples can fool YouTube's copyright detection system
Equilibrium Propagation (EP) is a learning algorithm that bridges Machine Learning and Neuroscience, by computing gradients closely matching those of Backpropagation Through Time (BPTT), but with a learning rule local in space. Given an input x and associated target y, EP proceeds in two phases: in the first phase neu...
We propose a continual version of Equilibrium Propagation, where neuron and synapse dynamics occur simultaneously throughout the second phase, with theoretical guarantees and numerical simulations.
There are two main lines of research on visual reasoning: neural module network (NMN) with explicit multi-hop reasoning through handcrafted neural modules, and monolithic network with implicit reasoning in the latent feature space. The former excels in interpretability and compositionality, while the latter usually ach...
We propose a new Meta Module Network to resolve some of the restrictions of previous Neural Module Networks and achieve strong performance on a realistic visual reasoning dataset.
We propose a new perspective on adversarial attacks against deep reinforcement learning agents. Our main contribution is CopyCAT, a targeted attack able to consistently lure an agent into following an outsider's policy. It is pre-computed, and therefore fast at inference time, and could thus be usable in a real-time scenario. We sh...
We propose a new attack for taking full control of neural policies in realistic settings.
Cold-start and efficiency issues of the Top-k recommendation are critical to large-scale recommender systems. Previous hybrid recommendation methods are effective in dealing with the cold-start issues by extracting real latent factors of cold-start items (users) from side information, but they still suffer from low efficiency i...
Our method generates effective hash codes for efficient cold-start recommendation and meanwhile provides a feasible marketing strategy.
Recent efforts to combine Representation Learning with Formal Methods, commonly known as the Neuro-Symbolic Methods, have given rise to a new trend of applying rich neural architectures to solve classical combinatorial optimization problems. In this paper, we propose a neural framework that can learn to solve the Circu...
We propose a neural framework that can learn to solve the Circuit Satisfiability problem from (unlabeled) circuit instances.
Sequence generation models such as recurrent networks can be trained with a diverse set of learning algorithms. For example, maximum likelihood learning is simple and efficient, yet suffers from the exposure bias problem. Reinforcement learning like policy gradient addresses the problem but can have prohibitively poor ...
A unified perspective of various learning algorithms for sequence generation, such as MLE, RL, RAML, data noising, etc.
We are reporting on the SHINRA project, a project for structuring Wikipedia with a collaborative construction scheme. The goal of the project is to create a huge and well-structured knowledge base to be used in NLP applications, such as QA, dialogue systems, and explainable NLP systems. It is created based on a scheme of "Re...
We introduce a "Resource by Collaborative Construction" scheme to create KB, structured Wikipedia
Recent image super-resolution (SR) studies leverage very deep convolutional neural networks and the rich hierarchical features they offer, which leads to better reconstruction performance than conventional methods. However, the small receptive fields in the up-sampling and reconstruction process of those models stop t...
A state-of-the-art model based on global reasoning for image super-resolution