| source | target |
|---|---|
We present an approach for expanding taxonomies with synonyms, or aliases. We target large shopping taxonomies, with thousands of nodes. A comprehensive set of entity aliases is an important component of identifying entities in unstructured text such as product reviews or search queries. Our method consists of two stag... | We use machine learning to generate synonyms for large shopping taxonomies. |
Relational reasoning, the ability to model interactions and relations between objects, is valuable for robust multi-object tracking and pivotal for trajectory prediction. In this paper, we propose MOHART, a class-agnostic, end-to-end multi-object tracking and trajectory prediction algorithm, which explicitly accounts f... | MOHART uses a self-attention mechanism to perform relational reasoning in multi-object tracking. |
We investigate the combination of actor-critic reinforcement learning algorithms with uniform large-scale experience replay and propose solutions for two challenges: (a) efficient actor-critic learning with experience replay (b) stability of very off-policy learning. We employ those insights to accelerate hyper-paramet... | We investigate and propose solutions for two challenges in reinforcement learning: (a) efficient actor-critic learning with experience replay (b) stability of very off-policy learning. |
Stochastic gradient descent (SGD) is the workhorse of modern machine learning. Sometimes, there are many different potential gradient estimators that can be used. When so, choosing the one with the best tradeoff between cost and variance is important. This paper analyzes the convergence rates of SGD as a function of ti... | We propose a gradient estimator selection algorithm with the aim of improving optimization efficiency. |
We study the problem of alleviating the instability issue in the GAN training procedure via new architecture design. The discrepancy between the minimax and maximin objective values could serve as a proxy for the difficulties that the alternating gradient descent encounters in the optimization of GANs. In this work, we... | We study the problem of alleviating the instability issue in the GAN training procedure via new architecture design, with theoretical guarantees. |
Nesterov SGD is widely used for training modern neural networks and other machine learning models. Yet, its advantages over SGD have not been theoretically clarified. Indeed, as we show in this paper, both theoretically and empirically, Nesterov SGD with any parameter selection does not in general provide acceleratio... | This work proves the non-acceleration of Nesterov SGD with any hyper-parameters, and proposes a new algorithm that provably accelerates SGD in the over-parameterized setting. |
We propose the Fixed Grouping Layer (FGL); a novel feedforward layer designed to incorporate the inductive bias of structured smoothness into a deep learning model. FGL achieves this goal by connecting nodes across layers based on spatial similarity. The use of structured smoothness, as implemented by FGL, is motivated... | A feedforward layer to incorporate structured smoothness into a deep learning model |
Convolutional networks are not aware of an object's geometric variations, which leads to inefficient utilization of model and data capacity. To overcome this issue, recent works on deformation modeling seek to spatially reconfigure the data towards a common arrangement such that semantic recognition suffers less from d... | Don't deform your convolutions -- deform your kernels. |
Inferring the structural properties of a protein from its amino acid sequence is a challenging yet important problem in biology. Structures are not known for the vast majority of protein sequences, but structure is critical for understanding function. Existing approaches for detecting structural similarity between prot... | We present a method for learning protein sequence embedding models using structural information in the form of global structural similarity between proteins and within protein residue-residue contacts. |
Inspired by the adaptation phenomenon of biological neuronal firing, we propose regularity normalization: a reparameterization of the activation in the neural network that takes into account the statistical regularity in the implicit space. By considering the neural network optimization process as a model selection prob... | Considering the neural network optimization process as a model selection problem, we introduce a biologically plausible normalization method that extracts statistical regularity under the MDL principle to tackle imbalanced and limited data issues. |
In this paper we study generative modeling via autoencoders while using the elegant geometric properties of the optimal transport (OT) problem and the Wasserstein distances. We introduce Sliced-Wasserstein Autoencoders (SWAE), which are generative models that enable one to shape the distribution of the latent space int... | "Generative modeling with no need for adversarial training" |
The goal of imitation learning (IL) is to learn a good policy from high-quality demonstrations. However, the quality of demonstrations in reality can be diverse, since it is easier and cheaper to collect demonstrations from a mix of experts and amateurs. IL in such situations can be challenging, especially when the lev... | We propose an imitation learning method to learn from diverse-quality demonstrations collected by demonstrators with different levels of expertise. |
With a growing number of available services, each having slightly different parameters, preconditions and effects, automated planning on general semantic services becomes highly relevant. However, most existing planners only consider PDDL, or if they claim to use OWL-S, they usually translate it to PDDL, losing much of t... | Describing a semantic heuristic that builds upon an OWL-S service description and uses word and sentence distance measures to evaluate the usefulness of services for a given goal. |
In colored graphs, node classes are often associated with either their neighbors class or with information not incorporated in the graph associated with each node. We here propose that node classes are also associated with topological features of the nodes. We use this association to improve Graph machine learning in g... | Topology-Based Graph Convolutional Network (GCN) |
Flies and mice are species separated by 600 million years of evolution, yet have evolved olfactory systems that share many similarities in their anatomic and functional organization. What functions do these shared anatomical and functional features serve, and are they optimal for odor sensing? In this study, we address... | Artificial neural networks evolved the same structures present in the olfactory systems of flies and mice after being trained to classify odors |
The recent direction of unpaired image-to-image translation is on one hand very exciting as it alleviates the big burden in obtaining label-intensive pixel-to-pixel supervision, but it is on the other hand not fully satisfactory due to the presence of artifacts and degenerated transformations. In this paper, we take a ... | Smooth regularization over sample graph for unpaired image-to-image translation results in significantly improved consistency |
Multiview stereo aims to reconstruct scene depth from images acquired by a camera under arbitrary motion. Recent methods address this problem through deep learning, which can utilize semantic cues to deal with challenges such as textureless and reflective regions. In this paper, we present a convolutional neural networ... | A convolutional neural network for multi-view stereo matching whose design is inspired by best practices of traditional geometry-based approaches |
The ability to design biological structures such as DNA or proteins would have considerable medical and industrial impact. Doing so presents a challenging black-box optimization problem characterized by the large-batch, low round setting due to the need for labor-intensive wet lab evaluations. In response, we propose u... | We augment model-free policy learning with sequence-level surrogate reward functions and a count-based visitation bonus and demonstrate effectiveness in the large batch, low-round regime seen in designing DNA and protein sequences. |
Achieving machine intelligence requires a smooth integration of perception and reasoning, yet models developed to date tend to specialize in one or the other; sophisticated manipulation of symbols acquired from rich perceptual spaces has so far proved elusive. Consider a visual arithmetic task, where the goal is to car... | We use reinforcement learning to train an agent to solve a set of visual arithmetic tasks using provided pre-trained perceptual modules and transformations of internal representations created by those modules. |
Animals develop novel skills not only through the interaction with the environment but also from the influence of the others. In this work we model the social influence into the scheme of reinforcement learning, enabling the agents to learn both from the environment and from their peers. Specifically, we first define a... | A new RL algorithm called Interior Policy Differentiation is proposed to learn a collection of diverse policies for a given primal task. |
Generative adversarial networks (GANs) have been extremely effective in approximating complex distributions of high-dimensional, input data samples, and substantial progress has been made in understanding and improving GAN performance in terms of both theory and application. However, we currently lack quantitative me... | An empirical evaluation on generative adversarial networks |
Knowledge bases (KB) are often represented as a collection of facts in the form (HEAD, PREDICATE, TAIL), where HEAD and TAIL are entities while PREDICATE is a binary relationship that links the two. It is a well-known fact that knowledge bases are far from complete, and hence the plethora of research on KB completion m... | Prediction of numerical attribute values associated with entities in knowledge bases. |
We propose procedures for evaluating and strengthening contextual embedding alignment and show that they are useful in analyzing and improving multilingual BERT. In particular, after our proposed alignment procedure, BERT exhibits significantly improved zero-shot performance on XNLI compared to the base model, remarkab... | We propose procedures for evaluating and strengthening contextual embedding alignment and show that they both improve multilingual BERT's zero-shot XNLI transfer and provide useful insights into the model. |
Neural networks have reached outstanding performance for solving various ill-posed inverse problems in imaging. However, drawbacks of end-to-end learning approaches in comparison to classical variational methods are the requirement of expensive retraining for even slightly different problem statements and the lack of p... | We use neural networks trained for image denoising as plug-and-play priors in energy minimization algorithms for image reconstruction problems with provable convergence. |
Knowledge graph embedding research has overlooked the problem of probability calibration. We show popular embedding models are indeed uncalibrated. That means probability estimates associated to predicted triples are unreliable. We present a novel method to calibrate a model when ground truth negatives are not availabl... | We propose a novel method to calibrate knowledge graph embedding models without the need of negative examples. |
As an emerging topic in face recognition, designing margin-based loss functions can increase the feature margin between different classes for enhanced discriminability. More recently, absorbing the idea of mining-based strategies is adopted to emphasize the misclassified samples and achieve promising results. However, ... | A novel Adaptive Curriculum Learning loss for deep face recognition |
Adversarial examples are perturbed inputs designed to fool machine learning models. Adversarial training injects such examples into training data to increase robustness. To scale this technique to large datasets, perturbations are crafted using fast single-step methods that maximize a linear approximation of the model'... | Adversarial training with single-step methods overfits, and remains vulnerable to simple black-box and white-box attacks. We show that including adversarial examples from multiple sources helps defend against black-box attacks. |
Adversarial feature learning (AFL) is a powerful framework to learn representations invariant to a nuisance attribute, which uses an adversarial game between a feature extractor and a categorical attribute classifier. It is theoretically sound in that it maximizes conditional entropy between attribute and representa... | This paper proposes a new approach to incorporating desired invariance into representation learning, based on the observation that the current state-of-the-art AFL has practical issues. |
We study the properties of common loss surfaces through their Hessian matrix. In particular, in the context of deep learning, we empirically show that the spectrum of the Hessian is composed of two parts: (1) the bulk centered near zero, (2) and outliers away from the bulk. We present numerical evidence and mathematica... | The loss surface is *very* degenerate, and there are no barriers between large batch and small batch solutions. |
The allocation of computation resources in the backbone is a crucial issue in object detection. However, classification allocation pattern is usually adopted directly to object detector, which is proved to be sub-optimal. In order to reallocate the engaged computation resources in a more efficient way, we present CR-NA... | We propose CR-NAS to reallocate engaged computation resources in different resolution and spatial position. |
Uncertainty is a very important feature of the intelligence and helps the brain become a flexible, creative and powerful intelligent system. The crossbar-based neuromorphic computing chips, in which the computing is mainly performed by analog circuits, have the uncertainty and can be used to imitate the brain. However,... | A training method that can make deep learning algorithms work better on neuromorphic computing chips with uncertainty |
An important property of image classification systems in the real world is that they both accurately classify objects from target classes (``knowns'') and safely reject unknown objects (``unknowns'') that belong to classes not present in the training data. Unfortunately, although the strong generalization ability of ex... | A CNN architecture that can effectively reject the unknowns in test objects |
Multivariate time series with missing values are common in areas such as healthcare and finance, and have grown in number and complexity over the years. This raises the question whether deep learning methodologies can outperform classical data imputation methods in this domain. However, naive applications of deep learn... | We perform amortized variational inference on a latent Gaussian process model to achieve superior imputation performance on multivariate time series with missing data. |
In practice it is often found that large over-parameterized neural networks generalize better than their smaller counterparts, an observation that appears to conflict with classical notions of function complexity, which typically favor smaller models. In this work, we investigate this tension between complexity and gen... | We perform massive experimental studies characterizing the relationships between Jacobian norms, linear regions, and generalization. |
Deep reinforcement learning (RL) methods generally engage in exploratory behavior through noise injection in the action space. An alternative is to add noise directly to the agent's parameters, which can lead to more consistent exploration and a richer set of behaviors. Methods such as evolutionary strategies use param... | Parameter space noise allows reinforcement learning algorithms to explore by perturbing parameters instead of actions, often leading to significantly improved exploration performance. |
Pre-trained deep neural network language models such as ELMo, GPT, BERT and XLNet have recently achieved state-of-the-art performance on a variety of language understanding tasks. However, their size makes them impractical for a number of scenarios, especially on mobile and edge devices. In particular, the input word e... | We present novel distillation techniques that enable training student models with different vocabularies and compress BERT by 60x with minor performance drop. |
Human reasoning involves recognising common underlying principles across many examples by utilising variables. The by-products of such reasoning are invariants that capture patterns across examples such as "if someone went somewhere then they are there" without mentioning specific people or places. Humans learn what va... | End-to-end learning of invariant representations with variables across examples such as if someone went somewhere then they are there. |
We propose a new learning-based approach to solve ill-posed inverse problems in imaging. We address the case where ground truth training samples are rare and the problem is severely ill-posed---both because of the underlying physics and because we can only get few measurements. This setting is common in geophysical ima... | We solve ill-posed inverse problems with scarce ground truth examples by estimating an ensemble of random projections of the model instead of the model itself. |
Designing architectures for deep neural networks requires expert knowledge and substantial computation time. We propose a technique to accelerate architecture selection by learning an auxiliary HyperNet that generates the weights of a main model conditioned on that model's architecture. By comparing the relative valida... | A technique for accelerating neural architecture selection by approximating the weights of each candidate architecture instead of training them individually. |
Textual entailment (or NLI) data has proven useful as pretraining data for tasks requiring language understanding, even when building on an already-pretrained model like RoBERTa. The standard protocol for collecting NLI was not designed for the creation of pretraining data, and it is likely far from ideal for this purp... | We propose four new ways of collecting NLI data. Some help slightly as pretraining data, all help reduce annotation artifacts. |
Predicting the future in real-world settings, particularly from raw sensory observations such as images, is exceptionally challenging. Real-world events can be stochastic and unpredictable, and the high dimensionality and complexity of natural images requires the predictive model to build an intricate understanding of ... | Stochastic variational video prediction in real-world settings. |
In multi-agent systems, complex interacting behaviors arise due to the high correlations among agents. However, previous work on modeling multi-agent interactions from demonstrations is primarily constrained by assuming the independence among policies and their reward structures. In this paper, we cast the multi-agen... | Modeling complex multi-agent interactions under multi-agent imitation learning framework with explicit modeling of correlated policies by approximating opponents’ policies. |
Plagiarism and text reuse have become more widespread with the development of the Internet. It is therefore important to check scientific papers for cheating, especially in academia. Existing plagiarism detection systems show good performance and have huge source databases. Thus it is now not enough just to cop... | A system for cross-lingual (English-Russian) plagiarism detection |
Data parallelism has become a dominant method to scale Deep Neural Network (DNN) training across multiple nodes. Since the synchronization of the local models or gradients can be a bottleneck for large-scale distributed training, compressing communication traffic has gained widespread attention recently. Among seve... | We propose an implementation to accelerate data-parallel DNN training by reducing the communication bandwidth requirement. |
Backpropagation is driving today's artificial neural networks. However, despite extensive research, it remains unclear if the brain implements this algorithm. Among neuroscientists, reinforcement learning (RL) algorithms are often seen as a realistic alternative. However, the convergence rate of such learning scales po... | Perturbations can be used to learn feedback weights on large fully connected and convolutional networks. |
Most deep neural networks (DNNs) require complex models to achieve high performance. Parameter quantization is widely used for reducing the implementation complexities. Previous studies on quantization were mostly based on extensive simulation using training data. We choose a different approach and attempt to measure t... | We suggest the sufficient number of bits for representing weights of DNNs and the optimum bits are conservative when solving real problems. |
Inspired by the combination of feedforward and iterative computations in the visual cortex, and taking advantage of the ability of denoising autoencoders to estimate the score of a joint distribution, we propose a novel approach to iterative inference for capturing and exploiting the complex joint distribution of outpu... | Refining segmentation proposals by performing iterative inference with conditional denoising autoencoders. |
Inspired by the success of self attention mechanism and Transformer architecture in sequence transduction and image generation applications, we propose novel self attention-based architectures to improve the performance of adversarial latent code-based schemes in text generation. Adversarial latent code-based text ... | We propose a self-attention based GAN architecture for unconditional text generation and improve on previous adversarial code-based results. |
Watermarks have been used for various purposes. Recently, researchers started to look into using them for deep neural networks. Some works try to hide attack triggers on their adversarial samples when attacking neural networks and others want to watermark neural networks to prove their ownership against plagiarism. Imp... | We propose novel watermark encoder-decoder neural networks. They perform a cooperative game to define their own watermarking scheme. People do not need to design watermarking methods any more. |
We derive an unbiased estimator for expectations over discrete random variables based on sampling without replacement, which reduces variance as it avoids duplicate samples. We show that our estimator can be derived as the Rao-Blackwellization of three different estimators. Combining our estimator with REINFORCE, we ob... | We derive a low-variance, unbiased gradient estimator for expectations over discrete random variables based on sampling without replacement |
We introduce a parameter sharing scheme, in which different layers of a convolutional neural network (CNN) are defined by a learned linear combination of parameter tensors from a global bank of templates. Restricting the number of templates yields a flexible hybridization of traditional CNNs and recurrent networks. ... | We propose a method that enables CNN folding to create recurrent connections |
Gradient clipping is a widely-used technique in the training of deep networks, and is generally motivated from an optimisation lens: informally, it controls the dynamics of iterates, thus enhancing the rate of convergence to a local minimum. This intuition has been made precise in a line of recent works, which show tha... | Gradient clipping doesn't endow robustness to label noise, but a simple loss-based variant does. |
Among deep generative models, flow-based models, simply referred to as \emph{flow}s in this paper, differ from other models in that they provide tractable likelihood. Besides being an evaluation metric of synthesized data, flows are supposed to be robust against out-of-distribution~(OoD) inputs since they do not discard ... | We show experimental evidence of the weak correlation between flows' likelihoods and image semantics. |
This paper investigates strategies that defend against adversarial-example attacks on image-classification systems by transforming the inputs before feeding them to the system. Specifically, we study applying image transformations such as bit-depth reduction, JPEG compression, total variance minimization, and image qui... | We apply a model-agnostic defense strategy against adversarial examples and achieve 60% white-box accuracy and 90% black-box accuracy against major attack algorithms. |
In this paper, we propose the Asynchronous Accelerated Nonuniform Randomized Block Coordinate Descent algorithm (A2BCD). We prove A2BCD converges linearly to a solution of the convex minimization problem at the same rate as NU_ACDM, so long as the maximum delay is not too large. This is the first asynchronous Nesterov... | We give the first-ever convergence proof of an asynchronous accelerated algorithm that attains a speedup. |
A framework for efficient Bayesian inference in probabilistic programs is introduced by embedding a sampler inside a variational posterior approximation. Its strength lies in both ease of implementation and automatically tuning sampler parameters to speed up mixing time. Several strategies to approximate the evidence l... | We embed SG-MCMC samplers inside a variational approximation |
The point estimates of ReLU classification networks, arguably the most widely used neural network architecture, have recently been shown to have arbitrarily high confidence far away from the training data. This architecture is thus not robust, e.g., against out-of-distribution data. Approximate Bayesian posteriors on t... | We argue theoretically that simply assuming the weights of a ReLU network to be Gaussian distributed (without even a Bayesian formalism) could fix this issue; for a more calibrated uncertainty, a simple Bayesian method could already be sufficient. |
Word alignments are useful for tasks like statistical and neural machine translation (NMT) and annotation projection. Statistical word aligners perform well, as do methods that extract alignments jointly with translations in NMT. However, most approaches require parallel training data and quality decreases as less trai... | We use representations trained without any parallel data for creating word alignments. |
Recent studies have shown the vulnerability of reinforcement learning (RL) models in noisy settings. The sources of noises differ across scenarios. For instance, in practice, the observed reward channel is often subject to noise (e.g., when observed rewards are collected through sensors), and thus observed rewards may ... | A new approach for learning with noisy rewards in reinforcement learning |
Training recurrent neural networks (RNNs) on long sequences using backpropagation through time (BPTT) remains a fundamental challenge. It has been shown that adding a local unsupervised loss term into the optimization objective makes the training of RNNs on long sequences more effective. While the importance of an ... | This paper focuses upon a traditionally overlooked mechanism -- an architecture with explicitly designed private and shared hidden units designed to mitigate the detrimental influence of the auxiliary unsupervised loss over the main supervised task. |
We investigate multi-task learning approaches which use a shared feature representation for all tasks. To better understand the transfer of task information, we study an architecture with a shared module for all tasks and a separate output module for each task. We study the theory of this setting on linear and ReLU-act... | A Theoretical Study of Multi-Task Learning with Practical Implications for Improving Multi-Task Training and Transfer Learning |
We review three limitations of BLEU and ROUGE – the most popular metrics used to assess reference summaries against hypothesis summaries, come up with criteria for what a good metric should behave like and propose concrete ways to assess the performance of a metric in detail and show the potential of Transformers-ba... | New method for assessing the quality of similarity evaluators and showing potential of Transformer-based language models in replacing BLEU and ROUGE. |
In this paper, we explore meta-learning for few-shot text classification. Meta-learning has shown strong performance in computer vision, where low-level patterns are transferable across learning tasks. However, directly applying this approach to text is challenging–lexical features highly informative for one task maybe... | Meta-learning methods used for vision, directly applied to NLP, perform worse than nearest neighbors on new classes; we can do better with distributional signatures. |
The description of neural computations in the field of neuroscience relies on two competing views: (i) a classical single-cell view that relates the activity of individual neurons to sensory or behavioural variables, and focuses on how different cell classes map onto computations; (ii) a more recent population view tha... | A theoretical analysis of a new class of RNNs, trained on neuroscience tasks, allows us to identify the role of dynamical dimensionality and cell classes in neural computations. |
We propose the fusion discriminator, a single unified framework for incorporating conditional information into a generative adversarial network (GAN) for a variety of distinct structured prediction tasks, including image synthesis, semantic segmentation, and depth estimation. Much like commonly used convolutional neura... | We propose the fusion discriminator, a novel architecture for incorporating conditional information into the discriminator of GANs for structured prediction tasks. |
Model-based reinforcement learning (MBRL) has been shown to be a powerful framework for data-efficiently learning control of continuous tasks. Recent work in MBRL has mostly focused on using more advanced function approximators and planning schemes, leaving the general framework virtually unchanged since its conception... | We define, explore, and begin to address the objective mismatch issue in model-based reinforcement learning. |
There has recently been a heated debate (e.g. Schwartz-Ziv & Tishby (2017), Saxe et al. (2018), Noshad et al. (2018), Goldfeld et al. (2018)) about measuring the information flow in Deep Neural Networks using techniques from information theory. It is claimed that Deep Neural Networks in general have good generalization... | We give a detailed explanation of the trajectories in the information plane and investigate its usage for neural network design (pruning) |
Formal understanding of the inductive bias behind deep convolutional networks, i.e. the relation between the network's architectural features and the functions it is able to model, is limited. In this work, we establish a fundamental connection between the fields of quantum physics and deep learning, and use it for obt... | Employing quantum entanglement measures for quantifying correlations in deep learning, and using the connection to fit the deep network's architecture to correlations in the data. |
Deep learning algorithms are increasingly used in modeling chemical processes. However, black box predictions without rationales have limited use in practical applications, such as drug design. To this end, we learn to identify molecular substructures -- rationales -- that are associated with the target chemical prope... | We use reinforcement learning over molecular graphs to generate rationales for interpretable molecular property prediction. |
In recent years, substantial progress has been made on graph convolutional networks (GCN). In this paper, for the first time, we theoretically analyze the connections between GCN and matrix factorization (MF), and unify GCN as matrix factorization with co-training and unitization. Moreover, under the guidance of this t... | We unify graph convolutional networks as co-training and unitized matrix factorization. |
Molecular graph generation is a fundamental problem for drug discovery and has been attracting growing attention. The problem is challenging since it requires not only generating chemically valid molecular structures but also optimizing their chemical properties in the meantime. Inspired by the recent progress in deep ... | A flow-based autoregressive model for molecular graph generation. Reaching state-of-the-art results on molecule generation and properties optimization. |
We study two types of preconditioners and preconditioned stochastic gradient descent (SGD) methods in a unified framework. We call the first one the Newton type due to its close relationship to the Newton method, and the second one the Fisher type as its preconditioner is closely related to the inverse of Fisher inform... | We propose a new framework for preconditioner learning, derive new forms of preconditioners and learning methods, and reveal the relationship to methods like RMSProp, Adam, Adagrad, ESGD, KFAC, batch normalization, etc. |
We present EDA: easy data augmentation techniques for boosting performance on text classification tasks. EDA consists of four simple but powerful operations: synonym replacement, random insertion, random swap, and random deletion. On five text classification tasks, we show that EDA improves performance for both convolu... | Simple text augmentation techniques can significantly boost performance on text classification tasks, especially for small datasets. |
We propose a model that is able to perform physical parameter estimation of systems from video, where the differential equations governing the scene dynamics are known, but labeled states or objects are not available. Existing physical scene understanding methods require either object state supervision, or do not integ... | We propose a model that is able to perform physical parameter estimation of systems from video, where the differential equations governing the scene dynamics are known, but labeled states or objects are not available. |
This paper proposes ASAL, a new pool based active learning method that generates high entropy samples. Instead of directly annotating the synthetic samples, ASAL searches similar samples from the pool and includes them for training. Hence, the quality of new samples is high and annotations are reliable. ASAL is parti... | ASAL is a pool based active learning method that generates high entropy samples and retrieves matching samples from the pool in sub-linear time. |
We point out important problems with the common practice of using the best single model performance for comparing deep learning architectures, and we propose a method that corrects these flaws. Each time a model is trained, one gets a different result due to random factors in the training process, which include random ... | We point out important problems with the common practice of using the best single model performance for comparing deep learning architectures, and we propose a method that corrects these flaws. |
Unsupervised domain adaptation aims to generalize the hypothesis trained in a source domain to an unlabeled target domain. One popular approach to this problem is to learn domain-invariant embeddings for both domains. In this work, we study, theoretically and empirically, the effect of the embedding complexity on gener... | We study the effect of the embedding complexity in learning domain-invariant representations and develop a strategy that mitigates sensitivity to it. |
We propose a new architecture termed Dual Adversarial Transfer Network (DATNet) for addressing low-resource Named Entity Recognition (NER). Specifically, two variants of DATNet, i.e., DATNet-F and DATNet-P, are proposed to explore effective feature fusion between high and low resource. To address the noisy and imbalanc... | We propose a new architecture termed Dual Adversarial Transfer Network (DATNet) for addressing low-resource Named Entity Recognition (NER) and achieve new state-of-the-art performances on CoNLL and Twitter NER. |
Generative adversarial networks (GANs) evolved into one of the most successful unsupervised techniques for generating realistic images. Even though it has recently been shown that GAN training converges, GAN models often end up in local Nash equilibria that are associated with mode collapse or otherwise fail to model t... | Coulomb GANs can optimally learn a distribution by posing the distribution learning problem as optimizing a potential field |
Predicting properties of nodes in a graph is an important problem with applications in a variety of domains. Graph-based Semi Supervised Learning (SSL) methods aim to address this problem by labeling a small subset of the nodes as seeds, and then utilizing the graph structure to predict label scores for the rest of the... | We propose a confidence based Graph Convolutional Network for Semi-Supervised Learning. |
In this paper we establish rigorous benchmarks for image classifier robustness. Our first benchmark, ImageNet-C, standardizes and expands the corruption robustness topic, while showing which classifiers are preferable in safety-critical applications. Then we propose a new dataset called ImageNet-P which enables researc... | We propose ImageNet-C to measure classifier corruption robustness and ImageNet-P to measure perturbation robustness |
The paper explores a novel methodology in source code obfuscation through the application of text-based recurrent neural network (RNN) encoder-decoder models in ciphertext generation and key generation. Sequence-to-sequence models are incorporated into the model architecture to generate obfuscated code, genera... | Obfuscate code using seq2seq networks, and execute using the obfuscated code and key pair |
We propose a method for joint image and per-pixel annotation synthesis with GAN. We demonstrate that GAN has good high-level representation of target data that can be easily projected to semantic segmentation masks. This method can be used to create a training dataset for teaching separate semantic segmentation network... | GAN-based method for joint image and per-pixel annotation synthesis |
Vision-Language Navigation (VLN) is the task where an agent is commanded to navigate in photo-realistic unknown environments with natural language instructions. Previous research on VLN is primarily conducted on the Room-to-Room (R2R) dataset with only English instructions. The ultimate goal of VLN, however, is to se... | We introduce a new task and dataset on cross-lingual vision-language navigation, and propose a general cross-lingual VLN framework for the task. |
Deep generative models have advanced the state-of-the-art in semi-supervised classification, however their capacity for deriving useful discriminative features in a completely unsupervised fashion for classification in difficult real-world data sets, where adequate manifold separation is required has not been adequatel... | Unsupervised classification via deep generative modeling with controllable feature learning evaluated in a difficult real world task |
Representation Learning over graph structured data has received significant attention recently due to its ubiquitous applicability. However, most advancements have been made in static graph settings while efforts for jointly learning the dynamics of the graph and dynamics on the graph are still in an infant stage. Two fundam... | Models Representation Learning over dynamic graphs as a latent hidden process bridging two observed processes of Topological Evolution of and Interactions on dynamic graphs. |
With the success of modern machine learning, it is becoming increasingly important to understand and control how learning algorithms interact. Unfortunately, negative results from game theory show there is little hope of understanding or controlling general n-player games. We therefore introduce smooth markets (SM-game... | We introduce a class of n-player games suited to gradient-based methods. |
While counter machines have received little attention in theoretical computer science since the 1960s, they have recently achieved a newfound relevance to the field of natural language processing (NLP). Recent work has suggested that some strong-performing recurrent neural networks utilize their memory as counters. Thu... | We study the class of formal languages acceptable by real-time counter automata, a model of computation related to some types of recurrent neural networks. |
Attentional, RNN-based encoder-decoder models for abstractive summarization have achieved good performance on short input and output sequences. For longer documents and summaries however these models often include repetitive and incoherent phrases. We introduce a neural network model with a novel intra-attention that a... | A summarization model combining a new intra-attention and reinforcement learning method to increase summary ROUGE scores and quality for long sequences. |
Knowledge Distillation (KD) is a common method for transferring the ``knowledge'' learned by one machine learning model (the teacher) into another model (the student), where typically, the teacher has a greater capacity (e.g., more parameters or higher bit-widths). To our knowledge, existing methods overlook the fact t... | We study whether and how adaptive data augmentation and knowledge distillation can be leveraged simultaneously in a synergistic manner for better training student networks. |
Neural network models have shown excellent fluency and performance when applied to abstractive summarization. Many approaches to neural abstractive summarization involve the introduction of significant inductive bias, such as pointer-generator architectures, coverage, and partially extractive procedures, designed to mi... | We introduce a simple procedure to repurpose pre-trained transformer-based language models to perform abstractive summarization well. |
This paper addresses the problem of incremental domain adaptation (IDA). We assume each domain comes sequentially, and that we could only access data in the current domain. The goal of IDA is to build a unified model performing well on all the encountered domains. We propose to augment a recurrent neural network (RNN)... | We present a neural memory-based architecture for incremental domain adaptation, and provide theoretical and empirical results. |
In compressed sensing, a primary problem to solve is to reconstruct a high dimensional sparse signal from a small number of observations. In this work, we develop a new sparse signal recovery algorithm using reinforcement learning (RL) and Monte CarloTree Search (MCTS). Similarly to orthogonal matching pursuit (OMP), o... | Formulating sparse signal recovery as a sequential decision making problem, we develop a method based on RL and MCTS that learns a policy to discover the support of the sparse signal. |
Many real-world sequential decision-making problems can be formulated as optimal control with high-dimensional observations and unknown dynamics. A promising approach is to embed the high-dimensional observations into a lower-dimensional latent representation space, estimate the latent dynamics model, then utilize this... | Learning embedding for control with high-dimensional observations |
The interplay between inter-neuronal network topology and cognition has been studied deeply by connectomics researchers and network scientists, which is crucial towards understanding the remarkable efficacy of biological neural networks. Curiously, the deep learning revolution that revived neural networks has not paid ... | Initial findings in the intersection of network neuroscience and deep learning. C. Elegans and a mouse visual cortex learn to recognize handwritten digits. |
Convolutional neural networks and recurrent neural networks are designed with network structures well suited to the nature of spatial and sequential data respectively. However, the structure of standard feed-forward neural networks (FNNs) is simply a stack of fully connected layers, regardless of the feature correlatio... | An unsupervised structure learning method for Parsimonious Deep Feed-forward Networks. |
Bayesian inference is used extensively to infer and to quantify the uncertainty in a field of interest from a measurement of a related field when the two are linked by a mathematical model. Despite its many applications, Bayesian inference faces challenges when inferring fields that have discrete representations of lar... | Using GANs as priors for efficient Bayesian inference of complex fields. |
We argue that the widely used Omniglot and miniImageNet benchmarks are too simple because their class semantics do not vary across episodes, which defeats their intended purpose of evaluating few-shot classification methods. The class semantics of Omniglot is invariably “characters” and the class semantics of miniImage... | Omniglot and miniImageNet are too simple for few-shot learning because we can solve them without using labels during meta-evaluation, as demonstrated with a method called centroid networks |
We learn to identify decision states, namely the parsimonious set of states where decisions meaningfully affect the future states an agent can reach in an environment. We utilize the VIC framework, which maximizes an agent’s `empowerment’, ie the ability to reliably reach a diverse set of states -- and formulate a sand... | Identify decision states (where agent can take actions that matter) without reward supervision, use it for transfer. |
Inverse problems are ubiquitous in natural sciences and refer to the challenging task of inferring complex and potentially multi-modal posterior distributions over hidden parameters given a set of observations. Typically, a model of the physical process in the form of differential equations is available but leads to in... | An approach to combine variational inference and Bayesian optimisation to solve complicated inverse problems |
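Each data row above is a single "source | target" pair, with the source abstract truncated in this preview. As a minimal, hedged sketch of how such rows could be split back into pairs — assuming the preview table is saved verbatim to a hypothetical file named `pairs.md` — the following Python works on the layout shown:

```python
# Minimal sketch (not part of the dataset card): parse "source ... | target ... |"
# preview rows into (source, target) pairs. The file name "pairs.md" and the exact
# row layout are assumptions based on the table as shown above.
def load_pairs(path: str):
    pairs = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            # Skip blank lines, the header row, and the markdown separator row.
            if not line or line.startswith("| source") or line.startswith("|---"):
                continue
            # Each data row ends with a trailing "|" and separates columns with " | ".
            body = line.rstrip("|").rstrip()
            source, sep, target = body.rpartition(" | ")
            if sep:
                pairs.append((source.strip(), target.strip()))
    return pairs

if __name__ == "__main__":
    for source, target in load_pairs("pairs.md")[:3]:
        print(target)
```

Splitting on the last " | " keeps any stray pipes inside the truncated source text from breaking the pair.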