| source (list, lengths 1–34) | source_labels (list, lengths 1–34) | rouge_scores (list, lengths 1–31) | target (list, length 1) | title (string, lengths 8–153) | id (string, lengths 9–11) | keywords (list, lengths 0–7) |
|---|---|---|---|---|---|---|
[
"Generative Adversarial Networks (GANs) have proven to be a powerful framework for learning to draw samples from complex distributions.",
"However, GANs are also notoriously difficult to train, with mode collapse and oscillations a common problem.",
"We hypothesize that this is at least in part due to the evolu... | [
1,
0,
0,
0,
0,
0,
0
] | [
0.22222221777777784,
0.07999999539200027,
0.04761904425170092,
0,
0.06060605663911872,
0,
0.051282047731755674
] | [
"Generative Adversarial Network Training is a Continual Learning Problem."
] | Generative Adversarial Network Training is a Continual Learning Problem | SJzuHiA9tQ | [
"generative adversarial network training continual learning problem"
] |
[
"Many problems with large-scale labeled training data have been impressively solved by deep learning.",
"However, Unseen Class Categorization (UCC) with minimal information provided about target classes is the most commonly encountered setting in industry, which remains a challenging research problem in machine l... | [
0,
0,
0,
0,
1
] | [
0.07407406908093313,
0.048780483474122935,
0,
0.10344827238406672,
0.39999999561250005
] | [
"A unified frame for both few-shot learning and zero-shot learning based on network reparameterization"
] | Network Reparameterization for Unseen Class Categorization | rJeyV2AcKX | [
"network reparameterization"
] |
[
"Proteins are ubiquitous molecules whose function in biological processes is determined by their 3D structure.\n",
"Experimental identification of a protein's structure can be time-consuming, prohibitively expensive, and not always possible. \n",
"Alternatively, protein folding can be modeled using computationa... | [
0,
0,
0,
1,
0
] | [
0.05882352442906617,
0.11764705384083066,
0.05263157396121931,
0.399999995392,
0.39024389751338495
] | [
"GraphQA is a graph-based method for protein Quality Assessment that improves the state-of-the-art for both hand-engineered and representation-learning approaches"
] | GraphQA: Protein Model Quality Assessment using Graph Convolutional Network | HyxgBerKwB | [
"quality assessment",
"graphqa",
"protein"
] |
[
"We study the problem of training machine learning models incrementally using active learning with access to imperfect or noisy oracles.",
"We specifically consider the setting of batch active learning, in which multiple samples are selected as opposed to a single sample as in classical settings so as to reduce t... | [
1,
0,
0,
0,
0,
0
] | [
0.42857142361678,
0.319999995032,
0.14285713790249452,
0.15789473206371207,
0.2325581345592213
] | [
"We address the active learning in batch setting with noisy oracles and use model uncertainty to encode the decision quality of active learning algorithm during acquisition."
] | Learning in Confusion: Batch Active Learning with Noisy Oracle | SJxIkkSKwB | [
"noisy oracle",
"active learning",
"batch"
] |
[
"Artistic style transfer is the problem of synthesizing an image with content similar to a given image and style similar to another.",
"Although recent feed-forward neural networks can generate stylized images in real-time, these models produce a single stylization given a pair of style/content images, and the us... | [
0,
0,
1,
0,
0,
0,
0,
0
] | [
0.23999999596800003,
0,
0.2608695609829868,
0.06249999658203144,
0.05555555242283968,
0,
0.07692307298816588,
0.06666666308888908
] | [
"Stochastic style transfer with adjustable features. "
] | Adjustable Real-time Style Transfer | HJg4E8IFdE | [
"style transfer",
"adjustable"
] |
[
"Recent work has shown that deep reinforcement-learning agents can learn to follow language-like instructions from infrequent environment rewards.",
"However, this places on environment designers the onus of designing language-conditional reward functions which may not be easily or tractably implemented as the co... | [
0,
0,
0,
0,
1,
0,
0
] | [
0.23529411266435996,
0.0476190429024948,
0.22222221763950628,
0.16666666172839517,
0.2424242374288339,
0.1463414586555623,
0.17647058325259532
] | [
"We propose AGILE, a framework for training agents to perform instructions from examples of respective goal-states."
] | Learning to Understand Goal Specifications by Modelling Reward | H1xsSjC9Ym | [] |
[
"We present Multitask Soft Option Learning (MSOL), a hierarchical multi-task framework based on Planning-as-Inference.",
"MSOL extends the concept of Options, using separate variational posteriors for each task, regularized by a shared prior.",
"The learned soft-options are temporally extended, allowing a highe... | [
0,
0,
0,
1,
0
] | [
0.05405404934989084,
0.14634145848899482,
0.043478255869565795,
0.173913038478261,
0.14999999511250017
] | [
"In Hierarchical RL, we introduce the notion of a 'soft', i.e. adaptable, option and show that this helps learning in multitask settings."
] | Multitask Soft Option Learning | BkeDGJBKvB | [
"soft",
"learn",
"multitask",
"option"
] |
[
"We propose an algorithm, guided variational autoencoder (Guided-VAE), that is able to learn a controllable generative model by performing latent representation disentanglement learning.",
"The learning objective is achieved by providing signal to the latent encoding/embedding in VAE without changing its main bac... | [
1,
0,
0,
0,
0,
0
] | [
0.5882352897404844,
0.16666666242283962,
0.06666666202222254,
0.22222221852839513,
0.23076922588757406,
0.23529411326989624
] | [
"Learning a controllable generative model by performing latent representation disentanglement learning."
] | Guided variational autoencoder for disentanglement learning | SygaYANFPr | [
"disentanglement learning"
] |
[
"Neural language models (NLMs) are generative, and they model the distribution of grammatical sentences.",
"Trained on huge corpus, NLMs are pushing the limit of modeling accuracy.",
"Besides, they have also been applied to supervised learning tasks that decode text, e.g., automatic speech recognition (ASR).",
... | [
1,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.2727272680991736,
0.09999999520000023,
0.14285713877551035,
0.07407406990397829,
0.06451612520291386,
0,
0.07407406990397829,
0.07692307266272212,
0
] | [
"Enhance the language model for supervised learning task "
] | Large Margin Neural Language Models | H1g-gk5EuQ | [
"language model"
] |
[
"Conventionally, convolutional neural networks (CNNs) process different images with the same set of filters.",
"However, the variations in images pose a challenge to this fashion.",
"In this paper, we propose to generate sample-specific filters for convolutional layers in the forward pass.",
"Since the filter... | [
0,
0,
1,
0,
0,
0,
0,
0,
0
] | [
0.14285713785714302,
0.15999999507200013,
0.46666666168888893,
0.17647058339100358,
0.06896551224732497,
0.06896551224732497,
0.24999999507812506,
0.0799999950720003,
0.06060605572084521
] | [
"dynamically generate filters conditioned on the input image for CNNs in each forward pass "
] | Learning to Generate Filters for Convolutional Neural Networks | rJa90ceAb | [
"generate filter"
] |
[
"We propose a new anytime neural network which allows partial evaluation by subnetworks with different widths as well as depths.",
"Compared to conventional anytime networks only with the depth controllability, the increased architectural diversity leads to higher resource utilization and consequent performance i... | [
1,
0,
0
] | [
0.999999995,
0.09090908600206637,
0.15384614884944134
] | [
"We propose a new anytime neural network which allows partial evaluation by subnetworks with different widths as well as depths."
] | Doubly Nested Network for Resource-Efficient Inference | SygAlRLvoX | [
"network"
] |
[
"We propose a new model for making generalizable and diverse retrosynthetic reaction predictions.",
"Given a target compound, the task is to predict the likely chemical reactants to produce the target.",
"This generative task can be framed as a sequence-to-sequence problem by using the SMILES representations of... | [
1,
0,
0,
0,
0,
0
] | [
0.999999995,
0.07692307192307725,
0.06666666175555591,
0.10810810355003672,
0.2499999951757813,
0.1860465074094106
] | [
"We propose a new model for making generalizable and diverse retrosynthetic reaction predictions."
] | Learning to Make Generalizable and Diverse Predictions for Retrosynthesis | BygfrANKvB | [
"make generalizable diverse",
"prediction"
] |
[
"Unsupervised learning of disentangled representations is an open problem in machine learning.",
"The Disentanglement-PyTorch library is developed to facilitate research, implementation, and testing of new variational algorithms.",
"In this modular library, neural architectures, dimensionality of the latent spa... | [
0,
1,
0,
0,
0,
0,
0
] | [
0.21052631091412755,
0.3478260824196598,
0.12121211753902673,
0.09090908628099197,
0,
0,
0.10256409930309017
] | [
"Disentanglement-PyTorch is a library for variational representation learning"
] | Variational Learning with Disentanglement-PyTorch | rJgUsFYnir | [
"disentanglement-pytorch",
"learning",
"variational"
] |
[
"Trust region methods, such as TRPO, are often used to stabilize policy optimization algorithms in reinforcement learning (RL).",
"While current trust region strategies are effective for continuous control, they typically require a large amount of on-policy interaction with the environment.",
"To address this p... | [
1,
0,
0,
0,
0
] | [
0.12499999507812519,
0.055555550802469544,
0,
0.06060605572084521,
0.12121211632690562
] | [
"We extend recent insights related to softmax consistency to achieve state-of-the-art results in continuous control."
] | Trust-PCL: An Off-Policy Trust Region Method for Continuous Control | HyrCWeWCb | [
"continuous control"
] |
[
"Deep reinforcement learning has achieved many recent successes, but our understanding of its strengths and limitations is hampered by the lack of rich environments in which we can fully characterize optimal behavior, and correspondingly diagnose individual actions against such a characterization. \n\n",
"Here we... | [
0,
1,
0,
0,
0,
0,
0
] | [
0.23999999502222233,
0.2857142807760142,
0.19230768790680483,
0.09523809246031753,
0.19999999580000008,
0.19607842706651296,
0.2608695602184416
] | [
"We adapt a family of combinatorial games with tunable difficulty and an optimal policy expressible as linear network, developing it as a rich environment for reinforcement learning, showing contrasts in performance with supervised learning, and analyzing multiagent learning and generalization. "
] | Can Deep Reinforcement Learning solve Erdos-Selfridge-Spencer Games? | HkCnm-bAb | [
"reinforcement learning",
"game"
] |
[
"Adoption of deep learning in safety-critical systems raise the need for understanding what deep neural networks do not understand.",
"Several methodologies to estimate model uncertainty have been proposed, but these methodologies constrain either how the neural network is trained or constructed.",
"We present ... | [
0,
0,
0,
0,
1
] | [
0.2068965470154579,
0.06249999548828158,
0.16216215798392997,
0.11764705444636694,
0.22222221739369008
] | [
"An add-on method for deep learning to detect outliers during prediction-time"
] | ODIN: Outlier Detection In Neural Networks | rkGqLoR5tX | [] |
[
"This work introduces a simple network for producing character aware word embeddings.",
"Position agnostic and position aware character embeddings are combined to produce an embedding vector for each word.",
"The learned word representations are shown to be very sparse and facilitate improved results on languag... | [
0,
1,
0,
0
] | [
0.18181817719008275,
0.3157894687396123,
0.16326530122448993,
0.1666666618055557
] | [
"A fully connected architecture is used to produce word embeddings from character representations, outperforms traditional embeddings and provides insight into sparsity and dropout."
] | A Simple Fully Connected Network for Composing Word Embeddings from Characters | rJ8rHkWRb | [
"word embedding character",
"fully connect"
] |
[
"Neural networks with low-precision weights and activations offer compelling\n",
"efficiency advantages over their full-precision equivalents.",
"The two most\n",
"frequently discussed benefits of quantization are reduced memory consumption,\n",
"and a faster forward pass when implemented with efficient bit... | [
0,
0,
0,
0,
0,
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.11428571046530626,
0,
0,
0.05714285332244924,
0.11111110709876558,
0.2222222182098766,
0.324324320146092,
0.058823525813149015,
0.1290322553590011,
0.054054049875822095,
0.15789473252077574,
0.312499996953125,
0.18749999695312503,
0.16666666265432112,
0.2051282006837608,
0.1621621579... | [
"We conduct adversarial attacks against binarized neural networks and show that we reduce the impact of the strongest attacks, while maintaining comparable accuracy in a black-box setting"
] | Attacking Binarized Neural Networks | HkTEFfZRb | [
"binarize neural network",
"attack"
] |
[
"Unsupervised bilingual dictionary induction (UBDI) is useful for unsupervised machine translation and for cross-lingual transfer of models into low-resource languages.",
"One approach to UBDI is to align word vector spaces in different languages using Generative adversarial networks (GANs) with linear generators... | [
0,
0,
0,
1,
0,
0
] | [
0.09999999501250025,
0.1666666617447918,
0.21621621130752386,
0.44999999501250004,
0.18867924049839813,
0.10810810319941586
] | [
"An empirical investigation of GAN-based alignment of word vector spaces, focusing on cases, where linear transformations provably exist, but training is unstable."
] | Empirical observations on the instability of aligning word vector spaces with GANs | SJxbps09K7 | [
"word vector space",
"empirical"
] |
[
"Owing to the ubiquity of computer software, software vulnerability detection (SVD) has become an important problem in the software industry and in the field of computer security.",
"One of the most crucial issues in SVD is coping with the scarcity of labeled vulnerabilities in projects that require the laborious... | [
0,
0,
0,
0,
0,
0,
1,
0
] | [
0.2264150895550019,
0.2807017494613728,
0.24561403016312722,
0.24999999510204088,
0.16666666168888902,
0.21818181331570258,
0.8405797051711826,
0.11764705414840465
] | [
"Our aim in this paper is to propose a new approach for tackling the problem of transfer learning from labeled to unlabeled software projects in the context of SVD in order to resolve the mode collapsing problem faced in previous approaches."
] | Dual-Component Deep Domain Adaptation: A New Approach for Cross Project Software Vulnerability Detection | BkglepEFDS | [
"new approach",
"software",
"project"
] |
[
"Representation learning is one of the foundations of Deep Learning and allowed important improvements on several Machine Learning tasks, such as Neural Machine Translation, Question Answering and Speech Recognition.",
"Recent works have proposed new methods for learning representations for nodes and edges in gra... | [
0,
0,
0,
1,
0
] | [
0.08333332834201419,
0.05405404934989084,
0.2307692258357989,
0.6333333286055556,
0.3124999953955079
] | [
"A faster method for generating node embeddings that employs a number of permutations over a node's immediate neighborhood as context to generate its representation."
] | Fast Node Embeddings: Learning Ego-Centric Representations | SJyfrl-0b | [
"node embedding",
"representation",
"fast"
] |
[
"Orthogonal recurrent neural networks address the vanishing gradient problem by parameterizing the recurrent connections using an orthogonal matrix.",
"This class of models is particularly effective to solve tasks that require the memorization of long sequences.",
"We propose an alternative solution based on ex... | [
0,
0,
0,
0,
1,
0,
0,
0
] | [
0.1538461490072322,
0.20512820028928347,
0.27027026556610667,
0.34615384122041426,
0.44897958685547695,
0.114285709779592,
0.38297871840651887,
0.173913038478261
] | [
"We show how to initialize recurrent architectures with the closed-form solution of a linear autoencoder for sequences. We show the advantages of this approach compared to orthogonal RNNs."
] | Autoencoder-based Initialization for Recurrent Neural Networks with a Linear Memory | BkgM7xHYwH | [
"linear",
"recurrent"
] |
[
"This paper improves upon the line of research that formulates named entity recognition (NER) as a sequence-labeling problem.",
"We use so-called black-box long short-term memory (LSTM) encoders to achieve state-of-the-art results while providing insightful understanding of what the auto-regressive model learns w... | [
0,
1,
0,
0
] | [
0.1025640975936886,
0.24489795428571431,
0.13333332878333348,
0.2399999951280001
] | [
"We provide insightful understanding of sequence-labeling NER and propose to use two types of cross structures, both of which bring theoretical and empirical improvements."
] | Understanding and Improving Sequence-Labeling NER with Self-Attentive LSTMs | rklNwjCcYm | [
"sequence-labeling ner"
] |
[
"Knowledge Graph Embedding (KGE) is the task of jointly learning entity and relation embeddings for a given knowledge graph.",
"Existing methods for learning KGEs can be seen as a two-stage process where",
"(a) entities and relations in the knowledge graph are represented using some linear algebraic structures ... | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0
] | [
0.2580645113839751,
0.07999999500800031,
0.14285713795918387,
0.12121211658402223,
0.14634145927424164,
0.23076922579881665,
0.1851851817283951,
0.1999999952000001,
0.04444444053333368,
0.2758620641141499,
0.10810810372534715
] | [
"We present a theoretically proven generative model of knowledge graph embedding. "
] | RelWalk -- A Latent Variable Model Approach to Knowledge Graph Embedding | SkxbDsR9Ym | [
"knowledge graph embed",
"model"
] |
[
"Currently the only techniques for sharing governance of a deep learning model are homomorphic encryption and secure multiparty computation.",
"Unfortunately, neither of these techniques is applicable to the training of large neural networks due to their large computational and communication overheads.",
"As a ... | [
0,
0,
0,
1,
0,
0,
0
] | [
0.06060605572084521,
0.18181817693296617,
0,
0.24999999625000005,
0.24999999545000007,
0.14545454165950425,
0.055555550802469544
] | [
"We study empirically how hard it is to recover missing parts of trained models"
] | Scaling shared model governance via model splitting | H1xEtoRqtQ | [
"model"
] |
[
"This paper proposes variational domain adaptation, a unified, scalable, simple framework for learning multiple distributions through variational inference.",
"Unlike the existing methods on domain transfer through deep generative models, such as StarGAN (Choi et al., 2017) and UFDN (Liu et al., 2018), the variat... | [
1,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.9411764655882354,
0.13333332863209893,
0,
0.1578947318975071,
0.17391303881852566,
0.07999999564800023,
0.09523809041950139,
0.04444443974321038,
0,
0
] | [
"This paper proposes variational domain adaptation, a unified, scalable, simple framework for learning multiple distributions through variational inference"
] | Variational Domain Adaptation | ByeLmn0qtX | [
"variational domain adaptation"
] |
[
"We propose a new method to train neural networks based on a novel combination of adversarial training and provable defenses.",
"The key idea is to model training as a procedure which includes both, the verifier and the adversary.",
"In every iteration, the verifier aims to certify the network using convex rela... | [
1,
0,
0,
0,
0
] | [
0.5853658486853064,
0.2564102514924393,
0.04444443944691414,
0.47457626650962376,
0.1960784264667437
] | [
"We propose a novel combination of adversarial training and provable defenses which produces a model with state-of-the-art accuracy and certified robustness on CIFAR-10. "
] | Adversarial Training and Provable Defenses: Bridging the Gap | SJxSDxrKDr | [
"adversarial training provable defense"
] |
[
"Learning tasks on source code (i.e., formal languages) have been considered recently, but most work has tried to transfer natural language methods and does not capitalize on the unique opportunities offered by code's known syntax.",
"For example, long-range dependencies induced by using the same variable or func... | [
0,
0,
1,
0,
0,
0,
0
] | [
0.14285713826530627,
0,
0.22727272231404969,
0.19512194622248677,
0.17857142397959197,
0.1568627403306422,
0.11428570938775531
] | [
"Programs have structure that can be represented as graphs, and graph neural networks can learn to find bugs on such graphs"
] | Learning to Represent Programs with Graphs | BJOFETxR- | [
"graph",
"learn",
"program",
"represent"
] |
[
"Overfitting is an ubiquitous problem in neural network training and usually mitigated using a holdout data set.\n",
"Here we challenge this rationale and investigate criteria for overfitting without using a holdout data set.\n",
"Specifically, we train a model for a fixed number of epochs multiple times with v... | [
0,
0,
0,
0,
0,
1,
0,
0,
0
] | [
0.0740740696296299,
0.30769230316568047,
0.12903225394380866,
0,
0,
0.6666666617687076,
0.2999999950500001,
0.0714285670663268,
0.09999999505000023
] | [
"We introduce and analyze several criteria for detecting overfitting."
] | Overfitting Detection of Deep Neural Networks without a Hold Out Set | B1lKtjA9FQ | [
"overfitte"
] |
[
"Partially observable Markov decision processes (POMDPs) are a widely-used framework to model decision-making with uncertainty about the environment and under stochastic outcome.",
"In conventional POMDP models, the observations that the agent receives originate from fixed known distribution.",
"However, in a v... | [
0,
0,
0,
0,
0,
1,
0,
0
] | [
0.16216215734112505,
0,
0.1666666618055557,
0.18181817685950424,
0.10256409783037496,
0.37837837355734116,
0.15789473206371207,
0.29999999531250005
] | [
"We develop a point-based value iteration solver for POMDPs with active perception and planning tasks."
] | Perception-Aware Point-Based Value Iteration for Partially Observable Markov Decision Processes | S1lTg3RcFm | [
"point-based value iteration"
] |
[
"Deep neural networks, in particular convolutional neural networks, have become highly effective tools for compressing images and solving inverse problems including denoising, inpainting, and reconstruction from few and noisy measurements.",
"This success can be attributed in part to their ability to represent an... | [
0,
0,
0,
1,
0,
0,
0,
0,
0
] | [
0.11538461038461562,
0.19047618575963732,
0.07547169311498787,
0.3214285664540817,
0.1363636315289258,
0.11764705382545197,
0.0952380905215422,
0.1785714235969389,
0.1568627400999617
] | [
"We introduce an underparameterized, nonconvolutional, and simple deep neural network that can, without training, effectively represent natural images and solve image processing tasks like compression and denoising competitively."
] | Deep Decoder: Concise Image Representations from Untrained Non-convolutional Networks | rylV-2C9KQ | [
"deep",
"network",
"image"
] |
[
"In this paper we investigate the family of functions representable by deep neural networks (DNN) with rectified linear units (ReLU).",
"We give an algorithm to train a ReLU DNN with one hidden layer to {\\em global optimality} with runtime polynomial in the data size albeit exponential in the input dimension.",
... | [
0,
1,
0,
0,
0,
0,
0
] | [
0.22222221755829916,
0.2666666617555556,
0.17543859167743936,
0.18518518052126212,
0.18666666171022236,
0.1999999951125001,
0.09302325250405635
] | [
"This paper 1) characterizes functions representable by ReLU DNNs, 2) formally studies the benefit of depth in such architectures, 3) gives an algorithm to implement empirical risk minimization to global optimality for two layer ReLU nets."
] | Understanding Deep Neural Networks with Rectified Linear Units | B1J_rgWRW | [] |
[
"The backpropagation of error algorithm (BP) is often said to be impossible to implement in a real brain.",
"The recent success of deep networks in machine learning and AI, however, has inspired a number of proposals for understanding how the brain might learn across multiple layers, and hence how it might implem... | [
0,
0,
0,
0,
0,
0,
0,
1
] | [
0,
0.13636363261363646,
0.1538461497961868,
0.2631578906232687,
0.21052631091412755,
0,
0.1499999960125001,
0.2962962914677641
] | [
"Benchmarks for biologically plausible learning algorithms on complex datasets and architectures"
] | Assessing the scalability of biologically-motivated deep learning algorithms and architectures | BypdvewVM | [
"architecture",
"algorithm"
] |
[
"Deep neural networks (DNNs) usually contain millions, maybe billions, of parameters/weights, making both storage and computation very expensive.",
"This has motivated a large body of work to reduce the complexity of the neural network by using sparsity-inducing regularizers. ",
"Another well-known approach fo... | [
0,
0,
0,
0,
0,
0,
0,
0,
0,
1,
0
] | [
0.055555550555556006,
0.1621621571658146,
0.14634145848899482,
0.05128204631163757,
0.1025640975936886,
0.12499999531250018,
0.16666666166666683,
0,
0.13953487885343446,
0.19354838297606666,
0.18604650676041115
] | [
"We have proposed using the recent GrOWL regularizer for simultaneous parameter sparsity and tying in DNN learning. "
] | LEARNING TO SHARE: SIMULTANEOUS PARAMETER TYING AND SPARSIFICATION IN DEEP LEARNING | rypT3fb0b | [
"simultaneous parameter",
"learning"
] |
[
"Weight-sharing—the simultaneous optimization of multiple neural networks using the same parameters—has emerged as a key component of state-of-the-art neural architecture search.",
"However, its success is poorly understood and often found to be surprising.",
"We argue that, rather than just being an optimizati... | [
1,
0,
0,
0,
0,
0,
0
] | [
0.42424241935720847,
0.07692307195266304,
0.32653060816326535,
0.16666666253472232,
0.2142857092857144,
0.23809523365079371,
0.19512194672218927
] | [
"An analysis of the learning and optimization structures of architecture search in neural networks and beyond."
] | On Weight-Sharing and Bilevel Optimization in Architecture Search | HJgRCyHFDr | [
"architecture search",
"optimization"
] |
[
"Deep latent variable models have seen recent success in many data domains.",
"Lossless compression is an application of these models which, despite having the potential to be highly useful, has yet to be implemented in a practical manner.",
"We present '`Bits Back with ANS' (BB-ANS), a scheme to perform lossle... | [
0,
0,
1,
0,
0,
0
] | [
0,
0.15789473218836578,
0.23529411280276827,
0.19512194672218927,
0.08888888460246935,
0.08333332847222251
] | [
"We do lossless compression of large image datasets using a VAE, beat existing compression algorithms."
] | Practical lossless compression with latent variables using bits back coding | ryE98iR5tm | [
"lossless compression"
] |
[
"Hyperparameter tuning is arguably the most important ingredient for obtaining state of art performance in deep networks. ",
"We focus on hyperparameters that are related to the optimization algorithm, e.g. learning rates, which have a large impact on the training speed and the resulting accuracy.",
"Typical... | [
0,
0,
0,
0,
1,
0
] | [
0,
0.066666663888889,
0,
0.15384615073964505,
0.2399999968,
0
] | [
"Bayesian optimization based online hyperparameter optimization."
] | Dynamically Learning the Learning Rates: Online Hyperparameter Optimization | HJtPtdqQG | [
"online hyperparameter optimization"
] |
[
"Multi-hop text-based question-answering is a current challenge in machine comprehension. \n",
"This task requires to sequentially integrate facts from multiple passages to answer complex natural language questions.\n",
"In this paper, we propose a novel architecture, called the Latent Question Reformulation Ne... | [
0,
0,
1,
0,
0,
0,
0,
0,
0,
0
] | [
0.1111111068672841,
0,
0.9259259209533608,
0.055555551311728714,
0.1025640979618674,
0.0930232509464578,
0.055555551311728714,
0.29268292207019636,
0.08163264806330726,
0.19999999531250012
] | [
"In this paper, we propose the Latent Question Reformulation Network (LQR-net), a multi-hop and parallel attentive network designed for question-answering tasks that require reasoning capabilities."
] | Latent Question Reformulation and Information Accumulation for Multi-Hop Machine Reading | S1x63TEYvr | [
"latent question reformulation",
"multi-hop"
] |
[
"We propose a method to automatically compute the importance of features at every observation in time series, by simulating counterfactual trajectories given previous observations.",
"We define the importance of each observation as the change in the model output caused by replacing the observation with a generate... | [
1,
0,
0,
0
] | [
0.21621621165814472,
0.12499999517578143,
0.08333332836805586,
0.04444444033580285
] | [
"Explaining Multivariate Time Series Models by finding important observations in time using Counterfactuals"
] | Explaining Time Series by Counterfactuals | HygDF1rYDB | [
"counterfactual",
"time series",
"explain"
] |
[
"This paper addresses unsupervised domain adaptation, the setting where labeled training data is available on a source domain, but the goal is to have good performance on a target domain with only unlabeled data.",
"Like much of previous work, we seek to align the learned representations of the source and target ... | [
0,
0,
0,
0,
0,
0,
1,
0
] | [
0.19999999580000008,
0.12903225331945908,
0.1999999952000001,
0.07692307195266304,
0.19354838235171706,
0.09090908595041348,
0.22222221728395072,
0.15999999500800016
] | [
"We use self-supervision on both domain to align them for unsupervised domain adaptation."
] | Unsupervised Domain Adaptation through Self-Supervision | S1lF8xHYwS | [
"domain adaptation",
"self-supervision"
] |
[
"We introduce simple, efficient algorithms for computing a MinHash of a probability distribution, suitable for both sparse and dense data, with equivalent running times to the state of the art for both cases.",
"The collision probability of these algorithms is a new measure of the similarity of positive vectors w... | [
0,
1,
0,
0
] | [
0.21276595255771855,
0.30769230269559505,
0.15384614884944134,
0.3043478211720227
] | [
"The minimum of a set of exponentially distributed hashes has a very useful collision probability that generalizes the Jaccard Index to probability distributions."
] | Maximally Consistent Sampling and the Jaccard Index of Probability Distributions | BkOswnc5z | [
"jaccard index probability distribution"
] |
[
"Recently, progress has been made towards improving relational reasoning in machine learning field.",
"Among existing models, graph neural networks (GNNs) is one of the most effective approaches for multi-hop relational reasoning.",
"In fact, multi-hop relational reasoning is indispensable in many natural langu... | [
0,
0,
0,
1,
0,
0,
0
] | [
0.06666666175555591,
0.2285714235755103,
0.17647058323529427,
0.26666666196543215,
0.07999999564800023,
0.05128204636423453,
0.21621621124908705
] | [
"A graph neural network model with parameters generated from natural languages, which can perform multi-hop reasoning. "
] | Graph Neural Networks with Generated Parameters for Relation Extraction | SkgzYiRqtX | [
"graph neural network",
"generate",
"parameter"
] |
[
"Off-Policy Actor-Critic (Off-PAC) methods have proven successful in a variety of continuous control tasks.",
"Normally, the critic’s action-value function is updated using temporal-difference, and the critic in turn provides a loss for the actor that trains it to take actions with higher expected return.",
"In... | [
0,
0,
1,
0,
0,
0
] | [
0.05882352456747445,
0.12499999513888908,
0.22727272231404969,
0.1538461491124262,
0.15789473185595584,
0.20833332847222233
] | [
"We present Meta-Critic, an auxiliary critic module for off-policy actor-critic methods that can be meta-learned online during single task learning."
] | Online Meta-Critic Learning for Off-Policy Actor-Critic Methods | H1lKd6NYPS | [
"off-policy actor-critic method",
"learning",
"online",
"meta-critic"
] |
[
"Modern neural networks are highly overparameterized, with capacity to substantially overfit to training data.",
"Nevertheless, these networks often generalize well in practice.",
"It has also been observed that trained networks can often be ``compressed'' to much smaller representations.",
"The purpose of this... | [
0,
0,
0,
0,
1,
0,
0,
0
] | [
0.1176470541003462,
0.06896551324613578,
0.05405404914536203,
0,
0.26086956025519853,
0.10526315295013873,
0,
0.09999999501250025
] | [
"We obtain non-vacuous generalization bounds on ImageNet-scale deep neural networks by combining an original PAC-Bayes bound and an off-the-shelf neural network compression method."
] | Non-vacuous Generalization Bounds at the ImageNet Scale: a PAC-Bayesian Compression Approach | BJgqqsAct7 | [
"non-vacuous generalization bound",
"compression"
] |
[
"Adversarial examples can be defined as inputs to a model which induce a mistake -- where the model output is different than that of an oracle, perhaps in surprising or malicious ways.",
"Original models of adversarial attacks are primarily studied in the context of classification and computer vision tasks.",
"... | [
0,
0,
0,
1,
0,
0,
0
] | [
0.18181817685950424,
0.24390243426531835,
0.22641508935564264,
0.3333333283420139,
0.3018867874688502,
0.1224489745939194,
0.23333332847222235
] | [
"We propose an alternative measure for determining effectiveness of adversarial attacks in NLP models according to a distance measure-based method like incremental L2-gain in control theory."
] | Adversarial Gain | HkgGWM3som | [
"adversarial"
] |
[
"We propose a Warped Residual Network (WarpNet) using a parallelizable warp operator for forward and backward propagation to distant layers that trains faster than the original residual neural network.",
"We apply a perturbation theory on residual networks and decouple the interactions between residual units.",
... | [
1,
0,
0,
0,
0,
0
] | [
0.9642857092857144,
0.23809523365079371,
0.23255813499188757,
0.07692307195266304,
0.2711864356908935,
0.18181817681983486
] | [
"We propose the Warped Residual Network using a parallelizable warp operator for forward and backward propagation to distant layers that trains faster than the original residual neural network. "
] | Decoupling the Layers in Residual Networks | SyMvJrdaW | [
"residual network",
"layer"
] |
[
"A plethora of methods attempting to explain predictions of black-box models have been proposed by the Explainable Artificial Intelligence (XAI) community.",
"Yet, measuring the quality of the generated explanations is largely unexplored, making quantitative comparisons non-trivial.",
"In this work, we propose ... | [
0,
0,
1,
0,
0,
0
] | [
0.14634145841760873,
0.05714285234285754,
0.37499999507812504,
0.10526315295013873,
0.22857142377142864,
0.11428570948571448
] | [
"We propose a suite of metrics that capture desired properties of explainability algorithms and use it to objectively compare and evaluate such methods"
] | On Evaluating Explainability Algorithms | B1xBAA4FwH | [
"explainability algorithm",
"evaluate"
] |
[
"Neural networks are known to produce unexpected results on inputs that are far from the training distribution.",
"One approach to tackle this problem is to detect the samples on which the trained network can not answer reliably.",
"ODIN is a recently proposed method for out-of-distribution detection that does ... | [
0,
0,
0,
0,
1
] | [
0.12121211621671278,
0.11428570928979613,
0.2926829219750149,
0.1333333284222224,
0.29999999511250003
] | [
"A recent out-of-distribution detection method helps to measure the confidence of RNN predictions for some NLP tasks"
] | On the Confidence of Neural Network Predictions for some NLP Tasks | HJf2ds2ssm | [
"prediction nlp task",
"confidence"
] |
[
"Some recent work has shown separation between the expressive power of depth-2 and depth-3 neural networks.",
"These separation results are shown by constructing functions and input distributions, so that the function is well-approximable by a depth-3 neural network of polynomial size but it cannot be well-approx... | [
0,
0,
0,
1,
0
] | [
0.2399999953920001,
0.09302325250405635,
0.08333332864583361,
0.2777777740277778,
0.18749999595703135
] | [
"depth-2-vs-3 separation for sigmoidal neural networks over general distributions"
] | Depth separation and weight-width trade-offs for sigmoidal neural networks | SJICXeWAb | [
"sigmoidal neural network",
"separation"
] |
[
"The smallest eigenvectors of the graph Laplacian are well-known to provide a succinct representation of the geometry of a weighted graph.",
"In reinforcement learning (RL), where the weighted graph may be interpreted as the state transition process induced by a behavior policy acting on the environment, approxim... | [
0,
0,
0,
0,
0,
0,
0,
1
] | [
0.2790697627690644,
0.2711864357138754,
0.07547169311498787,
0,
0.4999999950781251,
0.23255813486208768,
0.24390243452706728,
0.5862068915755053
] | [
"We propose a scalable method to approximate the eigenvectors of the Laplacian in the reinforcement learning context and we show that the learned representations can improve the performance of an RL agent."
] | The Laplacian in RL: Learning Representations with Efficient Approximations | HJlNpoA5YQ | [
"learn representation",
"rl",
"laplacian"
] |
[
"Our work offers a new method for domain translation from semantic label maps\n",
"and Computer Graphic (CG) simulation edge map images to photo-realistic im-\n",
"ages.",
"We train a Generative Adversarial Network (GAN) in a conditional way to\n",
"generate a photo-realistic version of a given CG scene.",
... | [
0,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.0952380905215422,
0.3157894688088643,
0.1052631530193908,
0,
0,
0.09999999520000023,
0.08695651720226867,
0.1052631530193908,
0.23529411266435996
] | [
"Simulation to real images translation and video generation"
] | S-Flow GAN | BJxqohNFPB | [] |
[
"Deep neural networks are widely used in various domains, but the prohibitive computational complexity prevents their deployment on mobile devices.",
"Numerous model compression algorithms have been proposed, however, it is often difficult and time-consuming to choose proper hyper-parameters to obtain an efficien... | [
0,
0,
1,
0,
0,
0
] | [
0.19999999500000015,
0.23809523310657604,
0.5142857093877552,
0.26086956030245756,
0.2999999950000001,
0.06666666222222252
] | [
"We propose PocketFlow, an automated framework for model compression and acceleration, to facilitate deep learning models' deployment on mobile devices."
] | PocketFlow: An Automated Framework for Compressing and Accelerating Deep Neural Networks | H1fWoYhdim | [
"pocketflow automate framework",
"deep"
] |
[
"Generative models provide a way to model structure in complex distributions and have been shown to be useful for many tasks of practical interest.",
"However, current techniques for training generative models require access to fully-observed samples.",
"In many settings, it is expensive or even impossible to o... | [
0,
0,
1,
0,
0,
0,
0,
0
] | [
0.06249999595703151,
0.09523809034013632,
0.14814814370370383,
0.0740740696296299,
0,
0,
0,
0
] | [
"How to learn GANs from noisy, distorted, partial observations"
] | AmbientGAN: Generative models from lossy measurements | Hy7fDog0b | [] |
[
"Random Matrix Theory (RMT) is applied to analyze the weight matrices of Deep Neural Networks (DNNs), including both production quality, pre-trained models such as AlexNet and Inception, and smaller models trained from scratch, such as LeNet5 and a miniature-AlexNet. ",
"Empirical and theoretical results clearly... | [
1,
0,
0,
0,
0,
0,
0
] | [
0.1666666618055557,
0.09677418873569223,
0.07272726776859538,
0.08163264806330726,
0.12499999500868073,
0.1538461492439186,
0.1568627400999617
] | [
"See the abstract. (For the revision, the paper is identical, except for a 59-page Supplementary Material, which can serve as a stand-alone technical report version of the paper.)"
] | Traditional and Heavy Tailed Self Regularization in Neural Network Models | SJeFNoRcFQ | [] |
[
"We introduce an attention mechanism to improve feature extraction for deep active learning (AL) in the semi-supervised setting.",
"The proposed attention mechanism is based on recent methods to visually explain predictions made by DNNs.",
"We apply the proposed explanation-based attention to MNIST and SVHN cla... | [
1,
0,
0,
0
] | [
0.999999995,
0.17647058325259532,
0.2758620642568371,
0.13953487885343446
] | [
"We introduce an attention mechanism to improve feature extraction for deep active learning (AL) in the semi-supervised setting."
] | Explanation-Based Attention for Semi-Supervised Deep Active Learning | SyxKiVmedV | [
"deep active learning",
"semi-supervised",
"attention"
] |
[
"We apply canonical forms of gradient complexes (barcodes) to explore neural networks loss surfaces.",
"We present an algorithm for calculations of the objective function's barcodes of minima. ",
"Our experiments confirm two principal observations: (1) the barcodes of minima are located in a small lower part o... | [
1,
0,
0,
0
] | [
0.999999995,
0.14814814315500704,
0.08888888460246935,
0.14814814315500704
] | [
"We apply canonical forms of gradient complexes (barcodes) to explore neural networks loss surfaces."
] | Barcodes as summary of objective functions' topology | S1gwC1StwS | [
"barcode"
] |
[
"\nNew types of compute hardware in development and entering the market hold the promise of revolutionizing deep learning in a manner as profound as GPUs.",
"However, existing software frameworks and training algorithms for deep learning have yet to evolve to fully leverage the capability of the new wave of silic... | [
0,
0,
0,
1,
0,
0
] | [
0,
0.12499999570312517,
0.11428571020408178,
0.14285713826530627,
0.07843136939638615,
0
] | [
"Using asynchronous gradient updates to accelerate dynamic neural network training"
] | AMPNet: Asynchronous Model-Parallel Training for Dynamic Neural Networks | HJnQJXbC- | [
"dynamic neural network",
"asynchronous",
"training"
] |
[
"In cooperative multi-agent reinforcement learning (MARL), how to design a suitable reward signal to accelerate learning and stabilize convergence is a critical problem.",
"The global reward signal assigns the same global reward to all agents without distinguishing their contributions, while the local reward sign... | [
0,
0,
0,
1,
0,
0,
0,
0,
0
] | [
0.19607842660515198,
0.2807017494244384,
0.07692307210798846,
0.5106382933816206,
0.17391303908317593,
0.15999999528800013,
0.166666662092014,
0.1499999965125001,
0.21428570934311236
] | [
"We study reward design problem in cooperative MARL based on packet routing environments. The experimental results remind us to be careful to design the rewards, as they are really important to guide the agent behavior."
] | Reward Design in Cooperative Multi-agent Reinforcement Learning for Packet Routing | r15kjpHa- | [
"packet routing",
"cooperative",
"reward design"
] |
[
"Recent advances have illustrated that it is often possible to learn to solve linear inverse problems in imaging using training data that can outperform more traditional regularized least squares solutions.",
"Along these lines, we present some extensions of the Neumann network, a recently introduced end-to-end l... | [
0,
0,
1,
0
] | [
0.2641509384122464,
0.11111110613854619,
0.3333333283420139,
0.3255813904813413
] | [
"Neumann networks are an end-to-end, sample-efficient learning approach to solving linear inverse problems in imaging that are compatible with the MSE optimal approach and admit an extension to patch-based learning."
] | Learning to Solve Linear Inverse Problems in Imaging with Neumann Networks | SyxYnQ398H | [
"solve linear inverse problem image",
"neumann network",
"learn"
] |
[
"End-to-end task-oriented dialogue is challenging since knowledge bases are usually large, dynamic and hard to incorporate into a learning framework.",
"We propose the global-to-local memory pointer (GLMP) networks to address this issue.",
"In our model, a global memory encoder and a local memory decoder are pr... | [
0,
0,
1,
0,
0,
0,
0,
0
] | [
0.26086956030245756,
0.10526315357340738,
0.5714285667120181,
0.24999999545000007,
0.11111110709876558,
0.26086956030245756,
0.0952380905215422,
0.11320754217159153
] | [
"GLMP: Global memory encoder (context RNN, global pointer) and local memory decoder (sketch RNN, local pointer) that share external knowledge (MemNN) are proposed to strengthen response generation in task-oriented dialogue."
] | Global-to-local Memory Pointer Networks for Task-Oriented Dialogue | ryxnHhRqFm | [
"task-oriented dialogue",
"memory",
"pointer"
] |
[
"The checkerboard phenomenon is one of the well-known visual artifacts in the computer vision field.",
"The origins and solutions of checkerboard artifacts in the pixel space have been studied for a long time, but their effects on the gradient space have rarely been investigated.",
"In this paper, we revisit th... | [
0,
0,
0,
1,
0,
0,
0,
0
] | [
0.052631574293629226,
0.20408162765514384,
0.217391299357278,
0.27906976250946464,
0.1463414585603809,
0.23255813460248795,
0.09523809034013632,
0.08695651674858251
] | [
"We propose a novel aritificial checkerboard enhancer (ACE) module which guides attacks to a pre-specified pixel space and successfully defends it with a simple padding operation."
] | ACE: Artificial Checkerboard Enhancer to Induce and Evade Adversarial Attacks | BJlc6iA5YX | [
"checkerboard enhancer",
"ace",
"attack"
] |
[
"Min-max formulations have attracted great attention in the ML community due to the rise of deep generative models and adversarial methods, and understanding the dynamics of (stochastic) gradient algorithms for solving such formulations has been a grand challenge.",
"As a first step, we restrict to bilinear zero-... | [
0,
1,
0,
0
] | [
0.26415093869704526,
0.4186046461871282,
0.2941176422145329,
0.1714285665306124
] | [
"We systematically analyze the convergence behaviour of popular gradient algorithms for solving bilinear games, with both simultaneous and alternating updates."
] | Convergence Behaviour of Some Gradient-Based Methods on Bilinear Zero-Sum Games | SJlVY04FwH | [
"convergence behaviour",
"bilinear",
"game"
] |
[
"Most approaches in generalized zero-shot learning rely on cross-modal mapping between an image feature space and a class embedding space or on generating artificial image features.",
"However, learning a shared cross-modal embedding by aligning the latent spaces of modality-specific autoencoders is shown to be p... | [
1,
0,
0,
0,
0
] | [
0.47826086456521744,
0.35555555055802474,
0.2539682493323256,
0.34042552691715716,
0.24390243409875084
] | [
"We use VAEs to learn a shared latent space embedding between image features and attributes and thereby achieve state-of-the-art results in generalized zero-shot learning."
] | Cross-Linked Variational Autoencoders for Generalized Zero-Shot Learning | BkghJoRNO4 | [
"generalized zero-shot learning"
] |
[
"Intuitively, image classification should profit from using spatial information.",
"Recent work, however, suggests that this might be overrated in standard CNNs.",
"In this paper, we are pushing the envelope and aim to further investigate the reliance on and necessity of spatial information.",
"We propose and... | [
1,
0,
0,
0,
0,
0
] | [
0.18181817698347122,
0,
0.06249999517578163,
0.060606055831038036,
0.05263157444598377,
0.16216215760409072
] | [
"Spatial information at the last layers is not necessary for good classification accuracy."
] | Spatial Information is Overrated for Image Classification | H1l7AkrFPS | [
"spatial information",
"classification"
] |
[
"Disentangling underlying generative factors of a data distribution is important for interpretability and generalizable representations.",
"In this paper, we introduce two novel disentangling methods.",
"Our first method, Unlabeled Disentangling GAN (UD-GAN, unsupervised), decomposes the latent noise by genera... | [
0,
0,
0,
0,
0,
0,
0,
1
] | [
0.12903225306971924,
0,
0.08695651720226867,
0.0769230721893494,
0.05405404914536203,
0.275862064019025,
0.16666666172839517,
0.2777777728395062
] | [
"We use Siamese Networks to guide and disentangle the generation process in GANs without labeled data."
] | Unlabeled Disentangling of GANs with Guided Siamese Networks | H1e0-30qKm | [
"siamese network",
"guide",
"gan"
] |
[
"We present Predicted Variables, an approach to making machine learning (ML) a first class citizen in programming languages.\n",
"There is a growing divide in approaches to building systems: using human experts (e.g. programming) on the one hand, and using behavior learned from data (e.g. ML) on the other hand.",... | [
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.9444444394598766,
0.13333332863209893,
0.19999999508888905,
0.12903225311134256,
0.05882352441176514,
0.13333332863209893,
0.14999999511250017,
0.04444443974321038,
0.06451612407908468,
0,
0.06060605561065239,
0.1249999954253474
] | [
"We present Predicted Variables, an approach to making machine learning a first class citizen in programming languages."
] | Predicted Variables in Programming | B1epooR5FX | [
"predict variable",
"programming"
] |
[
"Much recent research has been devoted to video prediction and generation, but mostly for short-scale time horizons.",
"The hierarchical video prediction method by Villegas et al. (2017) is an example of a state of the art method for long term video prediction. ",
"However, their method has limited applicabil... | [
0,
0,
0,
1,
0,
0
] | [
0.19354838214360054,
0.2222222174691359,
0.09756097111243328,
0.33333332835555557,
0.1568627411149559,
0.1333333290469137
] | [
"We show ways to train a hierarchical video prediction model without needing pose labels."
] | Unsupervised Hierarchical Video Prediction | rkmtTJZCb | [
"hierarchical video prediction"
] |
[
"Combining information from different sensory modalities to execute goal directed actions is a key aspect of human intelligence.",
"Specifically, human agents are very easily able to translate the task communicated in one sensory domain (say vision) into a representation that enables them to complete this task wh... | [
0,
0,
0,
1,
0,
0,
0
] | [
0.12765956974196485,
0.21212120719467414,
0.313725485290273,
0.3870967692143601,
0.19512194707911967,
0.13636363186983486,
0.21818181319669433
] | [
"In this work, we study the problem of learning representations to identify novel objects by exploring objects using tactile sensing. Key point here is that the query is provided in image domain."
] | Classification in the dark using tactile exploration | B1lXGnRctX | [
"tactile"
] |
[
"Locality sensitive hashing schemes such as \\simhash provide compact representations of multisets from which similarity can be estimated.",
"However, in certain applications, we need to estimate the similarity of dynamically changing sets. ",
"In this case, we need the representation to be a homomorphism so t... | [
0,
0,
0,
0,
1,
0,
0,
0,
0
] | [
0.08333332864583361,
0.13333332888888905,
0.21428570931122462,
0.20689654673008337,
0.5312499950195313,
0.4067796560183855,
0.23728813059465684,
0.28985506754883433,
0.21739129981096417
] | [
"We employ linear homomorphic compression schemes to represent the sufficient statistics of a conditional random field model of coreference and this allows us to scale inference and improve speed by an order of magnitude."
] | Scaling Hierarchical Coreference with Homomorphic Compression | H1gwRx5T6Q | [
"coreference",
"homomorphic compression",
"scale"
] |
[
"Motivated by applications to unsupervised learning, we consider the problem of measuring mutual information.",
"Recent analysis has shown that naive kNN estimators of mutual information have serious statistical limitations motivating more refined methods.",
"In this paper we prove that serious statistical limi... | [
0,
0,
0,
0,
0,
0,
1,
0
] | [
0.3076923027218935,
0.2580645113839751,
0.07407406913580279,
0.2051282008678502,
0.17391303962192828,
0,
0.4285714236734694,
0.1714285669224491
] | [
"We give a theoretical analysis of the measurement and optimization of mutual information."
] | Formal Limitations on the Measurement of Mutual Information | BkedwoC5t7 | [
"measurement",
"mutual information"
] |
[
"In this paper, we propose a neural network framework called neuron hierarchical network (NHN), that evolves beyond the hierarchy in layers, and concentrates on the hierarchy of neurons.",
"We observe mass redundancy in the weights of both handcrafted and randomly searched architectures.",
"Inspired by the deve... | [
1,
0,
0,
0,
0
] | [
0.3018867874688502,
0.14285713841269856,
0.18518518019204405,
0.23999999507200007,
0.25454544954710745
] | [
"By breaking the layer hierarchy, we propose a 3-step approach to the construction of neuron-hierarchy networks that outperform NAS, SMASH and hierarchical representation with fewer parameters and shorter searching time."
] | Neuron Hierarchical Networks | rylxrsR9Fm | [
"network",
"hierarchical"
] |
[
"Simulation is a useful tool in situations where training data for machine learning models is costly to annotate or even hard to acquire.",
"In this work, we propose a reinforcement learning-based method for automatically adjusting the parameters of any (non-differentiable) simulator, thereby controlling the dist... | [
0,
1,
0,
0,
0,
0
] | [
0.266666661688889,
0.357142852244898,
0.24999999531250006,
0.13793103162901313,
0.16666666222222234,
0.19047618557823143
] | [
"We propose an algorithm that automatically adjusts parameters of a simulation engine to generate training data for a neural network such that validation accuracy is maximized."
] | Learning To Simulate | HJgkx2Aqt7 | [] |
[
"Modelling statistical relationships beyond the conditional mean is crucial in many settings.",
"Conditional density estimation (CDE) aims to learn the full conditional probability density from data.",
"Though highly expressive, neural network based CDE models can suffer from severe over-fitting when trained wi... | [
0,
1,
0,
0,
0,
0,
0,
0
] | [
0.09090908595041348,
0.26086956030245756,
0.06666666222222252,
0.0740740694101512,
0.1874999957031251,
0.0769230721893494,
0.071428566836735,
0.11428571020408178
] | [
"A model-agnostic regularization scheme for neural network-based conditional density estimation."
] | Noise Regularization for Conditional Density Estimation | rygtPhVtDS | [
"conditional density estimation",
"regularization"
] |
[
"Unsupervised representation learning holds the promise of exploiting large amount of available unlabeled data to learn general representations.",
"A promising technique for unsupervised learning is the framework of Variational Auto-encoders (VAEs).",
"However, unsupervised representations learned by VAEs are s... | [
0,
0,
0,
0,
0,
1,
0
] | [
0.05882352441176514,
0.2666666617555556,
0.2666666617555556,
0.2564102514924393,
0.05714285214693921,
0.49999999501953135,
0.21052631084487544
] | [
"A patch-based bottleneck formulation in a VAE framework that learns unsupervised representations better suited for visual recognition."
] | PatchVAE: Learning Local Latent Codes for Recognition | r1x1kJHKDH | [
"learn",
"recognition"
] |
[
"Vanishing and exploding gradients are two of the main obstacles in training deep neural networks, especially in capturing long range dependencies in recurrent neural networks (RNNs).",
"In this paper, we present an efficient parametrization of the transition matrix of an RNN that allows us to stabilize the gradi... | [
0,
1,
0,
0,
0,
0,
0,
0
] | [
0.12244897461057913,
0.40816326032486466,
0.2127659525033953,
0.08163264807996698,
0.21428570931122462,
0.07547169311498787,
0.24999999503472223,
0.24999999503472223
] | [
"To solve the gradient vanishing/exploding problems, we proprose an efficient parametrization of the transition matrix of RNN that loses no expressive power, converges faster and has good generalization."
] | Stabilizing Gradients for Deep Neural Networks via Efficient SVD Parameterization | SyL9u-WA- | [
"gradient",
"efficient"
] |
[
"Recent image style transferring methods achieved arbitrary stylization with input content and style images.",
"To transfer the style of an arbitrary image to a content image, these methods used a feed-forward network with a lowest-scaled feature transformer or a cascade of the networks with a feature transformer... | [
0,
1,
0,
0,
0,
0,
0,
0
] | [
0.14285713788265325,
0.2926829221891732,
0.16216215734112505,
0.20689654673008337,
0.21052631101108046,
0.18604650708491086,
0.19354838210197725,
0.12499999501953145
] | [
"A paper suggesting a method to transform the style of images using deep neural networks."
] | Total Style Transfer with a Single Feed-Forward Network | BJ4AFsRcFQ | [
"style",
"network"
] |
[
"Beyond understanding what is being discussed, human communication requires an awareness of what someone is feeling.",
"One challenge for dialogue agents is recognizing feelings in the conversation partner and replying accordingly, a key communicative skill that is trivial for humans.",
"Research in this area i... | [
0,
0,
0,
1,
0,
0
] | [
0.049999995450000424,
0.16666666170138905,
0.21739129943289237,
0.3921568577470204,
0.15384614904615398,
0.2692307642307693
] | [
"We improve existing dialogue systems for responding to people sharing personal stories, incorporating emotion prediction representations and also release a new benchmark and dataset of empathetic dialogues."
] | I Know the Feeling: Learning to Converse with Empathy | HyesW2C9YQ | [] |
[
"Granger causality is a widely-used criterion for analyzing interactions in large-scale networks.",
"As most physical interactions are inherently nonlinear, we consider the problem of inferring the existence of pairwise Granger causality between nonlinearly interacting stochastic processes from their time series ... | [
0,
1,
0,
0,
0,
0,
0,
0
] | [
0.20689654687277062,
0.3636363588946281,
0.09999999511250024,
0.17021276133997296,
0,
0.10256409764628557,
0.04347825620983037,
0.21739129968809084
] | [
"A new recurrent neural network architecture for detecting pairwise Granger causality between nonlinearly interacting time series. "
] | Economy Statistical Recurrent Units For Inferring Nonlinear Granger Causality | SyxV9ANFDH | [
"granger causality",
"recurrent"
] |
[
"Graph convolutional networks (GCNs) are powerful deep neural networks for graph-structured data.",
"However, GCN computes nodes' representation recursively from their neighbors, making the receptive field size grow exponentially with the number of layers. ",
"Previous attempts on reducing the receptive field ... | [
0,
0,
0,
0,
0,
1,
0
] | [
0.18181817737373748,
0.13953487872363457,
0.2448979542357352,
0.3255813903515414,
0.05405404923301723,
0.35555555055802474,
0.14999999505000017
] | [
"A control variate based stochastic training algorithm for graph convolutional networks that the receptive field can be only two neighbors per node."
] | Stochastic Training of Graph Convolutional Networks | rylejExC- | [
"graph convolutional network",
"stochastic training"
] |
[
"Low bit-width integer weights and activations are very important for efficient inference, especially with respect to lower power consumption.",
"We propose to apply Monte Carlo methods and importance sampling to sparsify and quantize pre-trained neural networks without any retraining.",
"We obtain sparse, low ... | [
0,
1,
0,
0,
0,
0
] | [
0.06666666202222254,
0.41379309873959574,
0,
0,
0.10256409851413557,
0.18749999548828133
] | [
"Monte Carlo methods for quantizing pre-trained models without any additional training."
] | Instant Quantization of Neural Networks using Monte Carlo Methods | B1e5NySKwH | [
"monte carlo method"
] |
[
"We propose the Information Maximization Autoencoder (IMAE), an information theoretic approach to simultaneously learn continuous and discrete representations in an unsupervised setting.",
"Unlike the Variational Autoencoder framework, IMAE starts from a stochastic encoder that seeks to map each input data to a h... | [
1,
0,
0,
0
] | [
0.3529411717474049,
0.26666666255802474,
0.14999999561250013,
0.24999999561250005
] | [
"Information-theoretic approach for unsupervised learning of a hybrid of discrete and continuous representations."
] | INFORMATION MAXIMIZATION AUTO-ENCODING | SyVpB2RqFX | [
"information"
] |
[
"Learning rules for neural networks necessarily include some form of regularization.",
"Most regularization techniques are conceptualized and implemented in the space of parameters.",
"However, it is also possible to regularize in the space of functions.",
"Here, we propose to measure networks in an $L^2$ Hil... | [
0,
0,
0,
1,
0,
0,
0
] | [
0,
0.11111110666666683,
0.16666666222222234,
0.3076923027218935,
0.11320754221431137,
0.1481481432098767,
0.10810810355003672
] | [
"It's important to consider optimization in function space, not just parameter space. We introduce a learning rule that reduces distance traveled in function space, just like SGD limits distance traveled in parameter space."
] | Improving generalization by regularizing in $L^2$ function space | H1l8sz-AW | [
"function space"
] |
[
"Stochastic gradient descent (SGD), which dates back to the 1950s, is one of the most popular and effective approaches for performing stochastic optimization.",
"Research on SGD resurged recently in machine learning for optimizing convex loss functions and training nonconvex deep neural networks.",
"The theory ... | [
1,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.3076923027744905,
0.22222221723765442,
0.13953487893996774,
0.09090908616735562,
0.06249999501953165,
0.10256409764628557,
0.07999999564800023,
0.06896551239001224,
0.13953487893996774
] | [
"Convergence theory for biased (but consistent) gradient estimators in stochastic optimization and application to graph convolutional networks"
] | Stochastic Gradient Descent with Biased but Consistent Gradient Estimators | rygMWT4twS | [
"consistent gradient estimator",
"stochastic"
] |
[
"We consider the problem of uncertainty estimation in the context of (non-Bayesian) deep neural classification.",
"In this context, all known methods are based on extracting uncertainty signals from a trained network optimized to solve the classification problem at hand.",
"We demonstrate that such techniques t... | [
1,
0,
0,
0,
0,
0
] | [
0.3333333284222223,
0.2439024341701369,
0.11111110612654343,
0.24999999511250007,
0.1960784269281047,
0.16666666168209893
] | [
"We use snapshots from the training process to improve any uncertainty estimation method of a DNN classifier."
] | Bias-Reduced Uncertainty Estimation for Deep Neural Classifiers | SJfb5jCqKm | [
"uncertainty estimation",
"classifier"
] |
[
"Existing public face image datasets are strongly biased toward Caucasian faces, and other races (e.g., Latino) are significantly underrepresented.",
"The models trained from such datasets suffer from inconsistent classification accuracy, which limits the applicability of face analytic systems to non-White race g... | [
0,
0,
0,
0,
1,
0,
0,
0
] | [
0.15789473185595584,
0.09999999505000023,
0.2926829219036289,
0.060606055647383326,
0.30303029807162535,
0.11428570928979613,
0.09756097068411684,
0.2777777727777779
] | [
"A new face image dataset for balanced race, gender, and age which can be used for bias measurement and mitigation"
] | FairFace: A Novel Face Attribute Dataset for Bias Measurement and Mitigation | S1xSSTNKDB | [
"bias measurement mitigation",
"dataset",
"face"
] |
[
"Dramatic advances in generative models have resulted in near photographic quality for artificially rendered faces, animals and other objects in the natural world.",
"In spite of such advances, a higher level understanding of vision and imagery does not arise from exhaustively modeling an object, but instead ide... | [
0,
0,
1,
0,
0,
0
] | [
0.13636363137396712,
0.1090909042247936,
0.6976744136289887,
0.25531914393843375,
0.27272726773760336,
0.24999999511250007
] | [
"We attempt to model the drawing process of fonts by building sequential generative models of vector graphics (SVGs), a highly structured representation of font characters."
] | A Learned Representation for Scalable Vector Graphics | rklf4IUtOE | [
"vector graphic",
"representation"
] |
[
"What can we learn about the functional organization of cortical microcircuits from large-scale recordings of neural activity? ",
"To obtain an explicit and interpretable model of time-dependent functional connections between neurons and to establish the dynamics of the cortical information flow, we develop 'dyn... | [
0,
1,
0
] | [
0.15789473196675916,
0.36734693382757183,
0.36734693382757183
] | [
"We develop 'dynamic neural relational inference', a variational autoencoder model that can explicitly and interpretably represent the hidden dynamic relations between neurons."
] | Unsupervised Discovery of Dynamic Neural Circuits | S1leV7t8IB | [
"dynamic neural"
] |
[
"DeePa is a deep learning framework that explores parallelism in all parallelizable dimensions to accelerate the training process of convolutional neural networks.",
"DeePa optimizes parallelism at the granularity of each individual layer in the network.",
"We present an elimination-based algorithm that finds a... | [
1,
0,
0,
0
] | [
0.541666661701389,
0.5263157851523547,
0.15384614940170957,
0.20833332836805565
] | [
"To the best of our knowledge, DeePa is the first deep learning framework that controls and optimizes the parallelism of CNNs in all parallelizable dimensions at the granularity of each layer."
] | Exploring the Hidden Dimension in Accelerating Convolutional Neural Networks | SJCPLLpaW | [
"dimension"
] |
[
"One can substitute each neuron in any neural network with a kernel machine and obtain a counterpart powered by kernel machines.",
"The new network inherits the expressive power and architecture of the original but works in a more intuitive way since each node enjoys the simple interpretation as a hyperplane (in ... | [
0,
0,
0,
1,
0,
0
] | [
0.18604650669551123,
0.14545454053553736,
0.19607842638985018,
0.27906976250946464,
0.22222221728395072,
0
] | [
"We combine kernel method with connectionist models and show that the resulting deep architectures can be trained layer-wise and have more transparent learning dynamics. "
] | Learning Backpropagation-Free Deep Architectures with Kernels | H1GLm2R9Km | [
"deep architecture",
"kernel"
] |
[
"Many notions of fairness may be expressed as linear constraints, and the resulting constrained objective is often optimized by transforming the problem into its Lagrangian dual with additive linear penalties.",
"In non-convex settings, the resulting problem may be difficult to solve as the Lagrangian is not guar... | [
0,
0,
1,
0,
0,
0
] | [
0.13043477784499072,
0.14999999505000017,
0.37209301838831804,
0.2173912995841211,
0.22222221722222232,
0.05714285214693921
] | [
"We propose a method to stochastically optimize second-order penalties and show how this may apply to training fairness-aware classifiers."
] | Stochastic Learning of Additive Second-Order Penalties with Applications to Fairness | Bke0rjR5F7 | [
"second-order penalty"
] |
[
"Training methods for deep networks are primarily variants on stochastic gradient descent. ",
"Techniques that use (approximate) second-order information are rarely used because of the computational cost and noise associated with those approaches in deep learning contexts. ",
"However, in this paper, we show ... | [
0,
0,
1,
0,
0,
0,
0,
0
] | [
0.048780483474122935,
0.3018867874688502,
0.36363635900826446,
0.3076923027218935,
0.19230768733727824,
0.2916666618055556,
0.048780483474122935,
0.25925925426611807
] | [
"We show that deep learning network derivatives have a low-rank structure, and this structure allows us to use second-order derivative information to calculate learning rates adaptively and in a computationally feasible manner."
] | Understanding and Exploiting the Low-Rank Structure of Deep Networks | ByJ7obb0b | [
"low-rank structure",
"deep",
"network"
] |
[
"The worst-case training principle that minimizes the maximal adversarial loss, also known as adversarial training (AT), has shown to be a state-of-the-art approach for enhancing adversarial robustness against norm-ball bounded input perturbations.",
"Nonetheless, min-max optimization beyond the purpose of AT has... | [
0,
1,
0,
0,
0,
0,
0
] | [
0.10256409875082198,
0.4137930989298454,
0.05405405010956932,
0.11111110709876558,
0.35714285255102046,
0.16666666265432112,
0
] | [
"A unified min-max optimization framework for adversarial attack and defense"
] | Towards A Unified Min-Max Framework for Adversarial Exploration and Robustness | S1eik6EtPB | [
"framework adversarial",
"min-max"
] |
[
"Most deep learning models rely on expressive high-dimensional representations to achieve good performance on tasks such as classification.",
"However, the high dimensionality of these representations makes them difficult to interpret and prone to over-fitting.",
"We propose a simple, intuitive and scalable dim... | [
0,
0,
1,
0,
0,
0
] | [
0.06666666175555591,
0.071428566454082,
0.16216215760409072,
0.06249999517578163,
0,
0.07692307192307725
] | [
"dimensionality reduction for cases where examples can be represented as soft probability distributions"
] | Dimensionality Reduction for Representing the Knowledge of Probabilistic Models | SygD-hCcF7 | [
"dimensionality reduction",
"represent"
] |
[
"Intrinsically motivated goal exploration algorithms enable machines to discover repertoires of policies that produce a diversity of effects in complex environments.",
"These exploration algorithms have been shown to allow real world robots to acquire skills such as tool use in high-dimensional continuous state a... | [
1,
0,
0,
0,
0,
0
] | [
0.3043478211720227,
0.039999995008000624,
0.043478255954631936,
0.18604650684694443,
0.1587301538825902,
0.1754385915297016
] | [
"We propose a novel Intrinsically Motivated Goal Exploration architecture with unsupervised learning of goal space representations, and evaluate how various implementations enable the discovery of a diversity of policies."
] | Unsupervised Learning of Goal Spaces for Intrinsically Motivated Goal Exploration | S1DWPP1A- | [
"intrinsically motivate goal exploration",
"learning",
"space"
] |
[
"One of the main challenges of deep learning methods is the choice of an appropriate training strategy.",
"In particular, additional steps, such as unsupervised pre-training, have been shown to greatly improve the performances of deep structures.",
"In this article, we propose an extra training step, called pos... | [
0,
0,
1,
0,
0,
0
] | [
0.24999999507812506,
0.1621621571658146,
0.648648643652301,
0.2399999953920001,
0.15789473185595584,
0
] | [
"We propose an additional training step, called post-training, which computes optimal weights for the last layer of the network."
] | Post-training for Deep Learning | H1O0KGC6b | [] |
[
"Natural language processing (NLP) models often require a massive number of parameters for word embeddings, resulting in a large storage or memory footprint.",
"Deploying neural NLP models to mobile devices requires compressing the word embeddings without any significant sacrifices in performance.",
"For this p... | [
0,
1,
0,
0,
0,
0,
0,
0,
0,
0
] | [
0.06451612491155072,
0.370370365925926,
0.18181817698347122,
0.0869565169754256,
0.08333332864583361,
0.06060605663911872,
0.07692307239644997,
0.25806451200832464,
0.1666666619791668,
0.06896551296076128
] | [
"Compressing the word embeddings over 94% without hurting the performance."
] | Compressing Word Embeddings via Deep Compositional Code Learning | BJRZzFlRb | [
"word embedding",
"compress"
] |
[
"It is important to detect anomalous inputs when deploying machine learning systems.",
"The use of larger and more complex inputs in deep learning magnifies the difficulty of distinguishing between anomalous and in-distribution examples.",
"At the same time, diverse image and text data are available in enormous... | [
0,
0,
0,
0,
1,
0,
0,
0
] | [
0.04444444053333368,
0.07692307228550324,
0.12765957028519706,
0.13793102957788364,
0.23255813596538674,
0.18518518043209886,
0.13793102957788364,
0.03999999551200051
] | [
"OE teaches anomaly detectors to learn heuristics for detecting unseen anomalies; experiments are in classification, density estimation, and calibration in NLP and vision settings; we do not tune on test distribution samples, unlike previous work"
] | Deep Anomaly Detection with Outlier Exposure | HyxCxhRcY7 | [
"anomaly"
] |
[
"While generative neural networks can learn to transform a specific input dataset into a specific target dataset, they require having just such a paired set of input/output datasets.",
"For instance, to fool the discriminator, a generative adversarial network (GAN) exclusively trained to transform images of black... | [
0,
0,
1,
0,
0,
0,
0,
0,
0,
0
] | [
0.20833332834201398,
0.07272726786115735,
0.285714280733028,
0.09523809028344697,
0.13333332833580264,
0.13043477760869585,
0.16666666167534736,
0.12499999500868073,
0.2162162115120527,
0.1454545405884299
] | [
"A method for learning a transformation between one pair of source/target datasets and applying it a separate source dataset for which there is no target dataset"
] | Beyond GANs: Transforming without a Target Distribution | H1lDSCEYPH | [
"target"
] |
[
"In this paper, we propose to combine imitation and reinforcement learning via the idea of reward shaping using an oracle.",
"We study the effectiveness of the near- optimal cost-to-go oracle on the planning horizon and demonstrate that the cost- to-go oracle shortens the learner’s planning horizon as function of... | [
1,
0,
0,
0,
0
] | [
0.19999999555555567,
0.09523809256739742,
0.17391303856332718,
0.04651162433748,
0.17142856734693887
] | [
"Combining Imitation Learning and Reinforcement Learning to learn to outperform the expert"
] | TRUNCATED HORIZON POLICY SEARCH: COMBINING REINFORCEMENT LEARNING & IMITATION LEARNING | ryUlhzWCZ | [
"reinforcement learning",
"learn",
"imitation",
"combine"
] |
[
"Recently, Generative Adversarial Networks (GANs) have emerged as a popular alternative for modeling complex high dimensional distributions.",
"Most of the existing works implicitly assume that the clean samples from the target distribution are easily available.",
"However, in many applications, this assumption... | [
0,
0,
0,
1,
0,
0
] | [
0.06896551239001224,
0.0714285665306126,
0,
0.2857142816326531,
0.0714285665306126,
0.05128204702169661
] | [
"An unsupervised learning approach for separating two structured signals from their superposition"
] | Unsupervised Demixing of Structured Signals from Their Superposition Using GANs | BygbVL8KO4 | [
"structured signal superposition"
] |