Column schema of the ~41k-row dataset preview (min/max are value ranges for int64 columns and string-length ranges for string columns):

column        dtype          min     max
Unnamed: 0.1  int64          0       41k
Unnamed: 0    int64          0       41k
author        stringlengths  9       1.39k
id            stringlengths  11      18
summary       stringlengths  25      3.66k
title         stringlengths  4       258
year          int64          1.99k   2.02k
arxiv_url     stringlengths  32      39
info          stringlengths  523     3.18k
embeddings    stringlengths  16.9k   17.1k
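A minimal loading sketch for this table, assuming it was exported as a CSV named "arxiv_papers.csv" (a hypothetical file name; substitute the real export). The two Unnamed columns are redundant positional indices, so they can be dropped:

```python
import pandas as pd

df = pd.read_csv("arxiv_papers.csv")  # hypothetical file name

# "Unnamed: 0.1" and "Unnamed: 0" are leftover positional indices (both
# run 0..41k) and carry no information beyond row order, so drop them.
df = df.drop(columns=["Unnamed: 0.1", "Unnamed: 0"], errors="ignore")

# year is stored as int64; the "2,015"-style commas in the raw preview
# are display formatting, not stored values.
print(df.dtypes)
print(df[["id", "title", "year"]].head())
```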
row 100
author: ['Peter Wittek', 'Sándor Darányi', 'Efstratios Kontopoulos', 'Theodoros Moysiadis', 'Ioannis Kompatsiaris']
id: 1502.01753v1
summary: Based on the Aristotelian concept of potentiality vs. actuality allowing for the study of energy and dynamics in language, we propose a field approach to lexical analysis. Falling back on the distributional hypothesis to statistically model word meaning, we used evolving fields as a metaphor to express time-dependent c...
title: Monitoring Term Drift Based on Semantic Consistency in an Evolving Vector Field
year: 2015
arxiv_url: http://arxiv.org/pdf/1502.01753v1
info: Title Monitoring Term Drift Based Semantic Consistency Evolving Vector Field Summary Based Aristotelian concept potentiality v actuality allowing study energy dynamic language propose field approach lexical analysis Falling back distributional hypothesis statistically model word meaning used evolving field metaphor exp...
embeddings: [0.030528267845511436, 0.04965444281697273, -0.031217029318213463, 0.022996678948402405, -0.03168976306915283, -0.0291778314858675, -0.01496312115341425, 0.028549542650580406, -0.07261703163385391, -0.051482826471328735, 0.042648691684007645, -0.005036256276071072, 0.02993091568350792, 0.008601434528827667, -0.00916272...
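The embeddings column stores each vector as a ~17k-character string (as in row 100 above), so it must be parsed before any numeric use. A sketch, assuming the stored strings are complete Python-style float-list literals; it reuses df from the loading snippet:

```python
import ast
import numpy as np

def parse_embedding(s: str) -> np.ndarray:
    # The stored value is a stringified list like "[0.0305..., 0.0496...]".
    return np.asarray(ast.literal_eval(s), dtype=np.float32)

# Stack into an (n_rows, dim) matrix; this assumes every row has the same
# dimensionality, which the near-constant 16.9k-17.1k string lengths suggest.
emb = np.stack(df["embeddings"].map(parse_embedding).to_list())
print(emb.shape)
```

Parsing once and caching the matrix (e.g. to an .npy file) avoids re-running ast.literal_eval over 41k long strings on every use.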
row 101
author: ['Jan Chorowski', 'Navdeep Jaitly']
id: 1612.02695v1
summary: The recently proposed Sequence-to-Sequence (seq2seq) framework advocates replacing complex data processing pipelines, such as an entire automatic speech recognition system, with a single neural network trained in an end-to-end fashion. In this contribution, we analyse an attention-based seq2seq speech recognition syste...
title: Towards better decoding and language model integration in sequence to sequence models
year: 2016
arxiv_url: http://arxiv.org/pdf/1612.02695v1
info: Title Towards better decoding language model integration sequence sequence model Summary recently proposed SequencetoSequence seq2seq framework advocate replacing complex data processing pipeline entire automatic speech recognition system single neural network trained endtoend fashion contribution analyse attentionbase...
embeddings: [0.02097560651600361, 0.07567127794027328, 0.01599881425499916, 0.03510940074920654, -0.020172972232103348, 0.010558419860899448, -0.004460576921701431, -0.018668076023459435, -0.033789921551942825, -0.0422428734600544, -0.02375395968556404, -0.04202241078019142, 0.060631562024354935, 0.06385961920022964, 0.02806039713...
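With the parsed matrix, cosine-similarity lookup gives a quick related-papers search over the table. A sketch reusing emb and df from the previous snippets; the positional index passed in is an assumption that depends on how the table was sliced:

```python
def most_similar(i: int, k: int = 5):
    # Normalize rows so the dot product equals cosine similarity.
    unit = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    sims = unit @ unit[i]
    order = np.argsort(-sims)          # most similar first
    return [j for j in order if j != i][:k]  # drop the query row itself

# e.g. neighbors of the seq2seq decoding paper (row 101 of this preview),
# assuming it sits at positional index 1 in the loaded frame.
for j in most_similar(1):
    print(df.iloc[j]["title"], df.iloc[j]["arxiv_url"])
```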
row 102
author: ['Dzmitry Bahdanau', 'Kyunghyun Cho', 'Yoshua Bengio']
id: 1409.0473v7
summary: Neural machine translation is a recently proposed approach to machine translation. Unlike the traditional statistical machine translation, the neural machine translation aims at building a single neural network that can be jointly tuned to maximize the translation performance. The models proposed recently for neural ma...
title: Neural Machine Translation by Jointly Learning to Align and Translate
year: 2014
arxiv_url: http://arxiv.org/pdf/1409.0473v7
info: Title Neural Machine Translation Jointly Learning Align Translate Summary Neural machine translation recently proposed approach machine translation Unlike traditional statistical machine translation neural machine translation aim building single neural network jointly tuned maximize translation performance model propos...
embeddings: [0.016232138499617577, 0.019071346148848534, -0.006837559398263693, 0.06399498879909515, -0.04284335672855377, 0.006947672460228205, 0.02146477811038494, -0.015255913138389587, -0.04788481444120407, -0.0093612689524889, -0.029340293258428574, -0.021650521084666252, 0.03916667401790619, 0.009066462516784668, 0.031225515...
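Comparing the summary and info fields (e.g. in row 102 above), info looks like "Title <title> Summary <summary>" with punctuation stripped inside words ("end-to-end" becomes "endtoend"), English stopwords dropped, and inflected forms lemmatized ("aims" becomes "aim", "models" becomes "model"). The exact pipeline behind the column is not documented; a plausible NLTK-based reconstruction:

```python
import re
import nltk
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer

nltk.download("stopwords", quiet=True)
nltk.download("wordnet", quiet=True)

stop = set(stopwords.words("english"))
lemma = WordNetLemmatizer()

def to_info(title: str, summary: str) -> str:
    text = f"Title {title} Summary {summary}"
    # Delete non-alphanumeric characters in place (no space inserted),
    # matching "SequencetoSequence" and "endtoend" in the stored values.
    tokens = re.sub(r"[^A-Za-z0-9 ]", "", text).split()
    kept = [lemma.lemmatize(t) for t in tokens if t.lower() not in stop]
    return " ".join(kept)

print(to_info(df.iloc[0]["title"], df.iloc[0]["summary"])[:200])
```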
row 103
author: ['Jean Pouget-Abadie', 'Dzmitry Bahdanau', 'Bart van Merrienboer', 'Kyunghyun Cho', 'Yoshua Bengio']
id: 1409.1257v2
summary: The authors of (Cho et al., 2014a) have shown that the recently introduced neural network translation systems suffer from a significant drop in translation quality when translating long sentences, unlike existing phrase-based translation systems. In this paper, we propose a way to address this issue by automatically se...
title: Overcoming the Curse of Sentence Length for Neural Machine Translation using Automatic Segmentation
year: 2014
arxiv_url: http://arxiv.org/pdf/1409.1257v2
info: Title Overcoming Curse Sentence Length Neural Machine Translation using Automatic Segmentation Summary author Cho et al 2014a shown recently introduced neural network translation system suffer significant drop translation quality translating long sentence unlike existing phrasebased translation system paper propose way...
embeddings: [0.020197276026010513, 0.029057249426841736, -0.0055040884763002396, 0.05541546642780304, -0.04097885265946388, 0.0034373069647699594, 0.04737241938710213, 0.004094036296010017, -0.045474689453840256, -0.0030410634353756905, 0.008909204974770546, 0.008727390319108963, 0.03498250991106033, 0.0009199061314575374, 0.03728...
row 104
author: ['William Chan', 'Nan Rosemary Ke', 'Ian Lane']
id: 1504.01483v1
summary: Deep Neural Network (DNN) acoustic models have yielded many state-of-the-art results in Automatic Speech Recognition (ASR) tasks. More recently, Recurrent Neural Network (RNN) models have been shown to outperform DNNs counterparts. However, state-of-the-art DNN and RNN models tend to be impractical to deploy on embedde...
title: Transferring Knowledge from a RNN to a DNN
year: 2015
arxiv_url: http://arxiv.org/pdf/1504.01483v1
info: Title Transferring Knowledge RNN DNN Summary Deep Neural Network DNN acoustic model yielded many stateoftheart result Automatic Speech Recognition ASR task recently Recurrent Neural Network RNN model shown outperform DNNs counterpart However stateoftheart DNN RNN model tend impractical deploy embedded system limited co...
embeddings: [-0.015455097891390324, 0.028654413297772408, 0.007837031036615372, 0.03152240067720413, -0.02903357334434986, -0.03145992010831833, 0.044474970549345016, -0.016366124153137207, -0.04448847472667694, 0.003005929524078965, -0.03364250808954239, 0.010486699640750885, 0.061562370508909225, 0.026221472769975662, 0.01799595...
row 105
author: ['Sarath Chandar', 'Mitesh M. Khapra', 'Hugo Larochelle', 'Balaraman Ravindran']
id: 1504.07225v3
summary: Common Representation Learning (CRL), wherein different descriptions (or views) of the data are embedded in a common subspace, is receiving a lot of attention recently. Two popular paradigms here are Canonical Correlation Analysis (CCA) based approaches and Autoencoder (AE) based approaches. CCA based approaches learn ...
title: Correlational Neural Networks
year: 2015
arxiv_url: http://arxiv.org/pdf/1504.07225v3
info: Title Correlational Neural Networks Summary Common Representation Learning CRL wherein different description view data embedded common subspace receiving lot attention recently Two popular paradigm Canonical Correlation Analysis CCA based approach Autoencoder AE based approach CCA based approach learn joint representat...
embeddings: [-0.012877211906015873, 0.0465271919965744, -0.013225826434791088, 0.03985082358121872, -0.031307924538850784, 0.03361591696739197, 0.038085710257291794, -0.0033907159231603146, 0.014290286228060722, 0.005074954591691494, -0.06664880365133286, -0.001258045551367104, 0.051832761615514755, 0.008261256851255894, 0.0092008...
row 106
author: ['Jan Chorowski', 'Dzmitry Bahdanau', 'Dmitriy Serdyuk', 'Kyunghyun Cho', 'Yoshua Bengio']
id: 1506.07503v1
summary: Recurrent sequence generators conditioned on input data through an attention mechanism have recently shown very good performance on a range of tasks including machine translation, handwriting synthesis and image caption generation. We extend the attention-mechanism with features needed for speech recognition. We sh...
title: Attention-Based Models for Speech Recognition
year: 2015
arxiv_url: http://arxiv.org/pdf/1506.07503v1
info: Title AttentionBased Models Speech Recognition Summary Recurrent sequence generator conditioned input data attention mechanism recently shown good performance range task cluding machine translation handwriting synthesis image caption gen eration extend attentionmechanism feature needed speech recognition show adaptatio...
embeddings: [0.019523048773407936, 0.0204450786113739, 0.008692066185176373, 0.04665157571434975, 0.0042520323768258095, -0.046689894050359726, 0.04492989555001259, -0.01992475427687168, -0.030953174456954002, -0.037043068557977676, -0.05595624819397926, -0.045688990503549576, 0.053314339369535446, 0.027763931080698967, 0.05054059...
row 107
author: ['Haşim Sak', 'Andrew Senior', 'Kanishka Rao', 'Françoise Beaufays']
id: 1507.06947v1
summary: We have recently shown that deep Long Short-Term Memory (LSTM) recurrent neural networks (RNNs) outperform feed forward deep neural networks (DNNs) as acoustic models for speech recognition. More recently, we have shown that the performance of sequence trained context dependent (CD) hidden Markov model (HMM) acoustic m...
title: Fast and Accurate Recurrent Neural Network Acoustic Models for Speech Recognition
year: 2015
arxiv_url: http://arxiv.org/pdf/1507.06947v1
info: Title Fast Accurate Recurrent Neural Network Acoustic Models Speech Recognition Summary recently shown deep Long ShortTerm Memory LSTM recurrent neural network RNNs outperform feed forward deep neural network DNNs acoustic model speech recognition recently shown performance sequence trained context dependent CD hidden ...
embeddings: [-0.008163961581885815, 0.01952042616903782, 0.02254086546599865, 0.06792142987251282, -0.011170904152095318, -0.03642510622739792, 0.03995650261640549, 0.010094176046550274, -0.057481907308101654, -0.03643953800201416, -0.05215294286608696, -0.04255299270153046, 0.07225282490253448, 0.014487023465335369, -0.0352041833...
row 108
author: ['William Chan', 'Navdeep Jaitly', 'Quoc V. Le', 'Oriol Vinyals']
id: 1508.01211v2
summary: We present Listen, Attend and Spell (LAS), a neural network that learns to transcribe speech utterances to characters. Unlike traditional DNN-HMM models, this model learns all the components of a speech recognizer jointly. Our system has two components: a listener and a speller. The listener is a pyramidal recurrent ne...
title: Listen, Attend and Spell
year: 2015
arxiv_url: http://arxiv.org/pdf/1508.01211v2
info: Title Listen Attend Spell Summary present Listen Attend Spell LAS neural network learns transcribe speech utterance character Unlike traditional DNNHMM model model learns component speech recognizer jointly system two component listener speller listener pyramidal recurrent network encoder accepts filter bank spectrum i...
embeddings: [-0.001111628138460219, 0.024756576865911484, 0.018808944150805473, 0.058023370802402496, -0.025091195479035378, -0.033588431775569916, 0.026153886690735817, -0.021866867318749428, -0.039743371307849884, -0.005591810680925846, -0.0414145402610302, -0.021855205297470093, 0.04940786957740784, 0.05602109432220459, 0.03506...
row 109
author: ['Shihao Ji', 'S. V. N. Vishwanathan', 'Nadathur Satish', 'Michael J. Anderson', 'Pradeep Dubey']
id: 1511.06909v7
summary: We propose BlackOut, an approximation algorithm to efficiently train massive recurrent neural network language models (RNNLMs) with million word vocabularies. BlackOut is motivated by using a discriminative loss, and we describe a new sampling strategy which significantly reduces computation while improving stability, ...
title: BlackOut: Speeding up Recurrent Neural Network Language Models With Very Large Vocabularies
year: 2015
arxiv_url: http://arxiv.org/pdf/1511.06909v7
info: Title BlackOut Speeding Recurrent Neural Network Language Models Large Vocabularies Summary propose BlackOut approximation algorithm efficiently train massive recurrent neural network language model RNNLMs million word vocabulary BlackOut motivated using discriminative loss describe new sampling strategy significantly ...
embeddings: [-0.0006652222364209592, 0.041236214339733124, 0.009081205353140831, 0.0381535068154335, -0.011480504646897316, -0.007721998263150454, 0.034583013504743576, 0.014878233894705772, -0.025639204308390617, -0.028501413762569427, -0.01765325292944908, -0.05223629251122475, 0.01297624222934246, 0.04424270987510681, 0.0052630...
row 110
author: ['Marta R. Costa-Jussà', 'José A. R. Fonollosa']
id: 1603.00810v3
summary: Neural Machine Translation (MT) has reached state-of-the-art results. However, one of the main challenges that neural MT still faces is dealing with very large vocabularies and morphologically rich languages. In this paper, we propose a neural MT system using character-based embeddings in combination with convolutional...
title: Character-based Neural Machine Translation
year: 2016
arxiv_url: http://arxiv.org/pdf/1603.00810v3
info: Title Characterbased Neural Machine Translation Summary Neural Machine Translation MT reached stateoftheart result However one main challenge neural MT still face dealing large vocabulary morphologically rich language paper propose neural MT system using characterbased embeddings combination convolutional highway layer...
embeddings: [0.014114925637841225, 0.035114072263240814, 0.002446637023240328, 0.09347090870141983, -0.000986468861810863, -0.0038921234663575888, -0.020763492211699486, 0.016344992443919182, -0.04062526300549507, -0.023435117676854134, -0.0058680386282503605, -0.0912121906876564, 0.0344366617500782, 0.04694531857967377, 0.0838363...
row 111
author: ['Yangfeng Ji', 'Gholamreza Haffari', 'Jacob Eisenstein']
id: 1603.01913v2
summary: This paper presents a novel latent variable recurrent neural network architecture for jointly modeling sequences of words and (possibly latent) discourse relations between adjacent sentences. A recurrent neural network generates individual words, thus reaping the benefits of discriminatively-trained vector representati...
title: A Latent Variable Recurrent Neural Network for Discourse Relation Language Models
year: 2016
arxiv_url: http://arxiv.org/pdf/1603.01913v2
info: Title Latent Variable Recurrent Neural Network Discourse Relation Language Models Summary paper present novel latent variable recurrent neural network architecture jointly modeling sequence word possibly latent discourse relation adjacent sentence recurrent neural network generates individual word thus reaping benefit ...
embeddings: [0.06390361487865448, 0.02962624467909336, 0.0004280621069483459, 0.055127594619989395, -0.02473086304962635, -0.014818601310253143, 0.012562375515699387, -0.007152600213885307, -0.023706721141934395, -0.07547377049922943, 0.010420695878565311, -0.022652819752693176, 0.023212887346744537, 0.028981022536754608, -0.00939...
row 112
author: ['Zhiyuan Tang', 'Lantian Li', 'Dong Wang']
id: 1603.09643v4
summary: Although highly correlated, speech and speaker recognition have been regarded as two independent tasks and studied by two communities. This is certainly not the way that people behave: we decipher both speech content and speaker traits at the same time. This paper presents a unified model to perform speech and speaker ...
title: Multi-task Recurrent Model for Speech and Speaker Recognition
year: 2016
arxiv_url: http://arxiv.org/pdf/1603.09643v4
info: Title Multitask Recurrent Model Speech Speaker Recognition Summary Although highly correlated speech speaker recognition regarded two independent task studied two community certainly way people behave decipher speech content speaker trait time paper present unified model perform speech speaker recognition simultaneousl...
embeddings: [-0.018811749294400215, 0.028306886553764343, -7.840494072297588e-05, 0.04581144079566002, -0.023841267451643944, 0.01702040806412697, 0.04916604980826378, -0.04716957360506058, -0.017879294231534004, -0.0043065547943115234, -0.07053326070308685, -0.05154119059443474, 0.05155247077345848, -0.006731683854013681, 0.00124...
row 113
author: ['Sarath Chandar', 'Sungjin Ahn', 'Hugo Larochelle', 'Pascal Vincent', 'Gerald Tesauro', 'Yoshua Bengio']
id: 1605.07427v1
summary: Memory networks are neural networks with an explicit memory component that can be both read and written to by the network. The memory is often addressed in a soft way using a softmax function, making end-to-end training with backpropagation possible. However, this is not computationally scalable for applications which ...
title: Hierarchical Memory Networks
year: 2016
arxiv_url: http://arxiv.org/pdf/1605.07427v1
info: Title Hierarchical Memory Networks Summary Memory network neural network explicit memory component read written network memory often addressed soft way using softmax function making endtoend training backpropagation possible However computationally scalable application require network read extremely large memory hand w...
embeddings: [0.008692226372659206, 0.014331907965242863, -0.02022092044353485, 0.03938573971390724, 0.021290449425578117, 0.004444582387804985, 0.00995647069066763, -0.019644591957330704, -0.015351303853094578, -0.05517832189798355, -0.03843724727630615, -0.0025149635039269924, -0.006596947554498911, 0.020143985748291016, 0.030484...
row 114
author: ['Sam Wiseman', 'Alexander M. Rush']
id: 1606.02960v2
summary: Sequence-to-Sequence (seq2seq) modeling has rapidly become an important general-purpose NLP tool that has proven effective for many text-generation and sequence-labeling tasks. Seq2seq builds on deep neural language modeling and inherits its remarkable accuracy in estimating local, next-word distributions. In this work...
title: Sequence-to-Sequence Learning as Beam-Search Optimization
year: 2016
arxiv_url: http://arxiv.org/pdf/1606.02960v2
info: Title SequencetoSequence Learning BeamSearch Optimization Summary SequencetoSequence seq2seq modeling rapidly become important generalpurpose NLP tool proven effective many textgeneration sequencelabeling task Seq2seq build deep neural language modeling inherits remarkable accuracy estimating local nextword distributio...
embeddings: [0.06399953365325928, 0.0633859634399414, 0.011333177797496319, 0.021156882867217064, -0.0031690301839262247, -0.010915356688201427, 0.008128847926855087, -0.021023651584982872, 0.016098977997899055, -0.030058465898036957, -0.016497142612934113, -0.021620193496346474, 0.010177547112107277, 0.08310149610042572, 0.015923...
row 115
author: ['Ankit Vani', 'Yacine Jernite', 'David Sontag']
id: 1705.08557v1
summary: In this work, we present the Grounded Recurrent Neural Network (GRNN), a recurrent neural network architecture for multi-label prediction which explicitly ties labels to specific dimensions of the recurrent hidden state (we call this process "grounding"). The approach is particularly well-suited for extracting large nu...
title: Grounded Recurrent Neural Networks
year: 2017
arxiv_url: http://arxiv.org/pdf/1705.08557v1
info: Title Grounded Recurrent Neural Networks Summary work present Grounded Recurrent Neural Network GRNN recurrent neural network architecture multilabel prediction explicitly tie label specific dimension recurrent hidden state call process grounding approach particularly wellsuited extracting large number concept text app...
embeddings: [0.03502779081463814, 0.003919562324881554, -0.008802095428109169, 0.014723682776093483, 0.0010392939439043403, 0.0014341771602630615, 0.022555751726031303, 0.002775089582428336, -0.010944842360913754, -0.02333691529929638, -0.006412681192159653, -0.02209273912012577, 0.03233269602060318, 0.08150706440210342, 0.0060337...
row 116
author: ['Tsung-Hsien Wen', 'Yishu Miao', 'Phil Blunsom', 'Steve Young']
id: 1705.10229v1
summary: Developing a dialogue agent that is capable of making autonomous decisions and communicating by natural language is one of the long-term goals of machine learning research. Traditional approaches either rely on hand-crafting a small state-action set for applying reinforcement learning that is not scalable or constructi...
title: Latent Intention Dialogue Models
year: 2017
arxiv_url: http://arxiv.org/pdf/1705.10229v1
info: Title Latent Intention Dialogue Models Summary Developing dialogue agent capable making autonomous decision communicating natural language one longterm goal machine learning research Traditional approach either rely handcrafting small stateaction set applying reinforcement learning scalable constructing deterministic m...
embeddings: [0.10098562389612198, 0.06708446890115738, -0.01680503971874714, 0.022988336160779, -0.03346974030137062, -0.0035950455348938704, -0.0012845752062276006, -0.02798912487924099, 0.008067675866186619, -0.06890923529863358, 0.009950743988156319, 0.010213034227490425, -0.046132877469062805, 0.0958106741309166, 0.00786982383...
row 117
author: ['Julius Kunze', 'Louis Kirsch', 'Ilia Kurenkov', 'Andreas Krug', 'Jens Johannsmeier', 'Sebastian Stober']
id: 1706.00290v1
summary: End-to-end training of automated speech recognition (ASR) systems requires massive data and compute resources. We explore transfer learning based on model adaptation as an approach for training ASR models under constrained GPU memory, throughput and training data. We conduct several systematic experiments adapting a Wa...
title: Transfer Learning for Speech Recognition on a Budget
year: 2017
arxiv_url: http://arxiv.org/pdf/1706.00290v1
info: Title Transfer Learning Speech Recognition Budget Summary Endtoend training automated speech recognition ASR system requires massive data compute resource explore transfer learning based model adaptation approach training ASR model constrained GPU memory throughput training data conduct several systematic experiment ad...
embeddings: [0.0016755897086113691, 0.05258483067154884, 0.018963608890771866, 0.055986594408750534, 0.007710160221904516, 0.02001683972775936, 0.020336255431175232, -0.012822264805436134, -0.012132210657000542, -0.04293149709701538, -0.052866131067276, 0.009552624076604843, 0.01511545479297638, 0.04265570268034935, 0.003736619371...
row 118
author: ['Matt Shannon']
id: 1706.02776v1
summary: State-level minimum Bayes risk (sMBR) training has become the de facto standard for sequence-level training of speech recognition acoustic models. It has an elegant formulation using the expectation semiring, and gives large improvements in word error rate (WER) over models trained solely using cross-entropy (CE) or co...
title: Optimizing expected word error rate via sampling for speech recognition
year: 2017
arxiv_url: http://arxiv.org/pdf/1706.02776v1
info: Title Optimizing expected word error rate via sampling speech recognition Summary Statelevel minimum Bayes risk sMBR training become de facto standard sequencelevel training speech recognition acoustic model elegant formulation using expectation semiring give large improvement word error rate WER model trained solely u...
embeddings: [-0.0015462484443560243, 0.019713643938302994, 0.03889480233192444, 0.007201068103313446, -0.020102335140109062, -0.019838565960526466, 0.021725941449403763, -0.027281509712338448, -0.018865812569856644, -0.038721613585948944, -0.04394925385713577, -0.04947885125875473, 0.07006166875362396, 0.01906268671154976, -0.0026...
row 119
author: ['Artem M. Grachev', 'Dmitry I. Ignatov', 'Andrey V. Savchenko']
id: 1708.05963v1
summary: In this paper, we consider several compression techniques for the language modeling problem based on recurrent neural networks (RNNs). It is known that conventional RNNs, e.g, LSTM-based networks in language modeling, are characterized with either high space complexity or substantial inference time. This problem is esp...
title: Neural Networks Compression for Language Modeling
year: 2017
arxiv_url: http://arxiv.org/pdf/1708.05963v1
info: Title Neural Networks Compression Language Modeling Summary paper consider several compression technique language modeling problem based recurrent neural network RNNs known conventional RNNs eg LSTMbased network language modeling characterized either high space complexity substantial inference time problem especially c...
embeddings: [0.022581350058317184, 0.021247537806630135, -0.0034962582867592573, 0.019048819318413734, -0.03415098786354065, -0.01466596033424139, -0.003558417782187462, 0.054771434515714645, -0.08111077547073364, -0.0035942259710282087, 0.018638819456100464, -0.05355372279882431, 0.038730550557374954, 0.00589417852461338, 0.02029...
row 120
author: ['Mostafa Dehghani', 'Aliaksei Severyn', 'Sascha Rothe', 'Jaap Kamps']
id: 1711.00313v2
summary: Training deep neural networks requires massive amounts of training data, but for many tasks only limited labeled data is available. This makes weak supervision attractive, using weak or noisy signals like the output of heuristic methods or user click-through data for training. In a semi-supervised setting, we can use a...
title: Avoiding Your Teacher's Mistakes: Training Neural Networks with Controlled Weak Supervision
year: 2017
arxiv_url: http://arxiv.org/pdf/1711.00313v2
info: Title Avoiding Teachers Mistakes Training Neural Networks Controlled Weak Supervision Summary Training deep neural network requires massive amount training data many task limited labeled data available make weak supervision attractive using weak noisy signal like output heuristic method user clickthrough data training ...
embeddings: [0.0369996540248394, 0.051004763692617416, -0.01590188406407833, 0.018436508253216743, 0.024910999462008476, -0.00912522617727518, 0.03938376158475876, -0.03585285693407059, -0.0014833923196420074, -0.022313877940177917, -0.08250012993812561, 0.011987103149294853, -0.038493309170007706, 0.03639517351984978, 0.018426405...
row 121
author: ['Christopher Tegho', 'Paweł Budzianowski', 'Milica Gašić']
id: 1711.11486v1
summary: In statistical dialogue management, the dialogue manager learns a policy that maps a belief state to an action for the system to perform. Efficient exploration is key to successful policy optimisation. Current deep reinforcement learning methods are very promising but rely on epsilon-greedy exploration, thus subjecting...
title: Uncertainty Estimates for Efficient Neural Network-based Dialogue Policy Optimisation
year: 2017
arxiv_url: http://arxiv.org/pdf/1711.11486v1
info: Title Uncertainty Estimates Efficient Neural Networkbased Dialogue Policy Optimisation Summary statistical dialogue management dialogue manager learns policy map belief state action system perform Efficient exploration key successful policy optimisation Current deep reinforcement learning method promising rely epsilong...
embeddings: [0.05407075583934784, 0.06020825356245041, -0.009871243499219418, 0.03316120430827141, 0.0015843056607991457, 0.0019181536044925451, 0.0073473891243338585, -0.03800572082400322, -0.016431016847491264, -0.02771364524960518, -0.04231969267129898, -0.051028646528720856, -0.0011075044749304652, 0.08347871154546738, -0.0448...
row 122
author: ['Yuanhang Su', 'Yuzhong Huang', 'C. -C. Jay Kuo']
id: 1803.01686v1
summary: In this work, we investigate the memory capability of recurrent neural networks (RNNs), where this capability is defined as a function that maps an element in a sequence to the current output. We first analyze the system function of a recurrent neural network (RNN) cell, and provide analytical results for three RNNs. T...
title: On Extended Long Short-term Memory and Dependent Bidirectional Recurrent Neural Network
year: 2018
arxiv_url: http://arxiv.org/pdf/1803.01686v1
info: Title Extended Long Shortterm Memory Dependent Bidirectional Recurrent Neural Network Summary work investigate memory capability recurrent neural network RNNs capability defined function map element sequence current output first analyze system function recurrent neural network RNN cell provide analytical result three R...
embeddings: [0.016818039119243622, -0.02590359002351761, -0.006911926437169313, 0.06631135195493698, -0.03752010688185692, -0.009794680401682854, 0.015172813087701797, -0.020624857395887375, 0.007288065273314714, -0.04691636934876442, -0.024906670674681664, -0.0552426353096962, 0.013229344971477985, 0.04268839582800865, 0.03988541...
row 123
author: ['Lin Ma', 'Zhengdong Lu', 'Hang Li']
id: 1506.00333v2
summary: In this paper, we propose to employ the convolutional neural network (CNN) for the image question answering (QA). Our proposed CNN provides an end-to-end framework with convolutional architectures for learning not only the image and question representations, but also their inter-modal interactions to produce the answer...
title: Learning to Answer Questions From Image Using Convolutional Neural Network
year: 2015
arxiv_url: http://arxiv.org/pdf/1506.00333v2
info: Title Learning Answer Questions Image Using Convolutional Neural Network Summary paper propose employ convolutional neural network CNN image question answering QA proposed CNN provides endtoend framework convolutional architecture learning image question representation also intermodal interaction produce answer specifi...
embeddings: [0.0839354544878006, 0.060329996049404144, -0.008508297614753246, 0.08565450459718704, -0.015195291489362717, 0.00968173984438181, 0.025598539039492607, 0.0436222068965435, 0.002900107530876994, -0.03461875766515732, 0.015335976146161556, 0.011785490438342094, -0.03949568793177605, 0.04100176692008972, 0.01872228458523...
row 124
author: ['Zichao Yang', 'Xiaodong He', 'Jianfeng Gao', 'Li Deng', 'Alex Smola']
id: 1511.02274v2
summary: This paper presents stacked attention networks (SANs) that learn to answer natural language questions from images. SANs use semantic representation of a question as query to search for the regions in an image that are related to the answer. We argue that image question answering (QA) often requires multiple steps of re...
title: Stacked Attention Networks for Image Question Answering
year: 2015
arxiv_url: http://arxiv.org/pdf/1511.02274v2
info: Title Stacked Attention Networks Image Question Answering Summary paper present stacked attention network SANs learn answer natural language question image SANs use semantic representation question query search region image related answer argue image question answering QA often requires multiple step reasoning Thus dev...
embeddings: [-0.0011845617555081844, 0.009816315956413746, -0.022241875529289246, 0.03702593967318535, -0.0039842999540269375, -0.007120921742171049, 0.021945452317595482, -0.006195677910000086, 0.010406684130430222, -0.05124964937567711, -0.006271982099860907, 0.05718287080526352, -0.013278278522193432, 0.03345341607928276, 0.033...
row 125
author: ['Jacob Andreas', 'Marcus Rohrbach', 'Trevor Darrell', 'Dan Klein']
id: 1511.02799v4
summary: Visual question answering is fundamentally compositional in nature---a question like "where is the dog?" shares substructure with questions like "what color is the dog?" and "where is the cat?" This paper seeks to simultaneously exploit the representational capacity of deep networks and the compositional linguistic str...
title: Neural Module Networks
year: 2015
arxiv_url: http://arxiv.org/pdf/1511.02799v4
info: Title Neural Module Networks Summary Visual question answering fundamentally compositional naturea question like dog share substructure question like color dog cat paper seek simultaneously exploit representational capacity deep network compositional linguistic structure question describe procedure constructing learnin...
embeddings: [0.03032113052904606, 0.04285570979118347, -0.05058463290333748, 0.021224964410066605, -0.01112926285713911, 0.04416812211275101, 0.020102521404623985, -0.005148149561136961, -0.01478614378720522, -0.02363499067723751, -0.027765372768044472, 0.009290046989917755, -0.015773294493556023, 0.03814404830336571, 0.0231920648...
row 126
author: ['Federico Raue', 'Andreas Dengel', 'Thomas M. Breuel', 'Marcus Liwicki']
id: 1511.04401v5
summary: In this paper, we extend a symbolic association framework for being able to handle missing elements in multimodal sequences. The general scope of the work is the symbolic associations of object-word mappings as it happens in language development in infants. In other words, two different representations of the same abst...
title: Symbol Grounding Association in Multimodal Sequences with Missing Elements
year: 2015
arxiv_url: http://arxiv.org/pdf/1511.04401v5
info: Title Symbol Grounding Association Multimodal Sequences Missing Elements Summary paper extend symbolic association framework able handle missing element multimodal sequence general scope work symbolic association objectword mapping happens language development infant word two different representation abstract concept a...
embeddings: [-0.00641487305983901, 0.02614639326930046, -0.01266532950103283, 0.02603224106132984, -0.014365623705089092, 0.02258208394050598, 0.0633237436413765, -0.0059031276032328606, -0.040016911923885345, -0.03117685206234455, -0.019343946129083633, -0.042571887373924255, 0.048582714051008224, 0.028821134939789772, 0.01439730...
row 127
author: ['Dan Hendrycks', 'Mantas Mazeika', 'Duncan Wilson', 'Kevin Gimpel']
id: 1802.05300v1
summary: The growing importance of massive datasets with the advent of deep learning makes robustness to label noise a critical property for classifiers to have. Sources of label noise include automatic labeling for large datasets, non-expert labeling, and label corruption by data poisoning adversaries. In the latter case, corr...
title: Using Trusted Data to Train Deep Networks on Labels Corrupted by Severe Noise
year: 2018
arxiv_url: http://arxiv.org/pdf/1802.05300v1
info: Title Using Trusted Data Train Deep Networks Labels Corrupted Severe Noise Summary growing importance massive datasets advent deep learning make robustness label noise critical property classifier Sources label noise include automatic labeling large datasets nonexpert labeling label corruption data poisoning adversary ...
embeddings: [0.0499345101416111, 0.06746858358383179, 0.0007473518489859998, 0.04109011963009834, -0.011891260743141174, -0.01363668404519558, 0.04493304342031479, 0.00044890728895552456, 0.008615335449576378, -0.03051946870982647, -0.03643353655934334, 0.005021167453378439, 0.010708912275731564, 0.0793510153889656, 0.009104130789...
row 128
author: ['Kyunghyun Cho', 'Aaron Courville', 'Yoshua Bengio']
id: 1507.01053v1
summary: Whereas deep neural networks were first mostly used for classification tasks, they are rapidly expanding in the realm of structured output problems, where the observed target is composed of multiple random variables that have a rich joint distribution, given the input. We focus in this paper on the case where the input...
title: Describing Multimedia Content using Attention-based Encoder--Decoder Networks
year: 2015
arxiv_url: http://arxiv.org/pdf/1507.01053v1
info: Title Describing Multimedia Content using Attentionbased EncoderDecoder Networks Summary Whereas deep neural network first mostly used classification task rapidly expanding realm structured output problem observed target composed multiple random variable rich joint distribution given input focus paper case input also r...
embeddings: [0.020975526422262192, -0.010257728397846222, -0.0024284517858177423, 0.04366829991340637, 0.0024071575608104467, -0.023577744141221046, 0.032206058502197266, -0.036669131368398666, -0.06916859745979309, -0.06539066880941391, -0.032877132296562195, -0.012988226488232613, 0.014521989040076733, 0.09337498992681503, 0.025...
row 129
author: ['Desmond Elliott', 'Stella Frank', 'Eva Hasler']
id: 1510.04709v2
summary: In this paper we present an approach to multi-language image description bringing together insights from neural machine translation and neural image description. To create a description of an image for a given target language, our sequence generation models condition on feature vectors from the image, the description f...
title: Multilingual Image Description with Neural Sequence Models
year: 2015
arxiv_url: http://arxiv.org/pdf/1510.04709v2
info: Title Multilingual Image Description Neural Sequence Models Summary paper present approach multilanguage image description bringing together insight neural machine translation neural image description create description image given target language sequence generation model condition feature vector image description sou...
embeddings: [0.03873835876584053, 0.03974294289946556, 0.011122329160571098, 0.07812575995922089, -0.030448313802480698, 0.031966615468263626, 0.01927098073065281, -0.019631776958703995, -0.045805588364601135, -0.03218648210167885, -0.026834923774003983, -0.060991425067186356, 0.032955825328826904, 0.025300851091742516, 0.03094795...
row 130
author: ['Oswaldo Ludwig', 'Xiao Liu', 'Parisa Kordjamshidi', 'Marie-Francine Moens']
id: 1603.08474v1
summary: This paper introduces the visually informed embedding of word (VIEW), a continuous vector representation for a word extracted from a deep neural model trained using the Microsoft COCO data set to forecast the spatial arrangements between visual objects, given a textual description. The model is composed of a deep multi...
title: Deep Embedding for Spatial Role Labeling
year: 2016
arxiv_url: http://arxiv.org/pdf/1603.08474v1
info: Title Deep Embedding Spatial Role Labeling Summary paper introduces visually informed embedding word VIEW continuous vector representation word extracted deep neural model trained using Microsoft COCO data set forecast spatial arrangement visual object given textual description model composed deep multilayer perceptron...
embeddings: [0.049278587102890015, 0.006925229914486408, -0.00891056563705206, 0.05675702169537544, -0.02547542192041874, 0.002801428083330393, 0.013896778225898743, -0.05079101771116257, -0.00027351450989954174, -0.03531043604016304, -0.017839377745985985, 0.005415066611021757, 0.013618972152471542, 0.030895709991455078, 0.000100...
row 131
author: ['Yuntian Deng', 'Anssi Kanervisto', 'Jeffrey Ling', 'Alexander M. Rush']
id: 1609.04938v2
summary: We present a neural encoder-decoder model to convert images into presentational markup based on a scalable coarse-to-fine attention mechanism. Our method is evaluated in the context of image-to-LaTeX generation, and we introduce a new dataset of real-world rendered mathematical expressions paired with LaTeX markup. We ...
title: Image-to-Markup Generation with Coarse-to-Fine Attention
year: 2016
arxiv_url: http://arxiv.org/pdf/1609.04938v2
info: Title ImagetoMarkup Generation CoarsetoFine Attention Summary present neural encoderdecoder model convert image presentational markup based scalable coarsetofine attention mechanism method evaluated context imagetoLaTeX generation introduce new dataset realworld rendered mathematical expression paired LaTeX markup show...
embeddings: [-0.037048403173685074, 0.06474770605564117, 0.023353565484285355, 0.04908118024468422, 0.0032489988952875137, -0.007392432540655136, 0.045315228402614594, 0.01213331799954176, -0.050541844218969345, -0.025739675387740135, -0.02434820681810379, 0.039918601512908936, 0.017113251611590385, 0.07616622745990753, 0.04478913...
row 132
author: ['Sumeet S. Singh']
id: 1802.05415v1
summary: We present a deep recurrent neural network model with soft visual attention that learns to generate LaTeX markup of real-world math formulas given their images. Applying neural sequence generation techniques that have been very successful in the fields of machine translation and image/handwriting/speech captioning, rec...
title: Teaching Machines to Code: Neural Markup Generation with Visual Attention
year: 2018
arxiv_url: http://arxiv.org/pdf/1802.05415v1
info: Title Teaching Machines Code Neural Markup Generation Visual Attention Summary present deep recurrent neural network model soft visual attention learns generate LaTeX markup realworld math formula given image Applying neural sequence generation technique successful field machine translation imagehandwritingspeech capti...
embeddings: [0.01412122044712305, 0.043919309973716736, -0.006231324747204781, 0.030719632282853127, 0.0054451823234558105, 0.0015950931701809168, 0.04878579080104828, -0.0071022603660821915, -0.03350820392370224, -0.03550293669104576, -0.004475708119571209, -0.005813692230731249, 0.01067198533564806, 0.10894140601158142, 0.061574...
row 133
author: ['Mohammad Javad Shafiee', 'Elnaz Barshan', 'Alexander Wong']
id: 1704.02081v1
summary: A promising paradigm for achieving highly efficient deep neural networks is the idea of evolutionary deep intelligence, which mimics biological evolution processes to progressively synthesize more efficient networks. A crucial design factor in evolutionary deep intelligence is the genetic encoding scheme used to simula...
title: Evolution in Groups: A deeper look at synaptic cluster driven evolution of deep neural networks
year: 2017
arxiv_url: http://arxiv.org/pdf/1704.02081v1
info: Title Evolution Groups deeper look synaptic cluster driven evolution deep neural network Summary promising paradigm achieving highly efficient deep neural network idea evolutionary deep intelligence mimic biological evolution process progressively synthesize efficient network crucial design factor evolutionary deep int...
embeddings: [-0.02147897146642208, 0.025692127645015717, -0.054428599774837494, 0.05017568916082382, 0.0025389394722878933, 0.005992465186864138, 0.030231373384594917, 0.007615628186613321, -0.023921435698866844, 0.016216987743973732, -0.02581300400197506, 0.017828235402703285, -0.03593451902270317, 0.02337166853249073, 0.03265490...
row 134
author: ['Mete Ozay', 'Ilke Öztekin', 'Uygar Öztekin', 'Fatos T. Yarman Vural']
id: 1205.2382v3
summary: A relatively recent advance in cognitive neuroscience has been multi-voxel pattern analysis (MVPA), which enables researchers to decode brain states and/or the type of information represented in the brain during a cognitive operation. MVPA methods utilize machine learning algorithms to distinguish among types of inform...
title: Mesh Learning for Classifying Cognitive Processes
year: 2012
arxiv_url: http://arxiv.org/pdf/1205.2382v3
info: Title Mesh Learning Classifying Cognitive Processes Summary relatively recent advance cognitive neuroscience multivoxel pattern analysis MVPA enables researcher decode brain state andor type information represented brain cognitive operation MVPA method utilize machine learning algorithm distinguish among type informati...
embeddings: [-0.031485315412282944, 0.019229529425501823, -0.054886896163225174, 0.0016190819442272186, 0.0027205513324588537, 0.006775710731744766, 0.036257434636354446, 0.023961171507835388, 0.04668120667338371, 0.028998807072639465, -0.01394672505557537, 0.017376072704792023, 0.080603688955307, 0.06615357846021652, 0.0436157360...
row 135
author: ['A. H. Karimi', 'M. J. Shafiee', 'A. Ghodsi', 'A. Wong']
id: 1707.00081v1
summary: In this work, we perform an exploratory study on synthesizing deep neural networks using biological synaptic strength distributions, and the potential influence of different distributions on modelling performance particularly for the scenario associated with small data sets. Surprisingly, a CNN with convolutional layer...
title: Synthesizing Deep Neural Network Architectures using Biological Synaptic Strength Distributions
year: 2017
arxiv_url: http://arxiv.org/pdf/1707.00081v1
info: Title Synthesizing Deep Neural Network Architectures using Biological Synaptic Strength Distributions Summary work perform exploratory study synthesizing deep neural network using biological synaptic strength distribution potential influence different distribution modelling performance particularly scenario associated ...
embeddings: [0.010573148727416992, -0.008800351992249489, -0.02899429388344288, 0.0047276862896978855, -0.000527168158441782, -0.030559904873371124, 0.014191307127475739, -0.027332350611686707, -0.02529105916619301, 0.03455287218093872, -0.05245533585548401, -0.017583083361387253, 0.025366928428411484, 0.03453656658530235, 0.03021...
row 136
author: ['Yukun Bao', 'Zhongyi Hu', 'Tao Xiong']
id: 1401.1926v1
summary: Addressing the issue of SVMs parameters optimization, this study proposes an efficient memetic algorithm based on Particle Swarm Optimization algorithm (PSO) and Pattern Search (PS). In the proposed memetic algorithm, PSO is responsible for exploration of the search space and the detection of the potential regions with...
title: A PSO and Pattern Search based Memetic Algorithm for SVMs Parameters Optimization
year: 2014
arxiv_url: http://arxiv.org/pdf/1401.1926v1
info: Title PSO Pattern Search based Memetic Algorithm SVMs Parameters Optimization Summary Addressing issue SVMs parameter optimization study proposes efficient memetic algorithm based Particle Swarm Optimization algorithm PSO Pattern Search PS proposed memetic algorithm PSO responsible exploration search space detection po...
embeddings: [-0.003677141619846225, -0.035738736391067505, -0.04013725370168686, -0.03242357820272446, 0.030800711363554, -0.039950333535671234, -0.01886816881597042, 0.005230278708040714, -0.042481835931539536, 0.024726781994104385, 0.012441267259418964, 0.03372093662619591, 0.02127053588628769, 0.031095923855900764, 0.0043699624...
row 137
author: ['Laurent Dinh', 'Jascha Sohl-Dickstein', 'Samy Bengio']
id: 1605.08803v3
summary: Unsupervised learning of probabilistic models is a central yet challenging problem in machine learning. Specifically, designing models with tractable learning, sampling, inference and evaluation is crucial in solving this task. We extend the space of such models using real-valued non-volume preserving (real NVP) transf...
title: Density estimation using Real NVP
year: 2016
arxiv_url: http://arxiv.org/pdf/1605.08803v3
info: Title Density estimation using Real NVP Summary Unsupervised learning probabilistic model central yet challenging problem machine learning Specifically designing model tractable learning sampling inference evaluation crucial solving task extend space model using realvalued nonvolume preserving real NVP transformation s...
embeddings: [-0.010649830102920532, 0.02311602793633938, -0.020355718210339546, -0.005554623901844025, 0.0006417612312361598, -0.028737174347043037, -0.010901322588324547, -0.01590411737561226, -0.1122327521443367, 0.03075490891933441, 0.037034448236227036, 0.010051854886114597, -0.011723372153937817, 0.07618006318807602, 0.031419...
row 138
author: ['Tim Salimans', 'Jonathan Ho', 'Xi Chen', 'Szymon Sidor', 'Ilya Sutskever']
id: 1703.03864v2
summary: We explore the use of Evolution Strategies (ES), a class of black box optimization algorithms, as an alternative to popular MDP-based RL techniques such as Q-learning and Policy Gradients. Experiments on MuJoCo and Atari show that ES is a viable solution strategy that scales extremely well with the number of CPUs avail...
title: Evolution Strategies as a Scalable Alternative to Reinforcement Learning
year: 2017
arxiv_url: http://arxiv.org/pdf/1703.03864v2
info: Title Evolution Strategies Scalable Alternative Reinforcement Learning Summary explore use Evolution Strategies ES class black box optimization algorithm alternative popular MDPbased RL technique Qlearning Policy Gradients Experiments MuJoCo Atari show ES viable solution strategy scale extremely well number CPUs availa...
embeddings: [0.010028883814811707, -0.013128409162163734, -0.026057640090584755, -0.030221175402402878, 0.004878659266978502, -0.014877968467772007, -0.030878612771630287, -0.0016068447148427367, -0.03687499091029167, 0.013204746879637241, -0.025796733796596527, 0.041408758610486984, -0.018586693331599236, 0.0671517550945282, 0.02...
row 139
author: ['Peter Karkus', 'David Hsu', 'Wee Sun Lee']
id: 1703.06692v3
summary: This paper introduces the QMDP-net, a neural network architecture for planning under partial observability. The QMDP-net combines the strengths of model-free learning and model-based planning. It is a recurrent policy network, but it represents a policy for a parameterized set of tasks by connecting a model with a plan...
title: QMDP-Net: Deep Learning for Planning under Partial Observability
year: 2017
arxiv_url: http://arxiv.org/pdf/1703.06692v3
info: Title QMDPNet Deep Learning Planning Partial Observability Summary paper introduces QMDPnet neural network architecture planning partial observability QMDPnet combine strength modelfree learning modelbased planning recurrent policy network represents policy parameterized set task connecting model planning algorithm sol...
embeddings: [-0.031025972217321396, 0.02668103389441967, 0.021649088710546494, -0.015323212370276451, -0.0014813239686191082, -0.016121603548526764, 0.00402535405009985, -0.032107558101415634, -0.03283702954649925, 0.008134311065077782, -0.0023769207764416933, 0.04723343253135681, -0.0532427579164505, 0.05191734433174133, -0.00111...
row 140
author: ['Gregory Farquhar', 'Tim Rocktäschel', 'Maximilian Igl', 'Shimon Whiteson']
id: 1710.11417v2
summary: Combining deep model-free reinforcement learning with on-line planning is a promising approach to building on the successes of deep RL. On-line planning with look-ahead trees has proven successful in environments where transition models are known a priori. However, in complex environments where transition models need t...
title: TreeQN and ATreeC: Differentiable Tree-Structured Models for Deep Reinforcement Learning
year: 2017
arxiv_url: http://arxiv.org/pdf/1710.11417v2
info: Title TreeQN ATreeC Differentiable TreeStructured Models Deep Reinforcement Learning Summary Combining deep modelfree reinforcement learning online planning promising approach building success deep RL Online planning lookahead tree proven successful environment transition model known priori However complex environment ...
embeddings: [-0.01316402293741703, 0.05500173568725586, -0.01709579862654209, -0.011896061711013317, 0.011172867380082607, -0.028798716142773628, -0.027678873389959335, -0.033178865909576416, -0.0897672027349472, 0.03314647823572159, -0.01905691996216774, 0.04594454914331436, -0.013976276852190495, 0.09259748458862305, -0.02823040...
row 141
author: ['Nan Rosemary Ke', 'Anirudh Goyal', 'Olexa Bilaniuk', 'Jonathan Binas', 'Laurent Charlin', 'Chris Pal', 'Yoshua Bengio']
id: 1711.02326v1
summary: A major drawback of backpropagation through time (BPTT) is the difficulty of learning long-term dependencies, coming from having to propagate credit information backwards through every single step of the forward computation. This makes BPTT both computationally impractical and biologically implausible. For this reason,...
title: Sparse Attentive Backtracking: Long-Range Credit Assignment in Recurrent Networks
year: 2017
arxiv_url: http://arxiv.org/pdf/1711.02326v1
info: Title Sparse Attentive Backtracking LongRange Credit Assignment Recurrent Networks Summary major drawback backpropagation time BPTT difficulty learning longterm dependency coming propagate credit information backwards every single step forward computation make BPTT computationally impractical biologically implausible r...
embeddings: [0.029247580096125603, 0.01919320784509182, 0.0024480000138282776, 8.124144369503483e-05, 0.0074616107158362865, -0.012341553345322609, -0.01701655425131321, 0.028148027136921883, 0.015285437926650047, -0.01433069258928299, -0.00913965329527855, -0.04621945321559906, 0.04792526364326477, -0.0030442881397902966, 0.04588...
row 142
author: ['Anakha V Babu', 'Bipin Rajendran']
id: 1711.03640v1
summary: We study the performance of stochastically trained deep neural networks (DNNs) whose synaptic weights are implemented using emerging memristive devices that exhibit limited dynamic range, resolution, and variability in their programming characteristics. We show that a key device parameter to optimize the learning effic...
title: Stochastic Deep Learning in Memristive Networks
year: 2017
arxiv_url: http://arxiv.org/pdf/1711.03640v1
info: Title Stochastic Deep Learning Memristive Networks Summary study performance stochastically trained deep neural network DNNs whose synaptic weight implemented using emerging memristive device exhibit limited dynamic range resolution variability programming characteristic show key device parameter optimize learning effi...
embeddings: [-0.05314948037266731, -0.009155993349850178, -0.050157349556684494, 0.04872630164027214, 0.005095598287880421, -0.02518356591463089, 0.0003099807072430849, -0.05767659470438957, 0.002156688831746578, 0.01658071018755436, -0.03266753628849983, -0.05126240849494934, -0.011261051520705223, 0.06419191509485245, 0.02200341...
row 143
author: ['Yukun Bao', 'Tao Xiong', 'Zhongyi Hu']
id: 1401.0104v1
summary: Multi-step-ahead time series prediction is one of the most challenging research topics in the field of time series modeling and prediction, and is continually under research. Recently, the multiple-input several multiple-outputs (MISMO) modeling strategy has been proposed as a promising alternative for multi-step-ahead...
title: PSO-MISMO Modeling Strategy for Multi-Step-Ahead Time Series Prediction
year: 2013
arxiv_url: http://arxiv.org/pdf/1401.0104v1
info: Title PSOMISMO Modeling Strategy MultiStepAhead Time Series Prediction Summary Multistepahead time series prediction one challenging research topic field time series modeling prediction continually research Recently multipleinput several multipleoutputs MISMO modeling strategy proposed promising alternative multistepah...
embeddings: [-0.03915258124470711, -0.0018047576304525137, -0.037946537137031555, -0.029163600876927376, 0.023922298103570938, -0.010559935122728348, 0.015669895336031914, -0.0034794695675373077, -0.0033809870947152376, 0.021691443398594856, 0.01624354161322117, -0.0034497908782213926, 0.016113724559545517, 0.02761148288846016, 0....
row 144
author: ['Behnam Neyshabur', 'Ryota Tomioka', 'Nathan Srebro']
id: 1503.00036v2
summary: We investigate the capacity, convexity and characterization of a general family of norm-constrained feed-forward networks.
title: Norm-Based Capacity Control in Neural Networks
year: 2015
arxiv_url: http://arxiv.org/pdf/1503.00036v2
info: Title NormBased Capacity Control Neural Networks Summary investigate capacity convexity characterization general family normconstrained feedforward network Authors 0 Ahmed Osman Wojciech Samek 1 Ji Young Lee Franck Dernoncourt 2 Iulian Vlad Serban Tim Klinger Gerald Tesau 3 Sebastian Ruder Joachim Bingel Isabelle Aug 4...
embeddings: [-0.02022957056760788, 0.012715630233287811, -0.04774617776274681, 0.01465693674981594, -0.021458376199007034, -0.05353955924510956, 0.008180944249033928, -0.01536419428884983, -0.03482271730899811, 0.057900186628103256, -0.042462870478630066, -0.06971564888954163, -0.030261939391493797, 0.09634757786989212, 0.03070682...
row 145
author: ['Konrad Zolna']
id: 1612.01589v1
summary: The method presented extends a given regression neural network to make its performance improve. The modification affects the learning procedure only, hence the extension may be easily omitted during evaluation without any change in prediction. It means that the modified model may be evaluated as quickly as the original...
title: Improving the Performance of Neural Networks in Regression Tasks Using Drawering
year: 2016
arxiv_url: http://arxiv.org/pdf/1612.01589v1
info: Title Improving Performance Neural Networks Regression Tasks Using Drawering Summary method presented extends given regression neural network make performance improve modification affect learning procedure hence extension may easily omitted evaluation without change prediction mean modified model may evaluated quickly ...
embeddings: [0.00877348706126213, 0.0447792187333107, -0.028045685961842537, 0.007741921115666628, 0.001329071819782257, -0.01616915501654148, 0.04684789851307869, 0.024029122665524483, -0.03844287246465683, 0.011336134746670723, -0.013169636949896812, 0.04185344651341438, 0.03981006518006325, 0.0020347849931567907, 0.047180231660...
row 146
author: ['Yujia Li', 'Kevin Swersky', 'Richard Zemel']
id: 1412.5244v1
summary: A key element in transfer learning is representation learning; if representations can be developed that expose the relevant factors underlying the data, then new tasks and domains can be learned readily based on mappings of these salient factors. We propose that an important aim for these representations are to be unbi...
title: Learning unbiased features
year: 2014
arxiv_url: http://arxiv.org/pdf/1412.5244v1
info: Title Learning unbiased feature Summary key element transfer learning representation learning representation developed expose relevant factor underlying data new task domain learned readily based mapping salient factor propose important aim representation unbiased Different form representation learning derived alternat...
embeddings: [-0.021625731140375137, -0.013297004625201225, -0.03595166280865669, 0.02415221743285656, -0.0011171476216986775, -0.00457181828096509, 0.07256867736577988, -0.016797970980405807, -0.022556159645318985, -0.04133708029985428, -0.030532395467162132, 0.027638962492346764, -0.00877831969410181, 0.08131878077983856, 0.01382...
row 147
author: ['David Balduzzi', 'Muhammad Ghifary']
id: 1509.03005v1
summary: This paper proposes GProp, a deep reinforcement learning algorithm for continuous policies with compatible function approximation. The algorithm is based on two innovations. Firstly, we present a temporal-difference based method for learning the gradient of the value-function. Secondly, we present the deviator-actor-cr...
title: Compatible Value Gradients for Reinforcement Learning of Continuous Deep Policies
year: 2015
arxiv_url: http://arxiv.org/pdf/1509.03005v1
info: Title Compatible Value Gradients Reinforcement Learning Continuous Deep Policies Summary paper proposes GProp deep reinforcement learning algorithm continuous policy compatible function approximation algorithm based two innovation Firstly present temporaldifference based method learning gradient valuefunction Secondly ...
embeddings: [-0.0199118759483099, 0.019877638667821884, -0.012061123736202717, -0.004727119579911232, 0.021368667483329773, -0.03343823552131653, 0.021761588752269745, -0.025325322523713112, -0.034446652978658676, 0.048284150660037994, -0.047001712024211884, 0.019318830221891403, 0.004566787742078304, 0.07806970924139023, -0.01044...
row 148
author: ['Takayuki Osogami', 'Makoto Otsuka']
id: 1509.08634v1
summary: We propose a particularly structured Boltzmann machine, which we refer to as a dynamic Boltzmann machine (DyBM), as a stochastic model of a multi-dimensional time-series. The DyBM can have infinitely many layers of units but allows exact and efficient inference and learning when its parameters have a proposed structure...
title: Learning dynamic Boltzmann machines with spike-timing dependent plasticity
year: 2015
arxiv_url: http://arxiv.org/pdf/1509.08634v1
info: Title Learning dynamic Boltzmann machine spiketiming dependent plasticity Summary propose particularly structured Boltzmann machine refer dynamic Boltzmann machine DyBM stochastic model multidimensional timeseries DyBM infinitely many layer unit allows exact efficient inference learning parameter proposed structure pro...
embeddings: [-0.023196227848529816, -0.07988418638706207, -0.03992109000682831, 0.012567665427923203, 0.015530785545706749, 0.014076362363994122, 0.028746625408530235, -0.01719609461724758, -0.006394918542355299, 0.04103895276784897, 0.008705190382897854, 0.029434477910399437, 0.0034950783010572195, 0.06274043768644333, 0.03900062...
149
149
['Yujia Li', 'Daniel Tarlow', 'Marc Brockschmidt', 'Richard Zemel']
1511.05493v4
Graph-structured data appears frequently in domains including chemistry, natural language semantics, social networks, and knowledge bases. In this work, we study feature learning techniques for graph-structured inputs. Our starting point is previous work on Graph Neural Networks (Scarselli et al., 2009), which we modif...
Gated Graph Sequence Neural Networks
2015
http://arxiv.org/pdf/1511.05493v4
Title Gated Graph Sequence Neural Networks Summary Graphstructured data appears frequently domain including chemistry natural language semantics social network knowledge base work study feature learning technique graphstructured input starting point previous work Graph Neural Networks Scarselli et al 2009 modify use ga...
[0.004501670598983765, 0.04923496022820473, 0.006892047822475433, 0.03186250478029251, -0.04291075840592384, -0.03801732137799263, 0.0056146360002458096, 0.047856032848358154, 0.07702115923166275, -0.02041093073785305, 0.029697367921471596, -0.00595736363902688, -0.0017703132471069694, 0.09651996940374374, 0.0196350105...
150
150
['Gabriel Dulac-Arnold', 'Richard Evans', 'Hado van Hasselt', 'Peter Sunehag', 'Timothy Lillicrap', 'Jonathan Hunt', 'Timothy Mann', 'Theophane Weber', 'Thomas Degris', 'Ben Coppin']
1512.07679v2
Being able to reason in an environment with a large number of discrete actions is essential to bringing reinforcement learning to a larger class of problems. Recommender systems, industrial plants and language models are only some of the many real-world tasks involving large numbers of discrete actions for which curren...
Deep Reinforcement Learning in Large Discrete Action Spaces
2015
http://arxiv.org/pdf/1512.07679v2
Title Deep Reinforcement Learning Large Discrete Action Spaces Summary able reason environment large number discrete action essential bringing reinforcement learning larger class problem Recommender system industrial plant language model many realworld task involving large number discrete action current method difficul...
[0.0021167639642953873, 0.0476946122944355, -0.014139327220618725, -0.011740886606276035, 0.005617568269371986, -0.009041255339980125, 0.04904504492878914, -0.0007096983026713133, 0.0023235927801579237, -0.004593364428728819, -0.019344856962561607, 0.009114304557442665, -0.09643930196762085, 0.08340571820735931, 0.0100...
151
151
['Aviv Tamar', 'Yi Wu', 'Garrett Thomas', 'Sergey Levine', 'Pieter Abbeel']
1602.02867v4
We introduce the value iteration network (VIN): a fully differentiable neural network with a `planning module' embedded within. VINs can learn to plan, and are suitable for predicting outcomes that involve planning-based reasoning, such as policies for reinforcement learning. Key to our approach is a novel differentiab...
Value Iteration Networks
2016
http://arxiv.org/pdf/1602.02867v4
Title Value Iteration Networks Summary introduce value iteration network VIN fully differentiable neural network planning module embedded within VINs learn plan suitable predicting outcome involve planningbased reasoning policy reinforcement learning Key approach novel differentiable approximation valueiteration algori...
[0.004493028856813908, 0.019453197717666626, -0.03255758062005043, -0.01281800027936697, -0.006495372857898474, -0.06049295514822006, 0.009518175385892391, -0.04015645384788513, -0.07157497107982635, 0.05011313036084175, 0.04419944807887077, 0.05142959579825401, -0.02482539974153042, 0.11494502425193787, -0.00948524288...
152
152
['Mikael Henaff', 'Arthur Szlam', 'Yann LeCun']
1602.06662v2
Although RNNs have been shown to be powerful tools for processing sequential data, finding architectures or optimization strategies that allow them to model very long term dependencies is still an active area of research. In this work, we carefully analyze two synthetic datasets originally outlined in (Hochreiter and S...
Recurrent Orthogonal Networks and Long-Memory Tasks
2016
http://arxiv.org/pdf/1602.06662v2
Title Recurrent Orthogonal Networks LongMemory Tasks Summary Although RNNs shown powerful tool processing sequential data finding architecture optimization strategy allow model long term dependency still active area research work carefully analyze two synthetic datasets originally outlined Hochreiter Schmidhuber 1997 u...
[-0.01747208833694458, 0.08482013642787933, -0.0024121911264955997, 0.022803330793976784, -0.014167627319693565, -0.023958789184689522, -0.001385111128911376, -0.005846528336405754, -0.018839579075574875, -0.0077763148583471775, -0.0014658243162557483, -0.020521823316812515, 0.025712769478559494, -0.02004188299179077, ...
153
153
['Hado van Hasselt', 'Arthur Guez', 'Matteo Hessel', 'Volodymyr Mnih', 'David Silver']
1602.07714v2
Most learning algorithms are not invariant to the scale of the function that is being approximated. We propose to adaptively normalize the targets used in learning. This is useful in value-based reinforcement learning, where the magnitude of appropriate value approximations can change over time when we update the polic...
Learning values across many orders of magnitude
2016
http://arxiv.org/pdf/1602.07714v2
Title Learning value across many order magnitude Summary learning algorithm invariant scale function approximated propose adaptively normalize target used learning useful valuebased reinforcement learning magnitude appropriate value approximation change time update policy behavior main motivation prior work learning pl...
[-0.04185480251908302, 0.030381247401237488, -0.02586919628083706, -0.039523981511592865, 0.020662624388933182, -0.030731257051229477, 0.021895112469792366, 0.008124059066176414, -0.060192834585905075, 0.04004546254873276, -0.04899836704134941, -0.0027776153292506933, -0.020518342033028603, 0.08556074649095535, -0.0272...
154
154
['Laura Deming', 'Sasha Targ', 'Nate Sauder', 'Diogo Almeida', 'Chun Jimmie Ye']
1605.07156v1
Each human genome is a 3 billion base pair set of encoding instructions. Decoding the genome using deep learning fundamentally differs from most tasks, as we do not know the full structure of the data and therefore cannot design architectures to suit it. As such, architectures that fit the structure of genomics should ...
Genetic Architect: Discovering Genomic Structure with Learned Neural Architectures
2016
http://arxiv.org/pdf/1605.07156v1
Title Genetic Architect Discovering Genomic Structure Learned Neural Architectures Summary human genome 3 billion base pair set encoding instruction Decoding genome using deep learning fundamentally differs task know full structure data therefore cannot design architecture suit architecture fit structure genomics learn...
[-7.282174192368984e-05, 0.09117594361305237, -0.03623959422111511, -0.015610825270414352, 0.018734263256192207, 0.028670238330960274, 0.0044066994450986385, 0.042840730398893356, 0.004791349172592163, 0.0667874664068222, 0.024639450013637543, -0.021972354501485825, 0.0029261228628456593, 0.10818623006343842, -0.001823...
155
155
['Tejas D. Kulkarni', 'Ardavan Saeedi', 'Simanta Gautam', 'Samuel J. Gershman']
1606.02396v1
Learning robust value functions given raw observations and rewards is now possible with model-free and model-based deep reinforcement learning algorithms. There is a third alternative, called Successor Representations (SR), which decomposes the value function into two components -- a reward predictor and a successor ma...
Deep Successor Reinforcement Learning
2016
http://arxiv.org/pdf/1606.02396v1
Title Deep Successor Reinforcement Learning Summary Learning robust value function given raw observation reward possible modelfree modelbased deep reinforcement learning algorithm third alternative called Successor Representations SR decomposes value function two component reward predictor successor map successor map r...
[0.0047506894916296005, 0.06638247519731522, -0.007088123355060816, -0.003773239441215992, -0.016015956178307533, -0.013669737614691257, -0.018909061327576637, -0.028850967064499855, -0.03467404097318649, 0.043055251240730286, -0.01732652820646763, 0.039135873317718506, -0.025339452549815178, 0.052633967250585556, 0.01...
156
156
['Yan Duan', 'John Schulman', 'Xi Chen', 'Peter L. Bartlett', 'Ilya Sutskever', 'Pieter Abbeel']
1611.02779v2
Deep reinforcement learning (deep RL) has been successful in learning sophisticated behaviors automatically; however, the learning process requires a huge number of trials. In contrast, animals can learn new tasks in just a few trials, benefiting from their prior knowledge about the world. This paper seeks to bridge th...
RL$^2$: Fast Reinforcement Learning via Slow Reinforcement Learning
2016
http://arxiv.org/pdf/1611.02779v2
Title RL2 Fast Reinforcement Learning via Slow Reinforcement Learning Summary Deep reinforcement learning deep RL successful learning sophisticated behavior automatically however learning process requires huge number trial contrast animal learn new task trial benefiting prior knowledge world paper seek bridge gap Rathe...
[0.029053961858153343, 0.031220439821481705, -0.022526610642671585, -0.0188052449375391, -0.01906392350792885, 0.006721537094563246, 0.002386258915066719, -0.00878199189901352, -0.03018711879849434, -0.015467081218957901, 0.002808040240779519, 0.021843677386641502, -0.016497164964675903, 0.053887419402599335, 0.0067304...
157
157
['Jasmine Collins', 'Jascha Sohl-Dickstein', 'David Sussillo']
1611.09913v3
Two potential bottlenecks on the expressiveness of recurrent neural networks (RNNs) are their ability to store information about the task in their parameters, and to store information about the input history in their units. We show experimentally that all common RNN architectures achieve nearly the same per-task and pe...
Capacity and Trainability in Recurrent Neural Networks
2016
http://arxiv.org/pdf/1611.09913v3
Title Capacity Trainability Recurrent Neural Networks Summary Two potential bottleneck expressiveness recurrent neural network RNNs ability store information task parameter store information input history unit show experimentally common RNN architecture achieve nearly pertask perunit capacity bound careful training var...
[0.023925350978970528, 0.01983645185828209, 0.009100564755499363, 0.0378272607922554, -0.008427752181887627, -0.021360011771321297, 0.03861137479543686, -0.03598586469888687, -0.02964172326028347, -0.004948351997882128, -0.04585633799433708, -0.05377557873725891, 0.015707507729530334, 0.03036889061331749, 0.01282105408...
158
158
['Mohammad Taha Bahadori', 'Krzysztof Chalupka', 'Edward Choi', 'Robert Chen', 'Walter F. Stewart', 'Jimeng Sun']
1702.02604v2
In application domains such as healthcare, we want accurate predictive models that are also causally interpretable. In pursuit of such models, we propose a causal regularizer to steer predictive models towards causally-interpretable solutions and theoretically study its properties. In a large-scale analysis of Electron...
Causal Regularization
2017
http://arxiv.org/pdf/1702.02604v2
Title Causal Regularization Summary application domain healthcare want accurate predictive model also causally interpretable pursuit model propose causal regularizer steer predictive model towards causallyinterpretable solution theoretically study property largescale analysis Electronic Health Records EHR causallyregul...
[-0.02947417087852955, 0.08376093208789825, -0.018450533971190453, -0.04846975579857826, 0.028199339285492897, 0.00529453856870532, 0.01475520059466362, 0.037952154874801636, 0.017821740359067917, 0.05523843318223953, 0.08522357791662216, 0.033800411969423294, -0.02397766336798668, 0.05173458531498909, 0.02230899967253...
159
159
['Dario Garcia-Gasulla', 'Ferran Parés', 'Armand Vilalta', 'Jonatan Moreno', 'Eduard Ayguadé', 'Jesús Labarta', 'Ulises Cortés', 'Toyotaro Suzumura']
1703.01127v4
Deep neural networks are representation learning techniques. During training, a deep net is capable of generating a descriptive language of unprecedented size and detail in machine learning. Extracting the descriptive language coded within a trained CNN model (in the case of image data), and reusing it for other purpos...
On the Behavior of Convolutional Nets for Feature Extraction
2017
http://arxiv.org/pdf/1703.01127v4
Title Behavior Convolutional Nets Feature Extraction Summary Deep neural network representation learning technique training deep net capable generating descriptive language unprecedented size detail machine learning Extracting descriptive language coded within trained CNN model case image data reusing purpose field int...
[0.032952629029750824, 0.01611396297812462, -0.01034631859511137, 0.10258033126592636, -0.02511817403137684, 0.008014094084501266, 0.06135148927569389, 0.025373611599206924, -0.04846315085887909, -0.0306550282984972, -0.015558541752398014, 0.02333213947713375, -0.027563640847802162, 0.058921780437231064, 0.025731228291...
160
160
['Aditya Grover', 'Manik Dhar', 'Stefano Ermon']
1705.08868v2
Adversarial learning of probabilistic models has recently emerged as a promising alternative to maximum likelihood. Implicit models such as generative adversarial networks (GAN) often generate better samples compared to explicit models trained by maximum likelihood. Yet, GANs sidestep the characterization of an explici...
Flow-GAN: Combining Maximum Likelihood and Adversarial Learning in Generative Models
2017
http://arxiv.org/pdf/1705.08868v2
Title FlowGAN Combining Maximum Likelihood Adversarial Learning Generative Models Summary Adversarial learning probabilistic model recently emerged promising alternative maximum likelihood Implicit model generative adversarial network GAN often generate better sample compared explicit model trained maximum likelihood Y...
[-0.003950317390263081, 0.07688191533088684, 0.0009166643721982837, -0.004617008846253157, -0.0009049780201166868, -0.018247541040182114, 0.03817833214998245, -0.009871242567896843, -0.035942502319812775, 0.002297213999554515, -0.03915375843644142, -0.0024538380093872547, -0.005245716776698828, 0.07906213402748108, 0.0...
161
161
['Chris J. Maddison', 'Dieterich Lawson', 'George Tucker', 'Nicolas Heess', 'Mohammad Norouzi', 'Andriy Mnih', 'Arnaud Doucet', 'Yee Whye Teh']
1705.09279v3
When used as a surrogate objective for maximum likelihood estimation in latent variable models, the evidence lower bound (ELBO) produces state-of-the-art results. Inspired by this, we consider the extension of the ELBO to a family of lower bounds defined by a particle filter's estimator of the marginal likelihood, the ...
Filtering Variational Objectives
2017
http://arxiv.org/pdf/1705.09279v3
Title Filtering Variational Objectives Summary used surrogate objective maximum likelihood estimation latent variable model evidence lower bound ELBO produce stateoftheart result Inspired consider extension ELBO family lower bound defined particle filter estimator marginal likelihood filtering variational objective FIV...
[0.006874850019812584, 0.0401766411960125, 0.0023397665936499834, -0.037960510700941086, 0.028556382283568382, -0.05808863043785095, 0.02238357998430729, 0.009222697466611862, -0.05419144779443741, 0.01739583909511566, 0.027890967205166817, 0.025587838143110275, -0.010478954762220383, 0.09561523050069809, 0.00105245469...
162
162
['Jiaxin Shi', 'Shengyang Sun', 'Jun Zhu']
1705.10119v3
Recent progress in variational inference has paid much attention to the flexibility of variational posteriors. One promising direction is to use implicit distributions, i.e., distributions without tractable densities as the variational posterior. However, existing methods on implicit posteriors still face challenges of...
Kernel Implicit Variational Inference
2017
http://arxiv.org/pdf/1705.10119v3
Title Kernel Implicit Variational Inference Summary Recent progress variational inference paid much attention flexibility variational posterior One promising direction use implicit distribution ie distribution without tractable density variational posterior However existing method implicit posterior still face challeng...
[-0.010575110092759132, 0.08029486238956451, -0.028552521020174026, 0.009393423795700073, 0.014080696739256382, -0.04454711452126503, -0.006002450827509165, -0.004905517678707838, -0.012617061845958233, 0.03260224685072899, -0.00027383549604564905, 0.05595364049077034, 0.03977024555206299, 0.04732508212327957, 0.060321...
163
163
['Julien Perez', 'Tomi Silander']
1705.10993v1
Partially observable environments present an important open challenge in the domain of sequential control learning with delayed rewards. Despite numerous attempts during the last two decades, the majority of reinforcement learning algorithms and associated approximate models, applied to this context, still assume Marko...
Non-Markovian Control with Gated End-to-End Memory Policy Networks
2017
http://arxiv.org/pdf/1705.10993v1
Title NonMarkovian Control Gated EndtoEnd Memory Policy Networks Summary Partially observable environment present important open challenge domain sequential control learning delayed reward Despite numerous attempt two last decade majority reinforcement learning algorithm associated approximate model applied context sti...
[-0.024732718244194984, 0.01664862409234047, -0.02197991870343685, -0.03774814307689667, 0.022393692284822464, -0.009015173651278019, 0.01819654554128647, -0.02713029459118843, -0.014965947717428207, 0.0020225439220666885, -0.015060298144817352, 0.005750337149947882, -0.061916615813970566, 0.08552317321300507, 0.028598...
164
164
['Emmanuel Dufourq', 'Bruce A. Bassett']
1707.00703v1
Regression or classification? This is perhaps the most basic question faced when tackling a new supervised learning problem. We present an Evolutionary Deep Learning (EDL) algorithm that automatically solves this by identifying the question type with high accuracy, along with a proposed deep architecture. Typically, a ...
Automated Problem Identification: Regression vs Classification via Evolutionary Deep Networks
2017
http://arxiv.org/pdf/1707.00703v1
Title Automated Problem Identification Regression v Classification via Evolutionary Deep Networks Summary Regression classification perhaps basic question faced tackling new supervised learning problem present Evolutionary Deep Learning EDL algorithm automatically solves identifying question type high accuracy along pr...
[0.017554648220539093, 0.059413518756628036, -0.04749822989106178, 0.021838361397385597, -0.031179143115878105, 0.04083777591586113, 0.023964963853359222, 0.014833181165158749, 0.0026751523837447166, -0.031123120337724686, 0.0407755970954895, 0.03238639608025551, -0.03210805729031563, 0.05949537083506584, 0.01950212195...
165
165
['Nikhil Mishra', 'Mostafa Rohaninejad', 'Xi Chen', 'Pieter Abbeel']
1707.03141v3
Deep neural networks excel in regimes with large amounts of data, but tend to struggle when data is scarce or when they need to adapt quickly to changes in the task. In response, recent work in meta-learning proposes training a meta-learner on a distribution of similar tasks, in the hopes of generalization to novel but...
A Simple Neural Attentive Meta-Learner
2017
http://arxiv.org/pdf/1707.03141v3
Title Simple Neural Attentive MetaLearner Summary Deep neural network excel regime large amount data tend struggle data scarce need adapt quickly change task response recent work metalearning proposes training metalearner distribution similar task hope generalization novel related task learning highlevel strategy captu...
[0.004009567201137543, 0.03183344751596451, -0.025987913832068443, -0.008682005107402802, 0.004264588467776775, -0.02678695321083069, 0.03491746261715889, -0.0019520559580996633, -0.0837555080652237, 0.006071198731660843, -0.02262096107006073, 0.020031025633215904, -0.023839369416236877, 0.03718545287847519, 0.04783983...
166
166
['Simone Scardapane', 'Steven Van Vaerenbergh', 'Simone Totaro', 'Aurelio Uncini']
1707.04035v2
Neural networks are generally built by interleaving (adaptable) linear layers with (fixed) nonlinear activation functions. To increase their flexibility, several authors have proposed methods for adapting the activation functions themselves, endowing them with varying degrees of flexibility. None of these approaches, h...
Kafnets: kernel-based non-parametric activation functions for neural networks
2017
http://arxiv.org/pdf/1707.04035v2
Title Kafnets kernelbased nonparametric activation function neural network Summary Neural network generally built interleaving adaptable linear layer fixed nonlinear activation function increase flexibility several author proposed method adapting activation function endowing varying degree flexibility None approach how...
[-0.022498300299048424, 0.017312129959464073, -0.030839217826724052, 0.00999076571315527, 0.004893321078270674, -0.04123031347990036, 0.03281127288937569, -0.023878756910562515, -0.062344033271074295, 0.02721944823861122, -0.052044257521629333, 0.06830356270074844, -0.0011952181812375784, 0.08157125115394592, 0.0450752...
167
167
['Razvan Pascanu', 'Yujia Li', 'Oriol Vinyals', 'Nicolas Heess', 'Lars Buesing', 'Sebastien Racanière', 'David Reichert', 'Théophane Weber', 'Daan Wierstra', 'Peter Battaglia']
1707.06170v1
Conventional wisdom holds that model-based planning is a powerful approach to sequential decision-making. It is often very challenging in practice, however, because while a model can be used to evaluate a plan, it does not prescribe how to construct a plan. Here we introduce the "Imagination-based Planner", the first m...
Learning model-based planning from scratch
2017
http://arxiv.org/pdf/1707.06170v1
Title Learning modelbased planning scratch Summary Conventional wisdom hold modelbased planning powerful approach sequential decisionmaking often challenging practice however model used evaluate plan prescribe construct plan introduce Imaginationbased Planner first modelbased sequential decisionmaking agent learn const...
[0.006597284693270922, 0.017582206055521965, -0.01829913817346096, -0.030757684260606766, -0.02217869460582733, -0.004649126436561346, -0.02426075004041195, -0.014787039719522, -0.0075731598772108555, 0.007130926474928856, 0.03597297519445419, 0.03913571685552597, -0.04022463038563728, 0.1198381632566452, -0.0135048776...
168
168
['Isabeau Prémont-Schwarz', 'Alexander Ilin', 'Tele Hotloo Hao', 'Antti Rasmus', 'Rinu Boney', 'Harri Valpola']
1707.09219v4
We propose a recurrent extension of the Ladder networks whose structure is motivated by the inference required in hierarchical latent variable models. We demonstrate that the recurrent Ladder is able to handle a wide variety of complex learning tasks that benefit from iterative inference and temporal modeling. The arch...
Recurrent Ladder Networks
2017
http://arxiv.org/pdf/1707.09219v4
Title Recurrent Ladder Networks Summary propose recurrent extension Ladder network whose structure motivated inference required hierarchical latent variable model demonstrate recurrent Ladder able handle wide variety complex learning task benefit iterative inference temporal modeling architecture show closetooptimal re...
[0.008140220306813717, 0.0573250949382782, 0.0059646740555763245, 0.04990041255950928, 0.00830447394400835, 0.0008088753093034029, 0.01290297694504261, -0.02803797833621502, -0.05059698969125748, 0.03342720493674278, -0.022068766877055168, -0.007347153499722481, -0.00231430702842772, 0.031997282058000565, 0.04565506055...
169
169
['Kenji Kawaguchi', 'Leslie Pack Kaelbling', 'Yoshua Bengio']
1710.05468v3
With a direct analysis of neural networks, this paper presents a mathematically tight generalization theory to partially address an open problem regarding the generalization of deep learning. Unlike previous bound-based theory, our main theory is quantitatively as tight as possible for every dataset individually, while...
Generalization in Deep Learning
2017
http://arxiv.org/pdf/1710.05468v3
Title Generalization Deep Learning Summary direct analysis neural network paper present mathematically tight generalization theory partially address open problem regarding generalization deep learning Unlike previous boundbased theory main theory quantitatively tight possible every dataset individually producing qualit...
[-0.012876843102276325, 0.04441167414188385, -0.01813766546547413, 0.03599070385098457, 0.007772340439260006, -0.008717292919754982, 0.000206919910851866, -0.012016835622489452, -0.043246444314718246, 0.04879344627261162, 0.012480729259550571, -0.02803785540163517, -0.015330921858549118, 0.05843678116798401, 0.01337924...
170
170
['Yannic Kilcher', 'Gary Becigneul', 'Thomas Hofmann']
1710.11386v1
It is commonly agreed that the use of relevant invariances as a good statistical bias is important in machine-learning. However, most approaches that explicitly incorporate invariances into a model architecture only make use of very simple transformations, such as translations and rotations. Hence, there is a need for ...
Parametrizing filters of a CNN with a GAN
2017
http://arxiv.org/pdf/1710.11386v1
Title Parametrizing filter CNN GAN Summary commonly agreed use relevant invariance good statistical bias important machinelearning However approach explicitly incorporate invariance model architecture make use simple transformation translation rotation Hence need method model extract richer transformation capture much ...
[-0.018514923751354218, 0.07642007619142532, -0.01028684712946415, 0.010576027445495129, 0.019531575962901115, -0.013879798352718353, 0.008748224005103111, 0.00696737552061677, -0.10806851089000702, -0.002081956248730421, -0.0034765591844916344, 0.07079602032899857, -0.026015527546405792, 0.02715121954679489, 0.0681100...
171
171
['Zhen He', 'Shaobing Gao', 'Liang Xiao', 'Daxue Liu', 'Hangen He', 'David Barber']
1711.01577v3
Long Short-Term Memory (LSTM) is a popular approach to boosting the ability of Recurrent Neural Networks to store longer term temporal information. The capacity of an LSTM network can be increased by widening and adding layers. However, usually the former introduces additional parameters, while the latter increases the...
Wider and Deeper, Cheaper and Faster: Tensorized LSTMs for Sequence Learning
2017
http://arxiv.org/pdf/1711.01577v3
Title Wider Deeper Cheaper Faster Tensorized LSTMs Sequence Learning Summary Long ShortTerm Memory LSTM popular approach boosting ability Recurrent Neural Networks store longer term temporal information capacity LSTM network increased widening adding layer However usually former introduces additional parameter latter i...
[0.00562323397025466, 0.033965133130550385, -0.014653699472546577, 0.014243245124816895, -0.020695164799690247, 0.005932270083576441, 0.042821645736694336, 0.03192581981420517, 0.0015997419832274318, -0.037798233330249786, -0.031120603904128075, -0.05774397403001785, 0.03605860844254494, 0.02911956235766411, 0.00345828...
172
172
['Shruti R. Kulkarni', 'John M. Alexiades', 'Bipin Rajendran']
1711.03637v1
We describe a novel spiking neural network (SNN) for automated, real-time handwritten digit classification and its implementation on a GP-GPU platform. Information processing within the network, from feature extraction to classification is implemented by mimicking the basic aspects of neuronal spike initiation and prop...
Learning and Real-time Classification of Hand-written Digits With Spiking Neural Networks
2017
http://arxiv.org/pdf/1711.03637v1
Title Learning Realtime Classification Handwritten Digits Spiking Neural Networks Summary describe novel spiking neural network SNN automated realtime handwritten digit classification implementation GPGPU platform Information processing within network feature extraction classification implemented mimicking basic aspect...
[-0.029836155474185944, -0.004212182946503162, -0.005331828258931637, 0.062443844974040985, 0.03699972480535507, 0.010561011731624603, 0.06665130704641342, 0.006106346379965544, 0.006240308750420809, -0.013324867002665997, -0.020448651164770126, 0.02570227161049843, 0.025979647412896156, 0.09828615933656693, 0.01205835...
173
173
['Joan Serrà', 'Dídac Surís', 'Marius Miron', 'Alexandros Karatzoglou']
1801.01423v2
Catastrophic forgetting occurs when a neural network loses the information learned in a previous task after training on subsequent tasks. This problem remains a hurdle for artificial intelligence systems with sequential learning capabilities. In this paper, we propose a task-based hard attention mechanism that preserve...
Overcoming catastrophic forgetting with hard attention to the task
2018
http://arxiv.org/pdf/1801.01423v2
Title Overcoming catastrophic forgetting hard attention task Summary Catastrophic forgetting occurs neural network loses information learned previous task training subsequent task problem remains hurdle artificial intelligence system sequential learning capability paper propose taskbased hard attention mechanism preser...
[-0.037000950425863266, 0.03462167829275131, -0.007812450174242258, -0.01077504362910986, 0.009100215509533882, 0.009886335581541061, -0.023790210485458374, -0.0192704014480114, 0.002923540771007538, 0.01397237554192543, -0.0008448531734757125, 0.0033932134974747896, -0.004014899488538504, 0.05124600976705551, 0.033668...
174
174
['Zachary C. Lipton', 'Yu-Xiang Wang', 'Alex Smola']
1802.03916v2
Faced with distribution shift between training and test set, we wish to detect and quantify the shift, and to correct our classifiers without test set labels. Motivated by medical diagnosis, where diseases (targets) cause symptoms (observations), we focus on label shift, where the label marginal $p(y)$ changes but the...
Detecting and Correcting for Label Shift with Black Box Predictors
2018
http://arxiv.org/pdf/1802.03916v2
Title Detecting Correcting Label Shift Black Box Predictors Summary Faced distribution shift training test set wish detect quantify shift correct classifier without test set label Motivated medical diagnosis disease target cause symptom observation focus label shift label marginal py change conditional pxy propose Blac...
[0.0030144392512738705, 0.03700340911746025, -0.009759852662682533, -0.00436204532161355, -0.010795993730425835, 0.01528732106089592, 0.0360267236828804, 0.0810960903763771, -0.04707879573106766, -0.00498958257958293, 0.09225547313690186, 0.006553373299539089, 0.00669174874201417, 0.05810375139117241, -0.01896798051893...
175
175
['Kenji Kawaguchi', 'Yoshua Bengio']
1802.07426v1
This paper introduces a novel measure-theoretic learning theory to analyze generalization behaviors of practical interest. The proposed learning theory has the following abilities: 1) to utilize the qualities of each learned representation on the path from raw inputs to outputs in representation learning, 2) to guarant...
Generalization in Machine Learning via Analytical Learning Theory
2018
http://arxiv.org/pdf/1802.07426v1
Title Generalization Machine Learning via Analytical Learning Theory Summary paper introduces novel measuretheoretic learning theory analyze generalization behavior practical interest proposed learning theory following ability 1 utilize quality learned representation path raw input output representation learning 2 guar...
[-0.04089696332812309, 0.010580497793853283, -0.04603782296180725, -0.009592321701347828, 0.004706556908786297, 0.004627256654202938, 0.006789667531847954, 0.010783826932311058, -0.05618637055158615, 0.03374860808253288, 0.06609243899583817, -0.017072129994630814, -0.003834246890619397, 0.05823005735874176, 0.024226350...
176
176
['Roman Novak', 'Yasaman Bahri', 'Daniel A. Abolafia', 'Jeffrey Pennington', 'Jascha Sohl-Dickstein']
1802.08760v1
In practice it is often found that large over-parameterized neural networks generalize better than their smaller counterparts, an observation that appears to conflict with classical notions of function complexity, which typically favor smaller models. In this work, we investigate this tension between complexity and gen...
Sensitivity and Generalization in Neural Networks: an Empirical Study
2018
http://arxiv.org/pdf/1802.08760v1
Title Sensitivity Generalization Neural Networks Empirical Study Summary practice often found large overparameterized neural network generalize better smaller counterpart observation appears conflict classical notion function complexity typically favor smaller model work investigate tension complexity generalization ex...
[-0.008312312886118889, 0.009196307510137558, -0.05233009159564972, 0.021689528599381447, 0.03583923727273941, -0.023269161581993103, 0.03057517297565937, -0.006933005526661873, -0.021818634122610092, 0.027232104912400246, -0.00268950336612761, 0.030916081741452217, 0.00390625512227416, 0.02507195994257927, 0.027891039...
177
177
['Ari S. Morcos', 'David G. T. Barrett', 'Neil C. Rabinowitz', 'Matthew Botvinick']
1803.06959v1
Despite their ability to memorize large datasets, deep neural networks often achieve good generalization performance. However, the differences between the learned solutions of networks which generalize and those which do not remain unclear. Additionally, the tuning properties of single directions (defined as the activa...
On the importance of single directions for generalization
2018
http://arxiv.org/pdf/1803.06959v1
Title importance single direction generalization Summary Despite ability memorize large datasets deep neural network often achieve good generalization performance However difference learned solution network generalize remain unclear Additionally tuning property single direction defined activation single unit linear com...
[-0.025508176535367966, 0.058791566640138626, -0.02577769011259079, 0.013703260570764542, 0.011755692772567272, 0.011057221330702305, 0.06269649416208267, -0.022969113662838936, -0.06650397926568985, 0.03535652160644531, -0.004868027754127979, -0.016830842941999435, -0.001781700411811471, 0.06200243905186653, 0.0127793...
178
178
['Srinivas C. Turaga', 'Kevin L. Briggman', 'Moritz Helmstaedter', 'Winfried Denk', 'H. Sebastian Seung']
0911.5372v1
Images can be segmented by first using a classifier to predict an affinity graph that reflects the degree to which image pixels must be grouped together and then partitioning the graph to yield a segmentation. Machine learning has been applied to the affinity classifier to produce affinity graphs that are good in the s...
Maximin affinity learning of image segmentation
2009
http://arxiv.org/pdf/0911.5372v1
Title Maximin affinity learning image segmentation Summary Images segmented first using classifier predict affinity graph reflects degree image pixel must grouped together partitioning graph yield segmentation Machine learning applied affinity classifier produce affinity graph good sense minimizing edge misclassificati...
[-0.026119409129023552, -0.0805911123752594, -0.007136921398341656, 0.028323953971266747, -0.02615479752421379, -0.021099906414747238, 0.008177189156413078, 0.014270447194576263, 0.04525921121239662, 0.029587985947728157, 0.014500692486763, 0.0403924323618412, 0.018854228779673576, -0.03336678072810173, 0.0146484868600...
179
179
['Sergey S. Tarasenko']
1102.2739v1
This study is focused on the development of the cortex-like visual object recognition system. We propose a general framework, which consists of three hierarchical levels (modules). These modules functionally correspond to the V1, V4 and IT areas. Both bottom-up and top-down connections between the hierarchical levels V...
A General Framework for Development of the Cortex-like Visual Object Recognition System: Waves of Spikes, Predictive Coding and Universal Dictionary of Features
2011
http://arxiv.org/pdf/1102.2739v1
Title General Framework Development Cortexlike Visual Object Recognition System Waves Spikes Predictive Coding Universal Dictionary Features Summary study focused development cortexlike visual object recognition system propose general framework consists three hierarchical level module module functionally correspond V1 ...
[-0.0009064223268069327, 0.0021670248825103045, -0.025190727785229683, 0.04891321063041687, -0.002613988472148776, 0.014427641406655312, 0.01968766376376152, 0.008697547018527985, 0.03644901141524315, 0.005616879090666771, -0.037766337394714355, 0.014864770695567131, 0.00046208896674215794, 0.08074658364057541, 0.01453...
180
180
['Dan C. Cireşan', 'Ueli Meier', 'Luca M. Gambardella', 'Jürgen Schmidhuber']
1103.4487v1
The competitive MNIST handwritten digit recognition benchmark has a long history of broken records since 1998. The most recent substantial improvement by others dates back 7 years (error rate 0.4%). Recently we were able to significantly improve this result, using graphics cards to greatly speed up training of simple ...
Handwritten Digit Recognition with a Committee of Deep Neural Nets on GPUs
2011
http://arxiv.org/pdf/1103.4487v1
Title Handwritten Digit Recognition Committee Deep Neural Nets GPUs Summary competitive MNIST handwritten digit recognition benchmark long history broken record since 1998 recent substantial improvement others date back 7 year error rate 04 Recently able significantly improve result using graphic card greatly speed tra...
[-0.009414189495146275, 0.0630563497543335, 0.0035463629756122828, 0.09012654423713684, -0.010637854225933552, 0.0219058096408844, 0.08015017956495285, 0.023462682962417603, -0.002985337283462286, 0.03166167810559273, 0.02365194261074066, -0.023339396342635155, 0.050962645560503006, 0.022648407146334648, -0.01319724414...
181
181
['Ridwan Al Iqbal']
1110.0214v1
Artificial Neural Networks are among the most popular algorithms for supervised learning. However, Neural Networks have a well-known drawback of being a "Black Box" learner that is not comprehensible to users. This lack of transparency makes them unsuitable for many high-risk tasks such as medical diagnosis that require...
Eclectic Extraction of Propositional Rules from Neural Networks
2011
http://arxiv.org/pdf/1110.0214v1
Title Eclectic Extraction Propositional Rules Neural Networks Summary Artificial Neural Network among popular algorithm supervised learning However Neural Networks wellknown drawback Black Box learner comprehensible Users lack transparency make unsuitable many high risk task medical diagnosis requires rational justific...
[0.0367228165268898, 0.029489461332559586, -0.013720064423978329, 0.011697973124682903, -0.06896384060382843, -0.0035867507103830576, -0.007628391031175852, 0.046477969735860825, 0.011711702682077885, 0.007252187933772802, 0.04214801266789436, 0.019037997350096703, 0.02832217514514923, 0.028179598972201347, -0.02131610...
182
182
['Arnab Ghosh', 'Viveka Kulharia', 'Vinay Namboodiri']
1612.01294v1
Communicating and sharing intelligence among agents is an important facet of achieving Artificial General Intelligence. As a first step towards this challenge, we introduce a novel framework for image generation: Message Passing Multi-Agent Generative Adversarial Networks (MPM GANs). While GANs have recently been shown...
Message Passing Multi-Agent GANs
2016
http://arxiv.org/pdf/1612.01294v1
Title Message Passing MultiAgent GANs Summary Communicating sharing intelligence among agent important facet achieving Artificial General Intelligence first step towards challenge introduce novel framework image generation Message Passing MultiAgent Generative Adversarial Networks MPM GANs GANs recently shown effective...
[0.017849059775471687, 0.05504539608955383, 0.0005229501985013485, 0.022542985156178474, -0.021121103316545486, 0.0011106240563094616, 0.05898742750287056, -0.0020006345584988594, -0.0388215146958828, 0.001636327593587339, -0.030131246894598007, -0.0034502067137509584, -0.03480396419763565, 0.010919131338596344, 0.0714...
183
183
['Tong Che', 'Yanran Li', 'Athul Paul Jacob', 'Yoshua Bengio', 'Wenjie Li']
1612.02136v5
Although Generative Adversarial Networks achieve state-of-the-art results on a variety of generative tasks, they are regarded as highly unstable and prone to miss modes. We argue that these bad behaviors of GANs are due to the very particular functional shape of the trained discriminators in high dimensional spaces, wh...
Mode Regularized Generative Adversarial Networks
2016
http://arxiv.org/pdf/1612.02136v5
Title Mode Regularized Generative Adversarial Networks Summary Although Generative Adversarial Networks achieve stateoftheart result variety generative task regarded highly unstable prone miss mode argue bad behavior GANs due particular functional shape trained discriminator high dimensional space easily make training ...
[0.0021598569583147764, 0.08404155820608139, -0.019086865708231926, -0.008449694141745567, 0.03577665239572525, -0.025913136079907417, 0.005626949481666088, 0.00010370239760959521, -0.04340387135744095, 0.051369521766901016, -0.003550903871655464, 0.015018542297184467, -0.02204766683280468, 0.033727481961250305, 0.0594...
184
184
['Bharat Singh', 'Soham De', 'Yangmuzi Zhang', 'Thomas Goldstein', 'Gavin Taylor']
1510.04609v1
The increasing complexity of deep learning architectures is resulting in training time requiring weeks or even months. This slow training is due in part to vanishing gradients, in which the gradients used by back-propagation are extremely large for weights connecting deep layers (layers near the output layer), and extr...
Layer-Specific Adaptive Learning Rates for Deep Networks
2015
http://arxiv.org/pdf/1510.04609v1
Title LayerSpecific Adaptive Learning Rates Deep Networks Summary increasing complexity deep learning architecture resulting training time requiring week even month slow training due part vanishing gradient gradient used backpropagation extremely large weight connecting deep layer layer near output layer extremely smal...
[0.004674241412431002, -0.010195175185799599, -0.02250341884791851, 0.06782056391239166, 0.020764455199241638, -0.0065002962946891785, 0.06451480090618134, -0.014551694504916668, -0.024900073185563087, 0.015319187194108963, -0.01956958696246147, 0.039943575859069824, 0.016074420884251595, 0.0381023995578289, 0.00629762...
185
185
['Baochen Sun', 'Jiashi Feng', 'Kate Saenko']
1511.05547v2
Unlike human learning, machine learning often fails to handle changes between training (source) and test (target) input distributions. Such domain shifts, common in practical scenarios, severely damage the performance of conventional machine learning methods. Supervised domain adaptation methods have been proposed for ...
Return of Frustratingly Easy Domain Adaptation
2015
http://arxiv.org/pdf/1511.05547v2
Title Return Frustratingly Easy Domain Adaptation Summary Unlike human learning machine learning often fails handle change training source test target input distribution domain shift common practical scenario severely damage performance conventional machine learning method Supervised domain adaptation method proposed c...
[0.006140164099633694, 0.03558460250496864, -0.02201683446764946, -0.03230151906609535, -0.02162107452750206, 0.00257741822861135, 0.08435023576021194, -0.0018613454885780811, -0.05232500284910202, -0.008091026917099953, -0.0293829794973135, 0.00776475016027689, 0.02512151189148426, 0.05079600214958191, -0.029154613614...
186
186
['Lukas Cavigelli', 'Luca Benini']
1512.04295v2
An ever increasing number of computer vision and image/video processing challenges are being approached using deep convolutional neural networks, obtaining state-of-the-art results in object recognition and detection, semantic segmentation, action recognition, optical flow and superresolution. Hardware acceleration of ...
Origami: A 803 GOp/s/W Convolutional Network Accelerator
2015
http://arxiv.org/pdf/1512.04295v2
Title Origami 803 GOpsW Convolutional Network Accelerator Summary ever increasing number computer vision imagevideo processing challenge approached using deep convolutional neural network obtaining stateoftheart result object recognition detection semantic segmentation action recognition optical flow superresolution Ha...
[-0.03533274680376053, 0.014776026830077171, -0.0014125180896371603, 0.11635368317365646, 0.015131840482354164, -0.014659779146313667, 0.02819630317389965, 0.006116359960287809, -0.02864663116633892, -0.03482743725180626, 0.019600851461291313, 0.012567292898893356, 0.017588449642062187, 0.06970798969268799, 0.044958956...
187
187
['Aravind S. Lakshminarayanan', 'Ramnandan Krishnamurthy', 'Peeyush Kumar', 'Balaraman Ravindran']
1605.05359v2
This paper introduces an automated skill acquisition framework in reinforcement learning which involves identifying a hierarchical description of the given task in terms of abstract states and extended actions between abstract states. Identifying such structures present in the task provides ways to simplify and speed u...
Option Discovery in Hierarchical Reinforcement Learning using Spatio-Temporal Clustering
2016
http://arxiv.org/pdf/1605.05359v2
Title Option Discovery Hierarchical Reinforcement Learning using SpatioTemporal Clustering Summary paper introduces automated skill acquisition framework reinforcement learning involves identifying hierarchical description given task term abstract state extended action abstract state Identifying structure present task ...
[-0.030196843668818474, 0.0006299293017946184, -0.02648656629025936, -0.008701860904693604, 0.003459151368588209, -0.027547433972358704, 0.014461868442595005, -0.00015311477181967348, -0.03315132483839989, -0.013102319091558456, 0.00047338573494926095, 0.0448630228638649, 0.0005496739177033305, 0.07918047159910202, 0.0...
188
188
['Andreas Veit', 'Michael Wilber', 'Serge Belongie']
1605.06431v2
In this work we propose a novel interpretation of residual networks showing that they can be seen as a collection of many paths of differing length. Moreover, residual networks seem to enable very deep networks by leveraging only the short paths during training. To support this observation, we rewrite residual networks...
Residual Networks Behave Like Ensembles of Relatively Shallow Networks
2016
http://arxiv.org/pdf/1605.06431v2
Title Residual Networks Behave Like Ensembles Relatively Shallow Networks Summary work propose novel interpretation residual network showing seen collection many path differing length Moreover residual network seem enable deep network leveraging short path training support observation rewrite residual network explicit ...
[-0.008388273417949677, 0.02791118435561657, -0.022707780823111534, 0.055501021444797516, 0.01037044171243906, -0.02185480110347271, 9.231572767021134e-05, -0.010486502200365067, -0.06265868246555328, 0.028354881331324577, 0.01657799817621708, 0.024325605481863022, -0.008000190369784832, -0.006819234229624271, 0.019591...
189
189
['Anh Nguyen', 'Alexey Dosovitskiy', 'Jason Yosinski', 'Thomas Brox', 'Jeff Clune']
1605.09304v5
Deep neural networks (DNNs) have demonstrated state-of-the-art results on many pattern recognition tasks, especially vision classification problems. Understanding the inner workings of such computational brains is both fascinating basic science that is interesting in its own right - similar to why we study the human br...
Synthesizing the preferred inputs for neurons in neural networks via deep generator networks
2016
http://arxiv.org/pdf/1605.09304v5
Title Synthesizing preferred input neuron neural network via deep generator network Summary Deep neural network DNNs demonstrated stateoftheart result many pattern recognition task especially vision classification problem Understanding inner working computational brain fascinating basic science interesting right simila...
[-0.01585194654762745, 0.07251649349927902, -0.01084416825324297, 0.02141485922038555, 0.002123874379321933, -0.0032464833930134773, 0.025581488385796547, 0.011971809901297092, -0.01633543334901333, 0.035361990332603455, -0.015380226075649261, 0.004455838818103075, -0.009208879433572292, 0.06377391517162323, 0.06678320...
190
190
['Rathinakumar Appuswamy', 'Tapan Nayak', 'John Arthur', 'Steven Esser', 'Paul Merolla', 'Jeffrey Mckinstry', 'Timothy Melano', 'Myron Flickner', 'Dharmendra Modha']
1606.02407v1
We derive a relationship between network representation in energy-efficient neuromorphic architectures and block Toeplitz convolutional matrices. Inspired by this connection, we develop deep convolutional networks using a family of structured convolutional matrices and achieve state-of-the-art trade-off between energy e...
Structured Convolution Matrices for Energy-efficient Deep learning
2016
http://arxiv.org/pdf/1606.02407v1
Title Structured Convolution Matrices Energyefficient Deep learning Summary derive relationship network representation energyefficient neuromorphic architecture block Toeplitz convolutional matrix Inspired connection develop deep convolutional network using family structured convolutional matrix achieve stateoftheart tr...
[-0.016599668189883232, 0.04337156191468239, -0.00867722649127245, 0.11679377406835556, 0.0019713821820914745, -0.0058877295814454556, 0.016658274456858635, 0.01525559090077877, -0.006829366087913513, -0.032412685453891754, -0.042291976511478424, 0.010505524463951588, -0.004430582281202078, 0.04620462656021118, 0.02834...
191
191
['Baochen Sun', 'Kate Saenko']
1607.01719v1
Deep neural networks are able to learn powerful representations from large quantities of labeled input data, however they cannot always generalize well across changes in input distributions. Domain adaptation algorithms have been proposed to compensate for the degradation in performance due to domain shift. In this pap...
Deep CORAL: Correlation Alignment for Deep Domain Adaptation
2016
http://arxiv.org/pdf/1607.01719v1
Title Deep CORAL Correlation Alignment Deep Domain Adaptation Summary Deep neural network able learn powerful representation large quantity labeled input data however cannot always generalize well across change input distribution Domain adaptation algorithm proposed compensate degradation performance due domain shift p...
[-0.010725886560976505, 0.05521048605442047, -0.019344104453921318, -0.018435906618833542, -0.012039180845022202, 0.00935688428580761, 0.04981807991862297, -0.024802017956972122, -0.026348521932959557, 0.03277937322854996, -0.07072371989488602, 0.019499894231557846, 0.031083934009075165, 0.04162105545401573, -0.0245450...
192
192
['Jun Liu', 'Amir Shahroudy', 'Dong Xu', 'Gang Wang']
1607.07043v1
3D action recognition - analysis of human actions based on 3D skeleton data - has recently become popular due to its succinctness, robustness, and view-invariant representation. Recent attempts at this problem suggested developing RNN-based learning methods to model the contextual dependency in the temporal domain. In thi...
Spatio-Temporal LSTM with Trust Gates for 3D Human Action Recognition
2016
http://arxiv.org/pdf/1607.07043v1
Title SpatioTemporal LSTM Trust Gates 3D Human Action Recognition Summary 3D action recognition analysis human action based 3D skeleton data becomes popular recently due succinctness robustness viewinvariant representation Recent attempt problem suggested develop RNNbased learning method model contextual dependency tem...
[-0.012921557761728764, 0.017357636243104935, 0.0009066946222446859, 0.07766448706388474, -0.0021652295254170895, 0.03554300218820572, -0.007129453122615814, -0.007529738824814558, 0.0005641651805490255, -0.051401954144239426, -0.04626976698637009, -0.05487972870469093, 0.04389138147234917, 0.0846305638551712, 0.040922...
193
193
['Suraj Srinivas', 'R. Venkatesh Babu']
1611.06791v1
Deep Neural Networks often require good regularizers to generalize well. Dropout is one such regularizer that is widely used among Deep Learning practitioners. Recent work has shown that Dropout can also be viewed as performing Approximate Bayesian Inference over the network parameters. In this work, we generalize this...
Generalized Dropout
2016
http://arxiv.org/pdf/1611.06791v1
Title Generalized Dropout Summary Deep Neural Networks often require good regularizers generalize well Dropout one regularizer widely used among Deep Learning practitioner Recent work shown Dropout also viewed performing Approximate Bayesian Inference network parameter work generalize notion introduce rich family regul...
[-0.016172297298908234, 0.060791682451963425, -0.006108471658080816, 0.029795153066515923, 0.00688586849719286, -0.0159979946911335, 0.0460902638733387, 0.006665560882538557, -0.02113274671137333, 0.038688767701387405, 0.014540540054440498, 0.020467763766646385, 0.005920268129557371, 0.06350425630807877, 0.017433755099...
194
194
['I. Theodorakopoulos', 'V. Pothos', 'D. Kastaniotis', 'N. Fragoulis']
1701.05221v5
A new, radical CNN design approach is presented in this paper, considering the reduction of the total computational load during inference. This is achieved by a new holistic intervention on both the CNN architecture and the training procedure, which targets parsimonious inference by learning to exploit or remove...
Parsimonious Inference on Convolutional Neural Networks: Learning and applying on-line kernel activation rules
2017
http://arxiv.org/pdf/1701.05221v5
Title Parsimonious Inference Convolutional Neural Networks Learning applying online kernel activation rule Summary new radical CNN design approach presented paper considering reduction total computational load inference achieved new holistic intervention CNN architecture training procedure target parsimonious inference...
[0.01892285794019699, 0.03176836296916008, -0.02030724287033081, 0.06722205877304077, -0.006590038072317839, -0.007452232763171196, 0.0459536537528038, 0.019313611090183258, -0.02842235006392002, -0.026905441656708717, 0.021887587383389473, 0.051471006125211716, -0.00291715981438756, 0.038893792778253555, 0.01017731800...
195
195
['Chelsea Finn', 'Pieter Abbeel', 'Sergey Levine']
1703.03400v3
We propose an algorithm for meta-learning that is model-agnostic, in the sense that it is compatible with any model trained with gradient descent and applicable to a variety of different learning problems, including classification, regression, and reinforcement learning. The goal of meta-learning is to train a model on...
Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks
2017
http://arxiv.org/pdf/1703.03400v3
Title ModelAgnostic MetaLearning Fast Adaptation Deep Networks Summary propose algorithm metalearning modelagnostic sense compatible model trained gradient descent applicable variety different learning problem including classification regression reinforcement learning goal metalearning train model variety learning task...
[-0.008136829361319542, 0.023955857381224632, -0.03234463930130005, 0.029733700677752495, 0.04428404942154884, 0.005840153433382511, 0.04772241786122322, 0.0074520595371723175, -0.05295838415622711, 0.00922408513724804, -0.034250665456056595, 0.10482901334762573, -0.027029139921069145, 0.01720396988093853, 0.0220488514...
196
196
['Asit Mishra', 'Jeffrey J Cook', 'Eriko Nurvitadhi', 'Debbie Marr']
1704.03079v1
For computer vision applications, prior works have shown the efficacy of reducing the numeric precision of model parameters (network weights) in deep neural networks but also that reducing the precision of activations hurts model accuracy much more than reducing the precision of model parameters. We study schemes to tr...
WRPN: Training and Inference using Wide Reduced-Precision Networks
2017
http://arxiv.org/pdf/1704.03079v1
Title WRPN Training Inference using Wide ReducedPrecision Networks Summary computer vision application prior work shown efficacy reducing numeric precision model parameter network weight deep neural network also reducing precision activation hurt model accuracy much reducing precision model parameter study scheme train...
[-0.0358797088265419, 0.044300444424152374, -0.024891357868909836, 0.0524381622672081, 0.0145856449380517, -0.03891358897089958, 0.03083834797143936, 0.008303012698888779, -0.06069585680961609, 0.050226014107465744, -0.016413861885666847, 0.0038845674134790897, 0.03238734230399132, -0.008598925545811653, 0.020039036870...
197
197
['David Rolnick', 'Andreas Veit', 'Serge Belongie', 'Nir Shavit']
1705.10694v3
Deep neural networks trained on large supervised datasets have led to impressive results in image classification and other tasks. However, well-annotated datasets can be time-consuming and expensive to collect, lending increased interest to larger but noisy datasets that are more easily obtained. In this paper, we show...
Deep Learning is Robust to Massive Label Noise
2017
http://arxiv.org/pdf/1705.10694v3
Title Deep Learning Robust Massive Label Noise Summary Deep neural network trained large supervised datasets led impressive result image classification task However wellannotated datasets timeconsuming expensive collect lending increased interest larger noisy datasets easily obtained paper show deep neural network capa...
[0.030910776928067207, 0.05125641077756882, -0.01859557442367077, 0.03943230211734772, 0.03508317098021507, 0.0039650918915867805, 0.05846835672855377, -0.0035400555934756994, -0.01660863310098648, 0.018252722918987274, -0.02822532132267952, 0.0233351718634367, -0.023831773549318314, 0.04866470396518707, 0.029610130935...
198
198
['Stefan Lattner', 'Maarten Grachten']
1707.01357v1
Content-invariance in mapping codes learned by GAEs is a useful feature for various relation learning tasks. In this paper we show that the content-invariance of mapping codes for images of 2D and 3D rotated objects can be substantially improved by extending the standard GAE loss (symmetric reconstruction error) with a...
Improving Content-Invariance in Gated Autoencoders for 2D and 3D Object Rotation
2017
http://arxiv.org/pdf/1707.01357v1
Title Improving ContentInvariance Gated Autoencoders 2D 3D Object Rotation Summary Contentinvariance mapping code learned GAEs useful feature various relation learning task paper show contentinvariance mapping code image 2D 3D rotated object substantially improved extending standard GAE loss symmetric reconstruction er...
[0.008513077162206173, 0.07667741179466248, 0.014725294895470142, 0.057814229279756546, -0.0029120943509042263, 0.034956950694322586, 0.01792016439139843, 0.006903417408466339, -0.054473649710416794, -0.026799369603395462, -0.033115796744823456, 0.0428788922727108, 0.03869888186454773, 0.10498959571123123, 0.0543282441...
199
199
['Jindong Wang', 'Yiqiang Chen', 'Shuji Hao', 'Xiaohui Peng', 'Lisha Hu']
1707.03502v2
Sensor-based activity recognition seeks profound high-level knowledge about human activities from multitudes of low-level sensor readings. Conventional pattern recognition approaches have made tremendous progress in recent years. However, those methods often heavily rely on heuristic hand-crafted feature extracti...
Deep Learning for Sensor-based Activity Recognition: A Survey
2017
http://arxiv.org/pdf/1707.03502v2
Title Deep Learning Sensorbased Activity Recognition Survey Summary Sensorbased activity recognition seek profound highlevel knowledge human activity multitude lowlevel sensor reading Conventional pattern recognition approach made tremendous progress past year However method often heavily rely heuristic handcrafted fea...
[-0.04772906005382538, 0.004994967021048069, -0.011839666403830051, 0.05454079434275627, 0.0062690130434930325, 0.02029799297451973, 0.038358382880687714, -0.0038154057692736387, 0.004776395857334137, -0.018325598910450935, -0.011894668452441692, 0.03594629466533661, 0.018405994400382042, 0.027837131172418594, 0.000997...