| forum_id (string) | forum_title (string) | forum_authors (list) | forum_abstract (string) | forum_keywords (list) | forum_pdf_url (string) | forum_url (string) | note_id (string) | note_type (string) | note_created (int64) | note_replyto (string) | note_readers (list) | note_signatures (list) | venue (string) | year (string) | note_text (string) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| msGKsXQXNiCBk | Learning New Facts From Knowledge Bases With Neural Tensor Networks and Semantic Word Vectors | ["Danqi Chen", "Richard Socher", "Christopher Manning", "Andrew Y. Ng"] | Knowledge bases provide applications with the benefit of easily accessible, systematic relational knowledge but often suffer in practice from their incompleteness and lack of knowledge of new entities and relations. Much work has focused on building or extending them by finding patterns in large unannotated text corpor... | ["new facts", "knowledge bases", "neural tensor networks", "semantic word vectors", "relations", "entities", "model", "database", "bases", "applications"] | https://openreview.net/pdf?id=msGKsXQXNiCBk | https://openreview.net/forum?id=msGKsXQXNiCBk | OgesTW8qZ5TWn | review | 1363419120000 | msGKsXQXNiCBk | ["everyone"] | ["Danqi Chen, Richard Socher, Christopher D. Manning, Andrew Y. Ng"] | ICLR.cc/2013/conference | 2013 | review: We thank the reviewers for their comments and agree with most of them. - We've updated our paper on arxiv, and added the important experimental comparison to the model in 'Joint Learning of Words and Meaning Representations for Open-Text Semantic Parsing' (AISTATS 2012). Experimental results show that ou... |
| msGKsXQXNiCBk | Learning New Facts From Knowledge Bases With Neural Tensor Networks and Semantic Word Vectors | ["Danqi Chen", "Richard Socher", "Christopher Manning", "Andrew Y. Ng"] | Knowledge bases provide applications with the benefit of easily accessible, systematic relational knowledge but often suffer in practice from their incompleteness and lack of knowledge of new entities and relations. Much work has focused on building or extending them by finding patterns in large unannotated text corpor... | ["new facts", "knowledge bases", "neural tensor networks", "semantic word vectors", "relations", "entities", "model", "database", "bases", "applications"] | https://openreview.net/pdf?id=msGKsXQXNiCBk | https://openreview.net/forum?id=msGKsXQXNiCBk | PnfD3BSBKbnZh | review | 1362079260000 | msGKsXQXNiCBk | ["everyone"] | ["anonymous reviewer 75b8"] | ICLR.cc/2013/conference | 2013 | title: review of Learning New Facts From Knowledge Bases With Neural Tensor Networks and Semantic Word Vectors review: - A brief summary of the paper's contributions, in the context of prior work. This paper proposes a new energy function (or scoring function) for ranking pairs of entities and their relations... |
| msGKsXQXNiCBk | Learning New Facts From Knowledge Bases With Neural Tensor Networks and Semantic Word Vectors | ["Danqi Chen", "Richard Socher", "Christopher Manning", "Andrew Y. Ng"] | Knowledge bases provide applications with the benefit of easily accessible, systematic relational knowledge but often suffer in practice from their incompleteness and lack of knowledge of new entities and relations. Much work has focused on building or extending them by finding patterns in large unannotated text corpor... | ["new facts", "knowledge bases", "neural tensor networks", "semantic word vectors", "relations", "entities", "model", "database", "bases", "applications"] | https://openreview.net/pdf?id=msGKsXQXNiCBk | https://openreview.net/forum?id=msGKsXQXNiCBk | yA-tyFEFr2A5u | review | 1362246000000 | msGKsXQXNiCBk | ["everyone"] | ["anonymous reviewer 7e51"] | ICLR.cc/2013/conference | 2013 | title: review of Learning New Facts From Knowledge Bases With Neural Tensor Networks and Semantic Word Vectors review: This paper proposes a new model for modeling data of multi-relational knowledge bases such as Wordnet or YAGO. Inspired by the work of (Bordes et al., AAAI11), they propose a neural network-base... |
| msGKsXQXNiCBk | Learning New Facts From Knowledge Bases With Neural Tensor Networks and Semantic Word Vectors | ["Danqi Chen", "Richard Socher", "Christopher Manning", "Andrew Y. Ng"] | Knowledge bases provide applications with the benefit of easily accessible, systematic relational knowledge but often suffer in practice from their incompleteness and lack of knowledge of new entities and relations. Much work has focused on building or extending them by finding patterns in large unannotated text corpor... | ["new facts", "knowledge bases", "neural tensor networks", "semantic word vectors", "relations", "entities", "model", "database", "bases", "applications"] | https://openreview.net/pdf?id=msGKsXQXNiCBk | https://openreview.net/forum?id=msGKsXQXNiCBk | 7jyp7wrwSzagb | review | 1363419120000 | msGKsXQXNiCBk | ["everyone"] | ["Danqi Chen, Richard Socher, Christopher D. Manning, Andrew Y. Ng"] | ICLR.cc/2013/conference | 2013 | review: We thank the reviewers for their comments and agree with most of them. - We've updated our paper on arxiv, and added the important experimental comparison to the model in 'Joint Learning of Words and Meaning Representations for Open-Text Semantic Parsing' (AISTATS 2012). Experimental results show that ou... |
| IpmfpAGoH2KbX | Deep learning and the renormalization group | ["Cédric Bény"] | Renormalization group methods, which analyze the way in which the effective behavior of a system depends on the scale at which it is observed, are key to modern condensed-matter theory and particle physics. The aim of this paper is to compare and contrast the ideas behind the renormalization group (RG) on the one hand ... | ["algorithm", "deep learning", "way", "effective behavior", "system", "scale", "key"] | https://openreview.net/pdf?id=IpmfpAGoH2KbX | https://openreview.net/forum?id=IpmfpAGoH2KbX | rGZJRE7IJwrK3 | review | 1392852360000 | IpmfpAGoH2KbX | ["everyone"] | ["Charles Martin"] | ICLR.cc/2013/conference | 2013 | review: It is noted that the connection between RG and multi-scale modeling has been pointed out by Candes in E. J. Candès, P. Charlton and H. Helgason. Detecting highly oscillatory signals by chirplet path pursuit. Appl. Comput. Harmon. Anal. 24 14-40. where it was noted that the multi-scale basis suggested in ... |
| IpmfpAGoH2KbX | Deep learning and the renormalization group | ["Cédric Bény"] | Renormalization group methods, which analyze the way in which the effective behavior of a system depends on the scale at which it is observed, are key to modern condensed-matter theory and particle physics. The aim of this paper is to compare and contrast the ideas behind the renormalization group (RG) on the one hand ... | ["algorithm", "deep learning", "way", "effective behavior", "system", "scale", "key"] | https://openreview.net/pdf?id=IpmfpAGoH2KbX | https://openreview.net/forum?id=IpmfpAGoH2KbX | 4Uh8Uuvz86SFd | comment | 1363212060000 | 7to37S6Q3_7Qe | ["everyone"] | ["Cédric Bény"] | ICLR.cc/2013/conference | 2013 | reply: I have submitted a replacement to the arXiv on March 13, which should be available the same day at 8pm EST/EDT as version 4. In order to address the first issue, I rewrote section 2 to make it less confusing, specifically by not trying to be overly general. I also rewrote the caption of figure 1 to make it a ... |
| IpmfpAGoH2KbX | Deep learning and the renormalization group | ["Cédric Bény"] | Renormalization group methods, which analyze the way in which the effective behavior of a system depends on the scale at which it is observed, are key to modern condensed-matter theory and particle physics. The aim of this paper is to compare and contrast the ideas behind the renormalization group (RG) on the one hand ... | ["algorithm", "deep learning", "way", "effective behavior", "system", "scale", "key"] | https://openreview.net/pdf?id=IpmfpAGoH2KbX | https://openreview.net/forum?id=IpmfpAGoH2KbX | 7to37S6Q3_7Qe | review | 1362321600000 | IpmfpAGoH2KbX | ["everyone"] | ["anonymous reviewer 441c"] | ICLR.cc/2013/conference | 2013 | title: review of Deep learning and the renormalization group review: The model tries to relate renormalization group and deep learning, specifically hierarchical Bayesian network. The primary problems are that 1) the paper is only descriptive - it does not explain models clearly and precisely, and 2) it has no numerica... |
| IpmfpAGoH2KbX | Deep learning and the renormalization group | ["Cédric Bény"] | Renormalization group methods, which analyze the way in which the effective behavior of a system depends on the scale at which it is observed, are key to modern condensed-matter theory and particle physics. The aim of this paper is to compare and contrast the ideas behind the renormalization group (RG) on the one hand ... | ["algorithm", "deep learning", "way", "effective behavior", "system", "scale", "key"] | https://openreview.net/pdf?id=IpmfpAGoH2KbX | https://openreview.net/forum?id=IpmfpAGoH2KbX | tb0cgaJXQfgX6 | review | 1363477320000 | IpmfpAGoH2KbX | ["everyone"] | ["Aaron Courville"] | ICLR.cc/2013/conference | 2013 | review: Reviewer 441c, Have you taken a look at the new version of the paper? Does it go some way to addressing your concerns? |
| IpmfpAGoH2KbX | Deep learning and the renormalization group | ["Cédric Bény"] | Renormalization group methods, which analyze the way in which the effective behavior of a system depends on the scale at which it is observed, are key to modern condensed-matter theory and particle physics. The aim of this paper is to compare and contrast the ideas behind the renormalization group (RG) on the one hand ... | ["algorithm", "deep learning", "way", "effective behavior", "system", "scale", "key"] | https://openreview.net/pdf?id=IpmfpAGoH2KbX | https://openreview.net/forum?id=IpmfpAGoH2KbX | 7Kq-KFuY-y7S_ | review | 1365121080000 | IpmfpAGoH2KbX | ["everyone"] | ["Yann LeCun"] | ICLR.cc/2013/conference | 2013 | review: It seems to me like there could be an interesting connection between approximate inference in graphical models and the renormalization methods. There is in fact a long history of interactions between condensed matter physics and graphical models. For example, it is well known that the loopy belief propagati... |
| IpmfpAGoH2KbX | Deep learning and the renormalization group | ["Cédric Bény"] | Renormalization group methods, which analyze the way in which the effective behavior of a system depends on the scale at which it is observed, are key to modern condensed-matter theory and particle physics. The aim of this paper is to compare and contrast the ideas behind the renormalization group (RG) on the one hand ... | ["algorithm", "deep learning", "way", "effective behavior", "system", "scale", "key"] | https://openreview.net/pdf?id=IpmfpAGoH2KbX | https://openreview.net/forum?id=IpmfpAGoH2KbX | Qj1vSox-vpQ-U | review | 1362219360000 | IpmfpAGoH2KbX | ["everyone"] | ["anonymous reviewer acf4"] | ICLR.cc/2013/conference | 2013 | title: review of Deep learning and the renormalization group review: This paper discusses deep learning from the perspective of renormalization groups in theoretical physics. Both concepts are naturally related; however, this relation has not been formalized adequately thus far and advancing this is a novelty of the p... |
| SqNvxV9FQoSk2 | Switched linear encoding with rectified linear autoencoders | ["Leif Johnson", "Craig Corcoran"] | Several recent results in machine learning have established formal connections between autoencoders---artificial neural network models that attempt to reproduce their inputs---and other coding models like sparse coding and K-means. This paper explores in depth an autoencoder model that is constructed using rectified li... | ["linear", "models", "rectified linear autoencoders", "machine learning", "formal connections", "autoencoders", "neural network models", "inputs", "sparse coding"] | https://openreview.net/pdf?id=SqNvxV9FQoSk2 | https://openreview.net/forum?id=SqNvxV9FQoSk2 | ff2dqJ6VEpR8u | review | 1362252900000 | SqNvxV9FQoSk2 | ["everyone"] | ["anonymous reviewer 5a78"] | ICLR.cc/2013/conference | 2013 | title: review of Switched linear encoding with rectified linear autoencoders review: In the deep learning community there has been a recent trend in moving away from the traditional sigmoid/tanh activation function to inject non-linearity into the model. One activation function that has been shown to work well in... |
| SqNvxV9FQoSk2 | Switched linear encoding with rectified linear autoencoders | ["Leif Johnson", "Craig Corcoran"] | Several recent results in machine learning have established formal connections between autoencoders---artificial neural network models that attempt to reproduce their inputs---and other coding models like sparse coding and K-means. This paper explores in depth an autoencoder model that is constructed using rectified li... | ["linear", "models", "rectified linear autoencoders", "machine learning", "formal connections", "autoencoders", "neural network models", "inputs", "sparse coding"] | https://openreview.net/pdf?id=SqNvxV9FQoSk2 | https://openreview.net/forum?id=SqNvxV9FQoSk2 | kH1XHWcuGjDuU | review | 1361946600000 | SqNvxV9FQoSk2 | ["everyone"] | ["anonymous reviewer 9c3f"] | ICLR.cc/2013/conference | 2013 | title: review of Switched linear encoding with rectified linear autoencoders review: This paper analyzes properties of rectified linear autoencoder networks. In particular, the paper shows that rectified linear networks are similar to linear networks (ICA). The major difference is the nolinearity ('switching') t... |
| SqNvxV9FQoSk2 | Switched linear encoding with rectified linear autoencoders | ["Leif Johnson", "Craig Corcoran"] | Several recent results in machine learning have established formal connections between autoencoders---artificial neural network models that attempt to reproduce their inputs---and other coding models like sparse coding and K-means. This paper explores in depth an autoencoder model that is constructed using rectified li... | ["linear", "models", "rectified linear autoencoders", "machine learning", "formal connections", "autoencoders", "neural network models", "inputs", "sparse coding"] | https://openreview.net/pdf?id=SqNvxV9FQoSk2 | https://openreview.net/forum?id=SqNvxV9FQoSk2 | oozAQe0eAnQ1w | review | 1362360840000 | SqNvxV9FQoSk2 | ["everyone"] | ["anonymous reviewer ab3b"] | ICLR.cc/2013/conference | 2013 | title: review of Switched linear encoding with rectified linear autoencoders review: The paper draws links between autoencoders with tied weights and rectified linear units (similar to Glorot et al AISTATS 2011), the triangle k-means and soft-thresholding of Coates et al. (AISTATS 2011 and ICML 2011), and the linear-au... |
| DD2gbWiOgJDmY | Why Size Matters: Feature Coding as Nystrom Sampling | ["Oriol Vinyals", "Yangqing Jia", "Trevor Darrell"] | Recently, the computer vision and machine learning community has been in favor of feature extraction pipelines that rely on a coding step followed by a linear classifier, due to their overall simplicity, well understood properties of linear classifiers, and their computational efficiency. In this paper we propose a nov... | ["nystrom", "data points", "size matters", "feature", "approximation", "bounds", "function", "dictionary size", "computer vision", "machine learning community"] | https://openreview.net/pdf?id=DD2gbWiOgJDmY | https://openreview.net/forum?id=DD2gbWiOgJDmY | EW9REhyYQcESw | review | 1362202140000 | DD2gbWiOgJDmY | ["everyone"] | ["anonymous reviewer 1024"] | ICLR.cc/2013/conference | 2013 | title: review of Why Size Matters: Feature Coding as Nystrom Sampling review: The authors provide an analysis of the accuracy bounds of feature coding + linear classifier pipelines. They predict an approximate accuracy bound given the dictionary size and correctly estimate the phenomenon observed in the literature wher... |
| DD2gbWiOgJDmY | Why Size Matters: Feature Coding as Nystrom Sampling | ["Oriol Vinyals", "Yangqing Jia", "Trevor Darrell"] | Recently, the computer vision and machine learning community has been in favor of feature extraction pipelines that rely on a coding step followed by a linear classifier, due to their overall simplicity, well understood properties of linear classifiers, and their computational efficiency. In this paper we propose a nov... | ["nystrom", "data points", "size matters", "feature", "approximation", "bounds", "function", "dictionary size", "computer vision", "machine learning community"] | https://openreview.net/pdf?id=DD2gbWiOgJDmY | https://openreview.net/forum?id=DD2gbWiOgJDmY | oxSZoe2BGRoB6 | review | 1362196320000 | DD2gbWiOgJDmY | ["everyone"] | ["anonymous reviewer 998c"] | ICLR.cc/2013/conference | 2013 | title: review of Why Size Matters: Feature Coding as Nystrom Sampling review: This paper presents a theoretical analysis and empirical validation of a novel view of feature extraction systems based on the idea of Nystrom sampling for kernel methods. The main idea is to analyze the kernel matrix for a feature space def... |
| DD2gbWiOgJDmY | Why Size Matters: Feature Coding as Nystrom Sampling | ["Oriol Vinyals", "Yangqing Jia", "Trevor Darrell"] | Recently, the computer vision and machine learning community has been in favor of feature extraction pipelines that rely on a coding step followed by a linear classifier, due to their overall simplicity, well understood properties of linear classifiers, and their computational efficiency. In this paper we propose a nov... | ["nystrom", "data points", "size matters", "feature", "approximation", "bounds", "function", "dictionary size", "computer vision", "machine learning community"] | https://openreview.net/pdf?id=DD2gbWiOgJDmY | https://openreview.net/forum?id=DD2gbWiOgJDmY | 8sJwMe5ZwE8uz | review | 1363264440000 | DD2gbWiOgJDmY | ["everyone"] | ["Oriol Vinyals, Yangqing Jia, Trevor Darrell"] | ICLR.cc/2013/conference | 2013 | review: We agree with the reviewer regarding the existence of better dictionary learning methods, and note that many of these are also related to corresponding advanced Nystrom sampling methods, such as [Zhang et al. Improved Nystrom low-rank approximation and error analysis. ICML 08]. These methods could improve perfo... |
| i87JIQTAnB8AQ | The Diagonalized Newton Algorithm for Nonnegative Matrix Factorization | ["Hugo Van hamme"] | Non-negative matrix factorization (NMF) has become a popular machine learning approach to many problems in text mining, speech and image processing, bio-informatics and seismic data analysis to name a few. In NMF, a matrix of non-negative data is approximated by the low-rank product of two matrices with non-negative en... | ["diagonalized newton algorithm", "nmf", "nonnegative matrix factorization", "data", "convergence", "matrix factorization", "popular machine", "many problems", "text mining"] | https://openreview.net/pdf?id=i87JIQTAnB8AQ | https://openreview.net/forum?id=i87JIQTAnB8AQ | RzSh7m1KhlzKg | review | 1363574460000 | i87JIQTAnB8AQ | ["everyone"] | ["Hugo Van hamme"] | ICLR.cc/2013/conference | 2013 | review: I would like to thank the reviewers for their investment of time and effort to formulate their valued comments. The paper was updated according to your comments. Below I address your concerns: A common remark is the lack of comparison with state-of-the-art NMF solvers for Kullback-Leibler divergence (KLD). I... |
| i87JIQTAnB8AQ | The Diagonalized Newton Algorithm for Nonnegative Matrix Factorization | ["Hugo Van hamme"] | Non-negative matrix factorization (NMF) has become a popular machine learning approach to many problems in text mining, speech and image processing, bio-informatics and seismic data analysis to name a few. In NMF, a matrix of non-negative data is approximated by the low-rank product of two matrices with non-negative en... | ["diagonalized newton algorithm", "nmf", "nonnegative matrix factorization", "data", "convergence", "matrix factorization", "popular machine", "many problems", "text mining"] | https://openreview.net/pdf?id=i87JIQTAnB8AQ | https://openreview.net/forum?id=i87JIQTAnB8AQ | FFkZF49pZx-pS | review | 1362210360000 | i87JIQTAnB8AQ | ["everyone"] | ["anonymous reviewer 4322"] | ICLR.cc/2013/conference | 2013 | title: review of The Diagonalized Newton Algorithm for Nonnegative Matrix Factorization review: Summary: The paper presents a new algorithm for solving L1 regularized NMF problems in which the fitting term is the Kullback-Leiber divergence. The strategy combines the classic multiplicative updates with a diagonal app... |
| i87JIQTAnB8AQ | The Diagonalized Newton Algorithm for Nonnegative Matrix Factorization | ["Hugo Van hamme"] | Non-negative matrix factorization (NMF) has become a popular machine learning approach to many problems in text mining, speech and image processing, bio-informatics and seismic data analysis to name a few. In NMF, a matrix of non-negative data is approximated by the low-rank product of two matrices with non-negative en... | ["diagonalized newton algorithm", "nmf", "nonnegative matrix factorization", "data", "convergence", "matrix factorization", "popular machine", "many problems", "text mining"] | https://openreview.net/pdf?id=i87JIQTAnB8AQ | https://openreview.net/forum?id=i87JIQTAnB8AQ | MqwZf2jPZCJ-n | review | 1363744920000 | i87JIQTAnB8AQ | ["everyone"] | ["Hugo Van hamme"] | ICLR.cc/2013/conference | 2013 | review: First: sorry for the multiple postings. Browser acting weird. Can't remove them ... Update: I was able to get the sbcd code to work. Two mods required (refer to Algorithm 1 in the Li, Lebanon & Park paper - ref [18] in v2 paper on arxiv): 1) you have to be careful with initialization. If the estimates for W... |
| i87JIQTAnB8AQ | The Diagonalized Newton Algorithm for Nonnegative Matrix Factorization | ["Hugo Van hamme"] | Non-negative matrix factorization (NMF) has become a popular machine learning approach to many problems in text mining, speech and image processing, bio-informatics and seismic data analysis to name a few. In NMF, a matrix of non-negative data is approximated by the low-rank product of two matrices with non-negative en... | ["diagonalized newton algorithm", "nmf", "nonnegative matrix factorization", "data", "convergence", "matrix factorization", "popular machine", "many problems", "text mining"] | https://openreview.net/pdf?id=i87JIQTAnB8AQ | https://openreview.net/forum?id=i87JIQTAnB8AQ | oo1KoBhzu3CGs | review | 1362192540000 | i87JIQTAnB8AQ | ["everyone"] | ["anonymous reviewer 57f3"] | ICLR.cc/2013/conference | 2013 | title: review of The Diagonalized Newton Algorithm for Nonnegative Matrix Factorization review: This paper develops a new iterative optimization algorithm for performing non-negative matrix factorization, assuming a standard 'KL-divergence' objective function. The method proposed combines the use of a traditional upda... |
| i87JIQTAnB8AQ | The Diagonalized Newton Algorithm for Nonnegative Matrix Factorization | ["Hugo Van hamme"] | Non-negative matrix factorization (NMF) has become a popular machine learning approach to many problems in text mining, speech and image processing, bio-informatics and seismic data analysis to name a few. In NMF, a matrix of non-negative data is approximated by the low-rank product of two matrices with non-negative en... | ["diagonalized newton algorithm", "nmf", "nonnegative matrix factorization", "data", "convergence", "matrix factorization", "popular machine", "many problems", "text mining"] | https://openreview.net/pdf?id=i87JIQTAnB8AQ | https://openreview.net/forum?id=i87JIQTAnB8AQ | aplzZcXNokptc | review | 1363615980000 | i87JIQTAnB8AQ | ["everyone"] | ["Hugo Van hamme"] | ICLR.cc/2013/conference | 2013 | review: About the comparison with Cyclic Coordinate Descent (as described in C.-J. Hsieh and I. S. Dhillon, “Fast Coordinate Descent Methods with Variable Selection for Non-negative Matrix Factorization,” in proceedings of the 17th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining (KDD), San Dieg... |
| i87JIQTAnB8AQ | The Diagonalized Newton Algorithm for Nonnegative Matrix Factorization | ["Hugo Van hamme"] | Non-negative matrix factorization (NMF) has become a popular machine learning approach to many problems in text mining, speech and image processing, bio-informatics and seismic data analysis to name a few. In NMF, a matrix of non-negative data is approximated by the low-rank product of two matrices with non-negative en... | ["diagonalized newton algorithm", "nmf", "nonnegative matrix factorization", "data", "convergence", "matrix factorization", "popular machine", "many problems", "text mining"] | https://openreview.net/pdf?id=i87JIQTAnB8AQ | https://openreview.net/forum?id=i87JIQTAnB8AQ | EW5mE9upmnWp1 | review | 1362382860000 | i87JIQTAnB8AQ | ["everyone"] | ["anonymous reviewer 482c"] | ICLR.cc/2013/conference | 2013 | title: review of The Diagonalized Newton Algorithm for Nonnegative Matrix Factorization review: Overview: This paper proposes an element-wise (diagonal Hessian) Newton method to speed up convergence of the multiplicative update algorithm (MU) for NMF problems. Monotonic progress is guaranteed by an element-wise fall... |
| qEV_E7oCrKqWT | Zero-Shot Learning Through Cross-Modal Transfer | ["Richard Socher", "Milind Ganjoo", "Hamsa Sridhar", "Osbert Bastani", "Christopher Manning", "Andrew Y. Ng"] | This work introduces a model that can recognize objects in images even if no training data is available for the objects. The only necessary knowledge about the unseen categories comes from unsupervised large text corpora. In our zero-shot framework distributional information in language can be seen as spanning a semant... | ["model", "transfer", "objects", "images", "unseen classes", "work", "training data", "available", "necessary knowledge", "unseen categories"] | https://openreview.net/pdf?id=qEV_E7oCrKqWT | https://openreview.net/forum?id=qEV_E7oCrKqWT | UgMKgxnHDugHr | review | 1362080640000 | qEV_E7oCrKqWT | ["everyone"] | ["anonymous reviewer cfb0"] | ICLR.cc/2013/conference | 2013 | title: review of Zero-Shot Learning Through Cross-Modal Transfer review: *A brief summary of the paper's contributions, in the context of prior work* This paper introduces a zero-shot learning approach to image classification. The model first tries to detect whether an image contains an object from a so-far unseen cat... |
| qEV_E7oCrKqWT | Zero-Shot Learning Through Cross-Modal Transfer | ["Richard Socher", "Milind Ganjoo", "Hamsa Sridhar", "Osbert Bastani", "Christopher Manning", "Andrew Y. Ng"] | This work introduces a model that can recognize objects in images even if no training data is available for the objects. The only necessary knowledge about the unseen categories comes from unsupervised large text corpora. In our zero-shot framework distributional information in language can be seen as spanning a semant... | ["model", "transfer", "objects", "images", "unseen classes", "work", "training data", "available", "necessary knowledge", "unseen categories"] | https://openreview.net/pdf?id=qEV_E7oCrKqWT | https://openreview.net/forum?id=qEV_E7oCrKqWT | 88s34zXWw20My | review | 1362001800000 | qEV_E7oCrKqWT | ["everyone"] | ["anonymous reviewer 310e"] | ICLR.cc/2013/conference | 2013 | title: review of Zero-Shot Learning Through Cross-Modal Transfer review: summary: the paper presents a framework to learn to classify images that can come either from known or unknown classes. This is done by first mapping both images and classes into a joint embedding space. Furthermore, the probability of an image... |
| qEV_E7oCrKqWT | Zero-Shot Learning Through Cross-Modal Transfer | ["Richard Socher", "Milind Ganjoo", "Hamsa Sridhar", "Osbert Bastani", "Christopher Manning", "Andrew Y. Ng"] | This work introduces a model that can recognize objects in images even if no training data is available for the objects. The only necessary knowledge about the unseen categories comes from unsupervised large text corpora. In our zero-shot framework distributional information in language can be seen as spanning a semant... | ["model", "transfer", "objects", "images", "unseen classes", "work", "training data", "available", "necessary knowledge", "unseen categories"] | https://openreview.net/pdf?id=qEV_E7oCrKqWT | https://openreview.net/forum?id=qEV_E7oCrKqWT | ddIxYp60xFd0m | review | 1363754820000 | qEV_E7oCrKqWT | ["everyone"] | ["Richard Socher"] | ICLR.cc/2013/conference | 2013 | review: We thank the reviewers for their feedback. I have not seen references to similarity learning, which can be used to say if two images are of the same class. These can obviously be used to determine if an image is of a known class or not, without having seen any image of the class. - Thanks for the reference... |
| qEV_E7oCrKqWT | Zero-Shot Learning Through Cross-Modal Transfer | ["Richard Socher", "Milind Ganjoo", "Hamsa Sridhar", "Osbert Bastani", "Christopher Manning", "Andrew Y. Ng"] | This work introduces a model that can recognize objects in images even if no training data is available for the objects. The only necessary knowledge about the unseen categories comes from unsupervised large text corpora. In our zero-shot framework distributional information in language can be seen as spanning a semant... | ["model", "transfer", "objects", "images", "unseen classes", "work", "training data", "available", "necessary knowledge", "unseen categories"] | https://openreview.net/pdf?id=qEV_E7oCrKqWT | https://openreview.net/forum?id=qEV_E7oCrKqWT | SSiPd5Rr9bdXm | review | 1363754760000 | qEV_E7oCrKqWT | ["everyone"] | ["Richard Socher"] | ICLR.cc/2013/conference | 2013 | review: We thank the reviewers for their feedback. I have not seen references to similarity learning, which can be used to say if two images are of the same class. These can obviously be used to determine if an image is of a known class or not, without having seen any image of the class. - Thanks for the reference... |
| ZhGJ9KQlXi9jk | Complexity of Representation and Inference in Compositional Models with Part Sharing | ["Alan Yuille", "Roozbeh Mottaghi"] | This paper describes serial and parallel compositional models of multiple objects with part sharing. Objects are built by part-subpart compositions and expressed in terms of a hierarchical dictionary of object parts. These parts are represented on lattices of decreasing sizes which yield an executive summary descriptio... | ["inference", "complexity", "part", "representation", "compositional models", "objects", "terms", "serial computers", "parallel computers", "level"] | https://openreview.net/pdf?id=ZhGJ9KQlXi9jk | https://openreview.net/forum?id=ZhGJ9KQlXi9jk | eG1mGYviVwE-r | comment | 1363730760000 | Av10rQ9sBlhsf | ["everyone"] | ["Alan L. Yuille, Roozbeh Mottaghi"] | ICLR.cc/2013/conference | 2013 | reply: Okay, thanks. We understand your viewpoint. |
| ZhGJ9KQlXi9jk | Complexity of Representation and Inference in Compositional Models with Part Sharing | ["Alan Yuille", "Roozbeh Mottaghi"] | This paper describes serial and parallel compositional models of multiple objects with part sharing. Objects are built by part-subpart compositions and expressed in terms of a hierarchical dictionary of object parts. These parts are represented on lattices of decreasing sizes which yield an executive summary descriptio... | ["inference", "complexity", "part", "representation", "compositional models", "objects", "terms", "serial computers", "parallel computers", "level"] | https://openreview.net/pdf?id=ZhGJ9KQlXi9jk | https://openreview.net/forum?id=ZhGJ9KQlXi9jk | EHF-pZ3qwbnAT | review | 1362609900000 | ZhGJ9KQlXi9jk | ["everyone"] | ["anonymous reviewer a9e8"] | ICLR.cc/2013/conference | 2013 | title: review of Complexity of Representation and Inference in Compositional Models with Part Sharing review: This paper explores how inference can be done in a part-sharing model and the computational cost of doing so. It relies on 'executive summaries' where each layer only holds approximate information about th... |
ZhGJ9KQlXi9jk | Complexity of Representation and Inference in Compositional Models with
Part Sharing | [
"Alan Yuille",
"Roozbeh Mottaghi"
] | This paper describes serial and parallel compositional models of multiple objects with part sharing. Objects are built by part-subpart compositions and expressed in terms of a hierarchical dictionary of object parts. These parts are represented on lattices of decreasing sizes which yield an executive summary descriptio... | [
"inference",
"complexity",
"part",
"representation",
"compositional models",
"objects",
"terms",
"serial computers",
"parallel computers",
"level"
] | https://openreview.net/pdf?id=ZhGJ9KQlXi9jk | https://openreview.net/forum?id=ZhGJ9KQlXi9jk | sPw_squDz1sCV | review | 1,363,536,060,000 | ZhGJ9KQlXi9jk | [
"everyone"
] | [
"Aaron Courville"
] | ICLR.cc/2013/conference | 2013 | review: Reviewer c1e8,
Please read the authors' responses to your review. Do they change your evaluation of the paper? |
ZhGJ9KQlXi9jk | Complexity of Representation and Inference in Compositional Models with
Part Sharing | [
"Alan Yuille",
"Roozbeh Mottaghi"
] | This paper describes serial and parallel compositional models of multiple objects with part sharing. Objects are built by part-subpart compositions and expressed in terms of a hierarchical dictionary of object parts. These parts are represented on lattices of decreasing sizes which yield an executive summary descriptio... | [
"inference",
"complexity",
"part",
"representation",
"compositional models",
"objects",
"terms",
"serial computers",
"parallel computers",
"level"
] | https://openreview.net/pdf?id=ZhGJ9KQlXi9jk | https://openreview.net/forum?id=ZhGJ9KQlXi9jk | Rny5iXEwhGnYN | comment | 1,362,095,760,000 | p7BE8U1NHl8Tr | [
"everyone"
] | [
"Alan L. Yuille, Roozbeh Mottaghi"
] | ICLR.cc/2013/conference | 2013 | reply: The unsupervised learning will also appear at ICLR. So we didn't describe it in this paper and concentrated instead on the advantages of compositional models for search after the learning has been done.
The reviewer says that this result is not very novel and mentions analogies to complexity gain of large con... |
ZhGJ9KQlXi9jk | Complexity of Representation and Inference in Compositional Models with
Part Sharing | [
"Alan Yuille",
"Roozbeh Mottaghi"
] | This paper describes serial and parallel compositional models of multiple objects with part sharing. Objects are built by part-subpart compositions and expressed in terms of a hierarchical dictionary of object parts. These parts are represented on lattices of decreasing sizes which yield an executive summary descriptio... | [
"inference",
"complexity",
"part",
"representation",
"compositional models",
"objects",
"terms",
"serial computers",
"parallel computers",
"level"
] | https://openreview.net/pdf?id=ZhGJ9KQlXi9jk | https://openreview.net/forum?id=ZhGJ9KQlXi9jk | O3uWBm_J8IOlG | comment | 1,363,731,300,000 | EHF-pZ3qwbnAT | [
"everyone"
] | [
"Alan L. Yuille, Roozbeh Mottaghi"
] | ICLR.cc/2013/conference | 2013 | reply: Thanks for your comments. The paper is indeed conjectural, which is why we are submitting it to this new type of conference. But we have some proof of concept from some of our earlier work -- and we are working on developing real-world models using these types of ideas. |
ZhGJ9KQlXi9jk | Complexity of Representation and Inference in Compositional Models with
Part Sharing | [
"Alan Yuille",
"Roozbeh Mottaghi"
] | This paper describes serial and parallel compositional models of multiple objects with part sharing. Objects are built by part-subpart compositions and expressed in terms of a hierarchical dictionary of object parts. These parts are represented on lattices of decreasing sizes which yield an executive summary descriptio... | [
"inference",
"complexity",
"part",
"representation",
"compositional models",
"objects",
"terms",
"serial computers",
"parallel computers",
"level"
] | https://openreview.net/pdf?id=ZhGJ9KQlXi9jk | https://openreview.net/forum?id=ZhGJ9KQlXi9jk | Av10rQ9sBlhsf | comment | 1,363,643,940,000 | Rny5iXEwhGnYN | [
"everyone"
] | [
"anonymous reviewer c1e8"
] | ICLR.cc/2013/conference | 2013 | reply: Sorry: I should have written 'although I do not see it as very surprising' instead of 'novel'.
The analogy with convolutional networks is that quantities computed by low-level nodes can be shared by several high level nodes. This is trivial in the case of conv. nets, and not trivial in your case because you h... |
ZhGJ9KQlXi9jk | Complexity of Representation and Inference in Compositional Models with
Part Sharing | [
"Alan Yuille",
"Roozbeh Mottaghi"
] | This paper describes serial and parallel compositional models of multiple objects with part sharing. Objects are built by part-subpart compositions and expressed in terms of a hierarchical dictionary of object parts. These parts are represented on lattices of decreasing sizes which yield an executive summary descriptio... | [
"inference",
"complexity",
"part",
"representation",
"compositional models",
"objects",
"terms",
"serial computers",
"parallel computers",
"level"
] | https://openreview.net/pdf?id=ZhGJ9KQlXi9jk | https://openreview.net/forum?id=ZhGJ9KQlXi9jk | oCzZPts6ZYo6d | review | 1,362,211,680,000 | ZhGJ9KQlXi9jk | [
"everyone"
] | [
"anonymous reviewer 915e"
] | ICLR.cc/2013/conference | 2013 | title: review of Complexity of Representation and Inference in Compositional Models with
Part Sharing
review: This paper presents a complexity analysis of certain inference algorithms for compositional models of images based on part sharing.
The intuition behind these models is that objects are composed of parts... |
ZhGJ9KQlXi9jk | Complexity of Representation and Inference in Compositional Models with
Part Sharing | [
"Alan Yuille",
"Roozbeh Mottaghi"
] | This paper describes serial and parallel compositional models of multiple objects with part sharing. Objects are built by part-subpart compositions and expressed in terms of a hierarchical dictionary of object parts. These parts are represented on lattices of decreasing sizes which yield an executive summary descriptio... | [
"inference",
"complexity",
"part",
"representation",
"compositional models",
"objects",
"terms",
"serial computers",
"parallel computers",
"level"
] | https://openreview.net/pdf?id=ZhGJ9KQlXi9jk | https://openreview.net/forum?id=ZhGJ9KQlXi9jk | p7BE8U1NHl8Tr | review | 1,361,997,540,000 | ZhGJ9KQlXi9jk | [
"everyone"
] | [
"anonymous reviewer c1e8"
] | ICLR.cc/2013/conference | 2013 | title: review of Complexity of Representation and Inference in Compositional Models with
Part Sharing
review: The paper describes compositional object models that take the form of hierarchical generative models. Both object and part models provide (1) a set of part models, and (2) a generative model essentially... |
ZhGJ9KQlXi9jk | Complexity of Representation and Inference in Compositional Models with
Part Sharing | [
"Alan Yuille",
"Roozbeh Mottaghi"
] | This paper describes serial and parallel compositional models of multiple objects with part sharing. Objects are built by part-subpart compositions and expressed in terms of a hierarchical dictionary of object parts. These parts are represented on lattices of decreasing sizes which yield an executive summary descriptio... | [
"inference",
"complexity",
"part",
"representation",
"compositional models",
"objects",
"terms",
"serial computers",
"parallel computers",
"level"
] | https://openreview.net/pdf?id=ZhGJ9KQlXi9jk | https://openreview.net/forum?id=ZhGJ9KQlXi9jk | zV1YApahdwAIu | comment | 1,362,352,080,000 | oCzZPts6ZYo6d | [
"everyone"
] | [
"Alan L. Yuille, Roozbeh Mottaghi"
] | ICLR.cc/2013/conference | 2013 | reply: We hadn't thought of renormalization or image compression. But renormalization does deal with scale (I think B. Gidas had some papers on this in the 90's). There probably is a relation to image compression which we should explore. |
ttnAE7vaATtaK | Indoor Semantic Segmentation using depth information | [
"Camille Couprie",
"Clement Farabet",
"Laurent Najman",
"Yann LeCun"
] | This work addresses multi-class segmentation of indoor scenes with RGB-D inputs. While this area of research has gained much attention recently, most works still rely on hand-crafted features. In contrast, we apply a multiscale convolutional network to learn features directly from the images and the depth information. ... | [
"depth information",
"indoor scenes",
"features",
"indoor semantic segmentation",
"work",
"segmentation",
"inputs",
"area",
"research"
] | https://openreview.net/pdf?id=ttnAE7vaATtaK | https://openreview.net/forum?id=ttnAE7vaATtaK | qO9gWZZ1gfqhl | review | 1,362,163,380,000 | ttnAE7vaATtaK | [
"everyone"
] | [
"anonymous reviewer 777f"
] | ICLR.cc/2013/conference | 2013 | title: review of Indoor Semantic Segmentation using depth information
review: Segmentation with multi-scale max pooling CNN, applied to indoor vision, using depth information. Interesting paper! Fine results.
Question: how does that compare to multi-scale max pooling CNN for a previous award-winning application, nam... |
ttnAE7vaATtaK | Indoor Semantic Segmentation using depth information | [
"Camille Couprie",
"Clement Farabet",
"Laurent Najman",
"Yann LeCun"
] | This work addresses multi-class segmentation of indoor scenes with RGB-D inputs. While this area of research has gained much attention recently, most works still rely on hand-crafted features. In contrast, we apply a multiscale convolutional network to learn features directly from the images and the depth information. ... | [
"depth information",
"indoor scenes",
"features",
"indoor semantic segmentation",
"work",
"segmentation",
"inputs",
"area",
"research"
] | https://openreview.net/pdf?id=ttnAE7vaATtaK | https://openreview.net/forum?id=ttnAE7vaATtaK | tG4Zt9xaZ8G5D | comment | 1,363,298,100,000 | Ub0AUfEOKkRO1 | [
"everyone"
] | [
"Camille Couprie"
] | ICLR.cc/2013/conference | 2013 | reply: Thank you for your review and helpful comments. We computed and added error bars as suggested in Table 1. However, computing standard deviation for the individual means per class of objects does not apply here: the per class accuracies are not computed image per image. Each number corresponds to a ratio of the ... |
ttnAE7vaATtaK | Indoor Semantic Segmentation using depth information | [
"Camille Couprie",
"Clement Farabet",
"Laurent Najman",
"Yann LeCun"
] | This work addresses multi-class segmentation of indoor scenes with RGB-D inputs. While this area of research has gained much attention recently, most works still rely on hand-crafted features. In contrast, we apply a multiscale convolutional network to learn features directly from the images and the depth information. ... | [
"depth information",
"indoor scenes",
"features",
"indoor semantic segmentation",
"work",
"segmentation",
"inputs",
"area",
"research"
] | https://openreview.net/pdf?id=ttnAE7vaATtaK | https://openreview.net/forum?id=ttnAE7vaATtaK | OOB_F66xrPKGA | comment | 1,363,297,980,000 | 2-VeRGGdvD-58 | [
"everyone"
] | [
"Camille Couprie"
] | ICLR.cc/2013/conference | 2013 | reply: Thank you for your review and helpful comments.
The missing values in the depth acquisition were pre-processed using inpainting code available online on Nathan Silberman's web page. We added the reference to the paper.
In the paper, we made the observation that the classes for which depth fails to outperform ... |
ttnAE7vaATtaK | Indoor Semantic Segmentation using depth information | [
"Camille Couprie",
"Clement Farabet",
"Laurent Najman",
"Yann LeCun"
] | This work addresses multi-class segmentation of indoor scenes with RGB-D inputs. While this area of research has gained much attention recently, most works still rely on hand-crafted features. In contrast, we apply a multiscale convolutional network to learn features directly from the images and the depth information. ... | [
"depth information",
"indoor scenes",
"features",
"indoor semantic segmentation",
"work",
"segmentation",
"inputs",
"area",
"research"
] | https://openreview.net/pdf?id=ttnAE7vaATtaK | https://openreview.net/forum?id=ttnAE7vaATtaK | Ub0AUfEOKkRO1 | review | 1,362,368,040,000 | ttnAE7vaATtaK | [
"everyone"
] | [
"anonymous reviewer 5193"
] | ICLR.cc/2013/conference | 2013 | title: review of Indoor Semantic Segmentation using depth information
review: This work builds on recent object-segmentation work by Farabet et al., by augmenting the pixel-processing pathways with ones that process a depth map from a Kinect RGBD camera. This work seems to me a well-motivated and natural extension no... |
ttnAE7vaATtaK | Indoor Semantic Segmentation using depth information | [
"Camille Couprie",
"Clement Farabet",
"Laurent Najman",
"Yann LeCun"
] | This work addresses multi-class segmentation of indoor scenes with RGB-D inputs. While this area of research has gained much attention recently, most works still rely on hand-crafted features. In contrast, we apply a multiscale convolutional network to learn features directly from the images and the depth information. ... | [
"depth information",
"indoor scenes",
"features",
"indoor semantic segmentation",
"work",
"segmentation",
"inputs",
"area",
"research"
] | https://openreview.net/pdf?id=ttnAE7vaATtaK | https://openreview.net/forum?id=ttnAE7vaATtaK | VVbCVyTLqczWn | comment | 1,363,297,440,000 | qO9gWZZ1gfqhl | [
"everyone"
] | [
"Camille Couprie"
] | ICLR.cc/2013/conference | 2013 | reply: Thank you for your review and for pointing out the paper of Ciresan et al., which we added to our list of references. Similarly to us, they apply the idea of using a kind of multi-scale network. However, Ciresan's approach to foveation differs from ours: where we use a multiscale pyramid to provide a foveated input t... |
ttnAE7vaATtaK | Indoor Semantic Segmentation using depth information | [
"Camille Couprie",
"Clement Farabet",
"Laurent Najman",
"Yann LeCun"
] | This work addresses multi-class segmentation of indoor scenes with RGB-D inputs. While this area of research has gained much attention recently, most works still rely on hand-crafted features. In contrast, we apply a multiscale convolutional network to learn features directly from the images and the depth information. ... | [
"depth information",
"indoor scenes",
"features",
"indoor semantic segmentation",
"work",
"segmentation",
"inputs",
"area",
"research"
] | https://openreview.net/pdf?id=ttnAE7vaATtaK | https://openreview.net/forum?id=ttnAE7vaATtaK | 2-VeRGGdvD-58 | review | 1,362,213,660,000 | ttnAE7vaATtaK | [
"everyone"
] | [
"anonymous reviewer 03ba"
] | ICLR.cc/2013/conference | 2013 | title: review of Indoor Semantic Segmentation using depth information
review: This work applies convolutional neural networks to the task of RGB-D indoor scene segmentation. The authors previously evaluated the same multi-scale conv net architecture on the data using only RGB information; this work demonstrates that fo... |
OpvgONa-3WODz | Metric-Free Natural Gradient for Joint-Training of Boltzmann Machines | [
"Guillaume Desjardins",
"Razvan Pascanu",
"Aaron Courville",
"Yoshua Bengio"
] | This paper introduces the Metric-Free Natural Gradient (MFNG) algorithm for training Boltzmann Machines. Similar in spirit to the Hessian-Free method of Martens [8], our algorithm belongs to the family of truncated Newton methods and exploits an efficient matrix-vector product to avoid explicitely storing the natural g... | [
"natural gradient",
"boltzmann machines",
"mfng",
"algorithm",
"similar",
"spirit",
"martens",
"algorithm belongs",
"family",
"truncated newton methods"
] | https://openreview.net/pdf?id=OpvgONa-3WODz | https://openreview.net/forum?id=OpvgONa-3WODz | LkyqLtotdQLG4 | review | 1,362,012,600,000 | OpvgONa-3WODz | [
"everyone"
] | [
"anonymous reviewer 9212"
] | ICLR.cc/2013/conference | 2013 | title: review of Metric-Free Natural Gradient for Joint-Training of Boltzmann Machines
review: The paper describes a Natural Gradient technique to train Boltzmann machines. This is essentially the approach of Amari et al. (1992), in which the Fisher information matrix is expressed in a form in which the authors estimate the Fisher in... |
OpvgONa-3WODz | Metric-Free Natural Gradient for Joint-Training of Boltzmann Machines | [
"Guillaume Desjardins",
"Razvan Pascanu",
"Aaron Courville",
"Yoshua Bengio"
] | This paper introduces the Metric-Free Natural Gradient (MFNG) algorithm for training Boltzmann Machines. Similar in spirit to the Hessian-Free method of Martens [8], our algorithm belongs to the family of truncated Newton methods and exploits an efficient matrix-vector product to avoid explicitely storing the natural g... | [
"natural gradient",
"boltzmann machines",
"mfng",
"algorithm",
"similar",
"spirit",
"martens",
"algorithm belongs",
"family",
"truncated newton methods"
] | https://openreview.net/pdf?id=OpvgONa-3WODz | https://openreview.net/forum?id=OpvgONa-3WODz | o5qvoxIkjTokQ | review | 1,362,294,960,000 | OpvgONa-3WODz | [
"everyone"
] | [
"anonymous reviewer 7e2e"
] | ICLR.cc/2013/conference | 2013 | title: review of Metric-Free Natural Gradient for Joint-Training of Boltzmann Machines
review: This paper presents a natural gradient algorithm for deep Boltzmann machines. The authors must be commended for their extremely clear and succinct description of the natural gradient method in Section 2. This presentation is ... |
OpvgONa-3WODz | Metric-Free Natural Gradient for Joint-Training of Boltzmann Machines | [
"Guillaume Desjardins",
"Razvan Pascanu",
"Aaron Courville",
"Yoshua Bengio"
] | This paper introduces the Metric-Free Natural Gradient (MFNG) algorithm for training Boltzmann Machines. Similar in spirit to the Hessian-Free method of Martens [8], our algorithm belongs to the family of truncated Newton methods and exploits an efficient matrix-vector product to avoid explicitely storing the natural g... | [
"natural gradient",
"boltzmann machines",
"mfng",
"algorithm",
"similar",
"spirit",
"martens",
"algorithm belongs",
"family",
"truncated newton methods"
] | https://openreview.net/pdf?id=OpvgONa-3WODz | https://openreview.net/forum?id=OpvgONa-3WODz | dt6KtywBaEvBC | review | 1,362,379,800,000 | OpvgONa-3WODz | [
"everyone"
] | [
"anonymous reviewer 77a7"
] | ICLR.cc/2013/conference | 2013 | title: review of Metric-Free Natural Gradient for Joint-Training of Boltzmann Machines
review: This paper introduces a new gradient descent algorithm that is based on Hessian-free optimization, but replaces the approximate Hessian-vector product with an approximate Fisher information matrix-vector product. It is... |
OpvgONa-3WODz | Metric-Free Natural Gradient for Joint-Training of Boltzmann Machines | [
"Guillaume Desjardins",
"Razvan Pascanu",
"Aaron Courville",
"Yoshua Bengio"
] | This paper introduces the Metric-Free Natural Gradient (MFNG) algorithm for training Boltzmann Machines. Similar in spirit to the Hessian-Free method of Martens [8], our algorithm belongs to the family of truncated Newton methods and exploits an efficient matrix-vector product to avoid explicitely storing the natural g... | [
"natural gradient",
"boltzmann machines",
"mfng",
"algorithm",
"similar",
"spirit",
"martens",
"algorithm belongs",
"family",
"truncated newton methods"
] | https://openreview.net/pdf?id=OpvgONa-3WODz | https://openreview.net/forum?id=OpvgONa-3WODz | pC-4pGPkfMnuQ | review | 1,363,459,200,000 | OpvgONa-3WODz | [
"everyone"
] | [
"Guillaume Desjardins, Razvan Pascanu, Aaron Courville, Yoshua Bengio"
] | ICLR.cc/2013/conference | 2013 | review: Thank you to the reviewers for the helpful feedback. The provided references will no doubt come in handy for future work.
To all reviewers:In an effort to speedup run time, we have re-implemented a significant portion of the MFNG algorithm. This resulted in large speedups for the diagonal approximation of MF... |
yyC_7RZTkUD5- | Deep Predictive Coding Networks | [
"Rakesh Chalasani",
"Jose C. Principe"
] | The quality of data representation in deep learning methods is directly related to the prior model imposed on the representations; however, generally used fixed priors are not capable of adjusting to the context in the data. To address this issue, we propose deep predictive coding networks, a hierarchical generative mo... | [
"model",
"networks",
"priors",
"deep predictive",
"predictive",
"quality",
"data representation",
"deep learning methods",
"prior model",
"representations"
] | https://openreview.net/pdf?id=yyC_7RZTkUD5- | https://openreview.net/forum?id=yyC_7RZTkUD5- | d6u7vbCNJV6Q8 | review | 1,361,968,020,000 | yyC_7RZTkUD5- | [
"everyone"
] | [
"anonymous reviewer ac47"
] | ICLR.cc/2013/conference | 2013 | title: review of Deep Predictive Coding Networks
review: Deep predictive coding networks
This paper introduces a new model which combines bottom-up, top-down, and temporal information to learn a generative model in an unsupervised fashion on videos. The model is formulated in terms of states, which carry temporal... |
yyC_7RZTkUD5- | Deep Predictive Coding Networks | [
"Rakesh Chalasani",
"Jose C. Principe"
] | The quality of data representation in deep learning methods is directly related to the prior model imposed on the representations; however, generally used fixed priors are not capable of adjusting to the context in the data. To address this issue, we propose deep predictive coding networks, a hierarchical generative mo... | [
"model",
"networks",
"priors",
"deep predictive",
"predictive",
"quality",
"data representation",
"deep learning methods",
"prior model",
"representations"
] | https://openreview.net/pdf?id=yyC_7RZTkUD5- | https://openreview.net/forum?id=yyC_7RZTkUD5- | Xu4KaWxqIDurf | review | 1,363,393,200,000 | yyC_7RZTkUD5- | [
"everyone"
] | [
"Rakesh Chalasani, Jose C. Principe"
] | ICLR.cc/2013/conference | 2013 | review: The revised paper is uploaded onto arXiv. It will be announced on 18th March.
In the mean time, the paper is also made available at
https://www.dropbox.com/s/klmpu482q6nt1ws/DPCN.pdf |
yyC_7RZTkUD5- | Deep Predictive Coding Networks | [
"Rakesh Chalasani",
"Jose C. Principe"
] | The quality of data representation in deep learning methods is directly related to the prior model imposed on the representations; however, generally used fixed priors are not capable of adjusting to the context in the data. To address this issue, we propose deep predictive coding networks, a hierarchical generative mo... | [
"model",
"networks",
"priors",
"deep predictive",
"predictive",
"quality",
"data representation",
"deep learning methods",
"prior model",
"representations"
] | https://openreview.net/pdf?id=yyC_7RZTkUD5- | https://openreview.net/forum?id=yyC_7RZTkUD5- | 00ZvUXp_e10_E | comment | 1,363,392,660,000 | EEhwkCLtAuko7 | [
"everyone"
] | [
"Rakesh Chalasani, Jose C. Principe"
] | ICLR.cc/2013/conference | 2013 | reply: Thank you for your review and comments, particularly for pointing out some mistakes in the paper. Following is our response to some concerns you have raised.
>>> 'You should state the functional form for F and G!! Working backwards from the energy function, it looks as if these are just linear functions?'
... |
yyC_7RZTkUD5- | Deep Predictive Coding Networks | [
"Rakesh Chalasani",
"Jose C. Principe"
] | The quality of data representation in deep learning methods is directly related to the prior model imposed on the representations; however, generally used fixed priors are not capable of adjusting to the context in the data. To address this issue, we propose deep predictive coding networks, a hierarchical generative mo... | [
"model",
"networks",
"priors",
"deep predictive",
"predictive",
"quality",
"data representation",
"deep learning methods",
"prior model",
"representations"
] | https://openreview.net/pdf?id=yyC_7RZTkUD5- | https://openreview.net/forum?id=yyC_7RZTkUD5- | iiUe8HAsepist | comment | 1,363,392,180,000 | d6u7vbCNJV6Q8 | [
"everyone"
] | [
"Rakesh Chalasani, Jose C. Principe"
] | ICLR.cc/2013/conference | 2013 | reply: Thank you for your review and comments. We revised the paper to address most of your concerns. Following is our response to some specific points you have raised.
>>> 'The explanation of the model was overly complicated. After reading the entire explanation it appears the model is simply doing sparse coding... |
yyC_7RZTkUD5- | Deep Predictive Coding Networks | [
"Rakesh Chalasani",
"Jose C. Principe"
] | The quality of data representation in deep learning methods is directly related to the prior model imposed on the representations; however, generally used fixed priors are not capable of adjusting to the context in the data. To address this issue, we propose deep predictive coding networks, a hierarchical generative mo... | [
"model",
"networks",
"priors",
"deep predictive",
"predictive",
"quality",
"data representation",
"deep learning methods",
"prior model",
"representations"
] | https://openreview.net/pdf?id=yyC_7RZTkUD5- | https://openreview.net/forum?id=yyC_7RZTkUD5- | EEhwkCLtAuko7 | review | 1,362,405,300,000 | yyC_7RZTkUD5- | [
"everyone"
] | [
"anonymous reviewer 62ac"
] | ICLR.cc/2013/conference | 2013 | title: review of Deep Predictive Coding Networks
review: This paper attempts to capture both the temporal dynamics of signals and the contribution of top down connections for inference using a deep model. The experimental results are qualitatively encouraging, and the model structure seems like a sensible direction to... |
yyC_7RZTkUD5- | Deep Predictive Coding Networks | [
"Rakesh Chalasani",
"Jose C. Principe"
] | The quality of data representation in deep learning methods is directly related to the prior model imposed on the representations; however, generally used fixed priors are not capable of adjusting to the context in the data. To address this issue, we propose deep predictive coding networks, a hierarchical generative mo... | [
"model",
"networks",
"priors",
"deep predictive",
"predictive",
"quality",
"data representation",
"deep learning methods",
"prior model",
"representations"
] | https://openreview.net/pdf?id=yyC_7RZTkUD5- | https://openreview.net/forum?id=yyC_7RZTkUD5- | o1YP1AMjPx1jv | comment | 1,363,393,020,000 | Za8LX-xwgqXw5 | [
"everyone"
] | [
"Rakesh Chalasani, Jose C. Principe"
] | ICLR.cc/2013/conference | 2013 | reply: Thank you for your review and comments. We revised the paper to address most of your concerns. Following is our response to some specific points you have raised.
>>> ' The clarity of the paper needs to be improved. For example, it will be helpful to motivate more clearly about the specific formulation of the model... |
yyC_7RZTkUD5- | Deep Predictive Coding Networks | [
"Rakesh Chalasani",
"Jose C. Principe"
] | The quality of data representation in deep learning methods is directly related to the prior model imposed on the representations; however, generally used fixed priors are not capable of adjusting to the context in the data. To address this issue, we propose deep predictive coding networks, a hierarchical generative mo... | [
"model",
"networks",
"priors",
"deep predictive",
"predictive",
"quality",
"data representation",
"deep learning methods",
"prior model",
"representations"
] | https://openreview.net/pdf?id=yyC_7RZTkUD5- | https://openreview.net/forum?id=yyC_7RZTkUD5- | XTZrXGh8rENYB | comment | 1,363,393,320,000 | 3vEUvBbCrO8cu | [
"everyone"
] | [
"Rakesh Chalasani"
] | ICLR.cc/2013/conference | 2013 | reply: This is in reply to reviewer 1829, mistakenly pasted here. Please ignore. |
yyC_7RZTkUD5- | Deep Predictive Coding Networks | [
"Rakesh Chalasani",
"Jose C. Principe"
] | The quality of data representation in deep learning methods is directly related to the prior model imposed on the representations; however, generally used fixed priors are not capable of adjusting to the context in the data. To address this issue, we propose deep predictive coding networks, a hierarchical generative mo... | [
"model",
"networks",
"priors",
"deep predictive",
"predictive",
"quality",
"data representation",
"deep learning methods",
"prior model",
"representations"
] | https://openreview.net/pdf?id=yyC_7RZTkUD5- | https://openreview.net/forum?id=yyC_7RZTkUD5- | Za8LX-xwgqXw5 | review | 1,362,498,780,000 | yyC_7RZTkUD5- | [
"everyone"
] | [
"anonymous reviewer 1829"
] | ICLR.cc/2013/conference | 2013 | title: review of Deep Predictive Coding Networks
review: A brief summary of the paper's contributions, in the context of prior work.
The paper proposes a hierarchical sparse generative model in the context of a dynamical system. The model can capture temporal dependencies in time-varying data, and top-down information... |
yyC_7RZTkUD5- | Deep Predictive Coding Networks | [
"Rakesh Chalasani",
"Jose C. Principe"
] | The quality of data representation in deep learning methods is directly related to the prior model imposed on the representations; however, generally used fixed priors are not capable of adjusting to the context in the data. To address this issue, we propose deep predictive coding networks, a hierarchical generative mo... | [
"model",
"networks",
"priors",
"deep predictive",
"predictive",
"quality",
"data representation",
"deep learning methods",
"prior model",
"representations"
] | https://openreview.net/pdf?id=yyC_7RZTkUD5- | https://openreview.net/forum?id=yyC_7RZTkUD5- | 3vEUvBbCrO8cu | review | 1,363,392,960,000 | yyC_7RZTkUD5- | [
"everyone"
] | [
"Rakesh Chalasani, Jose C. Principe"
] | ICLR.cc/2013/conference | 2013 | review: Thank you for your review and comments. We revised the paper to address most of your concerns. Following is our response to some specific points you have raised.
>>> ' The clarity of the paper needs to be improved. For example, it will be helpful to motivate more clearly about the specific formulation of the model... |
zzEf5eKLmAG0o | Learning Features with Structure-Adapting Multi-view Exponential Family
Harmoniums | [
"YoonSeop Kang",
"Seungjin Choi"
] | We propose a graphical model for multi-view feature extraction that automatically adapts its structure to achieve better representation of data distribution. The proposed model, structure-adapting multi-view harmonium (SA-MVH), has switch parameters that control the connection between hidden nodes and input views, and le... | [
"features",
"exponential family harmoniums",
"graphical model",
"feature extraction",
"structure",
"better representation",
"data distribution",
"model",
"harmonium",
"parameters"
] | https://openreview.net/pdf?id=zzEf5eKLmAG0o | https://openreview.net/forum?id=zzEf5eKLmAG0o | UUlHmZjBOIUBb | review | 1,362,353,160,000 | zzEf5eKLmAG0o | [
"everyone"
] | [
"anonymous reviewer d966"
] | ICLR.cc/2013/conference | 2013 | title: review of Learning Features with Structure-Adapting Multi-view Exponential Family
Harmoniums
review: The paper introduces a new algorithm for simultaneously learning a hidden layer (latent representation) for multiple data views as well as automatically segmenting that hidden layer into shared and view-spe... |
zzEf5eKLmAG0o | Learning Features with Structure-Adapting Multi-view Exponential Family
Harmoniums | [
"YoonSeop Kang",
"Seungjin Choi"
] | We propose a graphical model for multi-view feature extraction that automatically adapts its structure to achieve better representation of data distribution. The proposed model, structure-adapting multi-view harmonium (SA-MVH), has switch parameters that control the connection between hidden nodes and input views, and le... | [
"features",
"exponential family harmoniums",
"graphical model",
"feature extraction",
"structure",
"better representation",
"data distribution",
"model",
"harmonium",
"parameters"
] | https://openreview.net/pdf?id=zzEf5eKLmAG0o | https://openreview.net/forum?id=zzEf5eKLmAG0o | tt7CtuzeCYt5H | comment | 1,363,857,240,000 | DNKnDqeVJmgPF | [
"everyone"
] | [
"YoonSeop Kang"
] | ICLR.cc/2013/conference | 2013 | reply: 1. The distribution of sigma(s_{kj}) had modes near 0 and 1, but the graph of the distribution was omitted due to space constraints. The amount of separation between the modes was affected by hyperparameters that were not mentioned in the paper.
2. It is true that the separation between digit features a... |
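The switch parameters discussed in this reply can be pictured with a one-line gating rule: each connection weight is multiplied by a logistic sigmoid of its switch, so training drives sigma(s_{kj}) towards 0 (hidden node disconnected from that view) or 1 (connected), which is exactly the bimodal behaviour described above. The snippet below illustrates that idea only; the names and the exact parameterization are assumptions, not the paper's code.

```python
import numpy as np

def gated_weights(W, s):
    """Effective connection weights sigma(s) * W, elementwise: the switch
    s_kj softly connects (sigma -> 1) or disconnects (sigma -> 0) a hidden
    node from a view's input. Illustrative sketch, not the paper's code."""
    return W / (1.0 + np.exp(-s))

W = np.array([[2.0, 2.0]])
s = np.array([[10.0, -10.0]])   # one switch effectively on, one effectively off
W_eff = gated_weights(W, s)     # approximately [[2, 0]]
```

Because the sigmoid saturates, the learned structure ends up close to a hard shared/view-specific partition while remaining differentiable during training.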
zzEf5eKLmAG0o | Learning Features with Structure-Adapting Multi-view Exponential Family
Harmoniums | [
"YoonSeop Kang",
"Seungjin Choi"
] | We propose a graphical model for multi-view feature extraction that automatically adapts its structure to achieve a better representation of the data distribution. The proposed model, the structure-adapting multi-view harmonium (SA-MVH), has switch parameters that control the connection between hidden nodes and input views, and le... | [
"features",
"exponential family harmoniums",
"graphical model",
"feature extraction",
"structure",
"better representation",
"data distribution",
"model",
"harmonium",
"parameters"
] | https://openreview.net/pdf?id=zzEf5eKLmAG0o | https://openreview.net/forum?id=zzEf5eKLmAG0o | qqdsq7GUspqD2 | comment | 1,363,857,540,000 | UUlHmZjBOIUBb | [
"everyone"
] | [
"YoonSeop Kang"
] | ICLR.cc/2013/conference | 2013 | reply: 1. As the switch parameters converge quickly, the training time of our model was not very different from that of DWH.
2. We performed the experiment several times, and the results were consistent. Still, we should have repeated the experiments enough times to add error bars to the results.
3. MVHs are of... |
zzEf5eKLmAG0o | Learning Features with Structure-Adapting Multi-view Exponential Family
Harmoniums | [
"YoonSeop Kang",
"Seungjin Choi"
] | We propose a graphical model for multi-view feature extraction that automatically adapts its structure to achieve a better representation of the data distribution. The proposed model, the structure-adapting multi-view harmonium (SA-MVH), has switch parameters that control the connection between hidden nodes and input views, and le... | [
"features",
"exponential family harmoniums",
"graphical model",
"feature extraction",
"structure",
"better representation",
"data distribution",
"model",
"harmonium",
"parameters"
] | https://openreview.net/pdf?id=zzEf5eKLmAG0o | https://openreview.net/forum?id=zzEf5eKLmAG0o | DNKnDqeVJmgPF | review | 1,360,866,060,000 | zzEf5eKLmAG0o | [
"everyone"
] | [
"anonymous reviewer 0e7e"
] | ICLR.cc/2013/conference | 2013 | title: review of Learning Features with Structure-Adapting Multi-view Exponential Family
Harmoniums
review: The authors propose a bipartite, undirected graphical model for multiview learning, called the structure-adapting multiview harmonium (SA-MVH). The model is based on their earlier model called the multiview harmoni... |
mLr3In-nbamNu | Local Component Analysis | [
"Nicolas Le Roux",
"Francis Bach"
] | Kernel density estimation, a.k.a. Parzen windows, is a popular density estimation method, which can be used for outlier detection or clustering. With multivariate data, its performance is heavily reliant on the metric used within the kernel. Most earlier work has focused on learning only the bandwidth of the kernel (i.... | [
"parzen windows",
"kernel",
"metrics",
"popular density estimation",
"outlier detection",
"clustering",
"multivariate data",
"performance",
"reliant"
] | https://openreview.net/pdf?id=mLr3In-nbamNu | https://openreview.net/forum?id=mLr3In-nbamNu | D1cO7TgVjPGT9 | review | 1,361,300,640,000 | mLr3In-nbamNu | [
"everyone"
] | [
"anonymous reviewer 71f4"
] | ICLR.cc/2013/conference | 2013 | title: review of Local Component Analysis
review: In this paper, the authors consider unsupervised metric learning as a
density estimation problem with a Parzen-windows estimator based on a
Euclidean metric. They use the maximum-likelihood method and the EM algorithm
to derive a method that may be considered an unsuper... |
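The setup this review describes, a Parzen-window (kernel density) estimator whose kernel metric is fit by maximum likelihood, can be made concrete with a short sketch. The function below computes the leave-one-out log-likelihood such a method would maximize; the names, shapes, and the Gaussian-kernel choice are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def parzen_loo_loglik(X, M):
    """Mean leave-one-out log-likelihood of a Gaussian Parzen-window
    estimator whose kernel uses the metric matrix M (symmetric PSD).
    Generic sketch; names and shapes are illustrative assumptions."""
    n, d = X.shape
    iM = np.linalg.inv(M)
    diff = X[:, None, :] - X[None, :, :]
    # squared Mahalanobis distance between every pair of points
    d2 = np.einsum('ijk,kl,ijl->ij', diff, iM, diff)
    log_norm = -0.5 * (d * np.log(2 * np.pi) + np.log(np.linalg.det(M)))
    logK = -0.5 * d2 + log_norm
    np.fill_diagonal(logK, -np.inf)       # leave-one-out: drop the self-kernel
    m = logK.max(axis=1, keepdims=True)   # stable log-sum-exp over n-1 kernels
    loo = m[:, 0] + np.log(np.exp(logK - m).sum(axis=1)) - np.log(n - 1)
    return loo.mean()

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 2))
well_scaled = parzen_loo_loglik(X, np.eye(2))
oversmoothed = parzen_loo_loglik(X, 100 * np.eye(2))   # far too wide a kernel
```

A metric-learning procedure of the kind the review describes would ascend this objective in M; the leave-one-out construction is what rules out the degenerate maximum-likelihood solution of shrinking the kernel onto the data points.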
mLr3In-nbamNu | Local Component Analysis | [
"Nicolas Le Roux",
"Francis Bach"
] | Kernel density estimation, a.k.a. Parzen windows, is a popular density estimation method, which can be used for outlier detection or clustering. With multivariate data, its performance is heavily reliant on the metric used within the kernel. Most earlier work has focused on learning only the bandwidth of the kernel (i.... | [
"parzen windows",
"kernel",
"metrics",
"popular density estimation",
"outlier detection",
"clustering",
"multivariate data",
"performance",
"reliant"
] | https://openreview.net/pdf?id=mLr3In-nbamNu | https://openreview.net/forum?id=mLr3In-nbamNu | pRFvp6BDvn46c | review | 1,362,491,220,000 | mLr3In-nbamNu | [
"everyone"
] | [
"anonymous reviewer 61c0"
] | ICLR.cc/2013/conference | 2013 | title: review of Local Component Analysis
review: Summary of contributions:
The paper presents a robust algorithm for density estimation. The main idea is to model the density as a product of two independent distributions: one from a Parzen-windows estimation (for modeling a low-dimensional manifold) and the other f... |
mLr3In-nbamNu | Local Component Analysis | [
"Nicolas Le Roux",
"Francis Bach"
] | Kernel density estimation, a.k.a. Parzen windows, is a popular density estimation method, which can be used for outlier detection or clustering. With multivariate data, its performance is heavily reliant on the metric used within the kernel. Most earlier work has focused on learning only the bandwidth of the kernel (i.... | [
"parzen windows",
"kernel",
"metrics",
"popular density estimation",
"outlier detection",
"clustering",
"multivariate data",
"performance",
"reliant"
] | https://openreview.net/pdf?id=mLr3In-nbamNu | https://openreview.net/forum?id=mLr3In-nbamNu | iGfW_jMjFAoZQ | review | 1,362,428,640,000 | mLr3In-nbamNu | [
"everyone"
] | [
"anonymous reviewer 18ca"
] | ICLR.cc/2013/conference | 2013 | title: review of Local Component Analysis
review: Summary of contributions:
1. The paper proposes an unsupervised local component analysis (LCA) framework that estimates the Parzen-window covariance by maximizing the leave-one-out density. The basic algorithm is an EM procedure with closed-form updates.
2. One fu... |
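The "EM procedure with closed-form updates" mentioned in point 1 has a standard shape for Gaussian Parzen windows, which may help readers picture the algorithm: the E-step assigns each point responsibilities over the other points' kernels, and the M-step re-estimates the shared covariance from the responsibility-weighted pairwise differences. The sketch below is a generic illustration under assumed names and shapes, not the paper's exact code.

```python
import numpy as np

def em_covariance_step(X, M):
    """One EM-style update of a shared Parzen-window covariance M.
    E-step: responsibilities of each point for every other kernel centre
    (excluding itself, matching the leave-one-out objective).
    M-step: responsibility-weighted scatter of pairwise differences.
    Generic sketch -- not the paper's exact algorithm."""
    n, d = X.shape
    iM = np.linalg.inv(M)
    diff = X[:, None, :] - X[None, :, :]
    d2 = np.einsum('ijk,kl,ijl->ij', diff, iM, diff)
    logK = -0.5 * d2
    np.fill_diagonal(logK, -np.inf)            # leave-one-out
    logK -= logK.max(axis=1, keepdims=True)    # numerical stability
    R = np.exp(logK)
    R /= R.sum(axis=1, keepdims=True)          # E-step: responsibilities
    # M-step: closed-form covariance from weighted pairwise differences
    return np.einsum('ij,ijk,ijl->kl', R, diff, diff) / n

rng = np.random.default_rng(0)
X = rng.standard_normal((300, 2)) * np.array([1.0, 0.1])   # anisotropic data
M_new = em_covariance_step(X, np.eye(2))                   # adapts to the data
```

Starting from an isotropic metric, one update already shrinks the covariance along the low-variance direction of the data, which is the metric-learning effect the review summarizes.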
mLr3In-nbamNu | Local Component Analysis | [
"Nicolas Le Roux",
"Francis Bach"
] | Kernel density estimation, a.k.a. Parzen windows, is a popular density estimation method, which can be used for outlier detection or clustering. With multivariate data, its performance is heavily reliant on the metric used within the kernel. Most earlier work has focused on learning only the bandwidth of the kernel (i.... | [
"parzen windows",
"kernel",
"metrics",
"popular density estimation",
"outlier detection",
"clustering",
"multivariate data",
"performance",
"reliant"
] | https://openreview.net/pdf?id=mLr3In-nbamNu | https://openreview.net/forum?id=mLr3In-nbamNu | c2pVc0PtwzcEK | review | 1,364,253,000,000 | mLr3In-nbamNu | [
"everyone"
] | [
"Nicolas Le Roux, Francis Bach"
] | ICLR.cc/2013/conference | 2013 | review: First, we would like to thank the reviewers for their comments.
The main complaint was that the experiments were limited to toy problems. Since it is always hard to evaluate unsupervised learning algorithms (what is the metric of performance), the experiments were designed as a proof of concept. Hence, we ag... |
OOuGtqpeK-cLI | Pushing Stochastic Gradient towards Second-Order Methods -- Backpropagation Learning with Transformations in Nonlinearities | [
"Tommi Vatanen",
"Tapani Raiko",
"Harri Valpola",
"Yann LeCun"
] | Recently, we proposed to transform the outputs of each hidden neuron in a multi-layer perceptron network to have zero output and zero slope on average, and use separate shortcut connections to model the linear dependencies instead. We continue the work by firstly introducing a third transformation to normalize the scal... | [
"transformations",
"outputs",
"stochastic gradient",
"methods",
"backpropagation",
"nonlinearities",
"hidden neuron",
"experiments",
"perceptron network",
"output"
] | https://openreview.net/pdf?id=OOuGtqpeK-cLI | https://openreview.net/forum?id=OOuGtqpeK-cLI | cAqVvWr0KLv0U | review | 1,362,183,240,000 | OOuGtqpeK-cLI | [
"everyone"
] | [
"anonymous reviewer 1567"
] | ICLR.cc/2013/conference | 2013 | title: review of Pushing Stochastic Gradient towards Second-Order Methods -- Backpropagation Learning with Transformations in Nonlinearities
review: In [10], the authors had previously proposed modifying the network
parametrization, in order to ensure zero-mean hidden unit activations across training examples (activit... |
OOuGtqpeK-cLI | Pushing Stochastic Gradient towards Second-Order Methods -- Backpropagation Learning with Transformations in Nonlinearities | [
"Tommi Vatanen",
"Tapani Raiko",
"Harri Valpola",
"Yann LeCun"
] | Recently, we proposed to transform the outputs of each hidden neuron in a multi-layer perceptron network to have zero output and zero slope on average, and use separate shortcut connections to model the linear dependencies instead. We continue the work by firstly introducing a third transformation to normalize the scal... | [
"transformations",
"outputs",
"stochastic gradient",
"methods",
"backpropagation",
"nonlinearities",
"hidden neuron",
"experiments",
"perceptron network",
"output"
] | https://openreview.net/pdf?id=OOuGtqpeK-cLI | https://openreview.net/forum?id=OOuGtqpeK-cLI | og9azR3sTxoul | review | 1,362,399,720,000 | OOuGtqpeK-cLI | [
"everyone"
] | [
"anonymous reviewer b670"
] | ICLR.cc/2013/conference | 2013 | title: review of Pushing Stochastic Gradient towards Second-Order Methods -- Backpropagation Learning with Transformations in Nonlinearities
review: This paper builds on previous work by the same authors that looks at performing dynamic reparameterizations of neural networks to improve training efficiency. The previou... |
OOuGtqpeK-cLI | Pushing Stochastic Gradient towards Second-Order Methods -- Backpropagation Learning with Transformations in Nonlinearities | [
"Tommi Vatanen",
"Tapani Raiko",
"Harri Valpola",
"Yann LeCun"
] | Recently, we proposed to transform the outputs of each hidden neuron in a multi-layer perceptron network to have zero output and zero slope on average, and use separate shortcut connections to model the linear dependencies instead. We continue the work by firstly introducing a third transformation to normalize the scal... | [
"transformations",
"outputs",
"stochastic gradient",
"methods",
"backpropagation",
"nonlinearities",
"hidden neuron",
"experiments",
"perceptron network",
"output"
] | https://openreview.net/pdf?id=OOuGtqpeK-cLI | https://openreview.net/forum?id=OOuGtqpeK-cLI | Id_EI3kn5mX4i | review | 1,362,387,060,000 | OOuGtqpeK-cLI | [
"everyone"
] | [
"anonymous reviewer c3d4"
] | ICLR.cc/2013/conference | 2013 | title: review of Pushing Stochastic Gradient towards Second-Order Methods -- Backpropagation Learning with Transformations in Nonlinearities
review: * A brief summary of the paper's contributions, in the context of prior work.
This paper extends the authors' previous work on making sure that the hidden units in a ne... |
OOuGtqpeK-cLI | Pushing Stochastic Gradient towards Second-Order Methods -- Backpropagation Learning with Transformations in Nonlinearities | [
"Tommi Vatanen",
"Tapani Raiko",
"Harri Valpola",
"Yann LeCun"
] | Recently, we proposed to transform the outputs of each hidden neuron in a multi-layer perceptron network to have zero output and zero slope on average, and use separate shortcut connections to model the linear dependencies instead. We continue the work by firstly introducing a third transformation to normalize the scal... | [
"transformations",
"outputs",
"stochastic gradient",
"methods",
"backpropagation",
"nonlinearities",
"hidden neuron",
"experiments",
"perceptron network",
"output"
] | https://openreview.net/pdf?id=OOuGtqpeK-cLI | https://openreview.net/forum?id=OOuGtqpeK-cLI | 8PUQYHnMEx8CL | review | 1,363,039,740,000 | OOuGtqpeK-cLI | [
"everyone"
] | [
"Tommi Vatanen, Tapani Raiko, Harri Valpola, Yann LeCun"
] | ICLR.cc/2013/conference | 2013 | review: First of all, we would like to thank you for your informed, thorough and kind comments. We realize that there is major overlap with our previous paper [10]. We hope that these two papers can be combined into a journal paper later on. It was mentioned that we use some text verbatim from [10]. There is some basic ... |
UUwuUaQ5qRyWn | When Does a Mixture of Products Contain a Product of Mixtures? | [
"Guido F. Montufar",
"Jason Morton"
] | We prove results on the relative representational power of mixtures of product distributions and restricted Boltzmann machines (products of mixtures of pairs of product distributions). Tools of independent interest are mode-based polyhedral approximations sensitive enough to compare full-dimensional models, and charact... | [
"mixtures",
"products",
"mixture",
"product",
"product distributions",
"restricted boltzmann machines",
"results",
"relative representational power",
"pairs",
"tools"
] | https://openreview.net/pdf?id=UUwuUaQ5qRyWn | https://openreview.net/forum?id=UUwuUaQ5qRyWn | boGLoNdiUmbgV | review | 1,362,582,360,000 | UUwuUaQ5qRyWn | [
"everyone"
] | [
"anonymous reviewer 51ff"
] | ICLR.cc/2013/conference | 2013 | title: review of When Does a Mixture of Products Contain a Product of Mixtures?
review: This paper attempts at comparing mixture of factorial distributions (called product distributions) to RBMs. It does so by analyzing several theoretical properties, such as the smallest models which can represent any distribution wit... |
UUwuUaQ5qRyWn | When Does a Mixture of Products Contain a Product of Mixtures? | [
"Guido F. Montufar",
"Jason Morton"
] | We prove results on the relative representational power of mixtures of product distributions and restricted Boltzmann machines (products of mixtures of pairs of product distributions). Tools of independent interest are mode-based polyhedral approximations sensitive enough to compare full-dimensional models, and charact... | [
"mixtures",
"products",
"mixture",
"product",
"product distributions",
"restricted boltzmann machines",
"results",
"relative representational power",
"pairs",
"tools"
] | https://openreview.net/pdf?id=UUwuUaQ5qRyWn | https://openreview.net/forum?id=UUwuUaQ5qRyWn | dPNqPnWus1JhM | review | 1,362,219,240,000 | UUwuUaQ5qRyWn | [
"everyone"
] | [
"anonymous reviewer 6c04"
] | ICLR.cc/2013/conference | 2013 | title: review of When Does a Mixture of Products Contain a Product of Mixtures?
review: This paper compares the representational power of Restricted Boltzmann Machines
(RBMs) with that of mixtures of product distributions. The main result is that
RBMs can be exponentially more efficient (in terms of the number of par... |
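For readers outside this subfield, the title's phrase "products of mixtures of pairs of product distributions" comes from standard RBM algebra (this unpacking is background, not one of the paper's results). Marginalizing the hidden units $h \in \{0,1\}^m$ of an RBM over binary visibles $v \in \{0,1\}^n$ gives

```latex
p(v) \;\propto\; \sum_{h \in \{0,1\}^m} \exp\!\left(b^\top v + c^\top h + h^\top W v\right)
     \;=\; e^{b^\top v} \prod_{j=1}^{m} \left(1 + e^{\,c_j + W_{j\cdot} v}\right),
```

so each hidden unit contributes one factor that is, up to normalization, a mixture of two product distributions over $v$ (the cases $h_j = 0$ and $h_j = 1$). A mixture of products instead places a single mixture over $K$ product components at the top level, which is where the exponential gap in representational efficiency arises.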
UUwuUaQ5qRyWn | When Does a Mixture of Products Contain a Product of Mixtures? | [
"Guido F. Montufar",
"Jason Morton"
] | We prove results on the relative representational power of mixtures of product distributions and restricted Boltzmann machines (products of mixtures of pairs of product distributions). Tools of independent interest are mode-based polyhedral approximations sensitive enough to compare full-dimensional models, and charact... | [
"mixtures",
"products",
"mixture",
"product",
"product distributions",
"restricted boltzmann machines",
"results",
"relative representational power",
"pairs",
"tools"
] | https://openreview.net/pdf?id=UUwuUaQ5qRyWn | https://openreview.net/forum?id=UUwuUaQ5qRyWn | vvzH6kFyntmsR | comment | 1,364,258,160,000 | FdwnFIZNOxF5S | [
"everyone"
] | [
"anonymous reviewer 6c04"
] | ICLR.cc/2013/conference | 2013 | reply: Thanks for the updated version, I've re-read it quickly and it's indeed a bit clearer! |
UUwuUaQ5qRyWn | When Does a Mixture of Products Contain a Product of Mixtures? | [
"Guido F. Montufar",
"Jason Morton"
] | We prove results on the relative representational power of mixtures of product distributions and restricted Boltzmann machines (products of mixtures of pairs of product distributions). Tools of independent interest are mode-based polyhedral approximations sensitive enough to compare full-dimensional models, and charact... | [
"mixtures",
"products",
"mixture",
"product",
"product distributions",
"restricted boltzmann machines",
"results",
"relative representational power",
"pairs",
"tools"
] | https://openreview.net/pdf?id=UUwuUaQ5qRyWn | https://openreview.net/forum?id=UUwuUaQ5qRyWn | dYGvTnylo5TlF | review | 1,361,559,180,000 | UUwuUaQ5qRyWn | [
"everyone"
] | [
"anonymous reviewer 91ea"
] | ICLR.cc/2013/conference | 2013 | title: review of When Does a Mixture of Products Contain a Product of Mixtures?
review: The paper analyses the representational capacity of RBMs, contrasting it with that of other simple models.
I think the results are new, but I'm definitely not an expert in this field. They are likely to be interesting for people working ... |
UUwuUaQ5qRyWn | When Does a Mixture of Products Contain a Product of Mixtures? | [
"Guido F. Montufar",
"Jason Morton"
] | We prove results on the relative representational power of mixtures of product distributions and restricted Boltzmann machines (products of mixtures of pairs of product distributions). Tools of independent interest are mode-based polyhedral approximations sensitive enough to compare full-dimensional models, and charact... | [
"mixtures",
"products",
"mixture",
"product",
"product distributions",
"restricted boltzmann machines",
"results",
"relative representational power",
"pairs",
"tools"
] | https://openreview.net/pdf?id=UUwuUaQ5qRyWn | https://openreview.net/forum?id=UUwuUaQ5qRyWn | FdwnFIZNOxF5S | review | 1,363,384,620,000 | UUwuUaQ5qRyWn | [
"everyone"
] | [
"Guido F. Montufar, Jason Morton"
] | ICLR.cc/2013/conference | 2013 | review: We thank all three reviewers for the helpful comments, which enabled us to improve the paper. We have uploaded a revision to the arxiv taking into account the comments, and respond to some specific concerns below.
We were unsure as to whether we should make the paper longer by providing more in-line intuiti... |
aJh-lFL2dFJ21 | Discriminative Recurrent Sparse Auto-Encoders | [
"Jason Rolfe",
"Yann LeCun"
] | We present the discriminative recurrent sparse auto-encoder model, which consists of an encoder whose hidden layer is recurrent, and two linear decoders, one to reconstruct the input, and one to predict the output. The hidden layer is composed of rectified linear units (ReLU) and is subject to a sparsity penalty. The n... | [
"discriminative recurrent sparse",
"network",
"hidden layer",
"input",
"number",
"time",
"hidden units",
"model",
"encoder",
"recurrent"
] | https://openreview.net/pdf?id=aJh-lFL2dFJ21 | https://openreview.net/forum?id=aJh-lFL2dFJ21 | TTDqPocbXWPbU | review | 1,364,548,920,000 | aJh-lFL2dFJ21 | [
"everyone"
] | [
"Richard Socher"
] | ICLR.cc/2013/conference | 2013 | review: Hi,
This looks a whole lot like the semi-supervised recursive autoencoder that we introduced at EMNLP 2011 [1] and the unfolding recursive autoencoder that we introduced at NIPS 2011.
These models also have a reconstruction + cross entropy error at every iteration and hence do not suffer from the vanishin... |
aJh-lFL2dFJ21 | Discriminative Recurrent Sparse Auto-Encoders | [
"Jason Rolfe",
"Yann LeCun"
] | We present the discriminative recurrent sparse auto-encoder model, which consists of an encoder whose hidden layer is recurrent, and two linear decoders, one to reconstruct the input, and one to predict the output. The hidden layer is composed of rectified linear units (ReLU) and is subject to a sparsity penalty. The n... | [
"discriminative recurrent sparse",
"network",
"hidden layer",
"input",
"number",
"time",
"hidden units",
"model",
"encoder",
"recurrent"
] | https://openreview.net/pdf?id=aJh-lFL2dFJ21 | https://openreview.net/forum?id=aJh-lFL2dFJ21 | 10n94yAXr20pD | comment | 1,363,534,380,000 | 5Br_BDba_D57X | [
"everyone"
] | [
"anonymous reviewer bc93"
] | ICLR.cc/2013/conference | 2013 | reply: It's true that any deep NN can be represented by a large recurrent net, but that's not the point I was making. The sentence I commented on gives the impression that a recurrent network has the same representational power as any deep network 'while substantially reducing the number of trainable parameters'. If yo... |
aJh-lFL2dFJ21 | Discriminative Recurrent Sparse Auto-Encoders | [
"Jason Rolfe",
"Yann LeCun"
] | We present the discriminative recurrent sparse auto-encoder model, which consists of an encoder whose hidden layer is recurrent, and two linear decoders, one to reconstruct the input, and one to predict the output. The hidden layer is composed of rectified linear units (ReLU) and is subject to a sparsity penalty. The n... | [
"discriminative recurrent sparse",
"network",
"hidden layer",
"input",
"number",
"time",
"hidden units",
"model",
"encoder",
"recurrent"
] | https://openreview.net/pdf?id=aJh-lFL2dFJ21 | https://openreview.net/forum?id=aJh-lFL2dFJ21 | NNXtqijEtiN98 | review | 1,363,222,920,000 | aJh-lFL2dFJ21 | [
"everyone"
] | [
"Jason Rolfe"
] | ICLR.cc/2013/conference | 2013 | review: We are very thankful to all the reviewers and commenters for their constructive comments.
* Anonymous 8ddb:
1. Indeed, the architecture of DrSAE is similar to a deep sparse rectifier neural network (Glorot, Bordes, and Bengio, 2011) with tied weights (Bengio, Boulanger-Lewandowski and Pascanu, 2012). In ... |
aJh-lFL2dFJ21 | Discriminative Recurrent Sparse Auto-Encoders | [
"Jason Rolfe",
"Yann LeCun"
] | We present the discriminative recurrent sparse auto-encoder model, which consists of an encoder whose hidden layer is recurrent, and two linear decoders, one to reconstruct the input, and one to predict the output. The hidden layer is composed of rectified linear units (ReLU) and is subject to a sparsity penalty. The n... | [
"discriminative recurrent sparse",
"network",
"hidden layer",
"input",
"number",
"time",
"hidden units",
"model",
"encoder",
"recurrent"
] | https://openreview.net/pdf?id=aJh-lFL2dFJ21 | https://openreview.net/forum?id=aJh-lFL2dFJ21 | vCQPfwXgPoCu7 | review | 1,364,571,960,000 | aJh-lFL2dFJ21 | [
"everyone"
] | [
"Yann LeCun"
] | ICLR.cc/2013/conference | 2013 | review: Minor side comment: IN GENERAL, having a cost term at each iteration (time step of the unfolded network) does not eliminate the vanishing gradient problem!!!
The short-term dependencies can now be learned through the gradient on the cost on the early iterations, but the long-term effects may still be imprope... |
aJh-lFL2dFJ21 | Discriminative Recurrent Sparse Auto-Encoders | [
"Jason Rolfe",
"Yann LeCun"
] | We present the discriminative recurrent sparse auto-encoder model, which consists of an encoder whose hidden layer is recurrent, and two linear decoders, one to reconstruct the input, and one to predict the output. The hidden layer is composed of rectified linear units (ReLU) and is subject to a sparsity penalty. The n... | [
"discriminative recurrent sparse",
"network",
"hidden layer",
"input",
"number",
"time",
"hidden units",
"model",
"encoder",
"recurrent"
] | https://openreview.net/pdf?id=aJh-lFL2dFJ21 | https://openreview.net/forum?id=aJh-lFL2dFJ21 | __De_0xQMv_R3 | review | 1,361,907,180,000 | aJh-lFL2dFJ21 | [
"everyone"
] | [
"Yoshua Bengio"
] | ICLR.cc/2013/conference | 2013 | review: Thank you for this interesting contribution. The differentiation of hidden units into class units and parts units is fascinating and connects with what I consider a central objective for deep learning, i.e., learning representations where the learned features disentangle the underlying factors of variation (as ... |
aJh-lFL2dFJ21 | Discriminative Recurrent Sparse Auto-Encoders | [
"Jason Rolfe",
"Yann LeCun"
] | We present the discriminative recurrent sparse auto-encoder model, which consists of an encoder whose hidden layer is recurrent, and two linear decoders, one to reconstruct the input, and one to predict the output. The hidden layer is composed of rectified linear units (ReLU) and is subject to a sparsity penalty. The n... | [
"discriminative recurrent sparse",
"network",
"hidden layer",
"input",
"number",
"time",
"hidden units",
"model",
"encoder",
"recurrent"
] | https://openreview.net/pdf?id=aJh-lFL2dFJ21 | https://openreview.net/forum?id=aJh-lFL2dFJ21 | uc38pbD6RhB1Z | review | 1,363,316,520,000 | aJh-lFL2dFJ21 | [
"everyone"
] | [
"anonymous reviewer bc93"
] | ICLR.cc/2013/conference | 2013 | title: review of Discriminative Recurrent Sparse Auto-Encoders
review: SUMMARY:
The authors describe a discriminative recurrent sparse auto-encoder, which is essentially a recurrent neural network with a fixed input and linear rectifier units. The auto-encoder is initially trained to reproduce digits of MNIST, while... |
aJh-lFL2dFJ21 | Discriminative Recurrent Sparse Auto-Encoders | [
"Jason Rolfe",
"Yann LeCun"
] | We present the discriminative recurrent sparse auto-encoder model, which consists of an encoder whose hidden layer is recurrent, and two linear decoders, one to reconstruct the input, and one to predict the output. The hidden layer is composed of rectified linear units (ReLU) and is subject to a sparsity penalty. The n... | [
"discriminative recurrent sparse",
"network",
"hidden layer",
"input",
"number",
"time",
"hidden units",
"model",
"encoder",
"recurrent"
] | https://openreview.net/pdf?id=aJh-lFL2dFJ21 | https://openreview.net/forum?id=aJh-lFL2dFJ21 | 6FfM6SG2MKt8r | comment | 1,367,028,540,000 | TTDqPocbXWPbU | [
"everyone"
] | [
"Jason Rolfe"
] | ICLR.cc/2013/conference | 2013 | reply: Thank you very much for your constructive comments.
There are indeed similarities between discriminative recurrent auto-encoders and the semi-supervised recursive autoencoders of Socher, Pennington, Huang, Ng, & Manning (2011a); we will add the appropriate citation to the paper. However, the networks of Soch... |
aJh-lFL2dFJ21 | Discriminative Recurrent Sparse Auto-Encoders | [
"Jason Rolfe",
"Yann LeCun"
] | We present the discriminative recurrent sparse auto-encoder model, which consists of an encoder whose hidden layer is recurrent, and two linear decoders, one to reconstruct the input, and one to predict the output. The hidden layer is composed of rectified linear units (ReLU) and is subject to a sparsity penalty. The n... | [
"discriminative recurrent sparse",
"network",
"hidden layer",
"input",
"number",
"time",
"hidden units",
"model",
"encoder",
"recurrent"
] | https://openreview.net/pdf?id=aJh-lFL2dFJ21 | https://openreview.net/forum?id=aJh-lFL2dFJ21 | zzUEFMPkQcqkJ | review | 1,362,400,920,000 | aJh-lFL2dFJ21 | [
"everyone"
] | [
"anonymous reviewer a32e"
] | ICLR.cc/2013/conference | 2013 | title: review of Discriminative Recurrent Sparse Auto-Encoders
review: The authors propose an interesting idea: using deep neural networks with tied weights (a recurrent architecture) for image classification. However, I am not familiar enough with the prior work to judge the novelty of the idea.
On a critical note, the pap... |
aJh-lFL2dFJ21 | Discriminative Recurrent Sparse Auto-Encoders | [
"Jason Rolfe",
"Yann LeCun"
] | We present the discriminative recurrent sparse auto-encoder model, which consists of an encoder whose hidden layer is recurrent, and two linear decoders, one to reconstruct the input, and one to predict the output. The hidden layer is composed of rectified linear units (ReLU) and is subject to a sparsity penalty. The n... | [
"discriminative recurrent sparse",
"network",
"hidden layer",
"input",
"number",
"time",
"hidden units",
"model",
"encoder",
"recurrent"
] | https://openreview.net/pdf?id=aJh-lFL2dFJ21 | https://openreview.net/forum?id=aJh-lFL2dFJ21 | Sih8ijosvDuO_ | comment | 1,363,817,880,000 | KVmXTReW18TyN | [
"everyone"
] | [
"Jason Tyler Rolfe, Yann LeCun"
] | ICLR.cc/2013/conference | 2013 | reply: Q2: In response to your query, we have just completed a run with the encoder row magnitude bound set to 1/T, rather than 1.25/T. MNIST classification performance was 1.13%, rather than 1.08%. Although heuristic, the hyperparameters used in the paper were not the result of extensive hand-tuning. |
aJh-lFL2dFJ21 | Discriminative Recurrent Sparse Auto-Encoders | [
"Jason Rolfe",
"Yann LeCun"
] | We present the discriminative recurrent sparse auto-encoder model, which consists of an encoder whose hidden layer is recurrent, and two linear decoders, one to reconstruct the input, and one to predict the output. The hidden layer is composed of rectified linear units (ReLU) and is subject to a sparsity penalty. The n... | [
"discriminative recurrent sparse",
"network",
"hidden layer",
"input",
"number",
"time",
"hidden units",
"model",
"encoder",
"recurrent"
] | https://openreview.net/pdf?id=aJh-lFL2dFJ21 | https://openreview.net/forum?id=aJh-lFL2dFJ21 | 4V-Ozm5k8mVcn | review | 1,363,400,280,000 | aJh-lFL2dFJ21 | [
"everyone"
] | [
"anonymous reviewer dd6a"
] | ICLR.cc/2013/conference | 2013 | title: review of Discriminative Recurrent Sparse Auto-Encoders
review: The paper describes the following variation of an autoencoder: An encoder (with relu nonlinearity) is iterated for 11 steps, with observations providing biases for the hiddens at each step. Afterwards, a decoder reconstructs the data from the last-s... |
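The forward pass summarized in this review's first sentence can be sketched in a few lines: a ReLU hidden state iterated for T steps with the observation re-injected as a bias at every step, followed by a linear reconstruction decoder. Dimensions, initial state, and weight names below are illustrative assumptions (and the second, classification decoder is omitted), not the authors' parameterization.

```python
import numpy as np

def drsae_encode(x, E, S, b, D, T=11):
    """DrSAE-style recurrent encoder as the review describes it: the hidden
    state is iterated T times through a ReLU, with the input entering as a
    bias (E @ x) at every step; a linear decoder D then reconstructs x from
    the final state. Sketch only -- names and shapes are assumptions."""
    z = np.zeros(S.shape[0])
    for _ in range(T):
        z = np.maximum(0.0, E @ x + S @ z + b)   # input re-injected each step
    return z, D @ z                              # hidden code, reconstruction

rng = np.random.default_rng(1)
n_in, n_hid = 16, 8
E = 0.1 * rng.standard_normal((n_hid, n_in))     # encoder (input-to-hidden)
S = 0.1 * rng.standard_normal((n_hid, n_hid))    # recurrent (hidden-to-hidden)
b = np.zeros(n_hid)
D = rng.standard_normal((n_in, n_hid))           # linear reconstruction decoder
x = rng.standard_normal(n_in)
z, x_hat = drsae_encode(x, E, S, b, D)
```

Because the same S is reused at every step, depth comes at no extra parameter cost, which is the weight-tying point the subsequent discussion debates.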
aJh-lFL2dFJ21 | Discriminative Recurrent Sparse Auto-Encoders | [
"Jason Rolfe",
"Yann LeCun"
] | We present the discriminative recurrent sparse auto-encoder model, which consists of an encoder whose hidden layer is recurrent, and two linear decoders, one to reconstruct the input, and one to predict the output. The hidden layer is composed of rectified linear units (ReLU) and is subject to a sparsity penalty. The n... | [
"discriminative recurrent sparse",
"network",
"hidden layer",
"input",
"number",
"time",
"hidden units",
"model",
"encoder",
"recurrent"
] | https://openreview.net/pdf?id=aJh-lFL2dFJ21 | https://openreview.net/forum?id=aJh-lFL2dFJ21 | SKcvK2UDvgKxL | review | 1,362,177,060,000 | aJh-lFL2dFJ21 | [
"everyone"
] | [
"anonymous reviewer 8ddb"
] | ICLR.cc/2013/conference | 2013 | title: review of Discriminative Recurrent Sparse Auto-Encoders
review: Summary and general overview:
----------------------------------------------
The paper introduces Discriminative Recurrent Sparse Auto-Encoders, a new model, but more importantly a careful analysis of the behaviour of this model. It suggests that ... |
aJh-lFL2dFJ21 | Discriminative Recurrent Sparse Auto-Encoders | [
"Jason Rolfe",
"Yann LeCun"
] | We present the discriminative recurrent sparse auto-encoder model, which consists of an encoder whose hidden layer is recurrent, and two linear decoders, one to reconstruct the input, and one to predict the output. The hidden layer is composed of rectified linear units (ReLU) and is subject to a sparsity penalty. The n... | [
"discriminative recurrent sparse",
"network",
"hidden layer",
"input",
"number",
"time",
"hidden units",
"model",
"encoder",
"recurrent"
] | https://openreview.net/pdf?id=aJh-lFL2dFJ21 | https://openreview.net/forum?id=aJh-lFL2dFJ21 | -uMO-UhKgU-Z_ | review | 1,368,275,760,000 | aJh-lFL2dFJ21 | [
"everyone"
] | [
"Richard Socher"
] | ICLR.cc/2013/conference | 2013 | review: Hi Jason and Yann,
Thanks for the insightful reply.
Best,
Richard |
aJh-lFL2dFJ21 | Discriminative Recurrent Sparse Auto-Encoders | [
"Jason Rolfe",
"Yann LeCun"
] | We present the discriminative recurrent sparse auto-encoder model, which consists of an encoder whose hidden layer is recurrent, and two linear decoders, one to reconstruct the input, and one to predict the output. The hidden layer is composed of rectified linear units (ReLU) and is subject to a sparsity penalty. The n... | [
"discriminative recurrent sparse",
"network",
"hidden layer",
"input",
"number",
"time",
"hidden units",
"model",
"encoder",
"recurrent"
] | https://openreview.net/pdf?id=aJh-lFL2dFJ21 | https://openreview.net/forum?id=aJh-lFL2dFJ21 | UEx3pAOcLlpPT | review | 1,363,223,340,000 | aJh-lFL2dFJ21 | [
"everyone"
] | [
"Jason Rolfe"
] | ICLR.cc/2013/conference | 2013 | review: * Jürgen Schmidhuber:
Thank you very much for your constructive comments.
1. Like the work of Pollack (1990), DrSAE is based on a recursive autoencoder that receives input on each iteration. However, (sequential) RAAMs iteratively add new information on each iteration, and then iteratively reconstruct t... |
aJh-lFL2dFJ21 | Discriminative Recurrent Sparse Auto-Encoders | [
"Jason Rolfe",
"Yann LeCun"
] | We present the discriminative recurrent sparse auto-encoder model, which consists of an encoder whose hidden layer is recurrent, and two linear decoders, one to reconstruct the input, and one to predict the output. The hidden layer is composed of rectified linear units (ReLU) and is subject to a sparsity penalty. The n... | [
"discriminative recurrent sparse",
"network",
"hidden layer",
"input",
"number",
"time",
"hidden units",
"model",
"encoder",
"recurrent"
] | https://openreview.net/pdf?id=aJh-lFL2dFJ21 | https://openreview.net/forum?id=aJh-lFL2dFJ21 | KVmXTReW18TyN | comment | 1,363,664,400,000 | 4V-Ozm5k8mVcn | [
"everyone"
] | [
"Jason Tyler Rolfe, Yann LeCun"
] | ICLR.cc/2013/conference | 2013 | reply: *Anonymous dd6a
Thank you very much for your helpful comments.
P2: Both the categorical-units and the part-units participate in reconstruction. Since the categorical-units become more active than the part-units (as per figure 7), they actually make a larger contribution to the reconstruction (evident in f... |
aJh-lFL2dFJ21 | Discriminative Recurrent Sparse Auto-Encoders | [
"Jason Rolfe",
"Yann LeCun"
] | We present the discriminative recurrent sparse auto-encoder model, which consists of an encoder whose hidden layer is recurrent, and two linear decoders, one to reconstruct the input, and one to predict the output. The hidden layer is composed of rectified linear units (ReLU) and is subject to a sparsity penalty. The n... | [
"discriminative recurrent sparse",
"network",
"hidden layer",
"input",
"number",
"time",
"hidden units",
"model",
"encoder",
"recurrent"
] | https://openreview.net/pdf?id=aJh-lFL2dFJ21 | https://openreview.net/forum?id=aJh-lFL2dFJ21 | 5Br_BDba_D57X | comment | 1,363,395,000,000 | uc38pbD6RhB1Z | [
"everyone"
] | [
"Jason Tyler Rolfe, Yann LeCun"
] | ICLR.cc/2013/conference | 2013 | reply: * Anonymous bc93:
We offer our sincere thanks for your thoughtful comments.
Q1: The dynamics are indeed smooth, as shown in figure 5. However, there is no reason to believe that the dynamics will stabilize beyond the trained interval. In fact, simulations past the trained interval show that the most active... |
aJh-lFL2dFJ21 | Discriminative Recurrent Sparse Auto-Encoders | [
"Jason Rolfe",
"Yann LeCun"
] | We present the discriminative recurrent sparse auto-encoder model, which consists of an encoder whose hidden layer is recurrent, and two linear decoders, one to reconstruct the input, and one to predict the output. The hidden layer is composed of rectified linear units (ReLU) and is subject to a sparsity penalty. The n... | [
"discriminative recurrent sparse",
"network",
"hidden layer",
"input",
"number",
"time",
"hidden units",
"model",
"encoder",
"recurrent"
] | https://openreview.net/pdf?id=aJh-lFL2dFJ21 | https://openreview.net/forum?id=aJh-lFL2dFJ21 | PZqMVyiGDoPcE | review | 1,363,734,420,000 | aJh-lFL2dFJ21 | [
"everyone"
] | [
"Andrew Maas"
] | ICLR.cc/2013/conference | 2013 | review: Interesting work! The use of ReLU units in an RNN is something I haven't seen before. I'd be interested in some discussion on how ReLU compares to e.g. tanh units in the recurrent setting. I imagine ReLU units may suffer less from vanishing/saturation during RNN training.
We have a related model (deep discr... |
aJh-lFL2dFJ21 | Discriminative Recurrent Sparse Auto-Encoders | [
"Jason Rolfe",
"Yann LeCun"
] | We present the discriminative recurrent sparse auto-encoder model, which consists of an encoder whose hidden layer is recurrent, and two linear decoders, one to reconstruct the input, and one to predict the output. The hidden layer is composed of rectified linear units (ReLU) and is subject to a sparsity penalty. The n... | [
"discriminative recurrent sparse",
"network",
"hidden layer",
"input",
"number",
"time",
"hidden units",
"model",
"encoder",
"recurrent"
] | https://openreview.net/pdf?id=aJh-lFL2dFJ21 | https://openreview.net/forum?id=aJh-lFL2dFJ21 | yy9FyB6XUYyiJ | review | 1,362,604,500,000 | aJh-lFL2dFJ21 | [
"everyone"
] | [
"Jürgen Schmidhuber"
] | ICLR.cc/2013/conference | 2013 | review: Interesting implementation and results.
But how is this approach related to the original, unmentioned work on Recurrent Auto-Encoders (RAAMs) by Pollack (1990) and colleagues? What's the main difference, if any? Similar for previous applications of RAAMs to unsupervised history compression, e.g., (Gisslen e... |
0W7-W0EaA4Wak | Joint Training Deep Boltzmann Machines for Classification | [
"Ian Goodfellow",
"Aaron Courville",
"Yoshua Bengio"
] | We introduce a new method for training deep Boltzmann machines jointly. Prior methods require an initial learning pass that trains the deep Boltzmann machine greedily, one layer at a time, or do not perform well on classification tasks. In our approach, we train all layers of the DBM simultaneously, using a novel inpai... | [
"classification",
"new",
"deep boltzmann machines",
"prior methods",
"initial learning pass",
"deep boltzmann machine",
"layer",
"time"
] | https://openreview.net/pdf?id=0W7-W0EaA4Wak | https://openreview.net/forum?id=0W7-W0EaA4Wak | ua4iaAgtT2WVU | review | 1,362,265,800,000 | 0W7-W0EaA4Wak | [
"everyone"
] | [
"anonymous reviewer b31c"
] | ICLR.cc/2013/conference | 2013 | title: review of Joint Training Deep Boltzmann Machines for Classification
review: This breaking-news paper proposes a new method to jointly train the layers of a DBM. DBMs are usually 'pre-trained' in a layer-wise manner using RBMs, a conceivably suboptimal procedure. Here the authors propose to use a deterministic cri... |
0W7-W0EaA4Wak | Joint Training Deep Boltzmann Machines for Classification | [
"Ian Goodfellow",
"Aaron Courville",
"Yoshua Bengio"
] | We introduce a new method for training deep Boltzmann machines jointly. Prior methods require an initial learning pass that trains the deep Boltzmann machine greedily, one layer at a time, or do not perform well on classification tasks. In our approach, we train all layers of the DBM simultaneously, using a novel inpai... | [
"classification",
"new",
"deep boltzmann machines",
"prior methods",
"initial learning pass",
"deep boltzmann machine",
"layer",
"time"
] | https://openreview.net/pdf?id=0W7-W0EaA4Wak | https://openreview.net/forum?id=0W7-W0EaA4Wak | g6eHAgMz5csdN | review | 1,363,214,940,000 | 0W7-W0EaA4Wak | [
"everyone"
] | [
"Ian J. Goodfellow, Aaron Courville, Yoshua Bengio"
] | ICLR.cc/2013/conference | 2013 | review: We have updated our paper and are waiting for arXiv to make the update public. We'll add the updated paper to this webpage as soon as arXiv makes the public link available.
To anonymous reviewer 55e7:
-We'd like to draw to your attention that this paper was submitted to the workshops track. We agree with yo... |
0W7-W0EaA4Wak | Joint Training Deep Boltzmann Machines for Classification | [
"Ian Goodfellow",
"Aaron Courville",
"Yoshua Bengio"
] | We introduce a new method for training deep Boltzmann machines jointly. Prior methods require an initial learning pass that trains the deep Boltzmann machine greedily, one layer at a time, or do not perform well on classification tasks. In our approach, we train all layers of the DBM simultaneously, using a novel inpai... | [
"classification",
"new",
"deep boltzmann machines",
"prior methods",
"initial learning pass",
"deep boltzmann machine",
"layer",
"time"
] | https://openreview.net/pdf?id=0W7-W0EaA4Wak | https://openreview.net/forum?id=0W7-W0EaA4Wak | nnKMnn0dlyqCD | review | 1,362,172,860,000 | 0W7-W0EaA4Wak | [
"everyone"
] | [
"anonymous reviewer 55e7"
] | ICLR.cc/2013/conference | 2013 | title: review of Joint Training Deep Boltzmann Machines for Classification
review: The authors aim to introduce a new method for training deep Boltzmann machines. Inspired by the inference procedure, they turn the model into a two-hidden-layer autoencoder with recurrent connections. Instead of reconstructing all pixels from ... |
0W7-W0EaA4Wak | Joint Training Deep Boltzmann Machines for Classification | [
"Ian Goodfellow",
"Aaron Courville",
"Yoshua Bengio"
] | We introduce a new method for training deep Boltzmann machines jointly. Prior methods require an initial learning pass that trains the deep Boltzmann machine greedily, one layer at a time, or do not perform well on classification tasks. In our approach, we train all layers of the DBM simultaneously, using a novel inpai... | [
"classification",
"new",
"deep boltzmann machines",
"prior methods",
"initial learning pass",
"deep boltzmann machine",
"layer",
"time"
] | https://openreview.net/pdf?id=0W7-W0EaA4Wak | https://openreview.net/forum?id=0W7-W0EaA4Wak | i4E0iizbl6uCv | review | 1,367,449,740,000 | 0W7-W0EaA4Wak | [
"everyone"
] | [
"Ian J. Goodfellow, Aaron Courville, Yoshua Bengio"
] | ICLR.cc/2013/conference | 2013 | review: We have posted an update to the arXiv paper, containing new material that we will present at the workshop. |
0W7-W0EaA4Wak | Joint Training Deep Boltzmann Machines for Classification | [
"Ian Goodfellow",
"Aaron Courville",
"Yoshua Bengio"
] | We introduce a new method for training deep Boltzmann machines jointly. Prior methods require an initial learning pass that trains the deep Boltzmann machine greedily, one layer at a time, or do not perform well on classification tasks. In our approach, we train all layers of the DBM simultaneously, using a novel inpai... | [
"classification",
"new",
"deep boltzmann machines",
"prior methods",
"initial learning pass",
"deep boltzmann machine",
"layer",
"time"
] | https://openreview.net/pdf?id=0W7-W0EaA4Wak | https://openreview.net/forum?id=0W7-W0EaA4Wak | _B-UB_2zNqJCO | review | 1,363,360,620,000 | 0W7-W0EaA4Wak | [
"everyone"
] | [
"anonymous reviewer 55e7"
] | ICLR.cc/2013/conference | 2013 | review: Indeed I didn't notice this was a workshop paper, which then doesn't have to be as complete.
The standard way to train NADE is to go in a fixed order. However, you can also choose a random order for each input (it leads to worse likelihood though). This is then equivalent to blanking random m pixels and predicting remai... |
0W7-W0EaA4Wak | Joint Training Deep Boltzmann Machines for Classification | [
"Ian Goodfellow",
"Aaron Courville",
"Yoshua Bengio"
] | We introduce a new method for training deep Boltzmann machines jointly. Prior methods require an initial learning pass that trains the deep Boltzmann machine greedily, one layer at a time, or do not perform well on classification tasks. In our approach, we train all layers of the DBM simultaneously, using a novel inpai... | [
"classification",
"new",
"deep boltzmann machines",
"prior methods",
"initial learning pass",
"deep boltzmann machine",
"layer",
"time"
] | https://openreview.net/pdf?id=0W7-W0EaA4Wak | https://openreview.net/forum?id=0W7-W0EaA4Wak | uu7m3uY-jKu9P | review | 1,363,234,680,000 | 0W7-W0EaA4Wak | [
"everyone"
] | [
"Ian J. Goodfellow, Aaron Courville, Yoshua Bengio"
] | ICLR.cc/2013/conference | 2013 | review: The arXiv link now contains the second revision. |
7hPJygSqJehqH | Latent Relation Representations for Universal Schemas | [
"Sebastian Riedel",
"Limin Yao",
"Andrew McCallum"
] | Traditional relation extraction predicts relations within some fixed and finite target schema. Machine learning approaches to this task require either manual annotation or, in the case of distant supervision, existing structured sources of the same schema. The need for existing datasets can be avoided by using a univer... | [
"relations",
"schema",
"schemas",
"databases",
"latent relation representations",
"fixed",
"finite target schema",
"machine"
] | https://openreview.net/pdf?id=7hPJygSqJehqH | https://openreview.net/forum?id=7hPJygSqJehqH | VVGqfOMv0jV23 | review | 1,362,170,580,000 | 7hPJygSqJehqH | [
"everyone"
] | [
"anonymous reviewer 129c"
] | ICLR.cc/2013/conference | 2013 | title: review of Latent Relation Representations for Universal Schemas
review: The paper studies techniques for inferring a model of entities and relations capable of performing basic types of semantic inference (e.g., predicting if a specific relation holds for a given pair of entities). The models exploit different ... |
7hPJygSqJehqH | Latent Relation Representations for Universal Schemas | [
"Sebastian Riedel",
"Limin Yao",
"Andrew McCallum"
] | Traditional relation extraction predicts relations within some fixed and finite target schema. Machine learning approaches to this task require either manual annotation or, in the case of distant supervision, existing structured sources of the same schema. The need for existing datasets can be avoided by using a univer... | [
"relations",
"schema",
"schemas",
"databases",
"latent relation representations",
"fixed",
"finite target schema",
"machine"
] | https://openreview.net/pdf?id=7hPJygSqJehqH | https://openreview.net/forum?id=7hPJygSqJehqH | 00Bom31A5XszS | review | 1,362,259,560,000 | 7hPJygSqJehqH | [
"everyone"
] | [
"anonymous reviewer 2d4e"
] | ICLR.cc/2013/conference | 2013 | title: review of Latent Relation Representations for Universal Schemas
review: This paper presents a framework for open information extraction. This problem is usually tackled either via distant weak supervision from a knowledge base (providing structure and relational schemas) or in a totally unsupervised fashion (wi... |
7hPJygSqJehqH | Latent Relation Representations for Universal Schemas | [
"Sebastian Riedel",
"Limin Yao",
"Andrew McCallum"
] | Traditional relation extraction predicts relations within some fixed and finite target schema. Machine learning approaches to this task require either manual annotation or, in the case of distant supervision, existing structured sources of the same schema. The need for existing datasets can be avoided by using a univer... | [
"relations",
"schema",
"schemas",
"databases",
"latent relation representations",
"fixed",
"finite target schema",
"machine"
] | https://openreview.net/pdf?id=7hPJygSqJehqH | https://openreview.net/forum?id=7hPJygSqJehqH | HN_nN48xQYLxO | review | 1,363,302,420,000 | 7hPJygSqJehqH | [
"everyone"
] | [
"Andrew McCallum"
] | ICLR.cc/2013/conference | 2013 | review: This is a test of a note to self. |
gGivgRWZsLgY0 | Clustering Learning for Robotic Vision | [
"Eugenio Culurciello",
"Jordan Bates",
"Aysegul Dundar",
"Jose Carrasco",
"Clement Farabet"
] | We present the clustering learning technique applied to multi-layer feedforward deep neural networks. We show that this unsupervised learning technique can compute network filters with only a few minutes and a much reduced set of parameters. The goal of this paper is to promote the technique for general-purpose robot... | [
"robotic vision",
"clustering learning technique",
"unsupervised learning technique",
"network filters",
"minutes",
"set",
"rameters",
"goal",
"technique"
] | https://openreview.net/pdf?id=gGivgRWZsLgY0 | https://openreview.net/forum?id=gGivgRWZsLgY0 | PiVQP7pKuhiR5 | review | 1,363,392,540,000 | gGivgRWZsLgY0 | [
"everyone"
] | [
"Eugenio Culurciello, Jordan Bates, Aysegul Dundar, Jose Carrasco, Clement Farabet"
] | ICLR.cc/2013/conference | 2013 | review: Dear reviewers, we have fixed all issues that you have reported in your kind review of the manuscript and uploaded a revision. |
gGivgRWZsLgY0 | Clustering Learning for Robotic Vision | [
"Eugenio Culurciello",
"Jordan Bates",
"Aysegul Dundar",
"Jose Carrasco",
"Clement Farabet"
] | We present the clustering learning technique applied to multi-layer feedforward deep neural networks. We show that this unsupervised learning technique can compute network filters with only a few minutes and a much reduced set of parameters. The goal of this paper is to promote the technique for general-purpose robot... | [
"robotic vision",
"clustering learning technique",
"unsupervised learning technique",
"network filters",
"minutes",
"set",
"rameters",
"goal",
"technique"
] | https://openreview.net/pdf?id=gGivgRWZsLgY0 | https://openreview.net/forum?id=gGivgRWZsLgY0 | -YucDnyrcVDfe | review | 1,364,401,500,000 | gGivgRWZsLgY0 | [
"everyone"
] | [
"Eugenio Culurciello, Jordan Bates, Aysegul Dundar, Jose Carrasco, Clement Farabet"
] | ICLR.cc/2013/conference | 2013 | review: we accept the poster presentation, thank you for organizing this! |
gGivgRWZsLgY0 | Clustering Learning for Robotic Vision | [
"Eugenio Culurciello",
"Jordan Bates",
"Aysegul Dundar",
"Jose Carrasco",
"Clement Farabet"
] | We present the clustering learning technique applied to multi-layer feedforward deep neural networks. We show that this unsupervised learning technique can compute network filters with only a few minutes and a much reduced set of parameters. The goal of this paper is to promote the technique for general-purpose robot... | [
"robotic vision",
"clustering learning technique",
"unsupervised learning technique",
"network filters",
"minutes",
"set",
"rameters",
"goal",
"technique"
] | https://openreview.net/pdf?id=gGivgRWZsLgY0 | https://openreview.net/forum?id=gGivgRWZsLgY0 | NL-vN6tmpZNMh | review | 1,362,195,960,000 | gGivgRWZsLgY0 | [
"everyone"
] | [
"anonymous reviewer 5eb5"
] | ICLR.cc/2013/conference | 2013 | title: review of Clustering Learning for Robotic Vision
review: The paper presents an application of clustering-based feature learning ('CL') to image recognition tasks and tracking tasks for robotics. The basic system uses a clustering algorithm to train filters from small patches and then applies them convolutionall... |