| id (string, 9–13 chars) | content (unknown) | decision (string, 13 classes) | reviews (list, 3–12 items) | metareview (list, 1–3 items) | sentence_texts (list, 15–712 items) | opinion_groups (list, 1–114 items) | conflicts_validation (list, 1–155 items) | rebuttal_validation (list, 1–155 items) | opinions (list, 1–155 items) | PDF_path (string, 36–43 chars) | PDF_version (string, 4 classes) | MD_path (string, 34–41 chars) | conference (string, 13 classes) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
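A row in this table pairs paper metadata (a JSON-encoded `content` cell) with per-sentence annotation labels such as `conflicts_validation` and `rebuttal_validation`. The sketch below shows how such a row might be decoded and summarized; it uses only the standard library, and the row contents are illustrative stand-ins (not actual dataset values), since the real loader for this table is not specified here.

```python
import json

# Hypothetical single row mirroring the schema above. Field names come from the
# column header; the values are illustrative, not copied from the real dataset.
row = {
    "id": "rk9eAFcxg",
    "decision": "Accept (Poster)",
    # The `content` cell holds paper metadata serialized as a JSON string.
    "content": json.dumps(
        {"title": "Variational Recurrent Adversarial Deep Domain Adaptation"}
    ),
    "conflicts_validation": ["correct", "correct", "incorrect"],
    "rebuttal_validation": ["correct", "incorrect", "correct"],
    "PDF_path": "benchmark/PDF/ICLR2017_rk9eAFcxg.pdf",
    "conference": "ICLR 2017",
}

# Decode the JSON-encoded metadata before use.
meta = json.loads(row["content"])

def accuracy(labels):
    """Fraction of annotation labels marked 'correct'."""
    return sum(1 for x in labels if x == "correct") / len(labels)

# Per-row agreement statistics over the two validation columns.
conflict_acc = accuracy(row["conflicts_validation"])
rebuttal_acc = accuracy(row["rebuttal_validation"])
print(meta["title"])
print(round(conflict_acc, 3), round(rebuttal_acc, 3))
```

The same pattern extends to the other list-valued columns (e.g. `opinions`, `opinion_groups`): each is a per-paper list whose entries index into `sentence_texts`, so summaries are per-row aggregations like the one above.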
rk9eAFcxg | {
"title": "Variational Recurrent Adversarial Deep Domain Adaptation",
"abstract": "We study the problem of learning domain invariant representations for time series data while transferring the complex temporal latent dependencies between the domains. Our model termed as Variational Recurrent Adversarial Deep Domai... | Accept (Poster) | [
[
{
"role": "Reviewer",
"data": {
"value": {
"question": [
0,
1,
2
]
}
}
},
{
"role": "Author",
"data": {
"value": {
"comment": [
3,
4,
5,
... | [
[
{
"role": "PC",
"data": {
"comment": "The paper offers a contribution to domain adaptation. The novelty with respect to methodology is modest, utilizing an existing variational RNN formulation and adversarial training method in this setting. But the application is important and results are... | [
"1. p. 7, 4.2, Could you provide more details for the baseline models (e.g. for DANN or R-DANN)? Diagrams with exact number filter maps/neurons and layer wiring would be perfect.",
"2. Have the authors tried simpler alternatives to adversarial training, e.g. MMD (reference: Long, 2015, ICML)?",
"3. Do you think... | [
[
17
],
[
21
],
[
23
],
[
4
],
[
12
],
[
10,
15
],
[
9,
14,
19
],
[
1
],
[
7
],
[
13
],
[
2
],
[
3
],
[
8
],
[
16
],
[
20
],
[
22
],
[
0
],
[
11
],
[
5
],... | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"incorrect",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct"
] | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"incorrect",
"correct",
"incorrect",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct"
] | [
{
"sentences": [
{
"role": "Reviewer 1 Further Reply",
"data": [
0
]
},
{
"role": "Author",
"data": [
4,
83
]
}
],
"category": [
"QUAL-REP"
]
},
{
"sentences": [
{
"role"... | benchmark/PDF/ICLR2017_rk9eAFcxg.pdf | openreview | benchmark/MD/ICLR2017_rk9eAFcxg.md | ICLR 2017 |
r1X3g2_xl | {
"TL;DR": "",
"title": "Adversarial Training Methods for Semi-Supervised Text Classification",
"abstract": "Adversarial training provides a means of regularizing supervised learning algorithms while virtual adversarial training is able to extend supervised learning algorithms to the semi-supervised setting.\nHow... | Accept (Poster) | [
[
{
"role": "Reviewer",
"data": {
"value": {
"question": [
0,
1
]
}
}
},
{
"role": "Author",
"data": {
"value": {
"comment": [
2,
3,
4,
5
... | [
[
{
"role": "PC",
"data": {
"comment": "This paper is concerned with extending adversarial and virtual adversarial training to text classification tasks. The main technical contribution is to apply perturbations to word embeddings rather than discrete input symbols. Excellent empirical perfo... | [
"Hi,\nIs the virtual adversarial training used in the pre-training phase?",
"Also In the results of Table 2, for the line of \"Virtual Adv\" and \"Adv+Virtual Adv\", the cost term of formula (3) are applied to all the samples, or only applied to unlabeled samples?",
"Thanks for your comments!\n>Is the virtual a... | [
[
15
],
[
4
],
[
5
],
[
7,
9
],
[
8
],
[
14
],
[
16
],
[
17
],
[
6
],
[
10
],
[
12
],
[
0
],
[
1
],
[
2
],
[
3
],
[
11
],
[
13
]
] | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct"
] | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct"
] | [
{
"sentences": [
{
"role": "Reviewer 1 Further Reply",
"data": [
0,
1
]
},
{
"role": "Author",
"data": [
2,
3,
4,
5
]
}
],
"category": [
"QUAL-REP"
]
},
{
... | benchmark/PDF/ICLR2017_r1X3g2_xl.pdf | openreview | benchmark/MD/ICLR2017_r1X3g2_xl.md | ICLR 2017 |
r1aPbsFle | {
"TL;DR": "",
"title": "Tying Word Vectors and Word Classifiers: A Loss Framework for Language Modeling",
"abstract": "Recurrent neural networks have been very successful at predicting sequences of words in tasks such as language modeling. However, all such models are based on the conventional classification fra... | Accept (Poster) | [
[
{
"role": "Reviewer",
"data": {
"value": {
"question": [
0,
1,
2
]
}
}
},
{
"role": "Author",
"data": {
"value": {
"comment": [
3,
4,
5,
... | [
[
{
"role": "PC",
"data": {
"comment": "pros:\n - nice results on the tasks that justify acceptance of the paper\n \n cons:\n - In my opinion its a big stretch to describe this paper as a novel framework. The reasons for using the specific contrived augmented loss is based on the good result... | [
"- Have you tried your loss on a different dataset than PTB? (maybe now considered small for this task)",
"- There are many hyper-parameters (tau, beta, etc) in the proposed loss; how are they chosen (and what are the values for the reported results)?",
"- How important is tying with respect to training set siz... | [
[
8
],
[
9
],
[
20,
21
],
[
19
],
[
11
],
[
0
],
[
2
],
[
12
],
[
13
],
[
14
],
[
3,
5,
6
],
[
15
],
[
16,
17,
22
],
[
1
],
[
4
],
[
7
],
[
10
],
[
18
],
... | [
"correct",
"correct",
"correct",
"incorrect",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct"
] | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"incorrect",
"correct",
"correct"
] | [
{
"sentences": [
{
"role": "Reviewer 1 Further Reply",
"data": [
0
]
},
{
"role": "Author",
"data": [
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
... | benchmark/PDF/ICLR2017_r1aPbsFle.pdf | openreview | benchmark/MD/ICLR2017_r1aPbsFle.md | ICLR 2017 |
S1c2cvqee | {
"TL;DR": "A Q-learning algorithm for automatically generating neural nets",
"title": "Designing Neural Network Architectures using Reinforcement Learning",
"abstract": "At present, designing convolutional neural network (CNN) architectures requires both human expertise and labor. New architectures are handcraft... | Accept (Poster) | [
[
{
"role": "Reviewer",
"data": {
"value": {
"question": [
0
]
}
}
},
{
"role": "Author",
"data": {
"value": {
"comment": [
1,
2,
3,
4,
5
... | [
[
{
"role": "PC",
"data": {
"comment": "This paper comes up with a novel approach to searching the space of architectures for deep neural networks using reinforcement learning. The idea is straightforward and sensible: use a reinforcement learning strategy to iteratively grow a deep net grap... | [
"How do you position yourself to: https://arxiv.org/abs/1606.02492\n\"Convolutional Neural Fabrics\"",
"Thanks for pointing us to the Computational Neural Fabrics (CNF) work. We will include a citation and comparison in our paper. CNF bypasses the architecture selection process by creating a much wider network wi... | [
[
7
],
[
18,
22
],
[
0
],
[
15
],
[
6,
10,
11
],
[
8
],
[
13
],
[
14
],
[
19
],
[
20
],
[
21
],
[
9
],
[
12
],
[
2
],
[
3
],
[
5
],
[
16
],
[
4
],
[
1
],
... | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct"
] | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct"
] | [
{
"sentences": [
{
"role": "Reviewer 1 Further Reply",
"data": [
0
]
},
{
"role": "Author",
"data": [
1,
2,
3,
4,
5
]
}
],
"category": [
"QUAL-CMP"
]
},
{
... | benchmark/PDF/ICLR2017_S1c2cvqee.pdf | openreview | benchmark/MD/ICLR2017_S1c2cvqee.md | ICLR 2017 |
Hkg4TI9xl | {
"title": "A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks",
"abstract": "We consider the two related problems of detecting if an example is misclassified or out-of-distribution. We present a simple baseline that utilizes probabilities from softmax distributions. Correctl... | Accept (Poster) | [
[
{
"role": "Reviewer",
"data": {
"value": {
"question": [
0,
1
]
}
}
},
{
"role": "Author",
"data": {
"value": {
"comment": [
2,
3,
4,
5,
... | [
[
{
"role": "PC",
"data": {
"comment": "The paper presents an approach that uses the statistics of softmax outputs to identify misclassifications and/or outliers. The reviewers had mostly minor comments on the paper, which appear to have been appropriately addressed in the revised version of... | [
"It would be interesting to also include a generative baseline for out-of-domain classification.",
"For example, on the TIMIT dataset, one could train a generative GMM-HMM system where phone durations are modeled using a Hidden-Markov-Model and output probabilities are modeled using Gaussian Mixture Models. One c... | [
[
6,
9
],
[
11
],
[
16
],
[
0,
7
],
[
12
],
[
13
],
[
14
],
[
15
],
[
1
],
[
3
],
[
10
],
[
5
],
[
4
],
[
2
],
[
8
]
] | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct"
] | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct"
] | [
{
"sentences": [
{
"role": "Reviewer 1 Further Reply",
"data": [
0,
1
]
},
{
"role": "Author",
"data": [
2,
3,
4,
5,
6
]
}
],
"category": [
"QUAL-CMP"
]... | benchmark/PDF/ICLR2017_Hkg4TI9xl.pdf | openreview | benchmark/MD/ICLR2017_Hkg4TI9xl.md | ICLR 2017 |
SkpSlKIel | {
"TL;DR": "",
"title": "Why Deep Neural Networks for Function Approximation?",
"abstract": "Recently there has been much interest in understanding why deep neural networks are preferred to shallow networks. We show that, for a large class of piecewise smooth functions, the number of neurons needed by a shallow n... | Accept (Poster) | [
[
{
"role": "Reviewer",
"data": {
"value": {
"question": [
0,
1,
2
]
}
}
},
{
"role": "Author",
"data": {
"value": {
"comment": [
3,
4,
5,
... | [
[
{
"role": "PC",
"data": {
"comment": "The paper makes a solid technical contribution in proving that the deep networks are exponentially more efficient in function approximation compared to the shallow networks. They take the case of piecewise smooth networks, which is practically motivate... | [
"Section 5 claims that exponentially more units are needed when using more shallow networks, which seems to be referring to Corollary 12, but precisely this statement is given without a proof and only pointing to Theorem 11.",
"Is the conclusion of the paper referring to some statement other than Corollary 12?",
... | [
[
0
],
[
15
],
[
16
],
[
18
],
[
1
],
[
11
],
[
12
],
[
17
],
[
5
],
[
2
],
[
14
],
[
3
],
[
4
],
[
7
],
[
8
],
[
10
],
[
6
],
[
9
],
[
13
]
] | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct"
] | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"incorrect",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct"
] | [
{
"sentences": [
{
"role": "Reviewer 1 Further Reply",
"data": [
0,
2
]
},
{
"role": "Author",
"data": [
3,
4,
5
]
}
],
"category": [
"CLAR-NOT",
"CLAR-WRT"
]
},
... | benchmark/PDF/ICLR2017_SkpSlKIel.pdf | openreview | benchmark/MD/ICLR2017_SkpSlKIel.md | ICLR 2017 |
Byk-VI9eg | {
"title": "Generative Multi-Adversarial Networks",
"abstract": "Generative adversarial networks (GANs) are a framework for producing a generative model by way of a two-player minimax game. In this paper, we propose the \\emph{Generative Multi-Adversarial Network} (GMAN), a framework that extends GANs to multiple ... | Accept (Poster) | [
[
{
"role": "Reviewer",
"data": {
"value": {
"question": [
0,
1
]
}
}
},
{
"role": "Author",
"data": {
"value": {
"comment": [
2,
3,
4,
5,
... | [
[
{
"role": "PC",
"data": {
"comment": "Using an ensemble in the discriminator portion of a GAN is a sensible idea, and it is well explored and described in this paper. Further clarification and exploration of how the multiple discriminators are combined (max versus averaging versus weighted... | [
"1. How is the max value computed? Is it approximated with a finite number of samples?",
"2. In Section 3.1's second paragraph, the \"prime\" functions (D', V') don't appear to be mentioned anywhere else. What are they?",
"1. The max is over the N players and is taken at each time step.",
"For example, at tim... | [
[
5
],
[
18,
22
],
[
1
],
[
0
],
[
4
],
[
6,
24
],
[
26
],
[
11,
12,
17,
20
],
[
13
],
[
14
],
[
15
],
[
21
],
[
2,
3
],
[
10
],
[
7
],
[
8
],
[
9
],
[
... | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"cor... | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"incorrect",
"correct",
"correct",
"incorrect",
"incorrect",
"correct",
"correct",
"correct",
"correct",
"correct",
... | [
{
"sentences": [
{
"role": "Reviewer 1 Further Reply",
"data": [
0
]
},
{
"role": "Author",
"data": [
2,
3,
4
]
}
],
"category": [
"CLAR-WRT"
]
},
{
"sentences": [
{
... | benchmark/PDF/ICLR2017_Byk-VI9eg.pdf | openreview | benchmark/MD/ICLR2017_Byk-VI9eg.md | ICLR 2017 |
BJrFC6ceg | {
"title": "PixelCNN++: Improving the PixelCNN with Discretized Logistic Mixture Likelihood and Other Modifications",
"abstract": "PixelCNNs are a recently proposed class of powerful generative models with tractable likelihood. Here we discuss our implementation of PixelCNNs which we make available at https://githu... | Accept (Poster) | [
[
{
"role": "Reviewer",
"data": {
"value": {
"question": [
0
]
}
}
},
{
"role": "Author",
"data": {
"value": {
"comment": [
1
]
}
}
}
],
[
{
"role": "Revi... | [
[
{
"role": "PC",
"data": {
"comment": " The authors acknowledge that the ideas in the paper are incremental, but assert these are not-trivial improvements upon prior work on pixel CNNs. The reviewers tended to agree with this characterization. The paper presents SOTA pixel likelihood result... | [
"Did you try different ordering (ex. starting from the bottom right corner instead and going upward and leftward)? If so was the results similar?",
"We used the same ordering as the original PixelCNN and did not explore alternatives. My guess is that it would not matter much, but it's worth trying.",
"1. Do you... | [
[
17
],
[
18
],
[
16
],
[
15
],
[
21,
25
],
[
26
],
[
29
],
[
2,
7
],
[
8
],
[
24,
27
],
[
0
],
[
1
],
[
9
],
[
10
],
[
13
],
[
14
],
[
19
],
[
23
],
[
3
... | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"cor... | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"cor... | [
{
"sentences": [
{
"role": "Reviewer 1 Further Reply",
"data": [
0
]
},
{
"role": "Author",
"data": [
1
]
}
],
"category": [
"QUAL-EXP"
]
},
{
"sentences": [
{
"role": "Reviewer 2 ... | benchmark/PDF/ICLR2017_BJrFC6ceg.pdf | openreview | benchmark/MD/ICLR2017_BJrFC6ceg.md | ICLR 2017 |
rJ8uNptgl | {
"TL;DR": "",
"title": "Towards the Limit of Network Quantization",
"abstract": "Network quantization is one of network compression techniques to reduce the redundancy of deep neural networks. It reduces the number of distinct network parameter values by quantization in order to save the storage for them. In thi... | Accept (Poster) | [
[
{
"role": "Reviewer",
"data": {
"value": {
"question": [
0,
1,
2,
3
]
}
}
},
{
"role": "Author",
"data": {
"value": {
"comment": [
4,
5,
... | [
[
{
"role": "PC",
"data": {
"comment": "The paper proposes using quantization schemes to compress the weights of a neural network. The paper carries out a methodical study of first deriving the objective function for optimizing the quantization, and then uses various quantization schemes. Ex... | [
"1) For the case of Huffman coding we see that Uniform quantization works very well actually compared to the Iterative ECSQ. Clarification - end of section 5.3 \"Note that one can use Hessian-weighted mean...\" - so what is used in the end, Hessian weighted mean or just a non-weighted mean? I would like to see a co... | [
[
1
],
[
2,
8,
11,
21
],
[
3
],
[
9
],
[
13
],
[
18
],
[
25,
26,
27,
28
],
[
7
],
[
17,
20
],
[
23
],
[
15
],
[
22
],
[
0
],
[
4
],
[
24
],
[
5
],
[
6
],
... | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"cor... | [
"incorrect",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"incorrect",
"correct",
"correct",
"correct",
... | [
{
"sentences": [
{
"role": "Reviewer 1 Further Reply",
"data": [
0
]
},
{
"role": "Author",
"data": [
4,
5,
6,
7,
8,
24
]
}
],
"category": [
"QUAL-EXP",
... | benchmark/PDF/ICLR2017_rJ8uNptgl.pdf | openreview | benchmark/MD/ICLR2017_rJ8uNptgl.md | ICLR 2017 |
Sy6iJDqlx | {
"title": "Attend, Adapt and Transfer: Attentive Deep Architecture for Adaptive Transfer from multiple sources in the same domain",
"abstract": "Transferring knowledge from prior source tasks in solving a new target task can be useful in several learning applications. The application of transfer poses two serious ... | Accept (Poster) | [
[
{
"role": "Reviewer",
"data": {
"value": {
"question": [
0,
1
]
}
}
},
{
"role": "Author",
"data": {
"value": {
"comment": [
2,
3,
4,
5,
... | [
[
{
"role": "PC",
"data": {
"comment": "The authors present a mixture of experts framework to combine learnt policies to improve multi-task learning while avoiding negative transfer. The key to their approach is a soft attention mechanism, learnt with RL, which enables positive transfer. The... | [
"In Eq. 12, wouldn’t it make each Q_b converge to the same action-value function given by Q_T, and unlearn information from source tasks? Slow update may alleviate the problem.",
"Also, is this the only case where you re-train parameters learned in source tasks? Have you performed comparison between having source... | [
[
13
],
[
15
],
[
17
],
[
6
],
[
10
],
[
16
],
[
11
],
[
1
],
[
3,
18
],
[
4
],
[
7,
14
],
[
8
],
[
9
],
[
19
],
[
20
],
[
0
],
[
12
],
[
2
],
[
5
]
] | [
"correct",
"correct",
"correct",
"incorrect",
"correct",
"correct",
"correct",
"incorrect",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct"
] | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"incorrect",
"incorrect",
"correct",
"incorrect"
] | [
{
"sentences": [
{
"role": "Reviewer 1 Further Reply",
"data": [
0
]
},
{
"role": "Author",
"data": [
2,
3,
4,
5,
6
]
}
],
"category": [
"QUAL-MET"
]
},
{
... | benchmark/PDF/ICLR2017_Sy6iJDqlx.pdf | openreview | benchmark/MD/ICLR2017_Sy6iJDqlx.md | ICLR 2017 |
ByxpMd9lx | {
"TL;DR": "",
"title": "Transfer Learning for Sequence Tagging with Hierarchical Recurrent Networks",
"abstract": "Recent papers have shown that neural networks obtain state-of-the-art performance on several different sequence tagging tasks. One appealing property of such systems is their generality, as excellen... | Accept (Poster) | [
[
{
"role": "Reviewer",
"data": {
"value": {
"question": [
0
]
}
}
},
{
"role": "Author",
"data": {
"value": {
"comment": [
1
]
}
}
}
],
[
{
"role": "Revi... | [
[
{
"role": "PC",
"data": {
"comment": "One weak and one positive review without much concrete substance. The third review is positive, but the experiments are not that convincing: the gains from transfer are small in table 3 and in table 2 it is unclear how strong the baselines are. Given h... | [
"I am curious about what happens if the same task which could be implemented as T-A is treated as T-B or T-C. I'd expect the performance to fall, but unclear by how much.",
"We agree that this would be an interesting study. We will include corresponding experimental results in our later version.",
"As the train... | [
[
11
],
[
16
],
[
6
],
[
0,
4,
7,
14
],
[
12
],
[
13
],
[
18
],
[
1
],
[
2
],
[
3
],
[
9
],
[
5
],
[
17
],
[
8
],
[
10
],
[
15
],
[
19
]
] | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct"
] | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"incorrect",
"correct"
] | [
{
"sentences": [
{
"role": "Reviewer 1 Further Reply",
"data": [
0
]
},
{
"role": "Author",
"data": [
1
]
}
],
"category": [
"QUAL-EXP"
]
},
{
"sentences": [
{
"role": "Reviewer 2 ... | benchmark/PDF/ICLR2017_ByxpMd9lx.pdf | openreview | benchmark/MD/ICLR2017_ByxpMd9lx.md | ICLR 2017 |
BJbD_Pqlg | {
"title": "Human perception in computer vision",
"abstract": "Computer vision has made remarkable progress in recent years. Deep neural network (DNN) models optimized to identify objects in images exhibit unprecedented task-trained accuracy and, remarkably, some generalization ability: new visual problems can now ... | Reject | [
[
{
"role": "Reviewer",
"data": {
"value": {
"question": [
0,
1,
2,
3,
4
]
}
}
},
{
"role": "Author",
"data": {
"value": {
"comment": [
5,
... | [
[
{
"role": "PC",
"data": {
"comment": "I think the reviewers evaluated this paper very carefully and were well balanced. The reviewers all agree that the presented comparison between human vision and DNNs is interesting. At the same time, none of the reviewers would strongly defend the pape... | [
"- Did you do the threshold prediction experiment for differently scaled versions of the input image (as suggested by Table 2, row 2 and 3)?",
"If so, was there a systematical change in the best correlated L1 layer when changing the input size of the image by rescaling? I.e. is there an effect solely based on the... | [
[
23
],
[
19
],
[
20
],
[
22
],
[
4
],
[
9
],
[
10
],
[
0
],
[
1
],
[
3
],
[
13
],
[
14
],
[
15
],
[
21
],
[
24
],
[
2
],
[
5
],
[
6
],
[
7
],
[
8
],
[
... | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"cor... | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"cor... | [
{
"sentences": [
{
"role": "Reviewer 1 Further Reply",
"data": [
0
]
},
{
"role": "Author",
"data": [
6
]
}
],
"category": [
"QUAL-EXP"
]
},
{
"sentences": [
{
"role": "Reviewer 1 ... | benchmark/PDF/ICLR2017_BJbD_Pqlg.pdf | openreview | benchmark/MD/ICLR2017_BJbD_Pqlg.md | ICLR 2017 |
r1Aab85gg | {
"title": "Offline bilingual word vectors, orthogonal transformations and the inverted softmax",
"abstract": "Usually bilingual word vectors are trained \"online''. Mikolov et al. showed they can also be found \"offline\"; whereby two pre-trained embeddings are aligned with a linear transformation, using dictionar... | Accept (Poster) | [
[
{
"role": "Reviewer",
"data": {
"value": {
"question": [
0,
1,
2,
3,
4,
5
]
}
}
},
{
"role": "Author",
"data": {
"value": {
"comment": [
... | [
[
{
"role": "PC",
"data": {
"comment": "This is a nice contribution and that present some novel and interesting ideas. At the same time, the empirical evaluation is somewhat thin and could be improved. Nevertheless, the PCs believe this will make a good contribution to the Conference Track."... | [
"1. Gouws et al., 2015 were not the first to show that aligned sentences can be used alongside monolingual source to learn online bilingual vectors. It was Chandar et al., 2014 [1] who first showed that you can use both monolingual data and sentence aligned bilingual data to learn bilingual representations. Please ... | [
[
17
],
[
19
],
[
10
],
[
28
],
[
29
],
[
6
],
[
7
],
[
8
],
[
15,
22
],
[
0,
3
],
[
1
],
[
2
],
[
24
],
[
25
],
[
27
],
[
18
],
[
20
],
[
21
],
[
26
],
[
... | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"cor... | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"cor... | [
{
"sentences": [
{
"role": "Reviewer 1 Further Reply",
"data": [
0
]
},
{
"role": "Author",
"data": [
8,
62
]
}
],
"category": [
"QUAL-CMP"
]
},
{
"sentences": [
{
"role"... | benchmark/PDF/ICLR2017_r1Aab85gg.pdf | openreview | benchmark/MD/ICLR2017_r1Aab85gg.md | ICLR 2017 |
HyoST_9xl | {
"title": "DSD: Dense-Sparse-Dense Training for Deep Neural Networks",
"abstract": "Modern deep neural networks have a large number of parameters, making them very hard to train. We propose DSD, a dense-sparse-dense training flow, for regularizing deep neural networks and achieving better optimization performance.... | Accept (Poster) | [
[
{
"role": "Reviewer",
"data": {
"value": {
"question": [
0,
1,
2
]
}
}
},
{
"role": "Author",
"data": {
"value": {
"comment": [
3,
4,
5,
... | [
[
{
"role": "PC",
"data": {
"comment": "Important problem, simple (in a positive way) idea, broad experimental evaluation; all reviewers recommend accepting the paper, and the AC agrees. Please incorporate any remaining reviewer feedback."
}
}
]
] | [
"When applying DSD, do the training time increase? by how much?",
"After training with DSD, how much memory storage is saved?",
"Does DSD also improve inference time compared with models trained without DSD?",
"Thanks for your questions, I will try to clarify them here and make sure it's reflected in the fina... | [
[
2
],
[
3
],
[
14
],
[
10
],
[
1
],
[
4
],
[
6
],
[
7
],
[
8,
12
],
[
16
],
[
18
],
[
0
],
[
5
],
[
11
],
[
17
],
[
15
],
[
9
],
[
13
]
] | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct"
] | [
"correct",
"correct",
"correct",
"incorrect",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"incorrect",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct"
] | [
{
"sentences": [
{
"role": "Reviewer 1 Further Reply",
"data": [
0
]
},
{
"role": "Author",
"data": [
4,
5,
6
]
}
],
"category": [
"QUAL-REP"
]
},
{
"sentences": [
{
... | benchmark/PDF/ICLR2017_HyoST_9xl.pdf | openreview | benchmark/MD/ICLR2017_HyoST_9xl.md | ICLR 2017 |
HJ1kmv9xx | {
"title": "LR-GAN: Layered Recursive Generative Adversarial Networks for Image Generation",
"abstract": "We present LR-GAN: an adversarial image generation model which takes scene structure and context into account. Unlike previous generative adversarial networks (GANs), the proposed GAN learns to generate image b... | Accept (Poster) | [
[
{
"role": "Reviewer",
"data": {
"value": {
"question": [
0,
1,
2,
3,
4,
5,
6
]
}
}
},
{
"role": "Author",
"data": {
"value": {
"commen... | [
[
{
"role": "PC",
"data": {
"comment": "The paper proposes a layered approach to image generation, ie starting by generating the background first, followed by generating the foreground objects. All three reviewers are positive, although not enthusiastic. The idea is nice, and the results are... | [
"Thank you for your submission.",
"Would you like to visualize for comparison the segmentation masks inferred from the non category specific models in CIFAR, when the images are not classified according to different categorical labels? It is interesting to see under which conditions the inferred segmentation mask... | [
[
3
],
[
12
],
[
15
],
[
9
],
[
5
],
[
20
],
[
24
],
[
1
],
[
2
],
[
4
],
[
10,
11
],
[
13
],
[
14
],
[
16
],
[
17
],
[
19
],
[
21
],
[
23
],
[
7
],
[
8
]... | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"cor... | [
"correct",
"correct",
"correct",
"correct",
"correct",
"incorrect",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"incorrect",
"incorrect",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"incorrect"... | [
{
"sentences": [
{
"role": "Reviewer",
"data": [
0
]
}
],
"category": [
"N/A"
]
},
{
"sentences": [
{
"role": "Reviewer 1 Further Reply",
"data": [
1
]
},
{
"role": "Author",
... | benchmark/PDF/ICLR2017_HJ1kmv9xx.pdf | openreview | benchmark/MD/ICLR2017_HJ1kmv9xx.md | ICLR 2017 |
HksioDcxl | {
"TL;DR": "",
"title": "Joint Training of Ratings and Reviews with Recurrent Recommender Networks",
"abstract": "Accurate modeling of ratings and text reviews is at the core of successful recommender systems. While neural networks have been remarkably successful in modeling images and natural language, they have... | Reject | [
[
{
"role": "Reviewer",
"data": {
"value": {
"question": [
0,
1
]
}
}
},
{
"role": "Author",
"data": {
"value": {
"comment": [
2,
3,
4,
5,
... | [
[
{
"role": "PC",
"data": {
"comment": "The paper has some nice ideas, but requires a bit to push it over the acceptance threshold. I agree with the reviewers who ask for comparisons with other rating-review methods, and that other evaluation metrics more appropriate to the recommendation ta... | [
"1) How are the input y_t of user RNN differ from the input of the movie RNN?",
"2) Can you provide how y_t is computed in detail? how are x_t, 1_{newbie}, \\tau_t and \\tau_{t-1} combined? How is \\tau_{t} represented?",
"Thank you for the questions.",
"1) y_t is a function of x_t, 1_{newbie}, \\tau_t and \\... | [
[
0
],
[
7,
13,
17
],
[
5
],
[
21
],
[
3,
11
],
[
6
],
[
2
],
[
4
],
[
8,
14
],
[
9
],
[
10
],
[
12
],
[
15,
20
],
[
16
],
[
18
],
[
19
],
[
1
]
] | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"incorrect",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct"
] | [
"correct",
"correct",
"correct",
"correct",
"incorrect",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct"
] | [
{
"sentences": [
{
"role": "Reviewer 1 Further Reply",
"data": [
0
]
},
{
"role": "Author",
"data": [
4,
5,
6,
7,
8
]
}
],
"category": [
"CLAR-NOT"
]
},
{
... | benchmark/PDF/ICLR2017_HksioDcxl.pdf | openreview | benchmark/MD/ICLR2017_HksioDcxl.md | ICLR 2017 |
SJ3rcZcxl | {
"title": "Q-Prop: Sample-Efficient Policy Gradient with An Off-Policy Critic",
"abstract": "Model-free deep reinforcement learning (RL) methods have been successful in a wide variety of simulated domains. However, a major obstacle facing deep RL in the real world is their high sample complexity. Batch policy grad... | Accept (Oral) | [
[
{
"role": "Reviewer",
"data": {
"summary_of_the_paper": null,
"value": {
"review": [
0,
1,
2,
3,
4,
5
]
},
"scores": {
"Solid": null,
"Presentation": null... | [
[
{
"role": "PC",
"data": {
"comment": "This paper presents a nice contribution to the RL literature, finding an intermediate point between the high-variance (but unbiased) gradient estimates from policy optimization methods, and low(er)-variance (but biased) gradient estimates from off-poli... | [
"This paper proposed a new policy gradient method that uses the Taylor expansion of a critic as the control variate to reduce the variance in gradient estimation. The key idea is that the critic can be learned in an off-policy manner so that it is more sample efficient.",
"Although the algorithm structure is simi... | [
[
9
],
[
25
],
[
5
],
[
19
],
[
22
],
[
10
],
[
23
],
[
24
],
[
1
],
[
2
],
[
3
],
[
7
],
[
12
],
[
13
],
[
14
],
[
15
],
[
17,
26
],
[
20
],
[
27
],
[
8
... | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"cor... | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"inc... | [
{
"sentences": [
{
"role": "Reviewer",
"data": [
0,
1
]
}
],
"category": [
"N/A"
]
},
{
"sentences": [
{
"role": "Reviewer 1",
"data": [
2
]
},
{
"role": "Author",
... | benchmark/PDF/ICLR2017_SJ3rcZcxl.pdf | openreview | benchmark/MD/ICLR2017_SJ3rcZcxl.md | ICLR 2017 |
HJWHIKqgl | {
"title": "Generative Models and Model Criticism via Optimized Maximum Mean Discrepancy",
"abstract": "We propose a method to optimize the representation and distinguishability of samples from two probability distributions, by maximizing the estimated power of a statistical test based on the maximum mean discrepan... | Accept (Poster) | [
[
{
"role": "Reviewer",
"data": {
"value": {
"question": [
0,
1,
2
]
}
}
},
{
"role": "Author",
"data": {
"value": {
"comment": [
3,
4,
5,
... | [
[
{
"role": "PC",
"data": {
"comment": "This paper presents two ways that MMDs can be used to aid the GAN training framework. The relation to current literature is clearly explained, and the paper has illuminating side-experiments. The main con is that it's not clear if MMD-based training wi... | [
"1) It is known that the discretisation method (e.g. hard-thresholding vs. Bernoulli-sampling proportional to the grey level) has a big influence on the performance of generative models;",
"at least when training latent variable models with MLL objective. What discretisation method did you use? Do you see differe... | [
[
5
],
[
9
],
[
10,
13
],
[
0
],
[
3
],
[
4
],
[
15
],
[
16
],
[
22
],
[
11,
14
],
[
1
],
[
21
],
[
2
],
[
6
],
[
7
],
[
8
],
[
12
],
[
17
],
[
18
],
[
... | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct"
] | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"incorrect",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"incorrect"
] | [
{
"sentences": [
{
"role": "Reviewer 1 Further Reply",
"data": [
0,
1
]
},
{
"role": "Author",
"data": [
3,
4,
5
]
}
],
"category": [
"QUAL-EXP"
]
},
{
"sentences":... | benchmark/PDF/ICLR2017_HJWHIKqgl.pdf | openreview | benchmark/MD/ICLR2017_HJWHIKqgl.md | ICLR 2017 |
BkVsEMYel | {
"title": "Inductive Bias of Deep Convolutional Networks through Pooling Geometry",
"abstract": "Our formal understanding of the inductive bias that drives the success of convolutional networks on computer vision tasks is limited. In particular, it is unclear what makes hypotheses spaces born from convolution and ... | Accept (Poster) | [
[
{
"role": "Reviewer",
"data": {
"value": {
"question": [
0,
1
]
}
}
},
{
"role": "Author",
"data": {
"value": {
"comment": [
2,
3,
4,
5,
... | [
[
{
"role": "PC",
"data": {
"comment": "The paper uses the notion of separation rank from tensor algebra to analyze the correlations induced through convolution and pooling operations. They show that deep networks have exponentially larger separation ranks compared to shallow ones, and thus,... | [
"Could the authors say more about the invariances of separation rank?",
"That is, under which transformations will two classification tasks have identical separation rank?",
"The separation rank is a property that characterizes multivariate functions.",
"As such, it does not directly apply to classification t... | [
[
2
],
[
0
],
[
5
],
[
9,
10,
19,
19
],
[
11
],
[
12
],
[
20
],
[
15
],
[
7
],
[
3
],
[
4
],
[
8,
17,
17
],
[
14
],
[
6
],
[
1
],
[
13
],
[
16
],
[
18
]
] | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"incorrect",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct"
] | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct"
] | [
{
"sentences": [
{
"role": "Reviewer 1 Further Reply",
"data": [
0,
1
]
},
{
"role": "Author",
"data": [
2,
3,
4,
5,
6,
7,
8,
9
]
}
],
... | benchmark/PDF/ICLR2017_BkVsEMYel.pdf | openreview | benchmark/MD/ICLR2017_BkVsEMYel.md | ICLR 2017 |
ryxB0Rtxx | {
"title": "Identity Matters in Deep Learning",
"abstract": "An emerging design principle in deep learning is that each layer of a deep\nartificial neural network should be able to easily express the identity\ntransformation. This idea not only motivated various normalization techniques,\nsuch as batch normalizatio... | Accept (Poster) | [
[
{
"role": "Reviewer",
"data": {
"value": {
"question": [
0,
1,
2,
3,
4,
5
]
}
}
},
{
"role": "Author",
"data": {
"value": {
"comment": [
... | [
[
{
"role": "PC",
"data": {
"comment": "The paper begins by presenting a simple analysis for deep linear networks. This is more to demonstrate the intuitions behind their derivations, and does not have practical relevance. They then extend to non-linear resnets with ReLU units and demonstrat... | [
"Can you please clarify or discuss those points?",
"- Would the results in Theo 2.1 and 2.2 hold when input and output (x and y) have different dimensions?",
"- Theo 3.2 ensures overfitting the training set under the assumption that all points can not be too close. can you say anything about f(x+delta) how sta... | [
[
14
],
[
18
],
[
28
],
[
9
],
[
13
],
[
20
],
[
22
],
[
25
],
[
8
],
[
16
],
[
19
],
[
21
],
[
10
],
[
23
],
[
3
],
[
4
],
[
5,
12
],
[
6
],
[
11
],
[
1
... | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"cor... | [
"correct",
"correct",
"correct",
"correct",
"correct",
"incorrect",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"incorrect",
"correct",
"incorrect",
... | [
{
"sentences": [
{
"role": "Reviewer",
"data": [
0
]
}
],
"category": [
"N/A"
]
},
{
"sentences": [
{
"role": "Reviewer 1 Further Reply",
"data": [
1
]
},
{
"role": "Author",
... | benchmark/PDF/ICLR2017_ryxB0Rtxx.pdf | openreview | benchmark/MD/ICLR2017_ryxB0Rtxx.md | ICLR 2017 |
HyM25Mqel | {
"title": "Sample Efficient Actor-Critic with Experience Replay",
"abstract": "This paper presents an actor-critic deep reinforcement learning agent with experience replay that is stable, sample efficient, and performs remarkably well on challenging environments, including the discrete 57-game Atari domain and se... | Accept (Poster) | [
[
{
"role": "Reviewer",
"data": {
"value": {
"question": [
0
]
}
}
}
],
[
{
"role": "Reviewer",
"data": {
"value": {
"question": [
1,
2,
3,
4
]
... | [
[
{
"role": "PC",
"data": {
"comment": "pros:\n - set of contributions leading to SOTA for sample complexity wrt Atari (discrete) and continuous domain problems\n - significant experimental analysis\n - long all-in-one paper\n \n cons:\n - builds on existing ideas, although ablation analysis... | [
"I appreciate that you evaluate your approach on both Atari games and continuous control problems. Your evaluation seems to be very thorough.",
"- I thought one of the conclusions of the Mnih et al. paper on A3C was that experience replay was not required with Actor-Critic (assuming parallel agents). You seem to ... | [
[
27
],
[
18
],
[
23
],
[
24
],
[
6
],
[
21
],
[
22
],
[
25
],
[
26
],
[
12
],
[
13
],
[
1
],
[
4
],
[
11
],
[
14
],
[
0
],
[
9,
28,
29
],
[
10
],
[
15
],
[... | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"cor... | [
"correct",
"correct",
"incorrect",
"correct",
"correct",
"incorrect",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"incorrect",
"incorrect",
"incorrect",
"correct",
"correct",
"incorrect",
"correct",
"correct",
"correct",
"incorrect",
"co... | [
{
"sentences": [
{
"role": "Reviewer 1 Further Reply",
"data": [
0
]
}
],
"category": [
"QUAL-EXP"
]
},
{
"sentences": [
{
"role": "Reviewer 2 Further Reply",
"data": [
1
]
},
{
"r... | benchmark/PDF/ICLR2017_HyM25Mqel.pdf | openreview | benchmark/MD/ICLR2017_HyM25Mqel.md | ICLR 2017 |
S1oWlN9ll | {
"TL;DR": "",
"title": "Loss-aware Binarization of Deep Networks",
"abstract": "Deep neural network models, though very powerful and highly successful, are computationally expensive in terms of space and time. Recently, there have been a number of attempts on binarizing the network weights and activations. This ... | Accept (Poster) | [
[
{
"role": "Reviewer",
"data": {
"value": {
"question": [
0,
1
]
}
}
},
{
"role": "Author",
"data": {
"value": {
"comment": [
2,
3,
4,
5,
... | [
[
{
"role": "PC",
"data": {
"comment": "It's a simple contribution supported by empirical and theoretical analyses. After some discussion, all reviewers viewed the paper favourably."
}
}
]
] | [
"Do the binarized activations in the BPN2 setting affect the training procedure? Or does the training procedure remain the same as for BPN?",
"What is the main source of regularization: binarization or good optimization? (i.e. is the training error of BPN higher or lower than the training error of BWN?) Does lear... | [
[
9
],
[
12
],
[
1
],
[
2
],
[
3
],
[
6
],
[
7
],
[
8
],
[
13
],
[
5
],
[
10,
11
],
[
0
],
[
4
]
] | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct"
] | [
"correct",
"correct",
"correct",
"incorrect",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"incorrect",
"correct",
"correct",
"correct"
] | [
{
"sentences": [
{
"role": "Reviewer 1 Further Reply",
"data": [
0
]
},
{
"role": "Author",
"data": [
3,
4,
5
]
}
],
"category": [
"QUAL-REP"
]
},
{
"sentences": [
{
... | benchmark/PDF/ICLR2017_S1oWlN9ll.pdf | openreview | benchmark/MD/ICLR2017_S1oWlN9ll.md | ICLR 2017 |
BJAA4wKxg | {
"title": "A Convolutional Encoder Model for Neural Machine Translation",
"abstract": "The prevalent approach to neural machine translation relies on bi-directional LSTMs to encode the source sentence.\nIn this paper we present a faster and simpler architecture based on a succession of convolutional layers. \nThis... | Reject | [
[
{
"role": "Reviewer",
"data": {
"value": {
"question": [
0,
1,
2,
3
]
}
}
},
{
"role": "Author",
"data": {
"value": {
"comment": [
4,
5,
... | [
[
{
"role": "AC",
"data": {
"comment": "Authors, It would be great to have a rebuttal for this paper as reviewers will be discussing over the next week. Thanks."
}
}
],
[
{
"role": "PC",
"data": {
"comment": "This work demonstrates architectural choices ... | [
"Hi, I have a few questions about your submission.",
"First, can you please explain in more detail why your convnet model is faster for decoding? Is it because less FLOPs are required? Or is it because you were able to use less hidden units? Are both implementations efficient? It's not obvious to me why 15+5 laye... | [
[
12
],
[
16
],
[
3
],
[
4
],
[
2,
5,
8,
14
],
[
17
],
[
18
],
[
10
],
[
11
],
[
1,
6,
9
],
[
15
],
[
7
],
[
13
],
[
0
]
] | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct"
] | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct"
] | [
{
"sentences": [
{
"role": "Reviewer",
"data": [
0
]
}
],
"category": [
"N/A"
]
},
{
"sentences": [
{
"role": "Reviewer 1 Further Reply",
"data": [
1
]
},
{
"role": "Author",
... | benchmark/PDF/ICLR2017_BJAA4wKxg.pdf | openreview | benchmark/MD/ICLR2017_BJAA4wKxg.md | ICLR 2017 |
S11KBYclx | {
"title": "Learning Curve Prediction with Bayesian Neural Networks",
"abstract": "Different neural network architectures, hyperparameters and training protocols lead to different performances as a function of time.\nHuman experts routinely inspect the resulting learning curves to quickly terminate runs with poor h... | Accept (Poster) | [
[
{
"role": "Reviewer",
"data": {
"value": {
"question": [
0,
1,
2,
3,
4
]
}
}
},
{
"role": "Author",
"data": {
"value": {
"comment": [
5,
... | [
[
{
"role": "PC",
"data": {
"comment": "The reviewers agreed that this is a good paper that proposes an interesting approach to modeling training curves. The approach is well motivated in terms of surrogate based (e.g. Bayesian) optimization. They are convinced that a great model of training... | [
"I'd be interested to know more about the basis functions in Figure 3 of the paper.",
"For example, what is \"vapor pressure\" -- is it one of your basis functions? Are these basis functions described somewhere? And, unlike the other ones in Fig 3, vapor pressure can become negative -- does this have any implicat... | [
[
0
],
[
3
],
[
13
],
[
14
],
[
15
],
[
25
],
[
26
],
[
2
],
[
21
],
[
6
],
[
7
],
[
5
],
[
8
],
[
9
],
[
10
],
[
16
],
[
19
],
[
20
],
[
22
],
[
24
],
[
... | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"cor... | [
"correct",
"incorrect",
"correct",
"correct",
"correct",
"correct",
"correct",
"incorrect",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"incorrect",
... | [
{
"sentences": [
{
"role": "Reviewer 1 Further Reply",
"data": [
0,
1
]
},
{
"role": "Author",
"data": [
5,
6,
7,
8,
9,
10
]
}
],
"category": [
"C... | benchmark/PDF/ICLR2017_S11KBYclx.pdf | openreview | benchmark/MD/ICLR2017_S11KBYclx.md | ICLR 2017 |
BJwFrvOeg | {
"title": "A Neural Knowledge Language Model",
"abstract": "Current language models have significant limitations in their ability to encode and decode knowledge. This is mainly because they acquire knowledge based on statistical co-occurrences, even if most of the knowledge words are rarely observed named entities... | Reject | [
[
{
"role": "Reviewer",
"data": {
"value": {
"question": [
0,
1
]
}
}
},
{
"role": "Author",
"data": {
"value": {
"comment": [
2,
3,
4,
5
... | [
[
{
"role": "PC",
"data": {
"comment": "This work introduces a combination of a LM with knowledge based retrieval system. This builds upon the recent trend of incorporating pointers and external information into generation, but includes some novelty, making the paper \"different and more int... | [
"If the word-level alignments to the KB are created using string matching when building the dataset, how do you resolve ambiguities when the same named entities may occur in multiple facts (e.g. the location of birth and the location of death could both be the same state.)",
"What exact representation do you use ... | [
[
25
],
[
26
],
[
28
],
[
3
],
[
5
],
[
7
],
[
13
],
[
15
],
[
19
],
[
24
],
[
27
],
[
14
],
[
2
],
[
11
],
[
16
],
[
17
],
[
18
],
[
8
],
[
9
],
[
10
],
[
... | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"cor... | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"incorrect",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"c... | [
{
"sentences": [
{
"role": "Reviewer 1 Further Reply",
"data": [
0
]
},
{
"role": "Author",
"data": [
2,
3,
4
]
}
],
"category": [
"QUAL-REP"
]
},
{
"sentences": [
{
... | benchmark/PDF/ICLR2017_BJwFrvOeg.pdf | openreview | benchmark/MD/ICLR2017_BJwFrvOeg.md | ICLR 2017 |
HkEI22jeg | {
"TL;DR": "",
"title": "Multilayer Recurrent Network Models of Primate Retinal Ganglion Cell Responses",
"abstract": "Developing accurate predictive models of sensory neurons is vital to understanding sensory processing and brain computations. The current standard approach to modeling neurons is to start with si... | Accept (Poster) | [
[
{
"role": "Reviewer",
"data": {
"value": {
"question": [
0,
1,
2
]
}
}
},
{
"role": "Author",
"data": {
"value": {
"comment": [
3,
4,
5,
... | [
[
{
"role": "PC",
"data": {
"comment": "This work is an important step in developing the tools for understanding the nonlinear response properties of visual neurons. The methods are sound and the results are meaningful. Reviewer 3 gave a much lower score than the other two reviewers because ... | [
"I do not understand the experimental setup. In 3.1 on page 2 you talk about two separate experiments but you don't say what they are.",
"Then you talk about static images and movies. Which did you use? And why are training data interleaved with test data, and why do you need repetitions of the test data? What ar... | [
[
19
],
[
6
],
[
17
],
[
18
],
[
22
],
[
13
],
[
21
],
[
1
],
[
0
],
[
3
],
[
4
],
[
5
],
[
15
],
[
23
],
[
2
],
[
10
],
[
11
],
[
16
],
[
8
],
[
7
],
[
... | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct"
] | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"incorrect",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct"
] | [
{
"sentences": [
{
"role": "Reviewer 1 Further Reply",
"data": [
0,
1
]
},
{
"role": "Author",
"data": [
3,
4,
5,
6,
7,
8,
9
]
}
],
"category"... | benchmark/PDF/ICLR2017_HkEI22jeg.pdf | openreview | benchmark/MD/ICLR2017_HkEI22jeg.md | ICLR 2017 |
HJOZBvcel | {
"title": "Learning to Discover Sparse Graphical Models",
"abstract": "We consider structure discovery of undirected graphical models from observational data. Inferring likely structures from few examples is a complex task often requiring the formulation of priors and sophisticated inference procedures. In the set... | Invite to Workshop Track | [
[
{
"role": "Reviewer",
"data": {
"summary_of_the_paper": null,
"value": {
"review": [
0,
1,
2,
3,
4,
5,
6
]
},
"scores": {
"Solid": null,
"Pres... | [
[
{
"role": "PC",
"data": {
"comment": "The authors provide a modern twist to the classical problem of graphical model selection. Traditionally, the sparsity priors to encourage selection of specific structures is hand-engineered. Instead, the authors propose using a neural network to train ... | [
"This paper proposes a new method for learning graphical models. Combined with a neural network architecture, some sparse edge structure is estimated via sampling methods. In introduction, the authors say that a problem in graphical lasso is model selection.",
"However, the proposed method still implicitly includ... | [
[
12
],
[
3
],
[
11
],
[
5
],
[
6
],
[
10
],
[
14
],
[
7,
16
],
[
8
],
[
9
],
[
1
],
[
4
],
[
13
],
[
17
],
[
2,
15
],
[
0
]
] | [
"correct",
"correct",
"incorrect",
"correct",
"correct",
"correct",
"correct",
"incorrect",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct"
] | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct"
] | [
{
"sentences": [
{
"role": "Reviewer",
"data": [
0
]
}
],
"category": [
"N/A"
]
},
{
"sentences": [
{
"role": "Reviewer 1",
"data": [
1
]
},
{
"role": "Author",
"data": [
... | benchmark/PDF/ICLR2017_HJOZBvcel.pdf | openreview | benchmark/MD/ICLR2017_HJOZBvcel.md | ICLR 2017 |
BkSmc8qll | {
"title": "Dynamic Neural Turing Machine with Continuous and Discrete Addressing Schemes",
"abstract": "In this paper, we extend neural Turing machine (NTM) into a dynamic neural Turing machine (D-NTM) by introducing a trainable memory addressing scheme. This addressing scheme maintains for each memory cell two se... | Reject | [
[
{
"role": "Reviewer",
"data": {
"value": {
"comment": [
0
]
}
}
},
{
"role": "Author",
"data": {
"value": {
"comment": [
1
]
}
}
}
],
[
{
"role": "Revie... | [
[
{
"role": "PC",
"data": {
"comment": "This paper proposes some novel architectural elements, and the results are not far from published DNC results. However, the main issues of this paper are the complexity of the model, lack of justification for certain architectural choices, gaps with re... | [
"I think the footnote is missing.",
"Thanks for pointing out our mistake. That footnote was unnecessary, we are going to remove in the next updated version.",
"You introduce the address matrix, so I expected that you only use the address vectors $a$ for the addressing mechanism but in chapter 3 where you descri... | [
[
16
],
[
1
],
[
2
],
[
3
],
[
4
],
[
5
],
[
6
],
[
7
],
[
8
],
[
9
],
[
21
],
[
22
],
[
34
],
[
0
],
[
10
],
[
11
],
[
20
],
[
24
],
[
26
],
[
41
],
[
... | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"cor... | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"inc... | [
{
"sentences": [
{
"role": "Reviewer 1 Further Reply",
"data": [
0
]
},
{
"role": "Author",
"data": [
1
]
}
],
"category": [
"CLAR-WRT"
]
},
{
"sentences": [
{
"role": "Reviewer 2 ... | benchmark/PDF/ICLR2017_BkSmc8qll.pdf | openreview | benchmark/MD/ICLR2017_BkSmc8qll.md | ICLR 2017 |
BJRIA3Fgg | {
"TL;DR": "",
"title": "Modularized Morphing of Neural Networks",
"abstract": "In this work we study the problem of network morphism, an effective learning scheme to morph a well-trained neural network to a new one with the network function completely preserved. Different from existing work where basic morphing ... | Invite to Workshop Track | [
[
{
"role": "Reviewer",
"data": {
"value": {
"question": [
0,
1
]
}
}
},
{
"role": "Author",
"data": {
"value": {
"comment": [
2
]
}
}
}
],
[
{
... | [
[
{
"role": "PC",
"data": {
"comment": "The paper investigates the problem of morphing one convolutional network into another with application to exploring the model space (starting from a pre-trained baseline model). The resulting morphed models perform better than the baseline, albeit at t... | [
"Definition 1 reads: \"We call such an M as a module. If there exists a process that we are able to morph M0 to M, then we say that module M is morphable, and the morphing process is called modular network morphism.\"",
"It sounds tautologic and is unclear whether the morphism should happen through a sequence of ... | [
[
0
],
[
20,
24
],
[
18,
22
],
[
2,
7,
9,
12
],
[
15
],
[
3
],
[
4
],
[
5
],
[
6
],
[
8
],
[
10
],
[
16
],
[
26,
29
],
[
27
],
[
13
],
[
17
],
[
21,
25
],... | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"cor... | [
"incorrect",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"incorrect",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
... | [
{
"sentences": [
{
"role": "Reviewer 1 Further Reply",
"data": [
0,
1
]
},
{
"role": "Author",
"data": [
2
]
}
],
"category": [
"CLAR-WRT"
]
},
{
"sentences": [
{
"role":... | benchmark/PDF/ICLR2017_BJRIA3Fgg.pdf | openreview | benchmark/MD/ICLR2017_BJRIA3Fgg.md | ICLR 2017 |
S1TER2oll | {
"TL;DR": "",
"title": "FILTER SHAPING FOR CONVOLUTIONAL NEURAL NETWORKS",
"abstract": "Convolutional neural networks (CNNs) are powerful tools for classification of visual inputs. An important property of CNN is its restriction to local connections and sharing of local weights among different locations. In this... | Accept (Poster) | [
[
{
"role": "Reviewer",
"data": {
"value": {
"question": [
0,
1,
2,
3,
4,
5
]
}
}
},
{
"role": "Author",
"data": {
"value": {
"comment": [
... | [
[
{
"role": "PC",
"data": {
"comment": "Using covariance analysis for designing convolution connection structure is a nice and novel idea. All three reviewers recommended acceptance. Since the reviewers were not confident about the theoretical derivations, the AC asked for an opinion from an... | [
"The paper discusses some motivations for using square filters for natural images, but there is another important motivation, which also applies in other domains: very efficient parallel algorithms are available to compute convolutions with small, square kernels (e.g. Winograd). This aspect is not really touched up... | [
[
4,
14
],
[
7
],
[
1
],
[
11
],
[
12
],
[
16
],
[
17
],
[
18
],
[
21
],
[
5,
8
],
[
23
],
[
6
],
[
19
],
[
28
],
[
32
],
[
33
],
[
38
],
[
39
],
[
40
],
[
... | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"cor... | [
"incorrect",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"incorrect",
"correct",
"correct",
"correct",
"correct",
... | [
{
"sentences": [
{
"role": "Reviewer 1 Further Reply",
"data": [
0,
1
]
},
{
"role": "Author",
"data": [
6,
7,
8,
9,
10
]
}
],
"category": [
"SIGN-DOM"
... | benchmark/PDF/ICLR2017_S1TER2oll.pdf | openreview | benchmark/MD/ICLR2017_S1TER2oll.md | ICLR 2017 |
Hk8TGSKlg | {
"TL;DR": "",
"title": "Reasoning with Memory Augmented Neural Networks for Language Comprehension",
"abstract": "Hypothesis testing is an important cognitive process that supports human reasoning. In this paper, we introduce a computational hypothesis testing approach based on memory augmented neural networks. ... | Accept (Poster) | [
[
{
"role": "Reviewer",
"data": {
"value": {
"question": [
0,
1,
2
]
}
}
},
{
"role": "Author",
"data": {
"value": {
"comment": [
3,
4,
5,
... | [
[
{
"role": "PC",
"data": {
"comment": "This paper proposes a memory-enhanced RNN in the vein of NTM, and a novel training method for this architecture of cloze-style QA. The results seem convincing, and the training method is decently novel according to reviewers, although the evaluation se... | [
"This paper proposes a novel approach at cloze form question answering in which the model iteratively proposes queries to the question until it converges on a query and answer.",
"Question: what is the relationship between eqn 10 and eqn 13? Is 13 an alternatively way of updating the query? Where is z_t in the qu... | [
[
6
],
[
1
],
[
15
],
[
0,
3,
13
],
[
8
],
[
2
],
[
5
],
[
11
],
[
12
],
[
14
],
[
9
],
[
10
],
[
4
],
[
7
]
] | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct"
] | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct"
] | [
{
"sentences": [
{
"role": "Reviewer",
"data": [
0
]
}
],
"category": [
"ORIG-MTH"
]
},
{
"sentences": [
{
"role": "Reviewer 1 Further Reply",
"data": [
1
]
},
{
"role": "Author",
... | benchmark/PDF/ICLR2017_Hk8TGSKlg.pdf | openreview | benchmark/MD/ICLR2017_Hk8TGSKlg.md | ICLR 2017 |
HkJq1Ocxl | {
"title": "Programming With a Differentiable Forth Interpreter",
"abstract": "There are families of neural networks that can learn to compute any function, provided sufficient training data. However, given that in practice training data is scarce for all but a small set of problems, a core question is how to incor... | Invite to Workshop Track | [
[
{
"role": "Reviewer",
"data": {
"value": {
"question": [
0,
1,
2,
3,
4
]
}
}
},
{
"role": "Author",
"data": {
"value": {
"comment": [
5,
... | [
[
{
"role": "PC",
"data": {
"comment": "This work is stood out for many reviewers in terms of it's clarity (\"pleasure to read\") and originality, with reviewers calling it \"very ambitious\" and \"provocative\". Reviewers find the approach novel, and to fill an interesting niche in the area... | [
"Interesting ideas. A few questions:",
"- Could you explain in more detail what the Manipulate and Permute decoders do in Sec 3.2? Manipulate just uses an MLP to turn h into a set of m vectors? What is the structure of Permute?",
"- Is it possible to cast your learning problems as discrete search problems? If s... | [
[
16
],
[
18
],
[
20
],
[
15
],
[
2
],
[
6
],
[
10
],
[
11
],
[
13
],
[
17
],
[
19
],
[
21
],
[
3
],
[
4
],
[
1
],
[
9,
12
],
[
0
],
[
5
],
[
7
],
[
8
],
... | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct"
] | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct"
] | [
{
"sentences": [
{
"role": "Reviewer",
"data": [
0
]
}
],
"category": [
"N/A"
]
},
{
"sentences": [
{
"role": "Reviewer 1 Further Reply",
"data": [
1
]
},
{
"role": "Author",
... | benchmark/PDF/ICLR2017_HkJq1Ocxl.pdf | openreview | benchmark/MD/ICLR2017_HkJq1Ocxl.md | ICLR 2017 |
BymIbLKgl | {
"TL;DR": "",
"title": "Learning Invariant Representations Of Planar Curves ",
"abstract": "We propose a metric learning framework for the construction of invariant geometric\nfunctions of planar curves for the Euclidean and Similarity group of transformations.\nWe leverage on the representational power of convo... | Accept (Poster) | [
[
{
"role": "Reviewer",
"data": {
"value": {
"question": [
0,
1,
2,
3,
4
]
}
}
},
{
"role": "Author",
"data": {
"value": {
"comment": [
5,
... | [
[
{
"role": "PC",
"data": {
"comment": "This work proposes learning of local representations of planar curves using convolutional neural networks.\n Invariance to rigid transformations and discriminability are enforced with a metric learning framework using a siamese architecture. Preliminar... | [
"First, I have to say I really did enjoy the result of the paper.",
"Is the number N of points fixed? If yes, this does not seem to be however a limitation thanks to Section 5, second paragraph, could you clarify this? How did you chose the red points? (via the support of the max pooling?)",
"Have you tried to ... | [
[
4
],
[
6
],
[
11
],
[
12
],
[
9
],
[
17
],
[
20
],
[
7,
18
],
[
16
],
[
22,
23
],
[
10
],
[
2
],
[
13
],
[
15
],
[
21
],
[
24
],
[
3
],
[
1
],
[
14
],
[
... | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"cor... | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"cor... | [
{
"sentences": [
{
"role": "Reviewer",
"data": [
0
]
}
],
"category": [
"N/A"
]
},
{
"sentences": [
{
"role": "Reviewer 1 Further Reply",
"data": [
1
]
},
{
"role": "Author",
... | benchmark/PDF/ICLR2017_BymIbLKgl.pdf | openreview | benchmark/MD/ICLR2017_BymIbLKgl.md | ICLR 2017 |
Sk2Im59ex | {
"TL;DR": "",
"title": "Unsupervised Cross-Domain Image Generation",
"abstract": "We study the problem of transferring a sample in one domain to an analog sample in another domain. Given two related domains, S and T, we would like to learn a generative function G that maps an input sample from S to the domain T,... | Accept (Poster) | [
[
{
"role": "Reviewer",
"data": {
"value": {
"question": [
0,
1
]
}
}
},
{
"role": "Author",
"data": {
"value": {
"comment": [
2,
3,
4,
5,
... | [
[
{
"role": "PC",
"data": {
"comment": "The authors propose a application of GANs to map images to new domains with no labels. E.g., an MNIST 3 is used to generate a SVHN 3. Ablation analysis is given to help understand the model. The results are (subjectively) impressive and the approach co... | [
"1. From the introduction and abstract it's not clear what do the authors mean by f. Why would anyone care about f and f-constancy? It would be nice if that could be explained in the very beginning.",
"2. In the emoji experiment, did you apply any tricks to make GAN training stable? What was the stopping criterio... | [
[
12
],
[
2
],
[
0
],
[
8
],
[
5
],
[
9,
14
],
[
18
],
[
6
],
[
11
],
[
16
],
[
17
],
[
21
],
[
22
],
[
23
],
[
24
],
[
3
],
[
10
],
[
15
],
[
1
],
[
20
]... | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"cor... | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"incorrect",
"incorrect",
"correct",
"incorrect",
"correct",
"correct",
"correct",
"correct",
"correct",
... | [
{
"sentences": [
{
"role": "Reviewer 1 Further Reply",
"data": [
0
]
},
{
"role": "Author",
"data": [
3,
4
]
}
],
"category": [
"CLAR-WRT"
]
},
{
"sentences": [
{
"role":... | benchmark/PDF/ICLR2017_Sk2Im59ex.pdf | openreview | benchmark/MD/ICLR2017_Sk2Im59ex.md | ICLR 2017 |
Sk8csP5ex | {
"title": "The loss surface of residual networks: Ensembles and the role of batch normalization",
"abstract": "Deep Residual Networks present a premium in performance in comparison to conventional\nnetworks of the same depth and are trainable at extreme depths. It has\nrecently been shown that Residual Networks be... | Reject | [
[
{
"role": "Reviewer",
"data": {
"value": {
"comment": [
0,
1,
2,
3,
4,
5
]
}
}
},
{
"role": "Author",
"data": {
"value": {
"comment": [
... | [
[
{
"role": "PC",
"data": {
"comment": "The paper presents an analysis of residual networks and argues that the residual networks behave as ensembles of shallow networks, whose depths are dynamic. The authors argue that their model provides a concrete explanation to the effectiveness of resn... | [
"- The issue regarding unrealistic assumptions of spin glass analysis for landscape of neural networks was posed as an open problem in COLT 2015.[1] Can you discuss the effect of this problem for your analysis?",
"- Regarding assumption that minimal of (12) should hold, is this assumption is realistic or not?",
... | [
[
7
],
[
9
],
[
10
],
[
15
],
[
13
],
[
4
],
[
2
],
[
16
],
[
0,
5
],
[
1
],
[
11
],
[
12
],
[
14
],
[
3
],
[
6
],
[
8
],
[
17
]
] | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct"
] | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"incorrect",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct"
] | [
{
"sentences": [
{
"role": "Reviewer 1 Further Reply",
"data": [
0
]
},
{
"role": "Author",
"data": [
7,
8,
9,
10,
53,
54
]
}
],
"category": [
"QUAL-MET"
... | benchmark/PDF/ICLR2017_Sk8csP5ex.pdf | openreview | benchmark/MD/ICLR2017_Sk8csP5ex.md | ICLR 2017 |
Hy-2G6ile | {
"title": "Gated Multimodal Units for Information Fusion",
"abstract": "This paper presents a novel model for multimodal learning based on gated neural networks. The Gated Multimodal Unit (GMU) model is intended to be used as an internal unit in a neural network architecture whose purpose is to find an intermediat... | Invite to Workshop Track | [
[
{
"role": "Reviewer",
"data": {
"value": {
"question": [
0,
1,
2
]
}
}
},
{
"role": "Author",
"data": {
"value": {
"comment": [
3,
4,
5,
... | [
[
{
"role": "PC",
"data": {
"comment": "The authors propose a Gated Muiltimodal Unit to combine multi-modal information (visual and textual). They also collect a large dataset of movie summers and posters. Overall, the reviewers were quite positive, while AR4 points to related models and fee... | [
"Thanks for the interesting paper. I have two questions on the experiments and implementation:",
"1) As for the baseline of the concatenation method, the paper implemented by retraining the network. What about first retraining separately and then concatenating them and fine tuning the last layer? That may reduce... | [
[
19
],
[
24
],
[
23
],
[
7
],
[
11
],
[
12
],
[
15,
16
],
[
22
],
[
25
],
[
1
],
[
2
],
[
3
],
[
5
],
[
10
],
[
13
],
[
20
],
[
21
],
[
26
],
[
6
],
[
8
... | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"cor... | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"cor... | [
{
"sentences": [
{
"role": "Reviewer",
"data": [
0
]
}
],
"category": [
"N/A"
]
},
{
"sentences": [
{
"role": "Reviewer 1 Further Reply",
"data": [
1
]
},
{
"role": "Author",
... | benchmark/PDF/ICLR2017_Hy-2G6ile.pdf | openreview | benchmark/MD/ICLR2017_Hy-2G6ile.md | ICLR 2017 |
HJDdiT9gl | {
"title": "Generating Long and Diverse Responses with Neural Conversation Models",
"abstract": "Building general-purpose conversation agents is a very challenging task, but necessary on the road toward intelligent agents that can interact with humans in natural language. Neural conversation models -- purely data-d... | Reject | [
[
{
"role": "Reviewer",
"data": {
"value": {
"question": [
0,
1
]
}
}
},
{
"role": "Author",
"data": {
"value": {
"comment": [
2
]
}
}
},
{
"role"... | [
[
{
"role": "PC",
"data": {
"comment": "The reviewers agree that the work presents interesting, but incremental, results. They are also unconvinced that this is the right direction for research on dialogue systems to go in or that the methods presented will appeal to a broader audience. It s... | [
"The description of the \"target-glimpse model\" on page 4 and figure 1(b) do not seem to fit together.",
"In particular which symbols y_i are on the decoder side do not seem to fit between them. Please clarify - maybe I am not reading the figure right - but since the figure uses different letters it is not clear... | [
[
0
],
[
5
],
[
15
],
[
2
],
[
4
],
[
1
],
[
3,
8
],
[
6
],
[
7
],
[
9
],
[
11,
14
],
[
10,
12
],
[
13
]
] | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"incorrect",
"correct",
"correct",
"correct"
] | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct"
] | [
{
"sentences": [
{
"role": "Reviewer 1 Further Reply",
"data": [
0,
1
]
},
{
"role": "Author",
"data": [
2,
4,
5,
6,
7,
8,
9,
10,
11,
... | benchmark/PDF/ICLR2017_HJDdiT9gl.pdf | openreview | benchmark/MD/ICLR2017_HJDdiT9gl.md | ICLR 2017 |
r1Chut9xl | {
"title": "Inference and Introspection in Deep Generative Models of Sparse Data",
"abstract": "Deep generative models such as deep latent Gaussian models (DLGMs) are powerful and popular density estimators. However, they have been applied almost exclusively to dense data such as images; DLGMs are rarely applied to... | Reject | [
[
{
"role": "Reviewer",
"data": {
"value": {
"question": [
0,
1,
2
]
}
}
}
],
[
{
"role": "Reviewer",
"data": {
"value": {
"question": [
3
]
}
}... | [
[
{
"role": "PC",
"data": {
"comment": "The paper introduces a number of ideas / heuristics for learning and interpreting deep generative models of text (tf-idf weighting, a combination of using an inference networks with direct optimization of the variational parameters, a method for induci... | [
"Am I to understand that using tf-idf weighting is really the first contribution? Seems like a good idea, but weird to highlight to that level.",
"In Algorithm 1, step 8, it's not clear how psi is being reestimated.",
"Similarly, step 5 could be clearer about what exactly is being sampled and how this is used t... | [
[
24
],
[
26
],
[
3
],
[
16
],
[
21
],
[
1
],
[
8
],
[
20
],
[
22
],
[
29
],
[
23
],
[
25
],
[
18
],
[
0
],
[
12
],
[
6
],
[
2
],
[
4,
15,
27
],
[
7
],
[
... | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"cor... | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"incorrect",
"correct",
"correct",
"c... | [
{
"sentences": [
{
"role": "Reviewer 1 Further Reply",
"data": [
0
]
},
{
"role": "Author",
"data": [
6,
68
]
}
],
"category": [
"ORIG-MTH"
]
},
{
"sentences": [
{
"role"... | benchmark/PDF/ICLR2017_r1Chut9xl.pdf | openreview | benchmark/MD/ICLR2017_r1Chut9xl.md | ICLR 2017 |
BysvGP5ee | {
"title": "Variational Lossy Autoencoder",
"abstract": "Representation learning seeks to expose certain aspects of observed data in a learned representation that's amenable to downstream tasks like classification. \nFor instance, a good representation for 2D images might be one that describes only global structure... | Accept (Poster) | [
[
{
"role": "Reviewer",
"data": {
"value": {
"question": [
0
]
}
}
},
{
"role": "Author",
"data": {
"value": {
"comment": [
1,
2,
3,
4
]
}
... | [
[
{
"role": "PC",
"data": {
"comment": "The reviewers agree that this is a well executed paper, and should be accepted and will make a positive contribution to the conference. In any final version please try to make a connection to the other paper at this conference with the same aims and ex... | [
"In Table 1, when comparing the model with the IAF approximate posterior and the model with the equivalent AF prior, did you do a proper hyperparameter search for each model or did you use the same hyperparameters for both experiments?",
"Thanks for your question! I apologize for the late response due to NIPS.",
... | [
[
30
],
[
29
],
[
32
],
[
4,
11,
14
],
[
24
],
[
8
],
[
5
],
[
12
],
[
13
],
[
3,
6
],
[
16
],
[
17
],
[
1
],
[
2
],
[
7,
9
],
[
15,
22
],
[
20
],
[
28
],... | [
"correct",
"correct",
"correct",
"correct",
"incorrect",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"c... | [
"correct",
"correct",
"correct",
"correct",
"incorrect",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"incorrect",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
... | [
{
"sentences": [
{
"role": "Reviewer 1 Further Reply",
"data": [
0
]
},
{
"role": "Author",
"data": [
2,
3,
4
]
}
],
"category": [
"QUAL-REP"
]
},
{
"sentences": [
{
... | benchmark/PDF/ICLR2017_BysvGP5ee.pdf | openreview | benchmark/MD/ICLR2017_BysvGP5ee.md | ICLR 2017 |
rkFBJv9gg | {
"title": "Learning Features of Music From Scratch",
"abstract": "This paper introduces a new large-scale music dataset, MusicNet, to serve as a source \nof supervision and evaluation of machine learning methods for music research. \nMusicNet consists of hundreds of freely-licensed classical music recordings \nby ... | Accept (Poster) | [
[
{
"role": "Reviewer",
"data": {
"value": {
"question": [
0
]
}
}
},
{
"role": "Author",
"data": {
"value": {
"comment": [
1,
2,
3,
4
]
}
... | [
[
{
"role": "PC",
"data": {
"comment": "There was some question as to weather ICLR is the right venue for this sort of dataset paper, I tend to think it would be a good addition to ICLR as people from the ICLR community are likely to be among the most interested. The problem of note identifi... | [
"I'm curious about the motivation for the task that was chosen for the second part of the paper: polyphonic pitch estimation on isolated fragments is a well-studied task for which many non-ML methods exist. It also looks like the paper only considers learned baselines. Could the authors comment on why this particu... | [
[
11
],
[
21
],
[
17
],
[
4
],
[
6,
10
],
[
14,
23
],
[
19
],
[
20
],
[
22
],
[
26
],
[
27
],
[
0,
15
],
[
1
],
[
2
],
[
3
],
[
5
],
[
7
],
[
8
],
[
9
],
... | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"cor... | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"incorrect",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"c... | [
{
"sentences": [
{
"role": "Reviewer 1 Further Reply",
"data": [
0
]
},
{
"role": "Author",
"data": [
1,
2,
3,
4
]
}
],
"category": [
"QUAL-EXP",
"QUAL-CMP"
]
},
... | benchmark/PDF/ICLR2017_rkFBJv9gg.pdf | openreview | benchmark/MD/ICLR2017_rkFBJv9gg.md | ICLR 2017 |
SkgewU5ll | {
"title": "GRAM: Graph-based Attention Model for Healthcare Representation Learning",
"abstract": "Deep learning methods exhibit promising performance for predictive modeling in healthcare, but two important challenges remain:\n- Data insufficiency: Often in healthcare predictive modeling, the sample size is insuf... | Reject | [
[
{
"role": "Reviewer",
"data": {
"value": {
"question": [
0,
1,
2,
3
]
}
}
},
{
"role": "Author",
"data": {
"value": {
"comment": [
4,
5,
... | [
[
{
"role": "PC",
"data": {
"comment": "The reviewers all agreed that the paper was well written, that the proposed approach is very sensible and intuitive and that the experiments are convincing. However, they are concerned that the proposed work is of limited interest to the ICLR community... | [
"Hi,",
"I do not understand well the argument at page 6, where you speculate on the role of initialized embeddings on different tasks.",
"I cannot understand why the initialized embeddings are more useful on a multiclass task than on a binary one. Can it be that it depends more on the input data that on the nat... | [
[
9
],
[
17
],
[
0
],
[
5
],
[
7
],
[
8
],
[
14
],
[
16
],
[
18
],
[
4
],
[
19
],
[
6
],
[
2
],
[
11
],
[
15
],
[
1
],
[
12
],
[
3
],
[
10
],
[
13
]
] | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct"
] | [
"incorrect",
"correct",
"correct",
"correct",
"correct",
"correct",
"incorrect",
"incorrect",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"incorrect",
"correct",
"correct",
"correct"
] | [
{
"sentences": [
{
"role": "Reviewer 1 Further Reply",
"data": [
0,
1,
2
]
},
{
"role": "Author",
"data": [
4,
5
]
}
],
"category": [
"CLAR-WRT"
]
},
{
"sentences":... | benchmark/PDF/ICLR2017_SkgewU5ll.pdf | openreview | benchmark/MD/ICLR2017_SkgewU5ll.md | ICLR 2017 |
ryaFG5ige | {
"title": "Introducing Active Learning for CNN under the light of Variational Inference",
"abstract": "One main concern of the deep learning community is to increase the capacity of\nrepresentation of deep networks by increasing their depth. This requires to scale\nup the size of the training database accordingly.... | Reject | [
[
{
"role": "Reviewer",
"data": {
"value": {
"question": [
0
]
}
}
},
{
"role": "Author",
"data": {
"value": {
"comment": [
1,
2,
3,
4,
5,
... | [
[
{
"role": "PC",
"data": {
"comment": "The reviewers agree that the paper pursues an interesting direction to explore active example selection for CNN training, but have unanimously raised serious concerns with regards to overall presentation which needs further improvement (I still see spe... | [
"In the approximation of Q(\\beta), which is crucial for the importance sampling of the data in the authors' framework of active learning, they assumed the current MLE estimator w_hat, is already a good approximation to the true parameter \\theta_Y^*; how can this be justified? How is the sampling distribution init... | [
[
4
],
[
18,
21
],
[
6
],
[
7
],
[
11
],
[
12
],
[
13
],
[
16
],
[
19
],
[
15
],
[
2
],
[
14
],
[
20
],
[
1
],
[
3
],
[
8
],
[
9
],
[
22
],
[
0
],
[
5
],
... | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct"
] | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct"
] | [
{
"sentences": [
{
"role": "Reviewer 1 Further Reply",
"data": [
0
]
},
{
"role": "Author",
"data": [
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
... | benchmark/PDF/ICLR2017_ryaFG5ige.pdf | openreview | benchmark/MD/ICLR2017_ryaFG5ige.md | ICLR 2017 |
B1-q5Pqxl | {
"title": "Machine Comprehension Using Match-LSTM and Answer Pointer",
"abstract": "Machine comprehension of text is an important problem in natural language processing. A recently released dataset, the Stanford Question Answering Dataset (SQuAD), offers a large number of real questions and their answers created b... | Accept (Poster) | [
[
{
"role": "Reviewer",
"data": {
"value": {
"question": [
0,
1,
2,
3,
4
]
}
}
},
{
"role": "Author",
"data": {
"value": {
"comment": [
5,
... | [
[
{
"role": "PC",
"data": {
"comment": "This paper provides two approaches to question answering: pointing to spans, and use of match-LSTM. The models are evaluated on SQuAD and MSMARCO. The reviewers we satisfied that, with the provision of additional comparisons and ablation studies submit... | [
"Hi,\nIn the introduction you mentioned some state-of-the-art results (Salesforce Resarch?) that are not discussed in the result section.",
"I have some trouble to understand equation (2) is it possible that you missed the index j in both \\alpha and G?",
"I am probably missing something because in this way th... | [
[
7
],
[
1
],
[
4
],
[
2
],
[
6
],
[
31
],
[
36
],
[
11
],
[
16
],
[
18
],
[
26
],
[
32
],
[
9
],
[
14
],
[
30
],
[
0
],
[
10
],
[
22
],
[
29
],
[
37
],
[
... | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"cor... | [
"correct",
"correct",
"correct",
"correct",
"correct",
"incorrect",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"c... | [
{
"sentences": [
{
"role": "Reviewer 1 Further Reply",
"data": [
0
]
},
{
"role": "Author",
"data": [
6,
35
]
}
],
"category": [
"QUAL-CMP"
]
},
{
"sentences": [
{
"role"... | benchmark/PDF/ICLR2017_B1-q5Pqxl.pdf | openreview | benchmark/MD/ICLR2017_B1-q5Pqxl.md | ICLR 2017 |
r1BJLw9ex | {
"title": "Adjusting for Dropout Variance in Batch Normalization and Weight Initialization",
"abstract": "We show how to adjust for the variance introduced by dropout with corrections to weight initialization and Batch Normalization, yielding higher accuracy. Though dropout can preserve the expected input to a neu... | Reject | [
[
{
"role": "Reviewer",
"data": {
"value": {
"question": [
0,
1,
2
]
}
}
},
{
"role": "Author",
"data": {
"value": {
"comment": [
3,
4,
5,
... | [
[
{
"role": "PC",
"data": {
"comment": "This was a borderline paper. However, no reviewers were willing to champion the acceptance of the paper during the deliberation period. Furthermore, in practice, initialization itself is a hyperparameter that gets tuned automatically. To be a compellin... | [
"Figure 1 shows exponential blowups and decay for Xavier initialization but the Log-loss decay over epochs from Figure 2 looks very smooth and similar to that of presented approach. Can you please comment on this?",
"Why does batch normalization variance needs to be updated before testing?",
"How re-estimation ... | [
[
1
],
[
8
],
[
2,
20
],
[
11,
12
],
[
18
],
[
21
],
[
0
],
[
3
],
[
4
],
[
6
],
[
7
],
[
9
],
[
10
],
[
13
],
[
15
],
[
16
],
[
17
],
[
19
],
[
22
],
[
... | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct"
] | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"incorrect",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct"
] | [
{
"sentences": [
{
"role": "Reviewer 1 Further Reply",
"data": [
0
]
},
{
"role": "Author",
"data": [
3,
4,
5
]
}
],
"category": [
"QUAL-EXP"
]
},
{
"sentences": [
{
... | benchmark/PDF/ICLR2017_r1BJLw9ex.pdf | openreview | benchmark/MD/ICLR2017_r1BJLw9ex.md | ICLR 2017 |
BkLhzHtlg | {
"TL;DR": "",
"title": "Learning Recurrent Representations for Hierarchical Behavior Modeling",
"abstract": "We propose a framework for detecting action patterns from motion sequences and modeling the sensory-motor relationship of animals, using a generative recurrent neural network. The network has a discrimina... | Accept (Poster) | [
[
{
"role": "Reviewer",
"data": {
"value": {
"question": [
0,
1,
2,
3,
4
]
}
}
},
{
"role": "Author",
"data": {
"value": {
"comment": [
5,
... | [
[
{
"role": "PC",
"data": {
"comment": "Originality and Significance:\n The paper develops a recurrently coupled discriminative / generative hierarchical model, as applied to fruit-fly behavior and online handwriting. Qualitative evaluation is provided by generating motions, in addition to ... | [
"The statement that the framework you propose is `modeling the behavior of animals' is quite general.",
"In particular you have considered the case of one insect, the fruit fly, whose motion and action states (behavior) are tightly correlated.",
"The motion of the pen and the hand-written digit is also correlat... | [
[
23
],
[
24
],
[
27
],
[
1
],
[
2
],
[
3
],
[
10
],
[
13
],
[
28,
36
],
[
29,
35
],
[
12
],
[
18
],
[
26
],
[
17
],
[
6,
15,
25,
34
],
[
7,
14,
31
],
[
8
... | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"cor... | [
"incorrect",
"correct",
"correct",
"correct",
"incorrect",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"incorrect",
"incorrect"... | [
{
"sentences": [
{
"role": "Reviewer 1 Further Reply",
"data": [
0,
1,
2,
3,
4
]
},
{
"role": "Author",
"data": [
5,
6,
7,
8,
9
]
}
],
... | benchmark/PDF/ICLR2017_BkLhzHtlg.pdf | openreview | benchmark/MD/ICLR2017_BkLhzHtlg.md | ICLR 2017 |
BJ5UeU9xx | {
"title": "Visualizing Deep Neural Network Decisions: Prediction Difference Analysis",
"abstract": "This article presents the prediction difference analysis method for visualizing the response of a deep neural network to a specific input. When classifying images, the method highlights areas in a given input image ... | Accept (Poster) | [
[
{
"role": "Reviewer",
"data": {
"value": {
"question": [
0
]
}
}
},
{
"role": "Author",
"data": {
"value": {
"comment": [
1,
2
]
}
}
},
{
"role"... | [
[
{
"role": "PC",
"data": {
"comment": "Reviewers felt the paper was clearly written with necessary details given, leading to a paper that was pleasant to read. The differences with prior art raised by one of the reviewers were adequately addressed by the authors in a revision. The paper pre... | [
"Did you use some strategy to choose the example images you are presenting (e.g. for Figure 3, you could have picked the images where the difference between marginal sampling and conditional sampling was largest), did you choose randomly or did you handpick the examples?",
"We chose the images from among a small ... | [
[
10,
13
],
[
11
],
[
14
],
[
2
],
[
5
],
[
8
],
[
17,
18
],
[
0,
1,
7
],
[
6
],
[
19
],
[
3
],
[
4
],
[
15
],
[
16
],
[
12
],
[
9
]
] | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct"
] | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"incorrect",
"correct",
"correct",
"correct"
] | [
{
"sentences": [
{
"role": "Reviewer 1 Further Reply",
"data": [
0
]
},
{
"role": "Author",
"data": [
1,
2
]
}
],
"category": [
"QUAL-EXP"
]
},
{
"sentences": [
{
"role":... | benchmark/PDF/ICLR2017_BJ5UeU9xx.pdf | openreview | benchmark/MD/ICLR2017_BJ5UeU9xx.md | ICLR 2017 |
SJTQLdqlg | {
"title": "Learning to Remember Rare Events",
"abstract": "Despite recent advances, memory-augmented deep neural networks are still limited\nwhen it comes to life-long and one-shot learning, especially in remembering rare events.\nWe present a large-scale life-long memory module for use in deep learning.\nThe modu... | Accept (Poster) | [
[
{
"role": "Reviewer",
"data": {
"value": {
"question": [
0,
1,
2,
3,
4,
5,
6
]
}
}
},
{
"role": "Author",
"data": {
"value": {
"commen... | [
[
{
"role": "PC",
"data": {
"comment": "The primary contribution of this paper is showing that k-nearest-neighbor method based memory can be usefully incorporated in a variety of architectures and supervised learning tasks. The presentation is clear, and results are good. I like the syntheti... | [
"As I understand it, the value $v$ is the supervised desired target value which you use during training.",
"Then every memory operation will store this supervised value $v$?",
"So memory updates are only done during training? Or what value $v$ do you use during evaluation / inference?",
"The output of the mem... | [
[
2
],
[
7
],
[
11
],
[
14
],
[
19
],
[
6
],
[
25
],
[
10
],
[
12
],
[
17
],
[
23
],
[
26,
29
],
[
16
],
[
20
],
[
21
],
[
0
],
[
1
],
[
3
],
[
4
],
[
8
]... | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"cor... | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"cor... | [
{
"sentences": [
{
"role": "Reviewer 1 Further Reply",
"data": [
0,
1,
3
]
},
{
"role": "Author",
"data": [
7
]
}
],
"category": [
"QUAL-REP"
]
},
{
"sentences": [
{
... | benchmark/PDF/ICLR2017_SJTQLdqlg.pdf | openreview | benchmark/MD/ICLR2017_SJTQLdqlg.md | ICLR 2017 |
HyecJGP5ge | {
"title": "NEUROGENESIS-INSPIRED DICTIONARY LEARNING: ONLINE MODEL ADAPTION IN A CHANGING WORLD",
"abstract": "In this paper, we focus on online representation learning in non-stationary environments which may require continuous adaptation of model’s architecture. We propose a novel online dictionary-learning (spa... | Reject | [
[
{
"role": "Reviewer",
"data": {
"value": {
"question": [
0,
1
]
}
}
},
{
"role": "Author",
"data": {
"value": {
"comment": [
2,
3,
4,
5,
... | [
[
{
"role": "PC",
"data": {
"comment": "This paper combines simple heuristics to adapt the size of a dictionary during learning. The heuristics are intuitive: augmenting the dictionary size when correlation between reconstruction and inputs falls below a certain pre-determined threshold, red... | [
"In Mairal et al (2009), the authors discuss a way of reinitialising 'dead' atoms when they present little activity. They use samples from the dataset (instead of noise). Are you using this feature? If not, it seems like a reasonable baseline to include.",
"In the case of sparse dictionary elements, is the online... | [
[
4
],
[
11
],
[
13
],
[
15
],
[
3
],
[
5
],
[
7
],
[
10
],
[
8
],
[
0
],
[
12
],
[
16
],
[
6
],
[
17
],
[
18
],
[
1
],
[
2
],
[
9
],
[
14
]
] | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct"
] | [
"correct",
"correct",
"correct",
"correct",
"correct",
"incorrect",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct"
] | [
{
"sentences": [
{
"role": "Reviewer 1 Further Reply",
"data": [
0
]
},
{
"role": "Author",
"data": [
2,
3,
4,
5,
6,
7,
32,
33,
34
]
}
]... | benchmark/PDF/ICLR2017_HyecJGP5ge.pdf | openreview | benchmark/MD/ICLR2017_HyecJGP5ge.md | ICLR 2017 |
Bk2TqVcxe | {
"TL;DR": "",
"title": "Discovering objects and their relations from entangled scene representations",
"abstract": "Our world can be succinctly and compactly described as structured scenes of objects and relations. A typical room, for example, contains salient objects such as tables, chairs and books, and these ... | Invite to Workshop Track | [
[
{
"role": "Reviewer",
"data": {
"value": {
"question": [
0
]
}
}
},
{
"role": "Author",
"data": {
"value": {
"comment": [
1
]
}
}
}
],
[
{
"role": "Revi... | [
[
{
"role": "PC",
"data": {
"comment": "This paper proposes RNs, relational networks, for representing and reasoning about object relations. Experiments show interesting results such as the capability to disentangling scene descriptions. AR3 praises the idea and the authors for doing this ni... | [
"Are there some examples of the images used in Sec. 5.2.2?",
"We agree that it is helpful to include example images as used in section 5.2.2 and we have now updated the paper to include some examples (Figure 11). Thank you for your suggestion and for your review.",
"1. Why do the scene descriptions have 16 row... | [
[
0
],
[
3
],
[
29
],
[
2
],
[
1
],
[
28
],
[
21
],
[
19
],
[
9
],
[
17
],
[
15
],
[
5
],
[
7,
27
],
[
10
],
[
12
],
[
14
],
[
16
],
[
20
],
[
22
],
[
26
... | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"cor... | [
"correct",
"correct",
"correct",
"correct",
"correct",
"incorrect",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"incorrect",
"incorrect",
"incorrect",
"incorrect",
"correct",
"correct",
"correct",
"correct",
"correct",
"correc... | [
{
"sentences": [
{
"role": "Reviewer 1 Further Reply",
"data": [
0
]
},
{
"role": "Author",
"data": [
1
]
}
],
"category": [
"CLAR-FIG"
]
},
{
"sentences": [
{
"role": "Reviewer 2 ... | benchmark/PDF/ICLR2017_Bk2TqVcxe.pdf | openreview | benchmark/MD/ICLR2017_Bk2TqVcxe.md | ICLR 2017 |
HkuVu3ige | {
"title": "On orthogonality and learning recurrent networks with long term dependencies",
"abstract": "It is well known that it is challenging to train deep neural networks and recurrent neural networks for tasks that exhibit long term dependencies. The vanishing or exploding gradient problem is a well known issue... | Reject | [
[
{
"role": "Reviewer",
"data": {
"value": {
"question": [
0,
1,
2
]
}
}
},
{
"role": "Author",
"data": {
"value": {
"comment": [
3
]
}
}
},
... | [
[
{
"role": "PC",
"data": {
"comment": "The work explores a very interesting way to deal with the problem of propagating information through RNNs. I think the approach looks promising but the reviewers point out that the experimental evaluation is a bit lacking. In particular, it focuses on ... | [
"1- Have you tried any language modeling tasks? How about the adding problem?",
"2- How do you compare the running time of the algorithm to gradient descent, for example when the model has 1000 hidden units?",
"3- Have you compared your methods such as vanilla RNN or Identity RNN?",
"Yes. A vanilla RNN perfor... | [
[
11
],
[
14
],
[
22
],
[
3
],
[
0,
6
],
[
2
],
[
4
],
[
7
],
[
10
],
[
12
],
[
15,
18
],
[
16
],
[
20
],
[
21
],
[
1,
9
],
[
8
],
[
5
],
[
13
],
[
17
],
... | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct"
] | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct"
] | [
{
"sentences": [
{
"role": "Reviewer 1 Further Reply",
"data": [
0
]
},
{
"role": "Author",
"data": [
5
]
}
],
"category": [
"QUAL-EXP"
]
},
{
"sentences": [
{
"role": "Reviewer 1 ... | benchmark/PDF/ICLR2017_HkuVu3ige.pdf | openreview | benchmark/MD/ICLR2017_HkuVu3ige.md | ICLR 2017 |
S1X7nhsxl | {
"title": "Improving Generative Adversarial Networks with Denoising Feature Matching",
"abstract": "We propose an augmented training procedure for generative adversarial networks designed to address shortcomings of the original by directing the generator towards probable configurations of abstract discriminator fe... | Accept (Poster) | [
[
{
"role": "Reviewer",
"data": {
"value": {
"question": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10
]
}
}
},
{
"role": "... | [
[
{
"role": "PC",
"data": {
"comment": "The idea of using a denoising autoencoder on features of the discriminator is sensible, and explored and described well here. The qualitative results are pretty good, but it would be nice to try some of the more recent likelihood-based methods for quan... | [
"Hi. Thank you for your paper.",
"1. I'd like to make sure I understand the role of the denoising autoencoder (DAE). Is it correct that the DAE operates entirely in feature space?",
"That is, it takes some features, corrupts them, and then tries to reconstruct them, without ever seeing any raw images?",
"2. Y... | [
[
3
],
[
4
],
[
1
],
[
5
],
[
8
],
[
33
],
[
26
],
[
14
],
[
25
],
[
34
],
[
2
],
[
7
],
[
15
],
[
23
],
[
27
],
[
30
],
[
6
],
[
10
],
[
11
],
[
12
],
[
... | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"cor... | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"cor... | [
{
"sentences": [
{
"role": "Reviewer",
"data": [
0
]
}
],
"category": [
"N/A"
]
},
{
"sentences": [
{
"role": "Reviewer 1 Further Reply",
"data": [
1,
2
]
},
{
"role": "A... | benchmark/PDF/ICLR2017_S1X7nhsxl.pdf | openreview | benchmark/MD/ICLR2017_S1X7nhsxl.md | ICLR 2017 |
B1M8JF9xx | {
"title": "On the Quantitative Analysis of Decoder-Based Generative Models",
"abstract": "The past several years have seen remarkable progress in generative models which produce convincing samples of images and other modalities. A shared component of some popular models such as generative adversarial networks and ... | Accept (Poster) | [
[
{
"role": "Reviewer",
"data": {
"value": {
"question": [
0
]
}
}
},
{
"role": "Author",
"data": {
"value": {
"comment": [
1
]
}
}
}
],
[
{
"role": "Revi... | [
[
{
"role": "PC",
"data": {
"comment": "This paper describes a method to estimate likelihood scores for a range of models defined by a decoder.\n \n This work has some issues. The paper mainly applies existing ideas. As discussed on openreview, the isotropic Gaussian noise model used to crea... | [
"Does \"running the decoder on z\" (page 9) refer to sampling from $p_\\sigma(x | z)$ or deterministic reconstruction?",
"It refers to deterministic reconstruction, which can also be interpreted as the mean of the distribution $p_\\sigma(x | z)$.",
"Regarding equations leading to Equation 5, does f_1(z) repres... | [
[
4
],
[
14
],
[
28
],
[
29
],
[
30
],
[
32
],
[
1
],
[
7
],
[
8
],
[
0
],
[
9
],
[
13
],
[
17
],
[
18
],
[
31
],
[
10
],
[
23
],
[
6
],
[
12
],
[
16
],
[
... | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"cor... | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"cor... | [
{
"sentences": [
{
"role": "Reviewer 1 Further Reply",
"data": [
0
]
},
{
"role": "Author",
"data": [
1
]
}
],
"category": [
"CLAR-WRT"
]
},
{
"sentences": [
{
"role": "Reviewer 2 ... | benchmark/PDF/ICLR2017_B1M8JF9xx.pdf | openreview | benchmark/MD/ICLR2017_B1M8JF9xx.md | ICLR 2017 |
S1vyujVye | {
"TL;DR": "",
"title": "Deep unsupervised learning through spatial contrasting",
"abstract": "Convolutional networks have marked their place over the last few years as the\nbest performing model for various visual tasks. They are, however, most suited\nfor supervised learning from large amounts of labeled data. ... | Reject | [
[
{
"role": "Reviewer",
"data": {
"value": {
"question": [
0
]
}
}
},
{
"role": "Author",
"data": {
"value": {
"comment": [
1
]
}
}
}
],
[
{
"role": "Revi... | [
[
{
"role": "PC",
"data": {
"comment": "The paper proposes a formulation for unsupervised learning of ConvNets based on the distance between patches sampled from the same and different images. The novelty of the method is rather limited as it's similar to [Doersch et al. 2015] and [Dosovitsk... | [
"Did the authors experiment at all with the patch size? Seems like this could be a valuable addition to the understanding of the model (including the two extremes!)",
"Thank you for your question. Patch size was indeed found to be important, where datasets dependent on more global features (such as MNIST, where s... | [
[
6
],
[
7
],
[
15
],
[
13
],
[
17
],
[
1,
12
],
[
2
],
[
19
],
[
0
],
[
10
],
[
11
],
[
16
],
[
18
],
[
14
],
[
3
],
[
4
],
[
8
],
[
9
],
[
20
],
[
5
]
] | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct"
] | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct"
] | [
{
"sentences": [
{
"role": "Reviewer 1 Further Reply",
"data": [
0
]
},
{
"role": "Author",
"data": [
1
]
}
],
"category": [
"QUAL-EXP"
]
},
{
"sentences": [
{
"role": "Reviewer 2 ... | benchmark/PDF/ICLR2017_S1vyujVye.pdf | openreview | benchmark/MD/ICLR2017_S1vyujVye.md | ICLR 2017 |
SJJKxrsgl | {
"title": "Emergence of foveal image sampling from learning to attend in visual scenes",
"abstract": "We describe a neural attention model with a learnable retinal sampling lattice. The model is trained on a visual search task requiring the classification of an object embedded in a visual scene amidst background d... | Accept (Poster) | [
[
{
"role": "Reviewer",
"data": {
"value": {
"question": [
0
]
}
}
},
{
"role": "Author",
"data": {
"value": {
"comment": [
1,
2,
3,
4,
5,
... | [
[
{
"role": "PC",
"data": {
"comment": "This was a borderline case. All reviewers and the AC appeared to find the paper interesting, while having some reservations. Given the originality of the work, the PCs decided to lean toward acceptance. We do encourage however the authors to revise the... | [
"This is an interesting paper! The experiments are only conducted on digits dataset. Can authors provide results on other real datasets, such as on face dataset (Zheng et al 2015, http://dl.acm.org/citation.cfm?id=2776962), SVHN dataset and CUB_200_2011 birds datasets(Jaderberg 2015, https://papers.nips.cc/paper/58... | [
[
9
],
[
5
],
[
7,
19
],
[
12
],
[
0,
6,
11,
13
],
[
1
],
[
2
],
[
3
],
[
4
],
[
10
],
[
15
],
[
16
],
[
17
],
[
18
],
[
8
],
[
14
]
] | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"incorrect"
] | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"incorrect",
"correct",
"correct",
"correct",
"correct",
"incorrect",
"correct",
"correct"
] | [
{
"sentences": [
{
"role": "Reviewer 1 Further Reply",
"data": [
0
]
},
{
"role": "Author",
"data": [
1,
2,
3,
4,
5,
6,
7,
8
]
}
],
"category"... | benchmark/PDF/ICLR2017_SJJKxrsgl.pdf | openreview | benchmark/MD/ICLR2017_SJJKxrsgl.md | ICLR 2017 |
rJqFGTslg | {
"TL;DR": "",
"title": "Pruning Filters for Efficient ConvNets",
"abstract": "The success of CNNs in various applications is accompanied by a significant increase in the computation and parameter storage costs. Recent efforts toward reducing these overheads involve pruning and compressing the weights of various ... | Accept (Poster) | [
[
{
"role": "Reviewer",
"data": {
"summary_of_the_paper": null,
"value": {
"review": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10
]
},... | [
[
{
"role": "PC",
"data": {
"comment": "The paper presents a simple but effective approach for pruning ConvNet filters with extensive evaluation using several architectures on ImageNet and CIFAR-10."
}
}
]
] | [
"This paper proposes a very simple idea (prune low-weight filters from ConvNets) in order to reduce FLOPs and memory consumption. The proposed method is experimented on with VGG-16 and ResNets on CIFAR10 and ImageNet.\nPros:",
"- Creates *structured* sparsity, which automatically improves performance without chan... | [
[
3
],
[
14
],
[
13
],
[
1
],
[
16
],
[
7
],
[
11
],
[
2
],
[
5
],
[
6,
9
],
[
8
],
[
15
],
[
10
],
[
0
],
[
4
],
[
12
],
[
17
]
] | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct"
] | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"incorrect",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct"
] | [
{
"sentences": [
{
"role": "Reviewer",
"data": [
0
]
}
],
"category": [
"N/A"
]
},
{
"sentences": [
{
"role": "Reviewer 1",
"data": [
1
]
}
],
"category": [
"ORIG-MTH"
]
},
... | benchmark/PDF/ICLR2017_rJqFGTslg.pdf | openreview | benchmark/MD/ICLR2017_rJqFGTslg.md | ICLR 2017 |
Sks9_ajex | {
"TL;DR": "",
"title": "Paying More Attention to Attention: Improving the Performance of Convolutional Neural Networks via Attention Transfer",
"abstract": "Attention plays a critical role in human visual experience. Furthermore, it has recently been demonstrated that attention can also play an important role in... | Accept (Poster) | [
[
{
"role": "Reviewer",
"data": {
"value": {
"question": [
0,
1,
2,
3,
4,
5
]
}
}
},
{
"role": "Author",
"data": {
"value": {
"comment": [
... | [
[
{
"role": "PC",
"data": {
"comment": "Important task (attention models), interesting distillation application, well-written paper. The authors have been responsive in updating the paper, adding new experiments, and being balanced in presenting their findings. I support accepting this paper... | [
"Here are a few short pre-review questions",
"1) Why don't you use the same settings for the activation-based and gradient-based attention transfer on the CIFAR-10, it would be nice to compare those two approaches as it not clear what are their relative advantages/drawbacks.",
"2) Intermediate hints in Fitnet h... | [
[
6,
26
],
[
15
],
[
22
],
[
24
],
[
27
],
[
19
],
[
30
],
[
3
],
[
1,
12,
18,
25,
28
],
[
2
],
[
4
],
[
9
],
[
11
],
[
14
],
[
16
],
[
21
],
[
23
],
[
31
... | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"cor... | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"incorrect",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"incorrect",
"correct",
"correct",
"correct",
"incorrect",
... | [
{
"sentences": [
{
"role": "Reviewer",
"data": [
0
]
}
],
"category": [
"N/A"
]
},
{
"sentences": [
{
"role": "Reviewer 1 Further Reply",
"data": [
1
]
},
{
"role": "Author",
... | benchmark/PDF/ICLR2017_Sks9_ajex.pdf | openreview | benchmark/MD/ICLR2017_Sks9_ajex.md | ICLR 2017 |
SJMGPrcle | {
"title": "Learning to Navigate in Complex Environments",
"abstract": "Learning to navigate in complex environments with dynamic elements is an important milestone in developing AI agents. In this work we formulate the navigation question as a reinforcement learning problem and show that data efficiency and task p... | Accept (Poster) | [
[
{
"role": "Author",
"data": {
"value": {
"comment": [
0,
1,
2,
3
]
}
}
}
],
[
{
"role": "Reviewer",
"data": {
"value": {
"question": [
4
]
... | [
[
{
"role": "PC",
"data": {
"comment": "The paper proposes an approach to navigating in complex environments using RL agents that have auxiliary tasks besides just the successful navigation itself (for instance, the task of predicting depth from images). The idea is a nice one, and the demon... | [
"We have just submitted an updated version of the paper, including additional references as well as new results on those agents that are enhanced with auxiliary tasks.\nSpecifically, we investigated:",
"1) a new way of performing depth prediction, by formulating it as a classification task (over the quantized dep... | [
[
0,
3,
15
],
[
1
],
[
17
],
[
9,
13
],
[
16
],
[
10
],
[
12
],
[
2
],
[
4
],
[
5
],
[
6
],
[
8
],
[
11
],
[
14
],
[
18
],
[
7
]
] | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct"
] | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct"
] | [
{
"sentences": [
{
"role": "Reviewer 2 Further Reply",
"data": [
4
]
},
{
"role": "Author",
"data": [
6,
7,
8
]
}
],
"category": [
"CLAR-WRT"
]
},
{
"sentences": [
{
... | benchmark/PDF/ICLR2017_SJMGPrcle.pdf | openreview | benchmark/MD/ICLR2017_SJMGPrcle.md | ICLR 2017 |
SkhU2fcll | {
"title": "Deep Multi-task Representation Learning: A Tensor Factorisation Approach",
"abstract": "Most contemporary multi-task learning methods assume linear models. This setting is considered shallow in the era of deep learning. In this paper, we present a new deep multi-task representation learning framework th... | Accept (Poster) | [
[
{
"role": "Reviewer",
"data": {
"value": {
"question": [
0,
1,
2,
3
]
}
}
},
{
"role": "Author",
"data": {
"value": {
"comment": [
4,
5,
... | [
[
{
"role": "PC",
"data": {
"comment": "The reviews for this paper were quite mixed, with one strong accept and a marginal reject. A fourth reviewer with strong expertise in multi-task learning and deep learning was brought in to read the latest manuscript. Due to time constraints, this four... | [
"1) As the proposed soft-sharing utilizes task dependent weights, it would be great to discuss the increase of model size compared to the conventional hard-sharing.",
"Furthermore, experiments that shown the gains of the proposed method are not coming from the extra params would make the paper more convincing.",
... | [
[
4
],
[
2
],
[
5
],
[
10
],
[
0
],
[
1
],
[
7
],
[
8
],
[
3
],
[
6
],
[
9
]
] | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct"
] | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct"
] | [
{
"sentences": [
{
"role": "Reviewer 1 Further Reply",
"data": [
0
]
},
{
"role": "Author",
"data": [
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
... | benchmark/PDF/ICLR2017_SkhU2fcll.pdf | openreview | benchmark/MD/ICLR2017_SkhU2fcll.md | ICLR 2017 |
BkJsCIcgl | {
"TL;DR": "",
"title": "The Predictron: End-To-End Learning and Planning",
"abstract": "One of the key challenges of artificial intelligence is to learn models that are effective in the context of planning. In this document we introduce the predictron architecture. The predictron consists of a fully abstract mod... | Reject | [
[
{
"role": "Reviewer",
"data": {
"value": {
"question": [
0,
1,
2,
3
]
}
}
},
{
"role": "Author",
"data": {
"value": {
"comment": [
4,
5,
... | [
[
{
"role": "PC",
"data": {
"comment": "There is potential here for a great paper, unfortunately in its current form there is too deep of a disconnect between the framing and promise of the presentation, and the empirical validation actually delivered by the experiments.\n The choice of the ... | [
"- I interpret setting the different discount factors as forcing the internal representation to represent different time-scales. Is this right?",
"- If not, how is the model deciding its own timescales? (which is more or less suggested in the intro)",
"- (and if so, do you think there a simple extension to this... | [
[
0,
1
],
[
4
],
[
6
],
[
8,
14
],
[
10
],
[
12
],
[
16
],
[
17
],
[
2
],
[
15
],
[
9
],
[
28
],
[
3
],
[
18
],
[
19
],
[
20
],
[
21
],
[
22
],
[
24,
25,
... | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"cor... | [
"incorrect",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"incorrect",
"correct",
"correct",
"correct",
"correct",
... | [
{
"sentences": [
{
"role": "Reviewer 1 Further Reply",
"data": [
0
]
},
{
"role": "Author",
"data": [
4,
5,
6,
7,
8
]
}
],
"category": [
"CLAR-WRT"
]
},
{
... | benchmark/PDF/ICLR2017_BkJsCIcgl.pdf | openreview | benchmark/MD/ICLR2017_BkJsCIcgl.md | ICLR 2017 |
HJhcg6Fxg | {
"title": "Binary Paragraph Vectors",
"abstract": "Recently Le & Mikolov described two log-linear models, called Paragraph Vector, that can be used to learn state-of-the-art distributed representations of documents. Inspired by this work, we present Binary Paragraph Vector models: simple neural networks that learn... | Reject | [
[
{
"role": "Reviewer",
"data": {
"value": {
"question": [
0
]
}
}
},
{
"role": "Author",
"data": {
"value": {
"comment": [
1,
2,
3,
4
]
}
... | [
[
{
"role": "PC",
"data": {
"comment": "This paper proposes to binarize Paragraph Vector distributed representations in an end-to-end framework. Experiments demonstrate that this beats autoencoder-based binary codes. However, the performance is similar to using paragraph vectors followed by ... | [
"The paper is mostly compared against the full model, without binarization. Have you tried a simple baseline such that 1) first learn the network; 2) make an unsupervised learning for binarization; and 3) optionally, fine-tuning of the last layer?",
"Dear Reviewer,\nThank you for your helpful comment.",
"Follow... | [
[
6
],
[
29
],
[
30
],
[
35
],
[
4
],
[
5
],
[
7
],
[
8
],
[
11,
25
],
[
27
],
[
31
],
[
33
],
[
34
],
[
36
],
[
10
],
[
12,
13
],
[
15,
19
],
[
18
],
[
24
... | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"incorrect",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"c... | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"cor... | [
{
"sentences": [
{
"role": "Reviewer 1 Further Reply",
"data": [
0
]
},
{
"role": "Author",
"data": [
2,
3,
4
]
}
],
"category": [
"QUAL-EXP"
]
},
{
"sentences": [
{
... | benchmark/PDF/ICLR2017_HJhcg6Fxg.pdf | openreview | benchmark/MD/ICLR2017_HJhcg6Fxg.md | ICLR 2017 |
BJKYvt5lg | {
"title": "PixelVAE: A Latent Variable Model for Natural Images",
"abstract": "Natural image modeling is a landmark challenge of unsupervised learning. Variational Autoencoders (VAEs) learn a useful latent representation and model global structure well but have difficulty capturing small details. PixelCNN models d... | Accept (Poster) | [
[
{
"role": "Reviewer",
"data": {
"value": {
"question": [
0,
1,
2
]
}
}
},
{
"role": "Author",
"data": {
"value": {
"comment": [
3,
4,
5,
... | [
[
{
"role": "PC",
"data": {
"comment": "The paper provides a solution that combines best of latent variable models and auto-regressive models. The concept is executed well and will make a positive contribution to the conference."
}
}
]
] | [
"- What exactly are the architectures for the VAE and PixelCNN layers used in the LSUN and ImageNet experiments (in terms of number of layers and units per layer)?",
"- Do you have any guarantees for the robustness of the estimate of the NLL on the MNIST dataset?",
"- Could you please further explain in what se... | [
[
2
],
[
6
],
[
15
],
[
12,
17
],
[
14
],
[
20
],
[
5
],
[
7,
22
],
[
10
],
[
13
],
[
19
],
[
9
],
[
0
],
[
3,
21
],
[
4
],
[
1
],
[
8
],
[
11
],
[
16
],
... | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct"
] | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct"
] | [
{
"sentences": [
{
"role": "Reviewer 1 Further Reply",
"data": [
0
]
},
{
"role": "Author",
"data": [
4,
5,
6,
7,
8,
9,
10,
11
]
}
],
"categor... | benchmark/PDF/ICLR2017_BJKYvt5lg.pdf | openreview | benchmark/MD/ICLR2017_BJKYvt5lg.md | ICLR 2017 |
Bk0FWVcgx | {
"title": "Topology and Geometry of Half-Rectified Network Optimization",
"abstract": "The loss surface of deep neural networks has recently attracted interest \nin the optimization and machine learning communities as a prime example of \nhigh-dimensional non-convex problem. Some insights were recently gained usin... | Accept (Poster) | [
[
{
"role": "Reviewer",
"data": {
"value": {
"question": [
0
]
}
}
},
{
"role": "Author",
"data": {
"value": {
"comment": [
1,
2,
3,
4
]
}
... | [
[
{
"role": "PC",
"data": {
"comment": "The paper presents an analysis of deep ReLU networks, and contrasts it with linear networks. It makes good progress towards providing a theoretical explanation of the difficult problem of characterizing the critical points of this highly nonconvex func... | [
"At some point, would one expect overparameterization to lead to overfitting? Can anything be said about whether the amount of overparameterization needed to reduce problems from local minima would still allow one to avoid overfitting?",
"Absolutely, at some point, overparameterization would lead to overfitting.... | [
[
1
],
[
19
],
[
21
],
[
14
],
[
15
],
[
18
],
[
22
],
[
24
],
[
2
],
[
4,
20
],
[
5
],
[
13
],
[
23
],
[
0
],
[
17
],
[
8
],
[
9
],
[
12
],
[
16
],
[
3
]... | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"incorrect",
"correct",
"correct",
"correct",
"c... | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"cor... | [
{
"sentences": [
{
"role": "Reviewer 1 Further Reply",
"data": [
0
]
},
{
"role": "Author",
"data": [
1,
2,
3,
4,
24
]
}
],
"category": [
"QUAL-EXP"
]
},
{
... | benchmark/PDF/ICLR2017_Bk0FWVcgx.pdf | openreview | benchmark/MD/ICLR2017_Bk0FWVcgx.md | ICLR 2017 |
rksfwnFxl | {
"TL;DR": "",
"title": "LSTM-Based System-Call Language Modeling and Ensemble Method for Host-Based Intrusion Detection",
"abstract": "In computer security, designing a robust intrusion detection system is one of the most fundamental and important problems. In this paper, we propose a system-call language-modeli... | Reject | [
[
{
"role": "Reviewer",
"data": {
"value": {
"question": [
0,
1,
2
]
}
}
},
{
"role": "Author",
"data": {
"value": {
"comment": [
3,
4,
5,
... | [
[
{
"role": "PC",
"data": {
"comment": "This is a pure application paper: an application of LSTMs to host intrusion detection systems based upon observed system calls. And from the application standpoint, I don't believe this is a bad paper, the authors seem to achieve reasonable results fro... | [
"The context of the paragraph at the top of page 8 (starting with \"According to Creech and Hu) is not clear. Going back to that paper, it looks like tests were done with both ADFA-LD and KDD-98. Are the numbers quoted in this paragraph for ADFA-LD?",
"More importantly, it seems like the LSTM proposed in the pape... | [
[
0
],
[
16
],
[
8
],
[
10
],
[
18
],
[
11
],
[
12
],
[
13
],
[
1
],
[
7
],
[
14
],
[
19
],
[
4
],
[
6
],
[
2
],
[
9
],
[
17
],
[
3
],
[
5
],
[
15
]
] | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct"
] | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"incorrect",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct"
] | [
{
"sentences": [
{
"role": "Reviewer 1 Further Reply",
"data": [
0
]
},
{
"role": "Author",
"data": [
4,
5
]
}
],
"category": [
"CLAR-WRT"
]
},
{
"sentences": [
{
"role":... | benchmark/PDF/ICLR2017_rksfwnFxl.pdf | openreview | benchmark/MD/ICLR2017_rksfwnFxl.md | ICLR 2017 |
BkmM8Dceg | {
"TL;DR": "",
"title": "Warped Convolutions: Efficient Invariance to Spatial Transformations",
"abstract": "Convolutional Neural Networks (CNNs) are extremely efficient, since they exploit the inherent translation-invariance of natural images. However, translation is just one of a myriad of useful spatial transf... | Reject | [
[
{
"role": "Reviewer",
"data": {
"value": {
"question": [
0,
1,
2
]
}
}
},
{
"role": "Author",
"data": {
"value": {
"comment": [
3,
4,
5,
... | [
[
{
"role": "PC",
"data": {
"comment": "The reviewers found the idea interesting and practical but had concerns about the novelty of the approach and the claims and theory presented in the paper. In particular, it seems that the reviewers feel that the authors' claim to present novel theory ... | [
"How does the method deal with local transformations? It seems like the method would only be able to deal with a single global transformation.",
"For example, the log-polar warp would turn rotations around x_0 into translations, but not rotations about arbitrary points. Is that correct?",
"Regarding computation... | [
[
3
],
[
4
],
[
11,
18,
21
],
[
12
],
[
22
],
[
25
],
[
27
],
[
33
],
[
19
],
[
23
],
[
26
],
[
28
],
[
30
],
[
31
],
[
0
],
[
1
],
[
5,
13
],
[
6
],
[
7
... | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"cor... | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"incorrect",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"c... | [
{
"sentences": [
{
"role": "Reviewer 1 Further Reply",
"data": [
0,
1
]
},
{
"role": "Author",
"data": [
3,
4,
5
]
}
],
"category": [
"QUAL-MET"
]
},
{
"sentences":... | benchmark/PDF/ICLR2017_BkmM8Dceg.pdf | openreview | benchmark/MD/ICLR2017_BkmM8Dceg.md | ICLR 2017 |
r1fYuytex | {
"title": "Sparsely-Connected Neural Networks: Towards Efficient VLSI Implementation of Deep Neural Networks",
"abstract": "Recently deep neural networks have received considerable attention due to their ability to extract and represent high-level abstractions in data sets. Deep neural networks such as fully-conne... | Accept (Poster) | [
[
{
"role": "Reviewer",
"data": {
"value": {
"question": [
0,
1
]
}
}
},
{
"role": "Author",
"data": {
"value": {
"comment": [
2,
3,
4,
5,
... | [
[
{
"role": "PC",
"data": {
"comment": "After discussion, the reviewers unanimously recommend accepting the paper."
}
}
]
] | [
"Does the Asic implementation of the sparsely-connected network also provide a speed-up over fully connected layers? Can you add the speed numbers to Table 6?",
"Do you have any plans to experiment with Imagenet dataset?",
"We thank the reviewer for the comments.",
"The number of inputs of each neuron determi... | [
[
11
],
[
12
],
[
13
],
[
2
],
[
3
],
[
6,
6
],
[
7
],
[
14
],
[
17
],
[
0
],
[
1
],
[
4,
4
],
[
5,
5
],
[
9
],
[
16
],
[
8
],
[
10
],
[
15
]
] | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct"
] | [
"incorrect",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"incorrect",
"correct",
"correct",
"correct",
"correct"
] | [
{
"sentences": [
{
"role": "Reviewer 1 Further Reply",
"data": [
0
]
},
{
"role": "Author",
"data": [
3,
4,
5
]
}
],
"category": [
"QUAL-EXP"
]
},
{
"sentences": [
{
... | benchmark/PDF/ICLR2017_r1fYuytex.pdf | openreview | benchmark/MD/ICLR2017_r1fYuytex.md | ICLR 2017 |
rk5upnsxe | {
"TL;DR": "",
"title": "Normalizing the Normalizers: Comparing and Extending Network Normalization Schemes",
"abstract": "Normalization techniques have only recently begun to be exploited in supervised learning tasks. Batch normalization exploits mini-batch statistics to normalize the activations. This was shown... | Accept (Poster) | [
[
{
"role": "Reviewer",
"data": {
"value": {
"question": [
0,
1,
2,
3
]
}
}
},
{
"role": "Author",
"data": {
"value": {
"comment": [
4,
5,
... | [
[
{
"role": "PC",
"data": {
"comment": "On the one hand, the topic is considered important and the paper is technically correct. On the ohter hand, novelty and theoretical depth are a bit lacking. Overall, this is a borderline paper. \n\nStill, the Program Chairs recommend it for a poster pr... | [
"- There does not seem to be consistently the best normalization method and the L1 regularization seems to be rather important. Do you have any intuition why BN* performs the best on CIFAR while DN* on the super-resolution task?",
"- Do you think that it would be possible to see similar effects on larger datasets... | [
[
3
],
[
18
],
[
21
],
[
15
],
[
24
],
[
10
],
[
25
],
[
5,
7
],
[
11
],
[
14,
16
],
[
20
],
[
0
],
[
1
],
[
6
],
[
8
],
[
12
],
[
26
],
[
2
],
[
17
],
[
... | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"cor... | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"incorrect",
"correct",
"incorrect",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
... | [
{
"sentences": [
{
"role": "Reviewer 1 Further Reply",
"data": [
0
]
},
{
"role": "Author",
"data": [
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
... | benchmark/PDF/ICLR2017_rk5upnsxe.pdf | openreview | benchmark/MD/ICLR2017_rk5upnsxe.md | ICLR 2017 |
Hk8N3Sclg | {
"TL;DR": "",
"title": "Multi-Agent Cooperation and the Emergence of (Natural) Language",
"abstract": "The current mainstream approach to train natural language systems is to expose them to large amounts of text. This passive learning is problematic if we are in- terested in developing interactive machines, such... | Accept (Oral) | [
[
{
"role": "Reviewer",
"data": {
"summary_of_the_paper": null,
"value": {
"review": [
0,
1,
2,
3,
4,
5,
6,
7,
8
]
},
"scores": {
... | [
[
{
"role": "PC",
"data": {
"comment": "The authors present some initial findings on language emergence using multi-agent, referential games. The learning alternates between REINFORCE and supervised classification, which grounds the language. Pro- this is a relevant, novel paper. Con - exper... | [
"In this paper, a referential game is proposed between two agents. Both agents observe two images. The first agent, called the sender, receive a binary target variable (t) and must send a symbol (message) to the second agent, called the receiver, such that this agent can recover the target. The agents both get a re... | [
[
4
],
[
8
],
[
3
],
[
5
],
[
12
],
[
13
],
[
7
],
[
1
],
[
14
],
[
6
],
[
9
],
[
11
],
[
0
],
[
2
],
[
10
]
] | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct"
] | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"incorrect"
] | [
{
"sentences": [
{
"role": "Reviewer",
"data": [
0,
1,
2
]
}
],
"category": [
"N/A"
]
},
{
"sentences": [
{
"role": "Reviewer 1",
"data": [
3
]
}
],
"category": [
... | benchmark/PDF/ICLR2017_Hk8N3Sclg.pdf | openreview | benchmark/MD/ICLR2017_Hk8N3Sclg.md | ICLR 2017 |
B16Jem9xe | {
"title": "Learning in Implicit Generative Models",
"abstract": "Generative adversarial networks (GANs) provide an algorithmic framework for constructing generative models with several appealing properties: they do not require a likelihood function to be specified, only a generating procedure; they provide samples... | Invite to Workshop Track | [
[
{
"role": "Reviewer",
"data": {
"value": {
"question": [
0,
1,
2
]
}
}
},
{
"role": "Author",
"data": {
"value": {
"comment": [
3,
4,
5
... | [
[
{
"role": "PC",
"data": {
"comment": "This paper provides a unifying review of various forms of generative model. The paper offers some neat perspectives that could encourage links between areas of machine learning and statistics. However, there aren't specific new proposals, and so there ... | [
"Thank you for an interesting read.",
"I guess all the theory presented are already around but I still like the presentation which walked through each route in a very clear way.",
"The only question from a practitioner perspective is: which method should I choose when I want to learn an implicit model? Or can y... | [
[
9
],
[
1
],
[
2
],
[
5
],
[
8,
8,
18
],
[
12
],
[
3
],
[
6,
17
],
[
11,
21
],
[
13
],
[
4
],
[
7,
7
],
[
10,
10
],
[
19
],
[
24
],
[
25
],
[
15
],
[
2... | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"cor... | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"cor... | [
{
"sentences": [
{
"role": "Reviewer",
"data": [
0
]
}
],
"category": [
"N/A"
]
},
{
"sentences": [
{
"role": "Reviewer 1 Further Reply",
"data": [
1
]
}
],
"category": [
"CLAR-WRT... | benchmark/PDF/ICLR2017_B16Jem9xe.pdf | openreview | benchmark/MD/ICLR2017_B16Jem9xe.md | ICLR 2017 |
HJpfMIFll | {
"TL;DR": "",
"title": "Geometry of Polysemy",
"abstract": "Vector representations of words have heralded a transformational approach to classical problems in NLP; the most popular example is word2vec. However, a single vector does not suffice to model the polysemous nature of many (frequent) words, i.e., words ... | Accept (Poster) | [
[
{
"role": "Reviewer",
"data": {
"value": {
"question": [
0,
1,
2,
3
]
}
}
},
{
"role": "Author",
"data": {
"value": {
"comment": [
4,
5,
... | [
[
{
"role": "PC",
"data": {
"comment": "The paper considers an important problem largely ignored by continuous word representation learning: polysemy. The approach is mathematically grounded and interesting and well explored."
}
}
]
] | [
"In the argument on section 2 around figure 1, it's not at all clear why it is surprising that the variance is explained with the top few principal components. I also don't understand the weird distribution plot, instead of the usual plot about the decay of the eigenvalues, since eigenvalues dropping quickly would ... | [
[
0
],
[
13
],
[
6
],
[
3
],
[
15,
17,
22
],
[
34
],
[
1,
5
],
[
14,
19,
24
],
[
27,
31
],
[
2
],
[
10
],
[
12
],
[
23
],
[
25,
33
],
[
26
],
[
28
],
[
29
... | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"cor... | [
"incorrect",
"incorrect",
"correct",
"correct",
"correct",
"incorrect",
"incorrect",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct"... | [
{
"sentences": [
{
"role": "Reviewer 1 Further Reply",
"data": [
0
]
},
{
"role": "Author",
"data": [
5,
6,
7,
8,
10,
11,
14
]
}
],
"category": [
... | benchmark/PDF/ICLR2017_HJpfMIFll.pdf | openreview | benchmark/MD/ICLR2017_HJpfMIFll.md | ICLR 2017 |
r1G4z8cge | {
"title": "Mollifying Networks",
"abstract": "The optimization of deep neural networks can be more challenging than the traditional convex optimization problems due to highly non-convex nature of the loss function, e.g. it can involve pathological landscapes such as saddle-surfaces that can be difficult to escape ... | Accept (Poster) | [
[
{
"role": "Reviewer",
"data": {
"value": {
"question": [
0,
1
]
}
}
},
{
"role": "Author",
"data": {
"value": {
"comment": [
2,
3,
4,
5,
... | [
[
{
"role": "PC",
"data": {
"comment": "The paper presents a nice idea for using a sequence of progressively more expressive neural networks to train a model. Experiments are shown on CIFAR10, parity, language modeling to show that the methods performs well on these tasks.\n However, as note... | [
"It appears to me that apart from work in regularization through noise, this is most directly related to the large amount of research in shaping in reinforcement learning, e.g. see Andrew Ng's PhD thesis and references therein and lots of follow up work. Do you agree with this?",
"Why is comparing the learning/ge... | [
[
3,
22
],
[
12
],
[
13
],
[
21
],
[
10
],
[
11
],
[
0
],
[
6
],
[
7
],
[
9
],
[
16
],
[
17
],
[
18
],
[
25
],
[
1
],
[
2
],
[
5
],
[
14
],
[
15
],
[
19
]... | [
"correct",
"correct",
"correct",
"incorrect",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"c... | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"incorrect",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"incorrect",
"incorrect",
... | [
{
"sentences": [
{
"role": "Reviewer 1 Further Reply",
"data": [
0
]
},
{
"role": "Author",
"data": [
2,
3,
4,
5
]
}
],
"category": [
"QUAL-CMP"
]
},
{
"sentences":... | benchmark/PDF/ICLR2017_r1G4z8cge.pdf | openreview | benchmark/MD/ICLR2017_r1G4z8cge.md | ICLR 2017 |
BJK3Xasel | {
"title": "Nonparametric Neural Networks",
"abstract": "Automatically determining the optimal size of a neural network for a given task without prior information currently requires an expensive global search and training many networks from scratch. In this paper, we address the problem of automatically finding a g... | Accept (Poster) | [
[
{
"role": "Reviewer",
"data": {
"value": {
"question": [
0
]
}
}
},
{
"role": "Author",
"data": {
"value": {
"comment": [
1,
2,
3,
4,
5,
... | [
[
{
"role": "PC",
"data": {
"comment": "The paper presents a clean framework for optimizing for the network size during the training cycle. While the complexity of each iteration is increased, they argue that overall, the cost is significantly reduced since we do not need to train networks o... | [
"It is not so clear to me when and why you would be able to remove units. It seems that units can be removed only when there are all zero weights either into or out of the unit. Why would this occur during training? Once units have non-zero weights, do they ever evolve to have zero weights? Is this caused by re... | [
[
17
],
[
14,
20
],
[
18,
19
],
[
26
],
[
11
],
[
4,
10
],
[
1
],
[
2,
9
],
[
6
],
[
7
],
[
16
],
[
29
],
[
31,
32
],
[
33
],
[
0
],
[
5
],
[
21
],
[
23
]... | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"cor... | [
"correct",
"correct",
"incorrect",
"correct",
"incorrect",
"correct",
"correct",
"correct",
"correct",
"incorrect",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
... | [
{
"sentences": [
{
"role": "Reviewer 1 Further Reply",
"data": [
0
]
},
{
"role": "Author",
"data": [
1,
2,
3,
4,
5,
6,
7,
8,
9,
11,
1... | benchmark/PDF/ICLR2017_BJK3Xasel.pdf | openreview | benchmark/MD/ICLR2017_BJK3Xasel.md | ICLR 2017 |
HJgXCV9xx | {
"title": "Dialogue Learning With Human-in-the-Loop",
"abstract": "An important aspect of developing conversational agents is to give a bot the ability to improve through communicating with humans and to learn from the mistakes that it makes. Most research has focused on learning from fixed training sets of label... | Accept (Poster) | [
[
{
"role": "Reviewer",
"data": {
"value": {
"question": [
0,
1,
2,
3,
4,
5,
6,
7
]
}
}
},
{
"role": "Author",
"data": {
"value": {
... | [
[
{
"role": "PC",
"data": {
"comment": "pros:\n - demonstration that using teacher's feedback to improve performance in a dialogue system can be made to work \n in a real-world setting\n - comprehensive experiments\n \n cons:\n - lack of technical novelty due to prior work\n - not all agree... | [
"At the end of section 4.1, it is stated:",
"\"The standard way MemN2N is trained is via a cross entropy criterion on known input-output pairs,",
"which we refer to as supervised or imitation learning. As our work is in a reinforcement learning",
"setup where our model must make predictions to learn, this pro... | [
[
0
],
[
8
],
[
35
],
[
42
],
[
11
],
[
9
],
[
12,
40
],
[
16
],
[
17
],
[
18
],
[
27
],
[
43
],
[
44
],
[
3,
48
],
[
5,
50
],
[
7
],
[
13
],
[
15,
28,
29,
... | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"cor... | [
"incorrect",
"correct",
"correct",
"correct",
"incorrect",
"incorrect",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
... | [
{
"sentences": [
{
"role": "Reviewer 1 Further Reply",
"data": [
0,
1,
2,
3,
4,
5,
6
]
},
{
"role": "Author",
"data": [
8
]
}
],
"category": [
"CL... | benchmark/PDF/ICLR2017_HJgXCV9xx.pdf | openreview | benchmark/MD/ICLR2017_HJgXCV9xx.md | ICLR 2017 |
HyxQzBceg | {
"title": "Deep Variational Information Bottleneck",
"abstract": "We present a variational approximation to the information bottleneck of Tishby et al. (1999). This variational approach allows us to parameterize the information bottleneck model using a neural network and leverage the reparameterization trick for e... | Accept (Poster) | [
[
{
"role": "Reviewer",
"data": {
"value": {
"question": [
0,
1
]
}
}
},
{
"role": "Author",
"data": {
"value": {
"comment": [
2,
3,
4
]
}
... | [
[
{
"role": "PC",
"data": {
"comment": "This paper discussses applying an information bottleneck to deep networks using a variational lower bound and reparameterization trick. The paper is well written and the examples are compelling. The paper can be improved with more convincing results on... | [
"Thank you for an interesting read.",
"Could you clarify how exactly did you generate the adversarial examples for each method you tested, especially for Deep VIB?",
"Nicholas Carlini shared his implementation of the attacks in his recent paper https://arxiv.org/abs/1608.04644 with us. We used the code for his... | [
[
7
],
[
13
],
[
14
],
[
4
],
[
6
],
[
3
],
[
16
],
[
28
],
[
5
],
[
8
],
[
10
],
[
27
],
[
15
],
[
18
],
[
20
],
[
21
],
[
25
],
[
26
],
[
2
],
[
11
],
[
... | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"cor... | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"cor... | [
{
"sentences": [
{
"role": "Reviewer",
"data": [
0
]
}
],
"category": [
"N/A"
]
},
{
"sentences": [
{
"role": "Reviewer 1 Further Reply",
"data": [
1
]
},
{
"role": "Author",
... | benchmark/PDF/ICLR2017_HyxQzBceg.pdf | openreview | benchmark/MD/ICLR2017_HyxQzBceg.md | ICLR 2017 |
SJGPL9Dex | {
"title": "Understanding Trainable Sparse Coding with Matrix Factorization",
"abstract": "Sparse coding is a core building block in many data analysis and machine learning pipelines. Typically it is solved by relying on generic optimization techniques, such as the Iterative Soft Thresholding Algorithm and its acce... | Accept (Poster) | [
[
{
"role": "Reviewer",
"data": {
"value": {
"question": [
0
]
}
}
},
{
"role": "Author",
"data": {
"value": {
"comment": [
1,
2
]
}
}
}
],
[
{
... | [
[
{
"role": "PC",
"data": {
"comment": "The work is fairly unique in that it provides a theoretical explanation for an empirical phenomenon in the world of sparse coding. The reviewers were overall favourable, although some reviewers thought parts of the paper were unclear or had confusion a... | [
"The plots suggest that the proposed FacNet is almost always outperformed by LISTA and L-FISTA. This is surprising since the paper seem to suggest that LISTA is in some sense a specialization of FacNet. Any comment about this?",
"Thank you for your question. FacNet shares the same network structure as LISTA, but ... | [
[
14
],
[
15
],
[
0
],
[
3
],
[
4
],
[
5
],
[
6
],
[
10
],
[
12
],
[
16
],
[
7
],
[
11
],
[
2
],
[
8
],
[
13
],
[
9
],
[
1
]
] | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct"
] | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct"
] | [
{
"sentences": [
{
"role": "Reviewer 1 Further Reply",
"data": [
0
]
},
{
"role": "Author",
"data": [
1,
2
]
}
],
"category": [
"CLAR-WRT"
]
},
{
"sentences": [
{
"role":... | benchmark/PDF/ICLR2017_SJGPL9Dex.pdf | openreview | benchmark/MD/ICLR2017_SJGPL9Dex.md | ICLR 2017 |
ryuxYmvel | {
"TL;DR": "",
"title": "HolStep: A Machine Learning Dataset for Higher-order Logic Theorem Proving",
"abstract": "Large computer-understandable proofs consist of millions of intermediate\nlogical steps. The vast majority of such steps originate from manually\nselected and manually guided heuristics applied to in... | Accept (Poster) | [
[
{
"role": "Reviewer",
"data": {
"value": {
"question": [
0,
1
]
}
}
},
{
"role": "Author",
"data": {
"value": {
"comment": [
2,
3,
4
]
}
... | [
[
{
"role": "PC",
"data": {
"comment": "The paper presents a new dataset and initial machine-learning results for an interesting problem, namely, higher-order logic theorem proving. This dataset is of great potential value in the development of deep-learning approaches for (mathematical) rea... | [
"In Section 2, you describe the extraction process for the dataset. It becomes clear that it was not a trivial undertaking and probably even more subtleties had to be taken into account to generate the final dataset. Could the same strategy/code be applied directly to other proofs?",
"In the conclusion, you state... | [
[
8
],
[
9
],
[
11
],
[
15
],
[
6
],
[
3
],
[
5
],
[
12
],
[
13
],
[
0
],
[
1
],
[
10
],
[
2
],
[
4
],
[
7
],
[
14
],
[
16
],
[
17
]
] | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct"
] | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct"
] | [
{
"sentences": [
{
"role": "Reviewer 1 Further Reply",
"data": [
0
]
},
{
"role": "Author",
"data": [
3,
4
]
}
],
"category": [
"QUAL-REP"
]
},
{
"sentences": [
{
"role":... | benchmark/PDF/ICLR2017_ryuxYmvel.pdf | openreview | benchmark/MD/ICLR2017_ryuxYmvel.md | ICLR 2017 |
rJbbOLcex | {
"TL;DR": "",
"title": "TopicRNN: A Recurrent Neural Network with Long-Range Semantic Dependency",
"abstract": "In this paper, we propose TopicRNN, a recurrent neural network (RNN)-based language model designed to directly capture the global semantic meaning relating words in a document via latent topics. Becaus... | Accept (Poster) | [
[
{
"role": "Reviewer",
"data": {
"value": {
"question": [
0,
1,
2
]
}
}
},
{
"role": "Author",
"data": {
"value": {
"comment": [
3,
4
]
}
... | [
[
{
"role": "PC",
"data": {
"comment": "Though the have been attempts to incorporate both \"topic-like\" and \"sequence-like\" methods in the past (e.g, the work of Hanna Wallach, Amit Gruber and other), they were quite computationally expensive, especially when high-order ngrams are incorpo... | [
"- How long did it take to train TopicRNN on the PTB and IMBD datasets?",
"- How much worse are the TopicRNN results with LSTM cell compared to standard RNN cell?",
"- Could you please elaborate a little bit more your speculation that \"topic models are more effective than LSTM at capturing global semantic info... | [
[
12
],
[
17
],
[
23
],
[
13
],
[
21
],
[
3
],
[
6
],
[
7
],
[
16
],
[
2
],
[
8
],
[
8
],
[
9
],
[
11
],
[
11
],
[
11
],
[
18
],
[
19
],
[
5,
22
],
[
0
],... | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct"
] | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"incorrect",
"correct",
"correct",
"incorrect",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"incorrect",
"correct",
"correct"
] | [
{
"sentences": [
{
"role": "Reviewer 1 Further Reply",
"data": [
0
]
},
{
"role": "Author",
"data": [
3
]
}
],
"category": [
"QUAL-REP",
"QUAL-EXP"
]
},
{
"sentences": [
{
"r... | benchmark/PDF/ICLR2017_rJbbOLcex.pdf | openreview | benchmark/MD/ICLR2017_rJbbOLcex.md | ICLR 2017 |
ByqiJIqxg | {
"TL;DR": "",
"title": "Online Bayesian Transfer Learning for Sequential Data Modeling",
"abstract": "We consider the problem of inferring a sequence of hidden states associated with a sequence of observations produced by an individual within a population. Instead of learning a single sequence model for the pop... | Accept (Poster) | [
[
{
"role": "Author",
"data": {
"value": {
"comment": [
0
]
}
}
}
],
[
{
"role": "Reviewer",
"data": {
"value": {
"question": [
1,
2,
3,
4,
5
... | [
[
{
"role": "PC",
"data": {
"comment": "Though the method does not seem to really break new ground in transfer learning (see Reviewer 1), the reviewers do not question the validity of the approach. The online aspect of the approach as well as an application of Bayesian moment matching to HMM... | [
"Added comparison between online transfer learning technique and EM algorithm.",
"Can you provide some comparisons with the other Bayesian inference methods?",
"You should at least discuss the comparison of MAP-EM, Variational Bayes, and MCMC approaches.",
"These are applied to GMM/HMM and also have some onli... | [
[
21
],
[
28
],
[
1
],
[
5
],
[
23
],
[
15
],
[
2
],
[
12,
14
],
[
0
],
[
6
],
[
16
],
[
25
],
[
26
],
[
4
],
[
8,
19
],
[
13
],
[
18
],
[
17,
22
],
[
27
... | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"cor... | [
"correct",
"correct",
"incorrect",
"correct",
"incorrect",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"incorrect",
"correct",
"incorrect",
"correct",
"correct",
"incorrect",
"correc... | [
{
"sentences": [
{
"role": "Reviewer 2 Further Reply",
"data": [
1,
2,
3
]
},
{
"role": "Author",
"data": [
7,
8,
9,
10,
11
]
}
],
"category": [
"... | benchmark/PDF/ICLR2017_ByqiJIqxg.pdf | openreview | benchmark/MD/ICLR2017_ByqiJIqxg.md | ICLR 2017 |
SkYbF1slg | {
"title": "An Information-Theoretic Framework for Fast and Robust Unsupervised Learning via Neural Population Infomax",
"abstract": "A framework is presented for unsupervised learning of representations based on infomax principle for large-scale neural populations. We use an asymptotic approximation to the Shannon... | Accept (Poster) | [
[
{
"role": "Author",
"data": {
"value": {
"comment": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11
]
}
}
}
],
[
... | [
[
{
"role": "PC",
"data": {
"comment": "The reviewers were not completely happy with the presentation, but it seems the theory is solid and interesting enough. I think ICLR needs more papers like this, which have convincing mathematical theory instead of merely relying on empirical results."... | [
"My main research interests are in computational neuroscience, information theory, machine learning and deep learning. For those of us who engage in AI related research, we all want to learn from the human brain how to process information to achieve intelligence.",
"For example, deep learning is now said to be br... | [
[
7
],
[
8
],
[
9
],
[
1,
4
],
[
10
],
[
12
],
[
15
],
[
17
],
[
0
],
[
5
],
[
2
],
[
6
],
[
14
],
[
16
],
[
3
],
[
11
],
[
13
]
] | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct"
] | [
"correct",
"incorrect",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct"
] | [
{
"sentences": [
{
"role": "Reviewer",
"data": [
12
]
}
],
"category": [
"ORIG-MTH"
]
},
{
"sentences": [
{
"role": "Reviewer 2",
"data": [
13,
14,
15
]
},
{
"r... | benchmark/PDF/ICLR2017_SkYbF1slg.pdf | openreview | benchmark/MD/ICLR2017_SkYbF1slg.md | ICLR 2017 |
B1Igu2ogg | {
"title": "Efficient Vector Representation for Documents through Corruption",
"abstract": "We present an efficient document representation learning framework, Document Vector through Corruption (Doc2VecC). Doc2VecC represents each document as a simple average of word embeddings. It ensures a representation generat... | Accept (Poster) | [
[
{
"role": "Reviewer",
"data": {
"value": {
"question": [
0,
1
]
}
}
},
{
"role": "Author",
"data": {
"value": {
"comment": [
2,
3
]
}
}
}
],... | [
[
{
"role": "PC",
"data": {
"comment": "The introduced method for producing document representations is simple, efficient and potentially quite useful. Though we could quibble a bit that the idea is just a combination of known techniques, the reviews generally agree that the idea is interest... | [
"- Were the word embeddings for the Word2Vec+ baselines only trained on the training data? Did you try training them on a much bigger (unlabeled) corpus?",
"- Why did you exclude the RNN-LM baseline for the document classification task?\nThanks!",
"Thank you for your questions. For both tasks, a bigger unlabele... | [
[
15
],
[
14
],
[
3
],
[
4,
9
],
[
6
],
[
1
],
[
5
],
[
12
],
[
0
],
[
2
],
[
8
],
[
10
],
[
13
],
[
16
],
[
11
],
[
7
]
] | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct"
] | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"incorrect",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct"
] | [
{
"sentences": [
{
"role": "Reviewer 1 Further Reply",
"data": [
0
]
},
{
"role": "Author",
"data": [
2
]
}
],
"category": [
"QUAL-EXP"
]
},
{
"sentences": [
{
"role": "Reviewer 1 ... | benchmark/PDF/ICLR2017_B1Igu2ogg.pdf | openreview | benchmark/MD/ICLR2017_B1Igu2ogg.md | ICLR 2017 |
Sy2fzU9gl | {
"title": "beta-VAE: Learning Basic Visual Concepts with a Constrained Variational Framework",
"abstract": "Learning an interpretable factorised representation of the independent data generative factors of the world without supervision is an important precursor for the development of artificial intelligence that i... | Accept (Poster) | [
[
{
"role": "Reviewer",
"data": {
"value": {
"question": [
0,
1,
2,
3
]
}
}
},
{
"role": "Author",
"data": {
"value": {
"comment": [
4,
5,
... | [
[
{
"role": "PC",
"data": {
"comment": "This paper proposes a modification of the variational ELBO in encourage 'disentangled' representations, and proposes a measure of disentanglement. The main idea and is presented clearly enough and explored through experiments. This whole area still see... | [
"(v,w) categorizes the latent representations into disentangled dimensions and non-disentangled dimensions. Is it possible to consider group-wise disentangling? This is more practical for complex factors, such as human identity.",
"The measurement proposed in Section 3 is intuitively to measure if a certain dimen... | [
[
14
],
[
20
],
[
13
],
[
2,
34
],
[
4,
5,
6,
7,
8,
12,
33
],
[
35
],
[
39
],
[
31
],
[
9,
11
],
[
32
],
[
43
],
[
3
],
[
10
],
[
15
],
[
16
],
[
17
],
[
... | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"cor... | [
"correct",
"incorrect",
"correct",
"incorrect",
"correct",
"incorrect",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
... | [
{
"sentences": [
{
"role": "Reviewer 1 Further Reply",
"data": [
0
]
},
{
"role": "Author",
"data": [
4,
5,
6
]
}
],
"category": [
"QUAL-MET"
]
},
{
"sentences": [
{
... | benchmark/PDF/ICLR2017_Sy2fzU9gl.pdf | openreview | benchmark/MD/ICLR2017_Sy2fzU9gl.md | ICLR 2017 |
HJ9rLLcxg | {
"title": "Dataset Augmentation in Feature Space",
"abstract": "Dataset augmentation, the practice of applying a wide array of domain-specific transformations to synthetically expand a training set, is a standard tool in supervised learning. While effective in tasks such as visual recognition, the set of transform... | Invite to Workshop Track | [
[
{
"role": "Reviewer",
"data": {
"value": {
"question": [
0,
1
]
}
}
},
{
"role": "Author",
"data": {
"value": {
"comment": [
2
]
}
}
}
],
[
{
... | [
[
{
"role": "PC",
"data": {
"comment": "This paper proposes to regularize neural networks by adding synthetic data created by interpolating or extrapolating in an abstract feature space, learning by an autoencoder.\n \n The main idea is sensible, and clearly presented and motivated. Overall ... | [
"The proposed method is interesting and the paper is well presented. Just some thoughts on the augmentation methods.",
"Besides adding random noise, interpolation and extrapolation, one other way of improving model robustness is to randomly corrupt the representation. Wondering how that works within the proposed ... | [
[
1
],
[
13
],
[
14
],
[
15
],
[
17
],
[
3
],
[
4
],
[
5
],
[
6
],
[
7
],
[
9
],
[
10
],
[
16
],
[
19
],
[
20
],
[
21
],
[
0
],
[
12
],
[
2
],
[
8
],
[
... | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"cor... | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"cor... | [
{
"sentences": [
{
"role": "Reviewer 1 Further Reply",
"data": [
0,
1
]
},
{
"role": "Author",
"data": [
2
]
}
],
"category": [
"QUAL-MET"
]
},
{
"sentences": [
{
"role":... | benchmark/PDF/ICLR2017_HJ9rLLcxg.pdf | openreview | benchmark/MD/ICLR2017_HJ9rLLcxg.md | ICLR 2017 |
S1jmAotxg | {
"title": "Stick-Breaking Variational Autoencoders",
"abstract": "We extend Stochastic Gradient Variational Bayes to perform posterior inference for the weights of Stick-Breaking processes. This development allows us to define a Stick-Breaking Variational Autoencoder (SB-VAE), a Bayesian nonparametric version of t... | Accept (Poster) | [
[
{
"role": "Reviewer",
"data": {
"value": {
"question": [
0,
1
]
}
}
},
{
"role": "Author",
"data": {
"value": {
"comment": [
2,
3,
4,
5,
... | [
[
{
"role": "PC",
"data": {
"comment": "This paper will make a positive contribution to the conference, especially since it is one of the first to look at stick-breaking as it applies to deep generative models. The paper will make a positive contribution to the conference."
}
}
]... | [
"The technique is potentially an interesting step towards fusing Black Box Variational Inference and Bayesian Nonparametrics.",
"However, I'm wondering why the model was not compared with the results in \"Semi-supervised Learning with Deep Generative Models\" for the semi-supervised learning experiments, they see... | [
[
5
],
[
21
],
[
6
],
[
11
],
[
12
],
[
13
],
[
14
],
[
30
],
[
33
],
[
0
],
[
24
],
[
28
],
[
9
],
[
1,
22
],
[
10,
17
],
[
3
],
[
4
],
[
15
],
[
16
],
[
... | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"cor... | [
"correct",
"incorrect",
"incorrect",
"correct",
"correct",
"correct",
"correct",
"incorrect",
"correct",
"correct",
"incorrect",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"incorrect",
"correct",
"correct",
"correct",
"incorr... | [
{
"sentences": [
{
"role": "Reviewer 1 Further Reply",
"data": [
0
]
}
],
"category": [
"ORIG-COM"
]
},
{
"sentences": [
{
"role": "Reviewer 1 Further Reply",
"data": [
1
]
},
{
"r... | benchmark/PDF/ICLR2017_S1jmAotxg.pdf | openreview | benchmark/MD/ICLR2017_S1jmAotxg.md | ICLR 2017 |
SJzCSf9xg | {
"title": "On Detecting Adversarial Perturbations",
"abstract": "Machine learning and deep learning in particular has advanced tremendously on perceptual tasks in recent years. However, it remains vulnerable against adversarial perturbations of the input that have been crafted specifically to fool the system whil... | Accept (Poster) | [
[
{
"role": "Reviewer",
"data": {
"value": {
"question": [
0,
1
]
}
}
},
{
"role": "Author",
"data": {
"value": {
"comment": [
2,
3,
4,
5
... | [
[
{
"role": "PC",
"data": {
"comment": "The paper explores the automatic detection of adversarial examples by training a classifier to recognize them. This is an interesting direction, even though they are obviously concerns about training an adversary to circumvent this model. Nonetheless, ... | [
"One application of an adversary-detector would be 'flag' potentially pathological images for an ML system.",
"Although this work demonstrated the utility of an adversary-detector for flagging such images, would it not be possible for the adversary to be constructed such that it is an adversary to *both* the orig... | [
[
3
],
[
8
],
[
9
],
[
11
],
[
21
],
[
5,
14,
17
],
[
18
],
[
20
],
[
0
],
[
2
],
[
7,
10
],
[
12
],
[
15
],
[
16
],
[
19
],
[
1
],
[
22
],
[
13
],
[
6
],... | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct"
] | [
"correct",
"correct",
"incorrect",
"correct",
"correct",
"correct",
"correct",
"incorrect",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct"
] | [
{
"sentences": [
{
"role": "Reviewer 1 Further Reply",
"data": [
0,
1
]
},
{
"role": "Author",
"data": [
2,
3,
4,
5,
72
]
}
],
"category": [
"QUAL-EXP"
... | benchmark/PDF/ICLR2017_SJzCSf9xg.pdf | openreview | benchmark/MD/ICLR2017_SJzCSf9xg.md | ICLR 2017 |
Skq89Scxx | {
"title": "SGDR: Stochastic Gradient Descent with Warm Restarts",
"abstract": "Restart techniques are common in gradient-free optimization to deal with multimodal functions. Partial warm restarts are also gaining popularity in gradient-based optimization to improve the rate of convergence in accelerated gradient s... | Accept (Poster) | [
[
{
"role": "Reviewer",
"data": {
"value": {
"question": [
0,
1,
2,
3,
4
]
}
}
},
{
"role": "Author",
"data": {
"value": {
"comment": [
5,
... | [
[
{
"role": "PC",
"data": {
"comment": "All reviewers viewed the paper favourably, with the only criticism being that seeing how the method complements other approaches (momentum, Adam) would make the paper more complete. We encourage the authors to include such a comparison in the camera re... | [
"A quick clarification about appendix 7.2: does this mean that in the main paper, your method always uses twice the samples/compute per epoch compared to the baselines?",
"Then all the learning curves (with epochs as x-axis) would be misleading?",
"My main question is about annealing versus restarts -- numerous... | [
[
3
],
[
6
],
[
11
],
[
5
],
[
10
],
[
1
],
[
17
],
[
0
],
[
2
],
[
9
],
[
12
],
[
15,
18
],
[
16
],
[
8
],
[
4
],
[
7
],
[
13
],
[
14
],
[
19
],
[
20
]
] | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct"
] | [
"incorrect",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct"
] | [
{
"sentences": [
{
"role": "Reviewer 1 Further Reply",
"data": [
0,
1
]
},
{
"role": "Author",
"data": [
5
]
}
],
"category": [
"QUAL-EXP"
]
},
{
"sentences": [
{
"role":... | benchmark/PDF/ICLR2017_Skq89Scxx.pdf | openreview | benchmark/MD/ICLR2017_Skq89Scxx.md | ICLR 2017 |
ry4Vrt5gl | {
"title": "Learning to Optimize",
"abstract": "Algorithm design is a laborious process and often requires many iterations of ideation and validation. In this paper, we explore automating algorithm design and present a method to learn an optimization algorithm. We approach this problem from a reinforcement learning... | Accept (Poster) | [
[
{
"role": "Reviewer",
"data": {
"value": {
"question": [
0,
1
]
}
}
},
{
"role": "Author",
"data": {
"value": {
"comment": [
2,
3,
4,
5
... | [
[
{
"role": "PC",
"data": {
"comment": "The authors propose an approach to learning optimization algorithms by framing the problem as a policy search task, then using the guided policy search algorithm. The method is a nice contribution to the \"learning to learn\" framework, and actually wa... | [
"In each experiment you perform, you change the number of objective functions that the optimizer was trained on. Why? How sensitive is the optimizer to this choice?",
"Have you tried this method on a real dataset?",
"In general, generalization ability of the learned optimizer improves with the number of objecti... | [
[
20
],
[
2
],
[
4
],
[
13
],
[
22
],
[
7
],
[
6,
11
],
[
8
],
[
0
],
[
1
],
[
14
],
[
15
],
[
16
],
[
17
],
[
19
],
[
21
],
[
23
],
[
25
],
[
27
],
[
5
]... | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"cor... | [
"correct",
"correct",
"incorrect",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"incorrect",
"correct",
"correct",
... | [
{
"sentences": [
{
"role": "Reviewer 1 Further Reply",
"data": [
0
]
},
{
"role": "Author",
"data": [
2,
3,
4
]
}
],
"category": [
"QUAL-EXP"
]
},
{
"sentences": [
{
... | benchmark/PDF/ICLR2017_ry4Vrt5gl.pdf | openreview | benchmark/MD/ICLR2017_ry4Vrt5gl.md | ICLR 2017 |
Bks8cPcxe | {
"title": "DeepDSL: A Compilation-based Domain-Specific Language for Deep Learning",
"abstract": "In recent years, Deep Learning (DL) has found great success in domains such as multimedia understanding. However, the complex nature of multimedia data makes it difficult to develop DL-based software. The state-of-the... | Accept (Poster) | [
[
{
"role": "Reviewer",
"data": {
"value": {
"question": [
0
]
}
}
},
{
"role": "Author",
"data": {
"value": {
"comment": [
1,
2,
3,
4,
5,
... | [
[
{
"role": "PC",
"data": {
"comment": "All reviewers find value in the contributions. "
}
}
]
] | [
"Does DeepDSL allow to manipulate symbolic gradients (e.g. to allow the symbolic representation of second-order gradients)?",
"Hi, glad to answer your question!",
"First, regarding the second-order gradient support:",
"1. We already support second-order gradients for scalar values;",
"2. We do not support s... | [
[
0
],
[
4
],
[
5
],
[
6,
15
],
[
8
],
[
18
],
[
1
],
[
12
],
[
14,
16
],
[
2
],
[
3
],
[
13
],
[
10
],
[
11
],
[
7
],
[
9
],
[
17
]
] | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct"
] | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"incorrect",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"incorrect",
"correct",
"correct",
"correct",
"correct",
"correct"
] | [
{
"sentences": [
{
"role": "Reviewer 1 Further Reply",
"data": [
0
]
},
{
"role": "Author",
"data": [
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
... | benchmark/PDF/ICLR2017_Bks8cPcxe.pdf | openreview | benchmark/MD/ICLR2017_Bks8cPcxe.md | ICLR 2017 |
HJF3iD9xe | {
"title": "Deep Learning with Sets and Point Clouds",
"abstract": "We introduce a simple permutation equivariant layer for deep learning with set structure. This type of layer, obtained by parameter-sharing, has a simple implementation and linear-time complexity in the size of each set. We use deep permutation-inv... | Invite to Workshop Track | [
[
{
"role": "Reviewer",
"data": {
"value": {
"question": [
0,
1,
2,
3,
4
]
}
}
},
{
"role": "Author",
"data": {
"value": {
"comment": [
5,
... | [
[
{
"role": "PC",
"data": {
"comment": "This paper studies neural models that can be applied to set-structured inputs and thus require permutation invariance or equivariance. After a first section that introduces necessary and sufficient conditions for permutation invariance/equivariance, th... | [
"If I understand correctly, one issue with the invariance to permutation of the graph is that one can loose discriminative information (example 4.3). Might it be possible to recover the loss of information due to the averaging?",
"I understand why one needs a set structure for the set anomaly detection, but not f... | [
[
14
],
[
12
],
[
17
],
[
26
],
[
1
],
[
5
],
[
10,
21
],
[
13
],
[
15
],
[
16
],
[
22
],
[
27
],
[
28
],
[
19
],
[
6
],
[
30
],
[
25
],
[
29
],
[
36
],
[
3... | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"cor... | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"incorrect",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"c... | [
{
"sentences": [
{
"role": "Reviewer 1 Further Reply",
"data": [
0
]
},
{
"role": "Author",
"data": [
5
]
}
],
"category": [
"QUAL-MET"
]
},
{
"sentences": [
{
"role": "Reviewer 1 ... | benchmark/PDF/ICLR2017_HJF3iD9xe.pdf | openreview | benchmark/MD/ICLR2017_HJF3iD9xe.md | ICLR 2017 |
SyOvg6jxx | {
"title": "#Exploration: A Study of Count-Based Exploration for Deep Reinforcement Learning",
"abstract": "Count-based exploration algorithms are known to perform near-optimally when used in conjunction with tabular reinforcement learning (RL) methods for solving small discrete Markov decision processes (MDPs). It... | Reject | [
[
{
"role": "Reviewer",
"data": {
"value": {
"question": [
0,
1
]
}
}
},
{
"role": "Author",
"data": {
"value": {
"comment": [
2,
3,
4,
5,
... | [
[
{
"role": "PC",
"data": {
"comment": "The paper proposes a simple approach to exploration that uses a hash of the current state within a exploration bonus approach (there are some modifications to learned hash codes, but this is the basic approach). The method achieves reasonable performan... | [
"I assume pixel-SimHash is to use the entire image as input to SimHash without any preprocessing. Is this correct? For learned embedding, from Eqn.4 it seems that from the optimization of AE, we already get a binary representation for each state.",
"So why there is an additional step applying SimHash on (almost) ... | [
[
3
],
[
8
],
[
25
],
[
10
],
[
20
],
[
21
],
[
12
],
[
16,
22
],
[
26
],
[
27
],
[
6
],
[
7
],
[
11
],
[
13
],
[
14
],
[
15,
18
],
[
17
],
[
0
],
[
2
],
[
... | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"cor... | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"cor... | [
{
"sentences": [
{
"role": "Reviewer 1 Further Reply",
"data": [
0,
1
]
},
{
"role": "Author",
"data": [
2
]
}
],
"category": [
"QUAL-REP"
]
},
{
"sentences": [
{
"role":... | benchmark/PDF/ICLR2017_SyOvg6jxx.pdf | openreview | benchmark/MD/ICLR2017_SyOvg6jxx.md | ICLR 2017 |
By14kuqxx | {
"title": "Bit-Pragmatic Deep Neural Network Computing",
"abstract": "We quantify a source of ineffectual computations when processing the multiplications of the convolutional layers in Deep Neural Networks (DNNs) and propose Pragrmatic (PRA), an architecture that exploits it improving performance and energy effic... | Invite to Workshop Track | [
[
{
"role": "Author",
"data": {
"value": {
"comment": [
0
]
}
}
}
],
[
{
"role": "Reviewer",
"data": {
"summary_of_the_paper": null,
"value": {
"review": [
1,
2,
... | [
[
{
"role": "PC",
"data": {
"comment": "Unfortunately, none of the reviewers, nor the AC, have strongly supported for the acceptance of this paper. The fact that fixed-point arithmetic is the focus of this work, while floating-point arithmetic is much more common, is also a concern. The PCs ... | [
"We added an appending with the essential bit distributions but I mislabeled the graphs in that revision. We updated the original PDF with the correct labels and explanation. The main paper body remains as it was in Nov 4.",
"An interesting idea, and seems reasonably justified and well-explored in the paper, thou... | [
[
1
],
[
3
],
[
9
],
[
7
],
[
0
],
[
6
],
[
2
],
[
4
],
[
5
],
[
8
]
] | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct"
] | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct"
] | [
{
"sentences": [
{
"role": "Reviewer 2",
"data": [
1
]
},
{
"role": "Author",
"data": [
5
]
}
],
"category": [
"QUAL-MET"
]
},
{
"sentences": [
{
"role": "Reviewer 2",
"dat... | benchmark/PDF/ICLR2017_By14kuqxx.pdf | openreview | benchmark/MD/ICLR2017_By14kuqxx.md | ICLR 2017 |
ByQPVFull | {
"title": "Training Group Orthogonal Neural Networks with Privileged Information",
"abstract": "Learning rich and diverse feature representation are always desired for deep convolutional neural networks (CNNs). Besides, when auxiliary annotations are available for specific data, simply ignoring them would be a gre... | Reject | [
[
{
"role": "Reviewer",
"data": {
"value": {
"question": [
0
]
}
}
},
{
"role": "Author",
"data": {
"value": {
"comment": [
1,
2,
3,
4,
5,
... | [
[
{
"role": "PC",
"data": {
"comment": "This paper was reviewed by three experts. While they find interesting ideas in the manuscript, all three point to deficiencies (lack of clean experiments, clarity in the manuscript, etc) and recommend rejection. I believe there are promising ideas here... | [
"Could you please discuss the relationship between your method and the \"DeCov\" method presented in https://arxiv.org/abs/1511.06068? How would your approach compare to this baseline? Thanks.",
"\"DeCov\" and our \"GoCNN\" share similar motivation: more diverse learned features have stronger generalization abili... | [
[
17,
26,
31
],
[
25
],
[
9,
24
],
[
22
],
[
29
],
[
2,
3,
19
],
[
5
],
[
14
],
[
20
],
[
0
],
[
12
],
[
1,
11,
15
],
[
6
],
[
7
],
[
10
],
[
21
],
[
23,
... | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"incorrect",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"c... | [
"correct",
"incorrect",
"correct",
"correct",
"correct",
"correct",
"correct",
"incorrect",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"incorrect",
"correct",
... | [
{
"sentences": [
{
"role": "Reviewer 1 Further Reply",
"data": [
0
]
},
{
"role": "Author",
"data": [
1,
2,
3,
4,
5,
6,
7,
8
]
}
],
"category"... | benchmark/PDF/ICLR2017_ByQPVFull.pdf | openreview | benchmark/MD/ICLR2017_ByQPVFull.md | ICLR 2017 |
rJiNwv9gg | {
"title": "Lossy Image Compression with Compressive Autoencoders",
"abstract": "We propose a new approach to the problem of optimizing autoencoders for lossy image compression. New media formats, changing hardware technology, as well as diverse requirements and content types create a need for compression algorithm... | Accept (Poster) | [
[
{
"role": "Reviewer",
"data": {
"value": {
"question": [
0,
1,
2,
3
]
}
}
},
{
"role": "Author",
"data": {
"value": {
"comment": [
4,
5,
... | [
[
{
"role": "PC",
"data": {
"comment": "This paper optimizes autoencoders for lossy image compression. Minimal adaptation of the loss makes autoencoders competitive with JPEG2000 and computationally efficient, while the generalizability of trainable autoencoders offers the added promise of a... | [
"* What is the motivation for using mirror-padding and valid convolutions in the encoder but zero-padded convolutions in the decoder?",
"* In figure 3B there are large jumps in the loss even for non-incremental training. Do you have any insight into what the cause might be?",
"* The masked incremental training ... | [
[
4
],
[
36
],
[
32
],
[
18,
25
],
[
23,
27
],
[
19
],
[
20
],
[
22,
26
],
[
24
],
[
31
],
[
33
],
[
1
],
[
2
],
[
3
],
[
5
],
[
6
],
[
7,
17
],
[
8
],
[
... | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"cor... | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"incorrect",
"incorrect",
"correct",
"incorrect",
"correct",
... | [
{
"sentences": [
{
"role": "Reviewer 1 Further Reply",
"data": [
0
]
},
{
"role": "Author",
"data": [
5,
6,
7
]
}
],
"category": [
"QUAL-MET"
]
},
{
"sentences": [
{
... | benchmark/PDF/ICLR2017_rJiNwv9gg.pdf | openreview | benchmark/MD/ICLR2017_rJiNwv9gg.md | ICLR 2017 |
HJTzHtqee | {
"title": "A Compare-Aggregate Model for Matching Text Sequences",
"abstract": "Many NLP tasks including machine comprehension, answer selection and text entailment require the comparison between sequences. Matching the important units between sequences is a key to solve these problems. In this paper, we present a... | Accept (Poster) | [
[
{
"role": "Reviewer",
"data": {
"value": {
"question": [
0,
1,
2,
3
]
}
}
},
{
"role": "Author",
"data": {
"value": {
"comment": [
4,
5,
... | [
[
{
"role": "PC",
"data": {
"comment": "This paper proposes a framework whereby, to an attention mechanism relating one text segment to another piecewise, an aggregation mechanism is added to yield an architecture matching words of one segment to another. Different vector comparison operatio... | [
"I'm having a hard time understanding Equation 1. You refer to a recurrent neural network, but the equations don't show any time index to recur over.",
"Are A and Q (bold capital letters, no bar) the raw word vectors, or the output of some earlier recurrent component that does preprocessing?",
"Do you actually ... | [
[
13
],
[
0
],
[
1
],
[
2,
5
],
[
11
],
[
15
],
[
6
],
[
8
],
[
3
],
[
7
],
[
12
],
[
9
],
[
4
],
[
10
],
[
14
]
] | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct"
] | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"incorrect"
] | [
{
"sentences": [
{
"role": "Reviewer 1 Further Reply",
"data": [
0
]
}
],
"category": [
"CLAR-NOT"
]
},
{
"sentences": [
{
"role": "Reviewer 1 Further Reply",
"data": [
1
]
},
{
"r... | benchmark/PDF/ICLR2017_HJTzHtqee.pdf | openreview | benchmark/MD/ICLR2017_HJTzHtqee.md | ICLR 2017 |
rkYmiD9lg | {
"title": "Exponential Machines",
"abstract": "Modeling interactions between features improves the performance of machine learning solutions in many domains (e.g. recommender systems or sentiment analysis). In this paper, we introduce Exponential Machines (ExM), a predictor that models all interactions of every or... | Invite to Workshop Track | [
[
{
"role": "Reviewer",
"data": {
"value": {
"question": [
0
]
}
}
},
{
"role": "Author",
"data": {
"value": {
"comment": [
1,
2
]
}
}
}
],
[
{
... | [
[
{
"role": "PC",
"data": {
"comment": "Modeling nonlinear interactions between variables by imposing Tensor-train structure on parameter matrices is certainly an elegant and novel idea. All reviewers acknowledge this to be the case. But they are also in agreement that the experimental secti... | [
"Is the tensor decomposition in figure 1 Tucker decomposition or Paraface decomposition?",
"Neither of both; it is a tensor-train decomposition. You can read more about it here: http://epubs.siam.org/doi/abs/10.1137/090752286",
"Shortly, it can be computed using SVD (PARAFAC can not), and does not have intrins... | [
[
18
],
[
23
],
[
0
],
[
11
],
[
13,
24
],
[
15
],
[
14
],
[
8
],
[
9
],
[
10
],
[
19
],
[
2
],
[
3
],
[
4
],
[
12
],
[
16
],
[
17
],
[
21
],
[
22
],
[
25
... | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"cor... | [
"incorrect",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"incorrect",
"correct",
"correct",
"correct",
... | [
{
"sentences": [
{
"role": "Reviewer 1 Further Reply",
"data": [
0
]
},
{
"role": "Author",
"data": [
1,
2
]
}
],
"category": [
"CLAR-WRT"
]
},
{
"sentences": [
{
"role":... | benchmark/PDF/ICLR2017_rkYmiD9lg.pdf | openreview | benchmark/MD/ICLR2017_rkYmiD9lg.md | ICLR 2017 |
r1Usiwcex | {
"title": "Counterpoint by Convolution",
"abstract": "Machine learning models of music typically break down the task of composition into a chronological process, composing a piece of music in a single pass from beginning to end. On the contrary, human composers write music in a nonlinear fashion, scribbling motifs... | Reject | [
[
{
"role": "Reviewer",
"data": {
"value": {
"question": [
0
]
}
}
},
{
"role": "Author",
"data": {
"value": {
"comment": [
1,
2,
3,
4,
5
... | [
[
{
"role": "PC",
"data": {
"comment": "This paper applies an existing idea (Yao's block Gibbs sampling of NADE) to a music model. There is also prior art for applying NADE to music. The main novel and interesting result is that block Gibbs sampling (an approximation) actually improves perfo... | [
"Will the authors release the code to github?",
"We started this work at Magenta[1] and as such the code will be checked into the Magenta github[2] soon.",
"However our (messy) research code is already available in our development repository[3].",
"[1] https://magenta.tensorflow.org",
"[2] https://github.co... | [
[
13
],
[
5
],
[
6,
14
],
[
9
],
[
1
],
[
3
],
[
10
],
[
12
],
[
15
],
[
8
],
[
0
],
[
2
],
[
11
],
[
4
],
[
7
]
] | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct"
] | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"incorrect",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct"
] | [
{
"sentences": [
{
"role": "Reviewer 1 Further Reply",
"data": [
0
]
},
{
"role": "Author",
"data": [
1,
2,
3,
4,
5
]
}
],
"category": [
"QUAL-REP"
]
},
{
... | benchmark/PDF/ICLR2017_r1Usiwcex.pdf | openreview | benchmark/MD/ICLR2017_r1Usiwcex.md | ICLR 2017 |
ByldLrqlx | {
"TL;DR": "",
"title": "DeepCoder: Learning to Write Programs",
"abstract": "We develop a first line of attack for solving programming competition-style problems from input-output examples using deep learning. The approach is to train a neural network to predict properties of the program that generated the outpu... | Accept (Poster) | [
[
{
"role": "Reviewer",
"data": {
"summary_of_the_paper": null,
"value": {
"review": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
... | [
[
{
"role": "PC",
"data": {
"comment": "This is a well written paper that attempts to craft a practical program synthesis approach by training a neural net to predict code attributes and exploit these predicted attributes to efficiently search through DSL constructs (using methods developed ... | [
"This paper presents an approach to learn to generate programs.",
"Instead of directly trying to generate the program, the authors propose to train a neural net to estimate a fix set of attributes, which then condition a search procedure. This is an interesting approach, which make sense, as building a generative... | [
[
9
],
[
14
],
[
0
],
[
1
],
[
2
],
[
3
],
[
4
],
[
5
],
[
7
],
[
11
],
[
12
],
[
13
],
[
15
],
[
6
],
[
8
],
[
10
],
[
16
]
] | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct"
] | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct"
] | [
{
"sentences": [
{
"role": "Reviewer 1",
"data": [
0,
1
]
}
],
"category": [
"ORIG-MTH"
]
},
{
"sentences": [
{
"role": "Reviewer",
"data": [
2
]
}
],
"category": [
"QUAL... | benchmark/PDF/ICLR2017_ByldLrqlx.pdf | openreview | benchmark/MD/ICLR2017_ByldLrqlx.md | ICLR 2017 |
HJjiFK5gx | {
"TL;DR": "",
"title": "Neural Program Lattices",
"abstract": "We propose the Neural Program Lattice (NPL), a neural network that learns to perform complex tasks by composing low-level programs to express high-level programs. Our starting point is the recent work on Neural Programmer-Interpreters (NPI), which ca... | Accept (Poster) | [
[
{
"role": "Reviewer",
"data": {
"value": {
"question": [
0
]
}
}
},
{
"role": "Author",
"data": {
"value": {
"comment": [
1,
2,
3
]
}
}
},
... | [
[
{
"role": "PC",
"data": {
"comment": "This paper demonstrates a novel (although somewhat obvious) extension to NPI, namely moving away from training exclusively on full traces (in order to model the nested calling of subprograms) to training, in part on low-level program traces. By exploit... | [
"The stack in this work resembles the one proposed by Mikolov et al (https://arxiv.org/abs/1503.01007). Was there a particular requirement that led to the choice of this type of stack, versus other neural stack/queue models, e.g. https://arxiv.org/abs/1506.02516?",
"For each timestep, our lattice maintains a grid... | [
[
16
],
[
12
],
[
0
],
[
1,
3,
4,
5,
6,
7,
8,
24
],
[
18
],
[
19
],
[
22
],
[
23
],
[
25
],
[
26
],
[
9,
13
],
[
17
],
[
11,
15
],
[
20
],
[
21
],
[
2
],
... | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"cor... | [
"correct",
"correct",
"correct",
"correct",
"correct",
"incorrect",
"incorrect",
"incorrect",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"incorrect",
"incorrect",
"incorrect",
"corr... | [
{
"sentences": [
{
"role": "Reviewer 1 Further Reply",
"data": [
0
]
},
{
"role": "Author",
"data": [
1,
2,
3
]
}
],
"category": [
"QUAL-CMP"
]
},
{
"sentences": [
{
... | benchmark/PDF/ICLR2017_HJjiFK5gx.pdf | openreview | benchmark/MD/ICLR2017_HJjiFK5gx.md | ICLR 2017 |
HycUbvcge | {
"title": "Deep Generalized Canonical Correlation Analysis",
"abstract": "We present Deep Generalized Canonical Correlation Analysis (DGCCA) – a method for learning nonlinear transformations of arbitrarily many views of data, such that the resulting transformations are maximally informative of each other. While me... | Reject | [
[
{
"role": "Reviewer",
"data": {
"summary_of_the_paper": null,
"value": {
"review": [
0,
1,
2,
3,
4,
5,
6,
7,
8
]
},
"scores": {
... | [
[
{
"role": "PC",
"data": {
"comment": "This is largely a clearly written paper that proposes a nonlinear generalization of a generalized CCA approach for multi-view learning. In terms of technical novelty, the generalization follows rather straightforwardly. Reviewers have expressed the nee... | [
"The proposed method is simple and elegant; it builds upon the huge success of gradient based optimization for deep non-linear function approximators and combines it with established (linear) many-view CCA methods. A major contribution of this paper is the derivation of the gradients with respect to the non-linear ... | [
[
3
],
[
4
],
[
1
],
[
10
],
[
9
],
[
6
],
[
13
],
[
7
],
[
8
],
[
15
],
[
2
],
[
5
],
[
0
],
[
12
],
[
14
],
[
11
]
] | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct"
] | [
"correct",
"correct",
"correct",
"incorrect",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct"
] | [
{
"sentences": [
{
"role": "Reviewer 1",
"data": [
0
]
}
],
"category": [
"QUAL-MET"
]
},
{
"sentences": [
{
"role": "Reviewer 1",
"data": [
1,
2
]
},
{
"role": "Author",... | benchmark/PDF/ICLR2017_HycUbvcge.pdf | openreview | benchmark/MD/ICLR2017_HycUbvcge.md | ICLR 2017 |
HkcdHtqlx | {
"TL;DR": "",
"title": "Gated-Attention Readers for Text Comprehension",
"abstract": "In this paper we study the problem of answering cloze-style questions over documents. Our model, the Gated-Attention (GA) Reader, integrates a multi-hop architecture with a novel attention mechanism, which is based on multiplic... | Reject | [
[
{
"role": "Reviewer",
"data": {
"value": {
"question": [
0,
1
]
}
}
},
{
"role": "Author",
"data": {
"value": {
"comment": [
2,
3,
4,
5,
... | [
[
{
"role": "PC",
"data": {
"comment": "The paper proposes several extensions to popular attention-enhanced models for cloze-style QA. The results are near state of the art, and an ablation study hows that the different features (multiplicative interaction, gating) contribute to the model's ... | [
"Hi,\nis there a specific reason why you used a character GRU instead of a convolutional neural network?",
"Do you have any intuitions on whether the two approaches are equivalent or one can be better than the other?",
"There is some previous work showing the effectiveness of recurrent neural networks for model... | [
[
18
],
[
6
],
[
17
],
[
10
],
[
0
],
[
2,
3
],
[
4
],
[
7
],
[
9
],
[
14,
23
],
[
15
],
[
19,
20
],
[
21
],
[
11
],
[
13
],
[
8
],
[
1
],
[
5
],
[
12
],
... | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct"
] | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"incorrect",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct"
] | [
{
"sentences": [
{
"role": "Reviewer 1 Further Reply",
"data": [
0,
1
]
},
{
"role": "Author",
"data": [
2,
3,
4,
5,
6
]
}
],
"category": [
"QUAL-EXP"
]... | benchmark/PDF/ICLR2017_HkcdHtqlx.pdf | openreview | benchmark/MD/ICLR2017_HkcdHtqlx.md | ICLR 2017 |
HkYhZDqxg | {
"title": "Tree-structured decoding with doubly-recurrent neural networks",
"abstract": "We propose a neural network architecture for generating tree-structured objects from encoded representations. The core of the method is a doubly-recurrent neural network that models separately the width and depth recurrences a... | Accept (Poster) | [
[
{
"role": "Reviewer",
"data": {
"value": {
"question": [
0,
1,
2
]
}
}
},
{
"role": "Author",
"data": {
"value": {
"comment": [
3,
4,
5,
... | [
[
{
"role": "PC",
"data": {
"comment": "The paper introduces a new model for generating trees decorated with node embeddings. Interestingly the authors do not assume that even leaf nodes in the tree are known a-priori. There has been very little work on this setting, and, the problem is quit... | [
"There are two reasons of not using EOS token in a tree decoder (P-4), while I am less convinced why the second is crucial. If it is crucial, do you think it will also help in a sequential decoder, as the argument on P-5 also holds for a sequential decoder -- *rationale behind this is that the label of a node will ... | [
[
2
],
[
3
],
[
22
],
[
1
],
[
4
],
[
6,
20
],
[
8
],
[
21
],
[
23,
24
],
[
25
],
[
0,
18
],
[
7
],
[
9
],
[
12
],
[
13,
14,
15,
16,
17
],
[
10
],
[
26
],... | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"incorrect",
"c... | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"cor... | [
{
"sentences": [
{
"role": "Reviewer 1 Further Reply",
"data": [
0
]
},
{
"role": "Author",
"data": [
3,
4,
5,
6,
7
]
}
],
"category": [
"QUAL-MET"
]
},
{
... | benchmark/PDF/ICLR2017_HkYhZDqxg.pdf | openreview | benchmark/MD/ICLR2017_HkYhZDqxg.md | ICLR 2017 |
rJ0JwFcex | {
"title": "Neuro-Symbolic Program Synthesis",
"abstract": "Recent years have seen the proposal of a number of neural architectures for the problem of Program Induction. Given a set of input-output examples, these architectures are able to learn mappings that generalize to new test inputs. While achieving impressiv... | Accept (Poster) | [
[
{
"role": "Reviewer",
"data": {
"summary_of_the_paper": null,
"value": {
"review": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
... | [
[
{
"role": "AC",
"data": {
"comment": "Dear reviewers, do you have any reactions after the authors responded to your reviews?"
}
}
],
[
{
"role": "PC",
"data": {
"comment": "There is a bit of a spread in the reviewer scores and unfortunately it wasn't p... | [
"This paper proposes a model that is able to infer a program from input/output example pairs, focusing on a restricted domain-specific language that captures a fairly wide variety of string transformations, similar to that used by Flash Fill in Excel. The approach is to model successive “extensions” of a program t... | [
[
8
],
[
4
],
[
12
],
[
16,
17
],
[
20
],
[
25
],
[
9
],
[
21
],
[
2,
10
],
[
3
],
[
6
],
[
13
],
[
19
],
[
22
],
[
23
],
[
24
],
[
18
],
[
5
],
[
7
],
[
... | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"cor... | [
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"correct",
"incorrect",
"correct",
"correct",
"incorrect",
"correct",
"correct",
"correct",
"correct",
"correct",
"incorrect",
"correct",
... | [
{
"sentences": [
{
"role": "Reviewer",
"data": [
0
]
}
],
"category": [
"N/A"
]
},
{
"sentences": [
{
"role": "Reviewer 1",
"data": [
1,
2
]
}
],
"category": [
"SIGN-DOM"... | benchmark/PDF/ICLR2017_rJ0JwFcex.pdf | openreview | benchmark/MD/ICLR2017_rJ0JwFcex.md | ICLR 2017 |