paper_id: string (length 10 to 19)
venue: string (15 classes)
focused_review: string (length 7 to 9.67k)
point: string (length 55 to 634)
ICLR_2022_1067
ICLR_2022
As the term “online” appears in the title of the paper, this modality should be better introduced and motivated. A clear explanation appears in the related work at the end of the continual learning paragraph. However, the term is never properly defined and seems to be related to the problem of imbalance. What exac...
3) of this review, the MNIST family datasets typically do not “transfer” their performance to more complex and bigger datasets. In other words, using the gradient on MNIST does not imply that the gradient on CIFAR or bigger datasets is a good choice. Is the gradient computationally expensive for deeper neural networks?...
ICLR_2021_1603
ICLR_2021
Weakness: 1.The authors claimed that compared to Diakonikolas 19, they improved the error from eps to sqrt(eps). However, the eps result relies on the fact that the gradient of good data has bounded norm, and I believe in that setting Diakonikolas 19 also achieves eps error. 2. In paragraphs close to Lemma 1 and Lemma ...
1.The authors claimed that compared to Diakonikolas 19, they improved the error from eps to sqrt(eps). However, the eps result relies on the fact that the gradient of good data has bounded norm, and I believe in that setting Diakonikolas 19 also achieves eps error.
NIPS_2018_30
NIPS_2018
- As mentioned in section 3.2, MetaAnchor does not show significant improvement for two-stage anchor-based object detection. - The experiment evaluation is only done for one method and on one dataset. It is not very convincing that MetaAnchor is able to work with most of the anchor-based object detection system. Rebutt...
- As mentioned in section 3.2, MetaAnchor does not show significant improvement for two-stage anchor-based object detection.
NIPS_2020_310
NIPS_2020
The main weakness of the paper is its lack of focus, which is most evident in empirical evaluations and theoretical results that don’t seem relevant to the main ideas of the paper. I don’t think this is because the empirical and theoretical results are not relevant, but because the paper emphasizes the wrong aspects of...
- In terms of organization, it seems odd that Theorem 3.1 is introduced on page 3, but is not referenced until page 6 after Proposition 1. It would be easier on the reader to have these two results close together.
NIPS_2022_2367
NIPS_2022
1) Since this method needs a forward-backward training process, does it require more time to train the network? How many extra parameters are introduced in the newly proposed method compared with the previously proposed method VolMinNet[13]? 2) This paper states that it tries to estimate the transition matrix under the...
1) Since this method needs a forward-backward training process, does it require more time to train the network? How many extra parameters are introduced in the newly proposed method compared with the previously proposed method VolMinNet[13]?
NIPS_2022_1719
NIPS_2022
1. The new iteration complexity results for nonlinear SA hold under the smoothness assumption on the fixed point. While the paper has justified it in several applications, it does not improve the existing complexity of SA without this smoothness assumption. 2. The paper can do a better job on discussing and highlightin...
1. The new iteration complexity results for nonlinear SA hold under the smoothness assumption on the fixed point. While the paper has justified it in several applications, it does not improve the existing complexity of SA without this smoothness assumption.
ACL_2017_494_review
ACL_2017
- fairly straightforward extension of existing retrofitting work - would be nice to see some additional baselines (e.g. character embeddings) - General Discussion: The paper describes "morph-fitting", a type of retrofitting for vector spaces that focuses specifically on incorporating morphological constraints into the ...
- Comments for Authors1) I don't really understand the need for the morph-simlex evaluation set. It seems a bit suspect to create a dataset using the same algorithm that you ultimately aim to evaluate. It seems to me a no-brainer that your model will do well on a dataset that was constructed by making the same assumpti...
ICLR_2022_1789
ICLR_2022
Strengths Clear and thoughtful discussion of the motivation, namely the importance of grounded language learning and continual learning. I enjoyed reading it! Straightforward description of the main technical contribution Weaknesses As presented, I'm not sure if the presented approach has significant technical novelty. T...
4: Regarding the function A(t) = βt + m, could the authors give a brief explanation for their choice of an affine transformation function here? Was this motivated by some previously observed property of the CLIP embedding space? Why should an affine transform be preferred over any other non-linear transform, e.g. quadr...
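To ground question 4 above: an affine correction between embedding spaces is typically fit by least squares on paired points. The following is a minimal sketch on synthetic data; the dimensions, constants, and setup are illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical paired embeddings, assumed to differ by an unknown scalar
# affine map A(t) = beta*t + m plus small noise.
d, n = 8, 200
beta_true, m_true = 1.7, 0.3
src = rng.normal(size=(n, d))
tgt = beta_true * src + m_true + 0.01 * rng.normal(size=(n, d))

# Fit beta and m jointly by ordinary least squares over all coordinates.
x = src.ravel()
design = np.stack([x, np.ones_like(x)], axis=1)
(beta_hat, m_hat), *_ = np.linalg.lstsq(design, tgt.ravel(), rcond=None)

print(round(float(beta_hat), 2), round(float(m_hat), 2))
```

Whether a non-linear map (e.g. quadratic) fits better could then be checked by comparing residuals of the two fits, which is essentially what the question asks the authors to justify.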
NIPS_2017_631
NIPS_2017
1. The main contribution of the paper is CBN. But the experimental results in the paper are not advancing the state-of-art in VQA (on the VQA dataset which has been out for a while and a lot of advancement has been made on this dataset), perhaps because the VQA model used in the paper on top of which CBN is applied is ...
5. Figure 4 visualization: the visualization in figure (a) is from ResNet which is not finetuned at all. So, it is not very surprising to see that there are not clear clusters for answer types. However, the visualization in figure (b) is using ResNet whose batch norm parameters have been finetuned with question informa...
NIPS_2018_902
NIPS_2018
Weakness 1) The technical contribution is limited. Throughout this work, there is no insightful technical contribution in terms of either algorithm or framework. 2) The experimental setting is problematic. There are no evaluation criteria and no compared methods, so we cannot know the superiority of the proposed com...
1) The technical contribution is limited. Throughout this work, there is no insightful technical contribution in terms of either algorithm or framework.
1N5Ia3KLX8
EMNLP_2023
- Overall, it is difficult to understand how the whole framework actually works (lack of a pseudocode or a schema). After pretraining, a GMM is estimated (with Expectation-Maximization? This is not specified). Then the probabilities from the GMM are used in a loss function to learn thresholds ξ. Later, somehow cross-entropy...
- Lack of some ablation studies, e.g. how well the method works without N-pair loss pretraining?
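As a reconstruction of the pipeline this review describes (pretrain, fit a GMM, use its probabilities in a loss), here is a minimal EM fit of a two-component 1-D GMM. The data and initialization are invented for illustration, and EM itself is an assumption, since the review notes the paper does not name the estimator:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D scores drawn from two populations
# (e.g. clean vs. noisy examples after pretraining).
x = np.concatenate([rng.normal(-2.0, 0.5, 300), rng.normal(2.0, 0.5, 300)])

def normal_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Initial guesses for means, std devs, and mixture weights.
mu = np.array([-1.0, 1.0])
sigma = np.array([1.0, 1.0])
pi = np.array([0.5, 0.5])

for _ in range(50):
    # E-step: posterior responsibility of each component for each point.
    dens = pi * normal_pdf(x[:, None], mu, sigma)   # (n, 2)
    resp = dens / dens.sum(axis=1, keepdims=True)
    # M-step: re-estimate weights, means, and standard deviations.
    nk = resp.sum(axis=0)
    pi = nk / len(x)
    mu = (resp * x[:, None]).sum(axis=0) / nk
    sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)

print(np.sort(mu))
```

The per-point posteriors `resp[:, k]` are the "probabilities from the GMM" that a downstream loss could consume to learn the thresholds ξ.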
ICLR_2023_2048
ICLR_2023
1. Although the authors mentioned that they proposed a robust DCRRNN model to achieve the goal of accurate streamflow prediction, the detailed model architecture is not explained in the article. It should be clarified in the methodology section. 2. The authors only use a bunch of quantitative analysis to show the effec...
1. Although the authors mentioned that they proposed a robust DCRRNN model to achieve the goal of accurate streamflow prediction, the detailed model architecture is not explained in the article. It should be clarified in the methodology section.
ICLR_2021_1785
ICLR_2021
The paper is not clearly written, making it hard to follow. There are two main reasons. First, the motivation of directed spanning set (DSS) is not clear. When Wx=y has multiple solutions or Wx<0 has solutions, ReLU(Wx) will preclude injectivity, where y>=0 is a constant. Are the two cases related to DSS? Second, t...
1. In the first paragraph of subsection 2.3, ReLU(x)-ReLU(y) should be ReLU(Wx)-ReLU(Wy) .
NIPS_2019_1089
NIPS_2019
- The paper can be seen as incremental improvements on previous work that has used simple tensor products to represent multimodal data. This paper largely follows previous setups but instead proposes to use higher-order tensor products. ****************************Quality**************************** Strengths: - T...
- How can the model be modified to remain useful when there are noisy or missing modalities?
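For readers unfamiliar with the distinction this review draws, the sketch below contrasts simple (second-order) tensor-product fusion with a higher-order variant; the feature vectors and modality names are made up for illustration:

```python
import numpy as np

# Two unimodal feature vectors (e.g. language and audio), each augmented
# with a constant 1 so the fused tensor retains the unimodal features too.
h_lang = np.array([0.2, 0.5, 0.1])
h_audio = np.array([0.7, 0.3])

z_lang = np.concatenate([h_lang, [1.0]])    # shape (4,)
z_audio = np.concatenate([h_audio, [1.0]])  # shape (3,)

# Second-order fusion: the outer product captures all pairwise interactions.
fused2 = np.outer(z_lang, z_audio)          # shape (4, 3)

# Higher-order fusion with a third modality takes another outer product,
# so the fused tensor (and any dense layer on top) grows multiplicatively.
z_video = np.concatenate([np.array([0.4]), [1.0]])          # shape (2,)
fused3 = np.einsum('i,j,k->ijk', z_lang, z_audio, z_video)  # shape (4, 3, 2)

print(fused2.shape, fused3.shape)
```

The multiplicative growth of `fused3` is exactly why higher-order fusion methods need low-rank or factorized parameterizations in practice.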
7vVWiCrFnd
ICLR_2024
1. The first paragraph is problematic. “implicitly assume that node representations learnt by GNNs are independent conditioned on node features and edges, thereby ignoring the joint dependency among nodes...” does not represent the related works accurately. These works do not ignore dependency among the nodes, which is...
1. The introduction of phantom nodes and edges is not a novel development and closely resembles the other methods like CIN. However, the problem of inefficiency remains in such methods i.e. the computational complexity of finding maximal cliques which can be used for phantom nodes to guarantee the inferential capacity.
5FXKgOxmb2
ICLR_2025
1. One of the main limitations of fragment-based methods is the absence of synthesizability considerations in the framework design. This significantly limits the applicability of such methods since, to be tested in physical and biological assays, the proposed molecules either require individual and expansive custom synt...
1. One of the main limitations of fragment-based methods is the absence of synthesizability considerations in the framework design. This significantly limits the applicability of such methods since, to be tested in physical and biological assays, the proposed molecules either require individual and expansive custom synt...
XNnFTKCacy
EMNLP_2023
1. The motivation is somewhat unclear to me. Generally, the authors claim that previous efforts cannot handle entity interaction regarding topic coherence and category coherence. The authors could provide more intuitive examples by comparing existing methods, e.g., generative entity linking techniques. 2. The performanc...
2. The performance improvements are evident, but whether they are from better topic and category coherence is not verified. Figures 4 and 5's visualization just prove that the same or similar topics/categories will generate grouped embeddings, and more in-depth analyses are needed from the perspective of coherence.
NIPS_2017_71
NIPS_2017
- The paper is a bit incremental. Basically, knowledge distillation is applied to object detection (as opposed to classification as in the original paper). - Table 4 is incomplete. It should include the results for all four datasets. - In the related work section, the class of binary networks is missing. These networks...
* XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks, ECCV 2016 * Binaryconnect: Training deep neural networks with binary weights during propagations, NIPS 2015 Overall assessment: The idea of the paper is interesting. The experiment section is solid. Hence, I recommend acceptance of the pape...
NIPS_2022_2646
NIPS_2022
Weakness 1. Overclaims: several claims are not evaluated. There are many claims in this paper and I fail to find support for some of them. Thus, this paper could have some overclaims: In L36, how to evaluate that MiRe does not have the "data-dependent" spurious correlations? I fear that MiRe is still a data-driven ...
6. The second part of the method is too complicated and computationally expensive for multiple domains. Thus I fear the comparison to other methods is not fair. Regarding the reproducibility, this approach has introduced many extra hyperparameters to be tuned, to name a few, the hyperparameters in Grad-Cam, threshold i...
SnFmGmKTn1
EMNLP_2023
- The work is partially motivated by the challenges faced by triple-based approaches when dealing with long-tail entities. However, the manuscript does not highlight any particular considerations for long-tail triples, and I understand that any performance improvements for such cases (presented in Section 4.4) can be m...
- The work is partially motivated by the challenges faced by triple-based approaches when dealing with long-tail entities. However, the manuscript does not highlight any particular considerations for long-tail triples, and I understand that any performance improvements for such cases (presented in Section 4.4) can be m...
ICLR_2022_2323
ICLR_2022
Weakness: 1. The literature review is inaccurate, and connections to prior works are not sufficiently discussed. To be more specific, there are three connections, (i) the connection of (1) to prior works on multivariate unlabeled sensing (MUS), (ii) the connection of (1) to prior works in unlabeled sensing (US), and (i...
2. I find Assumption 1 not very intuitive; and it is unclear to me why "otherwise the influence of the permutation will be less significant". Is it that the unknown permutation is less harmful if the magnitudes of A and B are close?
ACL_2017_494_review
ACL_2017
- fairly straightforward extension of existing retrofitting work - would be nice to see some additional baselines (e.g. character embeddings) - General Discussion: The paper describes "morph-fitting", a type of retrofitting for vector spaces that focuses specifically on incorporating morphological constraints into the ...
2) I really liked the morph-fix baseline, thank you for including that. I would have liked to see a baseline based on character embeddings, since this seems to be the most fashionable way, currently, to side-step dealing with morphological variation. You mentioned it in the related work, but it would be better to actua...
ICLR_2021_853
ICLR_2021
are listed as follows: Strengths: 1). The authors propose a simple but efficient indicator-free method to prevent skip connection from dominating the superNet. They also demonstrate the effectiveness of the auxiliary branch from the view of gradient flow and the convergence of network weight. 2). Extensive experiments on m...
1). The authors propose a simple but efficient indicator-free method to prevent skip connection from dominating the superNet. They also demonstrate the effectiveness of the auxiliary branch from the view of gradient flow and the convergence of network weight.
ICLR_2022_2726
ICLR_2022
1. One of the key points in SpaceMAP is to define the similarity based on EEDs, as given in (8) and (10). However, it is not explained why the EED used in (8) is inversed. If α_t and β_t are factors from the ambient space to the intrinsic space, then the EED in (8) should be α_t R_{ij} β_t, as given in the paragraph af...
2. I find the definition of d_global and d_local not clear. In the second to last paragraph in section 1, d_global is the dimension of the manifold, and d_local is the dimension of a local neighborhood. However, for a manifold its dimension is defined as the dimension of an open subset that a neighborhood around each p...
NIPS_2017_65
NIPS_2017
1) the evaluation is weak; the baselines used in the paper are not even designed for fair classification 2) the optimization procedure used to solve the multi-objective optimization problem is not discussed in adequate detail Detailed comments below: Methods and Evaluation: The proposed objective is interesting and uti...
3. In Section 4, more details about the choice of train/test splits need to be provided (see above). While this paper proposes a useful framework that can handle multiple notions of fairness, there is scope for improving it quite a bit in terms of its experimental evaluation and discussion of some of the technical deta...
NIPS_2019_1089
NIPS_2019
- The paper can be seen as incremental improvements on previous work that has used simple tensor products to represent multimodal data. This paper largely follows previous setups but instead proposes to use higher-order tensor products. ****************************Quality**************************** Strengths: - T...
- The paper performs good empirical analysis. They have been thorough in comparing with some of the existing state-of-the-art models for multimodal fusion including those from 2018 and 2019. Their model shows consistent improvements across 2 multimodal datasets.
NIPS_2021_37
NIPS_2021
, * Typos/Comments) Overall, I like and value the research topic and motivation of this paper and lean positive. However, some details are not clear enough. I would update my rating depending on the authors' feedback. The details are as follows. + Interesting and important research problem. This paper focuses on how to...
- Table 4 reported a much lower performance of "swapping" on BAR compared to the other three datasets. Is there any explanation for this, like the difference of datasets?
ARR_2022_110_review
ARR_2022
1. My biggest concern is that the analysis doesn't provide insights on WHICH language benefits WHICH other language in a multilingual setup. The only comparison provided is mono-vs-tri-lingual, but we should also compare vs bi-lingual (For example -- comparing En+Hi, En+Te, Hi+Te vs En+Hi+Te). This analysis may point t...
7. Are LSTMs a better choice for the navigation step? Would a transformer/attention mechanism be better suited for learning longer sequences? This architectural choice should be justified.
NIPS_2017_201
NIPS_2017
++++++++++ Novelty/Significance: The reformulation of the robust regression problem (Eq 6 in the paper) shows that robust regression is reducible to standard k-sparse recovery. Therefore, the proposed CRR algorithm is basically the well-known IHT algorithm (with a modified design matrix), and IHT has been (re)introduce...
- Not entirely clear to me why one would need a 2-stage analysis procedure since the algorithm does not change. Some intuition in the main paper explaining this would be good (and if this two-stage analysis is indeed necessary, then it would add to the novelty of the paper). +++++++++ Update after authors' response +++...
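Since the point above argues that CRR is essentially the well-known IHT algorithm on a modified design matrix, a minimal IHT sketch for noiseless k-sparse recovery may help; the problem sizes, step size, and iteration count are illustrative choices, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(1)

# Noiseless k-sparse recovery: y = X @ w_true with a well-conditioned design.
n, d, k = 500, 20, 3
X = rng.normal(size=(n, d)) / np.sqrt(n)
w_true = np.zeros(d)
w_true[[2, 7, 11]] = [1.5, -2.0, 0.8]
y = X @ w_true

def hard_threshold(v, k):
    # Projection onto k-sparse vectors: keep the k largest-magnitude entries.
    out = np.zeros_like(v)
    keep = np.argsort(np.abs(v))[-k:]
    out[keep] = v[keep]
    return out

# Iterative Hard Thresholding: gradient step on ||y - Xw||^2, then project.
w = np.zeros(d)
eta = 1.0
for _ in range(500):
    w = hard_threshold(w + eta * X.T @ (y - X @ w), k)

print(np.nonzero(w)[0])
```

Under the reduction the review describes, CRR would run this same iteration with the design matrix replaced by a transformed one, which is why the analysis rather than the algorithm carries the novelty.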
zpayaLaUhL
EMNLP_2023
- Limited Experiments - Most of the experiments (excluding Section 4.1.1) are limited to RoBERTa-base only, and it is unclear if the results can be generalized to other models adopting learnable APEs. It is important to investigate whether the results can be generalized to differences in model size, objective function,...
- I do not really agree with the argument in Section 5 that word embedding contributes to relative position-dependent attention patterns. The target head is in layer 8, and the changes caused by large deviations from the input, such as only position embedding, are quite large at layer 8. It is likely that the behavior ...
NIPS_2016_9
NIPS_2016
Weakness: The authors do not provide any theoretical understanding of the algorithm. The paper seems to be well written. The proposed algorithm seems to work very well on the experimental setup, using both synthetic and real-world data. The contributions of the paper are enough to be considered for a poster presentatio...
1. The paper does not provide an analysis on what type of data the algorithm works best and on what type of data the algorithm may not work well.
NIPS_2021_537
NIPS_2021
Weakness: The main weakness of the approach is the lack of novelty. 1. The key contribution of the paper is to propose a framework which gradually fits the high-performing sub-space in the NAS search space using a set of weak predictors rather than fitting the whole space using one strong predictor. However, this high-...
5. Some intuitions are provided in the paper on what I commented in Point 3 in Weakness above. However, more thorough experiments or theoretical justifications are needed to convince potential users to use the proposed heuristic (a simplified version of BO) rather than the original BO for NAS.
ICLR_2021_1189
ICLR_2021
weakness of the paper is its experiments section. 1. Lack of large scale experiments: The models trained in the experiments section are quite small (80 hidden neurons for the MNIST experiments and a single convolutional layer with 40 channels for the SVHN experiments). It would be nice if there were at least some exper...
2. D^{+} used in Condition 1 is used before it’s defined in Condition 2.
NIPS_2019_104
NIPS_2019
Despite the great technical material covered in the paper, its choice of organization makes the paper hard to follow; see detailed comments below. In addition, some related and recent literature on regret minimization in RL seems to be missing. Besides, I have some technical comments, which I detail below. 1. Organiza...
Despite the great technical material covered in the paper, its choice of organization makes the paper hard to follow; see detailed comments below. In addition, some related and recent literature on regret minimization in RL seems to be missing. Besides, I have some technical comments, which I detail below.
ICLR_2021_2838
ICLR_2021
1. The baselines are too weak as none of them are designed to specifically handle oriented bounding boxes. Even if comparisons against works such as [1]-[4] can’t be made, I would at least want to see comparisons against “sequence-based” models that directly output OBBs.
ICLR_2022_999
ICLR_2022
(I'll expand on this part a bit so that the authors can address the issues): Although the analysis via kGLM is novel, the perspective of considering DEQ iterations from the angle of classical optimization problems is not. For instance, the monotone DEQ paper (sort of) already implies this connection; i.e., the existenc...
1: "Do commonly implemented DEQs correspond with any optimization problem?" While the paper addresses the problem of convolution by considering the circulant matrix form of the filters, with the formulation in Appendix E, wouldn't the convolutional kernel a) have symmetric weights; and b) be prohibited from performing ...
NIPS_2016_39
NIPS_2016
One could eventually object that adversarial domain adaptation is not new, and neither are projections into shared and private spaces and orthogonality constraints. However, these are minor points. I still think that the whole package is sufficiently novel even for a high-level conference such as NIPS. I am also wondering...
- In equation (5), I think the loss should be HH^T and not H^T H if orthogonality is supposed to be favored and features are rows.
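The reviewer's claim about equation (5) can be checked mechanically: with features stored as rows, orthogonality among the learned features shows up in the row Gram matrix H Hᵀ, not in Hᵀ H. A small numeric illustration (the matrix is invented for this sketch):

```python
import numpy as np

# H stores features as rows (2 features, 3 samples). The rows are orthogonal
# by construction, but the sample columns are not.
H = np.array([[1.0, 1.0, 0.0],
              [1.0, -1.0, 2.0]])

row_gram = H @ H.T   # (2, 2): inner products between feature rows
col_gram = H.T @ H   # (3, 3): inner products between sample columns

# Off-diagonal mass is what an orthogonality-favoring loss would penalize.
off_row = row_gram - np.diag(np.diag(row_gram))
off_col = col_gram - np.diag(np.diag(col_gram))

print(np.allclose(off_row, 0), np.allclose(off_col, 0))  # prints: True False
```

So under a rows-are-features convention, penalizing the off-diagonal of H Hᵀ targets feature orthogonality, which is consistent with the reviewer's suggested correction.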
aURCCzSuhc
EMNLP_2023
1. The paper assumes that the relationship between new and old categories (non-overlapping, subclasses, etc.) is known. However, in reality, a category relationship judgment module is also needed. 2. The theoretically more optimized PLM algorithm is similar in performance to simple cross-annotation and does not show a ...
1. The paper assumes that the relationship between new and old categories (non-overlapping, subclasses, etc.) is known. However, in reality, a category relationship judgment module is also needed.
NIPS_2018_888
NIPS_2018
weakness of the paper is that some parts are a bit vague or unclear, and that it contains various typos or formatting issues. I shall provide details in the following list; typos I found will be listed afterwards. 1. In the section about related work, I was wondering whether there is no related literature from the robu...
18: missing comma between „Here“ and „the effort...“ -- The dollar sign (used in the context of the ski rental problem) somehow looks awkward; actually, could you not simply refrain from using a specific currency? -- Regarding preemptions, I think „resume“ is more common than „restart“ (ll. 54, 186) -- ...
ICLR_2023_3875
ICLR_2023
The approach is rather complicated, involving multiple encoders, deep feature losses, and a final vocoder. This premise mentioned in the abstract is absolutely incorrect: "Existing approaches model speech, potentially of multiple speakers, for denoising. Such approaches have an inherent drawback as a separate model is ...
6 (1984): 1109-1121. Ephraim, Yariv, and David Malah. "Speech enhancement using a minimum mean-square error log-spectral amplitude estimator." IEEE transactions on acoustics, speech, and signal processing 33, no.
ICLR_2023_3377
ICLR_2023
Weakness: 1) The time complexity of the proposed method is not clearly analyzed. In the experiments, why is the proposed method not verified on a large-scale dataset? 2) In the proposed method, a querying strategy that encourages diversity with k-means++ is proposed. However, the main contributions of this strategy are not ...
3) In the proposed method, equation (3) mainly deals with binary classification. How does the proposed method deal with the multi-class classification setting?
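To make point 2) of the weaknesses concrete: k-means++ seeding picks each next point with probability proportional to its squared distance from the points already chosen, which is what makes it a diversity-encouraging query strategy. A minimal sketch on synthetic data (all names and data here are illustrative, not the paper's):

```python
import numpy as np

def kmeanspp_select(X, k, rng):
    """Select k diverse points via k-means++ seeding: each next index is
    sampled with probability proportional to its squared distance to the
    closest already-selected point."""
    idx = [rng.integers(len(X))]
    for _ in range(k - 1):
        d2 = np.min(((X[:, None, :] - X[idx][None, :, :]) ** 2).sum(-1), axis=1)
        idx.append(rng.choice(len(X), p=d2 / d2.sum()))
    return idx

rng = np.random.default_rng(0)
# Hypothetical pool of unlabeled feature vectors in three well-separated
# clusters; in active learning these would be model embeddings.
centers = np.repeat(np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]]),
                    [200, 150, 150], axis=0)
X = centers + rng.normal(size=(500, 2))

selected = kmeanspp_select(X, 3, rng)
print(len(selected))
```

Already-selected points have distance zero and therefore probability zero, so the k selected indices are always distinct.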
NIPS_2016_232
NIPS_2016
weakness of the suggested method. 5) The literature contains other improper methods for influence estimation, e.g. 'Discriminative Learning of Infection Models' [WSDM 16], which can probably be modified to handle noisy observations. 6) The authors discuss the misestimation of mu, but as it is the proportion of missing ...
7) As noted, the assumption of random missing entries is not very realistic. It would seem worthwhile to run an experiment to see how this assumption affects performance when the data is missing due to more realistic mechanisms.
ICLR_2023_314
ICLR_2023
1. The manuscript does not include the analysis of the computational complexities of the proposed method. 2. Regarding the experiments. I think it would be better to include a table summarizing the data sets, e.g., the input dimension, the number of samples. I also think that it would be better if the previous work, Wa...
1. The manuscript does not include the analysis of the computational complexities of the proposed method.
NIPS_2020_29
NIPS_2020
* Relevant to a minority of the NeurIPS community * The experimental section - limited to on-line regression problems - is a bit disappointing. The experimental part should have addressed true and realistic contextual bandits problems (e.g. all the concrete examples and cases cited in the second paragraph of the introdu...
* Relevant to a minority of the NeurIPS community * The experimental section - limited to on-line regression problems - is a bit disappointing. The experimental part should have addressed true and realistic contextual bandits problems (e.g. all the concrete examples and cases cited in the second paragraph of the introdu...
NIPS_2022_1948
NIPS_2022
"Shaping representations" and "leveraging the learned representations" is what loss-based approaches (IsoMax, Scaled Cosine, SNGP, DUQ, and IsoMax+) have been doing since 2019. The fact that these loss-based approaches for OOD detection were not even cited may explain why the paper understands "shaping representation...
3. Very limited novelty. Considering that point #1 above showed that leveraging representations/distributions learned during training to perform OOD detection has already been proposed. Moreover, noticing that point #2 above showed that unit hypersphere representations have also already been proposed, we conclude that ...
7oaWthT9EO
ICLR_2025
1. The main contributions in this paper, i.e. discretization and persistent training, are common tricks for training WGANs[1,2], which are not novel enough in practical implementation. For example, as is shown in the proof of Proposition 4.1, persistent training seems equal to just increasing the generator's iterations...
2. Obtaining Kantorovich potential is a challenging and important step in ODE-based WGAN's training, but in this paper, it's still the same as the original WGAN, leaving some problems for persistent training as discussed in Remark 4.2, harming the consistency between theory and practice.
NIPS_2021_2257
NIPS_2021
- Missing supervised baselines. Since most experiments are done on datasets of scale ~100k images, it is reasonable to assume that full annotation is available for a dataset at this scale in practice. Even if it isn’t, it’s an informative baseline to show where these self-supervised methods stand compared to a fully ...
- The discussion in section 3 is interesting and insightful. The authors compared training datasets such as object-centric versus scene-centric ones, and observed different properties that the model exhibited. One natural question is then what would happen if a model is trained on combined datasets. Can the SSL ...
h57gkDO2Yg
ICLR_2024
Despite the overall positive impression, I see several weaknesses: - In the experiments the authors distill the data sets into 1000-2000 examples, for self-supervised learning, without augmentation. The authors do not comment on augmentations when training on the distilled data. This approach might work for the small m...
- Some baselines might be weak; for example MobileNet and ResNet10 from scratch get < 4% accuracy on Cars. Minor comments:
CMMpcs9prj
ICLR_2025
1. The improvement in the theoretical convergence result is not significant. Compared to CEDAS, it seems that the only improvement is removing the need for an additional unbiased compressor. To better illustrate this improvement, it is expected to validate whether using contractive compressors is more efficient than using...
2. The numerical experiments are not persuasive enough. The compared baselines are Choco-SGD and BEER, which are in 2022 or earlier, and their convergence rate is clearly worse than SOTA as illustrated in Table 1. In contrast, CEDAS that seems closer to SOTA convergence rate is not compared. Maybe the authors can make ...
ICLR_2021_1193
ICLR_2021
(cons) 1. The authors should compare their work with GNNs with non-local operations, e.g., LatentGNN [1]. The paper also studies the limitations of local GNNs (not specifically LUMP) but the resulting model is similar to memory augmented GNNs and it has skip connections and augmented by convolution in the latent node s...
3. Depending on the edge weights, the models may behave differently. The handcrafted edge weights from the truncated diffusion matrix naturally raise the question of whether they are necessary to show the effectiveness of the proposed technique. Question.
NIPS_2020_770
NIPS_2020
I'm not sure how readable this is to people unfamiliar with homomorphic encryption. Unfortunately, with the low page limit, it may be that such material is not possible to present at NeurIPS, and/or is simply outside the scope of the conference. There are a couple of issues I have with the paper: - I didn't see da...
- You mention that HEAAN supports floating point computations better. Is there a reason it was not used, instead of BGV?
NIPS_2017_330
NIPS_2017
- Section 4 is very tersely written (maybe due to limitations in space) and could have benefited from a slower development for an easier read. - Issues of convergence, especially when applying gradient descent over a non-Euclidean space, are not addressed. In all, a rather thorough paper that derives an efficient way to...
- Issues of convergence, especially when applying gradient descent over a non-Euclidean space, are not addressed. In all, a rather thorough paper that derives an efficient way to compute gradients for optimization on LDSs modeled using extended subspaces and kernel-based similarity. On the one hand, this leads to improvement...
NIPS_2018_756
NIPS_2018
It looks complicated to assess the practical impact of the paper. On the one hand, the thermodynamic limit and the Gaussianity assumption may be hard to check in practice and it is not straightforward to extrapolate what happens in the finite dimensional case. The idea of identifying the problem's phase transitions is ...
- Given an observed tensor, is it possible to determine the particular phase it belongs to? [1] Rong Ge and Tengyu Ma, 2017, On the Optimization Landscape of Tensor Decompositions
0C5C70C3n8
EMNLP_2023
1. The improvement of the proposed method over the baseline in terms of automatic evaluation metrics is not obvious, and further validation of the effectiveness of the proposed method in terms of producing informative summaries is needed. 2. In the manual evaluation, the authors counted the entity hallucinations, synta...
2. In the manual evaluation, the authors counted the entity hallucinations, syntactic agreement errors, and misspelling errors respectively, why not further classify the intrinsic entity hallucinations more finely into the two intrinsic hallucinations proposed to be solved in this paper: i.e. the entity-entity hallucin...
ACL_2017_108_review
ACL_2017
The problem itself is not really well motivated. Why is it important to detect China as an entity within the entity Bank of China, to stay with the example in the introduction? I do see a point for crossing entities, but what is the use case for nested entities? This could be better motivated to make the reader inter...
- "In GENIA dataset": should be "On the GENIA dataset"
- "outperforms by about 0.4 point_s_": I would not call that "outperform"
- "that _the_ GENIA dataset"
- "this low recall": which one?
ICLR_2023_217
ICLR_2023
1) In Figure 1 and Figure 5, what does the radian of the curve represent? 2) The α in Figure 2 is not explained in the main paper. 3) Does the model pre-trained on public data introduce security issues for users? How do we make trade-offs between security and performance?
2) The α in Figure 2 is not explained in the main paper.
aHmNpLlUlb
ICLR_2024
1. The paper is hard to read. It is unclear what the input to the new meeting components is. 2. It is hard to understand the general idea of the model, and Fig. 2 is completely unclear. 3. General formulas (3) and (4) are unclear. 4. In the experimental section we do not have experiments with the ShapeNet-based dataset (see pix...
2. It is hard to understand the general idea of the model, and Fig. 2 is completely unclear.
7wJhlDMNH7
EMNLP_2023
The reasons for the rejection are as follows: 1. The problem proposed in the manuscript is no doubt a great contribution, but the insights from the editing techniques and observations are not clearly presented. 2. An analysis of the performance degradation, for instance after editing with 10 instances, is not available ...
3. An appendix with the implementation details would help with reproducibility. Additionally, reporting the wall-clock time to edit one instance and the GPU consumption details would strengthen the analysis and could also help with the insights.
NIPS_2018_917
NIPS_2018
- Results on bAbI should be taken with a huge grain of salt and only serve as a unit-test. Specifically, since the bAbI corpus is generated from a simple grammar and sentences follow a strict triplet structure, it is not surprising to me that a model extracting three distinct symbol representations from a learned senten...
- p.9: Recurrent entity networks (RENs) [12] is not just an arXiv paper but has been published at ICLR 2017.
ICLR_2021_973
ICLR_2021
Clearly state your recommendation (accept or reject) with one or two key reasons for this choice. I recommend acceptance. The number of updates needed to learn realistic brain-like representations is a fair criticism of current models, and this paper demonstrates that this number can be greatly reduced, with moderate...
- Bottom of pg.4: I think 37 bits / synapse (Zador, 2019) relates to specification of the target neuron rather than specification of the connection weight. So I'm not sure it's obvious how this relates to the weight compression scheme. The target neurons are already fully specified in CORnet-S.
ACL_2017_33_review
ACL_2017
- Very close to distant supervision - Mostly poorly informed baselines General Discussion: This paper presents an extension of the vanilla LSTM model that incorporates sentiment information through regularization. The introduction presents the key claims of the paper: Previous CNN approaches are bad when no phrase-leve...
- Very close to distant supervision - Mostly poorly informed baselines
NIPS_2016_395
NIPS_2016
- I found the application to differential privacy unconvincing (see comments below) - Experimental validation was a bit light and felt preliminary. RECOMMENDATION: I think this paper should be accepted into the NIPS program on the basis of the online algorithm and analysis. However, I think the application to differenti...
5) It's not clear how to use the private algorithm given the utility bound as stated. Running the algorithm is easy: providing $\epsilon$ and $\delta$ gives a private version -- but since the $\lambda$'s are unknown, verifying if the lower bound on $\epsilon$ holds may not be possible: so while I get a differentially p...
5vJe8XKFv0
ICLR_2024
I think the proposed method is novel and can give comparable performance. What is not clear from the presentation, in terms of either theoretical justification or empirical evidence, is the benefit it can have over FNO. Please see the questions section for specifics. The writing/overall presentation can be improved. - Fo...
- Reference to Table 7 is broken - The caption of figure 1 is not coherent with the figure and/or text description of the model. These are important, as readers will get super-confused if these are not in place.
NIPS_2021_386
NIPS_2021
1. It is unclear if the proposed method will lead to any improvement for hyper-parameter search or NAS-style work on large-scale datasets, since even going from CIFAR-10 to CIFAR-100, the model's performance dropped below prior art (if #samples is beyond 1). Hence, it is unlikely that this will help tasks like NAS...
1. It is unclear if the proposed method will lead to any improvement for hyper-parameter search or NAS-style work on large-scale datasets, since even going from CIFAR-10 to CIFAR-100, the model's performance dropped below prior art (if #samples is beyond 1). Hence, it is unlikely that this will help tasks like NAS...
NIPS_2018_695
NIPS_2018
Weakness: a) There is no quantitative comparison between AE-NAM and VAE-NAM. It is necessary to answer which one, AE-NAM or VAE-NAM, should be used when one-to-many is not a concern. In other words, does the superiority of VAE-NAM come from the V or the AE? b) It is full of little mistakes or missing references. For examp...
2): uses E() but the context is discussing C(); iv. Line 174: what is mu_y? Missing \ in LaTeX? v. Line 104: grammar error?
8Ezv4kDDee
ICLR_2025
- Limited Practical Impact of Theoretical Formulation: Section 3 introduces a theoretical framework with equations involving KL divergence and mutual information to motivate the role of task descriptions. However, these equations, especially Equation (5), are not integrated into the experiments and thus do not guide th...
- Limited Practical Impact of Theoretical Formulation: Section 3 introduces a theoretical framework with equations involving KL divergence and mutual information to motivate the role of task descriptions. However, these equations, especially Equation (5), are not integrated into the experiments and thus do not guide th...
ICLR_2022_2531
ICLR_2022
I have several concerns about the clinical utility of this task as well as the evaluation approach. - First of all, I think clarification is needed to describe the utility of the task setup. Why is the task framed as generation of the ECG report rather than framing the task as multi-label classification or slot-filling...
- What kind of tokenization is used in the model? Which Spacy tokenizer?
ICLR_2022_2123
ICLR_2022
Strengths - The authors provide a useful extension to existing work on VAEs, which appears to be well-suited for the target application they have in mind. - The authors include both synthetic and empirical data as test cases for their method and compare it to a r...
- Have you applied multiple testing correction for the FID comparisons across diagnoses? If so, which? If not, you should apply it and please state that clearly in the main manuscript.
6srsYdjLnV
EMNLP_2023
Some major concerns: - Gender neutrality assumption in the source language. Though some nouns are lexically gender-neutral in English, document context could render these nouns gender-specific. Relevant questions: 1) is context taken into consideration while filtering gender-neutral English sources? 2) did the survey i...
1) is context taken into consideration while filtering gender-neutral English sources?
gInIbukM0R
ICLR_2025
Major: - Conclusions are made before results are presented (L62, L256, L266), and sometimes some claims are made with no supporting results at all (L242, L302). Furthermore, many results are presented as "validating our hypothesis". I would suggest removing this hypothesis-based presentation, and focusing on an analysis o...
- Claims of significance should be backed by statistical tests.
Minor:
- Repeated reference for (Li et al., 2023)
- Remove "intuitively" in L251 if no intuition is provided.
ICLR_2023_1490
ICLR_2023
Weakness: 1. The paper provides much about the experimental results, while the content on the method and motivation seems insufficient. 2. Not many insights have been given as to why spiking self-attention should be designed in this way. 3. Other concerns are detailed in the Summary.
2. Not many insights have been given as to why spiking self-attention should be designed in this way.
QVVSb0GMXK
ICLR_2024
1. The overall contributions (i.e., Fig. 3b and the discussion below it on page 5) appear somewhat one-dimensional and lack the significance to make this work distinct. The essence of the proposed NME appears to be a straightforward (and brute-force) rescaling by considering all potential factors (i.e., k). 2. Several c...
1. The overall contributions (i.e., Fig. 3b and the discussion below it on page 5) appear somewhat one-dimensional and lack the significance to make this work distinct. The essence of the proposed NME appears to be a straightforward (and brute-force) rescaling by considering all potential factors (i.e., k).
ARR_2022_15_review
ARR_2022
- The modeling of the mixture of multiple aspects is not explored deeply enough, and thus the so-called "first to explore" (in the Introduction) sounds more like an overclaim to me. - The qualitative results (Table 5) and human evaluation both show that there are still limitations of the proposed method and the improve...
- The modeling of the mixture of multiple aspects is not explored deeply enough, and thus the so-called "first to explore" (in the Introduction) sounds more like an overclaim to me.
ICLR_2023_4713
ICLR_2023
• [Major] Though CLP achieves better attack impact compared to traditional attacks with similar or lower attack budget on average, this method requires more clients to perturb. It is not clear what the performance difference is when the traditional attack methods utilize all these clients. • [Major] In the paragraph on ...
• [Major] In the paragraph on improved resilience against defenses, the paper simply claims that a lower average budget implies better robustness. With the varying number of malicious clients, I don't think this is obvious, and I suggest the authors provide a quantitative comparison.
NIPS_2018_874
NIPS_2018
--- None of these weaknesses stand out as major and they are not ordered by importance. * Role of and relation to human judgement: Visual explanations are useless if humans do not interpret them correctly (see framework in [1]). This point is largely ignored by other saliency papers, but I would like to see it addresse...
* The presentation would be better if it presented the proposed approach as one metric (e.g., with a name), something other papers could cite and optimize for.
NIPS_2020_628
NIPS_2020
1. The model consists of spectrally normalized hidden layers to guarantee a bounded Lipschitz constant for the top NN layers, and a random-Fourier-feature-approximated Gaussian process as the last layer. The combination is new, but the overall method is not end-to-end. Thus it can be hard to balance these two components to le...
2. The main strength is the empirical performance, but the paper does not release code.
ARR_2022_299_review
ARR_2022
The main concern is the measurement of inference speed. The authors claimed that "the search complexity of decoding with refinement as consistent as that of the original decoding with beam search" (line 202), and empirically validated that in Table 1 (i.e., #Speed2.). Even with the local constraint, the model would conduct...
2. The ablation study in Section 4.1.3 should be conducted on validation sets instead of test sets (similar to Section 4.1.2). In addition, does the refinement mask in Table 2 denote randomly selecting future target words no greater than N during model training (i.e., Line 254)?
NIPS_2020_1606
NIPS_2020
- All theoretical contributions are based on an unrealistic model in Eq. 1. The proposed model over-simplifies the problem by assuming noise only on the sources and not on the sensors. - The method section is very dense and difficult to follow. - Some related works (such as DL) are excluded from experimental comparisons. - The ...
1. The proposed model over-simplifies the problem by assuming noise only on the sources and not on the sensors.
MbKRJUowYX
EMNLP_2023
1. In the introduction and the experimental analysis, the authors report the magnitude of the metric improvements (8.53% in PPL, 16.7% in Dist-2, 8.34% in Acc); it is suggested that these be reported as absolute values of the improvement rather than percentages, to reduce ambiguity. 2. The performance of emotion intens...
1. In the introduction and the experimental analysis, the authors report the magnitude of the metric improvements (8.53% in PPL, 16.7% in Dist-2, 8.34% in Acc); it is suggested that these be reported as absolute values of the improvement rather than percentages, to reduce ambiguity.
NIPS_2019_1130
NIPS_2019
weakness is the lack of focus of the discussion. I feel that too many points are scattered and there lacks a central message on the insights gained. Below are some specific questions and concerns: 1. Line 100-101: The theoretical results in [36] and also [26] do not assume that the gradient noise $Z_k$ is Gaussian. The...
4. Line 277-286: This is an interesting observation. However, I have some concerns about its validity in general settings. It is well-known that 1D SDEs with multiplicative noise can be written as a noisy gradient flow of a modified potential function, but this fails to hold in high dimensions. It appears to me that by as...
HzecOxOGAS
EMNLP_2023
- A few parts lack details. E.g., Line 220, "we sift through the financial corpora to isolate sentences that include these metrics" (how are these metrics identified?). - It would be good to run the experiments multiple times and report the mean and std.
- A few parts lack details. E.g., Line 220, "we sift through the financial corpora to isolate sentences that include these metrics" (how are these metrics identified?).
ICLR_2023_4130
ICLR_2023
Weakness: 1. The authors mention that “we focus on quantum vision transformers applied to image classification tasks”. Can the authors explain more about why previous quantum transformers are not suitable for the image classification task? What makes this work perform well in the classification task rather than in others? 2.So...
4. What are the differences between the A-Orthogonal Patch-wise scheme (Table 2) and the classical transformer? It seems that both can be formulated as Vx_i.