| paper_id | venue | focused_review | point |
|---|---|---|---|
NIPS_2020_897 | NIPS_2020 | 1. Not clear how this method can be applied outside of fully cooperative settings, as the authors claim. The authors should justify this claim theoretically or empirically, or else remove it. 2. Missing some citations to set this in context of other MARL work e.g. recent papers on self-play and population-play with res... | 2. Missing some citations to set this in context of other MARL work e.g. recent papers on self-play and population-play with respect to exploration and coordination (such as https://arxiv.org/abs/1806.10071, https://arxiv.org/abs/1812.07019). |
ICLR_2023_1645 | ICLR_2023 | 1. Can this method be used on both SEEG and EEG simultaneously? 2. It would be better to compare with other self-supervised learning methods that are not based on contrastive learning. | 2. It would be better to compare with other self-supervised learning methods that are not based on contrastive learning. |
t8cBsT9mcg | ICLR_2024 | 1. The abstract should be expanded to encompass key concepts that effectively summarize the paper's contributions. In the introduction, the authors emphasize the significance of interpretability and the challenges it poses in achieving high accuracy. By including these vital points in the abstract, the paper can provid... | 2. Regarding the abstention process, it appears to be based on a prediction probability threshold, where if the probability is lower than the threshold, the prediction is abstained? How does it differ from a decision threshold used by the models? Can the authors clarify that? |
NIPS_2021_1743 | NIPS_2021 | 1. While the paper claims the importance of the language modeling capability of pre-trained models, the authors did not conduct experiments on generation tasks that are more likely to require a well-performing language model. Experiments on word similarity and SQuAD in section 5.3 cannot really reflect the capability of lang... | 3. The comparison with Megatron is a little overrated. The performance of Megatron and COCO-LM is close to other approaches, for example, RoBERTa, ELECTRA, and DeBERTa, which are of similar sizes to COCO-LM. If the authors claim that COCO-LM is parameter-efficient, the conclusion is also applicable to the above relat... |
NIPS_2018_865 | NIPS_2018 | weakness of this paper are listed: 1) The proposed method is very similar to Squeeze-and-Excitation Networks [1], but there is no quantitative comparison to the related work. 2) There are only results on the image classification task. However, one of the successes of deep learning is that it allows people to leverage pretrain... | 4) The analysis from line 128 to 149 is not convincing enough. From the histogram shown in Fig 3, the GS-P-50 model has a smaller class selectivity score, which means GS-P-50 shares more features and ResNet-50 learns more class-specific features. And the authors hypothesize that additional context may allow the network to... |
kfFmqu3zQm | ICLR_2025 | 1. Some conclusions are not convincing. For example, the paper contends that *We believe that continuous learning with unlabeled data accumulates noise, which is detrimental to representation quality.* The results might come from the limited exploration of combination methods. In rehearsal-free continual learning, feat... | 1. Some conclusions are not convincing. For example, the paper contends that *We believe that continuous learning with unlabeled data accumulates noise, which is detrimental to representation quality.* The results might come from the limited exploration of combination methods. In rehearsal-free continual learning, feat... |
TY9mstpD02 | ICLR_2025 | - **generalizability to other models**: the proposed framework is validated using gpt-4-turbo, a costly language model, which may compromise the applicability of the framework at scale. The paper could be further improved by showing how running the experiments using a cheaper model (e.g., gpt-4o) and/or open source mod... | - **lack of meaningful baselines**: despite mentioning various model criticism techniques in Section 2, the authors limit their comparisons to simple naive baselines. For example, the authors could compare with a chain-of-thought prompting approach. |
ICLR_2022_2531 | ICLR_2022 | I have several concerns about the clinical utility of this task as well as the evaluation approach. - First of all, I think clarification is needed to describe the utility of the task setup. Why is the task framed as generation of the ECG report rather than framing the task as multi-label classification or slot-filling... | - Do you pretrain the cardiac signal representation learning model on the entire dataset or just the training set? If the entire set, how well does this generalize to settings where you don’t have the associated labels? |
NIPS_2022_2152 | NIPS_2022 | The authors clearly addressed some potential limitations of the work: 1) Some observations and subsequent design decisions might be hardware and software dependent; 2) The NAS procedure, specifically the latency-driven slimming procedure is less involved and could be a direction for future exploration. | 1) Some observations and subsequent design decisions might be hardware and software dependent; |
ICLR_2021_147 | ICLR_2021 | the empirical validation is weak. Therefore, more new models need to be compared. For more details, please refer to “Reasons for reject” Reasons for accept: 1. The structure of this paper is clear and easy to read. Specifically, the motivation of this paper is clear and the structure is well organized; the related work... | 1. The structure of this paper is clear and easy to read. Specifically, the motivation of this paper is clear and the structure is well organized; the related work is elaborated in detail; the experimental setup is complete. |
NIPS_2022_738 | NIPS_2022 | W1) The paper states that "In order to introduce epipolar constraints into attention-based feature matching while maintaining robustness to camera pose and calibration inaccuracies, we develop a Window-based Epipolar Transformer (WET), which matches reference pixels and source windows near the epipolar lines." It claim... | 1). Is the ground truth sufficiently accurate that such a small difference is actually noticeable / measurable, or is the difference due to noise or randomness in the training process? b) Similarly, there is little difference between the results reported for the ablation study in Tab. |
39n570rxyO | ICLR_2025 | This paper has weaknesses to address: * The major weakness of this paper is the extremely limited experiments section. There are many experiments, yet almost no explanation of how they're run or interpretation of the results. Most of the results are written like an advertisement, mostly just stating the method outperfo... | * Similarly, many important experimental details are missing or relegated to the Appendix, and the Appendix also includes almost no explanations or interpretations. For example, the PCA experiments in Figures 3, 7, and 8 aren't explained. |
NIPS_2017_236 | NIPS_2017 | Weakness: 1. The real applications that the proposed method can be applied to seem to be rather restricted. It seems the proposed algorithm can only be used as a fast evaluation of residual error for 'guessing' or 'predetermining' the range of Tucker ranks, not the real ranks. 2. Since the sampling size 's' depends on ... | 2. Since the sampling size 's' depends on the exponential term 2^[1/(e^2K-2)] (in Theorem 3.4), it could be very large if one requires the error tolerance 'e' to be relatively small and the order of tensor 'K' to be high. In that situation, there won't be much benefit to using this algorithm. Question: |
UaZe4SwQF2 | EMNLP_2023 | - This paper is a bit difficult to follow. There are some unclear statements, such as the motivation. - In the introduction, the summarized highlights need to be adequately elaborated, and the relevant research content of this paper needs to be detailed. - No new evaluation metrics are proposed. Only existing evaluation me... | - No new evaluation metrics are proposed. Only existing evaluation metrics are linearly combined. In the experimental analysis section, there needed to be an in-depth exploration of the reasons for these experimental results. |
NIPS_2021_1759 | NIPS_2021 | The extension from the EH model is natural. In addition, there has been literature that proves the power of FNNs from a theoretical point of view, whereas this paper fails to review this line of work. Among other works, Schmidt-Hieber (2020) gave an exact upper bound of the approximation error for FNNs involving the... | 2). The notation K is abused too: it is used both for a known kernel function (e.g., L166) and the number of layers (e.g., L176). |
NIPS_2022_2005 | NIPS_2022 | Originality: Main Result 1 relies on known formulas for low-rank matrix factorization. It is not clearly explained what are the major technical challenges, if any, in obtaining this result. Clarity: The community labels in (3) and the model (4) are such that E[X] does not have sparse columns if k is small. For this ... | 2. The weak recovery problem studied here is primarily of theoretical interest, and it is not clear if the AMP algorithm is useful for non-Gaussian problems. So practical impact may be limited. |
NIPS_2017_349 | NIPS_2017 | - The paper is not self-contained. Understandable given the NIPS format, but the supplementary is necessary to understand large parts of the main paper and allow reproducibility. I also hereby request the authors to release the source code of their experiments to allow reproduction of their results. - Use of deep-reinfo... | - Unclear whether bringing connections to human cognition makes sense. As the authors themselves state that the problem is fairly reductionist and does not allow for mechanisms like bargaining and negotiation that humans use, it's unclear what the authors mean by "Perhaps the interaction between cognitively basic adapt... |
EODzbQ2Gy4 | ICLR_2024 | - Wording is overly exaggerated in the conclusion: " ... our pioneering contributions herald a new era in robotic adaptability ... ". Word choice is a bit flamboyant in multiple places in the writing. - This paper seems to only tackle in-distribution task-transfer, where typically transfer is thought of as learning t... | - Wording is overly exaggerated in the conclusion: " ... our pioneering contributions herald a new era in robotic adaptability ... ". Word choice is a bit flamboyant in multiple places in the writing. |
Va4t6R8cGG | ICLR_2024 | - This paper does not seem to be the first work of fully end-to-end spatio-temporal localization, while TubeR has proposed to directly detect an action tubelet in a video by simultaneously performing action localization and recognition before. This weakens the novelty of this paper. The authors claim the differences wi... | - The authors need to perform ablation experiments to compare the proposed method with other methods (e.g., TubeR) in terms of the number of learnable parameters and GFLOPs. |
ICLR_2021_2804 | ICLR_2021 | are listed as follows. Strengths: The paper is easy to read, and the proposed idea is also easy to follow. Figure 1 can help the understanding of the proposed model. The proposed model does not need the manually labeled relationship between semantic knowledge and target categories, and this may further reduce the supervi... | 2) how the number of graph neural layers affects the overall performance. Overall, the paper is easy to read. The idea of integrating semantic information in few-shot classification is interesting while it has been widely explored in existing works. Given the reported results of the proposed model and lack of analyses ... |
NIPS_2018_125 | NIPS_2018 | - Some missing references and somewhat weak baseline comparisons (see below) - Writing style needs some improvement, although it is overall well written and easy to understand. Technical comments and questions: - The idea of active feature acquisition, especially in the medical domain was studied early on by Ashish Ka... | - The biggest weakness of the paper is that it does not compare to simple feature acquisition baselines like expected utility or some such measure to prove the effectiveness of the proposed approach. Writing style and other issues: |
NIPS_2021_2123 | NIPS_2021 | This paper still has some problems that I hope the authors could illustrate in a clearer way. The authors argued that they were the first to directly train deep SNNs with more than 100 layers. I don’t think this is the core contribution in this paper, because of the residual block, the spiking could be deep... | 11 is wonderful, how about other bit operations? Fig. 5a seems strange, please give more explanations. When the input is AER format, how did you deal with DVS input? If you can analyze the energy consumption as reference [15] did, this paper would be more solid. |
ICLR_2022_21 | ICLR_2022 | However, some key architectural details can be clarified further for full reproducibility and analysis. Specifically: 1. How are historical observations combined with inputs known over all time given differences in sequence lengths (L vs L+M)? The text mentions separate embedding and addition with positional encoding, ... | 1. How are historical observations combined with inputs known over all time given differences in sequence lengths (L vs L+M)? The text mentions separate embedding and addition with positional encoding, but clarifications on how the embeddings are combined and fed into the CSCM are needed. |
50RNY6uM2Q | ICLR_2025 | 1. As mentioned in the article itself, the introduction of multi-granularity and multi-scale to enhance model performance is a common approach to convolutional networks, and merely migrating this approach to the field of MLMs is hardly an innovative contribution. Some of the algorithms used in the article from object d... | 1. As mentioned in the article itself, the introduction of multi-granularity and multi-scale to enhance model performance is a common approach to convolutional networks, and merely migrating this approach to the field of MLMs is hardly an innovative contribution. Some of the algorithms used in the article from object d... |
ICLR_2023_2286 | ICLR_2023 | 1. The paper is poorly organized. It is hard to quickly get the motivations and main ideas of the proposed methods. 2. The thermal sensor and environment setting for data collection is not described in detail. From Figure 2, why is the quality of the thermal images significantly higher than RegDB and SYSU-MM01? Does the t... | 5. The sensitivity of hyper-parameters such as $m_1$, $m_2$, $\lambda$ is not discussed. In particular, their values are not specified in the paper. |
NIPS_2021_2338 | NIPS_2021 | Weakness: 1. Regarding the adaptive masking part, the authors' work is incremental, and there have been many papers on how to do feature augmentation, such as GraphCL[1], GCA[2]. The authors do not experiment with widely used datasets such as Cora, Citeseer, ArXiv, etc. And they did not compare with better baselines fo... | 3. I am concerned whether the similarity-aware positive sample selection will accelerate GNN-based encoder over-smoothing, i.e., similar nodes or graphs will be trained with features that converge excessively and discard their own unique features. In addition, whether selecting positive samples in the same dataset with... |
NIPS_2020_1185 | NIPS_2020 | - The theory (thm 2, cor 1) on the representational power of SMPs is only for simple unlabeled graphs. Is there any similar result for graphs with node and/or edge features ? - The experiments are quite limited. I wish to have seen SMPs in the context of graphs with node and edge features and on standard benchmarks use... | - is fast SMP less expressive than SMP ? I wish to have seen more discussion on the power of different architectures. |
ICLR_2023_2163 | ICLR_2023 | Fully training group labels baselines could include more recent methods such as SGDRO (non-flat version of GDRO proposed in Goel et al.). There is a misleading sentence on page 3 when describing the Waterbirds dataset: “The bird images are then modified with either a water or land background.” There is no “modification... | 6) To obtain robust results, it would have been better to evaluate the methods across different splits of train-val-test, not simply different initialisation seeds. |
NIPS_2017_631 | NIPS_2017 | 1. The main contribution of the paper is CBN. But the experimental results in the paper are not advancing the state-of-art in VQA (on the VQA dataset which has been out for a while and a lot of advancement has been made on this dataset), perhaps because the VQA model used in the paper on top of which CBN is applied is ... | 6. The first two bullets about contributions (at the end of the intro) can be combined together. |
vg55TCMjbC | EMNLP_2023 | - Although the situations are checked by human annotators, the seed situations are generated by ChatGPT. The coverage of situation types might be limited. - The types of situations/social norms (e.g., physical/psychological safety) are not clear in the main paper. - It’s a bit hard to interpret precision on NormLens-MA... | - The types of situations/social norms (e.g., physical/psychological safety) are not clear in the main paper. |
NIPS_2016_370 | NIPS_2016 | , and while the scores above are my best attempt to turn these strengths and weaknesses into numerical judgments, I think it's important to consider the strengths and weaknesses holistically when making a judgment. Below are my impressions. First, the strengths: 1. The idea to perform improper unsupervised learning is ... | 3. The paper is locally well-written and the technical presentation flows easily: I can understand the statement of each theorem without having to wade through too much notation, and the authors do a good job of conveying the gist of the proofs. Second, the weaknesses: |
NIPS_2016_221 | NIPS_2016 | weakness: 1. To my understanding, two aspects which are the keys to the segmentation performance are: (1) The local DNN evaluation of shape descriptors in terms of energy, and (2) The back-end guidance of (super)voxel agglomeration. Although the experiment showed gains of the proposed method over GALA, it is not yet clear ... | 1. I understand this paper targets a problem which somewhat differs from general segmentation problems. And I do very much appreciate its potential benefit to the neuroscience community. This is indeed a plus for the paper. However, an important question is how much this paper can really improve over the existing solut... |
ICLR_2022_2754 | ICLR_2022 | I feel the motivation of the work is confusing. I can understand the authors want to improve CQL somehow further. But it is never made clear: what the existing problems are and why they matter; is the lower bound of the existing CQL too loose? Why is improving the bound important? what is the effect you want to achiev... | 2. I expect more baselines to be compared and more domains to be tested. As I mentioned, the choices of the weighting and the way of learning density functions are not strongly motivated. In this case, I have to ask for stronger empirical results: baselines with other design choices and more domains. |
NIPS_2016_153 | NIPS_2016 | weakness of previous models. Thus I find these results novel and exciting. Modeling studies of neural responses are usually measured on two scales: a. Their contribution to our understanding of the neural physiology, architecture or any other biological aspect. b. Model accuracy, where the aim is to provide a model whic... | 1. Please define the dashed lines in Fig. 2A-B and 4B. |
ICLR_2022_2196 | ICLR_2022 | [Weakness] Modeling: The rewards are designed based on a discriminator. As we know, generative adversarial networks are not easy to train since generative networks and discriminative networks are trained alternately. In the proposed method, the policy network and the discriminator are trained alternately. I doubt if... | - Since the results are not comparable to the existing methods, there seems to be little significance to the proposed methods. |
53kW6e1uNN | ICLR_2024 | 1. Limited novelty. The paper seems like a straightforward application of existing literature, specifically the DeCorr [1] that focuses on general deep graph neural networks, in a specific application domain. The contribution of this study is mainly the transposition of DeCorr's insights into graph collaborative filter... | 1. Limited novelty. The paper seems like a straightforward application of existing literature, specifically the DeCorr [1] that focuses on general deep graph neural networks, in a specific application domain. The contribution of this study is mainly the transposition of DeCorr's insights into graph collaborative filter... |
NIPS_2017_575 | NIPS_2017 | - While the general architecture of the model is described well and is illustrated by figures, architectural details lack mathematical definition, for example multi-head attention. Why is there a split arrow in Figure 2 right, bottom right? I assume these are the inputs for the attention layer, namely query, keys, and ... | - The proposed model contains lots of hyperparameters, and the most important ones are evaluated in ablation studies in the experimental section. It would have been nice to see significance tests for the various configurations in Table 3. |
ICLR_2021_1465 | ICLR_2021 | 1. The complexity analysis is insufficient. In the draft, the authors only provide the rough overall complexity. A better way is to show the comparison between the proposed method and some other methods, including the number of model parameters and network forwarding time. 2. In the converting of point cloud to concentri... | 3. Figure 2 is a little ambiguous, where some symbols are not explained clearly. And the reviewer is curious about whether there is information redundancy and interference in the multi-sphere icosahedral discretization process. |
NIPS_2017_585 | NIPS_2017 | weakness of the paper is in the experiments: there should be more complete comparisons in computation time, and comparisons with the QMC-based methods of Yang et al. (ICML 2014). Without this the advantage of the proposed method remains unclear. - The limitation of the obtained results: The authors assume that the spectrum o... | - The limitation of the obtained results: The authors assume that the spectrum of a kernel is sub-Gaussian. This is OK, as the popular Gaussian kernels are in this class. However, another popular class of kernels such as Matérn kernels are not included, since their spectrum only decays polynomially. In this sense, the r... |
ICLR_2023_4654 | ICLR_2023 | Weakness: 1) The proposed approach is straightforward (not a demerit), and is a natural extension of DETR to few-shot, although there exist some specific mechanism designs in this paper to facilitate such an extension. However, similar ideas also can be found in existing papers such as [1], which appea... | 2) From the data in Table 4, it indicates that the unsupervised pretraining is a key factor in the performance gain. However, there is no detailed discussion on the unsupervised pretraining in the main paper, which might be a problem. In fact, compared with the ablation study of Table 5, the unsupervised pretraining is mu... |
Ie040B4nFm | EMNLP_2023 | - The proposed system seems to degrade the model in terms of BLEU scores (the system degrades in 2 out of the 3 settings). This leads me to think that while the model seems to do well on speaker-specific terms/inflections, the overall translations degrade. - How would we choose which ELM to pick (male/female)? Does thi... | - How would we choose which ELM to pick (male/female)? Does this require us to know the speaker’s gender beforehand, i.e., at inference time? This seems like a drawback as the accuracy should be calculated after using a gender detection model in the pipeline (at least in the cases where vocal traits match speaker ident... |
ARR_2022_187_review | ARR_2022 | 1. Not clear if the contributions of the paper are sufficient for a long *ACL paper. By tightening the writing and removing unnecessary details, I suspect the paper would make a nice short paper, but in its current form, the paper lacks sufficient novelty. 2. The writing is difficult to follow in many places and can be s... | 2. The writing is difficult to follow in many places and can be simplified. |
NIPS_2017_217 | NIPS_2017 | - The paper is incremental and does not have much technical substance. It just adds a new loss to [31]. - "Embedding" is an overloaded word for a scalar value that represents object ID. - The model of [31] is used in a post-processing stage to refine the detection. Ideally, the proposed model should be end-to-end witho... | - The paper is incremental and does not have much technical substance. It just adds a new loss to [31]. |
ikX6D1oM1c | ICLR_2024 | - I found Sec 5.1 and 5.2 difficult to read and I think clarity can be improved. What confused me initially was that you suggest fixing $P^*(U|x, a)$ but then the $\sup$ in Eq. 5 is also over the distributions $p(u|x, A)$. Reading it further, the sup is only for $A \neq a$ but I think clarifying that you only fix for t... | - It would also be nice to have some intuition for the proof of Theorem 1. Also, the invertible function $f^*$ would depend on the fixed $P^*$. Do certain distributions $P^*$ make it easier to determine $f^*$? In practice, how should you determine which $P^*$ to fix? |
NIPS_2018_15 | NIPS_2018 | - The hGRU architecture seems pretty ad-hoc and not very well motivated. - The comparison with state-of-the-art deep architectures may not be entirely fair. - Given the actual implementation, the link to biology and the interpretation in terms of excitatory and inhibitory connections seem a bit overstated. Conclusion: ... | - I would have expected Eqs. (7) and (10) to be analogous, but instead one uses X and the other one H^(1). Why is that? |
NIPS_2021_2024 | NIPS_2021 | below). Using the related literature on active interventions would require full identification of the underlying DAG. It is emphasized that matching only the means can be done with significantly smaller number of interventions, and this is the difference from previous works. - Identifiability in terms of Markov equival... | - Although the causal matching problem seems interesting and new, it is not well motivated. To the reviewer’s knowledge, interventions on a causal model are tied to inferring the underlying structure (it does not need to be the whole structure of the model). In this regard, it is not clear how exactly matching the mean... |
NIPS_2016_279 | NIPS_2016 | Weakness: 1. The main concern with the paper is the applicability of the model to real-world diffusion processes. Though the authors define an interesting problem with elegant solutions, it would be great if the authors could provide empirical evidence that the proposed model captures the diffusion phenomena in r... | 1. The main concern with the paper is the applicability of the model to real-world diffusion processes. Though the authors define an interesting problem with elegant solutions, it would be great if the authors could provide empirical evidence that the proposed model captures the diffusion phenomena in the real world. |
V8PhVhb4pp | ICLR_2024 | The main weaknesses of this paper are the lack of enough qualitative results and the ambiguity of the explanations. 1. In the ablation study of 4.3, only one particular qualitative example is shown to demonstrate the effectiveness of different components. This is far from being convincing. The authors should have included m... | 2. In the "bidirectional guidance" part of section 4.3 Ablation Studies, the results shown in the top row of Figure 6 seem to be totally different shapes. I understand this can happen for the 2D diffusion model. However the text also says "... and the 3D diffusion model manifests anomalies in both texture and geometric... |
ICLR_2023_1553 | ICLR_2023 | of the papers in my opinion are as follows: 1) The method is only tested on two datasets. Have the authors tried more datasets to get a better idea of the performance? 2) The code for the paper is not released. | 1) The method is only tested on two datasets. Have the authors tried more datasets to get a better idea of the performance? |
NIPS_2017_53 | NIPS_2017 | Weakness 1. When discussing related work it is crucial to mention related work on modular networks for VQA such as [A], otherwise the introduction right now seems to paint a picture that no one does modular architectures for VQA. 2. Given that the paper uses a bilinear layer to combine representations, it should menti... | 5. (*) Sec. 4.2 it is not clear how the question is being used to learn an attention on the image feature since the description under Sec. 4.2 does not match with the equation in the section. Specifically, the equation does not have any term for r^q which is the question representation. Would be good to clarify. Also it ... |
bIlnpVM4bc | ICLR_2025 | - The main contribution of combining attention with other linear mechanisms is not novel, and, as noted in the paper, a lot of alternatives exist. - A comprehensive benchmarking against existing alternatives is lacking. Comparisons are only made to their proposed variants and Sliding Window Attention in fair setups. A ... | - The main contribution of combining attention with other linear mechanisms is not novel, and, as noted in the paper, a lot of alternatives exist. |
pUOesbrlw4 | ICLR_2024 | 1. The paper is lacking a clear and precise definition of unlearning. It is important to show the definition of unlearning that you want to achieve through your algorithm. 2. The proposed algorithm is an empirical algorithm without any theoretical guarantees. It is important for unlearning papers to provide unlearning... | 7. Since the method is applied on each layer, the authors should provide a plot of how the different weights of the model move, for instance plotting the relative weight change after unlearning to see which layers are affected the most. |
NIPS_2017_415 | NIPS_2017 | Weakness: 1. From the methodology aspect, the novelty of the paper appears to be rather limited. The ENCODE part is already proposed in [10] and the incremental contribution lies in the decomposition part, which just factorizes the M_v into factor D and slices Phi_v. 2. For the experiment, I'd like to see the effect of optimize... | 1. From the methodology aspect, the novelty of the paper appears to be rather limited. The ENCODE part is already proposed in [10] and the incremental contribution lies in the decomposition part, which just factorizes the M_v into factor D and slices Phi_v. |
ICLR_2022_1653 | ICLR_2022 | [Weakness]: (1) There is a large gap in the proof of Theorem 1. (2) Missing discussion of the line of research using random matrix theory to understand the input-output Jacobian [1], which also considers the operator norm of the input-output Jacobian and draws a very similar conclusion, e.g., the squared operator norm must ... | 1.) What is the domain of the inputs? It seems they lie on the same sphere, which is not mentioned in the paper. |
jfTrsqRrpb | ICLR_2024 | 1. This paper generates candidate object regions through unsupervised segmentation methods. However, it cannot be guaranteed that these unsupervised methods can generate object regions that cover all regions. Especially when the number of categories increases, I question the performance of the unsupervised segmentation ... | 3. [A] also proposes a CLN (region proposal generation algorithm). What about the performance comparison with this work? |
NIPS_2016_321 | NIPS_2016 | #ERROR! | - The presentation is at times too equation-driven and the notation, especially in chapter 3, quite convoluted and hard to follow. An illustrative figure of the key concepts in section 3 would have been helpful. |
NIPS_2016_499 | NIPS_2016 | - The proposed method is very similar in spirit to the approach in [10]. It seems that the method in [10] can also be equipped with scoring causal predictions and the interventional data. If otherwise, why can [10] not use this side information? - The proposed method reduces the computation time drastically compared to... | - The second rule in Lemma 2, i.e., Eq. (7), and the definition of minimal conditional dependence seem to be conflicting. Taking Z' in this definition to be the empty set, we should have that x and y are independent given W, but Eq. (7) says otherwise.
7EK2hqWmvz | ICLR_2025 | 1. The paper does not clearly position itself with respect to existing retrieval-augmented methods that are used to accelerate the model’s inference. A more thorough literature review is needed to highlight how RAEE differs from and improves upon prior work.
2. While the data presented in Figure 3 is comprehensive, I notice... | 2. While the data presented in Figure 3 is comprehensive, I noticed that the visual presentation, specifically the subscripts, could be enhanced for better readability and aesthetic appeal.
NIPS_2019_757 | NIPS_2019 | Weakness 1. Online Normalization introduces two additional hyper-parameters: forward and backward decay factors. The authors use a logarithmic grid sweep to search for the best factors. This operation largely increases the training cost of Online Normalization. Question: 1. The paper mentions that Batch Normalization has th... | 1. The paper mentions that Batch Normalization has the problem of gradient bias because it uses mini-batches to estimate the real gradient distribution. In contrast, Online Normalization can be implemented locally within individual neurons without the dependency on batch size. It sounds like Online Normalization and...
ICLR_2023_2322 | ICLR_2023 | ---
W1. The authors have clearly reduced whitespace throughout the paper; equations are crammed together, captions are too close to the figures. This by itself is grounds for rejection since it effectively violates the 9-page paper limit.
W2. An important weakness that is not mentioned anywhere is that the factors A ( ... | --- W1. The authors have clearly reduced whitespace throughout the paper; equations are crammed together, captions are too close to the figures. This by itself is grounds for rejection since it effectively violates the 9-page paper limit. |
NIPS_2019_95 | NIPS_2019 | of the submission. * originality: I enjoyed reading this paper. It introduces a new and interesting twist on the secretary problem, thereby providing a stylized theoretical version capturing the main essence of the task of ranking in many online settings. Some part of the analysis also provides some novel techniques th... | * significance: In terms of significance, I believe that this work would be of interest to a small fraction of researchers within NIPS. In fact, in terms of fit, this looks more like a submission to be found at SODA.
ICLR_2022_2110 | ICLR_2022 | Weakness: 1) Although each part of the proposed method is effective, the overall algorithm is still cumbersome. It has multiple stages. In contrast, many existing pruning methods do not need fine-tuning. 2) Technical details and formulations are limited. It seems that the main novelty lies in the scheme or proc... | 2) Technical details and formulations are limited. It seems that the main novelty lies in the scheme or procedure.
NIPS_2019_1089 | NIPS_2019 | - The paper can be seen as incremental improvements on previous work that has used simple tensor products to represent multimodal data. This paper largely follows previous setups but instead proposes to use higher-order tensor products. ****************************Quality**************************** Strengths: - T... | - The concept of local interactions is not as clear as the rest of the paper. Is it local in that it refers to the interactions within a time window, or is it local in that it is within the same modality?
NIPS_2020_1228 | NIPS_2020 | - The method section does not look self-contained and lacks descriptions of some key components. In particular: * What is Eq.(9) for? Why "the SL is the negative logarithm of a polynomial in \theta" -- where is the "negative logarithm" in Eq.(9)? * Eq.(9) is not practically tractable. It looks like its practical implementation ... | - The paper claims better results in the molecule generation experiment (Table 3). However, it looks like adding the proposed constrained method actually yields lower validity and diversity.
NIPS_2019_651 | NIPS_2019 | (large relative error compared to AA on full dataset) are reported. - Clarity: The submission is well written and easy to follow, the concept of coresets is well motivated and explained. While some more implementation details could be provided (source code is intended to be provided with camera-ready version), a re-imp... | 1. Algorithm 2 provides the coreset C and the query Q consists of the archetypes z_1, …, z_k which are initialised with the FurthestSum procedure. However, it is not quite clear to me how the archetype positions are updated after initialisation. Could the authors please comment on that?
ICLR_2022_2123 | ICLR_2022 | of this submission and make suggestions for improvement:
Strengths - The authors provide a useful extension to existing work on VAEs, which appears to be well-suited for the target application they have in mind. - The authors include both synthetic and empirical data as test cases for their method and compare it to a r... | - There is important information about the empirical study missing that should be mentioned in the supplement, such as recording parameters for the MRI, preprocessing steps, and whether the resting state was recorded under an eyes-open or eyes-closed condition. A brief explanation of the harmonization technique would also be appreci...
NIPS_2019_82 | NIPS_2019 | 1. One major risk of methods that exploit relationships between action units is that the relationships can be very different across datasets (e.g. AU6 can occur both in an expression of pain and in happiness, and this co-occurrence will be very different in a positive salience dataset such as SEMAINE compared to someth... | 1. One major risk of methods that exploit relationships between action units is that the relationships can be very different across datasets (e.g. AU6 can occur both in an expression of pain and in happiness, and this co-occurrence will be very different in a positive salience dataset such as SEMAINE compared to someth...
NIPS_2020_867 | NIPS_2020 | - As someone without a linguistics background, it was at times difficult for me to follow some parts of the paper. For example, it’s not clear to me why we care about the speaker payoff and listener payoff (separate from listener accuracy), rather than just as a means to obtain higher accuracy --- is it important that the... | - I would have liked more description of the Starcraft environment (potentially in an Appendix?)
NIPS_2019_499 | NIPS_2019 | of the method. Are there any caveats to practitioners due to some violation of the assumptions given in Appendix B or for any other reasons? Clarity: the writing is highly technical and rather dense, which I understand is necessary for some parts. However, I believe the manuscript would be readable to a broader audien... | - line 47 - 48 "over-parametrization invariably overfits the data and results in worse performance": over-parameterization seems to be very helpful for supervised learning of deep neural networks in practice ... Also, I have seen a number of theoretical works showing the benefits of over-parametrisation, e.g., [1].
NIPS_2016_313 | NIPS_2016 | Weakness: 1. The proposed method consists of two major components: the generative shape model and the word parsing model. It is unclear which component contributes to the performance gain. Since the proposed approach follows the detection-parsing paradigm, it is better to evaluate on baseline detection or parsing techniques sp... | 4. It is time-consuming since the shape model is trained at the pixel level (though with sparsity via landmarks) and the model is trained independently on all font images and characters. In addition, the parsing model is a high-order factor graph with four types of factors. The processing efficiency of training and testing should be d...
uSiyu6CLPh | ICLR_2025 | * I suggest that the authors show a more intuitive figure to visualize the framework that includes the images and labels in the original dataset and also the corrected images. This will help the readers to gain more intuition for your method.
* The authors combine two existing techniques to get the framework without innovation. The adversarial attack or correction method and the domain adaptation method used by the authors were proposed by prior work. And the adopted domain adaptation method here is a very old and simple method which was proposed eight years ...
qb2QRoE4W3 | ICLR_2025 | Despite the idea being interesting, I have found some technical issues that weaken the overall soundness. I enumerate them as follows:
1. The assumption that generated URLs are always meaningfully related to the core content of the document from which the premises are to be fetched is not true by and large. It works ... | 7. The creation of the prompt dataset (for the few-shot case), together with its source, should be discussed.
CoEuk8SNI1 | EMNLP_2023 | - Very difficult to follow the motivation of this paper. And it looks like an incremental engineering paper.
- The abstract looks a little vague. For example, “However, it is difficult to fully model interaction between utterances …” What is 'interaction between utterances' and why is it difficult to model? This inform... | - Very difficult to follow the motivation of this paper. And it looks like an incremental engineering paper. |
NIPS_2020_1253 | NIPS_2020 | 1. Perhaps the most important limitation I can see is the artificial environments used. In games and especially those old Atari ones, audio events can be repeated exactly the same and it's quite easy for the network to learn to distinguish new sounds, whereas this might not be the case in more realistic environments, w... | 6. An ablation on the weighting method of the cross-entropy loss would be nice to see. The authors note for example that in Atlantis their method underperforms because "the game has repetitive background sounds". This is a scenario I'd expect the weighting might have helped remedy.
ARR_2022_52_review | ARR_2022 | 1. A critical weakness of the paper is the lack of novelty and the incremental nature of the work. The paper addresses a particular problem of column operations in designing semantic parsers for Text-to-SQL. They design a new dataset, which is a different train/test split of an existing dataset, SQUALL. The other synthetic bench... | 1. A critical weakness of the paper is the lack of novelty and the incremental nature of the work. The paper addresses a particular problem of column operations in designing semantic parsers for Text-to-SQL. They design a new dataset, which is a different train/test split of an existing dataset, SQUALL. The other synthetic bench...
NIPS_2018_901 | NIPS_2018 | Weakness: - The experiments are only done on one game environment. More experiments are necessary. - This method does not seem generalizable to other games, e.g., FPS games. People can hardly do this on realistic scenes such as driving. The static assumption is too strong. | - The experiments are only done on one game environment. More experiments are necessary.
NIPS_2020_1451 | NIPS_2020 | 1. Unlike the works of HaoChen and Sra and Nagaraj et al., this work uses the fact that all component functions f_i are mu strongly convex. 2. The authors need to explain why removing some of the assumptions like bounded variance and bounded gradients is an important contribution via solid examples. 3. The quantity sigma^... | 2. The authors need to explain why removing some of the assumptions like bounded variance and bounded gradients is an important contribution via solid examples.
NIPS_2019_900 | NIPS_2019 | -no consideration for approximate number schemes in related work. -no support for floating-point numbers. -At many points in the paper, it is not clear if the unencrypted model is a model with PAA or a model with ReLU activation. -what is TCN? the abbreviation is explained far too late in the paper -Tables in chapter 5 are overloa... | - Although the authors claim they implement ImageNet for the first time, it is very slow and accuracy is very low; "SHE needs 1 day and 2.5 days to test an ImageNet picture by AlexNet and ResNet-18, respectively" and accuracy is around 70%
NIPS_2019_165 | NIPS_2019 | of the approach and experiments or list future direction for readers. The writeup is exceptionally clear and well organized-- full marks! I have only minor feedback to improve clarity: 1. Add a few more sentences explaining the experimental setting for continual learning 2. In Fig 3, explain the correspondence between ... | 1. Add a few more sentences explaining the experimental setting for continual learning 2. In Fig 3, explain the correspondence between the learning curves and M-PHATE. Why do you want me to look at the learning curves? Does a worse-performing model always result in structural collapse? What is the accuracy number...
elMKXvhhQ9 | ICLR_2024 | 1. The paper should acknowledge related works that are pertinent to the proposed learnable data augmentation, such as [a] and [b]. It is crucial to cite and discuss the distinctions between these works and the proposed approach, providing readers with a clear understanding of the novel contributions made by this study.... | 3. While consistency training might usually be deployed on unlabeled data, I wonder if it would be beneficial to utilize labeled data for consistency training as well. Specifically, labeled data has exact labels, which might provide effective information for consistency training the model in dealing with the task of gr...
zkzf0VkiNv | ICLR_2024 | 1. Figure 2 shows that, without employing data augmentation and similarity-based regularization, the performance of CR-OSRS is comparable to RS-GM.
2. Could acceleration be achieved by incorporating entropy regularization into the optimization process?
3. It would be beneficial if the authors could provide an analysis ... | 3) The experimental part needs to be reorganized and further improved. The experimental section has a lot of content, but the experimental content listed in the main text does not highlight the superiority of the method well, so it needs to be reorganized. Based on the characteristics of the article, the experimental s... |
NIPS_2018_681 | NIPS_2018 | Weakness: However, I'm not very convinced by the experimental results and I somewhat doubt that this method would work in general and is useful in any sense. 1. The authors propose a new classification network, but I somewhat doubt that its classification error is universally as good as the standard softmax network. It is a bi... | 1. The authors propose a new classification network, but I somewhat doubt that its classification error is universally as good as the standard softmax network. It is a bit dangerous to build a new model for better detecting out-of-distribution samples, while losing its classification accuracy. Could the authors report the...
NIPS_2017_28 | NIPS_2017 | - Most importantly, the explanations are very qualitative and whenever simulation or experiment-based evidence is given, the procedures are described very minimally or not at all, and some figures are confusing, e.g. what is "sample count" in fig. 2? It would really help adding more details to the paper and/or suppleme... | - Although in principle the argument that in case of recognition lists are recalled based on items makes sense, in the most common case of recognition, old vs new judgments, new items comprise the list of all items available in memory (minus the ones seen), and it's hard to see how such an exhaustive list could be effe... |
ICLR_2023_2630 | ICLR_2023 | - The technical novelty and contributions are a bit limited. The overall idea of using a transformer to process time series data is not new, as also acknowledged by the authors. The masked prediction was also used in prior works e.g. MAE (He et al., 2022). The main contribution, in this case, is the data pre-processing... | - The experimental comparison with other methods seems to be a bit unfair. As the proposed method was pre-trained before the fine-tuning stage, it is unclear if the compared methods were also initialised with the same (or similar scale) pre-trained model. If not, as shown in Table 1, the proposed method without SSL per... |
NIPS_2017_390 | NIPS_2017 | + Intuitive and appealingly elegant method, that is simple and fast.
+ Authors provide several interpretations which draw connections to other methods and help the reader understand well.
+ Some design choices are well explained, e.g., Euclidean distance outperforms cosine for good reason.
+ Good results
- Some o... | - For the results of zero-shot learning on the CUB dataset, i.e., Table 3 page 7, the meta-data used here are "attribute". This is good for fair comparison. However, from the perspective of getting better performance, better meta-data embedding options are available. Refer to Table 1 in "Learning Deep Representation...
xrtM8r0zdU | ICLR_2025 | 1. **Limited Applicability**: While the paper claims that SGC offers a more flexible, fine-grained tradeoff, PEFT methods typically target compute-constrained scenarios, where such granular control may require extra tuning that reduces practicality. It would be beneficial to include a plot with sparsity on the x-axis a... | 1. **Limited Applicability**: While the paper claims that SGC offers a more flexible, fine-grained tradeoff, PEFT methods typically target compute-constrained scenarios, where such granular control may require extra tuning that reduces practicality. It would be beneficial to include a plot with sparsity on the x-axis a... |
NIPS_2020_341 | NIPS_2020 | - For theorem 5.1 and 5.2, is there a way to decouple the statement, i.e., separating out the optimization part and the generalization part? It would be clearer if one could give a uniform convergence guarantee first followed by how the optimization output can instantiate such uniform convergence. - In the experiments,... | - In the experiments, is it reasonable for the German and Law School datasets to have shorter training time in Gerrymandering than Independent? Since in Experiment 2, ERM and plug-in have similar performance to Kearns et al. and the main advantage is its computation time, it would be good to have the code published.