Datasets:
review_point
stringlengths 45
642
| paper_id
stringlengths 10
19
| venue
stringclasses 15
values | focused_review
stringlengths 200
10.5k
| batch
int64 2
10
| actionability
dict | actionability_label
stringclasses 5
values | actionability_label_type
stringclasses 1
value | id
int64 31
1.53k
|
|---|---|---|---|---|---|---|---|---|
- Line 226-238 seem to suggest that the authors selected sentences from raw data of these sources, but line 242-244 say these already have syntactic information. If I understand correctly, the data selected is a subset of Li et al. (2019a)’s dataset. If this is the case, I think this description can be revised, e.g. mentioning Li et al. (2019a) earlier, to make it clear and precise.
|
ARR_2022_65_review
|
ARR_2022
|
1. The paper covers little qualitative aspects of the domains, so it is hard to understand how they differ in linguistic properties. For example, I think it is vague to say that the fantasy novel is more “canonical” (line 355). Text from a novel may be similar to that from news articles in that sentences tend to be complete and contain fewer omissions, in contrast to product comments which are casually written and may have looser syntactic structures. However, novel text is also very different from news text in that it contains unusual predicates and even imaginary entities as arguments. It seems that the authors are arguing that syntactic factors are more significant in SRL performance, and the experimental results are also consistent with this. Then it would be helpful to show a few examples from each domain to illustrate how they differ structurally.
2. The proposed dataset uses a new annotation scheme that is different from that of previous datasets, which introduces difficulties of comparison with previous results. While I think the frame-free scheme is justified in this paper, the compatibility with other benchmarks is an important issue that needs to be discussed. It may be possible to, for example, convert frame-based annotations to frame-free ones. I believe this is doable because FrameNet also has the core/non-core sets of argument for each frame. It would also be better if the authors can elaborate more on the relationship between this new scheme and previous ones. Besides eliminating the frame annotation, what are the major changes to the semantic role labels?
- In Sec. 3, it is a bit confusing why there is a division of source domain and target domain. Thus, it might be useful to mention explicitly that the dataset is designed for domain transfer experiments.
- Line 226-238 seem to suggest that the authors selected sentences from raw data of these sources, but line 242-244 say these already have syntactic information. If I understand correctly, the data selected is a subset of Li et al. (2019a)’s dataset. If this is the case, I think this description can be revised, e.g. mentioning Li et al. (2019a) earlier, to make it clear and precise.
- More information about the annotators would be needed. Are they all native Chinese speakers? Do they have linguistics background?
- Were pred-wise/arg-wise consistencies used in the construction of existing datasets? I think they are not newly invented. It is useful to know where they come from.
- In the SRL formulation (Sec. 5), I am not quite sure what is “the concerned word”. Is it the predicate? Does this formulation cover the task of identifying the predicate(s), or are the predicates given by syntactic parsing results?
- From Figure 3 it is not clear to me how ZX is the most similar domain to Source. Grouping the bars by domain instead of role might be better (because we can compare the shapes). It may also be helpful to leverage some quantitative measure (e.g. cross entropy).
- How was the train/dev/test split determined? This should be noted (even if it is simply done randomly).
| 2
|
{
"annotators": [
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f",
"boda"
],
"labels": [
"5",
"5",
"5"
]
}
|
5
|
gold
| 31
|
- Table 4 needs a little more clarification, what splits are used for obtaining the ATIS numbers? I thank the authors for their response.
|
ACL_2017_726_review
|
ACL_2017
|
- Claims of being comparable to state of the art when the results on GeoQuery and ATIS do not support it. General Discussion: This is a sound work of research and could have future potential in the way semantic parsing for downstream applications is done. I was a little disappointed with the claims of “near-state-of-the-art accuracies” on ATIS and GeoQuery, which doesn’t seem to be the case (8 points difference from Liang et. al., 2011)). And I do not necessarily think that getting SOTA numbers should be the focus of the paper, it has its own significant contribution. I would like to see this paper at ACL provided the authors tone down their claims, in addition I have some questions for the authors.
- What do the authors mean by minimal intervention? Does it mean minimal human intervention, because that does not seem to be the case. Does it mean no intermediate representation? If so, the latter term should be used, being less ambiguous.
- Table 6: what is the breakdown of the score by correctness and incompleteness?
What % of incompleteness do these queries exhibit?
- What is expertise required from crowd-workers who produce the correct SQL queries? - It would be helpful to see some analysis of the 48% of user questions which could not be generated.
- Figure 3 is a little confusing, I could not follow the sharp dips in performance without paraphrasing around the 8th/9th stages. - Table 4 needs a little more clarification, what splits are used for obtaining the ATIS numbers?
I thank the authors for their response.
| 2
|
{
"annotators": [
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f",
"boda"
],
"labels": [
"5",
"5",
"5"
]
}
|
5
|
gold
| 33
|
781 "both tasks": antecedent missing The references should be checked for format, e.g. Grice, Sorower et al for capitalization, the verbnet reference for bibliographic details.
|
ACL_2017_818_review
|
ACL_2017
|
1) Many aspects of the approach need to be clarified (see detailed comments below). What worries me the most is that I did not understand how the approach makes knowledge about objects interact with knowledge about verbs such that it allows us to overcome reporting bias. The paper gets very quickly into highly technical details, without clearly explaining the overall approach and why it is a good idea.
2) The experiments and the discussion need to be finished. In particular, there is no discussion of the results of one of the two tasks tackled (lower half of Table 2), and there is one obvious experiment missing: Variant B of the authors' model gives much better results on the first task than Variant A, but for the second task only Variant A is tested -- and indeed it doesn't improve over the baseline. - General Discussion: The paper needs quite a bit of work before it is ready for publication. - Detailed comments: 026 five dimensions, not six Figure 1, caption: "implies physical relations": how do you know which physical relations it implies?
Figure 1 and 113-114: what you are trying to do, it looks to me, is essentially to extract lexical entailments (as defined in formal semantics; see e.g. Dowty 1991) for verbs. Could you please explicit link to that literature?
Dowty, David. " Thematic proto-roles and argument selection." Language (1991): 547-619.
135 around here you should explain the key insight of your approach: why and how does doing joint inference over these two pieces of information help overcome reporting bias?
141 "values" ==> "value"?
143 please also consider work on multimodal distributional semantics, here and/or in the related work section. The following two papers are particularly related to your goals: Bruni, Elia, et al. "Distributional semantics in technicolor." Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Long Papers-Volume 1. Association for Computational Linguistics, 2012.
Silberer, Carina, Vittorio Ferrari, and Mirella Lapata. " Models of Semantic Representation with Visual Attributes." ACL (1). 2013.
146 please clarify that your contribution is the specific task and approach -- commonsense knowledge extraction from language is long-standing task.
152 it is not clear what "grounded" means at this point Section 2.1: why these dimensions, and how did you choose them?
177 explain terms "pre-condition" and "post-condition", and how they are relevant here 197-198 an example of the full distribution for an item (obtained by the model, or crowd-sourced, or "ideal") would help.
Figure 2. I don't really see the "x is slower than y" part: it seems to me like this is related to the distinction, in formal semantics, between stage-level vs. individual-level predicates: when a person throws a ball, the ball is faster than the person (stage-level) but it's not true in general that balls are faster than people (individual-level).
I guess this is related to the pre-condition vs. post-condition issue. Please spell out the type of information that you want to extract.
248 "Above definition": determiner missing Section 3 "Action verbs": Which 50 classes do you pick, and you do you choose them? Are the verbs that you pick all explicitly tagged as action verbs by Levin? 306ff What are "action frames"? How do you pick them?
326 How do you know whether the frame is under- or over-generating?
Table 1: are the partitions made by frame, by verb, or how? That is, do you reuse verbs or frames across partitions? Also, proportions are given for 2 cases (2/3 and 3/3 agreement), whereas counts are only given for one case; which?
336 "with... PMI": something missing (threshold?)
371 did you do this partitions randomly?
376 "rate *the* general relationship" 378 "knowledge dimension we choose": ? ( how do you choose which dimensions you will annotate for each frame?)
Section 4 What is a factor graph? Please give enough background on factor graphs for a CL audience to be able to follow your approach. What are substrates, and what is the role of factors? How is the factor graph different from a standard graph?
More generally, at the beginning of section 4 you should give a higher level description of how your model works and why it is a good idea.
420 "both classes of knowledge": antecedent missing.
421 "object first type" 445 so far you have been only talking about object pairs and verbs, and suddenly selectional preference factors pop in. They seem to be a crucial part of your model -- introduce earlier? In any case, I didn't understand their role.
461 "also"?
471 where do you get verb-level similarities from?
Figure 3: I find the figure totally unintelligible. Maybe if the text was clearer it would be interpretable, but maybe you can think whether you can find a way to convey your model a bit more intuitively. Also, make sure that it is readable in black-and-white, as per ACL submission instructions.
598 define term "message" and its role in the factor graph.
621 why do you need a "soft 1" instead of a hard 1?
647ff you need to provide more details about the EMB-MAXENT classifier (how did you train it, what was the input data, how was it encoded), and also explain why it is an appropriate baseline.
654 "more skimp seed knowledge": ?
659 here and in 681, problem with table reference (should be Table 2). 664ff I like the thought but I'm not sure the example is the right one: in what sense is the entity larger than the revolution? Also, "larger" is not the same as "stronger".
681 as mentioned above, you should discuss the results for the task of inferring knowledge on objects, and also include results for model (B) (incidentally, it would be better if you used the same terminology for the model in Tables 1 and 2) 778 "latent in verbs": why don't you mention objects here?
781 "both tasks": antecedent missing The references should be checked for format, e.g. Grice, Sorower et al for capitalization, the verbnet reference for bibliographic details.
| 2
|
{
"annotators": [
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f",
"boda"
],
"labels": [
"5",
"5",
"5"
]
}
|
5
|
gold
| 37
|
- The abstract is written well and invokes intrigue early - could potentially be made even better if, for "evaluating with gold answers is inconsistent with human evaluation" - an example of the inconsistency, such as models get ranked differently is also given there.
|
ARR_2022_227_review
|
ARR_2022
|
1. The case made for adopting the proposed strategy for a new automated evaluation paradigm - auto-rewrite (where the questions that are not valid due to a coreference resolution failure in terms of the previous answer get their entity replaced to be made consistent with the gold conversational history) - seems weak. While the proposed strategy does seem to do better in terms of being closer to how humans evaluated the 4 models (all in the context of one specific English dataset), it is not clear how the proposed strategy - a) does better than the previously proposed strategy of using model-predicted history (auto-pred). Looking at the comparison results for different evaluations - in terms of table 1, there definitely does not seem to be much difference between the two strategies (auto-rewrite and auto-pred). In fig 5, for some (2/6) pairs, the pred-history strategy has higher agreement than the proposed auto-rewrite strategy while they are all at the same agreement for 1/6 pairs. b) gets to the fundamental problem with automated evaluation raised in the paper, which is that "when placed in realistic settings, the models never have access to the ground truth (gold answers) and are only exposed to the conversational history and the passage." The proposed strategy seems to need gold answers as well, which is incompatible with the real-world use case. The previously proposed auto-pred strategy, however, uses only the questions and the model's own predictions to form the conversational history - which seems to be more compatible with the real-world use case. In summary, it is not clear why the proposed new way of automatically evaluating CQA systems is better or should be adopted as opposed to the previously proposed automated evaluation method of using a model's predictions as the conversational history (auto-pred), and the comparison between the results for these two automated strategies seems to be a missing exploration and discussion.
Questions to the authors (which also act as suggestions): Q1. - Line 151: "four representative CQA models" - what does representative mean here? representative in what sense? In terms of types or architectures of models? This needs clarification and takes on importance because the discrepancy, in terms of how models get evaluated on human vs automated evaluation, depends on these four models in a sense. Q2. Line 196: "We noticed that the annotators are biased when evaluating the correctness of answers" - are any statistics on this available? Q3. Section 3.1: For Mechanical Turk crowdsourcing work, what was the compensation rate for the annotators? This should be mentioned, if not in the main text, then add to appendix and point to it in the main text. Also, following the work in Card et al (2020) ("With Little Power Comes Great Responsibility.") - were there any steps taken to ensure the human annotation collection study was appropriately powered? ( If not, consider noting or discussing this somewhere in the paper as it helps with understanding the validity of human experiments) Q4. Lines 264-265: "The gap between HAM and ExCorD is significant in Auto-Gold" - how is significance measured here?
Q5. Lines 360-364: "We determine whether e∗_j = e_j by checking if F1(s∗_{j,1}, s_{j,1}) > 0 .... .... as long as their first mentions have word overlap." Two questions here - 5a. It is not clear why word overlap was used and not just an exact match here? What about cases where there is some word overlap but the two entities are indeed different, and therefore, the question is invalid (in terms of coreference resolution) but deemed valid?
5b. How accurate is this invalid question detection strategy? In case this has not already been measured, perhaps a sample of instances where predicted history invalidates questions via unresolved coreference (marked by humans) can be used to then detect if the automated method catches these instances accurately. Having some idea of how well invalid question detection happens is needed to get a sense of if or how many of the invalid questions will get rewritten. Comments, suggestions, typos: - Line 031: "has the promise to revolutionize" - this should be substantiated further, seems quite vague. - Line 048: "extremely competitive performance of" - what is 'performance' for these systems? ideally be specific since, at this point in the paper, we do not know what is being measured, and 'extremely competitive' is also quite vague. - The abstract is written well and invokes intrigue early - could potentially be made even better if, for "evaluating with gold answers is inconsistent with human evaluation" - an example of the inconsistency, such as models get ranked differently is also given there. - Line 033: "With recent development of large-scale datasets" -> the* recent development, but more importantly - which languages are these datasets in? And for this overall work on CQA, the language which is focused on should be mentioned early on in the introduction and ideally in the abstract itself. - Line 147: "more modeling work has been done than in free-form question answering" - potential typo, maybe it should be "maybe more modeling work has been done 'in that'" - where that refers to extractive QA?
- Line 222: "In total, we collected 1,446 human-machine con- versations and 15,059 question-answer pairs" - suggestion: It could be reasserted here that this dataset will be released as this collection of conversations is an important resource and contribution and does not appear to have been highlighted as much as it could. - Figure 2: It is a bit unintuitive and confusing to see the two y-axes with different ranges and interpret what it means for the different model evaluations. Can the same ranges on the y-axes be used at least even if the two metrics are different? Perhaps the F1 can use the same range as Accuracy - it would mean much smaller gold bars but hopefully, still get the point across without trying to keep two different ranges in our head? Still, the two measures are different - consider making two side-by-side plots instead if that is feasible instead of both evaluations represented in the same chart. - Lines 250-252: "the absolute numbers of human evaluation are much higher than those of automatic evaluations" - saying this seems a bit suspect - what does absolute accuracy numbers being higher than F1 scores mean? They are two different metrics altogether and should probably not be compared in this manner. Dropping this, the other implications still hold well and get the point across - the different ranking of certain models, and Auto-Gold conveying a gap between two models where Human Eval does not. - Line 348: "background 4, S∗ - latex styling suggestion, add footnote marker only right after the punctuation for that renders better with latex, so - "background,\footnote{} ..." in latex.
- Footnote 4: 'empirically helpful' - should have a cite or something to back that there. - Related Work section: a suggestion that could make this section but perhaps also the broader work stronger and more interesting to a broader audience is making the connection to how this work fits with other work looking at different NLP tasks that looks at failures of the popular automated evaluation strategy or metrics failing to capture or differing significantly from how humans would evaluate systems in a real-world setting.
| 2
|
{
"annotators": [
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f",
"boda"
],
"labels": [
"5",
"5",
"5"
]
}
|
5
|
gold
| 44
|
1. Some discussions are required on the convergence of the proposed joint learning process (for RNN and CopyRNN), so that readers can understand, how the stable points in probabilistic metric space are obtained? Otherwise, it may be tough to repeat the results.
|
ACL_2017_699_review
|
ACL_2017
|
1. Some discussions are required on the convergence of the proposed joint learning process (for RNN and CopyRNN), so that readers can understand, how the stable points in probabilistic metric space are obtained? Otherwise, it may be tough to repeat the results.
2. The evaluation process shows that the current system (which extracts 1.
Present and 2. Absent both kinds of keyphrases) is evaluated against baselines (which contains only "present" type of keyphrases). Here there is no direct comparison of the performance of the current system w.r.t. other state-of-the-arts/benchmark systems on only "present" type of key phrases. It is important to note that local phrases (keyphrases) are also important for the document. The experiment does not discuss it explicitly. It will be interesting to see the impact of the RNN and Copy RNN based model on automatic extraction of local or "present" type of key phrases.
3. The impact of document size in keyphrase extraction is also an important point. It is found that the published results of [1], (see reference below) performs better than (with a sufficiently high difference) the current system on Inspec (Hulth, 2003) abstracts dataset. 4. It is reported that current system uses 527,830 documents for training, while 40,000 publications are held out for training baselines. Why are all publications not used in training the baselines? Additionally, The topical details of the dataset (527,830 scientific documents) used in training RNN and Copy RNN are also missing. This may affect the chances of repeating results.
5. As the current system captures the semantics through RNN based models. So, it would be better to compare this system, which also captures semantics. Even, Ref-[2] can be a strong baseline to compare the performance of the current system.
Suggestions to improve: 1. As, per the example, given in the Figure-1, it seems that all the "absent" type of key phrases are actually "Topical phrases". For example: "video search", "video retrieval", "video indexing" and "relevance ranking", etc.
These all define the domain/sub-domain/topics of the document. So, In this case, it will be interesting to see the results (or will be helpful in evaluating "absent type" keyphrases): if we identify all the topical phrases of the entire corpus by using tf-idf and relate the document to the high-ranked extracted topical phrases (by using Normalized Google Distance, PMI, etc.). As similar efforts are already applied in several query expansion techniques (with the aim to relate the document with the query, if matching terms are absent in document).
Reference: 1. Liu, Zhiyuan, Peng Li, Yabin Zheng, and Maosong Sun. 2009b. Clustering to find exemplar terms for keyphrase extraction. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing, pages 257–266.
2. Zhang, Q., Wang, Y., Gong, Y., & Huang, X. (2016). Keyphrase extraction using deep recurrent neural networks on Twitter. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing (pp. 836-845).
| 2
|
{
"annotators": [
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f",
"boda"
],
"labels": [
"4",
"4",
"4"
]
}
|
4
|
gold
| 46
|
- In figure 5, the y-axis label may use "Exact Match ratio" directly.
|
ARR_2022_113_review
|
ARR_2022
|
The methodology part is a little bit unclear. The author could describe clearly how the depth-first path completion really works using Figure 3. Also, I'm not sure if the ZIP algorithm is proposed by the authors and also confused about how the ZIP algorithm handles multiple sequence cases.
- Figure 2, it is not clear about "merge target". If possible, you may use a shorter sentence.
- Line 113 (right column), will the lattice graph size explode? For a larger dataset, it may impossible to just get the lattice graph, am I right? How should you handle that case?
- Algorithm 1 step 4 and 5, you may need to give the detailed steps of *isRecomb* and *doRecomb* in the appendix.
- Line 154 left, "including that it optimizes for the wrong objective". Can you clearly state what objective? why the beam search algorithm is wrong? Beam search is a greedy algorithm that can recover the best output with high probability.
- For the ZIP method, one thing unclear to me is how you combine multiple sequences by if they have different lengths of shared suffixes?
- Line 377, is BFSZIP an existing work? If so, you need to cite their work. - In figure 5, the y-axis label may use "Exact Match ratio" directly.
- Line 409, could you cite the "R2" metric?
- Appendix A, the authors state "better model score cannot result in better hypothesis". You'd better state clearly what idea hypothesis you want. " a near-optimal model score" this sentence is unclear to me, could you explain in detail?
- In line 906, it is clear from the previous papers that Beam search results lack diversity and increase the beam size does not work. Can you simplify the paragraph?
| 2
|
{
"annotators": [
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f",
"boda"
],
"labels": [
"5",
"5",
"5"
]
}
|
5
|
gold
| 51
|
- In section 2.3 the authors use Lample et al. Bi-LSTM-CRF model, it might be beneficial to add that the input is word embeddings (similarly to Lample et al.) - Figure 3, KNs in source language or in English? ( since the mentions have been translated to English). In the authors' response, the authors stated that they will correct the figure.
|
ACL_2017_71_review
|
ACL_2017
|
-The explanation of methods in some paragraphs is too detailed and there is no mention of other work and it is repeated in the corresponding method sections, the authors committed to address this issue in the final version.
-README file for the dataset [Authors committed to add README file] - General Discussion: - Section 2.2 mentions examples of DBpedia properties that were used as features. Do the authors mean that all the properties have been used or there is a subset? If the latter please list them. In the authors' response, the authors explain in more details this point and I strongly believe that it is crucial to list all the features in details in the final version for clarity and replicability of the paper.
- In section 2.3 the authors use Lample et al. Bi-LSTM-CRF model, it might be beneficial to add that the input is word embeddings (similarly to Lample et al.) - Figure 3, KNs in source language or in English? ( since the mentions have been translated to English). In the authors' response, the authors stated that they will correct the figure.
- Based on section 2.4 it seems that topical relatedness implies that some features are domain dependent. It would be helpful to see how much domain dependent features affect the performance. In the final version, the authors will add the performance results for the above mentioned features, as mentioned in their response.
- In related work, the authors make a strong connection to Sil and Florian work where they emphasize the supervised vs. unsupervised difference. The proposed approach is still supervised in the sense of training, however the generation of training data doesn’t involve human interference
| 2
|
{
"annotators": [
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f",
"boda"
],
"labels": [
"5",
"5",
"5"
]
}
|
5
|
gold
| 56
|
- Two things must be improved in the presentation of the model: (1) What is the pooling method used for embedding features (line 397)? and (2) Equation (7) in line 472 is not clear enough: is E_i the random variable representing the *type* of AC i, or its *identity*? Both are supposedly modeled (the latter by feature representation), and need to be defined. Furthermore, it seems like the LHS of equation (7) should be a conditional probability.
|
ACL_2017_483_review
|
ACL_2017
|
- 071: This formulation of argumentation mining is just one of several proposed subtask divisions, and this should be mentioned. For example, in [1], claims are detected and classified before any supporting evidence is detected.
Furthermore, [2] applied neural networks to this task, so it is inaccurate to say (as is claimed in the abstract of this paper) that this work is the first NN-based approach to argumentation mining.
- Two things must be improved in the presentation of the model: (1) What is the pooling method used for embedding features (line 397)? and (2) Equation (7) in line 472 is not clear enough: is E_i the random variable representing the *type* of AC i, or its *identity*? Both are supposedly modeled (the latter by feature representation), and need to be defined. Furthermore, it seems like the LHS of equation (7) should be a conditional probability.
- There are several unclear things about Table 2: first, why are the three first baselines evaluated only by macro f1 and the individual f1 scores are missing?
This is not explained in the text. Second, why is only the "PN" model presented? Is this the same PN as in Table 1, or actually the Joint Model? What about the other three?
- It is not mentioned which dataset the experiment described in Table 4 was performed on.
General Discussion: - 132: There has to be a lengthier introduction to pointer networks, mentioning recurrent neural networks in general, for the benefit of readers unfamiliar with "sequence-to-sequence models". Also, the citation of Sutskever et al. (2014) in line 145 should be at the first mention of the term, and the difference with respect to recursive neural networks should be explained before the paragraph starting in line 233 (tree structure etc.).
- 348: The elu activation requires an explanation and citation (still not enough well-known).
- 501: "MC", "Cl" and "Pr" should be explained in the label.
- 577: A sentence about how these hyperparameters were obtained would be appropriate.
- 590: The decision to do early stopping only by link prediction accuracy should be explained (i.e. why not average with type accuracy, for example?).
- 594: Inference at test time is briefly explained, but would benefit from more details.
- 617: Specify what the length of an AC is measured in (words?).
- 644: The referent of "these" in "Neither of these" is unclear.
- 684: "Minimum" should be "Maximum".
- 694: The performance w.r.t. the amount of training data is indeed surprising, but other models have also achieved almost the same results - this is especially surprising because NNs usually need more data. It would be good to say this.
- 745: This could alternatively show that structural cues are less important for this task.
- Some minor typos should be corrected (e.g. "which is show", line 161).
[1] Rinott, Ruty, et al. "Show Me Your Evidence-an Automatic Method for Context Dependent Evidence Detection." EMNLP. 2015.
[2] Laha, Anirban, and Vikas Raykar. " An Empirical Evaluation of various Deep Learning Architectures for Bi-Sequence Classification Tasks." COLING. 2016.
| 2
|
{
"annotators": [
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f",
"boda"
],
"labels": [
"5",
"5",
"5"
]
}
|
5
|
gold
| 64
|
1. The paper raises two hypotheses in lines 078-086 about multilinguality and country/language-specific bias. While I don't think the hypotheses are phrased optimally (could they be tested as given?), their underlying ideas are valuable. However, the paper actually does not really study these hypotheses (nor are they even mentioned/discussed again). I found this not only misleading, but I would have also liked the paper to go deeper into the respective topics, at least to some extent.
|
ARR_2022_215_review
|
ARR_2022
|
1. The paper raises two hypotheses in lines 078-086 about multilinguality and country/language-specific bias. While I don't think the hypotheses are phrased optimally (could they be tested as given?), their underlying ideas are valuable. However, the paper actually does not really study these hypotheses (nor are they even mentioned/discussed again). I found this not only misleading, but I would have also liked the paper to go deeper into the respective topics, at least to some extent. 2. It seemed a little disappointing to me that the 212 new pairs have _not_ been translated to English (if I'm not mistaken). To really make this dataset a bilingual resource, it would be good to have all pairs in both languages. In the given way, it seems that ultimately only the French version was of interest to the study - unlike it is claimed initially.
3. Almost no information about the reliability of the translations and the annotations is given (except for the result of the translation checking in line 285), which seems unsatisfying to me. To assess the translations, more information about the language/translation expertise of the authors would be helpful (I don't think this violates anonymity). For the annotations, I would expect some measure of inter-annotator agreement.
4. The metrics in Tables 4 and 5 need explanation, in order to make the paper self-contained. Without going to the original paper on CrowS-pairs, the values are barely understandable. Also, information on the values ranges should be given as well as whether higher or lower values are better.
- 066: social contexts >> I find this term misleading here, since the text seems to be about countries/language regions.
- 121: Deviding 1508 into 16*90 = 1440 cases cannot be fully correct. What about the remaining 68 cases?
- 241: It would also be good to state the maximum number of tasks done by any annotator.
- Table 3: Right-align the numeric columns.
- Table 4 (1): Always use the same number of decimal places, for example 61.90 instead of 61.9 to match the other values. This would increase readability. - Table 4 (2): The table exceeds the page width; that needs to be fixed.
- Tables 4+5 (1): While I undersand the layout problem, the different approaches would be much easier to compare if tables and columns were flipped (usually, one approach per row, one metric per column). - Tables 4+5 (2): What's the idea of showing the run-time? I didn't see for what this is helpful.
- 305/310: Marie/Mary >> I think these should be written the same.
- 357: The text speaks of "53", but I believe the value "52.9" from Table 4 is meant. In my view, such rounding makes understanding harder rather than helping.
- 575/577: "1/" and "2/" >> Maybe better use "(1)" and "(2)"; confused me first.
| 2
|
{
"annotators": [
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f",
"boda"
],
"labels": [
"4",
"4",
"4"
]
}
|
4
|
gold
| 65
|
2. Would the use of feature engineering help in improving the performance? Uto et al. (2020)'s system reaches a QWK of 0.801 by using a set of hand-crafted features. Perhaps using Uto et al. (2020)'s same feature set could also improve the results of this work.
|
ARR_2022_121_review
|
ARR_2022
|
1. The writing needs to be improved. Structurally, there should be a "Related Work" section which would inform the reader that this is where prior research has been done, as well as what differentiates the current work with earlier work. A clear separation between the "Introduction" and "Related Work" sections would certainly improve the readability of the paper.
2. The paper does not compare the results with some of the earlier research work from 2020. While the authors have explained their reasons for not doing so in the author response along the lines of "Those systems are not state-of-the-art", they have compared the results to a number of earlier systems with worse performances (Eg. Taghipour and Ng (2016)).
Comments: 1. Please keep a separate "Related Work" section. Currently "Introduction" section of the paper reads as 2-3 paragraphs of introduction, followed by 3 bullet points of related work and again a lot of introduction. I would suggest that you shift those 3 bullet points ("Traditional AES", "Deep Neural AES" and "Pre-training AES") to the Related work section.
2. Would the use of feature engineering help in improving the performance? Uto et al. (2020)'s system reaches a QWK of 0.801 by using a set of hand-crafted features. Perhaps using Uto et al. (2020)'s same feature set could also improve the results of this work.
3. While the out of domain experiment is pre-trained on other prompts, it is still fine-tuned during training on the target prompt essays. Typos: 1. In Table #2, Row 10, the reference for R2BERT is Yang et al. (2020), not Yang et al. (2019).
Missing References: 1. Panitan Muangkammuen and Fumiyo Fukumoto. " Multi-task Learning for Automated Essay Scoring with Sentiment Analysis". 2020. In Proceedings of the AACL-IJCNLP 2020 Student Research Workshop.
2. Sandeep Mathias, Rudra Murthy, Diptesh Kanojia, Abhijit Mishra, Pushpak Bhattacharyya. 2020. Happy Are Those Who Grade without Seeing: A Multi-Task Learning Approach to Grade Essays Using Gaze Behaviour. In Proceedings of the 2020 AACL-IJCNLP Main Conference.
| 2
|
{
"annotators": [
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f",
"boda"
],
"labels": [
"5",
"5",
"5"
]
}
|
5
|
gold
| 66
|
2. It would be nice to include the hard prompt baseline in Table 1 to see the increase in performance of each method.
|
gybvlVXT6z
|
EMNLP_2023
|
1. I feel that paper has insufficiant baseline. For example, CoCoOp (https://arxiv.org/abs/2203.05557) is a widely used baseline for prompt tuning research in CLIP. Moreover, it would be nice to include the natural data shift setting as in most other prompt tuning papers for CLIP.
2. It would be nice to include the hard prompt baseline in Table 1 to see the increase in performance of each method.
3. I think the performance drop seen with respect to the prompt length (Figure 4) is a major limitation of this approach. For example, this phenomenon might make it so that using just a general hard prompt of length 4 ('a photo of a') would outperform the CBBT with length 4 or even CBBT with length 1.
| 3
|
{
"annotators": [
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f",
"boda"
],
"labels": [
"5",
"5",
"5"
]
}
|
5
|
gold
| 75
|
1. The experimental comparisons are not enough. Some methods like MoCo and SimCLR also test the results with wider backbones like ResNet50 (2×) and ResNet50 (4×). It would be interesting to see the results of proposed InvP with these wider backbones.
|
NIPS_2020_295
|
NIPS_2020
|
1. The experimental comparisons are not enough. Some methods like MoCo and SimCLR also test the results with wider backbones like ResNet50 (2×) and ResNet50 (4×). It would be interesting to see the results of proposed InvP with these wider backbones. 2. Some methods use epochs and pretrain epochs as 200, while the reported InvP uses 800 epochs. What are the results of InvP with epochs as 200? It would be more clear after adding these results into the tables. 3. The proposed method adopts memory bank to update vi, as detailed in the beginning of Sec.3. What the results would be when adopting momentum queue and current batch of features? As the results of SimCLR and MoCo are better than InsDis, it would be nice to have those results.
| 3
|
{
"annotators": [
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f",
"boda"
],
"labels": [
"5",
"5",
"5"
]
}
|
5
|
gold
| 77
|
2: the callout to table 5 should go to table 3, instead. Page 7, section 5, last par.: figure 6 callout is not directing properly
|
ICLR_2023_977
|
ICLR_2023
|
the evaluation section has 2 experiments, but only 2 very insightful detailed examples. The paper can use a few more examples to illustrate more differences of the output sequences. This would allow the reader to internalize how the non-monotonicity in a deeper way.
Questions: In details, how does the decoding algorithm actually avoid repetitions? In other way, how does other models actually degrade validation perplexity using their decoding algorithm?
Typos, Grammar, etc.: Page 7, section 4.2, par. 2: the callout to table 5 should go to table 3, instead. Page 7, section 5, last par.: figure 6 callout is not directing properly
| 3
|
{
"annotators": [
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f",
"boda"
],
"labels": [
"5",
"5",
"5"
]
}
|
5
|
gold
| 83
|
2 More analysis and comments are recommended on the performance trending of increasing the number of parameters for ViT (DeiT) in the Figure 3. I disagree with authors' viewpoint that "Both CNNs and ViTs seem to benefit similarly from increased model capacity". In the Figure 3, the DeiT-B models does not outperform DeiT-T in APTOS2019, and it does not outperform DeiT-S on APTOS2019, ISIC2019 and CheXpert (0.1% won't be significant). However, CNNs can give more almost consistent model improvements as the capacity goes up except on the ISIC2019.
|
ICLR_2022_1794
|
ICLR_2022
|
1 Medical imaging are often obtained in 3D volumes, not only limited to 2D images. So experiments should include the 3D volume data as well for the general community, rather than all on 2D images. And the lesion detection is another important task for the medical community, which has not been studied in this work.
2 More analysis and comments are recommended on the performance trending of increasing the number of parameters for ViT (DeiT) in the Figure 3. I disagree with authors' viewpoint that "Both CNNs and ViTs seem to benefit similarly from increased model capacity". In the Figure 3, the DeiT-B models does not outperform DeiT-T in APTOS2019, and it does not outperform DeiT-S on APTOS2019, ISIC2019 and CheXpert (0.1% won't be significant). However, CNNs can give more almost consistent model improvements as the capacity goes up except on the ISIC2019.
3 On the segmentation mask involved with cancer on CSAW-S, the segmentation results of DEEPLAB3-DEIT-S cannot be concluded as better than DEEPLAB3-RESNET50. The implication that ViTs outperform CNNs in this segmentation task cannot be validly drawn from an 0.2% difference with larger variance.
Questions: 1 For the grid search of learning rate, is it done on the validation set?
Minor problems: 1 The n number for Camelyon dataset in Table 1 is not consistent with the descriptions in the text in Page 4.
| 3
|
{
"annotators": [
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f",
"boda"
],
"labels": [
"5",
"5",
"5"
]
}
|
5
|
gold
| 85
|
- The paper is not difficult to follow, but there are several places that are may cause confusion. (listed in point 3).
|
ICLR_2022_3352
|
ICLR_2022
|
+ The problem studied in this paper is definitely important in many real-world applications, such as robotics decision-making and autonomous driving. Discovering the underlying causation is important for agents to make reasonable decisions, especially in dynamic environments.
+ The method proposed in this paper is interesting and technically correct. Intuitively, using GRU to extract sequential information helps capture the changes of causal graphs.
- The main idea of causal discovery by sampling intervention set and causal graphs for masking is similar to DCDI [1]. This paper is more like using DCDI in dynamic environments, which may limit the novelty of this paper.
- The paper is not difficult to follow, but there are several places that are may cause confusion. (listed in point 3).
- The contribution of this paper is not fully supported by experiments.
Main Questions
(1) During the inference stage, why use samples instead of directly taking the argmax of Bernoulli distribution? How many samples are required? Will this sampling cause scalability problems?
(2) In the experiment part, the authors only compare with one method (V-CDN). Is it possible to compare DYNOTEARS with the proposed method?
(3) The authors mention that there is no ground truth to evaluate the causal discovery task. I agree with this opinion since the real world does not provide us causal graphs. However, the first experiment is conducted on a synthetic dataset, where I believe it is able to obtain the causation by checking collision conditions. In other words, I am not convinced only by the prediction results. Could the author provide the learned causal graphs and intervention sets and compare them with ground truth even on a simple synthetic dataset?
Clarification questions
(1) It seems the citation of NOTEARS [2] is wrongly used for DYNOTEARS [3]. This citation is important since DYNOTEARS is one of the motivations of this paper.
(2) ICM part in Figure 3 is not clear. How is the Intervention set I is used? If I understand correctly, function f
is a prediction model conditioned on history frames.
(3) The term “Bern” in equation (3) is not defined. I assume it is the Bernoulli distribution. Then what does the symbol B e r n ( α t , β t ) mean?
(4) According to equation (7), each node j
has its own parameters ϕ j t and ψ j t
. Could the authors explain why the parameters are related to time?
(5) In equation (16), the authors mention the term “ secondary optimization”. I can’t find any reference for it. Could the author provide more information?
Minor things:
(1) In the caption of Figure 2, the authors say “For nonstationary causal models, (c)….”. But in figure (c) belongs to stationary methods.
[1] Brouillard P, Lachapelle S, Lacoste A, et al. Differentiable causal discovery from interventional data[J]. arXiv preprint arXiv:2007.01754, 2020.
[2] Zheng X, Aragam B, Ravikumar P, et al. Dags with no tears: Continuous optimization for structure learning[J]. arXiv preprint arXiv:1803.01422, 2018.
[3] Pamfil R, Sriwattanaworachai N, Desai S, et al. Dynotears: Structure learning from time-series data[C]//International Conference on Artificial Intelligence and Statistics. PMLR, 2020: 1595-1605.
| 3
|
{
"annotators": [
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f",
"boda"
],
"labels": [
"1",
"1",
"1"
]
}
|
1
|
gold
| 86
|
3: To further backup the proposed visual reference resolution model works in real dataset, please also conduct ablation study on visDial dataset. One experiment I'm really interested is the performance of ATT(+H) (in figure 4 left). What is the result if the proposed model didn't consider the relevant attention retrieval from the attention memory.
|
NIPS_2017_356
|
NIPS_2017
|
]
My major concerns about this paper is the experiment on visual dialog dataset. The authors only show the proposed model's performance on discriminative setting without any ablation studies. There is not enough experiment result to show how the proposed model works on the real dataset. If possible, please answer my following questions in the rebuttal.
1: The authors claim their model can achieve superior performance having significantly fewer parameters than baseline [1]. This is mainly achieved by using a much smaller word embedding size and LSTM size. To me, it could be authors in [1] just test model with standard parameter setting. To backup this claim, is there any improvements when the proposed model use larger word embedding, and LSTM parameters?
2: There are two test settings in visual dialog, while the Table 1 only shows the result on discriminative setting. It's known that discriminative setting can not apply on real applications, what is the result on generative setting?
3: To further backup the proposed visual reference resolution model works in real dataset, please also conduct ablation study on visDial dataset. One experiment I'm really interested is the performance of ATT(+H) (in figure 4 left). What is the result if the proposed model didn't consider the relevant attention retrieval from the attention memory.
| 3
|
{
"annotators": [
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f",
"boda"
],
"labels": [
"5",
"5",
"5"
]
}
|
5
|
gold
| 92
|
- As mentioned in the previous question, the distribution of videos of different lengths within the benchmark is crucial for the assessment of reasoning ability and robustness, and the paper does not provide relevant explanations. The authors should include a table showing the distribution of video lengths across the dataset, and explain how they ensured a balanced representation of different video lengths across the 11 categories.
|
BTr3PSlT0T
|
ICLR_2025
|
- I express skepticism about whether the number of videos in the benchmark can achieve a robust assessment. The CVRR-ES benchmark includes only 214 videos, with the shortest video being just 2 seconds. Upon reviewing several videos from the anonymous link, I noticed a significant proportion of short videos. I question whether such short videos can adequately cover 11 categories. Moreover, current work that focuses solely on designing Video-LLMs, without specifically constructing evaluation benchmarks, provides a much larger number of assessment videos than the 214 included in CVRR-ES, for example, Tarsier [1].
- As mentioned in the previous question, the distribution of videos of different lengths within the benchmark is crucial for the assessment of reasoning ability and robustness, and the paper does not provide relevant explanations. The authors should include a table showing the distribution of video lengths across the dataset, and explain how they ensured a balanced representation of different video lengths across the 11 categories.
- In the motivation, it is mentioned that the goal is to build human-centric AI systems. Does the paper's reflection on this point merely consist of providing a human baseline? I think that offering more fine-grained visual examples would be more helpful for human-AI comparisons.
- I think that the contribution of the DSCP is somewhat overstated and lacks novelty. Such prompt engineering-based methods have already been applied in many works for data generation, model evaluation, and other stages. The introduction and ablation experiments of this technology in the paper seem redundant.
- The discussion on DSCP occupies a significant portion of the experimental analysis. I think that the current analysis provided in the paper lacks insight and does not fully reflect the value of CVRR-ES, especially in terms of human-machine comparison.
- The phrase should be "there exist a few limitations" instead of "there exist few limitations" in line 520.
- The paper does not provide prompt templates for all the closed-source and open-source Video-LLMs used, which will influence the reproducibility.
The problems discussed in this paper are valuable, but the most crucial aspects of benchmark construction and evaluation are not entirely convincing. Instead, a significant amount of space is dedicated to introducing the DSCP method. I don't think it meets the acceptance standards of ICLR yet. I will consider modifying the score based on the feedback from other reviewers and the authors' responses. ***
[1] Wang J, Yuan L, Zhang Y. Tarsier: Recipes for Training and Evaluating Large Video Description Models[J]. arXiv preprint arXiv:2407.00634, 2024.
| 3
|
{
"annotators": [
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f",
"boda"
],
"labels": [
"5",
"5",
"5"
]
}
|
5
|
gold
| 100
|
8.L290: it would be good to clarify how the implemented billinear layer is different from other approaches which do billinear pooling. Is the major difference the dimensionality of embeddings? How is the billinear layer swapped out with the hadarmard product and MCB approaches? Is the compression of the representations using Equation. (3) still done in this case? Minor Points:
|
NIPS_2017_53
|
NIPS_2017
|
Weakness
1. When discussing related work it is crucial to mention related work on modular networks for VQA such as [A], otherwise the introduction right now seems to paint a picture that no one does modular architectures for VQA.
2. Given that the paper uses a billinear layer to combine representations, it should mention in related work the rich line of work in VQA, starting with [B] which uses billinear pooling for learning joint question image representations. Right now the manner in which things are presented a novice reader might think this is the first application of billinear operations for question answering (based on reading till the related work section). Billinear pooling is compared to later.
3. L151: Would be interesting to have some sort of a group norm in the final part of the model (g, Fig. 1) to encourage disentanglement further.
4. It is very interesting that the approach does not use an LSTM to encode the question. This is similar to the work on a simple baseline for VQA [C] which also uses a bag of words representation.
5. (*) Sec. 4.2 it is not clear how the question is being used to learn an attention on the image feature since the description under Sec. 4.2 does not match with the equation in the section. Speficially the equation does not have any term for r^q which is the question representation. Would be good to clarify. Also it is not clear what \sigma means in the equation. Does it mean the sigmoid activation? If so, multiplying two sigmoid activations (with the \alpha_v computation seems to do) might be ill conditioned and numerically unstable.
6. (*) Is the object detection based attention being performed on the image or on some convolutional feature map V \in R^{FxWxH}? Would be good to clarify. Is some sort of rescaling done based on the receptive field to figure out which image regions belong correspond to which spatial locations in the feature map?
7. (*) L254: Trimming the questions after the first 10 seems like an odd design choice, especially since the question model is just a bag of words (so it is not expensive to encode longer sequences).
8. L290: it would be good to clarify how the implemented billinear layer is different from other approaches which do billinear pooling. Is the major difference the dimensionality of embeddings? How is the billinear layer swapped out with the hadarmard product and MCB approaches? Is the compression of the representations using Equation. (3) still done in this case?
Minor Points:
- L122: Assuming that we are multiplying in equation (1) by a dense projection matrix, it is unclear how the resulting matrix is expected to be sparse (arenât we mutliplying by a nicely-conditioned matrix to make sure everything is dense?).
- Likewise, unclear why the attended image should be sparse. I can see this would happen if we did attention after the ReLU but if sparsity is an issue why not do it after the ReLU?
Perliminary Evaluation
The paper is a really nice contribution towards leveraging traditional vision tasks for visual question answering. Major points and clarifications for the rebuttal are marked with a (*).
[A] Andreas, Jacob, Marcus Rohrbach, Trevor Darrell, and Dan Klein. 2015. âNeural Module Networks.â arXiv [cs.CV]. arXiv. http://arxiv.org/abs/1511.02799.
[B] Fukui, Akira, Dong Huk Park, Daylen Yang, Anna Rohrbach, Trevor Darrell, and Marcus Rohrbach. 2016. âMultimodal Compact Bilinear Pooling for Visual Question Answering and Visual Grounding.â arXiv [cs.CV]. arXiv. http://arxiv.org/abs/1606.01847.
[C] Zhou, Bolei, Yuandong Tian, Sainbayar Sukhbaatar, Arthur Szlam, and Rob Fergus. 2015. âSimple Baseline for Visual Question Answering.â arXiv [cs.CV]. arXiv. http://arxiv.org/abs/1512.02167.
| 3
|
{
"annotators": [
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f",
"boda"
],
"labels": [
"5",
"5",
"5"
]
}
|
5
|
gold
| 101
|
1. The novelty is limited. The proposed method is too similar to other attentional modules proposed in previous works [1, 2, 3]. The group attention design seems to be related to ResNeSt [4] but it is not discussed in the paper. Although these works did not evaluate their performance on object detection and instance segmentation, the overall structures between these modules and the one that this paper proposed are pretty similar.
|
ICLR_2023_3203
|
ICLR_2023
|
1. The novelty is limited. The proposed method is too similar to other attentional modules proposed in previous works [1, 2, 3]. The group attention design seems to be related to ResNeSt [4] but it is not discussed in the paper. Although these works did not evaluate their performance on object detection and instance segmentation, the overall structures between these modules and the one that this paper proposed are pretty similar.
2. Though the improvement is consistent for different frameworks and tasks, the relative gains are not very strong. For most of the baselines, the proposed methods can only achieve just about 1% gain on a relative small backbone ResNet-50. As the proposed method introduces global pooling into its structure, it might be easy to improve a relatively small backbone since it is with a smaller receptive field. I suspect whether the proposed method still works well on large backbone models like Swin-B or Swin-L.
3. Some of the baseline results do not matched with their original paper. I roughly checked the original Mask2former paper but the performance reported in this paper is much lower than the one reported in the original Mask2former paper. For example, for panoptic segmentation, Mask2former reported 51.9 but in this paper it's 50.4, and the AP for instance segmentation reported in the original paper is 43.7 but here what reported is 42.4.
Meanwhile, there are some missing references about panoptic segmentation that should be included in this paper [5, 6]. Reference
[1] Chen, Yunpeng, et al. "A^ 2-nets: Double attention networks." NeurIPS 2018.
[2] Cao, Yue, et al. "Gcnet: Non-local networks meet squeeze-excitation networks and beyond." T-PAMI 2020
[3] Yinpeng Chen, et al. Dynamic convolution: Attention over convolution kernels. CVPR 2020.
[4] Zhang, Hang, et al. "Resnest: Split-attention networks." CVPR workshop 2022.
[5] Zhang, Wenwei, et al. "K-net: Towards unified image segmentation." Advances in Neural Information Processing Systems 34 (2021): 10326-10338.
[6] Wang, Huiyu, et al. "Max-deeplab: End-to-end panoptic segmentation with mask transformers." CVPR 2021
| 3
|
{
"annotators": [
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f",
"boda"
],
"labels": [
"1",
"1",
"1"
]
}
|
1
|
gold
| 103
|
1. Fig. 3 e. Since the preactivation values of two networks are the same membrane potentials, their output cosine similarity will be very high. Why not directly illustrate the results of the latter loss term of Eqn 13?
|
ICLR_2023_2283
|
ICLR_2023
|
1. 1. The symbols in Section 4.3 are not very clearly explained. 2. This paper only experiments on the very small time steps (e.g.1、2) and lack of some experiments on slightly larger time steps (e.g. 4、6) to make better comparisons with other methods. I think it is necessary to analyze the impact of the time step on the method proposed in this paper. 3. Lack of experimental results on ImageNet to verify the method.
Questions: 1. Fig. 3 e. Since the preactivation values of two networks are the same membrane potentials, their output cosine similarity will be very high. Why not directly illustrate the results of the latter loss term of Eqn 13? 2. Is there any use of recurrent connections in the experiments in this paper? Apart from appendix A.5, I do not see the recurrent connections.
| 3
|
{
"annotators": [
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f",
"boda"
],
"labels": [
"5",
"5",
"5"
]
}
|
5
|
gold
| 104
|
- This paper investigates the issue of robustness in video action recognition, but it lacks comparison with test-time adaptation (TTA) methods, such as [A-B]. These TTA methods also aim to adapt to out-of-distribution data when the input data is disturbed by noise. Although these TTA methods mainly focus on updating model parameters, and this paper primarily focuses on adjusting the input data, how to prove that data processing is superior to model parameter adjustment? I believe a comparison should be made based on experimental results.
|
eI6ajU2esa
|
ICLR_2024
|
- This paper investigates the issue of robustness in video action recognition, but it lacks comparison with test-time adaptation (TTA) methods, such as [A-B]. These TTA methods also aim to adapt to out-of-distribution data when the input data is disturbed by noise. Although these TTA methods mainly focus on updating model parameters, and this paper primarily focuses on adjusting the input data, how to prove that data processing is superior to model parameter adjustment? I believe a comparison should be made based on experimental results.
- Under noisy conditions, many TTA methods can achieve desirable results, while the improvement brought by this paper's method is relatively low.
- In appendix A.2.1, under noisy conditions, the average performance improvement brought by this paper's method is very low and can even be counterproductive under certain noise conditions. Does this indicate an issue with the approach of changing input data?
- How to verify the reliability of the long-range photometric consistency in section 3.3? Are there any ablation study results reflecting the performance gain brought by each part?
- The explanation of the formula content in Algorithm 1 in the main body is not clear enough.
[A] Temporal Coherent Test-Time Optimization for Robust Video Classification. ICLR23
[B] Video Test-Time Adaptation for Action Recognition. CVPR23
| 3
|
{
"annotators": [
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f",
"boda"
],
"labels": [
"5",
"5",
"5"
]
}
|
5
|
gold
| 105
|
4. Section 3.2.1: The first expression for J ( θ ) is incorrect, which should be Q ( s t 0 , π θ ( s t 0 ) ) .
|
ICLR_2021_863
|
ICLR_2021
|
Weakness 1. The presentation of the paper should be improved. Right now all the model details are placed in the appendix. This can cause confusion for readers reading the main text. 2. The necessity of using techniques includes Distributional RL and Deep Sets should be explained more thoroughly. From this paper, the illustration of Distributional RL lacks clarity. 3. The details of state representation are not explained clear. For an end-to-end method like DRL, it is crucial for state representation for training a good agent, as for network architecture. 4. The experiments are not comprehensive for validating that this algorithm works well in a wide range of scenarios. The efficiency, especially the time efficiency of the proposed algorithm, is not shown. Moreover, other DRL benchmarks, e.g., TD3 and DQN, should also be compared with. 5. There are typos and grammar errors.
Detailed Comments 1. Section 3.1, first paragraph, quotation mark error for "importance". 2. Appendix A.2 does not illustrate the state space representation of the environment clearly. 3. The authors should state clearly as to why the complete state history is enough to reduce POMDP for the no-CSI case. 4. Section 3.2.1: The first expression for J ( θ )
is incorrect, which should be Q ( s t 0 , π θ ( s t 0 ) )
. 5. The paper did not explain Figure 2 clearly. In particular, what does the curve with the label "Expected" in Fig. 2(a) stand for? Not to mention there are multiple misleading curves in Fig. 2(b)&(c). The benefit of introducing distributional RL is not clearly explained. 6. In Table 1, only 4 classes of users are considered in the experiment sections, which might not be in accordance with practical situations, where there can be more classes of users in the real system and more user numbers. 7. In the experiment sections, the paper only showed the Satisfaction Probability of the proposed method is larger than conventional methods. The algorithm complexity, especially the time complexity of the proposed method in an ultra multi-user scenario, is not shown. 8. There is a large literature on wireless scheduling with latency guarantees from the networking community, e.g., Sigcomm, INFOCOM, Sigmetrics. Representative results there should also be discussed and compared with.
====== post rebuttal: My concern regarding the experiments remains. I will keep my score unchanged.
| 3
|
{
"annotators": [
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f",
"boda"
],
"labels": [
"5",
"5",
"5"
]
}
|
5
|
gold
| 106
|
8: s/expensive approaches2) allows/expensive approaches,2) allows/ p.8: s/estimates3) is/estimates, and3) is/ In the references: Various words in many of the references need capitalization, such as "ai" in Amodei et al. (2016), "bayesian" in many of the papers, and "Advances in neural information processing systems" in several of the papers. Dusenberry et al. (2020) was published in ICML 2020 Osawa et al. (2019) was published in NeurIPS 2019 Swiatkowski et al. (2020) was published in ICML 2020 p. 13, supplement, Fig.
|
ICLR_2021_872
|
ICLR_2021
|
The authors push on the idea of scalable approximate inference, yet the largest experiment shown is on CIFAR-10. Given this focus on scalability, and the experiments in recent literature in this space, I think experiments on ImageNet would greatly strengthen the paper (though I sympathize with the idea that this can a high bar from a resources standpoint).
As I noted down below, the experiments currently lack results for the standard variational BNN with mean-field Gaussians. More generally, I think it would be great to include the remaining models from Ovadia et al. (2019). More recent results from ICML could also useful to include (as referenced in the related works sections). Recommendation
Overall, I believe this is a good paper, but the current lack of experiments on a dataset larger than CIFAR-10, while also focusing on scalability, make it somewhat difficult to fully recommend acceptance. Therefore, I am currently recommending marginal acceptance for this paper.
Additional comments
p. 5-7: Including tables of results for each experiment (containing NLL, ECE, accuracy, etc.) in the main text would be helpful to more easily assess
p. 7: For the MNIST experiments, in Ovadia et al. (2019) they found that variational BNNs (SVI) outperformed all other methods (including deep ensembles) on all shifted and OOD experiments. How does your proposed method compare? I think this would be an interesting experiment to include, especially since the consensus in Ovadia et al. (2019) (and other related literature) is that full variational BNNs are quite promising but generally methodologically difficult to scale to large problems, with relative performance degrading even on CIFAR-10. Minor
p. 6: In the phrase "for 'in-between' uncertainty", the first quotation mark on 'in-between' needs to be the forward mark rather than the backward mark (i.e., ‘ i n − b e t w e e n ′ ).
p. 7: s/out of sitribution/out of distribution/
p. 8: s/expensive approaches 2) allows/expensive approaches, 2) allows/
p. 8: s/estimates 3) is/estimates, and 3) is/
In the references:
Various words in many of the references need capitalization, such as "ai" in Amodei et al. (2016), "bayesian" in many of the papers, and "Advances in neural information processing systems" in several of the papers.
Dusenberry et al. (2020) was published in ICML 2020
Osawa et al. (2019) was published in NeurIPS 2019
Swiatkowski et al. (2020) was published in ICML 2020
p. 13, supplement, Fig. 5: error bar regions should be upper and lowered bounded by [0, 1] for accuracy.
p. 13, Table 2: Splitting this into two tables, one for MNIST and one for CIFAR-10, would be easier to read.
| 3
|
{
"annotators": [
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f",
"boda"
],
"labels": [
"5",
"5",
"5"
]
}
|
5
|
gold
| 107
|
3. I didn't find all parameter values. What are the model parameters for task 1? What lambda was chosen for the Boltzmann policy. But more importantly: How were the parameters chosen? Maximum likelihood estimates?
|
NIPS_2016_339
|
NIPS_2016
|
weakness of the model. How would the values in table 1 change without this extra assumption? 3. I didn't find all parameter values. What are the model parameters for task 1? What lambda was chosen for the Boltzmann policy. But more importantly: How were the parameters chosen? Maximum likelihood estimates? 4. An answer to this point may be beyond the scope of this work, but it may be interesting to think about it. It is mentioned (lines 104-106) that "the examples [...] should maximally disambiguate the concept being taught from other possible concepts". How is disambiguation measured? How can disambiguation be maximized? Could there be an information theoretic approach to these questions? Something like: the teacher chooses samples that maximally reduce the entropy of the assumed posterior of the student. Does the proposed model do that? Minor points: ⢠line 88: The optimal policy is deterministic. Hence I'm a bit confused by "the stochastic optimal policy". Is above defined "the Boltzmann policy" meant? ⢠What is d and h in equation 2? ⢠line 108: "to calculate this ..." What is meant by "this"? ⢠Algorithm 1: Require should also include epsilon. Does line 1 initialize the set of policies to an empty set? Are the policies in line 4 added to this set? Does calculateActionValues return the Q* defined in line 75? What is M in line 6? How should p_min be chosen? Why is p_min needed anyway? ⢠Experiment 2: Is the reward 10 points (line 178) or 5 points (line 196)? ⢠Experiment 2: Is 0A the condition where all tiles are dangerous? Why are the likelihoods so much larger for 0A? Is it reasonable to average over likelihoods that differ by more than an order of magnitude (0A vs 2A-C)? ⢠Text and formulas should be carefully checked for typos (e.g. line 10 in Algorithm 1: delta > epsilon; line 217: 1^-6;)
| 3
|
{
"annotators": [
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f",
"boda"
],
"labels": [
"5",
"5",
"5"
]
}
|
5
|
gold
| 108
|
1. The authors should make clear the distinction of when the proposed method is trained using only weak supervision and when it is semi-supervised trained. For instance, in Table 1, I think the proposed framework row refers to the semi-supervised version of the method, thus the authors should rename the column to ‘Fully supervised’ from ‘Supervised’. Maybe a better idea is to specify the data used to train ALL the parts of each model and have two big columns ‘Mixture training data’ and ‘Single source data’ which will make it much more prevalent of what is which.
|
4N97bz1sP6
|
ICLR_2024
|
1. The authors should make clear the distinction of when the proposed method is trained using only weak supervision and when it is semi-supervised trained. For instance, in Table 1, I think the proposed framework row refers to the semi-supervised version of the method, thus the authors should rename the column to ‘Fully supervised’ from ‘Supervised’. Maybe a better idea is to specify the data used to train ALL the parts of each model and have two big columns ‘Mixture training data’ and ‘Single source data’ which will make it much more prevalent of what is which.
2. Building upon my previous argument, I think that when one is using these large pre-trained networks on single-source data like CLAP, the underlying method becomes supervised in a sense, or to put it more specifically supervised with unpaired data. The authors should clearly explain these differences throughout the manuscript.
3. I think the authors should include stronger text-based sound separation baselines like the model and ideally the training method that uses heterogeneous conditions to train the separation model in [A] which has already shown to outperform LASS-Net (Liu et. al 2022) which is almost always the best performing baseline in this paper.
I would be more than happy to increase my score if all the above weaknesses are addressed by the authors.
[A] Tzinis, E., Wichern, G., Smaragdis, P. and Le Roux, J., 2023, June. Optimal Condition Training for Target Source Separation. In ICASSP 2023-2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (pp. 1-5). IEEE.
| 3
|
{
"annotators": [
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f",
"boda"
],
"labels": [
"5",
"5",
"5"
]
}
|
5
|
gold
| 121
|
- Small contributions over previous methods (NCNet [6] and Sparse NCNet [21]). Mostly (good) engineering. And despite that it seems hard to differentiate it from its predecessors, as it performs very similarly in practice.
|
NIPS_2020_1454
|
NIPS_2020
|
- Small contributions over previous methods (NCNet [6] and Sparse NCNet [21]). Mostly (good) engineering. And despite that it seems hard to differentiate it from its predecessors, as it performs very similarly in practice. - Claims to be SOTA on three datasets, but this does not seem to be the case. Does not evaluate on what it trains on (see "additional feedback").
| 3
|
{
"annotators": [
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f",
"boda"
],
"labels": [
"1",
"1",
"1"
]
}
|
1
|
gold
| 122
|
- "semantic" segmentation is not low-level since the categories are specified for each pixel so the statements about semantic segmentation being a low-level cue should be removed from the paper.
|
NIPS_2018_25
|
NIPS_2018
|
- My understanding is that R,t and K (the extrinsic and intrinsic parameters of the camera) are provided to the model at test time for the re-projection layer. Correct me in the rebuttal if I am wrong. If that is the case, the model will be very limited and it cannot be applied to general settings. If that is not the case and these parameters are learned, what is the loss function? - Another issue of the paper is that the disentangling is done manually. For example, the semantic segmentation network is the first module in the pipeline. Why is that? Why not something else? It would be interesting if the paper did not have this type of manual disentangling, and everything was learned. - "semantic" segmentation is not low-level since the categories are specified for each pixel so the statements about semantic segmentation being a low-level cue should be removed from the paper. - During evaluation at test time, how is the 3D alignment between the prediction and the groundtruth found? - Please comment on why the performance of GTSeeNet is lower than that of SeeNetFuse and ThinkNetFuse. The expectation is that groundtruth 2D segmentation should improve the results. - line 180: Why not using the same amount of samples for SUNCG-D and SUNCG-RGBD? - What does NoSeeNet mean? Does it mean D=1 in line 96? - I cannot parse lines 113-114. Please clarify.
| 3
|
{
"annotators": [
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f",
"boda"
],
"labels": [
"5",
"5",
"5"
]
}
|
5
|
gold
| 123
|
2.b) On lines 182-183 the authors note measuring manifold capacity for unperturbed images, i.e. clean exemplar manifolds. Earlier they state that the exemplar manifolds are constructed using either adversarial perturbations or from stochasticity of the network. So I’m wondering how one constructs images for a clean exemplar manifold for a non-stochastic network? Or put another way, how is the denominator of figure 2.c computed for the ResNet50 & ATResNet50 networks?
|
NIPS_2021_1222
|
NIPS_2021
|
Claims: 1.a) I think the paper falls short of the high-level contributions claimed in the last sentence of the abstract. As the authors note in the background section, there are a number of published works that demonstrate the tradeoffs between clean accuracy, training with noise perturbations, and adversarial robustness. Many of these, especially Dapello et al., note the relevance with respect to stochasticity in the brain. I do not see how their additional analysis sheds new light on the mechanisms of robust perception or provides a better understanding of the role stochasticity plays in biological computation. To be clear - I think the paper is certainly worthy of publication and makes notable contributions. Just not all of the ones claimed in that sentence.
1.b) The authors note on lines 241-243 that “the two geometric properties show a similar dependence for the auditory (Figure 4A) and visual (Figure 4B) networks when varying the eps-sized perturbations used to construct the class manifolds.” I do not see this from the plots. I would agree that there is a shared general upward trend, but I do not agree that 4A and 4B show “similar dependence” between the variables measured. If nothing else, the authors should be more precise when describing the similarities.
Clarifications: 2.a) The authors say on lines 80-82 that the center correlation was not insightful for discriminating model defenses, but then use that metric in figure 4 A&B. I’m wondering why they found it useful here and not elsewhere? Or what they meant by the statement on lines 80-82.
2.b) On lines 182-183 the authors note measuring manifold capacity for unperturbed images, i.e. clean exemplar manifolds. Earlier they state that the exemplar manifolds are constructed using either adversarial perturbations or from stochasticity of the network. So I’m wondering how one constructs images for a clean exemplar manifold for a non-stochastic network? Or put another way, how is the denominator of figure 2.c computed for the ResNet50 & ATResNet50 networks?
2.c) The authors report mean capacity and width in figure 2. I think this is the mean across examples as well as across seeds. Is the STD also computed across examples and seeds? The figure caption says it is only computed across seeds. Is there a lot of variability across examples?
2.d) I am unsure why there would be a gap between the orange and blue/green lines at the minimum strength perturbation for the avgpool subplot in figure 2.c. At the minimum strength perturbation, by definition, the vertical axis should have a value of 1, right? And indeed in earlier layers at this same perturbation strength the capacities are equal. So why does the ResNet50 lose so much capacity for the same perturbation size from conv1 to avgpool? It would also be helpful if the authors commented on the switch in ordering for ATResNet and the stochastic networks between the middle and right subplots.
General curiosities (low priority): 3.a) What sort of variability is there in the results with the chosen random projection matrix? I think one could construct pathological projection matrices that skews the MFTMA capacity and width scores. These are probably unlikely with random projections, but it would still be helpful to see resilience of the metric to the choice of random projection. I might have missed this in the appendix, though.
3.b) There appears to be a pretty big difference in the overall trends of the networks when computing the class manifolds vs exemplar manifolds. Specifically, I think the claims made on lines 191-192 are much better supported by Figure 1 than Figure 2. I would be interested to hear what the authors think in general (i.e. at a high/discussion level) about how we should interpret the class vs exemplar manifold experiments.
Nitpick, typos (lowest priority): 4.a) The authors note on line 208 that “Unlike VOneNets, the architecture maintains the conv-relu-maxpool before the first residual block, on the grounds that the cochleagram models the ear rather than the primary auditory cortex.” I do not understand this justification. Any network transforming input signals (auditory or visual) would have to model an entire sensory pathway, from raw input signal to classification. I understand that VOneNets ignore all of the visual processing that occurs before V1. I do not see how this justifies adding the extra layer to the auditory network.
4.b) It is not clear why the authors chose a line plot in figure 4c. Is the trend as one increases depth actually linear? From the plot it appears as though the capacity was only measured at the ‘waveform’ and ‘avgpool’ depths; were there intermediate points measured as well? It would be helpful if they clarified this, or used a scatter/bar plot if there were indeed only two points measured per network type.
4.c) I am curious why there was a switch to reporting SEM instead of STD for figures 5 & 6.
4.c) I found typos on lines 104, 169, and the fig 5 caption (“10 image and”).
| 3
|
{
"annotators": [
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f",
"boda"
],
"labels": [
"5",
"5",
"5"
]
}
|
5
|
gold
| 130
|
1) Originality is limited because the main idea of variable splitting is not new and the algorithm is also not new.
|
NIPS_2018_476
|
NIPS_2018
|
Weakness] 1) Originality is limited because the main idea of variable splitting is not new and the algorithm is also not new. 2) Theoretical proofs of existing algorithm might be regarded as some incremental contributions. 3) Experiments are somewhat weak: 3-1) I was wondering why Authors conducted experiments with lambda=1. According to Corollary 1 and 2 lambda should be sufficiently large, however it is completely ignored for experimental setting. Otherwise the proposed algorithm has no difference from [10]. 3-2) In Figure 3, different evaluations are shown in different dataset. It might be regarded as subjectively selected demonstrations. 3-3) I think clustering accuracy is not very significant because there are many other sophisticated algorithms, and initializations are still very important for nice performance. It show just the proposed algorithm is OK for some applications. [Minor points] -- comma should be deleted at line num. 112: "... + \lambda(u-v), = 0 ...". -- "close-form" --> "closed-form" at line num.195-196. --- after feedback --- I understand that the contribution of this paper is a theoretical justification of the existing algorithms proposed in [10]. In that case, the experimental validation with respect to the sensitivity of "lambda" is more important rather than the clustering accuracy. So Fig. 1 in feedback file is nice to add paper if possible. I think dropping symmetry is helpful, however it is not new idea that is already being used. So, it will not change anything in practice to use it. Furthermore, in recent years, almost all application researchers are using some application specific extensions of NMF such as sparse NMF, deep NMF, semi-NMF, and graph-regularized NMF, rather than the standard NMF. Thus, this paper is theoretically interesting as some basic research, but weak from an application perspective. Finally, I changed my evaluation upper one as: "Marginally below the acceptance threshold. I tend to vote for rejecting this submission, but accepting it would not be that bad."
| 3
|
{
"annotators": [
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f",
"boda"
],
"labels": [
"1",
"1",
"1"
]
}
|
1
|
gold
| 134
|
2. Â Some ablation study is missing, which could cause confusion and extra experimentation for practitioners. For example, the \sigma in the RBF kernel seems to play a crucial role, but no analysis is given on it. Figure 4 analyzes how changing \lambda changes the performance, but it would be nice to see how \eta and \tau in equation (7) affect performance. Minor comments:
|
NIPS_2019_1131
|
NIPS_2019
|
1. There is no discussion on the choice of "proximity" and the nature of the task. On the proposed tasks, proximity on the fingertip Cartesian positions is strongly correlated with proximity in the solution space. However, this relationship doesn't hold for certain tasks. For example, in a complicated maze, two nearby positions in the Euclidean metric can be very far in the actual path. For robotic tasks with various obstacles and collisions, similar results apply. The paper would be better if it analyzes what tasks have reasonable proximity metrics, and demostrate failure on those that don't. 2. Â Some ablation study is missing, which could cause confusion and extra experimentation for practitioners. For example, the \sigma in the RBF kernel seems to play a crucial role, but no analysis is given on it. Figure 4 analyzes how changing \lambda changes the performance, but it would be nice to see how \eta and \tau in equation (7) affect performance. Minor comments: 1. The diversity term, defined as the facility location function, is undirected and history-invariant. Thus it shouldn't be called "curiosity", since curiosity only works on novel experiences. Please use a different name. 2. The curves in Figure 3 (a) are suspiciously cut at Epoch = 50, after which the baseline methods seem to catch up and perhaps surpass CHER. Perhaps this should be explained.
| 3
|
{
"annotators": [
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f",
"boda"
],
"labels": [
"5",
"5",
"5"
]
}
|
5
|
gold
| 138
|
1) The paper was extremely hard to follow. I read it multiple times and still had trouble following the exact experimental procedures and evaluations that the authors conducted.
|
5UW6Mivj9M
|
EMNLP_2023
|
1) The paper was extremely hard to follow. I read it multiple times and still had trouble following the exact experimental procedures and evaluations that the authors conducted.
2) Relatedly, it was hard to discern what was novel in the paper and what had already been tried by others.
3) Since the improvement in numbers is not large (in most cases, just a couple of points), it is hard to tell if this improvement is statistically significant and if it translates to qualitative improvements in performance.
| 3
|
{
"annotators": [
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f",
"boda"
],
"labels": [
"1",
"1",
"1"
]
}
|
1
|
gold
| 139
|
- Limited Experiments - Most of the experiments (excluding Section 4.1.1) are limited to RoBERTa-base only, and it is unclear if the results can be generalized to other models adopting learnable APEs. It is important to investigate whether the results can be generalized to differences in model size, objective function, and architecture (i.e., encoder, encoder-decoder, or decoder). In particular, it is worthwhile to include more analysis and discussion for GPT-2. For example, I would like to see the results of Figure 2 for GPT-2.
|
zpayaLaUhL
|
EMNLP_2023
|
- Limited Experiments
- Most of the experiments (excluding Section 4.1.1) are limited to RoBERTa-base only, and it is unclear if the results can be generalized to other models adopting learnable APEs. It is important to investigate whether the results can be generalized to differences in model size, objective function, and architecture (i.e., encoder, encoder-decoder, or decoder). In particular, it is worthwhile to include more analysis and discussion for GPT-2. For example, I would like to see the results of Figure 2 for GPT-2.
- The input for the analysis is limited to only 100 or 200 samples from wikitext-2. It would be desirable to experiment with a larger number of samples or with datasets from various domains.
- Findings are interesting, but no statement of what the contribution is and how practical impact on the community or practical use. (Question A).
- Results contradicting those reported in existing studies (Clark+'19) are observed but not discussed (Question B).
- I do not really agree with the argument in Section 5 that word embedding contributes to relative position-dependent attention patterns. The target head is in layer 8, and the changes caused by large deviations from the input, such as only position embedding, are quite large at layer 8. It is likely that the behavior is not such that it can be discussed to explain the behavior under normal conditions. Word embeddings may be the only prerequisites for the model to work properly rather than playing an important role in certain attention patterns.
- Introduction says to analyze "why attention depends on relative position," but I cannot find content that adequately answers this question.
- There is no connection or discussion of relative position embedding, which is typically employed in recent Transformer models in place of learnable APE (Question C).
| 3
|
{
"annotators": [
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f",
"boda"
],
"labels": [
"5",
"5",
"5"
]
}
|
5
|
gold
| 142
|
5. **(Performance of TTA methods)** This is an interesting observation, that using non-standard benchmarks breaks a lot of popular TTA methods. If the authors can evaluate TTA on more conditions of natural distribution shift, like WILDS [9], it could really strengthen the paper.
|
X4ATu1huMJ
|
ICLR_2024
|
**Overall comment**
The paper discusses evaluating TTA methods across multiple settings, and how to choose the correct method during test-time. I would argue most of the methods/model selection strategies that are discussed in the paper are not novel and/or existed before, and the paper does not have a lot of algorithmic innovation.
While this discussion unifies various prior methods and can be a valuable guideline for practitioners to choose the appropriate TTA method, there needs to be more experiments to make it a compelling paper (i.e., add MEMO [1] as a method of comparison, add WILDS-Camelyon 17 [9], WILDS-FMoW [9], ImageNet-A [7], CIFAR-10.1 [8] as dataset benchmarks). But I do feel the problem setup is very important, and adding more experiments and a bit of rewriting can make the paper much stronger.
**Abstract and Introduction**
1. The paper mentions model restarting to avoid error propagation. There has been important work in TTA, where the model adapts its parameters to only one test example at a time, and reverts back to the initial (pre-trained) weights after it has made the prediction, doing the process all over for the next test example. This is also an important setting to consider, where only one test example is available, and one cannot rely on batches of data from a stream. For example, see MEMO [1].
2. (nitpicking, not important to include in the paper) “under extremely long scenarios all existing TTA method results in degraded performance”, while this is true, the paper does not mention some recent works that helps alleviate this. E.g., MEMO [1] in the one test example at a time scenario, or MEMO + surgical FT [2] where MEMO is used in the online setting, but parameter-efficient updating helps with feature distortion/performance degradation. So the claim is outdated.
3. It would be good to cite relevant papers such as [4] as prior works that look into model selection strategies (but not for TTA setting) to motivate the problem statement.
**Section 3.2, model selection strategies in TTA**
1. While accuracy-on-the-line [3] shows correlation between source (ID) and target (OOD) accuracies, some work [4] also say source accuracy is unreliable in the face of large domain gaps. I think table 3 shows the same result. Better to cite [4] and add their observation.
2. Why not look at agreement-on-the-line [5]? This is known to be a good way of assessing performance on the target domain without having labels. For example, A-Line [6] seems to have good performance on TTA tasks. This should also be considered as a model selection method.
**Section 4.1, datasets**
1. Missing some key datasets such as ImageNet-A [7], CIFAR-10.1 [8]. It is important to consider ImageNet-A (to show TTA’s performance on adversarial examples) and CIFAR-10.1, to show TTA’s performance on CIFAR-10 examples where the shift is natural, i.e., not corruptions. Prior work such as MEMO [1] has used some of these datasets.
**Section 4.3, experimental setup**
1. The architecture suite that is used is limited in size. Only ResNext-29 and ResNet-50 are used. Since the paper’s goal is to say something rigorous about model selection strategies, it is important to try more architectures to have a comprehensive result. At least some vision-transformer architecture is required to make the results strong. I would suggest trying RVT-small [12] or ViT-B/32 [13].
Why do the authors use SGD as an optimizer for all tasks? It is previously shown that [14] SGD often performs worse for more modern architectures. The original TENT [15] paper also claims they use SGD for ImageNet and for everything else they use Adam [16].
**Section 5, results**
1. (Table 1) It might be easier if the texts mention that each row represent one method, and each column represents one model selection strategy. When the authors say “green” represents the best number, they mean “within a row”.
2. (Different methods’ ranking under different selection strategies) The results here are not clear and hard to read. How many times does one method outperform the other, when considering all different surrogate based metrics across all datasets? If the goal is to show consistency of AdaContrast as mentioned in the introduction, a better way of presenting this might be making something similar to table 1 of [17].
3. What does the **Median** column in Table 2 and 3 represent? There is no explanation given for this in paper.
4. I assume the 4 surrogate strategies are: S-Acc, Cross-Acc, Ent and Con. If so, then the statement **“While EATA is significantly the best under the oracle selection strategy (49.99 on average) it is outperformed for example by Tent (5th method using oracle selection) when using 3 out of 4 surrogate-based metrics”** is clearly False according to the last section of Table 2: Tent > EATA on Cross-Acc and Con, but EATA > Tent when using S-Acc and Ent.
5. **(Performance of TTA methods)** This is an interesting observation, that using non-standard benchmarks breaks a lot of popular TTA methods. If the authors can evaluate TTA on more conditions of natural distribution shift, like WILDS [9], it could really strengthen the paper.
| 3
|
{
"annotators": [
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f",
"boda"
],
"labels": [
"5",
"5",
"5"
]
}
|
5
|
gold
| 144
|
2. The authors need to show a graph showing the plot of T vs number of images, and Expectation(T) over the imagenet test set. It is important to understand whether the performance improvement stems solely from the network design to exploit spatial redundancies, or whether the redudancies stem from the nature of ImageNet, ie., large fraction of images can be done with Glance and hence any algorithm with lower resolution will have an unfair advantage. Note, algorithms skipping layers or channels do not enjoy this luxury.
|
NIPS_2020_204
|
NIPS_2020
|
1.The authors have done a good job with placing their work appropriately. One point of weakness is insufficient comparison to approaches that aim to reduce spatial redudancy, or make the networks more efficient specifically the ones skipping layers/channels. Comparison to OctConv and SkipNet even for a single datapoint with say the same backbone architecture will be valuable to the readers. 2. The authors need to show a graph showing the plot of T vs number of images, and Expectation(T) over the imagenet test set. It is important to understand whether the performance improvement stems solely from the network design to exploit spatial redundancies, or whether the redudancies stem from the nature of ImageNet, ie., large fraction of images can be done with Glance and hence any algorithm with lower resolution will have an unfair advantage. Note, algorithms skipping layers or channels do not enjoy this luxury. 3. The authors should add results from [57] and discuss the comparison. Recent alternatives to MSDNets should be compared and discussed. 4. Efficient backbone architectures and approaches tailoring the computation by controlling convolutional operator have the added advantage that they can be generally applied to semantic (object recognition) and dense pixel-wise tasks. Extension of this approach, unlike other approaches exploiting spatial redundancy to alternate vision tasks is not straightforward. The authors should discuss the implications of this approach to other vision tasks.
| 3
|
{
"annotators": [
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f",
"boda"
],
"labels": [
"5",
"5",
"5"
]
}
|
5
|
gold
| 157
|
- It would be good to include in the left graph in fig 3 the learning curve for a model without any mean teacher or pi regularization for comparison, to see if mean teacher accelerates learning or slows it down.
|
NIPS_2017_114
|
NIPS_2017
|
- More evaluation would have been welcome, especially on CIFAR-10 in the full label and lower label scenarios.
- The CIFAR-10 results are a little disappointing with respect to temporal ensembles (although the results are comparable and the proposed approach has other advantages)
- An evaluation on the more challenging STL-10 dataset would have been welcome. Comments
- The SVNH evaluation suggests that the model is better than pi an temporal ensembling especially in the low-label scenario. With this in mind, it would have been nice to see if you can confirm this on CIFAR-10 too (i.e. show results on CIFAR-10 with less labels)
- I would would have like to have seen what the CIFAR-10 performance looks like with all labels included.
- It would be good to include in the left graph in fig 3 the learning curve for a model without any mean teacher or pi regularization for comparison, to see if mean teacher accelerates learning or slows it down.
- I'd be interested to see if the exponential moving average of the weights provides any benefit on it's own, without the additional consistency cost.
| 3
|
{
"annotators": [
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f",
"boda"
],
"labels": [
"5",
"5",
"5"
]
}
|
5
|
gold
| 163
|
2. To utilize a volumetric representation in the deformation field is not a novel idea. In the real-time dynamic reconstruction task, VolumeDeform [1] has proposed volumetric grids to encode both the geometry and motion, respectively.
|
NIPS_2022_728
|
NIPS_2022
|
Weakness 1. The setup of capturing strategy is complicated and is not easy for applications in real life. To initialize the canonical space, the first stage is to capture the static state using a moving camera. Then to model motions, the second stage is to capture dynamic states using a few (4) fixed cameras. Such a 2-stage capturing is not straightforward. 2. To utilize a volumetric representation in the deformation field is not a novel idea. In the real-time dynamic reconstruction task, VolumeDeform [1] has proposed volumetric grids to encode both the geometry and motion, respectively. 3. The quantitative experiments (Tab. 2 and Tab. 3) show that the fidelity of rendered results highly depends on the 2-stage training strategy. In a general capturing case, other methods can obtain more accurate rendered images. Oppositely, Tab. 2 shows that it is not easy to fuse the designed 2-stage training strategy into current mainstream frameworks, such as D-NeRF, Nerfies and HyperNeRF. It verifies that the 2-stages training strategy is not a general design for dynamic NeRF.
[1] Innmann, Matthias, Michael Zollhöfer, Matthias Nießner, Christian Theobalt, and Marc Stamminger. "Volumedeform: Real-time volumetric non-rigid reconstruction." In European conference on computer vision (ECCV), pp. 362-379. Springer, Cham, 2016.
| 3
|
{
"annotators": [
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f",
"boda"
],
"labels": [
"1",
"1",
"1"
]
}
|
1
|
gold
| 169
|
4) The rock-paper-scissors example is clearly inspired by an example that appeared in many previous work. Please, cite the source appropriately.
|
NIPS_2018_707
|
NIPS_2018
|
weakness of the paper is the lack of experimental comparison with the state of the art. The paper spends whole page explaining reasons why the presented approach might perform better under some circumstances, but there is no hard evidence at all. What is the reason not to perform an empirical comparison to the joint belief state approach and show the real impact of the claimed advantages and disadvantages? Since this is the main point of the paper, it should be clear when the new modification is useful. 3) Furthermore, there is an incorrect statement about the performance of the state of the art method. The paper claims that "The evidence suggests that in the domain we tested on, using multi-valued states leads to better performance." because the alternative approach "was never shown to defeat prior top AIs". This is simply incorrect. Lack of an experiment is not evidence for superiority of the method that performed the experiment without any comparison. 4) The rock-paper-scissors example is clearly inspired by an example that appeared in many previous work. Please, cite the source appropriately. 5) As explained in 1), the presented method is quite heuristic. The algorithm does not actually play the blueprint strategy, only few values are used in the leaf states, which cannot cover the whole variety of the best response values. In order to assess whether the presented approach might be applicable also for other games, it would be very useful to evaluate it on some substantially different domains, besides poker. Clarity: The paper is well written and organized, and it is reasonably easy to understand. The impact of the key differences between the theoretic inspiration and the practical implementation should be explained more clearly. Originality: The presented method is a novel modification of continual resolving. The paper clearly explains the main distinction form the existing method. Significance: The presented method seems to substantially reduce the computational requirements of creating a strong poker bot. If this proofs to be the case also for some other imperfect information games, it would be a very significant advancement in creating algorithms for playing these games. Detailed comments: 190: I guess the index should be 1 339: I would not say MCCFR is currently the preferred solution method, since CFR+ does not work well with sampling 349: There is no evidence the presented method would work better in stratego. It would depend on the specific representation and how well would the NN generalize over the types of heuristics. Reaction to rebuttal: 1) The formulation of the formal statement should be clearer. Still, while you are using the BR values from the blueprint strategy in the computation, I do not see how the theory can give you any real bounds the way you use the algorithm. One way to get more realistic bounds would be to analyze the function approximation version and use error estimates from cross-valiadation. 2) I do not believe head-to-head evaluation makes too much sense because of well known intransitivity effects. However, since the key difference between your algorithm and DeepStack is the form of the used leaf evaluation function, it would certainly not take man-years to replace the evaluation function with the joint belief in your framework. It would be very interesting to see comparison of exploitability and other trade-offs on smaller games, where we can still compute it. 4) I meant the use of the example for save resolving. 5) There is no need for strong agents for some particular games to make rigorous evaluation of equilibrium solving algorithms. You can compute exploitability in sufficiently large games to evaluate how close your approach is to the equilibrium. Furthermore, there are many domain independent algorithms for approaximating equilibriua in these games you can compare to. Especially the small number of best response values necessary for the presented approach is something that would be very interesting to evaluate in other games. Line 339: I just meant that I consider CFR+ to be "the preferred domain-independent method of solving imperfect-information games", but it is not really important, it was a detailed comment.
| 3
|
{
"annotators": [
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f",
"boda"
],
"labels": [
"5",
"5",
"5"
]
}
|
5
|
gold
| 171
|
3. The innovations of network architecture design and constraint embedding are rather limited. The authors discussed that the performance is limited by the performance of the oracle expert.
|
NIPS_2022_69
|
NIPS_2022
|
1. This work uses an antiquated GNN model and method, it seriously impacts the performance of this framework. The baseline algorithms/methods are also antiquated. 2. The experimental results did not show that this work model obviously outperforms other variant comparison algorithms/models. 3. The innovations of network architecture design and constraint embedding are rather limited.
The authors discussed that the performance is limited by the performance of the oracle expert.
| 3
|
{
"annotators": [
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f",
"boda"
],
"labels": [
"1",
"1",
"1"
]
}
|
1
|
gold
| 172
|
12. It would seem that this update would have to integrate over all possible environments in order to be meaningful, assuming that the true environment is not known at update time. Is that correct? I guess this was probably for space reasons, but the bolded sections in page 6 should really be broken out into \paragraphs — it's currently a huge wall of text.
|
ICLR_2022_3205
|
ICLR_2022
|
This method trades one intractible problem for another: it requires the learning of cross-values v e ′ ( x t ; e )
for all pairs of possible environments e , e ′
. It is not clear that this will be an improvement when scaling up.
At a few points the paper introduces approximations, but the gap to the true value and the implications of these approximations are not made completely clear to me. The authors should be more precise about the tradeoffs and costs of the methods they propose, both in terms of accuracy and computational cost.
On page 6, it claims that estimating v c
according to samples will lead to Thompson sampling-like behavior, which might lead to better exploration. This seems a bit facetious given that this paper attempts to find a Bayes-optimal policy and explicitly points out the weaknesses of Thompson sampling in an earlier section.
Not scaled to larger domains, but this is understandable.
Questions and minor comments
Is the belief state conditioning the policy also supposed to change with time τ
? As written it looks like the optimal Bayes-adaptive policy conditions on one sampled belief about the environment and then plays without updating that belief.
It is not intuitive to me how it is possible to estimate v f
, despite the Bellman equation written in Eq. 12. It would seem that this update would have to integrate over all possible environments in order to be meaningful, assuming that the true environment is not known at update time. Is that correct?
I guess this was probably for space reasons, but the bolded sections in page 6 should really be broken out into \paragraphs — it's currently a huge wall of text.
| 4
|
{
"annotators": [
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f",
"boda"
],
"labels": [
"5",
"5",
"5"
]
}
|
5
|
gold
| 177
|
1) there is a drop of correlation after a short period of training, which goes up with more training iterations;
|
NIPS_2022_1770
|
NIPS_2022
|
Weakness: There are still several concerns with the finding that the perplexity is highly correlated with the number of decoder parameters.
According to Figure 4, the correlation decreases as top-10% architectures are chosen instead of top-100%, which indicates that the training-free proxy is less accurate for parameter-heavy decoders.
The range of sampled architectures should also affect the correlation. For instance, once the sampled architectures are of similar sizes, it could be more challenging to differentiate their perplexity and thus the correlation can be lower.
Detailed Comments:
Some questions regarding Figure 4: 1) there is a drop of correlation after a short period of training, which goes up with more training iterations; 2) the title "Top-x%" should be further explained;
Though the proposed approach yields the Pareto frontier of perplexity, latency and memory, is there any systematic way to choose a single architecture given the target perplexity?
| 4
|
{
"annotators": [
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f",
"boda"
],
"labels": [
"1",
"1",
"1"
]
}
|
1
|
gold
| 179
|
2)The derivation from Eqn. 3 to Eqn. 4 misses the temperature τ , τ should be shown in a rigorous way or this paper mention it.
|
ICLR_2023_650
|
ICLR_2023
|
1.One severe problem of this paper is that it misses several important related work/baselines to compare[1,2,3,4], either in discussion [1,2,3,4]or experiments[1,2]. This paper addresses to design a normalization layer that can be plugged in the network for avoiding the dimensional collapse of representation (in intermediate layer). This idea has been done by the batch whitening methods [1,2,3] (e.g, Decorrelated Batch Normalization (DBN), IterNorm, etal.). Batch whitening, which is a general extent of BN that further decorrelating the axes, can ensure the covariance matrix of the normalized output as Identity (IterNorm can obtain an approximate one). These normalization modules can surely satisfy the requirements these paper aims to do. I noted that this paper cites the work of Hua et al, 2021, which uses Decorrelated Batch Normalization for Self-supervised learning (with further revision using shuffling). This paper should note the exist of Decorrelated Batch Normalization. Indeed, the first work to using whitening for self-supervised learning is [4], where it shows how the main motivations of whitening benefits self-supervised learning.
2.I have concerns on the connections and analyses, which is not rigorous for me. Firstly, this paper removes the A D − 1
in Eqn.6, and claims that “In fact, the operation corresponds to the stop-gradient technique, which is widely used in contrastive learning methods (He et al., 2020; Grill et al., 2020). By throwing away some terms in the gradient, stop-gradient makes the training process asymmetric and thus avoids representation collapse with less computational overhead. It verifies the feasibility of our discarding operation”. I do not understand how to stop gradients used in SSL can be connected to the removement of A D − 1
, I expect this paper can provide the demonstration or further clarification.
Secondly, It is not clear why layerNorm is necessary. Besides, how the layer normalization can be replace with an additional factor (1+s) to rescale H shown in claims “For the convenience of analysis, we replace the layer normalization with an additional factor 1 + s to rescale H”. I think the assumption is too strong.
In summary, the connections between the proposed contraNorm and uniformity loss requires: 1) removing A D − 1
and 2) add layer normalization, furthermore the propositions for support the connection require the assumption “layer normalization can be replace with an additional factor (1+s) to rescale H”. I personally feel that the connection and analysis are somewhat farfetched.
Other minors:
1)Figure 1 is too similar to the Figure 1 of Hua et al, 2021, I feel it is like a copy at my first glance, even though I noted some slightly differences when I carefully compare Figure 1 of this paper to Figure 1 of Hua et al, 2021.
2)The derivation from Eqn. 3 to Eqn. 4 misses the temperature τ , τ
should be shown in a rigorous way or this paper mention it.
3)In page 6. the reference of Eq.(24)? References:
[1] Decorrelated Batch Normalization, CVPR 2018
[2] Iterative Normalization: Beyond Standardization towards Efficient Whitening, CVPR 2019
[3] Whitening and Coloring transform for GANs. ICLR, 2019
[4]Whitening for Self-Supervised Representation Learning, ICML 2021
| 4
|
{
"annotators": [
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f",
"boda"
],
"labels": [
"5",
"5",
"5"
]
}
|
5
|
gold
| 181
|
1. Line 156. It'd be useful to the reader to add a citation on differential privacy, e.g. one of the standard works like [2].
|
3vXpZpOn29
|
ICLR_2025
|
It is unclear that linear datamodels extend to other kinds of tasks, e.g. language modeling or regression problems. I believe this to be a major weakness of the paper. While linear datamodels lead to simple algorithms in this paper, the previous work [1] does not have a good argument for why linear datamodels work [1; Section 7.2]---in fact Figure 6 of [1] display imperfect matching using linear datamodels. It'd be useful to mention this limitation in this manuscript as well, and discuss the limitation's impact to machine learning.
# Suggestions:
1. Line 156. It'd be useful to the reader to add a citation on differential privacy, e.g. one of the standard works like [2].
2. Line 176. $\hat{f}$ should have output range in $\mathbb{R}^k$ since the range of $f_x$ is in $\mathbb{R}^k$.
3. Line 182. "show" -> "empirically show".
4. Definition 3. Write safe, $S_F$, and input $x$ explicitly in KLoM, otherwise KLoM$(\mathcal{U})$ looks like KLoM of the unlearning function across _all_ safe functions and inputs. I'm curious why the authors wrote KLoM$(\mathcal{U})$.
5. Add a Limitations section.
[1] Ilyas, A., Park, S. M., Engstrom, L., Leclerc, G., & Madry, A. (2022). Datamodels: Predicting predictions from training data. arXiv preprint arXiv:2202.00622.
[2] Dwork, C., & Roth, A. (2014). The algorithmic foundations of differential privacy. Foundations and Trends® in Theoretical Computer Science, 9(3–4), 211-407.
| 4
|
{
"annotators": [
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f",
"boda"
],
"labels": [
"5",
"5",
"5"
]
}
|
5
|
gold
| 182
|
2) Shapely values over other methods. I think the authors need to back up their argument for using Shapely value explanations over other methods by comparing experimentally with other methods such as CaCE or even raw gradients. In addition, I think the paper would benefit a lot by including a significant discussion on the advantages and disadvantages of each of the three ways of transforming the high-dimensional data to low-dimensional latent space which might make them more or less suitable for certain tasks/datasets/applications. Because of these concerns, I am keeping my original rating.
|
ICLR_2021_1504
|
ICLR_2021
|
W1) The authors should compare their approach (methodologically as well as experimentally) to other concept-based explanations for high-dimensional data such as (Kim et al., 2018), (Ghorbani et al., 2019) and (Goyal et al., 2019). The related work claims that (Kim et al., 2018) requires large sets of annotated data. I disagree. (Kim et al., 2018) only requires a few images describing the concept you want to measure the importance of. This is significantly less than the number of annotations required in the image-to-image translation experiment in the paper where the complete dataset needs to be annotated. In addition, (Kim et al., 2018) allows the flexibility to consider any given semantic concept for explanation while the proposed approach is limited either to semantic concepts captured by frequency information, or to semantic concepts automatically discovered by representation learning, or to concepts annotated in the complete dataset. (Ghorbani et al., 2019) also overcomes the issue of needing annotations by discovering useful concepts from the data itself. What advantages does the proposed approach offer over these existing methods?
W2) Faithfulness of the explanations with the pretrained classifier. The methods of disentangled representation and image-to-image translation require training another network to learn a lower-dimensional representation. This runs the risk of encoding some biases of its own. If we find some concerns with the explanations, we cannot infer if the concerns are with the trained classifier or the newly trained network, potentially making the explanations useless.
W3) In the 2-module approach proposed in the paper, the second module can theoretically be any explainability approach for low-dimensional data. What is the reason that the authors decide to use Shapely instead of other works such as (Breiman, 2001) or (Ribeiro et al., 2016)?
W4) Among the three ways of transforming the high-dimensional data to low-dimensional latent space, what criteria should be used by a user to decide which method to use? Or, in other words, what are the advantages and disadvantages of each of these methods which might make them more or less suitable for certain tasks/datasets/applications?
W5) The paper uses the phrase “human-interpretable explainability”. What other type of explainability could be possible if it’s not human-interpretable? I think the paper might benefit with more precise definitions of these terms in the paper.
References mentioned above which are not present in the main paper:
(Ghorbani et al., 2019) Amirata Ghorbani, James Wexler, James Zou, Been Kim. Towards Automatic Concept-based Explanations. NeurIPS 2019.
(Goyal et al., 2019) Yash Goyal, Amir Feder, Uri Shalit, Been Kim. Explaining Classifiers with Causal Concept Effect (CaCE). ArXiv 2019.
—————————————————————————————————————————————————————————————— ——————————————————————————————————————————————————————————————
Update after rebuttal: I thank the authors for their responses to all my questions. However, I believe that these answers need to be justified experimentally in order for the paper’s contributions to be significant for acceptance. In particular, I still have two major concerns. 1) the faithfulness of the proposed approach. I think that the authors’ answer that their method is less at risk to biases than other methods needs to be demonstrated with at least a simple experiment. 2) Shapely values over other methods. I think the authors need to back up their argument for using Shapely value explanations over other methods by comparing experimentally with other methods such as CaCE or even raw gradients. In addition, I think the paper would benefit a lot by including a significant discussion on the advantages and disadvantages of each of the three ways of transforming the high-dimensional data to low-dimensional latent space which might make them more or less suitable for certain tasks/datasets/applications. Because of these concerns, I am keeping my original rating.
| 4
|
{
"annotators": [
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f",
"boda"
],
"labels": [
"5",
"5",
"5"
]
}
|
5
|
gold
| 184
|
3. The authors conduct comprehensive experiments to validate the efficacy of CATER in various settings, including an architectural mismatch between the victim and the imitation model and cross-domain imitation.
|
NIPS_2022_2373
|
NIPS_2022
|
weakness in He et al., and proposes a more invisible watermarking algorithm, making their method more appealing to the community. 2. Instead of using a heuristic search, the authors elegantly cast the watermark search issue into an optimization problem and provide rigorous proof. 3. The authors conduct comprehensive experiments to validate the efficacy of CATER in various settings, including an architectural mismatch between the victim and the imitation model and cross-domain imitation. 4. This work theoretically proves that CATER is resilient to statistical reverse-engineering, which is also verified by their experiments. In addition, they show that CATER can defend against ONION, an effective approach for backdoor removal.
Weakness: 1. The authors assume that all training data are from the API response, but what if the adversary only uses the part of the API response? 2. Figure 5 is hard to comprehend. I would like to see more details about the two baselines presented in Figure 5.
The authors only study CATER for the English-centric datasets. However, as we know, the widespread text generation APIs are for translation, which supports multiple languages. Probably, the authors could extend CATER to other languages in the future.
| 4
|
{
"annotators": [
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f",
"boda"
],
"labels": [
"1",
"1",
"1"
]
}
|
1
|
gold
| 188
|
- Generally, this seems like only a very first step towards real strategic settings: in light of what they claim ("strategic predictions", l28), their setting is only partially strategic/game theoretic as the opponent doesn't behave strategically (i.e., take into account the other strategic player).
|
NIPS_2017_143
|
NIPS_2017
|
For me the main issue with this paper is that the relevance of the *specific* problem that they study -- maximizing the "best response" payoff (l127) on test data -- remains unclear. I don't see a substantial motivation in terms of a link to settings (real or theoretical) that are relevant:
- In which real scenarios is the objective given by the adverserial prediction accuracy they propose, in contrast to classical prediction accuracy?
- In l32-45 they pretend to give a real example but for me this is too vague. I do see that in some scenarios the loss/objective they consider (high accuracy on majority) kind of makes sense. But I imagine that such losses already have been studied, without necessarily referring to "strategic" settings. In particular, how is this related to robust statistics, Huber loss, precision, recall, etc.?
- In l50 they claim that "pershaps even in most [...] practical scenarios" predicting accurate on the majority is most important. I contradict: in many areas with safety issues such as robotics and self-driving cars (generally: control), the models are allowed to have small errors, but by no means may have large errors (imagine a self-driving car to significantly overestimate the distance to the next car in 1% of the situations).
Related to this, in my view they fall short of what they claim as their contribution in the introduction and in l79-87:
- Generally, this seems like only a very first step towards real strategic settings: in light of what they claim ("strategic predictions", l28), their setting is only partially strategic/game theoretic as the opponent doesn't behave strategically (i.e., take into account the other strategic player).
- In particular, in the experiments, it doesn't come as a complete surprise that the opponent can be outperformed w.r.t. the multi-agent payoff proposed by the authors, because the opponent simply doesn't aim at maximizing it (e.g. in the experiments he maximizes classical SE and AE).
- Related to this, in the experiments it would be interesting to see the comparison of the classical squared/absolute error on the test set as well (since this is what LSE claims to optimize).
- I agree that "prediction is not done in isolation", but I don't see the "main" contribution of showing that the "task of prediction may have strategic aspects" yet. REMARKS:
What's "true" payoff in Table 1? I would have expected to see the test set payoff in that column. Or is it the population (complete sample) empirical payoff?
Have you looked into the work by Vapnik about teaching a learner with side information? This looks a bit similar as having your discrapency p alongside x,y.
| 4
|
{
"annotators": [
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f",
"boda"
],
"labels": [
"1",
"1",
"1"
]
}
|
1
|
gold
| 189
|
- The authors approach is only applicable for problems that are small or medium scale. Truly large problems will overwhelm current LP-solvers.
|
NIPS_2018_430
|
NIPS_2018
|
- The authors approach is only applicable for problems that are small or medium scale. Truly large problems will overwhelm current LP-solvers. - The authors only applied their method on peculiar types of machine learning applications that were already used for testing boolean classifier generation. It is unclear whether the method could lead to progress in the direction of cleaner machine learning methods for standard machine learning tasks (e.g. MNIST). Questions: - How where the time limits in the inner and outer problem chosen? Did larger timeouts lead to better solutions? - It would be helpful to have an algorithmic writeup of the solution of the pricing problem. - SVM gave often good results on the datasets. Did you use a standard SVM that produced a linear classifier or a Kernel method? If the former is true, this would mean that the machine learning tasks where rather easy and it would be necessary to see results on more complicated problems where no good linear separator exists. Conclusion: I very much like the paper and strongly recommend its publication. The authors propose a theoretically well grounded approach to supervised classifier learning. While the number of problems that one can attack with the method is not so large, the theoretical (problem formulation) and practical (Dantzig-Wolfe solver) contribution can possibly serve as a starting point for further progress in this area of machine learning.
| 4
|
{
"annotators": [
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f",
"boda"
],
"labels": [
"1",
"1",
"1"
]
}
|
1
|
gold
| 191
|
- Multiscale modeling:- The aggregation operation after "Integration" needs further clarification. Please provide more details in the main paper, and if you refer to other architectures, acknowledge their structure properly.
|
8HG2QrtXXB
|
ICLR_2024
|
- Source of Improvement and Ablation Study:
- Given the presence of various complex architectural choices, it's difficult to determine whether the Helmholtz decomposition is the primary source of the observed performance improvement. Notably, the absence of the multi-head mechanism leads to a performance drop (0.1261 -> 0.1344) for the 64x64 Navier-Stokes, which is somewhat comparable to the performance decrease resulting from the ablation of the Helmholtz decomposition (0.1261 -> 0.1412). These results raise questions about the model's overall performance gain compared to the baseline models when the multi-head trick is absent. Additionally, the ablation studies need to be explained more comprehensively with sufficient details, as the current presentation makes it difficult to understand the methodology and outcomes.
- The paper claims that Vortex (Deng et al., 2023) cannot be tested on other datasets, which seems unusual, as they are the same type of task and data that are disconnected from the choice of dynamics modeling itself. It should be further clarified why Vortex cannot be applied to other datasets.
- Interpretability Claim:
- The paper's claim about interpretability is not well-explained. If the interpretability claim is based on the model's prediction of an explicit term of velocity, it needs further comparison and a more comprehensive explanation. Does the Helmholtz decomposition significantly improve interpretability compared to baseline models, such as Vortex (Deng et al., 2023)?
- In Figure 4, it appears that the model predicts incoherent velocity fields around the circle boundary, even with non-zero velocity outside the boundary, while baseline models do not exhibit such artifacts. This weakens the interpretability claim.
- Multiscale modeling:
- The aggregation operation after "Integration" needs further clarification. Please provide more details in the main paper, and if you refer to other architectures, acknowledge their structure properly.
- Regarding some missing experimental results with cited baselines, it's crucial to include and report all baseline results to ensure transparency, even if the outcomes are considered inferior.
- Minor issues:
- Ensure proper citation format for baseline models (Authors, Year).
- Make sure that symbols are well-defined with clear reference to their definitions. For example, in Equation (4), the undefined operator $\mathbb{I}_{\vec r\in\mathbb{S}}$ needs clarification. If it's an indicator function, use standard notation with a proper explanation. "Embed(•)" should be indicated more explicitly.
| 4
|
{
"annotators": [
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f",
"boda"
],
"labels": [
"5",
"5",
"5"
]
}
|
5
|
gold
| 195
|
3) mentioned above would become even more important. If the figures do not show results for untrained networks then please run the corresponding experiments and add them to the figures and Table 1. Clarify: Random data (Fig 3c). Was the network trained on random data, or do the dotted lines show networks trained on unaltered data, evaluated with random data? Clarify: Random data (Fig 3). Was the non-random data normalized or not (i.e. is the additional “unit-ball” noise small or large compared to the data). Ideally show some examples of the random data in the appendix.
|
ICLR_2021_1716
|
ICLR_2021
|
Results are on MNIST only. Historically it’s often been the case that strong results on MNIST would not carry over to more complex data. Additionally, at least some core parts of the analysis does not require training networks (but could even be performed e.g. with pre-trained classifiers on ImageNet) - there is thus no severe computational bottleneck, which is often the case when going beyond MNIST.
The “Average stochastic activation diameter” is a quite crude measure and results must thus be taken with a (large) grain of salt. It would be good to perform some control experiments and sanity checks to make sure that the measure behaves as expected, particularly in high-dimensional spaces.
The current paper reports the hashing effect and starts relating it to what’s known in the literature, and has some experiments that try to understand the underlying causes for the hashing effect. However, while some factors are found to have an influence on the strength of the effect, some control experiments are still missing (training on random labels, results on untrained networks, and an analysis of how the results change when starting to leave out more and more of the early layers).
Correctness Overall the methodology, results, and conclusions seem mostly fine (I’m currently not very convinced by the “stochastic activation diameter” and would not read too much into the corresponding results). Additionally some claims are not entirely supported (in fullest generality), based on the results shown, see comments for more on this.
Clarity The main idea is well presented and related literature is nicely cited. However, some of the writing is quite redundant (some parts of the intro appear as literal copies later in the text). Most importantly the writing in some parts of the manuscript seems quite rushed with quite a few typos and some sentences/passages that could be rephrased for more fluent reading.
Improvements (that would make me raise my score) / major issues (that need to be addressed)
Experiments on more complex datasets.
One question that is currently unresolved is: is the hashing effect mostly attributed to early layer activations? Ultimately, a high-accuracy classifier will “lump together” all datapoints of a certain class when looking at the network output only. The question is whether this really happens at the very last layer or already earlier in the network. Similarly, when considering the input to the network (the raw data) the hashing effect holds since each data-point is unique. It is conceivable that the first layer activations only marginally transform the data in which case it would be somewhat trivially expected to see the hashing effect (when considering all activations simultaneously). However that might not explain e.g. the K-NN results. I think it would be very insightful to compute the redundancy ratio layer-wise and/or when leaving out more and more of the early layer activations (i.e. more and more rows of the activation pattern matrix). Additionally it would be great to see how this evolves over time, i.e. is the hashing effect initially mostly localized in early layers and does it gradually shape deeper activations over training? This would also shed some light on the very important issue of how a network that maps each (test-) data-point to a unique pattern generalize well?
Another unresolved question is whether it’s mostly the structure of the input-data or the labels driving the organization of the hashed space? The random data experiments answers this partially. Additionally it would be interesting to see what happens when (i) training with random data, (ii) training with random labels - is the hashing effect still there, does the K-NN classification still work?
Clarify: Does Fig 3c and 4a show results for untrained networks? I.e. is the redundancy ratio near 0 for training, test and random data in an untrained network? I would not be entirely surprised by that (a “reservoir effect”) but if that’s the case that should be commented/discussed in the paper, and improvement 3) mentioned above would become even more important. If the figures do not show results for untrained networks then please run the corresponding experiments and add them to the figures and Table 1.
Clarify: Random data (Fig 3c). Was the network trained on random data, or do the dotted lines show networks trained on unaltered data, evaluated with random data?
Clarify: Random data (Fig 3). Was the non-random data normalized or not (i.e. is the additional “unit-ball” noise small or large compared to the data). Ideally show some examples of the random data in the appendix.
P3: “It is worth noting that the volume of boundaries between linear regions is zero” - is this still true for non-ReLU nonlinearities (e.g. sigmoids)? If not what are the consequences (can you still easily make the claims on P1: “This linear region partition can be extended to the neural networks containing smooth activations”)? Otherwise please rephrase the claims to refer to ReLU networks only.
I disagree that model capacity is well measured by layer width. Please use the term ‘model-size’ instead of ‘model-capacity’ throughout the text. Model capacity is a more complex concept that is influenced by regularizers and other architectural properties (also note that the term capacity has e.g. a well-defined meaning in information theory, and when applied to neural networks it does not simply correspond to layer-width).
Sec 5.4: I disagree that regularization “has very little impact” (as mentioned in the abstract and intro). Looking at the redundancy ratio for weight decay (unfortunately only shown in the appendix) one can clearly see a significant and systematic impact of the regularizer towards higher redundancy ratios (as theoretically expected) for some networks (I guess the impact is stronger for larger networks, unfortunately Fig 8 in the appendix does not allow to precisely answer which networks are which).
Minor comments A) Formally define what “well-trained” means. The term is used quite often and it is unclear whether it simply means converged, or whether it refers to the trained classifier having to have a certain performance.
B) There is quite an extensive body of literature (mainly 90s and early 2000s) on “reservoir effects” in randomly initialized, untrained networks (e.g. echo state networks and liquid state machines, however the latter use recurrent random nets). Perhaps it’s worth checking that literature for similar results.
C) Remark 1: is really only the training distribution meant, i.e. without the test data, or is it the unaltered data generating distribution (i.e. without unit-ball noise)?
D) Is the red histogram in Fig 3a and 3b the same (i.e. does Fig 3b use the network trained with 500 epochs)?
E) P2 - Sufficiently-expressive regime: “This regime involves almost all common scenarios in the current practice of deep learning”. This is a bit of a strong claim which is not fully supported by the experiments - please tone it down a bit. It is for instance unclear whether the effect holds for non-classification tasks, and variational methods with strong entropy-based regularizers, or Dropout, ...
F) P2- The Rosenblatt 1961 citation is not entirely accurate, MLP today typically only loosely refers to the original Perceptron (stacked into multiple-layers), most notably the latter is not trained via gradient backpropagation. I think it’s fine to use the term MLP without citation, or point out that MLP refers to a multi-layer feedforward network (trained via backprop).
G) First paragraph in Sec. 4 is very redundant with the first two bullet points on P2 (parts of the text are literally copied). This is not a good writing style.
H) P4 - first bullet point: “Generally, a larger redundancy ratio corresponds a worse encoding property.”. This is a quite hand-wavy statement - “worse” with respect to what? One could argue that for instance for good generalization high redundancy could be good.
I) Fig 3: “10 epochs (red) and 500 epochs (blue),” does not match the figure legend where red and blue are swapped.
J) Fig 3: Panel b says “Rondom” data.
K) Should the x-axis in Fig 3c be 10^x where x is what’s currently shown on the axis? (Similar to how 4a is labelled?)
L) Some typos P2: It is worths noting P2: By contrast, our the partition in activation hash phase chart characerizes goodnessof-hash. P3: For the brevity P3: activation statue
| 4
|
{
"annotators": [
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f",
"boda"
],
"labels": [
"5",
"5",
"5"
]
}
|
5
|
gold
| 198
|
- The improvement over previous methods is small, about 0.2%-1%. Also the results in Table 1 and Fig.5 don't report the mean and standard deviation, and whether the difference is statistically significant is hard to know. I will suggest to repeat the experiments and conduct statistical significance analysis on the numbers. Thus, due to the limited novelty and marginal improvement, I suggest to reject the paper.
|
NIPS_2018_985
|
NIPS_2018
|
Weakness: - One drawback is that the idea of dropping a spatial region in training is not new. Cutout [22] and [a] have been explored this direction. The difference towards previous dropout variants is marginal. [a] CVPR'17. A-Fast-RCNN: Hard Positive Generation via Adversary for Object Detection. - The improvement over previous methods is small, about 0.2%-1%. Also the results in Table 1 and Fig.5 don't report the mean and standard deviation, and whether the difference is statistically significant is hard to know. I will suggest to repeat the experiments and conduct statistical significance analysis on the numbers. Thus, due to the limited novelty and marginal improvement, I suggest to reject the paper.
| 4
|
{
"annotators": [
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f",
"boda"
],
"labels": [
"5",
"5",
"5"
]
}
|
5
|
gold
| 202
|
1. The task setup is not described clearly. For example, which notes in the EHR (only the current admission or all previous admissions) do you use as input and how far away are the outcomes from the last note date?
|
ARR_2022_209_review
|
ARR_2022
|
1. The task setup is not described clearly. For example, which notes in the EHR (only the current admission or all previous admissions) do you use as input and how far away are the outcomes from the last note date?
2. There isn't one clear aggregation strategy that gives consistent performance gains across all tasks. So it is hard for someone to implement this approach in practice.
1. Experimental setup details: Can you explain how you pick which notes from the patient's EHR do you use an input and how far away are the outcomes from the last note date? Also, how do you select the patient population for the experiments? Do you use all patients and their admissions for prediction? Is the test set temporally split or split according to different patients?
2. Is precision more important or recall? You seem to consider precision more important in order to not raise false alarms. But isn't recall also important since you would otherwise miss out on reporting at-risk patients?
3. You cannot refer to appendix figures in the main paper (line 497). You should either move the whole analysis to appendix or move up the figures.
4. How do you think your approach would compare to/work in line with other inputs such as structured information? AUCROC seems pretty high in other models in literature.
5. Consider explaining the tasks and performance metrics when you call them out in the abstract in a little more detail. It's a little confusing now since you mention mortality prediction and say precision@topK, which isn't a regular binary classification metric.
| 4
|
{
"annotators": [
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f",
"boda"
],
"labels": [
"5",
"5",
"5"
]
}
|
5
|
gold
| 206
|
1. The proposed algorithm DMLCBO is based on double momentum technique. In previous works, e.g., SUSTAIN[1] and MRBO[2], double momentum technique improves the convergence rate to $\mathcal{\widetilde O}(\epsilon^{-3})$ while proposed algorithm only achieves the $\mathcal{\widetilde O}(\epsilon^{-4})$. The authors are encouraged to discuss the reason why DMLCBO does not achieve it and the theoretical technique difference between DMLCBO and above mentioned works.
|
K98byXpOpU
|
ICLR_2024
|
1. The proposed algorithm DMLCBO is based on double momentum technique. In previous works, e.g., SUSTAIN[1] and MRBO[2], double momentum technique improves the convergence rate to $\mathcal{\widetilde O}(\epsilon^{-3})$ while proposed algorithm only achieves the $\mathcal{\widetilde O}(\epsilon^{-4})$. The authors are encouraged to discuss the reason why DMLCBO does not achieve it and the theoretical technique difference between DMLCBO and above mentioned works.
2. In the experimental part, the author only shows the results of DMLCBO in early time, it will be more informative to provide results in the later steps.
3. In Table 3, DMLCBO exhibits higher variance compared with other baselines in MNIST datasets, the authors are encouraged to discuss more experimental details about it and explain the behind reason.
[1] A Near-Optimal Algorithm for Stochastic Bilevel Optimization via Double-Momentum
[2] Provably Faster Algorithms for Bilevel Optimization
| 4
|
{
"annotators": [
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f",
"boda"
],
"labels": [
"5",
"5",
"5"
]
}
|
5
|
gold
| 207
|
3. The approach section is missing in the main paper. The reviewer did go through the “parallelization descriptions” in the supplementary material but the supplementary should be used more like additional information and not as an extension to the paper as it is. Timothy Nguyen, Zhourong Chen, and Jaehoon Lee. Dataset meta-learning from kernel ridge-regression. In International Conference on Learning Representations, 2021. Update: Please see my comment below. I have increased the score from 3 to 5.
|
NIPS_2021_386
|
NIPS_2021
|
1. It is unclear if this proposed method will lead to any improvement for hyper-parameter search or NAS kind of works for large scale datasets since even going from CIFAR-10 to CIFAR-100, the model's performance reduced below prior art (if #samples are beyond 1). Hence, it is unlikely that this will help tasks like NAS with ImageNet dataset. 2. There is no actual new algorithmic or research contribution in this paper. The paper uses the methods of [Nguyen et al., 2021] directly. The only contribution seems to be running large-scale experiments of the same methods. However, compared to [Nguyen et al., 2021], it seems that there are some qualitative differences in the obtained images as well (lines 173-175). The authors do not clearly explain what these differences are, or why there are any differences at all (since the approach is identical). The only thing reviewer could understand is that this is due to ZCA preprocessing which does not sound like a major contribution. 3. The approach section is missing in the main paper. The reviewer did go through the “parallelization descriptions” in the supplementary material but the supplementary should be used more like additional information and not as an extension to the paper as it is.
Timothy Nguyen, Zhourong Chen, and Jaehoon Lee. Dataset meta-learning from kernel ridge-regression. In International Conference on Learning Representations, 2021.
Update: Please see my comment below. I have increased the score from 3 to 5.
| 4
|
{
"annotators": [
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f",
"boda"
],
"labels": [
"5",
"5",
"5"
]
}
|
5
|
gold
| 208
|
35. No.3. 2021. Competing dynamic-pruning methods are kind of out-of-date. More recent works should be included. Only results on small scale datasets are provided. Results on large scale datasets including ImageNet should be included to further verify the effectiveness of the proposed method.
|
ICLR_2023_1599
|
ICLR_2023
|
of the proposed method are listed as below:
There are two key components of the method, namely, the attention computation and learn-to-rank module. For the first component, it is a common practice to compute importance using SE blocks. Therefore, the novelty of this component is limited.
Some important SOTAs are missing and some of them as below outperform the proposed method: (1) Ding, Xiaohan, et al. "Resrep: Lossless cnn pruning via decoupling remembering and forgetting." Proceedings of the IEEE/CVF International Conference on Computer Vision. 2021. (2) Li, Bailin, et al. "Eagleeye: Fast sub-net evaluation for efficient neural network pruning." European conference on computer vision. Springer, Cham, 2020. (3) Ruan, Xiaofeng, et al. "DPFPS: dynamic and progressive filter pruning for compressing convolutional neural networks from scratch." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 35. No. 3. 2021.
Competing dynamic-pruning methods are kind of out-of-date. More recent works should be included.
Only results on small scale datasets are provided. Results on large scale datasets including ImageNet should be included to further verify the effectiveness of the proposed method.
| 4
|
{
"annotators": [
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f",
"boda"
],
"labels": [
"5",
"5",
"5"
]
}
|
5
|
gold
| 224
|
1. Although the paper argue that proposed method finds the flat minima, the analysis about flatness is missing. The loss used for training base model is the averaged loss for the noise injected models, and the authors provided convergence analysis on this loss. However, minimizing the averaged loss across the noise injected models does not ensure the flatness of the minima. So, to claim that the minima found by minimizing the loss in Eq (3), the analysis on the losses of the noise-injected models after training is required.
|
NIPS_2021_121
|
NIPS_2021
|
Weakness] 1. Although the paper argue that proposed method finds the flat minima, the analysis about flatness is missing. The loss used for training base model is the averaged loss for the noise injected models, and the authors provided convergence analysis on this loss. However, minimizing the averaged loss across the noise injected models does not ensure the flatness of the minima. So, to claim that the minima found by minimizing the loss in Eq (3), the analysis on the losses of the noise-injected models after training is required. 2. In Eq (4), the class prototypes before and after injecting noise are utilized for prototype fixing regularization. However, this means that F2M have to compute the prototypes of the base class every time the noise is injected: M+1 times for each update. Considering the fact that there are many classes and many samples for the base classes, this prototype fixing is computationally inefficient. If I miss some details about the prototype fixing, please fix my misunderstanding in rebuttal. 3. Analysis on the sampling times M
and noise bound value b
is missing. These values decide the flat area around the flat minima, and the performance would be affected by theses value. However, there is no analysis on M and b
in the main paper nor the appendix. Moreover, the exact value M
used for the experiments is not reported. 4. Comparison with single session incremental few-shot learning is missing. Like [42] in the main paper, there are some meta-learning based single session incremental FSL methods are being studied. Although this paper targets on multi-session incremental FSL with different setting and different dataset split, it would be more informative to compare the proposed F2M with that kind of methods, considering that the idea of finding flat minima seems valuable for the single session incremental few-shot learning task too.
There is a typo in Table 2 – the miniImageNet task is 5-way, but it is written as 10-way.
Post Rebuttal
Reviewer clarified the confusing parts of the paper, and added useful analysis during rebuttal. Therefore, I raise my score to 6.
| 4
|
{
"annotators": [
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f",
"boda"
],
"labels": [
"5",
"5",
"5"
]
}
|
5
|
gold
| 229
|
- The text inside the figure and the labels are too small to read without zooming. This text should be roughly the same size as the manuscript text.
|
ICLR_2023_1765
|
ICLR_2023
|
weakness, which are summarized in the following points:
Important limitations of the quasi-convex architecture are not addressed in the main text. The proposed architecture can only represent non-negative functions, which is a significant weakness for regression problems. However, this is completed elided and could be missed by the casual reader.
The submission is not always rigorous and some of the mathematical developments are unclear. For example, see the development of the feasibility algorithm in Eq. 4 and Eq. 5. Firstly, t ∈ R while y , f ( θ ) ∈ R n
, where n
is the size of the training set, so that the operation y − t − f ( θ )
is not well-defined. Moreover, even if y , f ( θ ) ∈ R
, the inequality ψ t ( θ ) ≤ 0 implies l ( θ ) ≤ t 2 / 2
, rather than ( θ ) ≤ t
. Since, in general, the training problem will be defined for y ∈ R n
, the derivations in the text should handle this general case.
The experiments are fairly weak and do not convince me that the proposed models have sufficient representation power to merit use over kernel methods and other easy-to-train models. The main issue here is the experimental evaluation does not contain a single standard benchmark problem nor does it compare against standard baseline methods. For example, I would really have liked to see regression experiments on several UCI datsets with comparisons against kernel regression, two-layer ReLU networks, etc. Although boring, such experiments establish a baseline capacity for the quasi-concave networks; this is necessary to show they are "reasonable". The experiments as given have several notable flaws:
Synthetic dataset: This is a cute synthetic problem, but obviously plays to the strength of the quasi-concave models. I would have preferred to see a synthetic problem for which was noisy with non piece-wise linear relationship.
Contour Detection Dataset: It is standard to report the overall test ODS, instead of reporting it on different subgroups. This allows the reader to make a fair overall comparison between the two methods.
Mass-Damper System Datasets: This is a noiseless linear regression problem in disguise, so it's not surprising that quasi-concave networks perform well.
Change-point Detection: Again, I would really have rather seen some basic benchmarks like MNIST before moving on to novel applications like detecting changes in data distribution.
Minor Comments
Introduction: - The correct reference for SGD is the seminal paper by Robbins and Monro [1]. - The correct reference for backpropagation is Rumelhart et al. [2]
- "Issue 1: Is non-convex deep neural networks always better?": "is" should be "are". - "While some experiments show that certain local optima are equivalent and yield similar learning performance" -- this should be supported by a reference. - "However, the derivation of strong duality in the literature requires the planted model assumption" --- what do you mean by "planted model assumption"? The only necessary assumption for these works is that the shallow network is sufficiently wide.
Section 4: - "In fact, suppose there are m weights, constraining all the weights to be non-negative will result in only 1 / 2 m
representation power." -- A statement like this only makes sense under some definition of "representation power". For example, it is not obvious how non-negativity constraints affect the underlying hypothesis class (aside from forcing it to contain only non-negative functions), which is the natural notion of representation power. - Equation 3: There are several important aspects of this model which should be mentioned explicitly in the text. Firstly, it consists of only one neuron; this is obvious from the notation, but should be stated as well. Secondly, it can only model non-negative functions. This is a strong restriction and should be discussed somewhere. - "Among these operations, we choose the minimization procedure because it is easy to apply and has a simple gradient." --- the minimization operator may produce a non-smooth function, which does not admit a gradient everywhere. Nor is it guaranteed to have a subgradient since the negative function only quasi-convex, rather than convex. - "... too many minimization pooling layers will damage the representation power of the neural network" --- why? Can the authors expand on this observation?
Section 5: - "... if we restrict the network output to be smaller than the network labels, i.e., f ( θ ) ≤ y
" --- note that this observation requires y ≥ 0
, which does not appear to be explicitly mentioned. - What method is being used to solve the convex feasibility problem in Eq. (5)? I cannot find this stated anywhere.
Figure 6: - Panel (b): "conveyers" -> "converges".
Figure 7: - The text inside the figure and the labels are too small to read without zooming. This text should be roughly the same size as the manuscript text. - "It could explain that the classification accuracy of QCNN (94.2%) outperforms that of deep networks (92.7%)" --- Is this test accuracy, or training accuracy? I assume this is the test metric on the hold-out set, but the text should state this clearly. References
[1] Robbins, Herbert, and Sutton Monro. "A stochastic approximation method." The annals of mathematical statistics (1951): 400-407.
[2] Rumelhart, David E., Geoffrey E. Hinton, and Ronald J. Williams. "Learning representations by back-propagating errors." nature 323.6088 (1986): 533-536.
| 4
|
{
"annotators": [
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f",
"boda"
],
"labels": [
"5",
"5",
"5"
]
}
|
5
|
gold
| 230
|
* The model is somewhat complicated and its presentation in section 4 requires careful reading, perhaps with reference to the supplement. If possible, try to improve this presentation. Replacing some of the natural language description with notation and adding breakout diagrams showing the attention mechanisms might help.
|
NIPS_2017_104
|
NIPS_2017
|
---
There aren't any major weaknesses, but there are some additional questions that could be answered and the presentation might be improved a bit.
* More details about the hard-coded demonstration policy should be included. Were different versions of the hard-coded policy tried? How human-like is the hard-coded policy (e.g., how a human would demonstrate for Baxter)? Does the model generalize from any working policy? What about a policy which spends most of its time doing irrelevant or intentionally misleading manipulations? Can a demonstration task be input in a higher level language like the one used throughout the paper (e.g., at line 129)?
* How does this setting relate to question answering or visual question answering?
* How does the model perform on the same train data it's seen already? How much does it overfit?
* How hard is it to find intuitive attention examples as in figure 4?
* The model is somewhat complicated and its presentation in section 4 requires careful reading, perhaps with reference to the supplement. If possible, try to improve this presentation. Replacing some of the natural language description with notation and adding breakout diagrams showing the attention mechanisms might help.
* The related works section would be better understood knowing how the model works, so it should be presented later.
| 4
|
{
"annotators": [
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f",
"boda"
],
"labels": [
"5",
"5",
"5"
]
}
|
5
|
gold
| 234
|
1. Symbols are a little bit complicated and takes a lot of time to understand.
|
NIPS_2018_461
|
NIPS_2018
|
1. Symbols are a little bit complicated and takes a lot of time to understand. 2. The author should probably focus more on the proposed problem and framework, instead of spending much space on the applications. 3. No conclusion section Generally I think this paper is good, but my main concern is the originality. If this paper appears a couple years ago, I would think that using meta-learning to solve problems is a creative idea. However, for now, there are many works using meta-learning to solve a variety of tasks, such as in active learning and reinforcement learning. Hence, this paper seems not very exciting. Nevertheless, deciding the number of clusters and selecting good clustering algorithms are still useful. Quality: 4 of 5 Clarity: 3 of 5 Originality: 2 of 5 Significance: 4 of 5 Typo: Line 240 & 257: Figure 5 should be Figure 3.
| 4
|
{
"annotators": [
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f",
"boda"
],
"labels": [
"1",
"1",
"1"
]
}
|
1
|
gold
| 236
|
1. The introduction to orthogonality in Part 2 could be more detailed.
|
oKn2eMAdfc
|
ICLR_2024
|
1. The introduction to orthogonality in Part 2 could be more detailed.
2. No details on how the capsule blocks are connected to each other.
3. The fourth line of Algorithm 1 does not state why the flatten operation is performed.
4.The presentation of the α-enmax function is not clear.
5. Eq. (4) does not specify why BatchNorm is used for scalars (L2-norm of sj).
6. The proposed method was tested on relatively small datasets, so that the effectiveness of the method was not well evaluated.
| 4
|
{
"annotators": [
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f",
"boda"
],
"labels": [
"3",
"3",
"3"
]
}
|
3
|
gold
| 240
|
1) The proposed methods - contrastive training objective and contrastive search - are two independent methods that have little inner connection on both the intuition and the algorithm.
|
NIPS_2022_2315
|
NIPS_2022
|
Weakness: 1) The proposed methods - contrastive training objective and contrastive search - are two independent methods that have little inner connection on both the intuition and the algorithm. 2) The justification for isotropic representation and contractive search could be more solid.
| 4
|
{
"annotators": [
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f",
"boda"
],
"labels": [
"1",
"1",
"1"
]
}
|
1
|
gold
| 242
|
- Similar analyses are already present in prior works, although on a (sometimes much) smaller scale, and then the results are not particularly surprising. For example, the robustness of CIFAR-10 models on distributions shifts (CIFAR-10.1, CINIC-10, CIFAR-10-C, which are also included in this work) was studied on the initial classifiers in RobustBench (see [Croce et al. (2021)](https://arxiv.org/abs/2010.09670)), showing a similar linear correlation with ID robustness. Moreover, [A, B] have also evaluated the robustness of adversarially trained models to unseen attacks.
|
RnYd44LR2v
|
ICLR_2024
|
- Similar analyses are already present in prior works, although on a (sometimes much) smaller scale, and then the results are not particularly surprising. For example, the robustness of CIFAR-10 models on distributions shifts (CIFAR-10.1, CINIC-10, CIFAR-10-C, which are also included in this work) was studied on the initial classifiers in RobustBench (see [Croce et al. (2021)](https://arxiv.org/abs/2010.09670)), showing a similar linear correlation with ID robustness. Moreover, [A, B] have also evaluated the robustness of adversarially trained models to unseen attacks.
- A central aspect of evaluating adversarial robustness is the attacks used to measure it. In the paper, this is described with sufficient details only in the appendix. In particular for the non $\ell_p$-threat models I think it would be important to discuss the strength (e.g. number of iterations) of the attacks used, since these are not widely explored in prior works.
[A] https://arxiv.org/abs/1908.08016
[B] https://arxiv.org/abs/2105.12508
| 4
|
{
"annotators": [
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f",
"boda"
],
"labels": [
"1",
"1",
"1"
]
}
|
1
|
gold
| 245
|
1. It is not very clear how exactly is the attention module attached to the backbone ResNet-20 architecture when performing the search. How many attention modules are used? Where are they placed? After each block? After each stage? It would be good to clarify this.
|
NIPS_2020_125
|
NIPS_2020
|
1. It is not very clear how exactly is the attention module attached to the backbone ResNet-20 architecture when performing the search. How many attention modules are used? Where are they placed? After each block? After each stage? It would be good to clarify this. 2. Similar to above, it would be good to provide more details of how the attention modules are added to tested architectures. I assume they are added following the SE paper but would be good to clarify. 3. Related to above, how is the complexity of the added module controlled? Is there a tunable channel weight similar to SE? It would be good to clarify this. 4. In Table 3, the additional complexity of the found module is ~5-15% in terms of parameters and flops. It is not clear if this is actually negligible. Would be good to perform comparisons where the complexity matches more closely. 5. In Table 3, it seems that the gains are decreasing for larger models. It would be good to show results with larger and deeper models (ResNet-101 and ResNet-152) to see if the gains transfer. 6. Similar to above, it would be good to show results for different model types (e.g. ResNeXt or MobileNet) to see if the module transfer to different model types. All current experiments use ResNet models. 7. It would be good to discuss and report how the searched module affect the training time, inference time, and memory usage (compared to vanilla baselines and other attention modules). 8. It would be interesting to see the results of searching for the module using a different backbone (e.g. ResNet-56) or a different dataset (e.g. CIFAR-100) and compare both the performance and the resulting module. 9. The current search space for the attention module consists largely of existing attention operations as basic ops. It would be interesting to consider a richer / less specific set of operators.
| 4
|
{
"annotators": [
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f",
"boda"
],
"labels": [
"5",
"5",
"5"
]
}
|
5
|
gold
| 259
|
2). The proposed method looks stronger at high bitrate but close to the baselines at low bitrate. What is the precise bitrate range used for BD-rate comparison? Besides, a related work about implementing content adaptive algorithm in learned video compression is suggested for discussion or comparison: Guo Lu, et al., "Content Adaptive and Error Propagation Aware Deep Video Compression." ECCV 2020.
|
ICLR_2022_1522
|
ICLR_2022
|
Weakness:
The overall novelty seems limited since the instance-adaptive method is from existing work with no primary changes. Here are some main questions and concerns:
1). How many optimization steps are used to produce the final reported performance in Figure.1 as well as in some other figs and tables?
2). The proposed method looks stronger at high bitrate but close to the baselines at low bitrate. What is the precise bitrate range used for BD-rate comparison?
Besides, a related work about implementing content adaptive algorithm in learned video compression is suggested for discussion or comparison:
Guo Lu, et al., "Content Adaptive and Error Propagation Aware Deep Video Compression." ECCV 2020.
| 4
|
{
"annotators": [
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f",
"boda"
],
"labels": [
"5",
"5",
"5"
]
}
|
5
|
gold
| 260
|
- If I understand correctly, In Tables 1 and 2, the authors report the best results on the **dev set** with the hyper-parameter search and model selection on **dev set**, which is not enough to be convincing. I strongly suggest that the paper should present the **average** results on the **test set** with clearly defined error bars under different random seeds.
|
ARR_2022_59_review
|
ARR_2022
|
- If I understand correctly, In Tables 1 and 2, the authors report the best results on the **dev set** with the hyper-parameter search and model selection on **dev set**, which is not enough to be convincing. I strongly suggest that the paper should present the **average** results on the **test set** with clearly defined error bars under different random seeds. - Another concern is that the method may not be practical. In fine-tuning, THE-X firstly drops the pooler of the pre-trained model and replaces softmax and GeLU, then conducts standard fine-tuning. For the fine-tuned model, they add LayerNorm approximation and d distill knowledge from original LN layers. Next, they drop the original LN and convert the model into fully HE-supported ops. The pipeline is too complicated and the knowledge distillation may not be easy to control. - Only evaluating the approach on BERTtiny is also not convincing although I understand that there are other existing papers that may do the same thing. For example, a BiLSTM-CRF could yield a 91.03 F1-score and a BERT-base could achieve 92.8. Although computation efficiency and energy-saving are important, it is necessary to comprehensively evaluate the proposed approach.
- The LayerNorm approximation seems to have a non-negligible impact on the performances for several tasks. I think it is an important issue that is worth exploring.
- I am willing to see other reviews of this paper and the response of the authors. - Line #069: it possible -> it is possible?
| 4
|
{
"annotators": [
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f",
"boda"
],
"labels": [
"5",
"5",
"5"
]
}
|
5
|
gold
| 268
|
3. Insufficient ablation study on \alpha. \alpha is only set to 1e-4, 1e-1, 5e-1 in section 5.4 with a large gap between 1e-4 and 1e-1. The author is recommended to provide more values of \alpha, at least 1e-2 and 1e-3.
|
ICLR_2023_2396
|
ICLR_2023
|
1. Lack of the explanation about the importance and the necessity to design deep GNN models . In this paper, the author tries to address the issue of over-smoothing and build deeper GNN models. However, there is no explanation about why should we build a deep GNN model. For CNN, it could be built for thousands of layers with significant improvement of the performance. While for GNN, the performance decreases with the increase of the depth (shown in Figure 1). Since the deeper GNN model does not show the significant improvement and consumes more computational resource, the reviewer wonders the explanation of the importance and the necessity to design deep models. 2. The experimental results are not significantly improved compared with GRAND. For example, GRAND++-l on Cora with T=128 in Table 1, on Computers with T=16,32 in Table 2. Since the author claims that GRAND suffers from the over-smoothing issue while DeepGRAND significantly mitigates such issue, how to explain the differences between the theoretical and practical results, why GRAND performs better when T is larger? Besides, in Table 3, DeepGRAND could not achieve the best performance with 1/2 labeled on Citeseer, Pubmed, Computers and CoauthorCS dataset, which could not support the argument that DeepGRAND is more resilient under limited labeled training data. 3. Insufficient ablation study on \alpha. \alpha is only set to 1e-4, 1e-1, 5e-1 in section 5.4 with a large gap between 1e-4 and 1e-1. The author is recommended to provide more values of \alpha, at least 1e-2 and 1e-3. 4. Minor issues. The x label of Figure 2, Depth (T) rather than Time (T).
| 4
|
{
"annotators": [
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f",
"boda"
],
"labels": [
"5",
"5",
"5"
]
}
|
5
|
gold
| 269
|
6: How many topics were used? How did you get topic-word parameters for this "real" dataset? How big is the AG news dataset? Main paper should at least describe how many documents in train/test, and how many vocabulary words.
|
ICLR_2022_1872
|
ICLR_2022
|
I list 5 concerns here, with detailed discussion and questions for the authors below
W1: While theorems suggest "existence" of a linear transformation that will approximate the posterior, the actual construction procedure for the "recovered topic posterior" is unclear
W2: Many steps are difficult to understand / replicate from main paper
W3: Unclear what theorems can say about finite training sets
W4: Justification / intuition for Theorems is limited in the main paper
Responses to W1-W3 are most important for the rebuttal.
W1: Actual procedure for constructing the "recovered topic posterior" is unclear
In both synthetic and real experiments, the proposed self-supervised learning (SSL) method is used to produce a "recovered topic posterior" p( w x). However, the procedure used here is unclear... how do we estimate p( w x) using the learning function f(x)?
The theorems imply that a linear function exists with limited (or zero) approximation error for any chosen scalar summary of the doc-topic weights w. However, how such a linear function is constructed is unclear. The bottom of page four suggests that when t=1 and A is full rank, that "one can use the pseudoinverse of A to recover the posterior", however it seems (1) unclear what the procedure is in general and what its assumptions are, and (2) odd that the prior may not needed at all.
Can the authors clarify how to estimate the recovered topic posterior using the proposed SSL method?
W2: Many other steps are difficult to understand / replicate from main paper
Here's a quick list of questions on experimental steps I am confused about / would have trouble reproducing
For the toy experiments in Sec. 5:
Do you estimate the topic-word parameter A? Or assume the true value is given?
What is the format for document x provided as input to the neural networks that define f(x)? The top paragraph of page 7 makes it seem like you provide an ordered list of words. Wouldn't a bag-of-words count vector be a more robust choice?
How do you set t=1 (predict one word given others) but somehow also use "the last 6 words are chosen as the prediction target"?
How do you estimate the "recovered topic posterior" for each individual model (LDA, CTM, etc)? Is this also using HMC (which is used to infer the ground-truth posterior)?
Why use 2000 documents for the "pure" topic model but 500 in test set for other models? Wouldn't more complex models benefit from a larger test set?
For the real experiments in Sec. 6:
How many topics were used?
How did you get topic-word parameters for this "real" dataset?
How big is the AG news dataset? Main paper should at least describe how many documents in train/test, and how many vocabulary words.
W3: Unclear what theorems / methods can say about finite training sets
All the theorems seem to hold when considering terms that are expectations over a known distribution over observed-data x and missing-data y. However, in practical data analysis we do not know the true data generating distribution, we only have a finite training set.
I am wondering about this method's potential in practice for modest-size datasets. For the synthetic dataset with V=5000 (a modest vocabulary size), the experiments considered 0.72 million to 6 million documents, which seems quite large.
What practically must be true of the observed dataset for the presented methods to work well?
W4: Justification / intuition for Theorems is limited in the main paper
All 3 theorems in the main paper are presented without much intuition or justification about why they should be true, which I think limits their impact on the reader. (I'll try to wade thru the supplement, but did not have time before the review deadline).
Theorem 3 tries to give intuition for the t=1 case, but I think could be stronger: why should f(x) have an optimal form p ( y = v 1 x )
? Why should "these probabilities" have the form A E [ w x ]
? I know space is limited, but helping your reader figure things out a bit more explicitly will increase the impact.
Furthermore, the reader would benefit from understanding how tight the bounds in Theorem 4 are. Can we compute the bound quality for toy data and understand it more practically?
Detailed Feedback on Presentation
No need to reply to these in rebuttal but please do address as you see fit in any revision
Page 3:
"many topic models can be viewed"... should probably say "the generative process of many topic models can be viewed..."
the definition of A_ij is not quite right. I would not say "word i \in topic j", I would say "word i topic j". A word is not contained in a topic, Each word has a chance of being generated.
I'd really avoid writing Δ ( K )
and would just use Δ
throughout .... unclear why this needs to be a function of K
but the topic-word parameters (whose size also depends on K
) does not
Should we call the reconstruction objective a "partial reconstruction" or "masked reconstruction"? I'm used to reconstruction in an auto-encoder context, where the usual "reconstruction" objective is literally to recover all observed data, not a piece of observed data that we are pretending not to see
In Eq. 1, are you assuming an ordered or unordered representation of the words in x and y?
Page 4:
I would not reuse the variable y in both reconstruction and contrastive contexts. Find another variable. Same with theta.
Page 5:
I would use f ∗
to denote the exact minimizer, not just f
Figure 2 caption should clarify:
what is the takeaway for this figure? Does reader want to see low values? Does this figure suggest the approach is working as expected?
what procedure is used for the "recovered" posterior? Your proposed SSL method?
why does Pure have a non-monotonic trend as alpha gets larger?
| 4
|
{
"annotators": [
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f",
"boda"
],
"labels": [
"5",
"5",
"5"
]
}
|
5
|
gold
| 270
|
3. For evaluation, since the claim of this paper is to reduce exposure bias, training a discriminator on generations from the learned model is needed to confirm if it is the case, in a way similar to Figure 1. Note that it is different from Figure 4, since during training the discriminator is co-adapting with the generator, and it might get stuck at a local optimum.
|
NIPS_2020_1592
|
NIPS_2020
|
Major concerns: 1. While it is impressive that this work gets slightly better results than MLE, there are more hyper-parameters to tune, including mixture weight, proposal temperature, nucleus cutoff, importance weight clipping, MLE pretraining (according to appendix). I find it disappointing that so many tricks are needed. If you get rid of pretraining/initialization from T5/BART, would this method work? 2. This work requires MLE pretraining, while prior work "Training Language GANs from Scratch" does not. 3. For evaluation, since the claim of this paper is to reduce exposure bias, training a discriminator on generations from the learned model is needed to confirm if it is the case, in a way similar to Figure 1. Note that it is different from Figure 4, since during training the discriminator is co-adapting with the generator, and it might get stuck at a local optimum. 4. This work is claiming that it is the first time that language GANs outperform MLE, while prior works like seqGAN or scratchGAN all claim to be better than MLE. Is this argument based on the tradeoff between BLEU and self-BLEU from "language GANs falling short"? If so, Figure 2 is not making a fair comparison since this work uses T5/BART which is trained on external data, while previous works do not. What if you only use in-domain data? Would this still outperform MLE? Minor concerns: 5. This work only uses answer generation and summarization to evaluate the proposed method. While these are indeed conditional generation tasks, they are close to "open domain" generation rather than "close domain" generation such as machine translation. I think this work would be more convincing if it is also evaluated in machine translation which exhibits much lower uncertainties per word. 6. The discriminator accuracy of ~70% looks low to me, compared to "Real or Fake? Learning to Discriminate Machine from Human Generated Text" which achieves almost 90% accuracy. I wonder if the discriminator was not initialized with a pretrained LM, or is that because the discriminator used is too small? ===post-rebuttal=== The added scratch GAN+pretraining (and coldGAN-pretraining) experiments are fairer, but scratch GAN does not need MLE pretraining while this work does, and we know that MLE pretraining makes a big difference, so I am still not very convinced. My main concern is the existence of so many hyper-parameters/tricks: mixture weight, proposal temperature, nucleus cutoff, importance weight clipping, and MLE pretraining. I think some sensitivity analysis similar to scratch GAN's would be very helpful. In addition, rebuttal Figure 2 is weird: when generating only one word, why would cold GAN already outperform MLE by 10%? To me, this seems to imply that improvement might be due to hyper-parameter tuning.
| 4
|
{
"annotators": [
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f",
"boda"
],
"labels": [
"5",
"5",
"5"
]
}
|
5
|
gold
| 275
|
- The Related Work section is lacking details. The paragraph on long-context language models should provide a more comprehensive overview of existing methods and their limitations, positioning SSMs appropriately. This includes discussing sparse-attention mechanisms [1, 2], segmentation-based approaches [3, 4, 5], memory-enhanced segmentation strategies [6], and recursive methods [7] for handling very long documents.
|
NJUzUq2OIi
|
ICLR_2025
|
I found the proposed idea, experiments, and analyses conducted by the authors to be valuable, especially in terms of their potential impact on low-resource scenarios. However, for the paper to fully meet the ICLR standards, there are still areas that need additional work and detail. Below, I outline several key points for improvement. I would be pleased to substantially raise my scores if the authors address these suggestions and enhance the paper accordingly.
**General Feedback**
- I noticed that the title of the paper does not match the one listed on OpenReview.
- The main text should indicate when additional detailed discussions are deferred to the Appendix for better reader guidance. **Introduction**
- The Introduction lacks foundational references to support key claims. Both the second and third paragraphs would benefit from citations to strengthen the arguments. For instance, the statement: "This method eliminates the need for document chunking, *a common limitation in current retrieval systems that often results in loss of context and reduced accuracy*" needs a supporting citation to substantiate this point.
- The sentence: "Second, to be competitive with embedding approaches, a retrieval language model needs to be small" requires further justification. The authors should include in the paper a complexity analysis comparison discussing time and GPU memory consumption to support this assertion.
**Related Work**
- The sentence "Large Language Models are found to be inefficient processing long-context documents" should be rewritten for clarity, for example: "Large Language Models are inefficient when processing long-context documents."
- The statements "Transformer models suffer from quadratic computation during training and linear computation during inference" and "However, transformer-based models are infeasible to process extremely long documents due to their linear inference time" are incorrect. Transformers, as presented in "Attention is All You Need," scale quadratically in both training and inference.
- The statement regarding State Space Models (SSMs) having "linear scaling during training and constant scaling during inference" is inaccurate. SSMs have linear complexity for both training and inference. The term "constant scaling" implies no dependence on sequence length, which is incorrect.
- The Related Work section is lacking details. The paragraph on long-context language models should provide a more comprehensive overview of existing methods and their limitations, positioning SSMs appropriately. This includes discussing sparse-attention mechanisms [1, 2], segmentation-based approaches [3, 4, 5], memory-enhanced segmentation strategies [6], and recursive methods [7] for handling very long documents.
- Similarly, the paragraph on Retrieval-Augmented Generation should specify how prior works addressed different long document tasks. Examples include successful applications of RAG in long-document summarization [8, 9] and query-focused multi-document summarization [10, 11], which are closely aligned with the present work. **Figures**
- Figures 1 and 2 are clear but need aesthetic improvements to meet the conference's standard presentation quality.
**Model Architecture**
- The description "a subset of tokens are specially designated, and the classification head is applied to these tokens. In the current work, the classification head is applied to the last token of each sentence, giving sentence-level resolution" is ambiguous. Clarify whether new tokens are added to the sequence or if existing tokens (e.g., periods) are used to represent sentence ends.
**Synthetic Data Generation**
- The "lost in the middle" problem when processing long documents [12] is not explicitly discussed. Have the authors considered the position of chunks during synthetic data generation? Ablation studies varying the position and distance between linked chunks would provide valuable insights into Mamba’s effectiveness in addressing this issue.
- More details are needed regarding the data decontamination pipeline, chunk size, and the relative computational cost of the link-based method versus other strategies.
- The authors claim that synthetic data generation is computationally expensive but provide no supporting quantitative evidence. Information such as time estimates and GPU demand would strengthen this argument and assess feasibility.
- There is no detailed evaluation of the synthetic data’s quality. An analysis of correctness and answer factuality would help validate the impact on retrieval performance beyond benchmark metrics. **Training**
- This section is too brief. Consider merging it with Section 3, "Model Architecture," for a more cohesive presentation.
- What was the training time for the 130M model?
**Experimental Method**
- Fix minor formatting issues, such as adding a space after the comma in ",LVeval."
- Specify in Table 1 which datasets use free-form versus multiple-choice answers, including the number of answers and average answer lengths.
- Consider experimenting with GPT-4 as a retriever.
- Expand on "The accuracy of freeform answers is judged using GPT-4."
- Elaborate on the validation of the scoring pipeline, particularly regarding "0.942 macro F1." Clarify the data and method used for validation.
- Justify the selection of "50 sentences" for Mamba retrievers and explain chunk creation methods for embedding models. Did the chunks consist of 300 fixed-length segments, or was semantic chunking employed [3, 5]? Sentence-level embedding-based retrieval could be explored to align better with the Mamba setting.
- The assertion that "embedding models were allowed to retrieve more information than Mamba" implies an unfair comparison, but more context can sometimes degrade performance [12].
- Clarify the use of the sliding window approach for documents longer than 128k tokens, especially given the claim that Mamba could process up to 256K tokens directly. **Results**
- Remove redundancy in Section 7.1.2, such as restating the synthetic data generation strategies.
- Expand the ablation studies to cover different input sequence lengths during training and varying the number of retrieved sentences to explore robustness to configuration changes.
- Highlight that using fewer training examples (500K vs. 1M) achieved comparable accuracy (i.e., 59.4 vs. 60.0, respectively).
- Why not train both the 130M and 1.3B models on a dataset size of 500K examples, but compare using 1M and 400K examples, respectively? **Limitations**
- The high cost of generating synthetic training data is mentioned but lacks quantification. How computationally expensive is it in terms of time or resources? **Appendix**
- Note that all figures in Appendices B and C are the same, suggesting an error that needs correcting.
**Missing References**
[1] Longformer: The Long-Document Transformer. arXiv 2020.
[2] LongT5: Efficient Text-To-Text Transformer for Long Sequences. NAACL 2022.
[3] Semantic Self-Segmentation for Abstractive Summarization of Long Documents in Low-Resource Regimes. AAAI 2022.
[4] Summ^n: A Multi-Stage Summarization Framework for Long Input Dialogues and Documents. ACL 2022.
[5] Align-Then-Abstract Representation Learning for Low-Resource Summarization. Neurocomputing 2023.
[6] Efficient Memory-Enhanced Transformer for Long-Document Summarization in Low-Resource Regimes. Sensors 2023.
[7] Recursively Summarizing Books with Human Feedback. arXiv 2021.
[8] DYLE: Dynamic Latent Extraction for Abstractive Long-Input Summarization. ACL 2022.
[9] Towards a Robust Retrieval-Based Summarization System. arXiv 2024.
[10] Discriminative Marginalized Probabilistic Neural Method for Multi-Document Summarization of Medical Literature. ACL 2022.
[11] Retrieve-and-Rank End-to-End Summarization of Biomedical Studies. SISAP 2023.
[12] Lost in the Middle: How Language Models Use Long Contexts. TACL 2024.
| 4
|
{
"annotators": [
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f",
"boda"
],
"labels": [
"5",
"5",
"5"
]
}
|
5
|
gold
| 287
|
1)Less Novelty: The algorithm for construction of coresets itself is not novel. Existing coreset frameworks for classical k-means and (k,z) clusterings are extended to the kernelized setting.
|
ICLR_2022_2425
|
ICLR_2022
|
1)Less Novelty: The algorithm for construction of coresets itself is not novel. Existing coreset frameworks for classical k-means and (k,z) clusterings are extended to the kernelized setting. 2)Clarity: Since the coreset construction algorithm is built up on previous works, a reader without the background in literature on coresets would find it hard to understand why the particular sampling probabilities are chosen and why they give particular guarantees. It would be useful rewrite the algorithm preview and to give at least a bit of intuition on how the importance sampling scores are chosen and how they can give the coreset guarantees Suggestions:
In the experiment section, other than uniform sampling, it would be interesting to use some other classical k-means coreset as baselines for comparison.
Please highlight the technical challenges and contributions clearly when compared to coresets for classical k-means.
| 4
|
{
"annotators": [
"6740484e188a64793529ee77",
"6686ebe474531e4a1975636f",
"boda"
],
"labels": [
"1",
"1",
"1"
]
}
|
1
|
gold
| 294
|
End of preview. Expand
in Data Studio
RevUtil: Measuring the Utility of Peer Reviews for Authors
📚 Overview
Providing constructive feedback to authors is a key goal of peer review. To support research on evaluating and generating useful peer review comments, we introduce RevUtil, a dataset for measuring the utility of peer review feedback.
RevUtil focuses on four main aspects of review comments:
- Actionability – Can the author act on the comment?
- Grounding & Specificity – Is the comment concrete and tied to the paper?
- Verifiability – Can the statement be checked against the paper?
- Helpfulness – Does the comment assist the author in improving their work?
🧑🔬 RevUtil Human
- 1,430 review comments from real peer reviews.
- Each comment is annotated independently by three human raters.
- Labels are provided as
"gold"(3/3 agreement),"silver"(2/3), or"none"(no agreement).
Key columns:
| Column | Description |
|---|---|
paper_id |
ID of the reviewed paper |
venue |
Conference or journal name |
focused_review |
Full review (weakness + suggestion sections) |
review_point |
Individual review comment being evaluated |
id |
Unique ID for the review point |
batch |
Annotation batch/study identifier |
ASPECT |
Dictionary with annotators and their labels |
ASPECT_label |
Majority label (if available) |
ASPECT_label_type |
"gold", "silver", or "none" |
🚀 Usage
You can load the datasets directly via 🤗 Datasets:
from datasets import load_dataset
# Human annotations
human = load_dataset("boda/RevUtil_human")
# Synthetic annotations
synthetic = load_dataset("boda/RevUtil_synthetic")
📎 Citation
@inproceedings{sadallah-etal-2025-good,
title = "The Good, the Bad and the Constructive: Automatically Measuring Peer Review{'}s Utility for Authors",
author = {Sadallah, Abdelrahman and
Baumg{\"a}rtner, Tim and
Gurevych, Iryna and
Briscoe, Ted},
editor = "Christodoulopoulos, Christos and
Chakraborty, Tanmoy and
Rose, Carolyn and
Peng, Violet",
booktitle = "Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2025",
address = "Suzhou, China",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2025.emnlp-main.1476/",
doi = "10.18653/v1/2025.emnlp-main.1476",
pages = "28979--29009",
ISBN = "979-8-89176-332-6"
}
- Downloads last month
- 65