| paper_title (string, 12–156 chars) | paper_id (string, 10 chars) | conference (1 class) | review_id (string, 10 chars) | weakness_content (string, 10–3.03k chars) | perspective (7 classes) | rebuttal_content (string, 3–10.6k chars) | rebuttal_label (5 classes) |
|---|---|---|---|---|---|---|---|
Pricing with Contextual Elasticity and Heteroscedastic Valuation | zt8bb6vC4m | ICLR-2024 | d7eMHE2Gje | About the linear assumption on $x_t^\top \eta$, can this be generalized to some non-linear function of $x_t$? Also, when $x_t$ is stochastic, can the assumption of $x_t^\top \eta>0$ be relaxed to $E[x_t^\top \eta]>0$, where $E[\cdot]$ is the expectation over $x$? | Theory | As for the generalization of our linear price elasticity model (i.e. $\alpha=x_t^\top\eta^*$), we believe that our algorithm design and analysis are still applicable within a slight generalization from Euclidean space to *known* kernel spaces. As for the generalization from $x_t^\top\eta^*>0$ to $\mathbb{E}[x_t^\top\et... | DWC |
Pricing with Contextual Elasticity and Heteroscedastic Valuation | zt8bb6vC4m | ICLR-2024 | d7eMHE2Gje | Can the authors provide a real-world (or semi-real) data study on evaluating the performance of algorithms in real-life situations? | Experiments | We are actually motivated by real-world scenarios to consider a heteroscedastic setting where the price elasticity is feature-based. However, it is unfortunate that we are unable to have real-world evaluations of our algorithm, which requires either massive investments or confidential commercial-use data. On the one ha... | DWC |
Pricing with Contextual Elasticity and Heteroscedastic Valuation | zt8bb6vC4m | ICLR-2024 | d7eMHE2Gje | In terms of the presentation of simulation results, could the authors present log-log plots and compare them with the $1/2 \log T$ curve? It would be hard to see the regret order if they are not presented in this way. | Presentation | It’s a good catch that a log-log plot would better show the regret rate. In fact, our plots are indeed presented in log-log diagrams, and therefore a slope-$\alpha$ line indicates an $O(T^{\alpha})$ regret. To show the regret rate, we also plotted the linear asymptote of each regret curve. The curve of $\frac12\log T$ o... | DWC |
Pricing with Contextual Elasticity and Heteroscedastic Valuation | zt8bb6vC4m | ICLR-2024 | BPOPTf75MX | In my opinion, Ban and Keskin (2021) should be given more credit. As far as I know, Ban and Keskin (2021) is the first to consider the heterogeneous price elasticities which are formulated to be linear with context. At least when introducing the formulation, I think the paper should be cited and discussed more. | Novelty | 1. We have attributed our model to Ban and Keskin (2021), who introduced the generalized linear demand model with heterogeneous price elasticity (coefficient) for the first time, which well motivates a feature-based heteroscedasticity and how it affects the price-demand relationship. We also reduce a broadly adopted li... | CRP |
Pricing with Contextual Elasticity and Heteroscedastic Valuation | zt8bb6vC4m | ICLR-2024 | BPOPTf75MX | I understand that a known link function is a good starting point and a common practice. One direction that I think might further improve the paper is to consider (or at least discuss about) an unknown link function. The reason why I mention this point is that Fan et al. (2021) studies a problem with unknown noise distr... | Theory | 2, Thanks for your suggestions! Unfortunately, our algorithm is unable to be generalized to the online contextual pricing problem with linear valuation and unknown noise distribution that has been studied by Fan et al. (2023). Indeed, the problem becomes substantially harder when the noise distribution is unknown to th... | DWC |
Pricing with Contextual Elasticity and Heteroscedastic Valuation | zt8bb6vC4m | ICLR-2024 | 8rE7Vjlke7 | Although the proposed demand model extends existing models by considering the feature-dependent price elasticity, the proposed model and online algorithm still rely on linear forms of elasticity and valuation. Remember ICLR is a deep learning conference. A potentially more suitable treatment may be substituting the lin... | Theory | > W1. "ICLR is a deep learning conference" "replace linear with NTK"<br>While ICLR started as a deep-learning-focused conference, I believe it is now widely regarded as interchangeable with ICML and NeurIPS. The topic list in the 2024 Call for Papers seems very inclusive and we believe our paper is within scope. The AC and ... | SRP |
Pricing with Contextual Elasticity and Heteroscedastic Valuation | zt8bb6vC4m | ICLR-2024 | 8rE7Vjlke7 | As the authors mention in Ethic issues, personalized pricing may have fairness issues. Therefore, it is essential to discuss how to deal with the cases when we add some fairness regularization terms or fairness constraints to the optimization problem. | Theory | > W3. "fairness in personalized pricing", "fairness inducing regularization"<br>> W4. "the objective is purely the interest of the platform" "how does personalized pricing affect buyer well-being"<br>We agree these are important problems. However, our problem setting is *not* personalized pricing! The context $x_t$ descr... | DWC |
Pricing with Contextual Elasticity and Heteroscedastic Valuation | zt8bb6vC4m | ICLR-2024 | 8rE7Vjlke7 | Still about personalized pricing. As the objective is purely the interest of the platform, I would like to see discussions or experimental results on how the personalized pricing algorithm affects customer well-being metrics such as consumer surplus. | Evaluation | > W3. "fairness in personalized pricing", "fairness inducing regularization"<br>> W4. "the objective is purely the interest of the platform" "how does personalized pricing affect buyer well-being"<br>We agree these are important problems. However, our problem setting is *not* personalized pricing! The context $x_t$ descr... | DWC |
FABRIC: Personalizing Diffusion Models with Iterative Feedback | zsfrzYWoOP | ICLR-2024 | j955v9Me7D | Limited Expansion of Distribution: The method struggles to widen the distribution beyond the initial text-conditioned one provided by the model. | Evaluation | It is true that FABRIC does not necessarily expand the conditional distribution of generated images, especially when the feedback images are sampled from the same model, but this is generally the case for interventions that improve quality. Indeed, in order to improve quality it is necessary to constrain the distributi... | CRP |
FABRIC: Personalizing Diffusion Models with Iterative Feedback | zsfrzYWoOP | ICLR-2024 | j955v9Me7D | Feedback Loop Limitation: Since the feedback originates from the model's output, it creates a cyclical limitation where the model might only reinforce its existing biases. | Theory | Similar to the previous point, this is more a limitation of our experimental setup rather than a fundamental limitation of FABRIC. Indeed, it is possible to move the balance of the exploration-exploitation trade-off in the direction of the former with very simple extensions to FABRIC such as the addition of a retrieval... | DWC |
FABRIC: Personalizing Diffusion Models with Iterative Feedback | zsfrzYWoOP | ICLR-2024 | j955v9Me7D | Diversity Collapse: As the strength of the feedback and the number of feedback images increase, the diversity of the generated images tends to diminish. The images tend to converge towards a single mode that closely resembles the feedback images. | Evaluation | We agree with the reviewer that in our experimental setup FABRIC clearly suffers from diversity collapse. However, we would like to point out that this is not necessarily the case in a general setup. In order to automate the process of evaluation, we were only giving images from the ones generated during previous round... | DWC |
FABRIC: Personalizing Diffusion Models with Iterative Feedback | zsfrzYWoOP | ICLR-2024 | j955v9Me7D | Lack of Detailed Feedback: Users cannot specify which particular aspects of an image they appreciate or dislike. This restricts the model's ability to fine-tune its output based on detailed user preferences. | Evaluation | Even though we did not evaluate it experimentally, it is possible to provide a textual description in order to steer the feature extraction process. Namely, we use the null prompt for extracting attention features from the feedback images in our experiments. However, one may use an arbitrary prompt for each feedback im... | CRP |
FABRIC: Personalizing Diffusion Models with Iterative Feedback | zsfrzYWoOP | ICLR-2024 | J9uAQOZhMf | The weakness of the paper mainly lies in writing. | Writing | We thank the reviewer for the excellent summary of our contributions and for the concise, constructive feedback. We agree that certain sections of the writing could be improved and have uploaded a revised version of the paper. In particular, the methods section has been majorly overhauled, hopefully addressing your con... | CRP |
FABRIC: Personalizing Diffusion Models with Iterative Feedback | zsfrzYWoOP | ICLR-2024 | J9uAQOZhMf | It is better to incorporate more method descriptions, including model design and formulations in the main script instead of the appendix. | Presentation | We thank the reviewer for the excellent summary of our contributions and for the concise, constructive feedback. We agree that certain sections of the writing could be improved and have uploaded a revised version of the paper. In particular, the methods section has been majorly overhauled, hopefully addressing your con... | CRP |
FABRIC: Personalizing Diffusion Models with Iterative Feedback | zsfrzYWoOP | ICLR-2024 | J9uAQOZhMf | I'd like to accept this paper if the writing problem is addressed. | Writing | We thank the reviewer for the excellent summary of our contributions and for the concise, constructive feedback. We agree that certain sections of the writing could be improved and have uploaded a revised version of the paper. In particular, the methods section has been majorly overhauled, hopefully addressing your con... | CRP |
FABRIC: Personalizing Diffusion Models with Iterative Feedback | zsfrzYWoOP | ICLR-2024 | qVfDeKlstk | **Limited technical novelty**: While the proposed method is effective in incorporating user feedback, the extension to enabling 'iterative feedback' is rather naive, and the feedback is constrained to binary labels (which the author(s) have acknowledged as a limitation). It would be more interesting to explore more adv... | Novelty | While we agree that the core conditioning mechanism isn’t novel and has been proposed by the authors of ControlNet [1], the combination with weighted attention and CFG makes for a very versatile and flexible algorithm even beyond the scenarios that were evaluated experimentally. For example, real-valued ratings can be ... | CRP |
FABRIC: Personalizing Diffusion Models with Iterative Feedback | zsfrzYWoOP | ICLR-2024 | qVfDeKlstk | **Lack of human rating in a paper focused on iterative human feedback**: While the author(s) have used reasonable proxy to evaluate the effectiveness of the model in following human preferences, it would strengthen the paper if the author(s) can include some form of user study, given this papers' focus is in incorporat... | Experiments | We agree with the reviewer that studying human interaction with the system would certainly be insightful. Doing a user study was considered, but we ultimately decided against it due to the challenges involved with the design and execution of such a study. To illustrate: A naive study design would simply let the user tr... | DWC |
FABRIC: Personalizing Diffusion Models with Iterative Feedback | zsfrzYWoOP | ICLR-2024 | qVfDeKlstk | **Missing discussion to some prior work**: I believe the proposed method has some technical similarity to prompt-based image editing methods, such as instruct-pix2pix [1] and prompt2prompt. [2] While the proposed method is different in the types of feedback and preference investigated, it would be great if the author(s... | Novelty | These two papers, prompt2prompt and instruct-pix2pix, are indeed related to our work, as they use similar techniques but have different goals. We thank the reviewer for pointing this out and have added a paragraph on image editing to Section 5 (Related Work). | CRP |
FABRIC: Personalizing Diffusion Models with Iterative Feedback | zsfrzYWoOP | ICLR-2024 | qVfDeKlstk | While the paper claims to outperform a supervised-learning baseline (HPS LoRA), it is unclear to me how does HPS relate to PickScore, as they both appear to measure human preference. Would the author(s) please clarify how might they relate to each other? As the models are evaluated on PickScore but LoRA-tuned on HPS. | Evaluation | The reviewer is correct that HPS and PickScore are very similar and solve the same task, which is human preference estimation. In fact, in the early phases of the project, we were using HPS as the main evaluation metric but decided to replace it with PickScore when that was published since it demonstrated superior accu... | CRP |
FABRIC: Personalizing Diffusion Models with Iterative Feedback | zsfrzYWoOP | ICLR-2024 | qVfDeKlstk | How does the method relate to/differ from prompt2prompt and instruct-pix2pix? As stated above, it would be helpful to systematically compare them (and other related prior work) in a table. | Novelty | We like the idea of adding a systematic comparison of methods which incorporate human feedback in the generation process of diffusion models, even beyond just the techniques using attention-injection, but unfortunately we couldn’t find space to include it in the paper. Instead, we’ll just attach it here and possibly ad... | CRP |
Learning energy-based models by self-normalising the likelihood | zrxlSviRqC | ICLR-2024 | 5KJg0MIfmE | This paper has a well-motivated idea and contains comprehensive theoretical derivation for understanding the key idea. However, as mentioned by the author, the NCE method is related, it would be nice to have a deeper theoretical connection and comparison with the NCE method. For now, the major comparison is shown by em... | Theory | Thank you for suggesting an investigation of the relationship between the NCE and SNL objectives. We found that the SNL loss resembles one of the generalisations of NCE proposed in a theoretical paper on NCE [2]. We will discuss this paper in the revised version. | SRP |
Learning energy-based models by self-normalising the likelihood | zrxlSviRqC | ICLR-2024 | 5KJg0MIfmE | As a novel learning method, it would be nice to have a practical learning algorithm to simplify and illustrate the main idea. | Reproducibility | We added an algorithm description for our method in appendix E. Thank you for the suggestions. | CRP |
Learning energy-based models by self-normalising the likelihood | zrxlSviRqC | ICLR-2024 | nToYmsL0jC | The sensitivity to the choice of proposal should be critical but it is only investigated in low-dimensional cases. | Experiments | We agree that the method will be sensitive to the proposal, especially when scaling it for image modelling. However, we feel that the work required for finding and tuning a correct proposal is a new avenue of work in itself. Indeed, all these papers are mostly focused on scaling existing methods by finding and tuning a... | DWC |
Learning energy-based models by self-normalising the likelihood | zrxlSviRqC | ICLR-2024 | nToYmsL0jC | Can you provide more real-world experiments? For instance in generative modeling (without the VAE component) or out-of-distribution detection. | Experiments | We agree that the method will be sensitive to the proposal, especially when scaling it for image modelling. However, we feel that the work required for finding and tuning a correct proposal is a new avenue of work in itself. Indeed, all these papers are mostly focused on scaling existing methods by finding and tuning a... | DWC |
Learning energy-based models by self-normalising the likelihood | zrxlSviRqC | ICLR-2024 | nToYmsL0jC | In [4], the authors give a very similar result to your Theorem 3.1 but for NCE. Are there more theoretical comparisons to be drawn against NCE? | Theory | The paper you mention also provides more theoretical insights into the loss landscape for NCE. Conducting such a study for SNL would require substantial work, but it is an interesting new avenue that we will integrate into a revised version of the paper. Thank you for suggesting an investigation of the relat... | SRP |
Learning energy-based models by self-normalising the likelihood | zrxlSviRqC | ICLR-2024 | nToYmsL0jC | As mentioned in the weaknesses, I think it would be nice to compare SNL against MCMC-based methods (at least Langevin based) with apple-to-apple computational budgets. | Experiments | Due to the time required to implement and tune MCMC-based methods, we will provide a comparison of SNL with MCMC-based methods using similar computational budgets (i.e. number of neural-network calls and time) upon acceptance or for a further iteration of the paper. | SRP |
Learning energy-based models by self-normalising the likelihood | zrxlSviRqC | ICLR-2024 | og13umrAL0 | From my own experience, the most challenging part when training the EBM is to get valid samples from the current fitted distribution to estimate the (gradient of) normalizing constant. Previous works try to solve this problem with different sampling techniques. While this work proposes a linear lower bound, it still ne... | Novelty | Our experience matches the one of reviewer CuDM, where the difficulty is to get good samples from the current fitted distribution. In MCMC-based methods training the EBM requires long chains or tricks (for instance, keeping a buffer in [1]) to avoid biased training. However, our method does not require sampling from th... | DWC |
Learning energy-based models by self-normalising the likelihood | zrxlSviRqC | ICLR-2024 | og13umrAL0 | The proposed algorithm introduces a variational parameter b, and it requires to update b together with the energy function iteratively. Then similar to the VAE case, whether there can be a mismatch between the estimate of b and the energy function $E_\theta(X)$. (Not sure whether the $\exp^{-b}$ term will make the trai... | Experiments | It is true that the quality of the gradient estimate depends on how close $b$ is to the normalization constant. This is clearly seen in equation (17) of the paper where we rewrite SNL gradients as the likelihood gradients with some negligible terms if $b$ verifies the aforementioned condition:<br>$$\begin{aligned}<br>... | DWC |
Learning energy-based models by self-normalising the likelihood | zrxlSviRqC | ICLR-2024 | og13umrAL0 | As also mentioned in Point 2, the modeled distributions in the experiments are too simple to be convincing to me. The modeled experiments are either unconditional distribution on toy data or with image input but only model the conditional distribution on some low dimensional label. The VAE experiment in 5.3 models bina... | Experiments | Though we agree that the experiments are low-dimensional, we disagree that they are not convincing enough. Indeed, to the best of our knowledge, this is the first time an EBM is used for density estimation (and actually provides an upper and lower bound of the likelihood) on the UCI datasets. These datasets usually req... | DWC |
Learning energy-based models by self-normalising the likelihood | zrxlSviRqC | ICLR-2024 | og13umrAL0 | The review of the EBM literature seems to be insufficient; may consider the following works:<br>[1] Improved contrastive divergence training of energy-based models.<br>[2] Learning energy-based models by diffusion recovery likelihood.<br>[3] A tale of two flows: Cooperative learning of langevin flow and normalizing flow toward energ... | Novelty | We added the recommended papers to the references; we want to thank the reviewer for their valuable suggestions. | CRP |
Get What You Want, Not What You Don't: Image Content Suppression for Text-to-Image Diffusion Models | zpVPhvVKXk | ICLR-2024 | eWML9SEZGd | I'm not totally convinced that semantics in padding tokens have so much impact. My own empirical experience is that the padding tokens usually have very small attention scores (=> close to 0 attention probabilities) compared to meaningful tokens, and thus their semantics, if any, add little to the image features. Thoug... | Experiments | **1. More systematic experiments for prompts with various lengths**<br>**1.1. Replacing meaningful tokens with padding tokens.**<br>We agree that the padding token contains less semantic information compared to meaningful tokens. However, we observe that the padding token contains small yet useful semantic information, a... | CRP |
Get What You Want, Not What You Don't: Image Content Suppression for Text-to-Image Diffusion Models | zpVPhvVKXk | ICLR-2024 | eWML9SEZGd | CLIPscores in Table 1 are a bit confusing. Are they the similarity between the images and negative prompts? | Evaluation | **4. The CLIPscores in Table 1 are a bit confusing, do they represent the similarity between the images and negative prompts?**<br>In our paper, Clipscore is a metric that evaluates the quality of a pair of a negative prompt and an edited image. We have updated it in our updated paper. | CRP |
Get What You Want, Not What You Don't: Image Content Suppression for Text-to-Image Diffusion Models | zpVPhvVKXk | ICLR-2024 | 9s9tSv1fLj | The diffusion model is a hot topic in the machine learning and computer vision community, and the differences should be further highlighted. | Novelty | **1. The diffusion model is a hot topic in the machine learning and computer vision community, and further highlighting its differences is essential**<br>Indeed, diffusion models are a hot topic, and one of the most impactful applications is their usage for text-guided image editing. The ability to create realistic image... | DWC |
Get What You Want, Not What You Don't: Image Content Suppression for Text-to-Image Diffusion Models | zpVPhvVKXk | ICLR-2024 | 9s9tSv1fLj | With this methodology, we can remove subjects from an image or add subjects to it. Is it possible to change one subject to another in one go? For example, can we change the “toothbrush” in the “Girl holding toothbrush” image to a “pen”? | Experiments | **2. Is it possible to change one subject to another in one go?**<br>Subject replacement is a common task in various image editing methods [1-4]. We can edit an image by replacing one subject with another using only the prompt (see Appendix F, Fig. 31 (the second, fourth, and sixth columns)). We replace the text of the edi... | SRP |
Get What You Want, Not What You Don't: Image Content Suppression for Text-to-Image Diffusion Models | zpVPhvVKXk | ICLR-2024 | 7BLwxhjWr5 | From the algorithmic perspective, both improvement points are existing methods, and thus lack a certain level of novelty. | Novelty | $\textbf{1. From the algorithmic perspective, both improvement points are existing methods and thus lack a certain level of novelty}$<br>We would like to stress that for efficient text-guided image generation, the usage of negative lexemes is very important. They are known to be essential for humans to precisely communic... | DWC |
Get What You Want, Not What You Don't: Image Content Suppression for Text-to-Image Diffusion Models | zpVPhvVKXk | ICLR-2024 | 7BLwxhjWr5 | In the comparative experiments, it would be beneficial to specifically list the time and memory consumption ratios of this method compared to other methods, as this is necessary for a more application-oriented task. | Evaluation | $\textbf{3. The time and memory consumption ratios of this method compared to other methods}$<br>In the following table, we report the time and memory consumption ratios. We randomly select 100 prompts and feed them into the SD model. Compared to the baselines, we need additional time and memory consumption. Note th... | CRP |
Get What You Want, Not What You Don't: Image Content Suppression for Text-to-Image Diffusion Models | zpVPhvVKXk | ICLR-2024 | 7BLwxhjWr5 | In the first phase, this method uses coefficients to adjust the size of the negative information matrix to suppress the expression of negative information. If the singular value decomposition method is not employed, but instead, the entire matrix is multiplied by an attenuation factor, how would that affect the image e... | Experiments | **4. If the singular value decomposition method is not employed but instead the entire matrix is multiplied by an attenuation factor, how would that affect the image editing results?**<br>We evaluate the advised method involving an attenuation factor. We experimentally observed that employing an attenuation factor (e.g.,... | DWC |
Get What You Want, Not What You Don't: Image Content Suppression for Text-to-Image Diffusion Models | zpVPhvVKXk | ICLR-2024 | VQJ9HlclBE | This work introduces some new matrix computations, such as the SVD in soft-weighted regularization and the attention map alignment in ITO. However, the authors do not discuss the additional computational overhead of these computations. | Evaluation | $\textbf{1. The additional computational cost of SWR and ITO}$<br>We randomly select 100 prompts and feed them into the SD model. As shown in the following Table, we report the average values for inference time (s/image) and GPU memory demand (GB), respectively. For a comparison of time and memory consumption ratios wit... | CRP |
AUTOPARLLM: GNN-Guided Automatic Code Parallelization using Large Language Models | znjaiy1Z9q | ICLR-2024 | rdbqGqqLeO | Instead, more interesting would be to include all OpenMP pragmas, loop level scheduling, architecture aware compute efficiencies with hardware aware scheduling strategies, etc. (for reference see [1]). Combining all of these non-trivial loop level parallelizations would be the right task for such large capacity LLMs. | Experiments | **Challenges of extending support for all OpenMP clauses:** Also, there is not much open-source code available for all the OpenMP pragmas; for example, pragmas like “simd” that offer vectorization and “target” that offer massive data parallelism through GPU-offloading are difficult to find. Hence extending support for a... | DWC |
AUTOPARLLM: GNN-Guided Automatic Code Parallelization using Large Language Models | znjaiy1Z9q | ICLR-2024 | rdbqGqqLeO | The difference in the execution times of the vanilla LLM generated parallel code versus AUTOPARLLM-CodeGen/GPT generated parallel code is so small (<3.0% in the best case) that it is difficult to state that the difference is significant. There is no mention of the reported execution times being an average/median of mul... | Evaluation | **Execution time:** Please see Global Response 1. We clarified the confusion. All reported times are averages of 5 runs.<br>**Improvements:** Please see Global Response 2. The AUTOPARLLM approach achieved as high as **14.75%** speedup when the speedups of individual applications are considered. | DWC |
AUTOPARLLM: GNN-Guided Automatic Code Parallelization using Large Language Models | znjaiy1Z9q | ICLR-2024 | rdbqGqqLeO | There are approaches in LLM literature to transform one language code to another [2]; why not try something similar to this directly to address the problem. Of course, it requires some minimal refactoring in the form of pre-training/fine-tuning these open CodeTransformer models. | Experiments | **CodeTransformer:** The referred paper (Zügner et al., 2021) uses source code and AST representation of programs. However, for parallelism detection, not only the structure (AST) but also the control, data, and call flows are extremely important. There was no mention of the above flows being incorporated in CodeTransf... | DWC |
AUTOPARLLM: GNN-Guided Automatic Code Parallelization using Large Language Models | znjaiy1Z9q | ICLR-2024 | rdbqGqqLeO | In the methods section, the details of loss functions at different components of the proposed method are missing. | Reproducibility | **Loss functions:** In all cases, the Cross Entropy-based loss function of PyTorch is used. | DWC |
AUTOPARLLM: GNN-Guided Automatic Code Parallelization using Large Language Models | znjaiy1Z9q | ICLR-2024 | 0jOVShH4XZ | The paper uses large language models for code generation. As an auto-parallelizing framework, it is unclear whether we really need such LLMs. It appears that LLMs are primarily utilized for inserting pragmas into the outermost loop of a given loop nest, a task that could potentially be accomplished through simpler mean... | Novelty | **Challenges of parallelization, advantages of LLMs:** We can parallelize using OpenMP by inserting pragmas in loops. The insertion of pragmas may seem trivial; however, before considering the parallelization of loops, let alone inserting pragmas, the control, data, and call flow-related characteristics need to be analy... | DWC |
AUTOPARLLM: GNN-Guided Automatic Code Parallelization using Large Language Models | znjaiy1Z9q | ICLR-2024 | 0jOVShH4XZ | The paper mentions that source-to-source compilers miss a lot of parallelism opportunities due to being overly conservative. However, the paper does not compare the optimization results against these compilers or manually-parallelized programs. Consequently, the true effectiveness of this approach remains uncertain. | Experiments | **Comparing with source-to-source (S2S) compiler, and traditional tools:** We compared AUTOPARLLM with both DiscoPoP and AutoPar (S2S compiler) on the DiscoPoP and AutoPar subset of the OMP_Serial dataset in Appendix 8.1 of the main paper for the task of Parallelism Discovery. AUTOPARLLM achieved 36% higher accuracy than ... | DWC |
AUTOPARLLM: GNN-Guided Automatic Code Parallelization using Large Language Models | znjaiy1Z9q | ICLR-2024 | 0jOVShH4XZ | It seems that the OMPScore can only be applied to auto-parallelization with OpenMP directives. How does it compare to other metrics with parallel semantics (e.g., ParaBLEU[1])? Is there any common design philosophy of metrics targeting different code generation approaches? | Evaluation | **Comparing with ParaBLEU:** We compared the ParaBLEU score along with other metrics and updated Table 1 and Table 3 in the paper. Our findings remain consistent for ParaBLEU as well as the other metrics: as can be observed from Table 1 and Table 3, AUTOPARLLM improved the ParaBLEU score of LLMs.<br>Also, we sho... | CRP |
AUTOPARLLM: GNN-Guided Automatic Code Parallelization using Large Language Models | znjaiy1Z9q | ICLR-2024 | l4ZbOfwavf | Graph information is not clear: The authors used GNN to determine the parallel regions and predict the OMP clauses from the data-flow, control-flow, and call graphs. However, it remains unclear how the heterogeneous graphs are constructed and how the features of nodes are determined. Also, it would be better if the aut... | Reproducibility | **Heterogeneous Graph Construction, node-types, edge-types, and features:** Please see Appendix 8.3.<br>**Training Details, learning curves:** Please see Global Response 4.<br>**Num of Nodes, edges:** The average number of nodes in the PerfoGraph representation of OMP_serial dataset is 67.39, and the average number of edg... | CRP |
AUTOPARLLM: GNN-Guided Automatic Code Parallelization using Large Language Models | znjaiy1Z9q | ICLR-2024 | l4ZbOfwavf | Correctness of the parallel version: The code generation heavily relies on GNN predictions; however, the training/test accuracy of the GNN predictions is not reported. What if the GNN gives a wrong prediction for the parallel regions or the OMP clauses? Furthermore, not all the parallel code can run correctly. The auth... | Evaluation | **Correctness:** We ensured that all parallel codes that are generated by LLMs are correct before program execution. Please see Appendix 8.4 where we describe how we handle cases where LLMs wrongly parallelize a loop or generate wrong OMP clauses.<br>**GNN accuracy:** We reported the GNN prediction accuracy of Parallelis... | CRP |
AUTOPARLLM: GNN-Guided Automatic Code Parallelization using Large Language Models | znjaiy1Z9q | ICLR-2024 | l4ZbOfwavf | Improvement is not significant: Although the authors claim that their methods can improve the execution time, a 2-3% speedup is not significant. It would be better to conduct multiple runs and report the mean and variance of the execution time to determine if it can indeed bring improvements. Alternatively, the author ... | Evaluation | **Execution times:** We reported the average execution time of 5 runs for each application in the paper. Please see Global Response 1.<br>**Improvements:** When the speedups of individual applications are considered, AUTOPARLLM achieves as high as **14.75%** speedup. Please see Global Response 2 for details. | DWC |
AUTOPARLLM: GNN-Guided Automatic Code Parallelization using Large Language Models | znjaiy1Z9q | ICLR-2024 | l4ZbOfwavf | Comparison with Chain of thoughts: The idea of the proposed method is to use another model (GNN) to generate better prompts to guide the LLM in generating better results. The root cause is that LLMs cannot generate good results at once without hints. Therefore, it would be reasonable to compare it with the Chain of Tho... | Experiments | **COT Prompting:** Please see Table 8 in Appendix 8.5. Due to time constraints, we performed Chain-of-Thought (COT) experiments with three of the LLMs: GPT-3.5, GPT-4, and CodeLlama on the NAS benchmark test set of 90 loops. However, AUTOPARLLM outperformed the COT approach in terms of all code generation metrics. Deta... | CRP |
AUTOPARLLM: GNN-Guided Automatic Code Parallelization using Large Language Models | znjaiy1Z9q | ICLR-2024 | l4ZbOfwavf | New metric OMPScore: The OMPScore is an extension of the ROUGE-L score, which is measured by the longest common subsequence. However, almost all the OpenMP pragmas share the same subsequence, ``#pragma omp parallel for``. Therefore, it doesn't appear to be a suitable metric. Additionally, the authors claim that the OMP... | Evaluation | **Incorporating ROUGE-L in OMPScore:** Before incorporating ROUGE-L as a component of OMPScore, we conducted an analysis to assess the correlation between several established translation evaluation metrics and human evaluations. The results, presented in Table 5 of our submission, indicated that ROUGE-L exhibited the h... | CRP |
AUTOPARLLM: GNN-Guided Automatic Code Parallelization using Large Language Models | znjaiy1Z9q | ICLR-2024 | l4ZbOfwavf | The idea is simple and clear, but more details should be provided in the methods. Improving LLM generation by prompt engineering is not novel, as far as this reviewer is concerned. Using GNN to predict the parallel region is novel, but it is not quite convincing. It's hard to determine if it can be reproduced, especial... | Novelty | **Reproducibility:** We have already provided the codes necessary for training the GNNs (base.py) and doing predictions using the GNNs (main.py) in the anonymous repository (https://anonymous.4open.science/r/Project-A-AE4A/base.py), and we mentioned the repository in the Appendix (Page 12). Also, the codes generat... | CRP |
AUTOPARLLM: GNN-Guided Automatic Code Parallelization using Large Language Models | znjaiy1Z9q | ICLR-2024 | uJrRsUktGu | The authors are urged to include further literature references in related work. As the presented approach relies on the compilation through IR, two works that stand out and should be included are:<br>- Transcoder-IR --> Code translation with Compiler Representations<br>- Automap/PartIR --> Automap: Towards Ergonomic Auto... | Novelty | **Reference of Transcoder-IR and automap/part:** Thanks for the suggestions. Yes, both of these works are IR-based, where one uses an IR representation for code translation and the other uses an IR-based representation for parallelizing ML models. We referred to the Automap/PartIR paper in Section 2 (Related Works: Data-driven a... | CRP |
Nemesis: Normalizing the Soft-prompt Vectors of Vision-Language Models | zmJDzPh1Dm | ICLR-2024 | D7II3bm0HG | prefer to learn more details of how you decide the length of soft prompt vectors, e.g., why 4 and 16, will there be more ranges to be investigated basing on the specificl tasks for VLMs? | Experiments | **A1.** Thanks for your valuable comment regarding the discussion of the impact of the length of soft prompt vectors. Our paper mainly focuses on investigating the effect of soft prompt norms on vision-language models, so we just follow the standard settings of CoOp, with a length of 16 for soft prompts. Generally, the... | DWC |
Nemesis: Normalizing the Soft-prompt Vectors of Vision-Language Models | zmJDzPh1Dm | ICLR-2024 | D7II3bm0HG | prefer to learn more investigations of combining Nemesis with existing PEFT algorithms to see if the results can be further improved or not so that other researchers can better leverage your method to their existing frameworks. | Experiments | **A2.** We appreciate your concern regarding the need for discussion on potential applicable scenarios. While our proposed method primarily focuses on benchmarking soft prompt-tuning VLMs and several downstream VLM-based tasks, including few-shot image classification, domain generalization, and base-to-new generalizati... | CRP |
Nemesis: Normalizing the Soft-prompt Vectors of Vision-Language Models | zmJDzPh1Dm | ICLR-2024 | D7II3bm0HG | could there be a combination of soft-prompt tuning and hard-prompt tuning? (hard = explicitly use some predefined words/phrases as part of the prompts); | Experiments | **A3.** Thanks for your insightful comment considering the possibility of combining soft-prompt tuning and hard-prompt tuning. As far as we know, **P-tuning** [5] and **P-tuning v2** [6] have implemented this idea, which employs trainable continuous prompt embeddings in concatenation with discrete prompts,... | DWC |
Nemesis: Normalizing the Soft-prompt Vectors of Vision-Language Models | zmJDzPh1Dm | ICLR-2024 | D7II3bm0HG | any idea of further combining existing PEFT (prompt tuning, prefix tuning, LoRA...) with your Nemesis method? | Experiments | **A4.** The response related to the potential applicable scenarios of Nemesis was provided earlier in **A2**. Our preliminary experiments have demonstrated that our proposed method Nemesis can enhance the performance of visual prompt-tuning and prefix-tuning methods. We sincerely welcome further testing of our approach... | DWC |
Nemesis: Normalizing the Soft-prompt Vectors of Vision-Language Models | zmJDzPh1Dm | ICLR-2024 | jmBGXmUuc2 | $\beta$ can be either 0 or 1, corresponding to two variants of the proposed Nemesis method. However, there is no ablation study on the selection of $\beta$, nor is there an exploration of the potential impact of setting $\beta$ with decimal values to assign weights to the two methods. | Experiments | **A1.** We sincerely appreciate your suggestion to conduct ablation study about $\beta$ (i.e. explore the simultaneous combination of the PEN loss and the PAN loss). In response to this concern, we conducted experiments to investigate the combined effect of these two losses on model performance. To be specific, we eval... | CRP |
Nemesis: Normalizing the Soft-prompt Vectors of Vision-Language Models | zmJDzPh1Dm | ICLR-2024 | jmBGXmUuc2 | The Position Equality Normalization (PEN) loss applies equal weight to the norms of soft prompts at all positions. While the paper does acknowledge that normalizing prompt vectors at positions unaffected by the Low-Norm Effect may not yield performance improvement, the inherent assumption of the universality of the Low... | Theory | **A3.** We appreciate your insightful comment on our proposed PEN loss. As you state, there is an inherent assumption of the diversity of the Low-Norm Effect across positions, which aligns with our observations during experiments. The mechanism behind this diversity is complex as the positions that induce the Low-... | CRP |
Nemesis: Normalizing the Soft-prompt Vectors of Vision-Language Models | zmJDzPh1Dm | ICLR-2024 | jmBGXmUuc2 | The paper utilizes the RESCALE operation with a specific rescaling factor, τ, described as a positive real number less than 1. However, there’s no mention of how the value of τ is determined, if it's consistent across datasets, or its sensitivity. The choice of τ could have implications on the effectiveness of the Neme... | Experiments | **A4.** Thank you for your comment regarding the discussion of the rescaling factor $\tau$ used in the PAN loss. We would like to kindly point out that it seems to have been overlooked in your review. In fact, the paper did provide hyper-parameter analysis for different $\tau$ values, including 0.1, 0.5, and 0.9. Based... | DWC |
Nemesis: Normalizing the Soft-prompt Vectors of Vision-Language Models | zmJDzPh1Dm | ICLR-2024 | jmBGXmUuc2 | Given the significance of the parameter $\beta$ in differentiating between the two variants of the Nemesis method, why was an ablation study not conducted to evaluate its impact? Additionally, have you considered exploring decimal values for $\beta$ to potentially strike a balance between the effects of the PEN and PAN... | Experiments | **A5.** The response related to $\beta$ was provided earlier in **A1**. As suggested, we have incorporated an ablation study about $\beta$ into the Ablation Study Section (i.e. Section 4.7) of the updated paper. | CRP |
Nemesis: Normalizing the Soft-prompt Vectors of Vision-Language Models | zmJDzPh1Dm | ICLR-2024 | jmBGXmUuc2 | How does the proposed Nemesis method compare with other soft-prompt tuning methods in terms of computational efficiency and scalability, especially in larger datasets or more complex tasks? | Experiments | **A6.** The response related to computation costs was provided earlier in **A2**. As suggested, we have added a subsection (i.e. Appendix A.2.7) to analyze computation costs in the revised paper.<br>As for **scalability**, we have conducted preliminary experiments on a few PEFT methods and their applicable scenarios, in... | CRP |
Nemesis: Normalizing the Soft-prompt Vectors of Vision-Language Models | zmJDzPh1Dm | ICLR-2024 | w5zyKrQ9Qf | The writing of some parts of the paper are not clear enough. It is recommended that the authors check. For example, there is a discrepancy between formula 4 and the symbol definition in the previous paragraph. | Writing | **A1.** Thank you for your attention and feedback. We have carefully reviewed the symbol definitions and have made the necessary revisions to address the issue you mentioned. In order to differentiate the subscripts of $\alpha$ between in Eq. (3) and in Eq. (4), we have revised Eq. (4) in the updated paper. | CRP |
Nemesis: Normalizing the Soft-prompt Vectors of Vision-Language Models | zmJDzPh1Dm | ICLR-2024 | w5zyKrQ9Qf | The two types of losses proposed in the paper lack a correlation with practical significance, suggesting authors discuss why the two forms of normalization affect soft prompt. | Evaluation | **A2.** We apologize for not explicitly discussing the specific impacts of two normalization losses on models in the paper. However, we have added a subsection in the Appendix (i.e. Appendix A.2.6) of the revised paper to discuss this. We hope that it can address your concern.<br>Taking the Caltech101 dataset [1] as an e... | CRP |
Nemesis: Normalizing the Soft-prompt Vectors of Vision-Language Models | zmJDzPh1Dm | ICLR-2024 | w5zyKrQ9Qf | The paper lacks discussion on the applicable scenarios of two normalization losses. | Evaluation | **A3.** We appreciate your concern regarding the need for discussion on potential applicable scenarios. While our proposed method primarily focuses on benchmarking soft prompt-tuning VLMs and several downstream VLM-based tasks, including few-shot image classification, domain generalization, and base-to-new generalizati... | CRP |
Nemesis: Normalizing the Soft-prompt Vectors of Vision-Language Models | zmJDzPh1Dm | ICLR-2024 | w5zyKrQ9Qf | The paper proposes two normalization methods, while only testing the effects of PEN and PAN on the experimental results respectively. Why cannot both types of losses be used simultaneously? If there is a contradiction between the two losses, it is recommended that the authors discuss the differences. If the two losses ... | Experiments | **A4.** We sincerely appreciate your suggestion to explore the simultaneous combination of the PEN loss and the PAN loss. In response to this concern, we conducted experiments to investigate the combined effect of these two losses on model performance. To be specific, we evaluated various values of $\beta$, including 0... | CRP |
Nemesis: Normalizing the Soft-prompt Vectors of Vision-Language Models | zmJDzPh1Dm | ICLR-2024 | w5zyKrQ9Qf | Can author discuss application circumstance of two normalization methods? In practical applications, what kind of normalization loss should we choose for what situation? Suggest the authors to discuss. | Evaluation | **A5.** Thank you for highlighting this concern. This paper proposed two types of normalization losses to harness the Low-Norm Effect during soft prompt-tuning vision-language models. Based on the few-shot recognition results presented in the first subfigure of Figure 2 and Table A5, it can be observed that the PEN los... | DWC |
Fast and unified path gradient estimators for normalizing flows | zlkXLb3wpF | ICLR-2024 | iIIUU7nYOB | The notation of Proposition 3.2 and its proof in the appendix are sloppy and I cannot determine the correctness: what is the inverse of the rectangular matrix $\frac{\partial f_\theta(x_l^t, x_l^c)}{\partial x_l^t}$? Is it a pseudo-inverse, or is it a part of the network Jacobian? I suggest to greatly rewrite this prop... | Theory | The Jacobian matrix $\frac{\partial f_\theta(x_l^{trans}, x_l^{cond})}{\partial x_l^{trans}}$ is square and invertible and there is thus no subtlety in defining its inverse. In more detail, we define a coupling block in Eq 6 as<br>$$x_{l+1}^{trans} = f_{\theta} (x_{l}^{trans}, x_{l}^{cond})$$<br>$$x_{l+1}^{cond} = x_{l... | CRP |
Fast and unified path gradient estimators for normalizing flows | zlkXLb3wpF | ICLR-2024 | iIIUU7nYOB | What is the cost of computing Proposition 3.2? As I mentioned in the first point, by rewriting the recursion more generally, this could easily be showcased. | Theory | First, note that in the revised manuscript Proposition 3.2 has become Proposition 3.3. We will refer to proposition numbers in the revised manuscript.<br>For general flow architectures the recursion formula is given by the new Proposition 3.2. For generic architectures, where the inverse of the transformation $T_{l, \the... | CRP |
Fast and unified path gradient estimators for normalizing flows | zlkXLb3wpF | ICLR-2024 | iIIUU7nYOB | What is the intuition behind Proposition 4.1? What is the regularization obtained from including the unnormalized density (probably something like the corrected relative weight of each sample according to the ground truth density)? What derivative vanishes in expectation? How large is the variance of the removed gradie... | Theory | We substantially extended Appendix B.3 to address your questions in detail. Briefly summarized:<br>- Intuition: It is well known that the forward KL in target space becomes the reverse KL in base space (see Section 2.3.3 in [1]). This duality allows us to immediately apply all the results derived for the reverse KL case ... | CRP |
Fast and unified path gradient estimators for normalizing flows | zlkXLb3wpF | ICLR-2024 | iIIUU7nYOB | In this light, how much parameter tuning was involved in the other experiments $\phi^4$ and $U(1)$? Please compare your numbers to the state of the art results on these benchmarks. | Experiments | For our $\phi^4$ experiments, we use the exact same hyperparameter choices as in the recent publication [1] as well as the same codebase.<br>To the best of our knowledge, there are currently only two publications considering maximum likelihood training for $\phi^4$ theory: [1] and [2].<br>We chose [1] as it is more recent. ... | DWC |
Fast and unified path gradient estimators for normalizing flows | zlkXLb3wpF | ICLR-2024 | iIIUU7nYOB | Eq. (13) is missing a logarithm. | Theory | We fixed the typo. | CRP |
Fast and unified path gradient estimators for normalizing flows | zlkXLb3wpF | ICLR-2024 | iIIUU7nYOB | The caption for Figure 1 is on page 21 in the appendix, took me some time. | Presentation | We rephrased the caption. | CRP |
Fast and unified path gradient estimators for normalizing flows | zlkXLb3wpF | ICLR-2024 | iIIUU7nYOB | If a reader is not familiar with the terms forward and reverse KL, it is hard to understand the introduction. Point the reader to Section 2 or drop it here, leaving space for more explanations on theoretical results. | Writing | We have pointed the reader to Section 2 in the introduction as suggested. | CRP |
Fast and unified path gradient estimators for normalizing flows | zlkXLb3wpF | ICLR-2024 | R6DWZzxh4A | The speedup for explicitly invertible flows (which are more common) is relatively minor. | Evaluation | Our estimators for explicitly invertible flows have about 60 percent of the runtime of the previous state-of-the-art. Thus the speed-up is significant. We however agree that proposing path gradient estimators for implicitly invertible normalizing flows is probably the more important contribution of our manuscript since ma... | DWC |
Fast and unified path gradient estimators for normalizing flows | zlkXLb3wpF | ICLR-2024 | R6DWZzxh4A | The authors emphasise that an advantage of their method relative to those from Vaitl et al. for the estimation of the forward KL is that their method does not require reweighting. However, their method uses samples from the target, while the method from Vaitl et al. uses samples from the flow - hence the two methods ar... | Evaluation | We agree that this is an important difference between our method and the one proposed by Vaitl et al. We have revised the manuscript to make this clearer.<br>We stress however that their reweighting method comes with an important downside: it fails as the system size grows because the probability mass of the target densit... | DWC |
Fast and unified path gradient estimators for normalizing flows | zlkXLb3wpF | ICLR-2024 | R6DWZzxh4A | How come the flow trained via the standard maximum likelihood objective achieves such poor performance on the MGM problem (Table 1)? It seems possible that poor hyper-parameters have been used, as training by maximum likelihood should be able to obtain reasonable results. | Experiments | Note that our experiments merely establish that path gradients facilitate *more sample efficient training* and help avoid overfitting.<br>This can be seen from the rhs of Fig 1, namely that standard maximum likelihood training can easily fit the MGM if enough samples are provided. For smaller numbers of samples, maximum l... | CRP |
Fast and unified path gradient estimators for normalizing flows | zlkXLb3wpF | ICLR-2024 | R6DWZzxh4A | In the case of forwards KL with flows that require implicit differentiation for inversion, is it not more efficient to set the forwards direction of the flow to map from the target to the flow’s base (rather than base to target), such that implicit differentiation is required for sampling, but not density evaluation? | Theory | You are correct in that one can choose the “directionality” of the flow such that density estimation is fast.<br>In such a situation, implicit differentiation is not necessary.<br>However, such a choice is strongly disfavored in the context of Boltzmann generators: one wants to use these flows to facilitate fast sampling ... | DWC |
Fast and unified path gradient estimators for normalizing flows | zlkXLb3wpF | ICLR-2024 | S0GjQTr3qO | The experiments are a bit toy, or at least their significance was not explained. | Experiments | We politely disagree with this statement. Lattice field theory provides the mathematical framework underlying many parts of modern theoretical physics, in particular, high-energy physics, gravitational physics, condensed matter and statistical physics. Indeed all known fundamental forces of nature can be described by q... | DWC |
Fast and unified path gradient estimators for normalizing flows | zlkXLb3wpF | ICLR-2024 | S0GjQTr3qO | I have a naive question about computing the pathwise gradient of the reverse KL. In equation (2), it seems to me that we could rewrite the equation by using the Jacobian of the forward transform based on the inverse function theorem, so that the $+\log |\textup{det} ~ dT^{-1}/dx|$ term becomes $- \log |\textup{det}~dT/... | Theory | The stated identity is, of course, correct but unfortunately unhelpful as we are interested in the derivative with respect to $x$, i.e., $\frac{\partial \log |\det \frac{d T_{\theta}^{-1}(x)}{d x}|}{\partial x}$. The term $\log |\det dT_{\theta}/d x_{0}|$ manifestly only depends on $x_0$ and only implicitly on $x$ thr... | DWC |
Fast and unified path gradient estimators for normalizing flows | zlkXLb3wpF | ICLR-2024 | S0GjQTr3qO | "Path gradients have the appealing property that they are unbiased and have lower variance compared to standard estimators, thereby promising accelerated convergence (Roeder et al., 2017; Agrawal et al., 2020; Vaitl et al., 2022a;b)." → Other estimators are also unbiased, but the sentence makes it seem like they aren't... | Writing | We fully agree and have rephrased the relevant sentences. | CRP |
Certified Robustness on Visual Graph Matching via Searching Optimal Smoothing Range | zkzf0VkiNv | ICLR-2024 | Szeh1IWYeZ | It is advisable to conduct a comparison between the proposed method and other existing techniques for robust GM, such as ASAR[1] and COMMON [2]. Specifically, COMMON addresses robust graph matching by considering noisy correspondence during training, while ASAR takes adversarial attacks into account during training. Ev... | Experiments | We appreciate your valuable feedback. We have performed additional experiments based on your suggestions and further elaborated our proposed certified robustness method.<br>* The ASAR and COMMON methods you referred to are the recently proposed visual GM methods with relatively good results. We calculated CR (in link: ht... | CRP |
Certified Robustness on Visual Graph Matching via Searching Optimal Smoothing Range | zkzf0VkiNv | ICLR-2024 | Szeh1IWYeZ | Since the author outlines four challenges in the Introduction, it would be beneficial to emphasize these points within the Method section, using C1 to C4. | Presentation | Thanks for your suggestion, which will improve the clarity of our paper. We have implemented the corresponding changes in the new version of the paper. | CRP |
Certified Robustness on Visual Graph Matching via Searching Optimal Smoothing Range | zkzf0VkiNv | ICLR-2024 | TxSX0A5Wps | In Eq. 10, the authors mentioned that a constraint on b is imposed in the optimization. However, how this constraint works is not well explained. The effectiveness of this constraint is not evaluated in the experiments. | Experiments | Thank you for your questions and suggestions. We will clarify the role of the constraint and its ablation study. We have also revised the paper accordingly; please refer to Appendix.C and Appendix.E.1 in the updated version for more details.<br>* ***We first explain the rationale behind the design of the optimization algo... | CRP |
Certified Robustness on Visual Graph Matching via Searching Optimal Smoothing Range | zkzf0VkiNv | ICLR-2024 | TxSX0A5Wps | The authors introduced a regularization in Eq. 11, however, the ablation study of the variant without this regularization is missing. | Experiments | Thank you for your question. We actually have presented the ablation study you referred to in Fig.2. We evaluate the certification results of RS-GM and CR-OSRS on the basic model, the model with data augmentation (AUG), and the model with data augmentation and the regularization term (AUG+REG). We also explain in Sec.5... | CRP |
Certified Robustness on Visual Graph Matching via Searching Optimal Smoothing Range | zkzf0VkiNv | ICLR-2024 | TxSX0A5Wps | Please provide more details of the constraint in Eq. 10 as well as ablation studies. | Experiments | Thank you for your questions and suggestions. We will clarify the role of the constraint and its ablation study. We have also revised the paper accordingly; please refer to Appendix.C and Appendix.E.1 in the updated version for more details.<br>* ***We first explain the rationale behind the design of the optimization algo... | CRP |
Certified Robustness on Visual Graph Matching via Searching Optimal Smoothing Range | zkzf0VkiNv | ICLR-2024 | TxSX0A5Wps | Please provide the ablation study of the regularization in Eq. 11. | Experiments | Thank you for your question. We actually have presented the ablation study you referred to in Fig.2. We evaluate the certification results of RS-GM and CR-OSRS on the basic model, the model with data augmentation (AUG), and the model with data augmentation and the regularization term (AUG+REG). We also explain in Sec.5... | CRP |
Certified Robustness on Visual Graph Matching via Searching Optimal Smoothing Range | zkzf0VkiNv | ICLR-2024 | LxlI2z5JIQ | The presentation can be improved for better clarity, as it involves multiple areas ranging from graph matching (combinatorial optimization), robustness certification, visual recognition, etc. | Presentation | We appreciate your insightful suggestions. Our research draws on knowledge from multiple fields, and we aim to present and compare them more rigorously and clarify their relevance to this study according to your suggestions. We show the comparison of visual GM methods and certified robustness methods in Tab.1 and Tab.2... | CRP |
Certified Robustness on Visual Graph Matching via Searching Optimal Smoothing Range | zkzf0VkiNv | ICLR-2024 | LxlI2z5JIQ | The paper lacks some discussion for enlarging its potential impact to other combinatorial tasks or any limitation and difficulty to extend its adaption to other tasks. | Evaluation | We appreciate your valuable suggestions. Below we will discuss the challenges of extending certified robustness research to general combinatorial optimization (CO) problems. Then we will introduce the particularities of the visual graph matching (GM) problem that this work focuses on and its advancement for the general... | DWC |
Certified Robustness on Visual Graph Matching via Searching Optimal Smoothing Range | zkzf0VkiNv | ICLR-2024 | LxlI2z5JIQ | Have you explored the possibility of extending your approach to address problems beyond graph matching? Given the ubiquity of combinatorial optimization on graphs, a discussion on a potentially more general framework could be beneficial. | Novelty | We appreciate your valuable suggestions. Below we will discuss the challenges of extending certified robustness research to general combinatorial optimization (CO) problems. Then we will introduce the particularities of the visual graph matching (GM) problem that this work focuses on and its advancement for the general... | VCR |
Certified Robustness on Visual Graph Matching via Searching Optimal Smoothing Range | zkzf0VkiNv | ICLR-2024 | LxlI2z5JIQ | Can you add a table or figure to summarize and compare related methods from multiple aspects for better accessibility of readers? | Presentation | We appreciate your insightful suggestions. Our research draws on knowledge from multiple fields, and we aim to present and compare them more rigorously and clarify their relevance to this study according to your suggestions. We show the comparison of visual GM methods and certified robustness methods in Tab.1 and Tab.2... | CRP |
Conformal Prediction for Deep Classifier via Label Ranking | zkVm3JqJzs | ICLR-2024 | fYc9hm7pkm | In the experiments, choosing $\lambda$ for SAPS uses a subset of the calibration set. Do all baselines use the (same) remaining calibration set? That is, is SAPS calibrated on a smaller set due to $\lambda$? | Experiments | Thank you for the recognition. We address specific concerns below.
**1. Choice of $\lambda$. [Q1]**
For SAPS and RAPS, we utilize a validation set to tune hyperparameters and the remaining dataset to calibrate the $\tau$. For APS, we calibrate the threshold $\tau$ on the whole calibration set. | DWC |
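The calibration recipe in the row above (tune $\lambda$ on a validation slice, calibrate $\tau$ on the disjoint remainder) follows the standard split-conformal pattern. Below is a minimal Python sketch of that pattern; the score function, the $\lambda$ grid, and the choice to score each $\lambda$ on the validation slice are illustrative assumptions, not the paper's exact SAPS implementation.

```python
import numpy as np

def conformal_quantile(scores, alpha):
    """Finite-sample-corrected (1 - alpha) empirical quantile of conformity scores."""
    n = len(scores)
    level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    return np.quantile(scores, level, method="higher")

def avg_set_size(score_fn, data, num_classes, tau, lam):
    """Average size of {y : score(x, y) <= tau} over labelled examples in `data`."""
    sizes = [sum(score_fn(x, y, lam) <= tau for y in range(num_classes)) for x, _ in data]
    return float(np.mean(sizes))

def calibrate(score_fn, val_data, cal_data, num_classes, alpha, lambda_grid):
    # Tune lambda on the validation slice (smallest average set size wins); for
    # simplicity the per-lambda threshold is also computed on that slice. The final
    # tau is then calibrated on the disjoint calibration slice with the chosen lambda.
    best_lam, best_size = None, float("inf")
    for lam in lambda_grid:
        val_scores = np.array([score_fn(x, y, lam) for x, y in val_data])
        tau_val = conformal_quantile(val_scores, alpha)
        size = avg_set_size(score_fn, val_data, num_classes, tau_val, lam)
        if size < best_size:
            best_lam, best_size = lam, size
    cal_scores = np.array([score_fn(x, y, best_lam) for x, y in cal_data])
    return best_lam, conformal_quantile(cal_scores, alpha)
```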
Conformal Prediction for Deep Classifier via Label Ranking | zkVm3JqJzs | ICLR-2024 | uFU1xCjTX8 | I have a reservation about the claimed contribution of higher adaption, i.e., the adaption is not that convincing: For the example of Figure 3(b), now that both SAPS and RAPS achieve the same coverage, why should we require a larger prediction set for difficult observations? In general, the smaller the better. RAPS giv... | Evaluation | In the literature of conformal prediction, the size of prediction sets is expected to represent the inherent uncertainty of the classifier's predictions. With a comparable set size, methods with high adaption can reflect instance-wise uncertainty precisely [R1]. Specifically, prediction sets should be larger for hard e... | DWC |
Conformal Prediction for Deep Classifier via Label Ranking | zkVm3JqJzs | ICLR-2024 | uFU1xCjTX8 | Even though the proposed method shows promising performance compared to several methods, how far is the proposed method from the ground truth? | Evaluation | We take the "ground truth" to mean the sets produced by APS with the oracle model (please correct us if we are mistaken). In Proposition 2, we provide a theoretical result suggesting that SAPS is capable of producing smaller sets than APS with the oracle model. | DWC
Conformal Prediction for Deep Classifier via Label Ranking | zkVm3JqJzs | ICLR-2024 | uFU1xCjTX8 | To make it clearly catch the whole scope, it would be better to explicitly outline the calibration and prediction under the frame of a pseudo-algorithm as the one in RAPS. For example, I believe "We choose the hyper-parameter that achieves the smallest set size on a validation set" fails to disclose the entire picture ... | Reproducibility | Thank you for the great suggestion. We have included pseudo-code algorithms in Appendix H of the revised manuscript.
For the validation set, we would like to clarify that various score functions caused by different values of $\lambda$ always satisfy the desired coverage. In other words, the value of $\lambda$ does no... | CRP |
Conformal Prediction for Deep Classifier via Label Ranking | zkVm3JqJzs | ICLR-2024 | uFU1xCjTX8 | The proofs are not reader-friendly (see the section on questions). | Writing | We thank you for pointing out the typos. We have fixed them in the updated version.
- $\mathcal{C}\left(\boldsymbol{x}\_{i},y_i,u_i\right)$ is a typo. In Section 4.1, we revised $\mathcal{C}\left(\boldsymbol{x}\_{i},y_i,u_i\right)$ to $\mathcal{C}\left(\boldsymbol{x}\_{i}\right)$ representing the prediction set for $\bo... | CRP |
Conformal Prediction for Deep Classifier via Label Ranking | zkVm3JqJzs | ICLR-2024 | uFU1xCjTX8 | Do you mean $\mathcal{C}(\boldsymbol x_i, y_i, u_i)$ for the definition of coverage rate? | Writing | We thank you for pointing out the typos. We have fixed them in the updated version.
- $\mathcal{C}\left(\boldsymbol{x}\_{i},y_i,u_i\right)$ is a typo. In Section 4.1, we revised $\mathcal{C}\left(\boldsymbol{x}\_{i},y_i,u_i\right)$ to $\mathcal{C}\left(\boldsymbol{x}\_{i}\right)$ representing the prediction set for $\bo... | CRP |
Conformal Prediction for Deep Classifier via Label Ranking | zkVm3JqJzs | ICLR-2024 | uFU1xCjTX8 | In the proof of Lemma 1, did you intend to assume $p_{(k)}\geq \frac{1}{k}$? Where will $\tilde{k}$ be used in the proof? | Theory | We thank you for pointing out the typos. We have fixed them in the updated version.
- $\mathcal{C}\left(\boldsymbol{x}\_{i},y_i,u_i\right)$ is a typo. In Section 4.1, we revised $\mathcal{C}\left(\boldsymbol{x}\_{i},y_i,u_i\right)$ to $\mathcal{C}\left(\boldsymbol{x}\_{i}\right)$ representing the prediction set for $\bo... | CRP |
Conformal Prediction for Deep Classifier via Label Ranking | zkVm3JqJzs | ICLR-2024 | uFU1xCjTX8 | Is (2) generally correct? In other words, are the prediction results always nested? Particularly in Theorem 2, since there is a random variable $u$ introduced, why does $\mathcal{C}_{1-\alpha}(\boldsymbol{x}, u)$ have the nesting property? | Theory | The nesting property defined by Eq.2 is a common property that holds for the prediction sets of any conformal predictor [R2]. Specifically, if a lower error rate is expected, the set will have a larger size for higher coverage. In this work, since the calibrated threshold $\tau$ is the $1-\alpha$ quantile of scores, the n... | DWC
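The nesting claim in the rebuttal above follows from the standard split-conformal construction; a sketch in our notation (not necessarily the paper's exact statement) is:

```latex
% Split-conformal threshold and the nesting it implies (sketch, our notation).
\tau_{1-\alpha} = \mathrm{Quantile}_{\lceil (n+1)(1-\alpha)\rceil / n}
  \bigl(\{ S(\boldsymbol{x}_i, y_i, u_i; \hat{\pi}) \}_{i=1}^{n}\bigr),
\qquad
\mathcal{C}_{1-\alpha}(\boldsymbol{x}, u) =
  \{\, y \in \mathcal{Y} : S(\boldsymbol{x}, y, u; \hat{\pi}) \le \tau_{1-\alpha} \,\}.
% A lower coverage level means a lower quantile level, hence a smaller threshold
% and a smaller (nested) set:
\alpha_1 \ge \alpha_2
  \;\Longrightarrow\; \tau_{1-\alpha_1} \le \tau_{1-\alpha_2}
  \;\Longrightarrow\; \mathcal{C}_{1-\alpha_1}(\boldsymbol{x}, u)
      \subseteq \mathcal{C}_{1-\alpha_2}(\boldsymbol{x}, u).
```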
Conformal Prediction for Deep Classifier via Label Ranking | zkVm3JqJzs | ICLR-2024 | uFU1xCjTX8 | Proposition 1: How is $\mathcal{C}_{1-\alpha}(\boldsymbol x, u)$ defined as in Eq. 3? They have totally different notations. | Writing | Thank you for pointing out the ambiguous notation. The prediction set in Proposition 1 is defined as $$\mathcal{C}_{1-\alpha}(\boldsymbol{x},u) =\lbrace y\in\mathcal{Y} : S(\boldsymbol{x},y,u;\hat{\pi})\leq \tau \rbrace.$$ Thus, it is mathematically equivalent to Eq. 3. To mitigate this confusion, we added a detailed ... | CRP |
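Constructing the set $\lbrace y\in\mathcal{Y} : S(\boldsymbol{x},y,u;\hat{\pi})\leq \tau \rbrace$ amounts to thresholding the per-label scores; the short Python sketch below illustrates this, with `make_rank_score` as a hypothetical placeholder score rather than the paper's SAPS score.

```python
import numpy as np

def prediction_set(score_fn, x, u, tau, num_classes):
    # C_{1-alpha}(x, u) = { y in Y : S(x, y, u; pi_hat) <= tau }: keep every label
    # whose conformity score is below the calibrated threshold tau.
    return [y for y in range(num_classes) if score_fn(x, y, u) <= tau]

def make_rank_score(classifier):
    # Hypothetical rank-based score (a placeholder, not the paper's SAPS score):
    # the score grows with the label's position in the sorted softmax output,
    # shifted by the uniform randomization variable u.
    def score(x, y, u):
        probs = np.asarray(classifier(x))            # assumed softmax probabilities
        rank = int(np.argsort(-probs).tolist().index(y))
        return rank + u
    return score
```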
Conformal Prediction for Deep Classifier via Label Ranking | zkVm3JqJzs | ICLR-2024 | uFU1xCjTX8 | I didn’t get the point of the proof for Proposition 1. What is the difference between your proof of proposition 1 and Theorem 2? The conclusion of coverage is for the popped $\mathcal{C}(\boldsymbol x_{n+1},u_{n+1})$ but there is no $\mathcal{C}(\boldsymbol x_{n+1},u_{n+1})$ during your proof. I think the authors need ... | Theory | Proposition 1 is a corollary of Theorem 2. Specifically, Theorem 2 gives a coverage guarantee for CP methods whose prediction set has a general formulation $\mathcal{C}(\boldsymbol{x},u,\tau)$. The prediction set of SAPS defined as $\lbrace y\in\mathcal{Y}:S(\boldsymbol{x},y,u;\hat{\pi})\leq \tau \rbrace$ is a specific... | CRP |
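For reference, the marginal coverage statement that Proposition 1 inherits from a general result like Theorem 2 has the usual split-conformal form below (a sketch under the standard exchangeability assumption, in our notation rather than the paper's):

```latex
% Marginal coverage inherited from exchangeability of (x_i, y_i, u_i), i = 1, ..., n+1,
% with tau the ceil((n+1)(1-alpha))/n empirical quantile of the n calibration scores.
\mathbb{P}\bigl( y_{n+1} \in \mathcal{C}_{1-\alpha}(\boldsymbol{x}_{n+1}, u_{n+1}) \bigr)
  = \mathbb{P}\bigl( S(\boldsymbol{x}_{n+1}, y_{n+1}, u_{n+1}; \hat{\pi}) \le \tau \bigr)
  \ge 1 - \alpha .
```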