| id_paragraph stringlengths 20 26 | parag_1 stringlengths 101 3.02k | parag_2 stringlengths 173 2.77k | annot_1 dict | annot_2 dict | id_source stringlengths 8 11 | id_target stringlengths 8 11 | index_paragraph int64 0 26 | list_sentences_1 listlengths 1 36 | list_sentences_2 listlengths 1 36 |
|---|---|---|---|---|---|---|---|---|---|
MnewiFDvHZ.iAYttXl-uH.03 | We compared RECOO with Algorithm 1 in [30], where learning rates are summarized in Table 4in Appendix H. Figure 2 includes the cumulative losses and violations. In particular, The (mean,variance) pair of RECOO for loss and violation are p 42510 . 05 q and p 713 . 45 , 1 . 60 q at the end of learning horizon, respective... | We compared RECOO with Algorithm 1 in [27], with α t “ η t “ ? t , γ t “ t 1 { 2 ` 0 . 01 in RECOO; andα t “ 0 . 8 {? t, β t “ 5 {? t and γ t “ 0 . 5 {? t for Algorithm 1 in [27] (these are the optimized learningrates). Figure 2 includes the cumulative losses and violations. In particular, The (mean, variance)pair of R... | {
"annotation": [
"Development"
],
"instruction": "",
"annotator": "annotator_07"
} | null | MnewiFDvHZ | iAYttXl-uH | 3 | [
{
"text": "We compared RECOO with Algorithm 1 in [30], where learning rates are summarized in Table 4in Appendix H. Figure 2 includes the cumulative losses and violations."
},
{
"text": "In particular, The (mean,variance) pair of RECOO for loss and violation are p 42510 ."
},
{
"text": "05 q and... | [
{
"text": "We compared RECOO with Algorithm 1 in [27], with α t “ η t “ ? t , γ t “ t 1 { 2 ` 0 . 01 in RECOO; andα t “ 0 . 8 {? t, β t “ 5 {? t and γ t “ 0 . 5 {? t for Algorithm 1 in [27] (these are the optimized learningrates). Figure 2 includes the cumulative losses and violations."
},
{
"text": "In... |
MXi6uEx-hp.rdZfFcGyf9.00 | Intelligent agents can solve tasks in a variety of ways depending on the action set at their disposal. For instance, while using a toolkit for repair, the choice of tool (the action) closely depends on what other tools are available. Yet, such dependence on other available actions is ignored in conventional reinforceme... | Intelligent agents can solve tasks in various ways depending on their available set of actions. However, conventional reinforcement learning (RL) assumes a fixed action set. This work asserts that tasks with varying action sets require reasoning of the relations between the available actions. For instance, taking a nail... | {
"annotation": [
"Content_substitution",
"Rewriting_light"
],
"instruction": "",
"annotator": "annotator_06"
} | {
"annotation": [
"Rewriting_heavy"
],
"instruction": "Make reasoning understandable, use accurate words.",
"annotator": "annotator_08"
} | MXi6uEx-hp | rdZfFcGyf9 | 0 | [
{
"text": "Intelligent agents can solve tasks in a variety of ways depending on the action set at their disposal."
},
{
"text": "For instance, while using a toolkit for repair, the choice of tool (the action) closely depends on what other tools are available."
},
{
"text": "Yet, such dependence ... | [
{
"text": "Intelligent agents can solve tasks in various ways depending on their available set of actions."
},
{
"text": ""
},
{
"text": "However, conventional reinforcement learning (RL) assumes a fixed action set."
},
{
"text": "This work asserts that tasks with varying action sets requ... |
S6FTGJ2qg.pZxjlXjpkL.00 | Ermon, 2019). Song et al. (2020b) shows that diffusion models are trained using denoising score matching (Vincent, 2011), a conditional objective that provides unbiased gradients with respect to the score matching objective. Conditional Flow Matching draws inspiration from this result, but generalizes to matching vecto... | Ermon, 2019). Song et al. (2020b) shows that diffusion models are trained using denoising score matching (Vincent, 2011), a conditional objective that provides unbiased gradients with respect to the score matching objective. Conditional Flow Matching draws inspiration from this result, but generalizes to matching vecto... | {
"annotation": [
"Content_addition"
],
"instruction": "",
"annotator": "annotator_07"
} | null | S6FTGJ2qg | pZxjlXjpkL | 0 | [
{
"text": "Ermon, 2019)."
},
{
"text": "Song et al."
},
{
"text": "(2020b) shows that diffusion models are trained using denoising score matching (Vincent, 2011), a conditional objective that provides unbiased gradients with respect to the score matching objective."
},
{
"text": "Conditi... | [
{
"text": "Ermon, 2019)."
},
{
"text": "Song et al."
},
{
"text": "(2020b) shows that diffusion models are trained using denoising score matching (Vincent, 2011), a conditional objective that provides unbiased gradients with respect to the score matching objective."
},
{
"text": "Conditi... |
skR2qMboVK.lmwxQfhmln.00 | BALD (Houlsby et al., 2011) estimates the mutual information (MI) between the datapoints and the model weights, the idea being that points with large MI between the predicted label and weights have a larger impact on the trained model’s performance. The measure, denoted I , is the conditional entropy over predictions g... | BALD (Houlsby et al., 2011) estimates the mutual information (MI) between the datapoints and the model weights, the idea being that points with large MI between the predicted label and weights have a larger impact on the trained model’s performance. The measure, denoted I , is approximated as: | {
"annotation": [
"Unusable"
],
"instruction": "",
"annotator": "annotator_02"
} | {
"annotation": [
"Unusable"
],
"instruction": "",
"annotator": "annotator_07"
} | skR2qMboVK | lmwxQfhmln | 0 | [
{
"text": "BALD (Houlsby et al., 2011) estimates the mutual information (MI) between the datapoints and the model weights, the idea being that points with large MI between the predicted label and weights have a larger impact on the trained model’s performance."
},
{
"text": "The measure, denoted I , is ... | [
{
"text": "BALD (Houlsby et al., 2011) estimates the mutual information (MI) between the datapoints and the model weights, the idea being that points with large MI between the predicted label and weights have a larger impact on the trained model’s performance."
},
{
"text": "The measure, denoted I , is ... |
p8yrWJS4W.eHA5NswPr.03 | On the other hand, our metrics deem text with stopwords removed as utterly different from the reference. Permuting words within texts has a similar effect, demonstrating that, at least to some extent, the embedding space captures notions of syntax and grammaticality, rather than pure unigram These results inspire us to... | On the other hand, our metrics deem text with stopwords removed as utterly different from the reference. Permuting words within texts has a similar effect, demonstrating that, at least to some extent, the embedding space captures notions of syntax and grammaticality, rather than pure unigram statistics. The increase in... | {
"annotation": [
"Development",
"Content_addition"
],
"instruction": "",
"annotator": "annotator_03"
} | {
"annotation": [
"Development"
],
"instruction": "",
"annotator": "annotator_07"
} | p8yrWJS4W | eHA5NswPr | 3 | [
{
"text": "On the other hand, our metrics deem text with stopwords removed as utterly different from the reference."
},
{
"text": "Permuting words within texts has a similar effect, demonstrating that, at least to some extent, the embedding space captures notions of syntax and grammaticality, rather tha... | [
{
"text": "On the other hand, our metrics deem text with stopwords removed as utterly different from the reference."
},
{
"text": "Permuting words within texts has a similar effect, demonstrating that, at least to some extent, the embedding space captures notions of syntax and grammaticality, rather tha... |
Sy-6xpqtX.S1Ogmz_a7.00 | Optimization algorithms can provide insight and guidance in the design of deep network architectures (Vogel & Pock, 2017; Yang et al., 2016; Zhang & Ghanem, 2018). For example, Yang et al. (2016) have proposed a deep network architecture for compressed sensing. Their network, dubbed ADMM-Net, is inspired by ADMM updat... | Optimization algorithms can provide insight and guidance in the design of deep network architectures (Vogel & Pock, 2017; Kobler et al., 2017; Yang et al., 2016; Zhang & Ghanem, 2018). For example, Yang et al. (2016) have proposed a deep network architecture for compressed sensing. Their network, dubbed ADMM-Net, is in... | {
"annotation": [
"Content_addition",
"Development"
],
"instruction": "",
"annotator": "annotator_09"
} | {
"annotation": [
"Development"
],
"instruction": "",
"annotator": "annotator_07"
} | Sy-6xpqtX | S1Ogmz_a7 | 0 | [
{
"text": "Optimization algorithms can provide insight and guidance in the design of deep network architectures (Vogel & Pock, 2017; Yang et al., 2016; Zhang & Ghanem, 2018)."
},
{
"text": "For example, Yang et al."
},
{
"text": "(2016) have proposed a deep network architecture for compressed s... | [
{
"text": "Optimization algorithms can provide insight and guidance in the design of deep network architectures (Vogel & Pock, 2017; Kobler et al., 2017; Yang et al., 2016; Zhang & Ghanem, 2018)."
},
{
"text": "For example, Yang et al."
},
{
"text": "(2016) have proposed a deep network architect... |
wxzHtn7XId.d4DKAyZjOj.00 | Optimization Dilemma in OOD Algorithms. Along with the developments of OOD methods, the optimization dilemma in OOD generalization is gradually perceived in the literature, and raises new puzzles to the community. In fact, several recent works also notice the optimization dilemma in OOD algorithms, specifically, the tr... | Optimization Dilemma in OOD Algorithms. Along with the developments of OOD methods, the optimization dilemma in OOD generalization is gradually perceived in the literature, and raises new puzzles to the community. In fact, several recent works also notice the optimization dilemma in OOD algorithms, specifically, the tr... | {
"annotation": [
"Content_addition"
],
"instruction": "",
"annotator": "annotator_07"
} | null | wxzHtn7XId | d4DKAyZjOj | 0 | [
{
"text": "Optimization Dilemma in OOD Algorithms."
},
{
"text": "Along with the developments of OOD methods, the optimization dilemma in OOD generalization is gradually perceived in the literature, and raises new puzzles to the community."
},
{
"text": "In fact, several recent works also notice... | [
{
"text": "Optimization Dilemma in OOD Algorithms."
},
{
"text": "Along with the developments of OOD methods, the optimization dilemma in OOD generalization is gradually perceived in the literature, and raises new puzzles to the community."
},
{
"text": "In fact, several recent works also notice... |
HWNjBFvR-q.BTfXOtvRW9.00 | We thank Huan Zhang for helpful discussions and Alex Wang for helpful comments on a draft of this work. CW was supported by a NSF Graduate Research Fellowship. Toyota Research Institute provided funds to support this work. | We thank Huan Zhang for helpful discussions and Alex Wang for helpful comments on a draft of this work. CW was supported by a NSF Graduate Research Fellowship. Portions of this work were supported by funds from Toyota Research Institute and the Bosch Center for AI. | {
"annotation": [
"Content_addition"
],
"instruction": "",
"annotator": "annotator_03"
} | {
"annotation": [
"Development"
],
"instruction": "",
"annotator": "annotator_07"
} | HWNjBFvR-q | BTfXOtvRW9 | 0 | [
{
"text": "We thank Huan Zhang for helpful discussions and Alex Wang for helpful comments on a draft of this work."
},
{
"text": "CW was supported by a NSF Graduate Research Fellowship."
},
{
"text": " Toyota Research Institute provided funds to support this work."
}
] | [
{
"text": "We thank Huan Zhang for helpful discussions and Alex Wang for helpful comments on a draft of this work."
},
{
"text": "CW was supported by a NSF Graduate Research Fellowship."
},
{
"text": "Portions of this work were supported by funds from Toyota Research Institute and the Bosch Cent... |
W6V9WgTOwm.kXnvpTSqMp.00 | Discussion. In this paper, we show that ID-calibrated ensembles, a simple method of calibrating a standard and robust model only on ID data and then ensembling them, can eliminate the tradeoff between in-distribution (ID) and out-of-distribution (OOD) accuracy on a wide range of natural shifts. We hope that this leads ... | Conclusion and Future Work. In this paper, we show that ID-calibrated ensembles, a simple method of calibrating a standard and robust model only on ID data and then ensembling them, can eliminate the tradeoff between in-distribution (ID) and out-of-distribution (OOD) accuracy on a wide range of natural shifts. We hope ... | {
"annotation": [
"Rewriting_light"
],
"instruction": "Rename this section to a more appropriate title.",
"annotator": "annotator_03"
} | {
"annotation": [
"Rewriting_light"
],
"instruction": "Rename the section \"Conclusion and Future Work\"",
"annotator": "annotator_07"
} | W6V9WgTOwm | kXnvpTSqMp | 0 | [
{
"text": "Discussion."
},
{
"text": "In this paper, we show that ID-calibrated ensembles, a simple method of calibrating a standard and robust model only on ID data and then ensembling them, can eliminate the tradeoff between in-distribution (ID) and out-of-distribution (OOD) accuracy on a wide range o... | [
{
"text": "Conclusion and Future Work."
},
{
"text": "In this paper, we show that ID-calibrated ensembles, a simple method of calibrating a standard and robust model only on ID data and then ensembling them, can eliminate the tradeoff between in-distribution (ID) and out-of-distribution (OOD) accuracy o... |
nCTSF9BQJ.DGhBYSP_sR.11 | Overview Our method consists of three parts. The core component is the rotamer density estimator (RDE), a conditional normalizing flow that models the probability density of sidechain conformations (rotamers) given the amino acid type and environments (Section 3.2). Next is the algorithm for estimating the entropy of ... | Overview Our method comprises three main components. The first is the Rotamer Density Estimator (RDE), which is a conditional normalizing flow that models the probability density of sidechain conformations (rotamers) based on the amino acid type and backbone structures (Section 3.2). The second component is an algorith... | {
"annotation": [
"Rewriting_light"
],
"instruction": "Generate a more formal version of this paragraph",
"annotator": "annotator_01"
} | {
"annotation": [
"Rewriting_light"
],
"instruction": "Replace all mentions of amino acid by 'residue'. Revise this paragraph for clarity.",
"annotator": "annotator_07"
} | nCTSF9BQJ | DGhBYSP_sR | 11 | [
{
"text": "Overview Our method consists of three parts."
},
{
"text": "The core component is the rotamer density estimator (RDE), a conditional normalizing flow that models the probability density of sidechain conformations (rotamers) given the amino acid type and environments (Section 3.2)."
},
{
... | [
{
"text": "Overview Our method comprises three main components."
},
{
"text": "The first is the Rotamer Density Estimator (RDE), which is a conditional normalizing flow that models the probability density of sidechain conformations (rotamers) based on the amino acid type and backbone structures (Section... |
aomiOZE_m2.rxb2TiQ6bq.00 | Lightweight image super-resolution (SR) networks have obtained promising results with moderate model size. However, they are impractical or neglected to be extended to larger networks. At the same time, model compression techniques, like neural architecture search and knowledge distillation, typically consume considera... | Several image super-resolution (SR) networks have been proposed of late for efficient SR, achieving promising results. However, they are still not lightweight enough and neglect to be extended to larger networks. At the same time, model compression techniques, like neural architecture search and knowledge distillation, ... | {
"annotation": [
"Rewriting_medium"
],
"instruction": "Replace all occurrences of SRPN-L with SRPN-Lite. Improve the english of this paragraph.",
"annotator": "annotator_02"
} | {
"annotation": [
"Rewriting_medium"
],
"instruction": "Replace SRPN-L by SPRN-Lite. Make the first and last sentence more fitting to the academic style.",
"annotator": "annotator_07"
} | aomiOZE_m2 | rxb2TiQ6bq | 0 | [
{
"text": "Lightweight image super-resolution (SR) networks have obtained promising results with moderate model size."
},
{
"text": "However, they are impractical or neglected to be extended to larger networks."
},
{
"text": "At the same time, model compression techniques, like neural architectu... | [
{
"text": "Several image super-resolution (SR) networks have been proposed of late for efficient SR, achieving promising results."
},
{
"text": "However, they are still not lightweight enough and neglect to be extended to larger networks."
},
{
"text": "At the same time, model compression techniq... |
_VWsQJEH-X3.Tr4NZOz3iN.00 | For all authors... (a) Do the main claims made in the abstract and introduction accurately reflect the paper’s contributions and scope? [Yes] See Section (b) Did you describe the limitations of your work? [Yes] See Section (c) Did you discuss any potential negative societal impacts of your work? [No] (d) Have you read t... | For all authors... (a) Do the main claims made in the abstract and introduction accurately reflect the paper’s contributions and scope? [Yes] See Section(b) Did you describe the limitations of your work? [Yes] See Section(c) Did you discuss any potential negative societal impacts of your work? [No] (d) Have you read th... | {
"annotation": [
"Unusable"
],
"instruction": "",
"annotator": "annotator_07"
} | null | _VWsQJEH-X3 | Tr4NZOz3iN | 0 | [
{
"text": "For all authors... (a) Do the main claims made in the abstract and introduction accurately reflect the paper’s contributions and scope?"
},
{
"text": "[Yes] See Section"
},
{
"text": "(b) Did you describe the limitations of your work?"
},
{
"text": "[Yes] See Section"
},
{
... | [
{
"text": "For all authors... (a) Do the main claims made in the abstract and introduction accurately reflect the paper’s contributions and scope?"
},
{
"text": "[Yes] See Section(b)"
},
{
"text": "Did you describe the limitations of your work?"
},
{
"text": "[Yes] See Section(c)"
},
... |
H1O5OQGfz.SksTSgdMG.00 | We conduct a preliminary investigation of this issue by studying the generalizability of KD, BU and LID for detecting previously unseen attack strategies on the CIFAR-10 dataset. The KD, BU and LID detectors are trained on samples of the simplest attack strategy, FGM, and then tested on samples of the more complex atta... | We conduct a preliminary investigation of this issue by studying the generalizability of KD, BU and LID for detecting previously unseen attack strategies on the CIFAR-10 dataset. The KD, BU and LID detectors are trained on samples of the simplest attack strategy, FGM, and then tested on samples of the more complex atta... | {
"annotation": [
"Development"
],
"instruction": "",
"annotator": "annotator_08"
} | {
"annotation": [
"Development"
],
"instruction": "",
"annotator": "annotator_02"
} | H1O5OQGfz | SksTSgdMG | 0 | [
{
"text": "We conduct a preliminary investigation of this issue by studying the generalizability of KD, BU and LID for detecting previously unseen attack strategies on the CIFAR-10 dataset."
},
{
"text": "The KD, BU and LID detectors are trained on samples of the simplest attack strategy, FGM, and then ... | [
{
"text": "We conduct a preliminary investigation of this issue by studying the generalizability of KD, BU and LID for detecting previously unseen attack strategies on the CIFAR-10 dataset."
},
{
"text": "The KD, BU and LID detectors are trained on samples of the simplest attack strategy, FGM, and then ... |
u9NaukzyJ-.hh0KECXQLv.14 | A , three with Design B, and 10 with Design C . To complete this task, participants could rely on the bars that indicate allowed medication intake times with Design A and Design C . With Design B , the marker on the medication entry indicated the allowed time. As most participants were not successful in completing this... | B . To complete this task, participants could rely on the bars that indicate allowed medication intake times with Design A and Design C . With Design B , the marker on the medication entry indi- cated the allowed time and five participants said Design B did not support that task. For example, P1 said “it doesn’t show a... | {
"annotation": [
"Concision"
],
"instruction": "Make this paragraph much more concise.",
"annotator": "annotator_03"
} | {
"annotation": [
"Concision"
],
"instruction": "Please write more concisely about design B.",
"annotator": "annotator_09"
} | u9NaukzyJ- | hh0KECXQLv | 14 | [
{
"text": "A , three with Design B, and 10 with Design C ."
},
{
"text": "To complete this task, participants could rely on the bars that indicate allowed medication intake times with Design A and Design C ."
},
{
"text": "With Design B , the marker on the medication entry indicated the allowed ... | [
{
"text": "B ."
},
{
"text": "To complete this task, participants could rely on the bars that indicate allowed medication intake times with Design A and Design C ."
},
{
"text": "With Design B , the marker on the medication entry indi- cated the allowed time and five participants said Design B d... |
atxti8SVk.3K9AmPwALM.09 | Semantic co-occurrence. Semantic context characterizes the co-occurrences of different objects and can be used to group and separate pixels. We define semantic context as the union of object classes in each image. Even without the location of labels, we can leverage semantic context to impose global regularization in ... | Semantic co-occurrence. Semantic context characterizes the co-occurrences of different objects, which can be used as a prior to group and separate pixels. We define semantic context as the union of object classes in each image. Even without the pixel-wise localization of semantic labels, we can leverage semantic context... | {
"annotation": [
"Concision"
],
"instruction": "Rewrite this paragraph to be more concise.",
"annotator": "annotator_03"
} | {
"annotation": [
"Rewriting_light",
"Concision"
],
"instruction": "Split the last sentence and make it slightly shorter. Improve the english.",
"annotator": "annotator_07"
} | atxti8SVk | 3K9AmPwALM | 9 | [
{
"text": "Semantic co-occurrence."
},
{
"text": "Semantic context characterizes the co-occurrences of different objects and can be used to group and separate pixels."
},
{
"text": "We define semantic context as the union of object classes in each image."
},
{
"text": "Even without the l... | [
{
"text": "Semantic co-occurrence."
},
{
"text": "Semantic context characterizes the co-occurrences of different objects, which can be used as a prior to group and separate pixels."
},
{
"text": "We define semantic context as the union of object classes in each image."
},
{
"text": "Even ... |
CVRUl83zah.I75TtW0V7.14 | Results Table 2 shows our results. iDSPN is close to solving the problem at any training set size by being exclusively multiset-equivariant like DSPN. As expected, the set-equivariant models are unable to solve this task because they cannot map the equal elements in the input to different elements in the output. This a... | Results. Table 2 shows our results. The two exclusively multiset-equivariant models DSPN and iDSPN perform well at any training set size. As expected, the set-equivariant models are unable to solve this task because they cannot map the equal elements in the input to different elements in the output. This applies even t... | {
"annotation": [
"Rewriting_medium"
],
"instruction": "Simplify the second sentence.",
"annotator": "annotator_09"
} | {
"annotation": [
"Rewriting_medium"
],
"instruction": "Rewrite the 2nd sentence to make it easier to read and less confusing.",
"annotator": "annotator_07"
} | CVRUl83zah | I75TtW0V7 | 14 | [
{
"text": "Results Table 2 shows our results."
},
{
"text": " iDSPN is close to solving the problem at any training set size by being exclusively multiset-equivariant like DSPN."
},
{
"text": "As expected, the set-equivariant models are unable to solve this task because they cannot map the equal... | [
{
"text": "Results. Table 2 shows our results."
},
{
"text": "The two exclusively multiset-equivariant models DSPN and iDSPN perform well at any training set size."
},
{
"text": "As expected, the set-equivariant models are unable to solve this task because they cannot map the equal elements in t... |
7_CwM-IzWd.zcm6f5HDI.12 | Colored-and-gray-MNIST (Kim et al., 2019) is a synthetic dataset based on MNIST (LeCun et al., 1998). In the training set of 60,000 examples, each example has two images, a gray-scale image and a monochromatic image, with color strongly correlated with its digit label. For the validation set of 10,000 examples, each ex... | Colored-and-gray-MNIST (Kim et al., 2019) is a synthetic dataset based on MNIST (LeCun et al., 1998). In the training set of 60,000 examples, each example has two images, a gray-scale image and a monochromatic image, with color strongly correlated with its digit label. For the validation set of 10,000 examples, each ex... | {
"annotation": [
"Unusable"
],
"instruction": "",
"annotator": "annotator_05"
} | {
"annotation": [
"Development"
],
"instruction": "",
"annotator": "annotator_07"
} | 7_CwM-IzWd | zcm6f5HDI | 12 | [
{
"text": "Colored-and-gray-MNIST (Kim et al., 2019) is a synthetic dataset based on MNIST (LeCun et al., 1998)."
},
{
"text": "In the training set of 60,000 examples, each example has two images, a gray-scale image and a monochromatic image, with color strongly correlated with its digit label."
},
... | [
{
"text": "Colored-and-gray-MNIST (Kim et al., 2019) is a synthetic dataset based on MNIST (LeCun et al., 1998)."
},
{
"text": "In the training set of 60,000 examples, each example has two images, a gray-scale image and a monochromatic image, with color strongly correlated with its digit label."
},
... |
JKGrgCQWiO.vNgWh1ZGc.00 | In this work, we develop a novel third family of attacks, recursive gradient attack on privacy (RGAP), that is based on a recursive, depth-wise algorithm for recovering training data from gradient information. Different from the analytical attack using the bias term, R-GAP utilizes much more information and is the first... | In this work, we develop a novel third family of attacks, recursive gradient attack on privacy (RGAP), that is based on a recursive, depth-wise algorithm for recovering training data from gradient information. Different from the analytical attack using the bias term, R-GAP utilizes much more information and is the first... | {
"annotation": [
"Content_addition"
],
"instruction": "",
"annotator": "annotator_07"
} | null | JKGrgCQWiO | vNgWh1ZGc | 0 | [
{
"text": "In this work, we develop a novel third family of attacks, recursive gradient attack on privacy (RGAP), that is based on a recursive, depth-wise algorithm for recovering training data from gradient information."
},
{
"text": "Different from the analytical attack using the bias term, R-GAP util... | [
{
"text": "In this work, we develop a novel third family of attacks, recursive gradient attack on privacy (RGAP), that is based on a recursive, depth-wise algorithm for recovering training data from gradient information."
},
{
"text": "Different from the analytical attack using the bias term, R-GAP util... |
CVRUl83zah.I75TtW0V7.16 | Finally, we evaluate iDSPN on the CLEVR (Johnson et al., 2017) object property prediction task that was used in DSPN (Zhang et al., 2019) and Slot Attention (Locatello et al., 2020). Given an image of a synthetic 3d scene containing several objects, the goal is to predict the set of their properties: 3d coordinate, siz... | Finally, we evaluate iDSPN on the CLEVR (Johnson et al., 2017) object property prediction task that was used in DSPN (Zhang et al., 2019) and Slot Attention (Locatello et al., 2020). Given an image of a synthetic 3d scene containing up to ten objects, the goal is to predict the set of their properties: 3d coordinate, s... | {
"annotation": [
"Development",
"Content_addition"
],
"instruction": "",
"annotator": "annotator_07"
} | null | CVRUl83zah | I75TtW0V7 | 16 | [
{
"text": "Finally, we evaluate iDSPN on the CLEVR (Johnson et al., 2017) object property prediction task that was used in DSPN (Zhang et al., 2019) and Slot Attention (Locatello et al., 2020)."
},
{
"text": "Given an image of a synthetic 3d scene containing several objects, the goal is to predict the s... | [
{
"text": "Finally, we evaluate iDSPN on the CLEVR (Johnson et al., 2017) object property prediction task that was used in DSPN (Zhang et al., 2019) and Slot Attention (Locatello et al., 2020)."
},
{
"text": "Given an image of a synthetic 3d scene containing up to ten objects, the goal is to predict the... |
kJOgIGrJMU.jC95Y2Lt4f.00 | Pratt, 1998) and (multi-source) domain adaptation. In fact, there are a few works in domain generalization that are inspired by the meta-learning principles, such as Li et al. (2018a) ; Balaji et al. ; Li et al. Dou et al. In Ren et al. (2018), we also see the leveraging of gradient inner product in meta-learning, wher... | Pratt, 1998) and (multi-source) domain adaptation. In fact, there are a few works in domain generalization that are inspired by the meta-learning principles, such as Li et al. (2018a); Balaji et al. ; Li et al. Dou et al. Specifically, Li et al. (2020) also proposes to adapt Reptile for domain generalization tasks, howe... | {
"annotation": [
"Development"
],
"instruction": "",
"annotator": "annotator_07"
} | null | kJOgIGrJMU | jC95Y2Lt4f | 0 | [
{
"text": "Pratt, 1998) and (multi-source) domain adaptation."
},
{
"text": "In fact, there are a few works in domain generalization that are inspired by the meta-learning principles, such as Li et al."
},
{
"text": "(2018a)"
},
{
"text": "; Balaji et al."
},
{
"text": "; Li et a... | [
{
"text": "Pratt, 1998) and (multi-source) domain adaptation."
},
{
"text": "In fact, there are a few works in domain generalization that are inspired by the meta-learning principles, such as Li et al."
},
{
"text": "(2018a);"
},
{
"text": "Balaji et al."
},
{
"text": "; Li et al... |
hegI87bI5S.fL6Q48sfx8.19 | Twelve local university students from a different participants group from experiment 1 participated in this experiment. The average age was 22.3 years ( SD = 1 . 67). All participants were skillful in mouse operation and used their dominant right hand. | A total of 12 local university students from a different participants group from that in Experiment 1 participated in this experiment. The average age was 22.3 years ( SD = 1 . 67). All participants were skilled in mouse operation and used their dominant hand (right hand). | {
"annotation": [
"Rewriting_light"
],
"instruction": "Rephrase the paragraph",
"annotator": "annotator_06"
} | {
"annotation": [
"Rewriting_light"
],
"instruction": "Revise this text to make it more clear.",
"annotator": "annotator_07"
} | hegI87bI5S | fL6Q48sfx8 | 19 | [
{
"text": "Twelve local university students from a different participants group from experiment 1 participated in this experiment."
},
{
"text": "The average age was 22.3 years ( SD = 1 . 67)."
},
{
"text": "All participants were skillful in mouse operation and used their dominant right hand."
... | [
{
"text": "A total of 12 local university students from a different participants group from that in Experiment 1 participated in this experiment."
},
{
"text": "The average age was 22.3 years ( SD = 1 . 67)."
},
{
"text": "All participants were skilled in mouse operation and used their dominant ... |
OV5v_wBMHk.bw4cqlpLh.18 | Reweighting methods weight individuals with balanced score to obtain globally balanced distributions, represented by the inverse propensity score (IPS) approach (Rosenbaum & Rubin, 1983a) and its doubly robust variant (Robins et al., 1994). Imai & Ratkovic (2014) and Fong et al. (2018) propose to calculate the balancin... | Reweighting-based methods weight individuals with balanced scores to achieve globally balanced distributions, represented by the inverse propensity score (IPS) approach (Rosenbaum & Rubin, 1983a) and its doubly robust variant (Robins et al., 1994). Imai & Ratkovic (2014) and Fong et al. (2018) propose calculating the b... | {
"annotation": [
"Rewriting_medium"
],
"instruction": "Use formal words in the last sentence.",
"annotator": "annotator_08"
} | {
"annotation": [
"Rewriting_light"
],
"instruction": "Reorder the last sentence arguments. Make this paragraph a bit more precise.",
"annotator": "annotator_07"
} | OV5v_wBMHk | bw4cqlpLh | 18 | [
{
"text": "To avoid the significant variance in the IHDP benchmark, we conduct ablation study on the ACIC benchmark, to evaluate the effectiveness of ESCFR’s components and validate our claims in Section 3."
},
{
"text": "In Table 2, ESCFR firstly augments TARNet with stochastic optimal transport in Se... | [
{
"text": "To verify the effectiveness of individual components, an ablation study is conducted on the ACIC benchmark in Table 2."
},
{
"text": "Specifically, ESCFR first augments TARNet with stochastic optimal transport in Section 3.1, which effectively reduces the out-of-sample PEHE from 3.254 to 3.20... |
X50LVGSli.jqJzurpUu.02 | MVC problem respectively. In both problems and across the five datasets, Meta-EGN siginificantly outperforms EGN and RUN-CSP, both before and after the fine-tuning step. In comparison with the traditional CO solvers, Meta-EGN narrows the gap from Gurobi9.5 on those real small graphs. For RB graphs, Meta-EGN outperforms... | MVC problem respectively. In both problems and across the five datasets, Meta-EGN significantly outperforms EGN and RUN-CSP, both before and after the fine-tuning step. In comparison with the traditional CO solvers, Meta-EGN narrows the gap from Gurobi9.5 on those real small graphs. For RB graphs, Meta-EGN outperforms ... | {
"annotation": [
"Concision"
],
"instruction": "Fuse the last two sentences for conciseness.",
"annotator": "annotator_02"
} | {
"annotation": [
"Concision"
],
"instruction": "Merge the two last sentences to make it shorter.",
"annotator": "annotator_07"
} | X50LVGSli | jqJzurpUu | 2 | [
{
"text": "MVC problem respectively."
},
{
"text": "In both problems and across the five datasets, Meta-EGN siginificantly outperforms EGN and RUN-CSP, both before and after the fine-tuning step."
},
{
"text": "In comparison with the traditional CO solvers, Meta-EGN narrows the gap from Gurobi9.... | [
{
"text": "MVC problem respectively."
},
{
"text": "In both problems and across the five datasets, Meta-EGN significantly outperforms EGN and RUN-CSP, both before and after the fine-tuning step."
},
{
"text": "In comparison with the traditional CO solvers, Meta-EGN narrows the gap from Gurobi9.5... |
Rd7TGMaUy.dkY5HcKwZ1.02 | A curious feature of our model is that during training one has to back-propagate over the gradient of target distribution multiple times to optimize R . In (Titsias & Dellaportas, 2019), the authors avoid multiple back-propagation by stopping the derivative calculation at the density gradient term. In our experiment, ... | A curious feature of our model is that during training one has to back-propagate over the gradient of the target distribution multiple times to optimize R . In (Titsias & Dellaportas, 2019) the authors avoid multiple back-propagation by stopping the derivative calculation at the density gradient term. In our experiment... | {
"annotation": [
"Content_addition"
],
"instruction": "",
"annotator": "annotator_07"
} | null | Rd7TGMaUy | dkY5HcKwZ1 | 2 | [
{
"text": "A curious feature of our model is that during training one has to back-propagate over the gradient of target distribution multiple times to optimize R ."
},
{
"text": "In (Titsias & Dellaportas, 2019), the authors avoid multiple back-propagation by stopping the derivative calculation at the ... | [
{
"text": "A curious feature of our model is that during training one has to back-propagate over the gradient of the target distribution multiple times to optimize R ."
},
{
"text": "In (Titsias & Dellaportas, 2019) the authors avoid multiple back-propagation by stopping the derivative calculation at th... |
7_CwM-IzWd.zcm6f5HDI.19 | Results As shown in Figure 3, | d util ( f ) | increases along λ , especially when log( λ ) ≥ − 5 and | d util ( f ) | is positively correlated with R ( f ) . In other words, the stronger the regularization, the larger the imbalance in utilization between modalities we obverse. It confirms the second conjecture in §3.2,... | Results As shown in Figure 3, | d util ( f ) | increases along λ , especially when log( λ ) ≥ − 5 . We also see that | d util ( f ) | is positively correlated with R ( f ) . In other words, the stronger the regularization is, the larger the imbalance in utilization between modalities we observe. We see that | d speed |... | {
"annotation": [
"Content_deletion",
"Rewriting_medium"
],
"instruction": "Split first sentence in two and delete the third sentence",
"annotator": "annotator_06"
} | {
"annotation": [
"Concision",
"Rewriting_light"
],
"instruction": "Exclude redundant expression.",
"annotator": "annotator_08"
} | 7_CwM-IzWd | zcm6f5HDI | 19 | [
{
"text": "Results As shown in Figure 3, | d util ( f ) | increases along λ , especially when log( λ ) ≥ − 5 and | d util ( f ) | is positively correlated with R ( f ) ."
},
{
"text": "In other words, the stronger the regularization, the larger the imbalance in utilization between modalities we obverse.... | [
{
"text": "Results As shown in Figure 3, | d util ( f ) | increases along λ , especially when log( λ ) ≥ − 5 . We also see that | d util ( f ) | is positively correlated with R ( f ) ."
},
{
"text": "In other words, the stronger the regularization is, the larger the imbalance in utilization between moda... |
g5N2H6sr7.6J3ec8Dl3p.03 | The classification accuracies on the five benchmarks are shown in Table 1. MLG, as a kernel method, performs well on PROTEINS. However, it suffers from a long run time and takes more than 1 day on two larger datasets, as observed in INFOGRAPH. Our method achieves the best results in 4 out of 5 datasets compared with both... | The classification accuracies on the five benchmarks are shown in Table 4. MLG, as a kernel method, performs well on PROTEINS. However, it suffers from a long run time and takes more than 1 day on two larger datasets, as observed in INFOGRAPH. Our method achieves the best results in 4 out of 5 datasets compared with both... | {
"annotation": [
"Content_addition"
],
"instruction": "",
"annotator": "annotator_07"
} | null | g5N2H6sr7 | 6J3ec8Dl3p | 3 | [
{
"text": "The classification accuracies on the five benchmarks are shown in Table 1."
},
{
"text": "MLG, as a kernel method, performs well on PROTEINS."
},
{
"text": "However, it suffers from a long run time and takes more than 1 day on two larger datasets, as observed in INFOGRAPH."
},
{
... | [
{
"text": "The classification accuracies on the five benchmarks are shown in Table 4."
},
{
"text": "MLG, as a kernel method, performs well on PROTEINS."
},
{
"text": "However, it suffers from a long run time and takes more than 1 day on two larger datasets, as observed in INFOGRAPH."
},
{
... |
IuxfzBFSR0.CSFycBGzvd.01 | Theorem 6.1 applies to the general cost function with ρ set to 0. Note that the regret upper bound depends on the total number of time steps T , which is random. To replace the T -dependence by the | K rather than T . Furthermore, for finite horizon MDPs, the number of steps is equal to T = KH . In this case, the result in 6.1 can avoid the extra factor of B, c min and other logarithmic term. Theorem 6.1 applies to the general cost function with ρ set to 0. Note that the regret upper bound depends on the total numbe... | {
"annotation": [
"Content_addition"
],
"instruction": "",
"annotator": "annotator_06"
} | {
"annotation": [
"Development"
],
"instruction": "",
"annotator": "annotator_08"
} | IuxfzBFSR0 | CSFycBGzvd | 1 | [
{
"text": " Theorem 6.1 applies to the general cost function with ρ set to 0."
},
{
"text": "Note that the regret upper bound depends on the total number of time steps T , which is random."
},
{
"text": "To replace the T -dependence by the"
}
] | [
{
"text": "K rather than T . Furthermore, for finite horizon MDPs, the number of steps is equal to T = KH . In this case, the result in 6.1 can avoid the extra factor of B, c min and other logarithmic term. Theorem 6.1 applies to the general cost function with ρ set to 0."
},
{
"text": "Note that the reg... |
BkxG1CvhWf.wcpE7maMLZ4.02 | First, it is the tightest topological property of state spaces that has been studied in the literature of model checking and planning, as far as we know. Secondly, although the worst-case complexity of computing the diameter for a factored transition system, and succinct digraphs more generally, is Π P 2 -hard (Hemaspa... | First, it is the tightest topological property of state spaces that has been studied. Secondly, although the worst-case complexity of computing the diameter for a succinct graph is Π P 2 -hard (Hemaspaandra et al. 2010), there are practical methods that can compositionally compute upper bounds on the diameter (Baumgart... | {
"annotation": [
"Concision"
],
"instruction": "Remove unnecessary details.",
"annotator": "annotator_09"
} | {
"annotation": [
"Concision"
],
"instruction": "Concise by removing unnecessary details.",
"annotator": "annotator_07"
} | BkxG1CvhWf | wcpE7maMLZ4 | 2 | [
{
"text": "First, it is the tightest topological property of state spaces that has been studied in the literature of model checking and planning, as far as we know."
},
{
"text": "Secondly, although the worst-case complexity of computing the diameter for a factored transition system, and succinct digrap... | [
{
"text": "First, it is the tightest topological property of state spaces that has been studied."
},
{
"text": "Secondly, although the worst-case complexity of computing the diameter for a succinct graph is Π P 2 -hard (Hemaspaandra et al. 2010), there are practical methods that can compositionally comp... |
jyac3IgQ44.f4au9jfat5.00 | Vision transformer. Transformer [42, 5] has recently achieved great success in computer vision [6,1 , 21, 18, 58, 50, 44]. Swin-transformer [21] restricts self-attention to non-overlapping local windowswhile allowing cross-window connection to improve efficiency. SSA [29] divides attention headsinto multiple groups... | Vision transformer. Inspired by the great success of Transformer [35, 4] in NLP, some worksemploy it in the field of computer vision [5, 1, 18, 16, 50, 42, 36]. Swin-transformer [18] increasesefficiency by limiting self-attention computation to non-overlapping local windows while allowing forcross-window connection. SS... | {
"annotation": [
"Rewriting_heavy"
],
"instruction": "Rewrite this paragraph for improved readability and clarity.",
"annotator": "annotator_02"
} | {
"annotation": [
"Rewriting_heavy",
"Development"
],
"instruction": "",
"annotator": "annotator_07"
} | jyac3IgQ44 | f4au9jfat5 | 0 | [
{
"text": "Vision transformer."
},
{
"text": "Transformer [42, 5] has recently achieved great success in computer vision [6,1 , 21, 18, 58, 50, 44]."
},
{
"text": "Swin-transformer [21] restricts self-attention to non-overlapping local windowswhile allowing cross-window connection to improve ... | [
{
"text": "Vision transformer."
},
{
"text": "Inspired by the great success of Transformer [35, 4] in NLP, some worksemploy it in the field of computer vision [5, 1, 18, 16, 50, 42, 36]."
},
{
"text": "Swin-transformer [18] increasesefficiency by limiting self-attention computation to non-overla... |
Rd7TGMaUy.dkY5HcKwZ1.00 | In MCMC, one chooses a transition kernel that leaves the target distribution invariant and constructs a Markov Chain by applying the kernel repeatedly. The MCMC method relies only on the ergodicity assumption. Other than that it is general, if enough computation is performed, the Markov Chain generates correct samples ... | In MCMC, one chooses a transition kernel that leaves the target distribution invariant and constructs a Markov Chain by applying the kernel repeatedly. The MCMC method relies only on the ergodicity assumption, other than that it is general. If enough computation is performed, the Markov chain generates correct samples ... | {
"annotation": [
"Rewriting_light"
],
"instruction": "Rephrase the paragraph",
"annotator": "annotator_06"
} | {
"annotation": [
"Rewriting_medium",
"Rewriting_light"
],
"instruction": "Balance sentences length.",
"annotator": "annotator_07"
} | Rd7TGMaUy | dkY5HcKwZ1 | 0 | [
{
"text": "In MCMC, one chooses a transition kernel that leaves the target distribution invariant and constructs a Markov Chain by applying the kernel repeatedly."
},
{
"text": "The MCMC method relies only on the ergodicity assumption."
},
{
"text": "Other than that it is general, if enough comp... | [
{
"text": "In MCMC, one chooses a transition kernel that leaves the target distribution invariant and constructs a Markov Chain by applying the kernel repeatedly."
},
{
"text": "The MCMC method relies only on the ergodicity assumption, other than that it is general."
},
{
"text": "If enough comp... |
-JRdgpyZWz.82w43do7ak.00 | Choose a [ t ] = w.p. m ] = w.p 1 − m t then end end end are multiple types of arms and train a separate NeurWIN for each type. During testing, the controller calculates the index of each arm based on the arm’s NeurWIN and schedules the M arms with the highest indices. | For such scenarios, we consider that there are multiple types of arms and train a separate NeurWIN for each type. During testing, the controller calculates the index of each arm based on the arm’s state and schedules the M arms with the highest indices. | {
"annotation": [
"Unusable"
],
"instruction": "",
"annotator": "annotator_04"
} | {
"annotation": [
"Unusable"
],
"instruction": "",
"annotator": "annotator_07"
} | -JRdgpyZWz | 82w43do7ak | 0 | [
{
"text": "Choose a [ t ] = w.p. m ] = w.p 1 − m t then end end end are multiple types of arms and train a separate NeurWIN for each type."
},
{
"text": "During testing, the controller calculates the index of each arm based on the arm’s NeurWIN and schedules the M arms with the highest indices."
}
] | [
{
"text": "For such scenarios, we consider that there are multiple types of arms and train a separate NeurWIN for each type."
},
{
"text": "During testing, the controller calculates the index of each arm based on the arm’s state and schedules the M arms with the highest indices."
}
] |
Rd7TGMaUy.dkY5HcKwZ1.01 | To incorporate the gradient of the target distribution into the sampler, we take inspiration from the HMC algorithm. Basic HMC starts with drawing a random initial momentum v 0 , followed by several steps of leapfrog integration. In the following, we denote the momentum variable after n updates by v n and position by x... | The gradient of the target distribution enters our model in those affine transformations. To motivate the particular form we choose, we take a closer look at the HMC algorithm. Basic HMC starts with drawing a random initial momentum v 0 , followed by several steps of leapfrog integration. Let x n be the momentum variabl... | {
"annotation": [
"Development",
"Concision"
],
"instruction": "",
"annotator": "annotator_06"
} | {
"annotation": [
"Development",
"Concision"
],
"instruction": "",
"annotator": "annotator_07"
} | Rd7TGMaUy | dkY5HcKwZ1 | 1 | [
{
"text": "To incorporate the gradient of the target distribution into the sampler, we take inspiration from the HMC algorithm."
},
{
"text": "Basic HMC starts with drawing a random initial momentum v 0 , followed by several steps of leapfrog integration."
},
{
"text": "In the following, we deno... | [
{
"text": "The gradient of the target distribution enters our model in those affine transformations. To motivate the particular form we choose, we take a closer look at the HMC algorithm."
},
{
"text": "Basic HMC starts with drawing a random initial momentum v 0 , followed by several steps of leapfrog in... |
S1BhqsOsB.1mgtDFRDc.03 | We show a visualization of the occupancy grids in Figure 9 (right). We visualize the occupancy grids by converting them to heightmaps. This is achieved by multiplying each voxel’s occupancy value by its height coordinate in the grid, and then taking a max along the grid’s height axis. The visualizations show that the ... | We show a visualization of the estimated occupancy volumes in Figure 9-right. We visualize the occupancy volumes by converting them to heightmaps. This is achieved by multiplying each voxel’s occupancy value by its height coordinate in the grid, and then taking a max along the grid’s height axis. The visualizations sho... | {
"annotation": [
"Concision",
"Rewriting_light"
],
"instruction": "Remove second part of last sentence and Replace \"grids\" by \"volumes\" ",
"annotator": "annotator_06"
} | {
"annotation": [
"Concision",
"Rewriting_light"
],
"instruction": "Delete unnecessary details. Make the text more formal.",
"annotator": "annotator_07"
} | S1BhqsOsB | 1mgtDFRDc | 3 | [
{
"text": "We show a visualization of the occupancy grids in Figure 9 (right)."
},
{
"text": "We visualize the occupancy grids by converting them to heightmaps."
},
{
"text": "This is achieved by multiplying each voxel’s occupancy value by its height coordinate in the grid, and then taking a ma... | [
{
"text": "We show a visualization of the estimated occupancy volumes in Figure 9-right."
},
{
"text": "We visualize the occupancy volumes by converting them to heightmaps."
},
{
"text": "This is achieved by multiplying each voxel’s occupancy value by its height coordinate in the grid, and then ... |
hegI87bI5S.fL6Q48sfx8.07 | Pointing operation to the edge target takes advantage of the cursor stopping at the edge of the screen to complete the pointing without precise control. However, pushing-edge (pushing the cursor to the edge of the screen) behavior increases the distance traveled by the mouse, thereby increasing the movement time. Yaman... | A pointing operation for an edge target exploits the fact that the cur- sor stops at the edge of the screen to complete the pointing without precise control. However, pushing-edge behavior, i.e., pushing the cursor to the edge of the screen, increases the distance traveled by the mouse, and this increases the movement ... | {
"annotation": [
"Rewriting_light"
],
"instruction": "Change some words in this paragraph for the better ",
"annotator": "annotator_10"
} | {
"annotation": [
"Rewriting_medium",
"Rewriting_light"
],
"instruction": "Improve the linking between ideas to make the paragraph more precise and readable.",
"annotator": "annotator_07"
} | hegI87bI5S | fL6Q48sfx8 | 7 | [
{
"text": "Pointing operation to the edge target takes advantage of the cursor stopping at the edge of the screen to complete the pointing without precise control."
},
{
"text": "However, pushing-edge (pushing the cursor to the edge of the screen) behavior increases the distance traveled by the mouse, t... | [
{
"text": "A pointing operation for an edge target exploits the fact that the cur- sor stops at the edge of the screen to complete the pointing without precise control."
},
{
"text": "However, pushing-edge behavior, i.e., pushing the cursor to the edge of the screen, increases the distance traveled by t... |
vhQqjIOkI.IA5eA6BTPs.00 | The second component focuses on modeling the global temporal patterns in the dataset through identifying a small set of temporal global basis functions . The basis time-series, when combined in different ways, can express the individual dynamics of each time series. In our model, the basis time-series are encoded in a ... | The second component focuses on modeling the global temporal patterns in the dataset through identifying a small set of temporal global basis functions . The basis time-series, when combined in different ways, can express the individual dynamics of each time series. In our model, the basis time-series are encoded in a ... | {
"annotation": [
"Content_addition"
],
"instruction": "",
"annotator": "annotator_07"
} | null | vhQqjIOkI | IA5eA6BTPs | 0 | [
{
"text": "The second component focuses on modeling the global temporal patterns in the dataset through identifying a small set of temporal global basis functions ."
},
{
"text": "The basis time-series, when combined in different ways, can express the individual dynamics of each time series."
},
{
... | [
{
"text": "The second component focuses on modeling the global temporal patterns in the dataset through identifying a small set of temporal global basis functions ."
},
{
"text": "The basis time-series, when combined in different ways, can express the individual dynamics of each time series."
},
{
... |
9wfZbn73om.FhHH15YtKt.00 | To this end, we define a kind of ( σ, δ ) -measure to mathematically quantify the data augmentation, and then provide an upper bound of the downstream classification error rate based on the measure. It reveals that the generalization ability of contrastive self-supervised learning is related to three key factors: align... | To this end, we define a kind of ( σ, δ ) -measure to mathematically quantify the data augmentation, and then provide an upper bound of the downstream classification error rate based on the measure. It reveals that the generalization ability of contrastive self-supervised learning is related to three key factors: align... | {
"annotation": [
"Rewriting_light"
],
"instruction": "Use accurate words.",
"annotator": "annotator_08"
} | {
"annotation": [
"Rewriting_light"
],
"instruction": "Make the second half of this paragraph more precise and direct.",
"annotator": "annotator_07"
} | 9wfZbn73om | FhHH15YtKt | 0 | [
{
"text": "To this end, we define a kind of ( σ, δ ) -measure to mathematically quantify the data augmentation, and then provide an upper bound of the downstream classification error rate based on the measure."
},
{
"text": "It reveals that the generalization ability of contrastive self-supervised learn... | [
{
"text": "To this end, we define a kind of ( σ, δ ) -measure to mathematically quantify the data augmentation, and then provide an upper bound of the downstream classification error rate based on the measure."
},
{
"text": "It reveals that the generalization ability of contrastive self-supervised learn... |
5t8NvKONr.tls-ZX2iE.01 | We now introduce the theorem, which offers a guideline on the neural network architecture for operator learning. It suggests that if the entire architecture can be replaced with a fully connected neural network, large complexity should be required for training. It also verifies that the lower bound for a universal acti... | We now introduce the theorem, which offers a guideline on the neural network architecture for operator learning. It suggests that if the entire architecture can be replaced with a fully connected neural network, large complexity should be required for approximating the target function. It also verifies that the lower b... | {
"annotation": [
"Concision",
"Rewriting_light"
],
"instruction": "Remove redundant details. Use more precise words.",
"annotator": "annotator_08"
} | {
"annotation": [
"Rewriting_light"
],
"instruction": "Make it more precise when necessary.",
"annotator": "annotator_07"
} | 5t8NvKONr | tls-ZX2iE | 1 | [
{
"text": "We now introduce the theorem, which offers a guideline on the neural network architecture for operator learning."
},
{
"text": "It suggests that if the entire architecture can be replaced with a fully connected neural network, large complexity should be required for training."
},
{
"t... | [
{
"text": "We now introduce the theorem, which offers a guideline on the neural network architecture for operator learning."
},
{
"text": "It suggests that if the entire architecture can be replaced with a fully connected neural network, large complexity should be required for approximating the target f... |
fJhx73ErBg.NeKLbmOxG8.02 | LogicRiskNet - a differentiable parametric ptSTL risk monitor. We describe how we may construct a ptSTL risk monitor based on a learned stochastic risk measure that can be applied to aprobabilistic description of human behaviors. We extend the definition of robustness degree (introduced in Section 2) to encompass belief... | LogicRiskNet - a differentiable parametric ptSTL risk monitor. We describe how we may construct a ptSTL risk monitor based on a learned stochastic risk measure that can be applied to aprobabilistic description of human behaviors. We extend the definition of robustness degree (introduced in Section 2) to encompass belief... | {
"annotation": [
"Content_addition"
],
"instruction": "",
"annotator": "annotator_01"
} | {
"annotation": [
"Content_addition"
],
"instruction": "",
"annotator": "annotator_07"
} | fJhx73ErBg | NeKLbmOxG8 | 2 | [
{
"text": "LogicRiskNet - a differentiable parametric ptSTL risk monitor."
},
{
"text": "We describe how we may construct a ptSTL risk monitor based on a learned stochastic risk measure that can be applied to aprobabilistic description of human behaviors."
},
{
"text": "We extend the definition o... | [
{
"text": "LogicRiskNet - a differentiable parametric ptSTL risk monitor."
},
{
"text": "We describe how we may construct a ptSTL risk monitor based on a learned stochastic risk measure that can be applied to aprobabilistic description of human behaviors."
},
{
"text": "We extend the definition o... |
7_CwM-IzWd.zcm6f5HDI.07 | According to the greedy learner hypothesis, we can make multi-modal learning less greedy by controlling the speed at which a multi-modal DNN learns to rely on each modality. To this end, we derive conditional learning speed to measure the speed at which the DNN learns from one modality. It serves as an efficient proxy t... | We aim to make multi-modal learning less greedy by controlling the speed at which a multi-modal DNN learns to rely on each modality. To this end, we define conditional learning speed to measure the speed at which the DNN learns from one modality. It serves as an efficient proxy to the conditional utilization rate of the ... | {
"annotation": [
"Development",
"Rewriting_light"
],
"instruction": "",
"annotator": "annotator_08"
} | {
"annotation": [
"Development",
"Rewriting_light"
],
"instruction": "",
"annotator": "annotator_02"
} | 7_CwM-IzWd | zcm6f5HDI | 7 | [
{
"text": "According to the greedy learner hypothesis, we can make multi-modal learning less greedy by controlling the speed at which a multi-modal DNN learns to rely on each modality."
},
{
"text": "To this end, we derive conditional learning speed to measure the speed at which the DNN learns from one ... | [
{
"text": "We aim to make multi-modal learning less greedy by controlling the speed at which a multi-modal DNN learns to rely on each modality."
},
{
"text": "To this end, we define conditional learning speed to measure the speed at which the DNN learns from one modality."
},
{
"text": "It serves... |
Byyb66j52G.hR5KKRfhQm.13 | On the contrary, random convolution can induce a growing difficulty by increasing the number of factors on a single background. Therefore, the generalization rapidly decreases after augmentationinterrupted when training with a single background because the learning direction toward generalization about various backgroun... | On the contrary, random convolution can induce a growing difficulty by increasing the number of factors on a single background. Therefore, the generalization rapidly decreases after augmentation is interrupted during training with a single background because the learning direction toward generalization about various ba... | {
"annotation": [
"Rewriting_light"
],
"instruction": "Add missing spaces.",
"annotator": "annotator_08"
} | {
"annotation": [
"Rewriting_light"
],
"instruction": "Improve the english in the paragraph, make it slightly more formal.",
"annotator": "annotator_07"
} | Byyb66j52G | hR5KKRfhQm | 13 | [
{
"text": "On the contrary, random convolution can induce a growing difficulty by increasing the number of factors on a single background."
},
{
"text": "Therefore, the generalization rapidly decreases after augmentationinterrupted when training with a single background because the learning direction tow... | [
{
"text": "On the contrary, random convolution can induce a growing difficulty by increasing the number of factors on a single background."
},
{
"text": "Therefore, the generalization rapidly decreases after augmentation is interrupted during training with a single background because the learning direct... |
v8Vdrwfrg.Hrx_LZTUq.01 | Dynamic p. during training g ( (cid:101) w t ) (on pruned model) dynamically adapting mask m t + adaptive every few iterations + can recover from premature pruning from a theoretical point of view, and to provide further insights and interpretation. We do not require tuning of additional hyperparameters, and no retrain... | Dynamic p. during training g ( (cid:101) w t ) (on pruned model) dynamically adapting mask m t + adaptive every few iterations + can recover from premature pruning tuning of additional hyperparameters, and no retraining of the sparse model is needed (though can further improve performance). | {
"annotation": [
"Unusable"
],
"instruction": "",
"annotator": "annotator_02"
} | {
"annotation": [
"Unusable"
],
"instruction": "",
"annotator": "annotator_07"
} | v8Vdrwfrg | Hrx_LZTUq | 1 | [
{
"text": "Dynamic p. during training g ( (cid:101) w t ) (on pruned model) dynamically adapting mask m t"
},
{
"text": "+ adaptive every few iterations + can recover from premature pruning from a theoretical point of view, and to provide further insights and interpretation. We do not require tuning of ... | [
{
"text": "Dynamic p. during training g ( (cid:101) w t ) (on pruned model) dynamically adapting mask m t"
},
{
"text": "+ adaptive every few iterations + can recover from premature pruning tuning of additional hyperparameters, and no retraining of the sparse model is needed (though can further improve ... |
yUZ7b8bJWZ.rcOtDrFL7.00 | For example, Cohen et al. ; Rezende & Racani`ere (2021) approximate the OT map to define normalizing flows on Riemannian manifolds, Hamfeldt & Turnquist (2021a;b); Cui et al. (2019) derive algorithms to approximate the OT map on the sphere, Alvarez-Melis et al. HoyosIdrobo (2020) learn the transport map on hyperbolic s... | For example, Cohen et al. Idrobo (2020) learn the transport map on hyperbolic spaces. However, the computational bottleneck to compute the Wasserstein distance on such spaces remains, and, as underlined in the conclusion of (Nadjahi, 2021), defining SW distances on manifolds would be of much interest. Notably, Rustamov... | {
"annotation": [
"Unusable"
],
"instruction": "",
"annotator": "annotator_02"
} | {
"annotation": [
"Unusable"
],
"instruction": "",
"annotator": "annotator_07"
} | yUZ7b8bJWZ | rcOtDrFL7 | 0 | [
{
"text": "For example, Cohen et al."
},
{
"text": "; Rezende & Racani`ere (2021) approximate the OT map to define normalizing flows on Riemannian manifolds, Hamfeldt & Turnquist (2021a;b); Cui et al."
},
{
"text": "(2019) derive algorithms to approximate the OT map on the sphere, Alvarez-Melis ... | [
{
"text": "For example, Cohen et al."
},
{
"text": ""
},
{
"text": ""
},
{
"text": "Idrobo (2020) learn the transport map on hyperbolic spaces."
},
{
"text": "However, the computational bottleneck to compute the Wasserstein distance on such spaces remains, and, as underlined in t... |
CVRUl83zah.I75TtW0V7.21 | We had to exclude 10 runs of DSPN due to significantly worse results making their average uninformative. The following excluded DSPN runs were all using 40 iterations: | We had to exclude 10 runs of DSPN due to significantly worse results making their average uninformative. We did not observe any stability issues with iDSPN, so we did not have to exclude any iDSPN runs. The following excluded DSPN runs were all using 40 iterations: | {
"annotation": [
"Content_addition"
],
"instruction": "",
"annotator": "annotator_07"
} | null | CVRUl83zah | I75TtW0V7 | 21 | [
{
"text": "We had to exclude 10 runs of DSPN due to significantly worse results making their average uninformative."
},
{
"text": ""
},
{
"text": "The following excluded DSPN runs were all using 40 iterations:"
}
] | [
{
"text": "We had to exclude 10 runs of DSPN due to significantly worse results making their average uninformative."
},
{
"text": "We did not observe any stability issues with iDSPN, so we did not have to exclude any iDSPN runs."
},
{
"text": "The following excluded DSPN runs were all using 40 it... |
fDUdAYCQqZy.0cNiGAHFml.01 | Figure 1. VEM uses expectile V -learning (EVL) to learn V -functions while avoiding extrapolation error in the action space. EVL uses an expectile operator that interpolates between Bellman expectation operator and optimality operator to balance behavior cloning and optimal value learning. Further, VEM integrates memor... | Figure 1. VEM uses expectile V -learning (EVL) to learn V -functions while avoiding extrapolation error in the action space confines value learning within the dataset to reduce extrapolation error. EVL uses an expectile operator that interpolates between Bellman expectation operator and optimality operator to balance be... | {
"annotation": [
"Development"
],
"instruction": "",
"annotator": "annotator_07"
} | null | fDUdAYCQqZy | 0cNiGAHFml | 1 | [
{
"text": "Figure 1."
},
{
"text": "VEM uses expectile V -learning (EVL) to learn V -functions while avoiding extrapolation error in the action space."
},
{
"text": "EVL uses an expectile operator that interpolates between Bellman expectation operator and optimality operator to balance behavior ... | [
{
"text": "Figure 1."
},
{
"text": "VEM uses expectile V -learning (EVL) to learn V -functions while avoiding extrapolation error in the action space confines value learning within the dataset to reduce extrapolation error."
},
{
"text": "EVL uses an expectile operator that interpolates between B... |
nCTSF9BQJ.DGhBYSP_sR.06 | Traditional approaches to predicting the effect of mutation on protein binding can be roughly divided into two classes: biophysical methods and statistical methods. Biophysical methods focus on modeling inter-atomic interactions, e.g. hydrogen bonding, electrostatic forces, etc., with mechanical and statistical energy ... | Traditional approaches to predicting the effect of mutation on protein binding can be roughly divided into two classes: biophysical and statistical methods. Biophysical methods utilize energy functions to model inter-atomic interactions. These methods sample conformations of the mutated protein complex and predict chan... | {
"annotation": [
"Concision",
"Content_deletion"
],
"instruction": "Summarize this:",
"annotator": "annotator_01"
} | {
"annotation": [
"Concision"
],
"instruction": "Make this paragraph shorter.",
"annotator": "annotator_07"
} | nCTSF9BQJ | DGhBYSP_sR | 6 | [
{
"text": "Traditional approaches to predicting the effect of mutation on protein binding can be roughly divided into two classes: biophysical methods and statistical methods."
},
{
"text": "Biophysical methods focus on modeling inter-atomic interactions, e.g. hydrogen bonding, electrostatic forces, etc... | [
{
"text": "Traditional approaches to predicting the effect of mutation on protein binding can be roughly divided into two classes: biophysical and statistical methods."
},
{
"text": "Biophysical methods utilize energy functions to model inter-atomic interactions."
},
{
"text": "These methods sam... |
BkVj6Z-AW.SytnTZWCZ.01 | LSTM or RNN networks for this task Fragkiadaki et al. ; Jain et al. ; Bütepage et al. (2017); Martinez et al. and these works produce reasonably realistic output at a number of tasks such as sitting, talking, smoking, etc. However, these existing methods also have a critical drawback: the motion becomes unrealistic wit... | LSTM or RNN networks for this task (Fragkiadaki et al., 2015; Jain et al., 2016; Bütepage et al., 2017; Martinez et al., 2017), and these works produce reasonably realistic output at a number of tasks such as sitting, talking, smoking, etc. However, these existing methods also have a critical drawback: the motion becom... | {
"annotation": [
"Unusable"
],
"instruction": "",
"annotator": "annotator_02"
} | {
"annotation": [
"Unusable"
],
"instruction": "",
"annotator": "annotator_07"
} | BkVj6Z-AW | SytnTZWCZ | 1 | [
{
"text": "LSTM or RNN networks for this task Fragkiadaki et al. ; Jain et al. ; Bütepage et al."
},
{
"text": "(2017); Martinez et al. and these works produce reasonably realistic output at a number of tasks such as sitting, talking, smoking, etc."
},
{
"text": "However, these existing methods ... | [
{
"text": "LSTM or RNN networks for this task (Fragkiadaki et al., 2015; Jain et al., 2016; Bütepage et al., 2017;"
},
{
"text": "Martinez et al., 2017), and these works produce reasonably realistic output at a number of tasks such as sitting, talking, smoking, etc."
},
{
"text": "However, these... |
u9NaukzyJ-.hh0KECXQLv.01 | Electronic calendars have become instrumental in the manage- ment of daily activities [16–18]. They are used to coordinate interactions among individual schedules of family or team members and can convey meaning and values behind the priorities of schedul- ing [19]. Calendars have been used to visualize temporal trends... | Electronic calendars have become instrumental in the manage- ment of daily activities [16–18]. They are used to coordinate interactions among individual schedules of family or team members and can convey meaning and values behind the priorities of scheduling [19]. Calendars have been used to visualize temporal trends t... | {
"annotation": [
"Concision"
],
"instruction": "Rewrite the latter half of this paragraph to make it more concise.",
"annotator": "annotator_02"
} | {
"annotation": [
"Concision",
"Rewriting_light"
],
"instruction": "Merge the two sentences in the middle about integrating prescription management in a new shorter sentence. Improve the english in the last sentence.",
"annotator": "annotator_07"
} | u9NaukzyJ- | hh0KECXQLv | 1 | [
{
"text": "Electronic calendars have become instrumental in the manage- ment of daily activities [16–18]."
},
{
"text": "They are used to coordinate interactions among individual schedules of family or team members and can convey meaning and values behind the priorities of schedul- ing [19]."
},
{
... | [
{
"text": "Electronic calendars have become instrumental in the manage- ment of daily activities [16–18]."
},
{
"text": "They are used to coordinate interactions among individual schedules of family or team members and can convey meaning and values behind the priorities of scheduling [19]."
},
{
... |
RPX7thbt2Mv.PdsbQ4ckYr.00 | Ermon, 2016), require a potentially large number of online samples during training, resulting in poor sample efficiency. Moreover, algorithms such as GAIL follow a similar training paradigm as in the Generative Adversarial Networks (GANs) (Goodfellow et al., 2014), which formulate the problem of IRL as a minimax optimi... | Ermon, 2016), require a potentially large number of online samples during training, resulting in poor sample efficiency. Moreover, algorithms, such as GAIL, follow a training paradigm that is similar to Generative Adversarial Networks (GANs) (Goodfellow et al., 2014), by formulating the problem of IRL as a minimax opti... | {
"annotation": [
"Rewriting_light",
"Development"
],
"instruction": "",
"annotator": "annotator_06"
} | {
"annotation": [
"Rewriting_light",
"Development"
],
"instruction": "",
"annotator": "annotator_08"
} | RPX7thbt2Mv | PdsbQ4ckYr | 0 | [
{
"text": "Ermon, 2016), require a potentially large number of online samples during training, resulting in poor sample efficiency."
},
{
"text": "Moreover, algorithms such as GAIL follow a similar training paradigm as in the Generative Adversarial Networks (GANs) (Goodfellow et al., 2014), which formul... | [
{
"text": "Ermon, 2016), require a potentially large number of online samples during training, resulting in poor sample efficiency."
},
{
"text": "Moreover, algorithms, such as GAIL, follow a training paradigm that is similar to Generative Adversarial Networks (GANs) (Goodfellow et al., 2014), by formul... |
HJ1yWQ-A-.BkSfNXWAb.00 | Humans excel at recognizing objects such as handwritten digits from modified or perturbed inputs. Even if presented with digits which are translated, corrupted, or inverted, we can usually correctly label them without the need of re-learning them from scratch. This may be due to the fact that human intelligence utilize... | Humans are able to recognize objects such as handwritten digits based on distorted inputs. When presented with digits which are translated, corrupted, or inverted, we can usually correctly label them without the need of re-learning them from scratch. The same applies for new objects, essentially after having seen them ... | {
"annotation": [
"Development",
"Rewriting_light"
],
"instruction": "",
"annotator": "annotator_07"
} | null | HJ1yWQ-A- | BkSfNXWAb | 0 | [
{
"text": "Humans excel at recognizing objects such as handwritten digits from modified or perturbed inputs."
},
{
"text": "Even if presented with digits which are translated, corrupted, or inverted, we can usually correctly label them without the need of re-learning them from scratch."
},
{
"te... | [
{
"text": "Humans are able to recognize objects such as handwritten digits based on distorted inputs."
},
{
"text": "When presented with digits which are translated, corrupted, or inverted, we can usually correctly label them without the need of re-learning them from scratch."
},
{
"text": "The ... |
JSl6-2Rvl.EjaJa1fUzn.00 | Limitations and Future Work EAGER assumes the QA system was pre-trained using a preexisting set of example trajectories. Next steps will consist in investigating how to remove this limitation, e.g. by implementing autotelic strategies based on QG/QA learned online. Besides, in this work we tested our method on BabyAI, ... | Limitations and Future Work EAGER assumes the QA system was pre-trained using a preexisting set of example trajectories. Next steps will consist in investigating how to remove this limitation, e.g. by implementing autotelic strategies based on QG/QA learned online. Besides, in this work we tested our method on BabyAI, ... | {
"annotation": [
"Content_addition"
],
"instruction": "",
"annotator": "annotator_06"
} | {
"annotation": [
"Content_addition"
],
"instruction": "",
"annotator": "annotator_08"
} | JSl6-2Rvl | EjaJa1fUzn | 0 | [
{
"text": "Limitations and Future Work EAGER assumes the QA system was pre-trained using a preexisting set of example trajectories."
},
{
"text": "Next steps will consist in investigating how to remove this limitation, e.g. by implementing autotelic strategies based on QG/QA learned online."
},
{
... | [
{
"text": "Limitations and Future Work EAGER assumes the QA system was pre-trained using a preexisting set of example trajectories."
},
{
"text": "Next steps will consist in investigating how to remove this limitation, e.g. by implementing autotelic strategies based on QG/QA learned online."
},
{
... |
xV0XmrSMtk.sYfR73R9z.01 | In the case when a user has no control about the incoming gradient ∆ I ω = − d (cid:96)/ d y , the update ∆ I ωin ker P can be much larger in magnitude than ∆ I ω 1 in Im P . In theory, this is not very problematic, as updates in ker P do not affect the optimization problem. However, in practice, these irrelevant upd... | In the case when a user has no control about the incoming gradient ∆ I ω = − d (cid:96)/ d y , the update ∆ I ωin ker P can be much larger in magnitude than ∆ I ω 1 in Im P . In theory, if updates were applied directly in cost space, this would not be very problematic, as updates in ker P do not affect the optimization... | {
"annotation": [
"Development",
"Rewriting_light"
],
"instruction": "",
"annotator": "annotator_08"
} | {
"annotation": [
"Development"
],
"instruction": "",
"annotator": "annotator_02"
} | xV0XmrSMtk | sYfR73R9z | 1 | [
{
"text": "In the case when a user has no control about the incoming gradient ∆ I ω = − d (cid:96)/ d y , the update ∆ I ωin ker P can be much larger in magnitude than ∆ I ω 1 in Im P ."
},
{
"text": "In theory, this is not very problematic, as updates in ker P do not affect the optimization problem."... | [
{
"text": "In the case when a user has no control about the incoming gradient ∆ I ω = − d (cid:96)/ d y , the update ∆ I ωin ker P can be much larger in magnitude than ∆ I ω 1 in Im P ."
},
{
"text": "In theory, if updates were applied directly in cost space, this would not be very problematic, as updat... |
PTxXw98AiB.3lJGkowFzW.00 | (Chen et al., 2018) devised GCBD method to learn and generate noise in the given noisy images using W-GAN (Arjovsky et al., 2017) and utilized the unpaired clean images to build a supervised training set. Our GAN2GAN is related to (Chen et al., 2018), but we significantly improve their noise learning step and do not use... | Chen et al. (2018) devised GCBD method to learn and generate noise in the given noisy images using W-GAN Arjovsky et al. and utilized the unpaired clean images to build a supervised training set. Our GAN2GAN is related to Chen et al. , but we significantly improve their noise learning step and do not use the clean data ... | {
"annotation": [
"Unusable"
],
"instruction": "",
"annotator": "annotator_02"
} | {
"annotation": [
"Rewriting_light"
],
"instruction": "Integrates quotations into the text.",
"annotator": "annotator_07"
} | PTxXw98AiB | 3lJGkowFzW | 0 | [
{
"text": "(Chen et al., 2018) devised"
},
{
"text": " GCBD method to learn and generate noise in the given noisy images using W-GAN (Arjovsky et al., 2017) and utilized the unpaired clean images to build a supervised training set."
},
{
"text": "Our GAN2GAN is related to (Chen et al., 2018), bu... | [
{
"text": "Chen et al."
},
{
"text": "(2018) devised GCBD method to learn and generate noise in the given noisy images using W-GAN Arjovsky et al. and utilized the unpaired clean images to build a supervised training set."
},
{
"text": "Our GAN2GAN is related to Chen et al. , but we significantly... |
otEbOIweB6.rbCKB0Uy9.00 | Transition reparametrized actions tion reparamtrization does not have to rely on any hierarchical structures in the offline data, and can therefore utilize highly suboptimal datasets (e.g., with random actions). | Transition reparametrized actions imitation without the need to reduce control frequency. Unlike learning temporal abstractions, action reparamtrization does not have to rely on any hierarchical structures in the offline data, and can therefore utilize highly suboptimal datasets (e.g., with random actions). | {
"annotation": [
"Unusable"
],
"instruction": "",
"annotator": "annotator_07"
} | null | otEbOIweB6 | rbCKB0Uy9 | 0 | [
{
"text": "Transition reparametrized actions tion reparamtrization does not have to rely on any hierarchical structures in the offline data, and can therefore utilize highly suboptimal datasets (e.g., with random actions)."
}
] | [
{
"text": "Transition reparametrized actions imitation without the need to reduce control frequency. Unlike learning temporal abstractions, action reparamtrization does not have to rely on any hierarchical structures in the offline data, and can therefore utilize highly suboptimal datasets (e.g., with random act... |
5t8NvKONr.tls-ZX2iE.03 | The proof can be divided into three parts. Firstly, we come up with a neural network approximation ρ NNn of ρ n of which size is O ( w · ϵ − m/rn ) within an error ϵ n . Next, construct a neural network approximation of Φ using the lemma 3. Finally, the inner product is replaced with a neural network as in the inequal... | The proof can be divided into three parts. Firstly, we come up with a neural network approximation ρ NNn of ρ n of which size is O ( w · ϵ − m/rn ) within an error ϵ n . Next, construct a neural network approximation of Φ using the Lemma 3. Finally, the inner product π p n ( β n , τ n ) is replaced with a neural networ... | {
"annotation": [
"Rewriting_light",
"Content_addition"
],
"instruction": "",
"annotator": "annotator_07"
} | null | 5t8NvKONr | tls-ZX2iE | 3 | [
{
"text": "The proof can be divided into three parts."
},
{
"text": "Firstly, we come up with a neural network approximation ρ NNn of ρ n of which size is O ( w · ϵ − m/rn ) within an error ϵ n ."
},
{
"text": "Next, construct a neural network approximation of Φ using the lemma 3."
},
{
... | [
{
"text": "The proof can be divided into three parts."
},
{
"text": "Firstly, we come up with a neural network approximation ρ NNn of ρ n of which size is O ( w · ϵ − m/rn ) within an error ϵ n ."
},
{
"text": "Next, construct a neural network approximation of Φ using the Lemma 3."
},
{
... |
MXi6uEx-hp.rdZfFcGyf9.05 | Action Utility In order to use the relational action representations with RL, we follow the insight of utility network π u from Jain et al. It takes the relational action representations, state, and action summary as input for each action and outputs a utility score reflecting how useful the action is for the curre... | Action Utility : To use the relational action representations with RL, we follow the utility network architecture π u from Jain et al. It takes the relational action representation, the state, and the action set summary as input for each available action in parallel. It outputs a utility score π u ( c Ra , s, ¯ c R ) f... | {
"annotation": [
"Concision",
"Content_addition"
],
"instruction": "",
"annotator": "annotator_04"
} | {
"annotation": [
"Content_addition"
],
"instruction": "",
"annotator": "annotator_01"
} | MXi6uEx-hp | rdZfFcGyf9 | 5 | [
{
"text": "Action Utility In order to use the relational action representations with RL, we follow the insight of utility network π u from Jain et al."
},
{
"text": "It takes the relational action representations, state, and action summary as input for each action and outputs a utility score reflecti... | [
{
"text": "Action Utility : To use the relational action representations with RL, we follow the utility network architecture π u from Jain et al."
},
{
"text": "It takes the relational action representation, the state, and the action set summary as input for each available action in parallel. It outputs... |
S1CMuZFor.H1NchtnoS.00 | Linear networks without activation functions are important subject, and there are a number of theoretical works on the implicit regularization in over-parameterized neural networks mainly focusing on linear models (Ji & Telgarsky, 2018; Gidel et al., 2019; Arora et al., 2019a). In contrast, whole properties of over-par... | Linear networks without activation functions are important subject, and there are a number of theoretical works on the implicit regularization in over-parameterized neural networks mainly focusing on linear models (Ji & Telgarsky, 2018; Gidel et al., 2019; Arora et al., 2019a). In contrast, whole properties of over-par... | {
"annotation": [
"Development"
],
"instruction": "",
"annotator": "annotator_09"
} | {
"annotation": [
"Content_substitution"
],
"instruction": "",
"annotator": "annotator_07"
} | S1CMuZFor | H1NchtnoS | 0 | [
{
"text": "Linear networks without activation functions are important subject, and there are a number of theoretical works on the implicit regularization in over-parameterized neural networks mainly focusing on linear models (Ji & Telgarsky, 2018;"
},
{
"text": "Gidel et al., 2019; Arora et al., 2019a).... | [
{
"text": "Linear networks without activation functions are important subject, and there are a number of theoretical works on the implicit regularization in over-parameterized neural networks mainly focusing on linear models (Ji & Telgarsky, 2018;"
},
{
"text": "Gidel et al., 2019; Arora et al., 2019a).... |
hegI87bI5S.fL6Q48sfx8.16 | Table 1 lists the results of model fitting using all (120) data points. Along with comparing the adjusted R 2 data, we showed the Akaike information criterion ( AIC ) values because of the different number of constants included in the model [2]. A model with higher R 2 and lower AIC was defined as the better model. ... | Table 1 lists the results of model fitting using all 120 data points. We showed the Akaike information criterion ( AIC ) values because of the different number of constants included in the model along with the adjusted R 2 data [2]. A model with a higher adj. R 2 and lower AIC was defined as the better model. When the ... | {
"annotation": [
"Rewriting_medium"
],
"instruction": "Restructured some sentences in this paragraph and merge the last two sentences ",
"annotator": "annotator_10"
} | {
"annotation": [
"Rewriting_medium",
"Rewriting_light"
],
"instruction": "Improve the linking between phrases.",
"annotator": "annotator_07"
} | hegI87bI5S | fL6Q48sfx8 | 16 | [
{
"text": "Table 1 lists the results of model fitting using all (120) data points."
},
{
"text": "Along with comparing the adjusted R 2 data, we showed the Akaike information criterion ( AIC ) values because of the different number of constants included in the model [2]."
},
{
"text": "A model ... | [
{
"text": "Table 1 lists the results of model fitting using all 120 data points."
},
{
"text": "We showed the Akaike information criterion ( AIC ) values because of the different number of constants included in the model along with the adjusted R 2 data [2]."
},
{
"text": "A model with a higher ... |
hegI87bI5S.fL6Q48sfx8.02 | Pointing (point targets such as buttons or icons) should be fast and accurate. The two main factors the affect the movement time are the target size and the distance from the initial position of the cursor to the target [11,19]. The movement time increases as the distance increases and the target size decreases. Furthe... | Pointing, i e., using the cursor to point at targets such as buttons or icons should be fast and accurate. Two factors that affect the movement time are target size and distance from the initial position of the cursor to the target [11,19]. The movement time increases with an increase in distance and a decrease in targ... | {
"annotation": [
"Rewriting_medium"
],
"instruction": "Improve English in this paragraph. Explain more about the experiments",
"annotator": "annotator_10"
} | {
"annotation": [
"Rewriting_medium"
],
"instruction": "Add a sentence to introduce the experiment. Improve the paragraph for better readability.",
"annotator": "annotator_07"
} | hegI87bI5S | fL6Q48sfx8 | 2 | [
{
"text": "Pointing (point targets such as buttons or icons) should be fast and accurate."
},
{
"text": "The two main factors the affect the movement time are the target size and the distance from the initial position of the cursor to the target [11,19]."
},
{
"text": "The movement time increase... | [
{
"text": "Pointing, i e., using the cursor to point at targets such as buttons or icons should be fast and accurate."
},
{
"text": "Two factors that affect the movement time are target size and distance from the initial position of the cursor to the target [11,19]."
},
{
"text": "The movement t... |
u9NaukzyJ-.hh0KECXQLv.20 | Medication entries should have a marker that communicates the time that their reminder will be triggered. The marker should not communicatea time range (as we used in all three designs), but a point in time, such as a minute, when the reminder is triggered. The calendar should have daily summaries. These summaries shou... | Medication entries should have markers that communicate the times that their reminders will be triggered. The markers should not communicate time ranges but points in time when reminders are triggered. The calendar should have daily summaries of the list of medications to be administered each day. These summaries shoul... | {
"annotation": [
"Content_deletion",
"Concision"
],
"instruction": "Heavily remove details from this paragraph to make it more concise.",
"annotator": "annotator_03"
} | {
"annotation": [
"Concision",
"Rewriting_light"
],
"instruction": "Please condense my paragraph related to medication conflicts.",
"annotator": "annotator_09"
} | u9NaukzyJ- | hh0KECXQLv | 20 | [
{
"text": "Medication entries should have a marker that communicates the time that their reminder will be triggered."
},
{
"text": "The marker should not communicatea time range (as we used in all three designs), but a point in time, such as a minute, when the reminder is triggered."
},
{
"text"... | [
{
"text": "Medication entries should have markers that communicate the times that their reminders will be triggered."
},
{
"text": "The markers should not communicate time ranges but points in time when reminders are triggered."
},
{
"text": "The calendar should have daily summaries of the list ... |
nCTSF9BQJ.DGhBYSP_sR.14 | In practice, we stack multiple bijectives to enable more complex transform. The derivative of the composite can be computed efficiently by applying the chain rule. At inference time, the inverse mapping f − 1 ( y ) can be computed efficiently (Rezende et al., 2020). To find the solution of f − 1 ( y ) , the first step... | In practice, we stack multiple bijectives to enable more complex transformation. The derivative of the composite can be computed efficiently using the chain rule. At inference time, we can efficiently compute the inverse mapping f − 1 ( y ) (Rezende et al., 2020): To find the solution of f − 1 ( y ) , the first step is... | {
"annotation": [
"Rewriting_light"
],
"instruction": "Please, review this paragraph, modify only if necessary",
"annotator": "annotator_01"
} | {
"annotation": [
"Rewriting_light"
],
"instruction": "Improve the English of this paragraph.",
"annotator": "annotator_07"
} | nCTSF9BQJ | DGhBYSP_sR | 14 | [
{
"text": "We model the distribution of rotamer with 1 torsional angle with conditional normalizing flows on S 1 (Rezende et al., 2020)."
},
{
"text": "A normalizing flow is basically a bijective function. We choose to parameterize S 1 by an angle θ ∈ [0 , 2 π ] and construct a bijective function on ... | [
{
"text": "To model the distribution of a rotamer with 1 torsional angle, we utilize conditional normalizing flows on S 1 (Rezende et al., 2020)."
},
{
"text": "A normalizing flow is a bijective function, and to construct one on S 1 , we parameterize it by an angle θ ∈ [0 , 2 π ] , and define a bijectiv... |
OV5v_wBMHk.bw4cqlpLh.04 | Optimal transport defines a powerful geometry to measure the distribution discrepancy. Monge first formulated optimal transport asa problem of finding an optimal mapping between two measures. However, this formulation cannot guarantee the existence and uniqueness of solutions. More applicable is Kantorovich’s formulat... | Optimal transport (OT) instantiates distribution discrepancy as the minimum transport cost, which provides a grip for quantifying the treatment selection bias in Figure 1(a). Monge (1781) first formulated OT as finding an optimal mapping between two distributions. However, this formulation cannot guarantee the existenc... | {
"annotation": [
"Rewriting_medium",
"Development"
],
"instruction": "",
"annotator": "annotator_07"
} | null | OV5v_wBMHk | bw4cqlpLh | 4 | [
{
"text": "Optimal transport defines a powerful geometry to measure the distribution discrepancy. Monge first formulated optimal transport asa problem of finding an optimal mapping between two measures."
},
{
"text": "However, this formulation cannot guarantee the existence and uniqueness of solutions.... | [
{
"text": "Optimal transport (OT) instantiates distribution discrepancy as the minimum transport cost, which provides a grip for quantifying the treatment selection bias in Figure 1(a). Monge (1781) first formulated OT as finding an optimal mapping between two distributions."
},
{
"text": "However, this... |
XWtcKUZcZw.EuDy1zb2AN.00 | Scenes based targets namely traversable surfaces and object openability. On the test set, our (Cache) SIR consistently outperforms multiple baselines (Fig 5) including: Auto Encoder on AI2-THOR images, a Navigation agent within AI2-THOR, and Classifier – a CNN trained to classify objects in | Predictions with synthetic images - We use SIRs to predict geometry and appearance-based targets namely depth, surface normals, object class, object depth and object normals, as well as affordancebased targets namely traversable surfaces and object openability. On the test set, our (Cache) SIR consistently outperforms ... | {
"annotation": [
"Unusable"
],
"instruction": "",
"annotator": "annotator_07"
} | null | XWtcKUZcZw | EuDy1zb2AN | 0 | [
{
"text": "Scenes based targets namely traversable surfaces and object openability."
},
{
"text": "On the test set, our (Cache) SIR consistently outperforms multiple baselines (Fig 5) including: Auto Encoder on AI2-THOR images, a Navigation agent within AI2-THOR, and Classifier – a CNN trained to classif... | [
{
"text": "Predictions with synthetic images - We use SIRs to predict geometry and appearance-based targets namely depth, surface normals, object class, object depth and object normals, as well as affordancebased targets namely traversable surfaces and object openability."
},
{
"text": "On the test set,... |
YaBfIcnhEVq.bz9r9z9RFt.00 | We make this assumption for the following reasons. First of all, our offline learning guarantee (Theorem 3.2) provides simultaneously comparison to all the policies, which is stronger than competing with optimal policy only (whereas relaxed assumption suffices, e.g. sup x (cid:62) < ∞ (Uehara & Sun, 2021)). As a consequ... | We make this assumption for the following reasons. First of all, our offline learning guarantee (Theorem 3.2) provides simultaneously comparison to all the policies, which is stronger than competing with optimal policy only (whereas relaxed assumption suffices, e.g. sup x ∈ R d x Σ π(cid:63) x (cid:62) x Σ µ x (cid:62) <... | {
"annotation": [
"Content_addition"
],
"instruction": "",
"annotator": "annotator_01"
} | {
"annotation": [
"Development"
],
"instruction": "",
"annotator": "annotator_07"
} | YaBfIcnhEVq | bz9r9z9RFt | 0 | [
{
"text": "We make this assumption for the following reasons."
},
{
"text": "First of all, our offline learning guarantee (Theorem 3.2) provides simultaneously comparison to all the policies, which is stronger than competing with optimal policy only (whereas relaxed assumption suffices, e.g. sup "
},
... | [
{
"text": "We make this assumption for the following reasons."
},
{
"text": "First of all, our offline learning guarantee (Theorem 3.2) provides simultaneously comparison to all the policies, which is stronger than competing with optimal policy only (whereas relaxed assumption suffices, e.g. sup x ∈ R d x... |
jzQGmT-R1q.ugUt9B3XaO.00 | In this section we show that neural networks progressively lose their ability to quickly fit new targets when trained on sequential prediction tasks (i.e. settings in which the agent must solve a regression problem that iteratively changes over the course of training) including but not limited to those found in value-ba... | In this section we demonstrate conditions under which neural networks progressively lose their ability to quickly fit new targets when trained on sequences of prediction tasks including but not limited to those found in value-based RL. We find that capacity loss is particularly pronounced in sparse prediction tasks, wher... | {
"annotation": [
"Rewriting_light",
"Content_deletion"
],
"instruction": "Rewrite the first sentence. Remove the example to make it shorter.",
"annotator": "annotator_04"
} | {
"annotation": [
"Concision",
"Rewriting_light"
],
"instruction": "Revise the first sentence in a more academic style. Remove unnecessary details.",
"annotator": "annotator_07"
} | jzQGmT-R1q | ugUt9B3XaO | 0 | [
{
"text": "In this section we show that neural networks progressively lose their ability to quickly fit new targets when trained on sequential prediction tasks (i.e. settings in which the agent must solve a regression problem that iteratively changes over the course of training) including but not limited to thos... | [
{
"text": "In this section we demonstrate conditions under which neural networks progressively lose their ability to quickly fit new targets when trained on sequences of prediction tasks including but not limited to those found in value-based RL."
},
{
"text": "We find that capacity loss is particularly p... |
WldWha1MT.LL2ZsGpJga.00 | Image segmentation is a largely researched field where neural networks find vast applications in many facets of technology. Some of the most popular approaches to train segmentation networks employ loss functions optimizing pixel-overlap, an objective that is insufficient for many segmentation tasks. In recent years, t... | Image segmentation is a largely researched field where neural networks find vast applications in many facets of technology. Some of the most popular approaches to train segmentation networks employ loss functions optimizing pixel-overlap, an objective that is insufficient for many segmentation tasks. In recent years, t... | {
"annotation": [
"Development"
],
"instruction": "",
"annotator": "annotator_07"
} | null | WldWha1MT | LL2ZsGpJga | 0 | [
{
"text": "Image segmentation is a largely researched field where neural networks find vast applications in many facets of technology."
},
{
"text": "Some of the most popular approaches to train segmentation networks employ loss functions optimizing pixel-overlap, an objective that is insufficient for m... | [
{
"text": "Image segmentation is a largely researched field where neural networks find vast applications in many facets of technology."
},
{
"text": "Some of the most popular approaches to train segmentation networks employ loss functions optimizing pixel-overlap, an objective that is insufficient for m... |
aomiOZE_m2.rxb2TiQ6bq.01 | Image super-resolution (SR) is a fundamental computer vision task, which aims to recover a high-resolution (HR) image from its low-resolution (LR) counterpart. In general, image SR is ill-posed as a many-to-one mapping problem. To alleviate this problem, plenty of deep convolutional neural networks (CNNs) (Dong et al.,... | Image super-resolution (SR), a classic task in computer vision, aims to recover a high-resolution (HR) image based on its low-resolution (LR) counterpart. Essentially, image SR is ill-posed as a many-to-one mapping problem. To tackle this problem, plenty of deep convolutional neural networks (CNNs) (Dong et al., 2014; ... | {
"annotation": [
"Rewriting_light",
"Content_substitution"
],
"instruction": "Replace the citation to (Tai et al., 2017b) with a citation to (Zhang et al., 2018c; 2020; 2021). Improve the english of this paragraph.",
"annotator": "annotator_02"
} | {
"annotation": [
"Rewriting_light"
],
"instruction": "Make the language of this paragraph a bit more simple.",
"annotator": "annotator_07"
} | aomiOZE_m2 | rxb2TiQ6bq | 1 | [
{
"text": "Image super-resolution (SR) is a fundamental computer vision task, which aims to recover a high-resolution (HR) image from its low-resolution (LR) counterpart."
},
{
"text": "In general, image SR is ill-posed as a many-to-one mapping problem."
},
{
"text": "To alleviate this problem, ... | [
{
"text": "Image super-resolution (SR), a classic task in computer vision, aims to recover a high-resolution (HR) image based on its low-resolution (LR) counterpart."
},
{
"text": "Essentially, image SR is ill-posed as a many-to-one mapping problem."
},
{
"text": "To tackle this problem, plenty ... |
jyac3IgQ44.f4au9jfat5.02 | We gather the non-empty voxels within the query window {+size (e.g. s = (2 , 2 , 2) )+} and apply Chessboard Sampling (CBS) to sample the queries. {+key window sizes (e.g. s =+} For the keys, we gather the non-empty voxels from the key windows of different sizes separately, and {+s = (4 , 4 , 4) ), respectively. ... | In MsSVT block, we gather the voxels with the query window {+size (e.g. s = (2 , 2 , 2) )+} and the {+key window sizes (e.g. s =+} (2 , 2 , 2) and {+s = (4 , 4 , 4) ), respectively. Then, we apply CBS to obtain sampled queries, while employ FPS to+} get sampled keys and values of same number (e.g. N K = 3 ) among... | {
"annotation": [
"Unusable"
],
"instruction": "",
"annotator": "annotator_02"
} | {
"annotation": [
"Unusable"
],
"instruction": "",
"annotator": "annotator_07"
} | jyac3IgQ44 | f4au9jfat5 | 2 | [
{
"text": "Fig."
},
{
"text": "2 illustrates the overall architecture of the MsSVT block. We first gather the query and the keyvoxels via chessboard sampling and balanced multi-window sampling, respectively."
},
{
"text": "The obtainedqueries and keys are then fed into multiple head groups to ... | [
{
"text": "Fig."
},
{
"text": "2 illustrates the overall framework of our proposed MsSVT, consisting of two core modules: abalanced multi-window sampling module and a scale-aware head attention module."
},
{
"text": "We first get thequery, key, and value voxels balanced sampled from multiple win... |
nCTSF9BQJ.DGhBYSP_sR.25 | The rotamer density estimator (RDE) is a generative model for sidechain structures. It can be used to predict sidechain conformations by sampling from the estimated distribution. We use the RDE to sample sidechain torsional angles (rotamers) for structures with sidechains removed in our test split of PDB-REDO. For ea... | RDE is a generative model for protein sidechain structures, which can predict sidechain conformations by sampling from the estimated distribution. We use RDE to sample sidechain torsional angles (rotamers) for structures with 10% sidechains removed in our test split of PDB-REDO. For each residue, 10 rotamers are sample... | {
"annotation": [
"Rewriting_medium"
],
"instruction": "Make this paragraph more fluid.",
"annotator": "annotator_03"
} | {
"annotation": [
"Rewriting_light",
"Concision"
],
"instruction": "Make the beginning of the paragraph more concise. Make the end of the paragraph more fitting to the academic style.",
"annotator": "annotator_07"
} | nCTSF9BQJ | DGhBYSP_sR | 25 | [
{
"text": "Correlation Between Estimated Entropies and B-factors B-factor is an experimental measurement of conformational flexibility."
},
{
"text": "We collect the average b-factor of sidechain atoms of amino acids in the test split of the PDB-REDO dataset."
},
{
"text": "Simultaneously, we us... | [
{
"text": "Correlation Between Estimated Entropy and B-factors B-factor is an experimental measurement that quantifies the conformational flexibility."
},
{
"text": "We calculate the average b-factor of sidechain atoms of residues in the test split of the PDB-REDO dataset."
},
{
"text": "Then, w... |
VRrAfKMSF8.g8rfar4U7.00 | Besides AT and RND, diverse defenses have also been proposed, and it would be interesting to see their results. DENT [19] optimizes the model in test-time, trying to learn the AE distribution. PNI [51] injects noise during training, making the learned weights less sensitive to input perturbations. TRS [60] ensembles th... | Besides AT and RND, diverse defenses have also been proposed. DENT [19] optimizes the model in test-time. PNI [54] injects noise during training. TRS [64] ensembles three models with low attack transferability. They are developed for gradient-based attacks, but also provide protection against SQAs. However, seeing Tabl... | {
"annotation": [
"Content_deletion",
"Rewriting_light"
],
"instruction": "Remove unnecessary details.",
"annotator": "annotator_01"
} | {
"annotation": [
"Concision"
],
"instruction": "Delete unnecessary details, mostly in the two first sentences.",
"annotator": "annotator_07"
} | VRrAfKMSF8 | g8rfar4U7 | 0 | [
{
"text": "Besides AT and RND, diverse defenses have also been proposed, and it would be interesting to see their results."
},
{
"text": "DENT [19] optimizes the model in test-time, trying to learn the AE distribution."
},
{
"text": "PNI [51] injects noise during training, making the learned wei... | [
{
"text": "Besides AT and RND, diverse defenses have also been proposed."
},
{
"text": "DENT [19] optimizes the model in test-time."
},
{
"text": "PNI [54] injects noise during training."
},
{
"text": "TRS [64] ensembles three models with low attack transferability."
},
{
"text":... |
aomiOZE_m2.rxb2TiQ6bq.13 | Lightweight Network. First, we revise EDSR baseline (i.e.16 residual blocks) (Lim et al., 2017) by removing the final Conv layer to save parameters. Same as IMDN (Hui et al., 2019), the reconstruction was done within the pixel-shuffle layer (Shi et al., 2016). We set the channel number in revised EDSR baseline as 256 a... | Lightweight Networks. First, we adapt the EDSR baseline (Lim et al., 2017) with 16 residual blocks by removing the final convolutional layer to save parameters. The reconstruction upscaling is realized by the pixel-shuffle layer (Shi et al., 2016) following common practice. We set the channel number in the revised EDSR b... | {
"annotation": [
"Content_deletion"
],
"instruction": "Please, remove the clarifications that do are not necessary for the development of the idea:",
"annotator": "annotator_01"
} | {
"annotation": [
"Content_deletion"
],
"instruction": "Remove Hui et al. citation and improve writing of the paragraph",
"annotator": "annotator_06"
} | aomiOZE_m2 | rxb2TiQ6bq | 13 | [
{
"text": "Lightweight Network."
},
{
"text": "First, we revise EDSR baseline (i.e.16 residual blocks) (Lim et al., 2017) by removing the final Conv layer to save parameters."
},
{
"text": "Same as IMDN (Hui et al., 2019), the reconstruction was done within the pixel-shuffle layer (Shi et al., 20... | [
{
"text": "Lightweight Networks."
},
{
"text": "First, we adapt the EDSR baseline (Lim et al., 2017) with 16 residual blocks by removing the final convolutional layer to save parameters."
},
{
"text": "The reconstruction upscaling is realized by the pixel-shuffle layer (Shi et al., 2016) following... |
usz0l2mwO.5ie3V0GP-.02 | We evaluate the performance on seven benchmarks on different tasks, including text classification, natural language inference, similarity, and paraphrase detection. For NLI, we experiment with the SNLI (Bowman et al., 2015) and MNLI (Williams et al., 2018) benchmarks. For text classification, we evaluate on two senti... | Datasets We evaluate the performance on seven different benchmarks for multiple tasks, in particular text classification, natural language inference, similarity, and paraphrase detection. For NLI, we experiment with two well-known NLI benchmarks, namely SNLI (Bowman et al., 2015) and MNLI (Williams et al., 2018). For t... | {
"annotation": [
"Development"
],
"instruction": "",
"annotator": "annotator_03"
} | {
"annotation": [
"Development",
"Rewriting_light"
],
"instruction": "",
"annotator": "annotator_07"
} | usz0l2mwO | 5ie3V0GP- | 2 | [
{
"text": " We evaluate the performance on seven benchmarks on different tasks, including text classification, natural language inference, similarity, and paraphrase detection."
},
{
"text": "For NLI, we experiment with the SNLI (Bowman et al., 2015) and MNLI (Williams et al., 2018) benchmarks."
},
... | [
{
"text": "Datasets We evaluate the performance on seven different benchmarks for multiple tasks, in particular text classification, natural language inference, similarity, and paraphrase detection."
},
{
"text": "For NLI, we experiment with two well-known NLI benchmarks, namely SNLI (Bowman et al., 201... |
hegI87bI5S.fL6Q48sfx8.18 | We used different apparatus for experiments 1 and 2, which did not have a significant effect on the conclusions of this study. We used a desktop PC (Intel Core i9-12900KF, GeForce RTX 3070 Ti, 32GB RAM, Windows 10 Home). The display was manufactured by AOPEN (model 25XV2QFbmiiprx; 24.5” diagonal, 1920 × 1080 pixels) a... | We used a different apparatus for both experiments; however this did not have a significant effect on the conclusions of this study. We used a desktop PC (Intel Core i9-12900KF, GeForce RTX 3070 Ti, 32 GB RAM, Windows 10 Home). The display was manufactured by AOPEN (model 25XV2QFbmiiprx; 24.5” diagonal, 1920 × 1080 pix... | {
"annotation": [
"Rewriting_light"
],
"instruction": "Improve the English in this paragraph by choosing better words",
"annotator": "annotator_10"
} | {
"annotation": [
"Rewriting_medium",
"Rewriting_light"
],
"instruction": "Improve the linking between phrases.",
"annotator": "annotator_07"
} | hegI87bI5S | fL6Q48sfx8 | 18 | [
{
"text": "We used different apparatus for experiments 1 and 2, which did not have a significant effect on the conclusions of this study."
},
{
"text": "We used a desktop PC (Intel Core i9-12900KF, GeForce RTX 3070 Ti, 32GB RAM, Windows 10 Home)."
},
{
"text": "The display was manufactured by A... | [
{
"text": "We used a different apparatus for both experiments; however this did not have a significant effect on the conclusions of this study."
},
{
"text": "We used a desktop PC (Intel Core i9-12900KF, GeForce RTX 3070 Ti, 32 GB RAM, Windows 10 Home)."
},
{
"text": "The display was manufacture... |
VWgazAa3VJ.iaVGYcsIw.00 | Recently, many methods to induce sparsity in neural networks have shown that it is possible to train models with an overwhelming fraction of the weights being Molchanov et al. ; Gale et al. ; Frankle and Carbin (2018); Louizos et al. ; Evci et al. ; Zhu and Gupta (2017). Many of these methods gradually decrease the n... | Recently, many methods to induce sparsity in neural networks have shown that it is possible to train models with an overwhelming fraction of the weights being (Molchanov et al., 2017; Gale et al., 2019; Frankle and Carbin, 2019; Louizos et al., 2018; Evci et al., 2019; Zhu and Gupta, 2017). Many of these methods grad... | {
"annotation": [
"Rewriting_medium"
],
"instruction": "Review this paragraph, make it easier to read.",
"annotator": "annotator_01"
} | {
"annotation": [
"Unusable"
],
"instruction": "",
"annotator": "annotator_07"
} | VWgazAa3VJ | iaVGYcsIw | 0 | [
{
"text": "Recently, many methods to induce sparsity in neural networks have shown that it is possible to train models with an overwhelming fraction of the weights being \n"
},
{
"text": "Molchanov et al. ; Gale et al. ; Frankle and Carbin (2018); Louizos et al."
},
{
"text": "; Evci et al. ; Zh... | [
{
"text": "Recently, many methods to induce sparsity in neural networks have shown that it is possible to train models with an overwhelming fraction of the weights being \n"
},
{
"text": "(Molchanov et al., 2017; Gale et al., 2019; Frankle and Carbin, 2019; Louizos et al., 2018;"
},
{
"text": "E... |
dD-sDO1KaC.N7AsVzdCxV.00 | Backdoor-based model watermarking relied on an assumption that the trigger matches hidden backdoors contained in the suspicious model. However, the assumption may not hold since the backdoor may be changed during the stealing process. In this section, we verify this limitation. | Backdoor-based model watermarking relied on an assumption that the trigger matches hidden backdoors contained in the suspicious model. However, the assumption may not hold. In this section, we verify this limitation. | {
"annotation": [
"Content_deletion"
],
"instruction": "Remove information on why the assumption might not hold.",
"annotator": "annotator_03"
} | {
"annotation": [
"Content_deletion"
],
"instruction": "Make the second sentence much shorter, only keep the main idea.",
"annotator": "annotator_07"
} | dD-sDO1KaC | N7AsVzdCxV | 0 | [
{
"text": "Backdoor-based model watermarking relied on an assumption that the trigger matches hidden backdoors contained in the suspicious model."
},
{
"text": "However, the assumption may not hold since the backdoor may be changed during the stealing process."
},
{
"text": "In this section, we ... | [
{
"text": "Backdoor-based model watermarking relied on an assumption that the trigger matches hidden backdoors contained in the suspicious model."
},
{
"text": "However, the assumption may not hold."
},
{
"text": "In this section, we verify this limitation."
}
] |
xV0XmrSMtk.sYfR73R9z.03 | The explanation is that, for ranking-based loss, the incoming gradient does not point toward achievable solutions, as discussed in Sec. 3.2. For more details see Suppl. B.3. We conclude that for the retrieval experiment, Identity does not match the performance of BB. Presumably, this is because the crude approximation ... | The explanation is that, for ranking-based loss, the incoming gradient does not point toward achievable solutions, as discussed in Sec. 3.2. For more details, including additional evaluation of other applicable projections, see Suppl. B.3. We conclude that for the retrieval experiment, Identity does not match the perfo... | {
"annotation": [
"Development"
],
"instruction": "",
"annotator": "annotator_02"
} | {
"annotation": [
"Development"
],
"instruction": "",
"annotator": "annotator_07"
} | xV0XmrSMtk | sYfR73R9z | 3 | [
{
"text": "The explanation is that, for ranking-based loss, the incoming gradient does not point toward achievable solutions, as discussed in Sec. 3.2."
},
{
"text": "For more details see Suppl."
},
{
"text": "B.3."
},
{
"text": "We conclude that for the retrieval experiment, Identity do... | [
{
"text": "The explanation is that, for ranking-based loss, the incoming gradient does not point toward achievable solutions, as discussed in Sec. 3.2."
},
{
"text": "For more details, including additional evaluation of other applicable projections, see Suppl."
},
{
"text": "B.3."
},
{
"... |
MZYBK_Wp2X.HVFitLjAId.03 | Fig. 3a shows that the first two components take about 90% of the explained variance. Fig. 3b shows that these components include only τ 1 , avg degree, and modularity. The fact that n is not used means that the size of the graph as well as the density are not important for choosing the best measure. The fact that τ 2 i... | Fig. 3a shows that the first two components take about 90% of the explained variance. Fig. 3b shows that these components include only τ 1 , avg degree, and modularity. The fact that n is not used means that the size of the graph as well as the density are not of primary importance for choosing the best measure. So is n... | {
"annotation": [
"Content_substitution",
"Rewriting_light"
],
"instruction": "",
"annotator": "annotator_06"
} | {
"annotation": [
"Content_substitution",
"Rewriting_medium"
],
"instruction": "",
"annotator": "annotator_08"
} | MZYBK_Wp2X | HVFitLjAId | 3 | [
{
"text": "We range the measures by their ARI score on every graph of the dataset."
},
{
"text": "The rank is defined as the position of the measure in this list, averaged over the dataset (see Table 2)."
},
{
"text": "It is important to note that the global leadership does not give a comprehensi... | [
{
"text": "We range the measures by their ARI score on every graph of the dataset."
},
{
"text": "The rank is defined as the position of the measure in this list, averaged over the dataset (see Table 2)."
},
{
"text": "It is important to note that the global leadership does not give a comprehensi... |
nCTSF9BQJ.DGhBYSP_sR.04 | Therefore, we seek to predict the change in binding free energy via estimating the change in conformational flexibility . Specifically, our method consists of three main parts. The first part is a conditional generative model built upon normalizing flows on torus for estimating the density of amino acid sidechain con... | Therefore, by comparing the entropy losses of wild-type and mutated protein complexes, we can estimate the effect of mutations on binding affinity. Based on this principle, we introduce a novel approach to predict the impact of amino acid mutations on proteinprotein interaction. The core of our method is the Rotamer De... | {
"annotation": [
"Content_substitution",
"Rewriting_heavy"
],
"instruction": "",
"annotator": "annotator_07"
} | null | nCTSF9BQJ | DGhBYSP_sR | 4 | [
{
"text": "Therefore, we seek to predict the change in binding free energy via estimating the change in conformational flexibility ."
},
{
"text": "Specifically, our method consists of three main parts. The first part is a conditional generative model built upon normalizing flows on torus for estimati... | [
{
"text": "Therefore, by comparing the entropy losses of wild-type and mutated protein complexes, we can estimate the effect of mutations on binding affinity."
},
{
"text": "Based on this principle, we introduce a novel approach to predict the impact of amino acid mutations on proteinprotein interaction... |
IoTyuVEanE.Et-c0vQfeb.00 | Follow-up studies [12] on Snorkel have also shown that Snorkel users tended to select rules by looking at individual labeled instances and seeking to create rules reflecting the patterns they observe in the data. This study further demonstrated that label efficiency can be improved by showing users currently unlabeled in... | Follow-up studies [12] on Snorkel have also shown that Snorkel users tended to select rules by looking at individual labeled instances and seeking to create rules reflecting the patterns they observe in the data. This study further demonstrated that label efficiency can be improved by showing users currently unlabeled in... | {
"annotation": [
"Content_substitution"
],
"instruction": "",
"annotator": "annotator_09"
} | {
"annotation": [
"Content_substitution"
],
"instruction": "",
"annotator": "annotator_07"
} | IoTyuVEanE | Et-c0vQfeb | 0 | [
{
"text": "Follow-up studies [12] on Snorkel have also shown that Snorkel users tended to select rules by looking at individual labeled instances and seeking to create rules reflecting the patterns they observe in the data."
},
{
"text": "This study further demonstrated that label efficiency can be improv... | [
{
"text": "Follow-up studies [12] on Snorkel have also shown that Snorkel users tended to select rules by looking at individual labeled instances and seeking to create rules reflecting the patterns they observe in the data."
},
{
"text": "This study further demonstrated that label efficiency can be improv... |
CVRUl83zah.I75TtW0V7.18 | The only set prediction model we are aware of that can be exclusively multiset-equivariant is DSPN (Zhang et al., 2019; Huang et al., 2020). DESP (Zhang et al., 2021) is exclusively multiset-equivariant but not a standard set predictor. It also uses the Jacobian of sorting, but has the slightly different goal of diver... | The only set prediction model we are aware of that can be exclusively multiset-equivariant is DSPN (Zhang et al., 2019; Huang et al., 2020). Note that Zhang et al. (2021a) also make use of the exclusively multiset-equivariant Jacobian of sorting, but instead focus on learning multimodal densities over sets, which is ta... | {
"annotation": [
"Concision"
],
"instruction": "Make this paragraph more concise by using more direct formulations",
"annotator": "annotator_04"
} | {
"annotation": [
"Concision"
],
"instruction": "Make the paragraph shorter but don't touch at the first sentence.",
"annotator": "annotator_07"
} | CVRUl83zah | I75TtW0V7 | 18 | [
{
"text": "The only set prediction model we are aware of that can be exclusively multiset-equivariant is DSPN (Zhang et al., 2019; Huang et al., 2020). "
},
{
"text": "DESP (Zhang et al., 2021) is exclusively multiset-equivariant but not a standard set predictor. It also uses the Jacobian of sorting, b... | [
{
"text": "The only set prediction model we are aware of that can be exclusively multiset-equivariant is DSPN (Zhang et al., 2019; Huang et al., 2020). Note that Zhang et al."
},
{
"text": "(2021a) also make use of the exclusively multiset-equivariant Jacobian of sorting, but instead focus on learning m... |
txe2sPPkO.id6Xr1pUq.00 | In this section we discuss how SafeNet can be instantiated in practice. There are two aspects the data owners need to agree upon before instantiating SafeNet: i) The MPC framework used for secure training and prediction phase and ii) the parameters in Theorem 6 to achieve poisoning robustness. The MPC framework is agre... | In this section we discuss how SafeNet can be instantiated in practice. There are two aspects the data owners need to agree upon before instantiating SafeNet: i) The MPC framework used for secure training and prediction phase and ii) the parameters in Theorem 6 to achieve poisoning robustness. The owners agree upon the... | {
"annotation": [
"Rewriting_medium"
],
"instruction": "Rewrite the middle sentence of this paragraph to make it clearer.",
"annotator": "annotator_02"
} | {
"annotation": [
"Rewriting_medium"
],
"instruction": "Rewrite the long sentence in the middle sentence to improve clarity.",
"annotator": "annotator_07"
} | txe2sPPkO | id6Xr1pUq | 0 | [
{
"text": "In this section we discuss how SafeNet can be instantiated in practice."
},
{
"text": "There are two aspects the data owners need to agree upon before instantiating SafeNet: i)"
},
{
"text": "The MPC framework used for secure training and prediction phase and ii) the parameters in The... | [
{
"text": "In this section we discuss how SafeNet can be instantiated in practice."
},
{
"text": "There are two aspects the data owners need to agree upon before instantiating SafeNet: i)"
},
{
"text": "The MPC framework used for secure training and prediction phase and ii) the parameters in The... |
NwOG107NKJ.0PPYM22rdB.03 | TERGM models estimated within the markov chain assumption are typically incapable of generating and reproducing realistic dynamics observed in real-world online social networks. We hypothesized that increasing the model’s capacity to describe triadic network properties would reduce the error between the model and empir... | TERGM models estimated within the markov chain assumption are typically incapable of generating and reproducing realistic dynamics observed in real-world online social networks. We hypothesized that increasing the model’s capacity to describe triadic network properties would reduce the error between the model and empir... | {
"annotation": [
"Concision"
],
"instruction": "Remove the information about the code. And remove the last sentence.",
"annotator": "annotator_10"
} | {
"annotation": [
"Content_deletion"
],
"instruction": "Remove the mentions to the code and to other sections.",
"annotator": "annotator_03"
} | NwOG107NKJ | 0PPYM22rdB | 3 | [
{
"text": "TERGM models estimated within the markov chain assumption are typically incapable of generating and reproducing realistic dynamics observed in real-world online social networks."
},
{
"text": "We hypothesized that increasing the model’s capacity to describe triadic network properties would re... | [
{
"text": "TERGM models estimated within the markov chain assumption are typically incapable of generating and reproducing realistic dynamics observed in real-world online social networks."
},
{
"text": "We hypothesized that increasing the model’s capacity to describe triadic network properties would re... |
atxti8SVk.3K9AmPwALM.08 | Low-level image similarity. We propagatethe labels within visually coherent regions,as visual similarity often goes with semantic similarity. To generate an over-segmentation, we follow Hwang et al. by using the HED contour detector (Xie & Tu, 2015) (pre-trained on BSDS dataset (Arbelaez et al., 2010)) and the procedur... | Low-level image similarity. To propagate labels within visually coherent regions, we generate a low-level over-segmentation. Following SegSort Hwang et al. (2019), we use the HED contour detector (Xie & Tu, 2015) (pre-trained on BSDS500 dataset (Arbelaez et al., 2010)) and gPb-owtucm (Arbelaez et al., 2010) to generate... | {
"annotation": [
"Concision",
"Content_deletion"
],
"instruction": "Make this paragraph considerably more concise. Remove any unnecessary details that are not essential for the main point of the paragraph.",
"annotator": "annotator_03"
} | {
"annotation": [
"Content_deletion",
"Concision"
],
"instruction": "This paragraph is too long, make it almost 50% shorter but keep the important informations.",
"annotator": "annotator_07"
} | atxti8SVk | 3K9AmPwALM | 8 | [
{
"text": "Low-level image similarity."
},
{
"text": "We propagatethe labels within visually coherent regions,as visual similarity often goes with semantic similarity. To generate an over-segmentation, we follow"
},
{
"text": " Hwang et al."
},
{
"text": "by using the HED contour detecto... | [
{
"text": "Low-level image similarity."
},
{
"text": "To propagate labels within visually coherent regions, we generate a low-level over-segmentation."
},
{
"text": "Following SegSort Hwang et al."
},
{
"text": "(2019), we use the HED contour detector (Xie & Tu, 2015) (pre-trained on BSD... |
nCTSF9BQJ.DGhBYSP_sR.17 | Neural Network Predictor The hidden representation h i of each amino acid for parameterizing the normalizing flows contains sufficient information about the rotamer probability density. To extract binding information from the representations in a more flexible way, we also try neural networks. Specifically, we use MLPs... | Neural Network Predictor Each residue’s hidden representation h i used to parameterize the normalizing flows contains sufficient information about the rotamer distribution. To extract binding information from these representations in a more flexible way, we employed neural networks. Specifically, we utilized a network ... | {
"annotation": [
"Rewriting_light",
"Development"
],
"instruction": "",
"annotator": "annotator_07"
} | null | nCTSF9BQJ | DGhBYSP_sR | 17 | [
{
"text": "We use the stochastic method to estimate the entropy. First, we sample a set of rotamers from the distribution using the inversion of the flows (Eq.1, 4)."
},
{
"text": "Then, we compute the negative log probability of the samples and take the average as an estimation of the entropy."
},
... | [
{
"text": "To estimate the entropy, we use a stochastic method: First, we sample a set of rotamers from the distribution using the inverted flows (Eq.1, 4)."
},
{
"text": "Then, we compute the negative log probability of the samples and take their average as an estimate of the entropy."
},
{
"te... |
Sjtl3UReL.bnGXjxmB3.00 | Remark 3 . The major novelty of obtaining these theoretical results relies on the developed variant of the augmented Lagrangian function and subsequently derived recursion of the successive dual variables, which quantify well the consensus errors resulting from both UL and LL optimization processes in terms of primal v... | Remark 3 . The major novelty of obtaining these theoretical results relies on the developed variant of the augmented Lagrangian function and subsequently derived recursion of the successive dual variables, which quantify well the consensus errors resulting from both UL and LL optimization processes in terms of primal v... | {
"annotation": [
"Content_substitution"
],
"instruction": "",
"annotator": "annotator_07"
} | null | Sjtl3UReL | bnGXjxmB3 | 0 | [
{
"text": "Remark 3 ."
},
{
"text": "The major novelty of obtaining these theoretical results relies on the developed variant of the augmented Lagrangian function and subsequently derived recursion of the successive dual variables, which quantify well the consensus errors resulting from both UL and LL o... | [
{
"text": "Remark 3 ."
},
{
"text": "The major novelty of obtaining these theoretical results relies on the developed variant of the augmented Lagrangian function and subsequently derived recursion of the successive dual variables, which quantify well the consensus errors resulting from both UL and LL o... |
IuxfzBFSR0.CSFycBGzvd.00 | Update criteria As mentioned before, Algorithm 1 runs in epochs indexed by j , and one epoch ends when either of the two update criteria is triggered (Line 9). The first updating criterion is satisfied once the determinant of Σ t is doubled compared to the determinant at the end of the previous epoch. This is called lazy... | Update criteria As mentioned before, Algorithm 1 runs in epochs indexed by j , and one epoch ends when either of the two update criteria is triggered (Line 9). The first updating criterion is satisfied once the determinant of Σ t is doubled compared to the determinant at the end of the previous epoch. This is called lazy... | {
"annotation": [
"Content_addition"
],
"instruction": "",
"annotator": "annotator_06"
} | {
"annotation": [
"Development"
],
"instruction": "",
"annotator": "annotator_08"
} | IuxfzBFSR0 | CSFycBGzvd | 0 | [
{
"text": "Update criteria As mentioned before, Algorithm 1 runs in epochs indexed by j , and one epoch ends when either of the two update criteria is triggered (Line 9)."
},
{
"text": "The first updating criterion is satisfied once the determinant of Σ t is doubled compared to the determinant at the end ... | [
{
"text": "Update criteria As mentioned before, Algorithm 1 runs in epochs indexed by j , and one epoch ends when either of the two update criteria is triggered (Line 9)."
},
{
"text": "The first updating criterion is satisfied once the determinant of Σ t is doubled compared to the determinant at the end ... |
hegI87bI5S.fL6Q48sfx8.24 | We used a general cursor in allof our experiments. However, pointing-facilitation technique have been proposed to improve the pointing performance. For example,when using Bubble Cursor [13] or Ninja Cursors [18], the notch effect for the movement time may be reduced. However, we did not consider them in this study bec... | Bubble Clusters [26] and Attribute Gates [24]). We are interested in whether Eq. 8 can be applied to more than just predicting movement time in the scenario where the notch is placed. We used a general cursor in all our experiments. However, pointing-facilitation technique have been proposed to improve the pointing per... | {
"annotation": [
"Unusable"
],
"instruction": "",
"annotator": "annotator_07"
} | null | hegI87bI5S | fL6Q48sfx8 | 24 | [
{
"text": " We used a general cursor in allof our experiments."
},
{
"text": "However, pointing-facilitation technique have been proposed to improve the pointing performance."
},
{
"text": "For example,when using Bubble Cursor [13] or Ninja Cursors [18], the notch effect for the movement time ma... | [
{
"text": "Bubble Clusters [26] and Attribute Gates [24]). We are interested in whether Eq. 8 can be applied to more than just predicting movement time in the scenario where the notch is placed. We used a general cursor in all our experiments."
},
{
"text": "However, pointing-facilitation technique have... |
MXi6uEx-hp.rdZfFcGyf9.23 | The user model takes as input the user information(i.e. , a concatenation of user attributes and a sequence of the user interactions) and a set of item embeddings in the list. The user information is passed on to a single layer gated recurrent network(GRU (Cho et al., 2014)) followed by a 2-layer MLP to extract the com... | The user model takes as input the user information(i.e. , a concatenation of user attributes and a sequence of the user interactions) and a set of item embeddings in the list. The user information is passed on to a single layer gated recurrent network(GRU (Cho et al., 2014)) followed by a 2-layer MLP to extract the com... | {
"annotation": [
"Concision"
],
"instruction": "Make the third sentence shorter and easier to understand",
"annotator": "annotator_10"
} | {
"annotation": [
"Concision"
],
"instruction": "Simplify the convoluted sentences to make the paragraph more concise.",
"annotator": "annotator_03"
} | MXi6uEx-hp | rdZfFcGyf9 | 23 | [
{
"text": "The user model takes as input the user information(i.e."
},
{
"text": ", a concatenation of user attributes and a sequence of the user interactions) and a set of item embeddings in the list."
},
{
"text": "The user information is passed on to a single layer gated recurrent network(GRU... | [
{
"text": "The user model takes as input the user information(i.e."
},
{
"text": ", a concatenation of user attributes and a sequence of the user interactions) and a set of item embeddings in the list."
},
{
"text": "The user information is passed on to a single layer gated recurrent network(GRU... |
fJhx73ErBg.NeKLbmOxG8.03 | Temporal logic (TL)-guided policy learning is an area that we take have taken inspiration from. In this area, TL is often used to specify the ego agent’s desired high-level behavior and used to generaterewards. The authors of [17, 18, 19] provide surveys of recent work on the use of TL in RL. The exploration problem st... | Temporal logic (TL)-guided policy learning is an area that we take have taken inspiration from. In this area, TL is often used to specify the ego agent’s desired high-level behavior and used togenerate rewards. The authors of [18, 19, 20] provide surveys of recent work on the use of TL in RL. The exploration problem st... | {
"annotation": [
"Concision",
"Rewriting_medium"
],
"instruction": "Make this paragraph easier to read, remove unnecessary details if needed",
"annotator": "annotator_01"
} | {
"annotation": [
"Concision",
"Rewriting_light"
],
"instruction": "Summarize the last third of this paragraph in one sentence. Smooth out the writing.",
"annotator": "annotator_07"
} | fJhx73ErBg | NeKLbmOxG8 | 3 | [
{
"text": "Temporal logic (TL)-guided policy learning is an area that we take have taken inspiration from."
},
{
"text": "In this area, TL is often used to specify the ego agent’s desired high-level behavior and used to generaterewards."
},
{
"text": "The authors of [17, 18, 19] provide surveys ... | [
{
"text": "Temporal logic (TL)-guided policy learning is an area that we take have taken inspiration from."
},
{
"text": "In this area, TL is often used to specify the ego agent’s desired high-level behavior and used togenerate rewards."
},
{
"text": "The authors of [18, 19, 20] provide surveys ... |
jyac3IgQ44.f4au9jfat5.06 | We build our 3D backbone by stacking multiple MsSVT blocks, as shown in Fig. Noted that weset both the query and the key window size in the last MsSVT block as (1 , 1 , ∞ ) to compress the 3Dvoxels into a 2D feature map, where the query is the average voxel features within the pillar window. | As shown in Fig 2, we build our 3D backbone by stacking Mixed-scale Sparse Voxel Transformer(MsSVT) blocks. It is worth noting that both the query and key window size in the last block of MsSVT are set to (1 , 1 , ∞ ) so as to compressing the 3D voxels into 2D feature map, where the queryis the average of all the voxel... | {
"annotation": [
"Rewriting_medium"
],
"instruction": "Improve the English of this paragraph.",
"annotator": "annotator_02"
} | {
"annotation": [
"Rewriting_light",
"Development"
],
"instruction": "",
"annotator": "annotator_07"
} | jyac3IgQ44 | f4au9jfat5 | 6 | [
{
"text": "To leverage the natural sparsity of point clouds andfurther improve efficiency, we sparsely implementall our window center searching, window gathering, and balanced window sampling into CUDAoperations."
},
{
"text": "These operations are mainly based on a hash map that establishes the mapping... | [
{
"text": "We implement all our window center searching,window gathering, and balanced window samplingsparsely in cuda operations to leverage the natural sparsity of point clouds and improve efficiency."
},
{
"text": "These operations are mainly based on a hash mapwhich establishes the mapping from coor... |
hAi0PMz9T7.Ut8ESfYp1.04 | This work studied the distillation of NN-based deep reinforcement learning agents into symbolic policies for performance-oriented congestion control in TCP. Our branched symbolic frameworkenjoys better simplicity and efficiency while exhibiting comparable and often improved performancesover their black-box teacher count... | This work studies the distillation of NN-based deep reinforcement learning agents into symbolic policies for performance-oriented congestion control in TCP. Our branched symbolic framework has better simplicity and efficiency while exhibiting comparable and often improved performance over their black-box teacher counte... | {
"annotation": [
"Rewriting_medium",
"Concision"
],
"instruction": "Rewrite the second-to-last sentence to make it more general and shorten some formulations.",
"annotator": "annotator_04"
} | {
"annotation": [
"Rewriting_light",
"Content_substitution"
],
"instruction": "",
"annotator": "annotator_07"
} | hAi0PMz9T7 | Ut8ESfYp1 | 4 | [
{
"text": "This work studied the distillation of NN-based deep reinforcement learning agents into symbolic policies for performance-oriented congestion control in TCP."
},
{
"text": "Our branched symbolic frameworkenjoys better simplicity and efficiency while exhibiting comparable and often improved perf... | [
{
"text": "This work studies the distillation of NN-based deep reinforcement learning agents into symbolic policies for performance-oriented congestion control in TCP."
},
{
"text": "Our branched symbolic framework has better simplicity and efficiency while exhibiting comparable and often improved perfo... |
S1qImCcFQ.Ske132uA7.01 | Let { A n , b n } be the dynamics parameters associated with node n . Even though only the discrete states are associated with the leaf nodes, we will introduce dynamics at the internal nodes as well. These internal dynamics serve as a link between the leaf node dynamics via a hierarchical prior, | Let { A n , b n } be the dynamics parameters associated with node n . Although the locally linear dynamics of a discrete state are specified by the leaf nodes, we introduce dynamics at the internal nodes as well. These internal dynamics serve as a link between the leaf node dynamics via a hierarchical prior, | {
"annotation": [
"Development"
],
"instruction": "",
"annotator": "annotator_07"
} | null | S1qImCcFQ | Ske132uA7 | 1 | [
{
"text": "Let { A n , b n } be the dynamics parameters associated with node n ."
},
{
"text": "Even though only the discrete states are associated with the leaf nodes, we will introduce dynamics at the internal nodes as well."
},
{
"text": "These internal dynamics serve as a link between the l... | [
{
"text": "Let { A n , b n } be the dynamics parameters associated with node n ."
},
{
"text": "Although the locally linear dynamics of a discrete state are specified by the leaf nodes, we introduce dynamics at the internal nodes as well."
},
{
"text": "These internal dynamics serve as a link bet... |
ByngnZiT7.BkN8nWiaX.00 | Table 1 summarizes the results, comparing the cost-sensitive robust error between the baseline model trained for overall robustness and a model trained using our cost-sensitive robust optimization. The proposed cost-sensitive robust defense model is trained with (cid:15) = 0 . 2 based on loss function (3.1) and the cor... | Table 1 summarizes the results, comparing the cost-sensitive robust error between the baseline model trained for overall robustness and a model trained using our cost-sensitive robust optimization. The proposed cost-sensitive robust defense model is trained with (cid:15) = 0 . 2 based on loss function (3.2) and the cor... | {
"annotation": [
"Content_addition"
],
"instruction": "",
"annotator": "annotator_07"
} | null | ByngnZiT7 | BkN8nWiaX | 0 | [
{
"text": "Table 1 summarizes the results, comparing the cost-sensitive robust error between the baseline model trained for overall robustness and a model trained using our cost-sensitive robust optimization."
},
{
"text": "The proposed cost-sensitive robust defense model is trained with (cid:15) = 0 ."... | [
{
"text": "Table 1 summarizes the results, comparing the cost-sensitive robust error between the baseline model trained for overall robustness and a model trained using our cost-sensitive robust optimization."
},
{
"text": "The proposed cost-sensitive robust defense model is trained with (cid:15) = 0 ."... |
nCTSF9BQJ.DGhBYSP_sR.03 | Cole & Warwicker, 2002; Kastritis & Bonvin, 2013). Intuitively speaking, amino acids on the interface become less flexible (lower entropy) after they get contact with other proteins due to the geometrical and physical restraints imposed by the binding partner (Figure 1). Higher binding affinity implies better compleme... | Cole & Warwicker, 2002; Kastritis & Bonvin, 2013). When two proteins bind, the residues located at the interface tend to become less flexible (i.e. having lower entropy) due to the physical and geometric constraints imposed by the binding partner (Figure 1). A higher amount of entropy loss corresponds to a stronger bin... | {
"annotation": [
"Content_substitution"
],
"instruction": "",
"annotator": "annotator_07"
} | null | nCTSF9BQJ | DGhBYSP_sR | 3 | [
{
"text": "Cole & Warwicker, 2002; Kastritis & Bonvin, 2013)."
},
{
"text": "Intuitively speaking, amino acids on the interface become less flexible (lower entropy) after they get contact with other proteins due to the geometrical and physical restraints imposed by the binding partner (Figure 1)."
},... | [
{
"text": "Cole & Warwicker, 2002; Kastritis & Bonvin, 2013)."
},
{
"text": "When two proteins bind, the residues located at the interface tend to become less flexible (i.e. having lower entropy) due to the physical and geometric constraints imposed by the binding partner (Figure 1)."
},
{
"text... |
QSAQjBO0aj.srl-4uM-pl.00 | Trigger Feature Hypothesis We hypothesize that the trigger features are sparsely encoded in only afew channels, while clean image features need to be encoded across many channels for the effectiveclassification . This is a key difference from normal data features that are presumably distributed moreevenly across channe... | Trigger Feature Hypothesis We hypothesize that the trigger features are sparsely encoded in only afew channels, while clean image features need to be encoded across many channels . This is a keydifference from normal data features that are presumably distributed more evenly across channels,which indicates that these tw... | {
"annotation": [
"Concision"
],
"instruction": "Make this paragraph more concise.",
"annotator": "annotator_02"
} | {
"annotation": [
"Concision"
],
"instruction": "Make this paragraph a bit shorter.",
"annotator": "annotator_07"
} | QSAQjBO0aj | srl-4uM-pl | 0 | [
{
"text": "Trigger Feature Hypothesis We hypothesize that the trigger features are sparsely encoded in only afew channels, while clean image features need to be encoded across many channels for the effectiveclassification ."
},
{
"text": "This is a key difference from normal data features that are presu... | [
{
"text": "Trigger Feature Hypothesis We hypothesize that the trigger features are sparsely encoded in only afew channels, while clean image features need to be encoded across many channels ."
},
{
"text": "This is a keydifference from normal data features that are presumably distributed more evenly acr... |
nkOpNqg-ip.OwJsIhe_p.02 | Despite being only an example scheme, one advantage of Naive AutoML over black-box optimization that already becomes clear here is that it directly generates important insights that can significantly support the data scientist working with it. For example, even such a simple question as “what is the potential of feature... | Despite being only an example scheme, one advantage of Naive AutoML that already becomes clear here is that it directly generates important insights that can significantly support the data scientist working with it. For example, a question like “what is the potential of feature selection on the given data?” can be answe... | {
"annotation": [
"Concision"
],
"instruction": "Make the paragraph more concise by focusing on the main points.",
"annotator": "annotator_04"
} | {
"annotation": [
"Concision",
"Development"
],
"instruction": "",
"annotator": "annotator_07"
} | nkOpNqg-ip | OwJsIhe_p | 2 | [
{
"text": "Despite being only an example scheme, one advantage of Naive AutoML over black-box optimization that already becomes clear here is that it directly generates important insights that can significantly support the data scientist working with it."
},
{
"text": "For example, even such a simple que... | [
{
"text": "Despite being only an example scheme, one advantage of Naive AutoML that already becomes clear here is that it directly generates important insights that can significantly support the data scientist working with it."
},
{
"text": "For example, a question like “what is the potential of feature ... |
ssjKKm0b5y.3wi5X8wrM_.02 | We evaluate Pareto HyperNetworks (PHNs) on a set of diverse multi-objective problems. The experiments show the superiority of PHN over other MOO methods. Wewill make our code publicly available in order to facilitate further research. | We evaluate Pareto HyperNetworks (PHNs) on a set of diverse multi-objective problems. The experiments show the superiority of PHN over previous MOO methods. We make our source code publicly available at: https://github.com/AvivNavon/pareto-hypernetworks . | {
"annotation": [
"Development"
],
"instruction": "",
"annotator": "annotator_08"
} | {
"annotation": [
"Content_addition"
],
"instruction": "",
"annotator": "annotator_02"
} | ssjKKm0b5y | 3wi5X8wrM_ | 2 | [
{
"text": "We evaluate Pareto HyperNetworks (PHNs) on a set of diverse multi-objective problems."
},
{
"text": "The experiments show the superiority of PHN over other MOO methods."
},
{
"text": "Wewill make our code publicly available in order to facilitate further research."
}
] | [
{
"text": "We evaluate Pareto HyperNetworks (PHNs) on a set of diverse multi-objective problems."
},
{
"text": "The experiments show the superiority of PHN over previous MOO methods."
},
{
"text": "We make our source code publicly available at: https://github.com/AvivNavon/pareto-hypernetworks .... |
9ALnOEcGN_.4eEIRZ-dm.01 | Our proposed method in this paper belongs to the category of construction heuristics leaners in the sense of producing a one-shot solution per problem instance. But unlike previous methods which generate the solutions via a constructive Markov decision process (MDP) with rather costly decoding steps (adding one un-visi... | Our proposed method in this paper belongs to the category of construction heuristics learners in the sense of producing a one-shot solution per problem instance. However, there are major distinctions between previous methods and ours. One distinction is how to construct solutions. Unlike previous methods which generate... | {
"annotation": [
"Development"
],
"instruction": "",
"annotator": "annotator_07"
} | null | 9ALnOEcGN_ | 4eEIRZ-dm | 1 | [
{
"text": "Our proposed method in this paper belongs to the category of construction heuristics leaners in the sense of producing a one-shot solution per problem instance."
},
{
"text": "But unlike previous methods which generate the solutions via a constructive Markov decision process (MDP) with rather... | [
{
"text": "Our proposed method in this paper belongs to the category of construction heuristics learners in the sense of producing a one-shot solution per problem instance."
},
{
"text": "However, there are major distinctions between previous methods and ours. One distinction is how to construct solutio... |
RPX7thbt2Mv.PdsbQ4ckYr.01 | Hopper-v2 ) from the OpenAI Gym MuJoCo locomotion tasks. For each environment, we use the medium-v2 , medium-replay-v2 and medium-expert-v2 datasets to construct the expert demonstrations and the unlabeled dataset. For the expert demonstrations, we choose the best episodes from the D4RL dataset based on the episodic re... | Hopper-v2 ) from the OpenAI Gym MuJoCo locomotion tasks. For each environment, we use the medium-v2 , medium-replay-v2 and medium-expert-v2 datasets to construct the expert demonstrations and the unlabeled dataset. For the expert demonstrations, we choose the best episodes from the D4RL dataset based on the episodic re... | {
"annotation": [
"Concision"
],
"instruction": "Remove the fourth sentence",
"annotator": "annotator_06"
} | {
"annotation": [
"Concision"
],
"instruction": "Exclude unnecessary information.",
"annotator": "annotator_08"
} | RPX7thbt2Mv | PdsbQ4ckYr | 1 | [
{
"text": "Hopper-v2 ) from the OpenAI Gym MuJoCo locomotion tasks."
},
{
"text": "For each environment, we use the medium-v2 , medium-replay-v2 and medium-expert-v2 datasets to construct the expert demonstrations and the unlabeled dataset."
},
{
"text": "For the expert demonstrations, we choose... | [
{
"text": "Hopper-v2 ) from the OpenAI Gym MuJoCo locomotion tasks."
},
{
"text": "For each environment, we use the medium-v2 , medium-replay-v2 and medium-expert-v2 datasets to construct the expert demonstrations and the unlabeled dataset."
},
{
"text": "For the expert demonstrations, we choose... |
rJv8ichjB.k-Li2JE2Z.00 | Much (though not all) work on program synthesis is focused on domain specific languages that are less than maximally expressive (Gulwani, 2011; Balog et al., 2016; Wang et al., 2017; Alur et al., 2015). We would like to focus on the synthesis of programs in a Turing complete language, but this presents technical challen... | Much (though not all) work on program synthesis is focused on domain specific languages that are less than maximally expressive (Gulwani, 2011; Balog et al., 2016; Wang et al., 2017; Alur et al., the search procedure used will tend to emit ‘shorter’ programs first, and so there is an Occam’s-Razor-type argument (Spade & ... | {
"annotation": [
"Content_addition"
],
"instruction": "",
"annotator": "annotator_07"
} | null | rJv8ichjB | k-Li2JE2Z | 0 | [
{
"text": "Much (though not all) work on program synthesis is focused on domain specific languages that are less than maximally expressive (Gulwani, 2011; Balog et al., 2016; Wang et al., 2017; Alur et al., 2015)."
},
{
"text": "We would like to focus on the synthesis of programs in a Turing complete lan... | [
{
"text": "Much (though not all) work on program synthesis is focused on domain specific languages that are less than maximally expressive (Gulwani, 2011; Balog et al., 2016; Wang et al., 2017; Alur et al., the search procedure used will tend to emit ‘shorter’ programs first, and so there is an Occam’s-Razor-type... |
9ALnOEcGN_.4eEIRZ-dm.02 | Here the higher valued θ i,j means the higher probability for the edge from node i to node j to be sampled. More importantly, notice that we use matrix θ ∈ R n × n to parameterize the probabilisticdistribution of n ! discrete feasible solutions. The compact, continuous and differentiable spaceof θ allows us to leverage... | Here a higher valued θ i,j corresponds to a higher probability for the edge from node i to node j to be sampled. The compact, continuous and differentiable space of θ allows us to leverage gradientbased optimization without costly MDP-based construction of feasible solutions, which has been a bottleneck for scaling up ... | {
"annotation": [
"Concision"
],
"instruction": "Make this paragraph more concise.",
"annotator": "annotator_02"
} | {
"annotation": [
"Content_deletion",
"Rewriting_light"
],
"instruction": "Delete the second sentence. Improve the english in the first sentence.",
"annotator": "annotator_07"
} | 9ALnOEcGN_ | 4eEIRZ-dm | 2 | [
{
"text": "Here the higher valued θ i,j means the higher probability for the edge from node i to node j to be sampled."
},
{
"text": "More importantly, notice that we use matrix θ ∈ R n × n to parameterize the probabilisticdistribution of n !"
},
{
"text": "discrete feasible solutions. The compa... | [
{
"text": "Here a higher valued θ i,j corresponds to a higher probability for the edge from node i to node j to be sampled."
},
{
"text": ""
},
{
"text": "The compact, continuous and differentiable space of θ allows us to leverage gradientbased optimization without costly MDP-based construction ... |