Datasets:

Modalities: Text
Formats: json
Languages: English
Libraries: Datasets, pandas
License:
Dataset Viewer
Auto-converted to Parquet
Column schema (type, and min-max of string/list length or integer value):

id_paragraph        string   length 20-26
parag_1             string   length 101-3.02k
parag_2             string   length 173-2.77k
annot_1             dict
annot_2             dict
id_source           string   length 8-11
id_target           string   length 8-11
index_paragraph     int64    value 0-26
list_sentences_1    list     length 1-36
list_sentences_2    list     length 1-36
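A record in this dataset can be checked against the schema above with the standard library alone; the following is a minimal sketch using an abridged version of the first row shown on this page (the paragraph and sentence-list values are shortened here for readability, so this is illustrative, not the full record):

```python
import json

# Column names in the order given by the schema above.
COLUMNS = [
    "id_paragraph", "parag_1", "parag_2", "annot_1", "annot_2",
    "id_source", "id_target", "index_paragraph",
    "list_sentences_1", "list_sentences_2",
]

# Abridged record based on the first row shown on this page.
row_json = """
{
  "id_paragraph": "7_CwM-IzWd.zcm6f5HDI.04",
  "parag_1": "We train the model until the highest accuracy on D val is reached.",
  "parag_2": "We train the model until y = y for all samples in D train.",
  "annot_1": {"annotation": ["Development"], "instruction": "", "annotator": "annotator_08"},
  "annot_2": {"annotation": ["Development"], "instruction": "", "annotator": "annotator_02"},
  "id_source": "7_CwM-IzWd",
  "id_target": "zcm6f5HDI",
  "index_paragraph": 4,
  "list_sentences_1": [{"text": "sentence 1"}],
  "list_sentences_2": [{"text": "sentence 1, revised"}]
}
"""

row = json.loads(row_json)

# Every schema column is present, and index_paragraph is an integer.
assert set(row) == set(COLUMNS)
assert isinstance(row["index_paragraph"], int)
print(row["annot_1"]["annotation"])  # ['Development']
```

The same records can of course be loaded in bulk with the `datasets` or `pandas` libraries listed on the card; the stdlib version above is just the dependency-free way to see the shape of one row.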
id_paragraph: 7_CwM-IzWd.zcm6f5HDI.04
parag_1: During training, the parameters of the multi-modal DNN are updated by stochastic gradient descent (SGD) to minimize the loss: L = CE( y, ˆ y 0 ) + CE( y, ˆ y 1 ) , where CE stands for cross-entropy. We refer to each of the cross-entropy losses as a modality-specific loss. We train the model until the highest accuracy on D val is reached.
parag_2: During training, the parameters of the multi-modal DNN are updated by stochastic gradient descent (SGD) to minimize the loss: L = CE( y, ˆ y 0 ) + CE( y, ˆ y 1 ) , where CE stands for cross-entropy. We refer to each of the cross-entropy losses as a modality-specific loss. We train the model until ˆ y = y for all samples in D train and take the checkpoint of it when ˆ y reaches the highest accuracy on D val .
annot_1: { "annotation": [ "Development" ], "instruction": "", "annotator": "annotator_08" }
annot_2: { "annotation": [ "Development" ], "instruction": "", "annotator": "annotator_02" }
id_source: 7_CwM-IzWd
id_target: zcm6f5HDI
index_paragraph: 4
list_sentences_1: [ { "text": "During training, the parameters of the multi-modal DNN are updated by stochastic gradient descent (SGD) to minimize the loss: L = CE( y, ˆ y 0 )" }, { "text": "+ CE( y, ˆ y 1 ) , where CE stands for cross-entropy." }, { "text": "We refer to each of the cross-entropy losses as a modality-specific loss." }, { "text": "We train the model until the highest accuracy on D val is reached." } ]
list_sentences_2: [ { "text": "During training, the parameters of the multi-modal DNN are updated by stochastic gradient descent (SGD) to minimize the loss: L = CE( y, ˆ y 0 )" }, { "text": "+ CE( y, ˆ y 1 ) , where CE stands for cross-entropy." }, { "text": "We refer to each of the cross-entropy losses as a modality-specific loss." }, { "text": "We train the model until ˆ y = y for all samples in D train and take the checkpoint of it when ˆ y reaches the highest accuracy on D val ." } ]
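The `list_sentences_1` / `list_sentences_2` fields of a row like the one above can be paired positionally to find which sentences a revision changed; a minimal sketch (the lists below are abridged to two of the row's four sentences, and a naive `zip` is only meaningful when both lists have equal length, as they do for this row — revisions that split or merge sentences would need a real alignment step):

```python
# Abridged sentence lists from the row above (last sentence differs).
list_sentences_1 = [
    {"text": "We refer to each of the cross-entropy losses as a modality-specific loss."},
    {"text": "We train the model until the highest accuracy on D val is reached."},
]
list_sentences_2 = [
    {"text": "We refer to each of the cross-entropy losses as a modality-specific loss."},
    {"text": "We train the model until y = y for all samples in D train and take the checkpoint."},
]

# Positional pairing: sentence i of the source vs. sentence i of the target.
pairs = [
    (s1["text"], s2["text"])
    for s1, s2 in zip(list_sentences_1, list_sentences_2)
]

# Keep only the pairs the revision actually changed.
changed = [(a, b) for a, b in pairs if a != b]
print(len(changed))  # 1: only the last sentence was revised
```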
id_paragraph: hegI87bI5S.fL6Q48sfx8.09
parag_1: The task was created with reference to the previous study [25]. Fig- ure 3 shows a schematic of the task. A pink circular start area (251-pixel radius) and a green target were displayed on a gray back- ground. First, participants clicked on the start area, and the cursor was fixed at the center of the start area. Assuming the initial position of the cursor may affect the cursor path andthe performance of pointing, we strictly fixed a starting position of the trial. Participants clicked again at the starting position, and the trial began. The start area disappeared as a feedback for the beginning of the trial. Partici- pants aimed at the target and ended the trial with the next click. If participants clicked correctly on the target, we marked the trial as a success; else, the trial was marked as a failure (error). We presented a sound feedback in response to the success or failure of the trial.
parag_2: The task was created by referring to a previous study [28]. Figure 3 shows a schematic of the task. A pink circular start area (251-pixel radius) and a green target were displayed on a gray background. The participants clicked on the start area; the cursor positioned at the center of the start area. We strictly fixed the starting position of the cursor for the trial assuming that the initial position of the cursor can affect the cursor path and performance of pointing [28]. The trial started once the participant clicked on the starting position. The start area then disappeared, which acted as feedback to indicate the start of the trial. Participants aimed at the target and ended the trial with the next click. If participants clicked the target correctly, we marked the trial as a success; else, the trial was marked as a failure (error). We presented a sound feedback in response to the success or failure of the trial.
annot_1: { "annotation": [ "Rewriting_medium" ], "instruction": "Rewrite the middle part of the paragraph to make it more better. Replace some words in the paragraph.", "annotator": "annotator_10" }
annot_2: { "annotation": [ "Rewriting_medium" ], "instruction": "Slightly revise for readability, you can reorganise ideas in sentences if necessary.", "annotator": "annotator_07" }
id_source: hegI87bI5S
id_target: fL6Q48sfx8
index_paragraph: 9
list_sentences_1: [ { "text": "The task was created with reference to the previous study [25]." }, { "text": "Fig- ure 3 shows a schematic of the task." }, { "text": "A pink circular start area (251-pixel radius) and a green target were displayed on a gray back- ground." }, { "text": "First, participants clicked on the start area, and the cursor was fixed at the center of the start area." }, { "text": "Assuming the initial position of the cursor may affect the cursor path andthe performance of pointing, we strictly fixed a starting position of the trial." }, { "text": "Participants clicked again at the starting position, and the trial began." }, { "text": "The start area disappeared as a feedback for the beginning of the trial." }, { "text": "Partici- pants aimed at the target and ended the trial with the next click." }, { "text": "If participants clicked correctly on the target, we marked the trial as a success; else, the trial was marked as a failure (error)." }, { "text": "We presented a sound feedback in response to the success or failure of the trial." } ]
list_sentences_2: [ { "text": "The task was created by referring to a previous study [28]." }, { "text": "Figure 3 shows a schematic of the task." }, { "text": "A pink circular start area (251-pixel radius) and a green target were displayed on a gray background." }, { "text": "The participants clicked on the start area; the cursor positioned at the center of the start area." }, { "text": "We strictly fixed the starting position of the cursor for the trial assuming that the initial position of the cursor can affect the cursor path and performance of pointing [28]." }, { "text": "The trial started once the participant clicked on the starting position." }, { "text": "The start area then disappeared, which acted as feedback to indicate the start of the trial." }, { "text": "Participants aimed at the target and ended the trial with the next click." }, { "text": "If participants clicked the target correctly, we marked the trial as a success; else, the trial was marked as a failure (error)." }, { "text": "We presented a sound feedback in response to the success or failure of the trial." } ]
id_paragraph: SyGfyinsH.I2YVGmIp0.00
parag_1: A + C + D refers to our approach. In (b), we show the same ablations over the entire trajectory until t = 20 . As can be seen, using the calibrated predictor produces a large gain and using the direct bound produce a large gain on average; these gains are most noticeable in the tails. Using the accumulated confidence produces a smaller, but still significant, gain. In (c) and (d), we show how the sizes vary with (cid:15) and δ , respectively. The trends are similar those for ResNet.
parag_2: A + C + D is our approach. As before, we omit results for the ablation using the VC generalization bound since n is so small that the bound does not hold for any k for the given (cid:15) and δ . In (b), we show the same ablations over the entire trajectory until t = 20 . As can be seen, using the calibrated predictor produces a large gain; these gains are most noticeable in the tails. Using the accumulated confidence produces a smaller, but still significant, gain. In (c) and (d), we show how the sizes vary with (cid:15) and δ , respectively. The trends are similar those for ResNet.
annot_1: { "annotation": [ "Content_addition", "Concision" ], "instruction": "", "annotator": "annotator_09" }
annot_2: { "annotation": [ "Development" ], "instruction": "", "annotator": "annotator_07" }
id_source: SyGfyinsH
id_target: I2YVGmIp0
index_paragraph: 0
list_sentences_1: [ { "text": "A + C + D refers to our approach." }, { "text": "" }, { "text": "In (b), we show the same ablations over the entire trajectory until t = 20 ." }, { "text": "As can be seen, using the calibrated predictor produces a large gain and using the direct bound produce a large gain on average; these gains are most noticeable in the tails." }, { "text": "Using the accumulated confidence produces a smaller, but still significant, gain." }, { "text": "In (c) and (d), we show how the sizes vary with (cid:15) and δ , respectively." }, { "text": "The trends are similar those for ResNet." } ]
list_sentences_2: [ { "text": "A + C + D is our approach." }, { "text": "As before, we omit results for the ablation using the VC generalization bound since n is so small that the bound does not hold for any k for the given (cid:15) and δ ." }, { "text": "In (b), we show the same ablations over the entire trajectory until t = 20 ." }, { "text": "As can be seen, using the calibrated predictor produces a large gain; these gains are most noticeable in the tails." }, { "text": "Using the accumulated confidence produces a smaller, but still significant, gain." }, { "text": "In (c) and (d), we show how the sizes vary with (cid:15) and δ , respectively." }, { "text": "The trends are similar those for ResNet." } ]
id_paragraph: WldWha1MT.LL2ZsGpJga.03
parag_1: A well-established metric to evaluate the topological performance of a segmentation network is the Betti number error, see appendix I, which compares the topological complexity of P and G . However, it is limited as it ignores the spatial correspondence ofthe topological features within their respective images (see Figure 2(b)).
parag_2: Betti number error The Betti number error β err (see App. K) compares the topological complexity of the binarized prediction P and the ground truth G . However, it is limited as it only compares the number of topological features in both images, while ignoring their spatial correspondence (see Fig.
annot_1: { "annotation": [ "Unusable" ], "instruction": "", "annotator": "annotator_02" }
annot_2: { "annotation": [ "Rewriting_heavy" ], "instruction": "Rewrite this definition in a more direct and academic style.", "annotator": "annotator_07" }
id_source: WldWha1MT
id_target: LL2ZsGpJga
index_paragraph: 3
list_sentences_1: [ { "text": "A well-established metric to evaluate the topological performance of a segmentation network is the Betti number error, see appendix I, which compares the topological complexity of P and G ." }, { "text": "However, it is limited as it ignores the spatial correspondence ofthe topological features within their respective images (see Figure 2(b))." } ]
list_sentences_2: [ { "text": "Betti number error The Betti number error β err (see App. K) compares the topological complexity of the binarized prediction P and the ground truth G ." }, { "text": "However, it is limited as it only compares the number of topological features in both images, while ignoring their spatial correspondence (see Fig." } ]
id_paragraph: 7_CwM-IzWd.zcm6f5HDI.03
parag_1: We implement the fusion module as a multi-modal transfer module (MMTM) (Joze et al., 2020). The first step in MMTM is to squeeze feature maps from each uni-modal branch to vector representations via global average pooling over spatial dimensions. Next we concatenate these representations and applya linear transformation to obtain cross-modal context representation. We predict channel-wise weights for each modality based this context representation through two independent fully-connected layers. Finally, these weights are used tore-calibrate the channel-wise features per modality.
parag_2: We implement every fusion module by a multi-modal transfer module (MMTM) (Joze et al., 2020). Each MMTM connects two layers from the two uni-modal branches. There is first the global average pooling applied over spatial dimensions to transform feature maps into a vector. We concatenate the two vectors and apply linear transformation. We refer to its output as context representation. Next, for each uni-modal branch, we implement a fully connected layer on the context representation and get a vector with a dimension of the number of feature maps. Feature maps are re-scaled by this vector before passing to the next layer of the uni-modal branch.
annot_1: { "annotation": [ "Rewriting_heavy" ], "instruction": "Rearrange the structure to make the structure clearer.", "annotator": "annotator_08" }
annot_2: { "annotation": [ "Rewriting_heavy" ], "instruction": "Rewrite this paragraph completely to make it clearer.", "annotator": "annotator_02" }
id_source: 7_CwM-IzWd
id_target: zcm6f5HDI
index_paragraph: 3
list_sentences_1: [ { "text": "We implement the fusion module as a multi-modal transfer module (MMTM) (Joze et al., 2020)." }, { "text": "The first step in MMTM is to squeeze feature maps from each uni-modal branch to vector representations via global average pooling over spatial dimensions." }, { "text": "Next we concatenate these representations and applya linear transformation to obtain cross-modal context representation." }, { "text": "We predict channel-wise weights for each modality based this context representation through two independent fully-connected layers. Finally, these weights are used tore-calibrate the channel-wise features per modality." } ]
list_sentences_2: [ { "text": "We implement every fusion module by a multi-modal transfer module (MMTM) (Joze et al., 2020)." }, { "text": "Each MMTM connects two layers from the two uni-modal branches. There is first the global average pooling applied over spatial dimensions to transform feature maps into a vector." }, { "text": "We concatenate the two vectors and apply linear transformation. We refer to its output as context representation." }, { "text": "Next, for each uni-modal branch, we implement a fully connected layer on the context representation and get a vector with a dimension of the number of feature maps. Feature maps are re-scaled by this vector before passing to the next layer of the uni-modal branch." } ]
id_paragraph: uJRtLYIOIq.e9xxGlB_c.00
parag_1: Lemma 1 implies the CPD kernels in Corollary 1 can be made PD if they are added with large enoughconstants; for example, c − ∥ x − x ′ ∥ p for large enough c . Although Lemma 1 does not have an explicit construction of c , thanks to the shift-invariant property of the Softmax normalization, we can leave it as an under-determined constant in our positional embedding design, which is Eq. (1) insection 4. Still, given a set of test points { x i } Ni =1 , one can do a geometric sequence search 1 to search for a c such that the N × N matrix c + ˜ k ( x i , x j )] Ni,j =1 ⪰ 0 . Hence, in this work, we do not need thevalue of c , but we can compute it if we do need its value, e.g., deriving the feature map of c + ˜ k .
parag_2: Lemma 1 implies the CPD kernels in Corollary 1 can be made PD if a large enough constant is added. For example, c − ∥ x − x ′ ∥ p for large enough c . Although Lemma 1 does not have an explicit construction of c , thanks to the shift-invariant property of the Softmax normalization, we can leave it as an under-determined constant in our positional embedding design (Eq. (1) in section 4). Given a set of test points { x i } Ni =1 , one can do a geometric sequence search 1 to search for a c such that the N × N matrix [ c + ˜ k ( x i , x j )] Ni,j =1 ⪰ 0 . Hence, we do not need the value of c , but we can compute it if needed, e.g., deriving the feature map of c + ˜ k .
annot_1: { "annotation": [ "Concision" ], "instruction": "Rewrite some formulations, giving preference to shorter ones.", "annotator": "annotator_04" }
annot_2: { "annotation": [ "Concision" ], "instruction": "Shorten this paragraph a bit while keeping all the informations.", "annotator": "annotator_07" }
id_source: uJRtLYIOIq
id_target: e9xxGlB_c
index_paragraph: 0
list_sentences_1: [ { "text": "Lemma 1 implies the CPD kernels in Corollary 1 can be made PD if they are added with large enoughconstants;" }, { "text": "for example, c − ∥ x − x ′ ∥ p for large enough c ." }, { "text": "Although Lemma 1 does not have an explicit construction of c , thanks to the shift-invariant property of the Softmax normalization, we can leave it as an under-determined constant in our positional embedding design, which is Eq." }, { "text": "(1) insection 4." }, { "text": "Still, given a set of test points { x i } Ni =1 , one can do a geometric sequence search 1 to search for a c such that the N × N matrix" }, { "text": " c + ˜ k" }, { "text": "( x i , x j )]" }, { "text": "Ni,j =1 ⪰ 0 ." }, { "text": "Hence, in this work, we do not need thevalue of c , but we can compute it if we do need its value, e.g., deriving the feature map of c + ˜ k ." } ]
list_sentences_2: [ { "text": "Lemma 1 implies the CPD kernels in Corollary 1 can be made PD if a large enough constant is added." }, { "text": "For example, c − ∥ x − x ′ ∥ p for large enough c ." }, { "text": "Although Lemma 1 does not have an explicit construction of c , thanks to the shift-invariant property of the Softmax normalization, we can leave it as an under-determined constant in our positional embedding design (Eq." }, { "text": "(1) in section 4)." }, { "text": "Given a set of test points { x i } Ni =1 , one can do a geometric sequence search 1 to search for a c such that the" }, { "text": "N × N matrix [ c + ˜ k" }, { "text": "( x i , x j )]" }, { "text": "Ni,j =1 ⪰ 0 ." }, { "text": "Hence, we do not need the value of c , but we can compute it if needed, e.g., deriving the feature map of c + ˜ k ." } ]
id_paragraph: xV0XmrSMtk.sYfR73R9z.02
parag_1: Discrete Variational Auto-Encoder. In a discrete variational autoencoder (DVAE) (Rolfe, 2016), the network layers before the sampling solver represent the encoder and the layers after the sampling solver the decoder. We consider the task of training a DVAE on the M NIST dataset where the encoder maps the input image to a discrete distribution of k -hot binary vector of length 20 in the latent space and the decoder reconstructs the image.
parag_2: Discrete Variational Auto-Encoder (DVAE). In a DVAE (Rolfe, 2016), the network layers before the sampling solver represent the encoder and the layers after the sampling solver the decoder. We consider the task of training a DVAE on the M NIST dataset where the encoder maps the input image to a discrete distribution of k -hot binary vector of length 20 in the latent space and the decoder reconstructs the image.
annot_1: { "annotation": [ "Concision" ], "instruction": "Make this paragraph more concise by introducing acronyms earlier.", "annotator": "annotator_02" }
annot_2: { "annotation": [ "Concision" ], "instruction": "Introduce the acronym DVAE earlier to avoid repeating it.", "annotator": "annotator_07" }
id_source: xV0XmrSMtk
id_target: sYfR73R9z
index_paragraph: 2
list_sentences_1: [ { "text": "Discrete Variational Auto-Encoder." }, { "text": "In a discrete variational autoencoder (DVAE) (Rolfe, 2016), the network layers before the sampling solver represent the encoder and the layers after the sampling solver the decoder." }, { "text": "We consider the task of training a DVAE on the M NIST dataset where the encoder maps the input image to a discrete distribution of k -hot binary vector of length 20 in the latent space and the decoder reconstructs the image." } ]
list_sentences_2: [ { "text": "Discrete Variational Auto-Encoder (DVAE)." }, { "text": "In a DVAE (Rolfe, 2016), the network layers before the sampling solver represent the encoder and the layers after the sampling solver the decoder." }, { "text": "We consider the task of training a DVAE on the M NIST dataset where the encoder maps the input image to a discrete distribution of k -hot binary vector of length 20 in the latent space and the decoder reconstructs the image." } ]
id_paragraph: PDvmJtmgQb.gGrpxbc7UI.02
parag_1: Other Uses of Public Data in DP Learning: The use of in-distribution public data has been extensively explored both theoretically and empirically. On the theoretical side, it has been shown (Alon et al., 2019; Bassily et al., 2020a) that a combination of private and public data samples can yield asymptotically better worst-case PAC learning guarantees than either on their own. Another line of work (Papernot et al., 2016; 2018; Bassily et al., 2018b; Dwork & Feldman, 2018; Nandi & Bassily, 2020) considers public data that is unlabelled, but otherwise comes from the same distribution as the private data; the primary goal is to use the private data to generate labels for the public data, which can then be used arbitrarily. So far only two papers have considered out-of-distribution data. Bassily et al. (2020c) assume that whether a data record is public or private depends on its label; e.g., the public data may contain many negative examples, but few positive examples. They show that halfspaces can be learned in this model. Liu et al. (2021) consider synthetic data generation and provide guarantees that depend on the R ´ enyi divergences between the public and private distributions. Abadi et al. and Tramer & Boneh (2020) provided techniques to effectively use out-of-distribution public data for pre-training for DP-SGD. However, they did not consider techniques to improve a pre-trained model using private and public data, which is the focus of our work.
parag_2: Other Uses of Public Data in DP Learning: The use of in-distribution public data has been extensively explored both theoretically and empirically. On the theoretical side, it has been shown [3, 10] that a combination of private and public data samples can yield asymptotically better worst-case PAC learning guarantees than either on their own. Another line of work [8, 16, 29, 31, 32] considers public data that is unlabelled, but otherwise comes from the same distribution as the private data; the primary goal is to use the private data to generate labels for the public data, which can then be used arbitrarily. So far only two papers have considered out-of-distribution data. [12] assume that whether a data record is public or private depends on its label; e.g., the public data may contain many negative examples, but few positive examples. They show that halfspaces can be learned in this model. [26] consider synthetic data generation and provide guarantees that depend on the R ´ enyi divergences between the public and private distributions. [1] and [37] provided techniques to effectively use out-of-distribution public data for pre-training for DP-SGD. However, they did not consider techniques to improve a pre-trained model using private and public data, which is the focus of our work.
annot_1: { "annotation": [ "Unusable" ], "instruction": "", "annotator": "annotator_03" }
annot_2: { "annotation": [ "Unusable" ], "instruction": "I want to use numbers for in-text citations. ", "annotator": "annotator_09" }
id_source: PDvmJtmgQb
id_target: gGrpxbc7UI
index_paragraph: 2
list_sentences_1: [ { "text": "Other Uses of Public Data in DP Learning: The use of in-distribution public data has been extensively explored both theoretically and empirically." }, { "text": "On the theoretical side, it has been shown (Alon et al., 2019; Bassily et al., 2020a) that a combination of private and public data samples can yield asymptotically better worst-case PAC learning guarantees than either on their own." }, { "text": "Another line of work (Papernot et al., 2016; 2018; Bassily et al., 2018b; Dwork & Feldman, 2018; Nandi & Bassily, 2020) considers public data that is unlabelled, but otherwise comes from the same distribution as the private data; the primary goal is to use the private data to generate labels for the public data, which can then be used arbitrarily." }, { "text": "So far only two papers have considered out-of-distribution data." }, { "text": "Bassily et al." }, { "text": "(2020c) assume that whether a data record is public or private depends on its label; e.g., the public data may contain many negative examples, but few positive examples." }, { "text": "They show that halfspaces can be learned in this model." }, { "text": "Liu et al. (2021) consider synthetic data generation and provide guarantees that depend on the R ´ enyi divergences between the public and private distributions. Abadi et al." }, { "text": " and Tramer & Boneh (2020) provided techniques to effectively use out-of-distribution public data for pre-training for DP-SGD." }, { "text": "However, they did not consider techniques to improve a pre-trained model using private and public data, which is the focus of our work." } ]
list_sentences_2: [ { "text": "Other Uses of Public Data in DP Learning: The use of in-distribution public data has been extensively explored both theoretically and empirically." }, { "text": "On the theoretical side, it has been shown [3, 10] that a combination of private and public data samples can yield asymptotically better worst-case PAC learning guarantees than either on their own." }, { "text": "Another line of work [8, 16, 29, 31, 32] considers public data that is unlabelled, but otherwise comes from the same distribution as the private data; the primary goal is to use the private data to generate labels for the public data, which can then be used arbitrarily." }, { "text": "So far only two papers have considered out-of-distribution data." }, { "text": "" }, { "text": "[12] assume that whether a data record is public or private depends on its label; e.g., the public data may contain many negative examples, but few positive examples." }, { "text": "They show that halfspaces can be learned in this model." }, { "text": "[26] consider synthetic data generation and provide guarantees that depend on the R ´ enyi divergences between the public and private distributions." }, { "text": "[1] and [37] provided techniques to effectively use out-of-distribution public data for pre-training for DP-SGD." }, { "text": "However, they did not consider techniques to improve a pre-trained model using private and public data, which is the focus of our work." } ]
id_paragraph: E2pFUCGYZ1.5hMS4Fg2b_b.00
parag_1: ADO iterations in the Bayesian framework are shown in Sec. 3.3 and Appendix A.3. Finally, with theestimated posterior, the predictive uncertainty can be quantified by evaluating the identified systemwith an ensemble of parameters. To further improve the prediction capability, especially for chaoticsystems, we propose to leverage data assimilation techniques, which is shown in the green box anddiscussed in Sec.3.4 and Appendix A.5.
parag_2: ADO iterations in the Bayesian framework are shown in Sec. 3.3 and supplemental materials. Finally,with the estimated posterior, the predictive uncertainty can be quantified by evaluating the identifiedsystem with an ensemble of parameters. To further improve the prediction capability, especially forchaotic systems, we propose to leverage data assimilation techniques, which is shown in the greenbox and discussed in Sec.3.4 and supplemental materials.
annot_1: { "annotation": [ "Rewriting_light" ], "instruction": "Use \"supplemental materials\" instead of \"Appendix\"", "annotator": "annotator_09" }
annot_2: { "annotation": [ "Rewriting_light" ], "instruction": "Lightly revise for readability.", "annotator": "annotator_07" }
id_source: E2pFUCGYZ1
id_target: 5hMS4Fg2b_b
index_paragraph: 0
list_sentences_1: [ { "text": "ADO iterations in the Bayesian framework are shown in Sec." }, { "text": "3.3 and Appendix A.3." }, { "text": "Finally, with theestimated posterior, the predictive uncertainty can be quantified by evaluating the identified systemwith an ensemble of parameters." }, { "text": "To further improve the prediction capability, especially for chaoticsystems, we propose to leverage data assimilation techniques, which is shown in the green box anddiscussed in Sec.3.4 and Appendix A.5." } ]
list_sentences_2: [ { "text": "ADO iterations in the Bayesian framework are shown in Sec." }, { "text": "3.3 and supplemental materials." }, { "text": "Finally,with the estimated posterior, the predictive uncertainty can be quantified by evaluating the identifiedsystem with an ensemble of parameters." }, { "text": "To further improve the prediction capability, especially forchaotic systems, we propose to leverage data assimilation techniques, which is shown in the greenbox and discussed in Sec.3.4 and supplemental materials." } ]
id_paragraph: MXi6uEx-hp.rdZfFcGyf9.14
parag_1: AGILE clearly outperforms all the baselines demonstrating that relational knowledge of other available actions is crucial for an optimal policy. RecSim and Real RecSys : result trends are consistent with CREATE, but less pronounced for Real RecSys. Additionally, DQN is worse than CDQNbased architectures because the top-K greedy list action building ignores list interdependence.
parag_2: AGILE outperforms all the baselines, demonstrating that relational knowledge of other available actions is crucial for an optimal policy. RecSim and Real RecSys : result trends are consistent with CREATE, but less pronounced for Real RecSys. Additionally, DQN is worse than CDQN-based architectures because the top-K greedy list-action ignores intra-list dependence.
annot_1: { "annotation": [ "Rewriting_light" ], "instruction": "Remove unnecessary words and fix the words if they are not in the correct form", "annotator": "annotator_10" }
annot_2: { "annotation": [ "Rewriting_light" ], "instruction": "Remove terms that might be considered biased. Make the writing more clear.", "annotator": "annotator_03" }
id_source: MXi6uEx-hp
id_target: rdZfFcGyf9
index_paragraph: 14
list_sentences_1: [ { "text": "AGILE clearly outperforms all the baselines demonstrating that relational knowledge of other available actions is crucial for an optimal policy." }, { "text": "RecSim and Real RecSys : result trends are consistent with CREATE, but less pronounced for Real RecSys." }, { "text": "Additionally, DQN is worse than CDQNbased architectures because the top-K greedy list action building ignores list interdependence." } ]
list_sentences_2: [ { "text": "AGILE outperforms all the baselines, demonstrating that relational knowledge of other available actions is crucial for an optimal policy." }, { "text": "RecSim and Real RecSys : result trends are consistent with CREATE, but less pronounced for Real RecSys." }, { "text": "Additionally, DQN is worse than CDQN-based architectures because the top-K greedy list-action ignores intra-list dependence." } ]
id_paragraph: mFNezF8ubW.g-sOkbqBcm.00
parag_1: Each concept in the hierarchy corresponds to one set of hidden nodes which are connected to the hidden nodes representing its children, if any. For example, if Mammal, Bird and Reptile are the descendant concept of Chordate, there will be all to all connections from the hidden nodes representing Chordate to those accounting for Mammal, Bird and Reptile. The hidden nodes of a concept is also connected to the output prediction node for the concept itself and those for each of its children category nodes. An additional type of connectivity constrains the concept and category predictions to follow the hierarchical organization of the ontology. We illustrate each of these connections below.
parag_2: Each concept in the hierarchy corresponds to one set of hidden nodes that essentially represent the concept. These hidden nodes are connected to those representing its children, if any. For example, if Mammal, Bird and Reptile are the descendant concept of Chordate, there will be all to all connections from the hidden nodes representing Chordate to those accounting for Mammal, Bird and Reptile. Consequently, the hidden representation for a child concept is computed from that of its parent. Given the representation in capture in the hidden nodes, two types of output prediction nodes detects the presence of the concept itself and any children category in the input. An additional type of connectivity explicitly constrains the concept and category predictions to follow the hierarchical organization of the ontology. We illustrate each of these connections below.
annot_1: { "annotation": [ "Development" ], "instruction": "", "annotator": "annotator_07" }
annot_2: null
id_source: mFNezF8ubW
id_target: g-sOkbqBcm
index_paragraph: 0
list_sentences_1: [ { "text": "Each concept in the hierarchy corresponds to one set of hidden nodes which are connected to the hidden nodes representing its children, if any." }, { "text": "For example, if Mammal, Bird and Reptile are the descendant concept of Chordate, there will be all to all connections from the hidden nodes representing Chordate to those accounting for Mammal, Bird and Reptile." }, { "text": "The hidden nodes of a concept is also connected to the output prediction node for the concept itself and those for each of its children category nodes." }, { "text": "An additional type of connectivity constrains the concept and category predictions to follow the hierarchical organization of the ontology." }, { "text": "We illustrate each of these connections below." } ]
list_sentences_2: [ { "text": "Each concept in the hierarchy corresponds to one set of hidden nodes that essentially represent the concept. These hidden nodes are connected to those representing its children, if any." }, { "text": "For example, if Mammal, Bird and Reptile are the descendant concept of Chordate, there will be all to all connections from the hidden nodes representing Chordate to those accounting for Mammal, Bird and Reptile." }, { "text": "Consequently, the hidden representation for a child concept is computed from that of its parent. Given the representation in capture in the hidden nodes, two types of output prediction nodes detects the presence of the concept itself and any children category in the input." }, { "text": "An additional type of connectivity explicitly constrains the concept and category predictions to follow the hierarchical organization of the ontology." }, { "text": "We illustrate each of these connections below." } ]
CVRUl83zah.I75TtW0V7.25
• Instead of using a relation network [] – expanding the set into the set of all pairs first – paired with FSPool, we skip the relation network entirely. This improves the complexity of the set encoder g used in iDSPN from O ( n 2 log n ) to O ( n log n ) . Using the relation network approach does improve our results slightly (e.g. 3.5 percentage points improvement on AP 0 . 125 for 128 × 128 ), but is also a bit slower. For simplicity, we therefore opted to not using relation networks. The architecture of the set encoder g is Linear( 19 → 512 )–ReLU–Linear( 512 → 512 )–FSPool (Zhang et al., 2020). The main difference to DSPN is that since there is no concatenation of pairs, so the input dimensionality is 19 instead of 38 and everything is applied on sets of size n rather than sets of size n 2 . • Instead of ResNet34 to encode the input image, we use the smaller ResNet18. This did not appear to affect results. • We increase the batch size from 32 to 128. There appeared to be no difference in results between the two, with 128 being faster by making better use of parallelization. • We use Nesterov’s Accelerated Gradient (Nesterov, 1983) with a momentum parameter of 0. instead of standard gradient descent without momentum. • Instead of fixing the number of iterations at 10 like DSPN, we set the number of iterations to 20 at the start of training and change it to 40 after 50 epochs. This had slightly better training loss than starting training with 40 iterations. We have tried a few other ways of increasing the number of iterations throughout training (going from 10 to 20 to 30 to 40 iterations, smooth increase from 1 to 40 over the epochs, randomly sampling an iteration between 20 and 40 every batch), which had little impact on results. • We drop the learning rate after 90 epochs from 1e-3 to 1e-4 for the last 10 epochs. This slightly improved training loss while also reducing variance in epoch-to-epoch validation loss. 
• In preliminary experiments, we rarely observed spikes in the training loss. Clipping the gradients in the inner optimization to a maximum L2 norm of 10 seemed to help.
• Instead of using a relation network (Santoro et al., 2017) – expanding the set into the set of all pairs first – paired with FSPool, we skip the relation network entirely. This improves the complexity of the set encoder g used in iDSPN from O ( n 2 log n ) to O ( n log n ) . Using the relation network approach would improve our results slightly (e.g. 3.5 percentage points improvement on AP 0 .for 128 × 128 ), but is also a bit slower. For simplicity, we therefore opted to not using relation networks. The architecture of the set encoder g in iDSPN is Linear( 19 → 512 )–ReLU–Linear( 512 → 512 )– FSPool. The main difference to DSPN is that since there is no concatenation of pairs, so the input dimensionality is 19 instead of 38 and everything is applied on sets of size n rather than sets of size n 2 . • Instead of ResNet34 to encode the input image, we use the smaller ResNet18. This did not appear to affect results. • Instead of using a learned initial set Y 0 as in DSPN, we find that it makes no difference to randomly sample the initial set for every example. We therefore use the latter for simplicity. In initial experiments we found that even initializing every element to 0 causes no problems. • We increase the batch size from 32 to 128. There appeared to be no difference in results between the two, with 128 being faster by making better use of parallelization. • We use Nesterov’s Accelerated Gradient (Nesterov, 1983) with a momentum parameter of 0. instead of standard gradient descent without momentum. • Instead of fixing the number of iterations at 10 like DSPN, we set the number of iterations to 20 at the start of training and change it to 40 after 50 epochs. This had slightly better training loss than starting training with 40 iterations. 
We have tried a few other ways of increasing the number of iterations throughout training (going from 10 to 20 to 30 to 40 iterations, smooth increase from 1 to 40 over the epochs, randomly sampling an iteration between 20 and 40 every batch), which had little impact on results. iDSPN training was stable in all of these configurations. • We drop the learning rate after 90 epochs from 1e-3 to 1e-4 for the last 10 epochs. This slightly improved training loss while also reducing variance in epoch-to-epoch validation loss. • In preliminary experiments, we rarely observed spikes in the training loss. Clipping the gradients in the inner optimization to a maximum L2 norm of 10 seemed to help.
{ "annotation": [ "Content_addition" ], "instruction": "", "annotator": "annotator_07" }
null
CVRUl83zah
I75TtW0V7
25
[ { "text": "• Instead of using a relation network [] – expanding the set into the set of all pairs first – paired with FSPool, we skip the relation network entirely." }, { "text": "This improves the complexity of the set encoder g used in iDSPN from O ( n 2 log n ) to O ( n log n ) ." }, { "text": "Using the relation network approach does improve our results slightly (e.g. 3.5 percentage points improvement on AP 0 . 125 for 128 × 128 ), but is also a bit slower." }, { "text": "For simplicity, we therefore opted to not using relation networks." }, { "text": "The architecture of the set encoder g is Linear( 19 → 512 )–ReLU–Linear( 512 → 512 )–FSPool (Zhang et al., 2020)." }, { "text": "The main difference to DSPN is that since there is no concatenation of pairs, so the input dimensionality is 19 instead of 38 and everything is applied on sets of size n rather than sets of size n 2 ." }, { "text": "• Instead of ResNet34 to encode the input image, we use the smaller ResNet18." }, { "text": "This did not appear to affect results." }, { "text": "" }, { "text": "" }, { "text": "" }, { "text": "• We increase the batch size from 32 to 128." }, { "text": "There appeared to be no difference in results between the two, with 128 being faster by making better use of parallelization." }, { "text": "• We use Nesterov’s Accelerated Gradient (Nesterov, 1983) with a momentum parameter of 0." }, { "text": "instead of standard gradient descent without momentum." }, { "text": "• Instead of fixing the number of iterations at 10 like DSPN, we set the number of iterations to 20 at the start of training and change it to 40 after 50 epochs." }, { "text": "This had slightly better training loss than starting training with 40 iterations." 
}, { "text": "We have tried a few other ways of increasing the number of iterations throughout training (going from 10 to 20 to 30 to 40 iterations, smooth increase from 1 to 40 over the epochs, randomly sampling an iteration between 20 and 40 every batch), which had little impact on results. " }, { "text": "• We drop the learning rate after 90 epochs from 1e-3 to 1e-4 for the last 10 epochs." }, { "text": "This slightly improved training loss while also reducing variance in epoch-to-epoch validation loss." }, { "text": "• In preliminary experiments, we rarely observed spikes in the training loss." }, { "text": "Clipping the gradients in the inner optimization to a maximum L2 norm of 10 seemed to help." } ]
[ { "text": "• Instead of using a relation network (Santoro et al., 2017) – expanding the set into the set of all pairs first – paired with FSPool, we skip the relation network entirely." }, { "text": "This improves the complexity of the set encoder g used in iDSPN from O ( n 2 log n ) to O ( n log n ) ." }, { "text": "Using the relation network approach would improve our results slightly (e.g. 3.5 percentage points improvement on AP 0 .for 128 × 128 ), but is also a bit slower." }, { "text": "For simplicity, we therefore opted to not using relation networks." }, { "text": "The architecture of the set encoder g in iDSPN is Linear( 19 → 512 )–ReLU–Linear( 512 → 512 )– FSPool." }, { "text": "The main difference to DSPN is that since there is no concatenation of pairs, so the input dimensionality is 19 instead of 38 and everything is applied on sets of size n rather than sets of size n 2 ." }, { "text": "• Instead of ResNet34 to encode the input image, we use the smaller ResNet18." }, { "text": "This did not appear to affect results." }, { "text": "• Instead of using a learned initial set Y 0 as in DSPN, we find that it makes no difference to randomly sample the initial set for every example." }, { "text": "We therefore use the latter for simplicity." }, { "text": "In initial experiments we found that even initializing every element to 0 causes no problems." }, { "text": "• We increase the batch size from 32 to 128." }, { "text": "There appeared to be no difference in results between the two, with 128 being faster by making better use of parallelization." }, { "text": "• We use Nesterov’s Accelerated Gradient (Nesterov, 1983) with a momentum parameter of 0." }, { "text": "instead of standard gradient descent without momentum." }, { "text": "• Instead of fixing the number of iterations at 10 like DSPN, we set the number of iterations to 20 at the start of training and change it to 40 after 50 epochs." 
}, { "text": "This had slightly better training loss than starting training with 40 iterations." }, { "text": "We have tried a few other ways of increasing the number of iterations throughout training (going from 10 to 20 to 30 to 40 iterations, smooth increase from 1 to 40 over the epochs, randomly sampling an iteration between 20 and 40 every batch), which had little impact on results. iDSPN training was stable in all of these configurations." }, { "text": "• We drop the learning rate after 90 epochs from 1e-3 to 1e-4 for the last 10 epochs." }, { "text": "This slightly improved training loss while also reducing variance in epoch-to-epoch validation loss." }, { "text": "• In preliminary experiments, we rarely observed spikes in the training loss." }, { "text": "Clipping the gradients in the inner optimization to a maximum L2 norm of 10 seemed to help." } ]
lLwt-9RJ2tm.XJsauLjck.03
That said, one might still question whether it is possible to match the solution quality of a givenψ -approximate offline algorithm for the maximization objectives in the models of computation we consider. We answer this in the affirmative for at least the dissimilarity objective of [15]; ourstructural decomposition of the cost function and its subsequent implications carry over identically. In particular for this cost function, our results imply (1 + o (1)) ψ -approximate algorithms for HCin weighted graphs that use ( i ) a single-pass and e
That said, one can further question whether it is possible to match the solution quality of any given ψ -approximate offline algorithm for the maximization objectives in the models of computation we consider. We answer this in the affirmative; we can in fact achieve even stronger performance guarantees for both objectives in the sublinear resource regime by exploiting the fact that their corresponding optimal hierarchies have large objective function values 5 , allowing us to tolerate even larger additive errors in our cut-sparsifiers. A straightforward application of our structural decomposition of the cost function along with its downstream implications in each of the three models of computation directly gives us (1 − o (1 /ψ )) ψ -approximate algorithms for both HC maximization objectives in weighted graphs that use ( i ) a single-pass and e
{ "annotation": [ "Development" ], "instruction": "", "annotator": "annotator_07" }
null
lLwt-9RJ2tm
XJsauLjck
3
[ { "text": "That said, one might still question whether it is possible to match the solution quality of a givenψ -approximate offline algorithm for the maximization objectives in the models of computation we consider." }, { "text": "We answer this in the affirmative for at least the dissimilarity objective of [15]; ourstructural decomposition of the cost function and its subsequent implications carry over identically. In particular for this cost function, our results imply (1 + o (1))" }, { "text": "ψ -approximate algorithms for HCin weighted graphs that use ( i ) a single-pass and e" } ]
[ { "text": "That said, one can further question whether it is possible to match the solution quality of any given ψ -approximate offline algorithm for the maximization objectives in the models of computation we consider." }, { "text": "We answer this in the affirmative; we can in fact achieve even stronger performance guarantees for both objectives in the sublinear resource regime by exploiting the fact that their corresponding optimal hierarchies have large objective function values 5 , allowing us to tolerate even larger additive errors in our cut-sparsifiers. A straightforward application of our structural decomposition of the cost function along with its downstream implications in each of the three models of computation directly gives us (1 − o (1 /ψ ))" }, { "text": "ψ -approximate algorithms for both HC maximization objectives in weighted graphs that use ( i ) a single-pass and e" } ]
9ALnOEcGN_.4eEIRZ-dm.00
We need to point out that the idea of designing a continuous space for combinatorial optimization problems has been tried by the heatmaps approaches in the literature [40, 25, 17, 36]. However, thereare several major distinctions between the existing methods and our proposed one. Previous workgenerates heatmaps based on supervised signals (each training graph is paired with its best solution)[4, 19], which are costly to obtain. DIMES is directly optimized with gradients estimated by the REINFORCE algorithm, which do not require supervised signals. As a result, DIMES can scale tolarge graphs with up to tens of thousands of nodes, and predict (nearly) optimal solutions withoutthe need for costly generation of supervised training data or human specification of problem-specificheuristics.
We need to point out that the idea of designing a continuous space for combinatorial optimization problems has been tried by the heatmaps approaches in the literature [40, 25, 17, 14, 36]. However, there are major distinctions between the existing methods and our DIMES. For instance, Fu et al. [17] learn to generate heatmaps via supervised learning (i.e., each training instance is paired with its best solution) [4, 19], which is very costly to obtain on large graphs. DIMES is directly optimized with gradients estimated by the REINFORCE algorithm without any supervision, so it can be trained on large graphs directly. As a result, DIMES can scale to large graphs with up to tens of thousands of nodes, and predict (nearly) optimal solutions without the need for costly generation of supervised training data or human specification of problem-specific heuristics.
{ "annotation": [ "Rewriting_medium", "Development" ], "instruction": "", "annotator": "annotator_07" }
null
9ALnOEcGN_
4eEIRZ-dm
0
[ { "text": "We need to point out that the idea of designing a continuous space for combinatorial optimization problems has been tried by the heatmaps approaches in the literature [40, 25, 17, 36]." }, { "text": "However, thereare several major distinctions between the existing methods and our proposed one." }, { "text": "Previous workgenerates heatmaps based on supervised signals (each training graph is paired with its best solution)[4, 19], which are costly to obtain." }, { "text": "DIMES is directly optimized with gradients estimated by the REINFORCE algorithm, which do not require supervised signals." }, { "text": "As a result, DIMES can scale tolarge graphs with up to tens of thousands of nodes, and predict (nearly) optimal solutions withoutthe need for costly generation of supervised training data or human specification of problem-specificheuristics." } ]
[ { "text": "We need to point out that the idea of designing a continuous space for combinatorial optimization problems has been tried by the heatmaps approaches in the literature [40, 25, 17, 14, 36]." }, { "text": "However, there are major distinctions between the existing methods and our DIMES. For instance, Fu et al." }, { "text": "[17] learn to generate heatmaps via supervised learning (i.e., each training instance is paired with its best solution) [4, 19], which is very costly to obtain on large graphs." }, { "text": "DIMES is directly optimized with gradients estimated by the REINFORCE algorithm without any supervision, so it can be trained on large graphs directly." }, { "text": "As a result, DIMES can scale to large graphs with up to tens of thousands of nodes, and predict (nearly) optimal solutions without the need for costly generation of supervised training data or human specification of problem-specific heuristics." } ]
atxti8SVk.3K9AmPwALM.16
Pascal: Scribble annotations. Table 3 shows that, without CRF post-processing, we get 74 . 1% mIoU, achieving 97 . 6% of full supervision performance; with CRF post-processing, we reach new SOTA: We get 75 . 9% mIoU, achieving 98 . 6% of full supervision performance.
Pascal: Scribble annotations. Table 3 shows that, our method consistently delivers the best performance among methods without or with CRF post-processing. We get 74 . 2% ( 76 . 1% ) mIoU, achieving 97 . 5% ( 98 . 4% ) of full supervision performance in these two categories respectively.
{ "annotation": [ "Content_substitution", "Rewriting_light" ], "instruction": "", "annotator": "annotator_02" }
{ "annotation": [ "Content_substitution", "Rewriting_medium" ], "instruction": "", "annotator": "annotator_07" }
atxti8SVk
3K9AmPwALM
16
[ { "text": "Pascal: Scribble annotations." }, { "text": "Table 3 shows that, without CRF post-processing, we get 74 . 1% mIoU, achieving 97 . 6% of full supervision performance; with CRF post-processing, we reach new" }, { "text": "SOTA: We get 75 ." }, { "text": "9% mIoU, achieving 98 ." }, { "text": "6% of full supervision performance." } ]
[ { "text": "Pascal: Scribble annotations." }, { "text": "Table 3 shows that, our method consistently delivers the best performance among methods without or with CRF post-processing." }, { "text": "We get 74 ." }, { "text": "2% ( 76 . 1% ) mIoU, achieving 97 ." }, { "text": "5% ( 98 . 4% ) of full supervision performance in these two categories respectively." } ]
ByZyHzZC-.HktKf7-AW.01
Our work is also related to other work on the importance of noise in SGDs, which have been previously explored. The main inspiration for having a learning rate schedule is to anneal noise (Bottou, 1998). Neelakantan et al. (2015) observe empirically that adding noise can aid optimization of very deep networks. Our analysis allows us to derive the impact of the gradient noise in the SGD stationary distribution. Additionally, our work also provides intuition toward explaining the recently proposed Cyclic Learning Rate (CLR) schedule (Smith, 2015). CLR schedules have demonstrated good optimization and generalization performances, but are grounded on empirical observation rather than on a theoretical understanding. We show that one can replace learning rate annealing with an equivalent batch size schedule. It suggests that the benefit of CLR relates to the noise that it induces and can be thought of as mixing in Monte Carlo Markov Chain (MCMC) methods. In the MCMC setting, annealing processes enable better mixing (Graham & Storkey, 2017).
Our work is also related to the importance of noise in SGD, which has been previously explored. The main inspiration behind learning rate schedule has been shown to be noise annealing (Bottou, 1998). Neelakantan et al. (2015) observe empirically that adding noise can aid optimization of very deep networks. Our analysis allows us to derive the impact of the gradient’s noise in the SGD stationary distribution. Additionally, our work also provides intuitions toward explaining the recently proposed Cyclic learning rate (CLR) schedule (Smith, 2015). Cyclic learning rate schedules have demonstrated good optimization and generalization performances, but are grounded on empirical observation. We also show that one can replace learning rate annealing with an equivalent batch size schedule. It suggests that the benefit of cyclic learning rate relates to the noise that it induces.
{ "annotation": [ "Content_deletion", "Rewriting_light" ], "instruction": "Remove unnecessary content in the last sentence.", "annotator": "annotator_09" }
{ "annotation": [ "Concision", "Rewriting_light" ], "instruction": "Make the last sentence shorter, only keep the main idea. Slightly concise this paragraph and improve the english.", "annotator": "annotator_07" }
ByZyHzZC-
HktKf7-AW
1
[ { "text": "Our work is also related to other work on the importance of noise in SGDs, which have been previously explored." }, { "text": "The main inspiration for having a learning rate schedule is to anneal noise (Bottou, 1998)." }, { "text": "Neelakantan et al." }, { "text": "(2015) observe empirically that adding noise can aid optimization of very deep networks." }, { "text": "Our analysis allows us to derive the impact of the gradient noise in the SGD stationary distribution." }, { "text": "Additionally, our work also provides intuition toward explaining the recently proposed Cyclic Learning Rate (CLR) schedule (Smith, 2015)." }, { "text": "CLR schedules have demonstrated good optimization and generalization performances, but are grounded on empirical observation rather than on a theoretical understanding." }, { "text": "We show that one can replace learning rate annealing with an equivalent batch size schedule." }, { "text": "It suggests that the benefit of CLR relates to the noise that it induces and can be thought of as mixing in Monte Carlo Markov Chain (MCMC) methods. In the MCMC setting, annealing processes enable better mixing (Graham & Storkey, 2017)." } ]
[ { "text": "Our work is also related to the importance of noise in SGD, which has been previously explored." }, { "text": "The main inspiration behind learning rate schedule has been shown to be noise annealing (Bottou, 1998)." }, { "text": "Neelakantan et al." }, { "text": "(2015) observe empirically that adding noise can aid optimization of very deep networks." }, { "text": "Our analysis allows us to derive the impact of the gradient’s noise in the SGD stationary distribution." }, { "text": "Additionally, our work also provides intuitions toward explaining the recently proposed Cyclic learning rate (CLR) schedule (Smith, 2015)." }, { "text": "Cyclic learning rate schedules have demonstrated good optimization and generalization performances, but are grounded on empirical observation." }, { "text": "We also show that one can replace learning rate annealing with an equivalent batch size schedule." }, { "text": "It suggests that the benefit of cyclic learning rate relates to the noise that it induces." } ]
u9NaukzyJ-.hh0KECXQLv.11
Design A supportstwo sorts of medication entries: drug or phys- ical activity. Each drug entry in the calendar is labelled with the name of the drug and suffixed with bracketed drug dosage. The suffix -WF indicates that the drug should be administered with food. Physical activity entries have a full-color fill, a dashed border, and a label indicating the name of the activity. All other calendar entries are represented with rectangles filled with different shades of grey.
Design A supports medication (or drug) entries and physical activ- ities. Each drug entry in the calendar is labelled with the name of the drug and suffixed with bracketed drug dosage. The suffix -WF indicates that the drug should be administered with food. Physical activity entries have a full-color fill, a dashed border, and a label indicating the name of the activity. All other calendar entries are represented with rectangles filled with different shades of grey.
{ "annotation": [ "Rewriting_medium" ], "instruction": "Make this paragraph a bit more fluid.", "annotator": "annotator_03" }
{ "annotation": [ "Rewriting_medium" ], "instruction": "I want to rewrite the first sentence.", "annotator": "annotator_09" }
u9NaukzyJ-
hh0KECXQLv
11
[ { "text": "Design A supportstwo sorts of medication entries: drug or phys- ical activity." }, { "text": "Each drug entry in the calendar is labelled with the name of the drug and suffixed with bracketed drug dosage." }, { "text": "The suffix -WF indicates that the drug should be administered with food." }, { "text": "Physical activity entries have a full-color fill, a dashed border, and a label indicating the name of the activity." }, { "text": "All other calendar entries are represented with rectangles filled with different shades of grey." } ]
[ { "text": "Design A supports medication (or drug) entries and physical activ- ities." }, { "text": "Each drug entry in the calendar is labelled with the name of the drug and suffixed with bracketed drug dosage." }, { "text": "The suffix -WF indicates that the drug should be administered with food." }, { "text": "Physical activity entries have a full-color fill, a dashed border, and a label indicating the name of the activity." }, { "text": "All other calendar entries are represented with rectangles filled with different shades of grey." } ]
CVRUl83zah.I75TtW0V7.04
Because g is permutation-invariant, any ordering for the elements in Y has the same value for L . In the forward pass of the model, the arg min is approximated by running a fixed number of gradient descent steps. In the backward pass,Zhang et al. (2019) backpropagate through the gradient descent iterations in order to compute the gradients of the training objective with respect to the input vector z and the parameters θ of the encoder.
Because g is permutation-invariant, any ordering for the elements in Y has the same value for L . In the forward pass of the model, the arg min is approximated by running a fixed number of gradient descent steps. In the backward pass, the goal is to differentiate Equation 7 with respect to the input vector z and the parameters θ of the encoder. To do this, Zhang et al. (2019) unroll the gradient descent applied in the forward pass and backpropagate through each gradient descent step.
{ "annotation": [ "Rewriting_medium" ], "instruction": "Add a sentence to explain the last sentence.", "annotator": "annotator_09" }
{ "annotation": [ "Rewriting_medium" ], "instruction": "Improve the logical flow of the last half of the paragraph.", "annotator": "annotator_07" }
CVRUl83zah
I75TtW0V7
4
[ { "text": "Because g is permutation-invariant, any ordering for the elements in Y has the same value for L ." }, { "text": "In the forward pass of the model, the arg min is approximated by running a fixed number of gradient descent steps." }, { "text": "In the backward pass,Zhang et al. (2019) backpropagate through the gradient descent iterations in order to compute the gradients of the training objective with respect to the input vector z and the parameters θ of the encoder. " } ]
[ { "text": "Because g is permutation-invariant, any ordering for the elements in Y has the same value for L ." }, { "text": "In the forward pass of the model, the arg min is approximated by running a fixed number of gradient descent steps." }, { "text": "In the backward pass, the goal is to differentiate Equation 7 with respect to the input vector z and the parameters θ of the encoder. To do this, Zhang et al. (2019) unroll the gradient descent applied in the forward pass and backpropagate through each gradient descent step." } ]
cW17DDjQa_.6iDdN7-bYz.00
We propose an algorithm to solve above optimization problem (3). The optimization problem contains non-continuous indicator function in constraint (3d, 3c), and non-convex constraint (3b), which make the problem difficult to solve. Therefore, we first reformulate the inequality constraints as soft regularizations and introduce Minimax optimization with dual variables. Then we tackle the non-differentiable objective function using self-defined numerical differentiation. At last, we summarize all the optimization details into a gradient-based optimization formulation.
To address the optimization problem (3), we adopts the alternating direction method of multipliers (ADMM) for the reformulation. In details, the optimization problem contains non-continuous indicator function in constraint (3c, 3d), and non-convex constraint (3b), which make the problem difficult to solve. Therefore, we first reformulate the inequality constraints as soft regularizations and introduce Minimax optimization with dual variables. Then we tackle the non-differentiable objective function using self-defined numerical differentiation. At last, we summarize all the optimization details into a gradient-based optimization formulation.
{ "annotation": [ "Development" ], "instruction": "", "annotator": "annotator_03" }
{ "annotation": [ "Development" ], "instruction": "", "annotator": "annotator_07" }
cW17DDjQa_
6iDdN7-bYz
0
[ { "text": "We propose an algorithm to solve above optimization problem (3)." }, { "text": "The optimization problem contains non-continuous indicator function in constraint (3d, 3c), and non-convex constraint (3b), which make the problem difficult to solve." }, { "text": "Therefore, we first reformulate the inequality constraints as soft regularizations and introduce Minimax optimization with dual variables." }, { "text": "Then we tackle the non-differentiable objective function using self-defined numerical differentiation." }, { "text": "At last, we summarize all the optimization details into a gradient-based optimization formulation." } ]
[ { "text": "To address the optimization problem (3), we adopts the alternating direction method of multipliers (ADMM) for the reformulation." }, { "text": "In details, the optimization problem contains non-continuous indicator function in constraint (3c, 3d), and non-convex constraint (3b), which make the problem difficult to solve." }, { "text": "Therefore, we first reformulate the inequality constraints as soft regularizations and introduce Minimax optimization with dual variables." }, { "text": "Then we tackle the non-differentiable objective function using self-defined numerical differentiation." }, { "text": "At last, we summarize all the optimization details into a gradient-based optimization formulation." } ]
33RNh69fYq.kMvWVl725x.02
Setup . Anomaly detection aims to detect whether an image contains anomalous regions. Theperformance is evaluated on MVTec-AD [3]. The image size is selected as 224 × 224 , and the size forresizing feature maps is set as 14 × 14 . The feature maps from stage-1 to stage-4 of EfficientNet-b4[37] respectively have the channel of 24, 32, 56, and 160, and they are resized and concatenated together to form a 272-channel feature map. The reduced channel dimension is set as 256. AdamWoptimizer [18] with weight decay 1 × 10 − 4 is used for training. Our model is trained for 1000 epochs on 8 GPUs (NVIDIA Tesla V100 16GB) with batch size 64. The learning rate is 1 × 10 − 4 initially, and dropped by 0.1 after 800 epochs. The neighbor size is set as 7 × 7. The jittering scale and jitteringprobability are chosen as 20 and 1, respectively. The evaluation is run with 5 random seeds.
Setup . Anomaly detection aims to detect whether an image contains anomalous regions. Theperformance is evaluated on MVTec-AD [4]. The image size is selected as 224 × 224 , and the size forresizing feature maps is set as 14 × 14 . The feature maps from stage-1 to stage-4 of EfficientNet-b4[39] are resized and concatenated together to form a 272-channel feature map. The reduced channel dimension is set as 256. AdamW optimizer [20] with weight decay 1 × 10 − 4 is used. Our model is trained for 1000 epochs on 8 GPUs (NVIDIA Tesla V100 16GB) with batch size 64. The learning rate is 1 × 10 − 4 initially, and dropped by 0.1 after 800 epochs. The neighbor size, jittering scale, andjittering probability are set as 7 × 7, 20, and 1, respectively. The evaluation is run with 5 random seeds.
{ "annotation": [ "Concision" ], "instruction": "Remove some details on model training to make the paragraph more concise.", "annotator": "annotator_04" }
{ "annotation": [ "Concision" ], "instruction": "Remove unnecessary details to shorten this paragraph.", "annotator": "annotator_07" }
33RNh69fYq
kMvWVl725x
2
[ { "text": "Setup ." }, { "text": "Anomaly detection aims to detect whether an image contains anomalous regions." }, { "text": "Theperformance is evaluated on MVTec-AD [3]." }, { "text": "The image size is selected as 224 × 224 , and the size forresizing feature maps is set as 14 × 14 ." }, { "text": "The feature maps from stage-1 to stage-4 of EfficientNet-b4[37] respectively have the channel of 24, 32, 56, and 160, and they are resized and concatenated together to form a 272-channel feature map." }, { "text": "The reduced channel dimension is set as 256." }, { "text": "AdamWoptimizer [18] with weight decay 1 × 10 − 4 is used for training." }, { "text": "Our model is trained for 1000 epochs on 8 GPUs (NVIDIA Tesla V100 16GB) with batch size 64." }, { "text": "The learning rate is 1 × 10 − 4 initially, and dropped by 0.1 after 800 epochs." }, { "text": "The neighbor size is set as 7 × 7. The jittering scale and jitteringprobability are chosen as 20 and 1, respectively." }, { "text": "The evaluation is run with 5 random seeds." } ]
[ { "text": "Setup ." }, { "text": "Anomaly detection aims to detect whether an image contains anomalous regions." }, { "text": "Theperformance is evaluated on MVTec-AD [4]." }, { "text": "The image size is selected as 224 × 224 , and the size forresizing feature maps is set as 14 × 14 ." }, { "text": "The feature maps from stage-1 to stage-4 of EfficientNet-b4[39] are resized and concatenated together to form a 272-channel feature map." }, { "text": "The reduced channel dimension is set as 256." }, { "text": "AdamW optimizer [20] with weight decay 1 × 10 − 4 is used." }, { "text": "Our model is trained for 1000 epochs on 8 GPUs (NVIDIA Tesla V100 16GB) with batch size 64." }, { "text": "The learning rate is 1 × 10 − 4 initially, and dropped by 0.1 after 800 epochs." }, { "text": "The neighbor size, jittering scale, andjittering probability are set as 7 × 7, 20, and 1, respectively." }, { "text": "The evaluation is run with 5 random seeds." } ]
MXi6uEx-hp.rdZfFcGyf9.21
In the experiment of Fig. 5, we found that in RecSim the relation of items is easy to model such that AGILE could not outperform the ablations whereas AGILE outperformed the ablations in CREATE and Grid World by correctly utilizing the action relation in decision-making. Then, we hypothesized that the existence of the complex relations between actionsin the environment (e.g., tools and activators in CREATE) injects the complex action relations in the environment. For instance, an appropriate pair of an activator and a tool to use in CREATE depends on the situation. To this end, we implemented the pre-defined pairings among items in RecSim such that clicks can only happen when the correct pairs of items are recommended. Since action relations are complex, AGILE is expected to outperform the ablations. Figure 14 shows that AGILE beats the baselines and in Fig. AGILE slightly but consistently outperforms the ablations. In Fig.16, AGILE outperformed AGILEGCN shows that a GAT is capable of modeling the action relations correctly and AGILE converging faster than AGILE Only Action shows that the intermediate list information is crucial to efficiently learn to attend the other half in the pairing of items.
In the experiment of Fig. 5, we found that in RecSim, the relation of items is easy to model such that AGILE could not outperform the ablations. In contrast, AGILE outperformed the ablations in CREATE and Grid World by correctly utilizing the action relation in decision-making. We hypothesize that these environments require complex relations between actions (e.g., tools and activators in CREATE). To this end, we implement the pre-defined pairings among items in RecSim such that clicks can only happen when the correct pairs of items are recommended. Since action relations are complex, AGILE is expected to outperform the ablations. Figure 14 shows that AGILE beats the baselines and in Fig.15 AGILE slightly but consistently outperforms the ablations. In Fig.16, AGILE outperforming AGILE-GCN shows that a GAT is capable of modeling the action relations correctly. AGILE converges faster than AGILE Only-Action. This shows that the state and the partially constructed list are crucial to learning to attend the other half in pairing items efficiently.
{ "annotation": [ "Rewriting_medium", "Content_deletion" ], "instruction": "Make this paragraph shorter and easier to understand", "annotator": "annotator_10" }
{ "annotation": [ "Concision" ], "instruction": "Simplify the less essential ideas of the paragraph to make it more concise.", "annotator": "annotator_03" }
MXi6uEx-hp
rdZfFcGyf9
21
[ { "text": "In the experiment of Fig. 5, we found that in RecSim the relation of items is easy to model such that " }, { "text": "AGILE could not outperform the ablations whereas AGILE outperformed the ablations in CREATE and Grid World by correctly utilizing the action relation in decision-making." }, { "text": "Then, we hypothesized that the existence of the complex relations between actionsin the environment (e.g., tools and activators in CREATE) injects the complex action relations in the environment. For instance, an appropriate pair of an activator and a tool to use in CREATE depends on the situation." }, { "text": "To this end, we implemented the pre-defined pairings among items in RecSim such that clicks can only happen when the correct pairs of items are recommended." }, { "text": "Since action relations are complex, AGILE is expected to outperform the ablations." }, { "text": "Figure 14 shows that AGILE beats the baselines and in Fig. AGILE slightly but consistently outperforms the ablations." }, { "text": "In Fig.16, AGILE outperformed AGILEGCN shows that a GAT is capable of modeling the action relations correctly and AGILE converging faster than AGILE Only Action shows that the intermediate list information is crucial to efficiently learn to attend the other half in the pairing of items." } ]
[ { "text": "In the experiment of Fig. 5, we found that in RecSim, the relation of items is easy to model such that AGILE could not outperform the ablations." }, { "text": "In contrast, AGILE outperformed the ablations in CREATE and Grid World by correctly utilizing the action relation in decision-making." }, { "text": "We hypothesize that these environments require complex relations between actions (e.g., tools and activators in CREATE)." }, { "text": "To this end, we implement the pre-defined pairings among items in RecSim such that clicks can only happen when the correct pairs of items are recommended." }, { "text": "Since action relations are complex, AGILE is expected to outperform the ablations." }, { "text": "Figure 14 shows that AGILE beats the baselines and in Fig.15 AGILE slightly but consistently outperforms the ablations." }, { "text": "In Fig.16, AGILE outperforming AGILE-GCN shows that a GAT is capable of modeling the action relations correctly. AGILE converges faster than AGILE Only-Action. This shows that the state and the partially constructed list are crucial to learning to attend the other half in pairing items efficiently." } ]
NwOG107NKJ.0PPYM22rdB.02
Influence on the Github platform can be quantified by the number of followers, stars, mentions, quotes, and up-votes received from other users. Social network metrics such as centrality indicate how broadly influence extends (e.g. geographic interest) Weber and Luo [2014]. Other features includeproject volume, documentation volume, presence ofsupporting files, codevolume and standardlibrary usage. The popularity velocity can be measured by (Total_Stars / project_life). Few studies have examined influence of user-popularity, repo-popularity, and triadic relationships in dynamic graphs.
Influence on the Github platform can be quantified by the number of followers, stars, mentions, quotes, and up-votes received from other users. Social network metrics such as centrality indicate how broadly influence extends (e.g. geographic interest) [Weber and Luo, 2014]. Other features include project size, file volume, critical folder, lines of code and calling of basic functions. The popularity rate can be measured by (Total_Stars / project_life). Few studies have examined influence of user-popularity, repo-popularity, and triadic relationships in dynamic graphs.
{ "annotation": [ "Rewriting_light" ], "instruction": "Make the use of a citation in the second sentence correct. Update the third sentence.", "annotator": "annotator_10" }
{ "annotation": [ "Rewriting_light" ], "instruction": "Improve the readability of this paragraph.", "annotator": "annotator_03" }
NwOG107NKJ
0PPYM22rdB
2
[ { "text": "Influence on the Github platform can be quantified by the number of followers, stars, mentions, quotes, and up-votes received from other users." }, { "text": "Social network metrics such as centrality indicate how broadly influence extends (e.g. geographic interest)" }, { "text": "Weber and Luo [2014]." }, { "text": "Other features includeproject volume, documentation volume, presence ofsupporting files, codevolume and standardlibrary usage." }, { "text": "The popularity velocity can be measured by (Total_Stars / project_life)." }, { "text": "Few studies have examined influence of user-popularity, repo-popularity, and triadic relationships in dynamic graphs." } ]
[ { "text": "Influence on the Github platform can be quantified by the number of followers, stars, mentions, quotes, and up-votes received from other users." }, { "text": "Social network metrics such as centrality indicate how broadly influence extends (e.g. geographic interest)" }, { "text": "[Weber and Luo, 2014]." }, { "text": "Other features include project size, file volume, critical folder, lines of code and calling of basic functions." }, { "text": "The popularity rate can be measured by (Total_Stars / project_life)." }, { "text": "Few studies have examined influence of user-popularity, repo-popularity, and triadic relationships in dynamic graphs." } ]
ByZyHzZC-.HktKf7-AW.00
The relationship between stochastic gradient descent (SGD) and sampling a posterior distribution via stochastic Langevin methods has been the subject of discussion in a number of papers (Chen et al., 2014; Ding et al., 2014; Vollmer et al., 2015; Welling & Teh, 2011; Shang et al., 2015; Sato & Nakagawa, 2014). In particular, Mandt et al. (2017) describe the dynamics of stochastic gradient descent (SGD) as a stochastic process that can be divided into three distinct phases. In the first phase, weights diffuse and move away from the initialization. In the second phase the gradient magnitude dominates the noise in the gradient estimate. In the final phase, the weights are near the optimum. (Shwartz-Ziv & Tishby, 2017) make related observations from an information theoretic point of view and suggest the diffusion behavior of the parameters in the last phase leads to the minimization of mutual information between the input and hidden representation. In a similar vein, we relate the SGD dynamics to the stationary distribution of the stochastic differential equation. Our derivation bears similarity with Mandt et al. However, while Mandt et al. (2017) aims at performing approximate Bayesian inference, our end goal is to analyse the stationary distribution reached by SGD.
The relationship between stochastic gradient descent (SGD) and sampling a posterior distribution via stochastic Langevin methods has been the subject of discussion in a number of papers (Chen et al., 2014; Ding et al., 2014; Vollmer et al., 2015; Welling & Teh, 2011; Shang et al., 2015; Sato & Nakagawa, 2014). In particular, Mandt et al. (2017) describe the dynamics of stochastic gradient descent (SGD) as a stochastic process that can be divided into three distinct phases. In the first phase, weights diffuse and move away from the initialization. In the second phase the gradient magnitude dominates the noise in the gradient estimate. In the final phase, the weights are near the optimum. (Shwartz-Ziv & Tishby, 2017) make related observations from an information theoretic point of view and suggest the diffusion behavior of the parameters in the last phase leads to the minimization of mutual information between the input and hidden representation. In a similar vein, we relate the SGD dynamics to the stationary distribution of the stochastic differential equation. Our derivation bears similarity with Mandt et al. However, while Mandt et al. (2017) study SGD as an approximate Bayesian inference method in the final phase of optimization in a locally convex setting, our end goal is to analyse the stationary distribution over the entire parameter space reached by SGD. Further, our analysis allows us to compare the probability of SGD ending up in one minima over another, which is novel in our case.
{ "annotation": [ "Development", "Content_addition" ], "instruction": "", "annotator": "annotator_09" }
{ "annotation": [ "Content_addition" ], "instruction": "", "annotator": "annotator_07" }
ByZyHzZC-
HktKf7-AW
0
[ { "text": "The relationship between stochastic gradient descent (SGD) and sampling a posterior distribution via stochastic Langevin methods has been the subject of discussion in a number of papers (Chen et al., 2014;" }, { "text": "Ding et al., 2014; Vollmer et al., 2015; Welling & Teh, 2011; Shang et al., 2015; Sato & Nakagawa, 2014)." }, { "text": "In particular, Mandt et al." }, { "text": "(2017) describe the dynamics of stochastic gradient descent (SGD) as a stochastic process that can be divided into three distinct phases." }, { "text": "In the first phase, weights diffuse and move away from the initialization." }, { "text": "In the second phase the gradient magnitude dominates the noise in the gradient estimate." }, { "text": "In the final phase, the weights are near the optimum." }, { "text": "(Shwartz-Ziv & Tishby, 2017) make related observations from an information theoretic point of view and suggest the diffusion behavior of the parameters in the last phase leads to the minimization of mutual information between the input and hidden representation." }, { "text": "In a similar vein, we relate the SGD dynamics to the stationary distribution of the stochastic differential equation." }, { "text": "Our derivation bears similarity with Mandt et al." }, { "text": "However, while Mandt et al." }, { "text": "(2017) aims at performing approximate Bayesian inference, our end goal is to analyse the stationary distribution reached by " }, { "text": "SGD." } ]
[ { "text": "The relationship between stochastic gradient descent (SGD) and sampling a posterior distribution via stochastic Langevin methods has been the subject of discussion in a number of papers (Chen et al., 2014;" }, { "text": "Ding et al., 2014; Vollmer et al., 2015; Welling & Teh, 2011; Shang et al., 2015; Sato & Nakagawa, 2014)." }, { "text": "In particular, Mandt et al." }, { "text": "(2017) describe the dynamics of stochastic gradient descent (SGD) as a stochastic process that can be divided into three distinct phases." }, { "text": "In the first phase, weights diffuse and move away from the initialization." }, { "text": "In the second phase the gradient magnitude dominates the noise in the gradient estimate." }, { "text": "In the final phase, the weights are near the optimum." }, { "text": "(Shwartz-Ziv & Tishby, 2017) make related observations from an information theoretic point of view and suggest the diffusion behavior of the parameters in the last phase leads to the minimization of mutual information between the input and hidden representation." }, { "text": "In a similar vein, we relate the SGD dynamics to the stationary distribution of the stochastic differential equation." }, { "text": "Our derivation bears similarity with Mandt et al." }, { "text": "However, while Mandt et al." }, { "text": "(2017) study SGD as an approximate Bayesian inference method in the final phase of optimization in a locally convex setting, our end goal is to analyse the stationary distribution over the entire parameter space reached by SGD." }, { "text": "Further, our analysis allows us to compare the probability of SGD ending up in one minima over another, which is novel in our case." } ]
7_CwM-IzWd.zcm6f5HDI.05
During training, the uni-modal branch largely focuses on the associated modality. The fusion modules generatecross-modal context information from the uni-modal branches and pass it back to them. Both ˆ y 0 and ˆ y 1 depend on information from both modalities. We end up with two functions, f 0 and f 1 , corresponding to the two uni-modal branches:
During training, each uni-modal branch largely focuses on its associate input modality. The fusion modules generate context representation using all modalities and feed such information to the unimodal branches. Both ˆ y 0 and ˆ y 1 depend on information from both modalities. We end up with two functions, f 0 and f 1 , corresponding to the two uni-modal branches:
{ "annotation": [ "Rewriting_medium" ], "instruction": "Make the sentence understandable.", "annotator": "annotator_08" }
{ "annotation": [ "Rewriting_light" ], "instruction": "Improve the wording of this paragraph.", "annotator": "annotator_02" }
7_CwM-IzWd
zcm6f5HDI
5
[ { "text": "During training, the uni-modal branch largely focuses on the associated modality." }, { "text": "The fusion modules generatecross-modal context information from the uni-modal branches and pass it back to them." }, { "text": "Both ˆ y 0 and ˆ y 1 depend on information from both modalities." }, { "text": "We end up with two functions, f 0 and f 1 , corresponding to the two uni-modal branches:" } ]
[ { "text": "During training, each uni-modal branch largely focuses on its associate input modality." }, { "text": "The fusion modules generate context representation using all modalities and feed such information to the unimodal branches." }, { "text": "Both ˆ y 0 and ˆ y 1 depend on information from both modalities." }, { "text": "We end up with two functions, f 0 and f 1 , corresponding to the two uni-modal branches:" } ]
eyheq0JfG.lDLi0nFVcl.00
For example, using mixup on top of random scaling and cropping improves the results by 0.4%. This suggests that thanks to the proposed methods, we are getting closer than ever to the capacity of a real-valued model (which is amenable to stronger augmentations).
For example, using mixup on top of random scaling and cropping improves the results by 0.4%. In comparison, when we trained Real-to-Bin Martinez et al. (2020) with mixup, the accuracy dropped by 0.25% for Stage I, and 0.8% for Stage II. This suggests that, thanks to the proposed methods, we are getting closer than ever to the capacity of a real-valued model (which is amenable to stronger augmentations).
{ "annotation": [ "Content_addition", "Development" ], "instruction": "", "annotator": "annotator_07" }
null
eyheq0JfG
lDLi0nFVcl
0
[ { "text": "For example, using mixup on top of random scaling and cropping improves the results by 0.4%." }, { "text": "" }, { "text": "" }, { "text": "This suggests that thanks to the proposed methods, we are getting closer than ever to the capacity of a real-valued model (which is amenable to stronger augmentations)." } ]
[ { "text": "For example, using mixup on top of random scaling and cropping improves the results by 0.4%." }, { "text": "In comparison, when we trained Real-to-Bin Martinez et al." }, { "text": "(2020) with mixup, the accuracy dropped by 0.25% for Stage I, and 0.8% for Stage II." }, { "text": "This suggests that, thanks to the proposed methods, we are getting closer than ever to the capacity of a real-valued model (which is amenable to stronger augmentations)." } ]
CVRUl83zah.I75TtW0V7.05
Equivariance of DSPN We now discuss the particular form of equivariance that DSPN takes, which has not been done before in the context of multisets. The gradient of the permutation-invariant encoder g is always multiset-equivariant, but depending on the encoder, it is not necessarily setequivariant. Zhang et al. find that FSPool-based encoders (Zhang et al., 2020) perform by far the best among the ones they have tried. With this type of encoder, DSPN becomes exclusively multiset-equivariant . This is due to the use of numerical sorting in FSPool: the Jacobian of sorting is exclusively multiset-equivariant. We prove this in Appendix A.
Equivariance of DSPN. We now discuss the particular form of equivariance that DSPN takes, which has not been done before in the context of multisets. The gradient of the permutation-invariant encoder g with respect to the set input Y is always multiset-equivariant, but depending on the encoder, it is not necessarily set-equivariant. Zhang et al. find that FSPool-based encoders (Zhang et al., 2020) achieved by far the best results among the encoders they have tried. With FSPool, DSPN becomes exclusively multiset-equivariant to its initialization Y 0 . This is due to the use of numerical sorting in FSPool: the Jacobian of sorting is exclusively multiset-equivariant (Appendix A).
{ "annotation": [ "Development" ], "instruction": "", "annotator": "annotator_09" }
{ "annotation": [ "Rewriting_light", "Development" ], "instruction": "", "annotator": "annotator_07" }
CVRUl83zah
I75TtW0V7
5
[ { "text": "Equivariance of DSPN We now discuss the particular form of equivariance that DSPN takes, which has not been done before in the context of multisets." }, { "text": "The gradient of the permutation-invariant encoder g is always multiset-equivariant, but depending on the encoder, it is not necessarily setequivariant." }, { "text": "Zhang et al." }, { "text": "find that FSPool-based encoders (Zhang et al., 2020) perform by far the best among the ones they have tried." }, { "text": "With this type of encoder, DSPN becomes exclusively multiset-equivariant ." }, { "text": "This is due to the use of numerical sorting in FSPool: the Jacobian of sorting is exclusively multiset-equivariant. We prove this in Appendix A." } ]
[ { "text": "Equivariance of DSPN. We now discuss the particular form of equivariance that DSPN takes, which has not been done before in the context of multisets." }, { "text": "The gradient of the permutation-invariant encoder g with respect to the set input Y is always multiset-equivariant, but depending on the encoder, it is not necessarily set-equivariant." }, { "text": "Zhang et al." }, { "text": "find that FSPool-based encoders (Zhang et al., 2020) achieved by far the best results among the encoders they have tried." }, { "text": "With FSPool, DSPN becomes exclusively multiset-equivariant to its initialization Y 0 ." }, { "text": "This is due to the use of numerical sorting in FSPool: the Jacobian of sorting is exclusively multiset-equivariant (Appendix A)." } ]
aomiOZE_m2.rxb2TiQ6bq.05
Lightweight Image SR Models. Recent years have been rising interest in investigating lightweight image SR models. These approaches try to design lightweight architectures, which mainly take advantage of recursive learning and channel splitting. Kim et al . firstly introduced recursive learning in DRCN to decrease model size (Kim et al., 2016b). Ahn et al . designed a cascading mechanism upon a residual network in CARN (Ahn et al., 2018). Hui et al . proposed a lightweight information multi-distillation network (IMDN) (Hui et al., 2019). Luo et al . designed the lattice block with butterfly structures (Luo et al., 2020). Recently, neural architecture search was introduced for image SR in FALSR (Chu et al., 2019a). Besides, model compression techniques, like knowledge distillation, have been investigated for image SR. He et al . proposed knowledge distillation based feature-affinity for efficient image SR (He et al., 2020). Lee et al . trained a teacher network to distill its knowledge to a student (Lee et al., 2020). Although those lightweight networks have achieved great progress, we still need to investigate deeper for more efficient image SR models.
Lightweight Image SR Models. Recent years have been rising interest in investigating lightweight image SR models. These approaches try to design lightweight architectures, which mainly take advantage of recursive learning and channel splitting. Kim et al . firstly decreased parameter number by utilizing recursive learning in DRCN (Kim et al., 2016b). Ahn et al . proposed CARN by designing a cascading mechanism upon a residual network (Ahn et al., 2018). Hui et al . proposed a lightweight information multi-distillation network (IMDN) (Hui et al., 2019). Luo et al . designed the lattice block with butterfly structures (Luo et al., 2020). Recently, neural architecture search was applied for image SR, like FALSR (Chu et al., 2019a). Also, model compression techniques have been explored for image SR. He et al . proposed knowledge distillation based feature-affinity for efficient image SR (He et al., 2020). Lee et al . distilled knowledge from a larger teacher network to a student one (Lee et al., 2020). Those lightweight image SR models have obtained great progress, but we still need to investigate deeper for more efficient image SR models.
{ "annotation": [ "Rewriting_medium", "Concision" ], "instruction": "Can you make my paragraph more concise?", "annotator": "annotator_09" }
{ "annotation": [ "Concision" ], "instruction": "Use shorter formulations and more direct language to make the paragraph more concise.", "annotator": "annotator_04" }
aomiOZE_m2
rxb2TiQ6bq
5
[ { "text": "Lightweight Image SR Models." }, { "text": "Recent years have been rising interest in investigating lightweight image SR models." }, { "text": "These approaches try to design lightweight architectures, which mainly take advantage of recursive learning and channel splitting." }, { "text": "Kim et al ." }, { "text": "firstly introduced recursive learning in DRCN to decrease model size (Kim et al., 2016b)." }, { "text": "Ahn et al ." }, { "text": "designed a cascading mechanism upon a residual network in CARN (Ahn et al., 2018)." }, { "text": "Hui et al ." }, { "text": "proposed a lightweight information multi-distillation network (IMDN) (Hui et al., 2019)." }, { "text": "Luo et al . designed the lattice block with butterfly structures (Luo et al., 2020)." }, { "text": "Recently, neural architecture search was introduced for image SR in FALSR (Chu et al., 2019a)." }, { "text": "Besides, model compression techniques, like knowledge distillation, have been investigated for image SR." }, { "text": "He et al ." }, { "text": "proposed knowledge distillation based feature-affinity for efficient image SR (He et al., 2020)." }, { "text": "Lee et al ." }, { "text": "trained a teacher network to distill its knowledge to a student (Lee et al., 2020)." }, { "text": "Although those lightweight networks have achieved great progress, we still need to investigate deeper for more efficient image SR models." } ]
[ { "text": "Lightweight Image SR Models." }, { "text": "Recent years have been rising interest in investigating lightweight image SR models." }, { "text": "These approaches try to design lightweight architectures, which mainly take advantage of recursive learning and channel splitting." }, { "text": "Kim et al ." }, { "text": "firstly decreased parameter number by utilizing recursive learning in DRCN (Kim et al., 2016b)." }, { "text": "Ahn et al ." }, { "text": "proposed CARN by designing a cascading mechanism upon a residual network (Ahn et al., 2018)." }, { "text": "Hui et al ." }, { "text": "proposed a lightweight information multi-distillation network (IMDN) (Hui et al., 2019)." }, { "text": "Luo et al . designed the lattice block with butterfly structures (Luo et al., 2020)." }, { "text": "Recently, neural architecture search was applied for image SR, like FALSR (Chu et al., 2019a)." }, { "text": "Also, model compression techniques have been explored for image SR." }, { "text": "He et al ." }, { "text": "proposed knowledge distillation based feature-affinity for efficient image SR (He et al., 2020)." }, { "text": "Lee et al ." }, { "text": "distilled knowledge from a larger teacher network to a student one (Lee et al., 2020)." }, { "text": "Those lightweight image SR models have obtained great progress, but we still need to investigate deeper for more efficient image SR models." } ]
gIp_U0JsFa.T3RdAsTpzN.00
Here, we present two case studies from the healthcare domain in dermatology and in clinical risk prediction using Electronic Health Records in which fairness does not transfer (Fig. 1). As per [32], we consider the setting of ‘dataset shift’, whereby a model is developed on the source data and tested on the target data 6 , which is a common setting in medical applications [78, 84, 32]. 5 A Algorithm 1 (Conditional) independence testing assessing the nature of shift S on a single variable U ∈ G .
Here, we present two case studies from the healthcare domain in dermatology and in clinical risk prediction using Electronic Health Records in which fairness does not transfer (Fig. 1). As per [27], we consider the setting of ‘dataset shift’, whereby a model is developed on the source data and tested on the target data 6 , which is a common setting in medical applications [66, 71, 27].
{ "annotation": [ "Unusable" ], "instruction": "", "annotator": "annotator_03" }
{ "annotation": [ "Unusable" ], "instruction": "", "annotator": "annotator_07" }
gIp_U0JsFa
T3RdAsTpzN
0
[ { "text": "Here, we present two case studies from the healthcare domain in dermatology and in clinical risk prediction using Electronic Health Records in which fairness does not transfer (Fig. 1)." }, { "text": "As per [32], we consider the setting of ‘dataset shift’, whereby a model is developed on the source data and tested on the target data 6 , which is a common setting in medical applications [78, 84, 32]. 5 A Algorithm 1 (Conditional) independence testing assessing the nature of shift S on a single variable U ∈ G ." } ]
[ { "text": "Here, we present two case studies from the healthcare domain in dermatology and in clinical risk prediction using Electronic Health Records in which fairness does not transfer (Fig. 1)." }, { "text": "As per [27], we consider the setting of ‘dataset shift’, whereby a model is developed on the source data and tested on the target data 6 , which is a common setting in medical applications [66, 71, 27]." } ]
7_CwM-IzWd.zcm6f5HDI.22
We report means and standard deviations of the models’ test accuracy in Table 1.[- -] The guided algorithm improves the models’ generalization performance over the vanilla algorithm in all four cases.It also outperforms the random algorithm, with the exception of ModelNet40, where their performances are very close.
We report means and standard deviations of the models’ test accuracies in Table 1.[- -] 3 RUBi does not show consistent improvement across tasks compared to the vanilla algorithm. The guided algorithm improves the models’ generalization performance over all three other methods in all four cases.
{ "annotation": [ "Content_substitution", "Development" ], "instruction": "", "annotator": "annotator_06" }
{ "annotation": [ "Content_substitution", "Rewriting_medium" ], "instruction": "", "annotator": "annotator_07" }
7_CwM-IzWd
zcm6f5HDI
22
[ { "text": "We report means and standard deviations of the models’ test accuracy in Table 1.[-\n-]" }, { "text": " The guided algorithm improves the models’ generalization performance over the vanilla algorithm in all four cases.It also outperforms the random algorithm, with the exception of ModelNet40, where their performances are very close." } ]
[ { "text": "We report means and standard deviations of the models’ test accuracies in Table 1.[-\n-]" }, { "text": "3 RUBi does not show consistent improvement across tasks compared to the vanilla algorithm. The guided algorithm improves the models’ generalization performance over all three other methods in all four cases." } ]
S1-LZxvKX.rJ009I8RX.03
Most closely related to our work are dynamic sparse reparameterization techniques that emerged only recently. Like ours, these methods adaptively alter, by certain heuristic rules, reparameterization during training. Sparse evolutionary training (Mocanu et al., 2018) used magnitude-based pruning and random growth at the end of each training epoch. NeST (Dai et al., 2017; 2018) iteratively grew and pruned parameters and neurons during training; parameter growth was guided by gradient and pruning by magnitude. Deep rewiring (Bellec et al., 2017) combined sparse reparameterization with stochastic parameter updates for training. These methods were mostly concerned with sparsifying fully connected layers and applied to relatively small and shallow networks. As will be discussed in Section 5, our method, more scalable and computationally efficient than these previous approaches, fully closed the generalization gap for the first time between training a compact sparse network and compression of a large deep CNN.
Most closely related to our work are dynamic sparse reparameterization techniques that emerged only recently. Like ours, these methods adaptively alter, by certain heuristic rules, reparameterization during training. Sparse evolutionary training (Mocanu et al., 2018) used magnitude-based pruning and random growth at the end of each training epoch. NeST (Dai et al., 2017; 2018) iteratively grew and pruned parameters and neurons during training; parameter growth was guided by gradient and pruning by magnitude. Deep rewiring (Bellec et al., 2017) combined sparse reparameterization with stochastic parameter updates for training. These methods were mostly concerned with sparsifying fully connected layers and applied to relatively small and shallow networks. We show that the method we propose in this paper is more scalable and computationally efficient than these previous approaches, while achieving better performance on deep convolutional networks.
{ "annotation": [ "Concision" ], "instruction": "Edit the last sentence of this paragraph to make it shorter and remove the reference to Section 5.", "annotator": "annotator_02" }
{ "annotation": [ "Concision" ], "instruction": "Rewrite the last sentence to make it more concise.", "annotator": "annotator_07" }
S1-LZxvKX
rJ009I8RX
3
[ { "text": "Most closely related to our work are dynamic sparse reparameterization techniques that emerged only recently." }, { "text": "Like ours, these methods adaptively alter, by certain heuristic rules, reparameterization during training." }, { "text": "Sparse evolutionary training (Mocanu et al., 2018) used magnitude-based pruning and random growth at the end of each training epoch." }, { "text": "NeST (Dai et al., 2017; 2018) iteratively grew and pruned parameters and neurons during training; parameter growth was guided by gradient and pruning by magnitude." }, { "text": "Deep rewiring (Bellec et al., 2017) combined sparse reparameterization with stochastic parameter updates for training." }, { "text": "These methods were mostly concerned with sparsifying fully connected layers and applied to relatively small and shallow networks." }, { "text": "As will be discussed in Section 5, our method, more scalable and computationally efficient than these previous approaches, fully closed the generalization gap for the first time between training a compact sparse network and compression of a large deep CNN." } ]
[ { "text": "Most closely related to our work are dynamic sparse reparameterization techniques that emerged only recently." }, { "text": "Like ours, these methods adaptively alter, by certain heuristic rules, reparameterization during training." }, { "text": "Sparse evolutionary training (Mocanu et al., 2018) used magnitude-based pruning and random growth at the end of each training epoch." }, { "text": "NeST (Dai et al., 2017; 2018) iteratively grew and pruned parameters and neurons during training; parameter growth was guided by gradient and pruning by magnitude." }, { "text": "Deep rewiring (Bellec et al., 2017) combined sparse reparameterization with stochastic parameter updates for training." }, { "text": "These methods were mostly concerned with sparsifying fully connected layers and applied to relatively small and shallow networks." }, { "text": "We show that the method we propose in this paper is more scalable and computationally efficient than these previous approaches, while achieving better performance on deep convolutional networks." } ]
XXtXW925iG.JHwYPw52XHb.00
In the previous section, we showed that the limiting diffusion exists when η and λ go to zero with a fixed ratio. However, the situation is more complicated in the general case, i.e. , the intrinsic LR ηλ → 0 while ηλ varies and is only upper bounded by some constant. A concrete example is η → 0 and λ being fixed.
In the previous section, we showed that the limiting diffusion exists when η and λ go to zero with a fixed ratio. However, the situation is more complicated in the general case, i.e. , the intrinsic LR ηλ → 0 while ηλ is upper bounded by some constant. A concrete example is η → 0 and λ being fixed.
{ "annotation": [ "Unusable" ], "instruction": "", "annotator": "annotator_07" }
null
XXtXW925iG
JHwYPw52XHb
0
[ { "text": "In the previous section, we showed that the limiting diffusion exists when η and λ go to zero with a fixed ratio." }, { "text": "However, the situation is more complicated in the general case, i.e. , the intrinsic LR ηλ → 0 while ηλ varies and is only upper bounded by some constant." }, { "text": "A concrete example is η → 0 and λ being fixed." } ]
[ { "text": "In the previous section, we showed that the limiting diffusion exists when η and λ go to zero with a fixed ratio." }, { "text": "However, the situation is more complicated in the general case, i.e. , the intrinsic LR ηλ → 0 while ηλ is upper bounded by some constant." }, { "text": "A concrete example is η → 0 and λ being fixed." } ]
aFWzpdwEna.MCecpd3utK.00
In this paper, we find that model-based offline RL’s performance significantly relies on the trade-off between model return and its uncertainty, while determining the optimal trade-off is usually challenging or intractable without access to the environment that the learned policy will be deployed to. To address this problem, we study a bi-objective formulation for model-based offline RL and develop an efficient method, Pareto policy pool (P3), that produces a pool of diverse policies on the Pareto front performing different levels of trade-offs, providing the flexibility to select the best policy for each realistic environment from the pool. P3 provides a simple and principal approach that addresses the two major challenges in model-based offline RL: “model exploitation” and generalization to different unseen states. On the D4RL benchmark, P3 substantially outperforms several recent baseline methods over multiple tasks and shows the potentiality of learning a generalizable policy when the quality of pre-collected experiences is low.
In this paper, we find that model-based offline RL’s performance significantly relies on the trade-off between model return and its uncertainty, while determining the optimal trade-off is challenging without access to the realistic environment. To address the problem, we study a bi-objective formulation for model-based offline RL and develop an efficient method that produces a pool of diverse policies on the Pareto front performing different levels of trade-offs, which provides flexibility to select the best policy in the inference stage. We extensively validate the efficacy of our method on the D4RL benchmark, where ours largely outperforms several recent baselines and exhibits promising results on low-quality datasets.
{ "annotation": [ "Concision", "Rewriting_heavy" ], "instruction": "Make this paragraph more concise by rewriting the second half.", "annotator": "annotator_02" }
{ "annotation": [ "Concision", "Content_deletion" ], "instruction": "Make this paragraph shorter.", "annotator": "annotator_07" }
aFWzpdwEna
MCecpd3utK
0
[ { "text": "In this paper, we find that model-based offline RL’s performance significantly relies on the trade-off between model return and its uncertainty, while determining the optimal trade-off is usually challenging or intractable without access to the environment that the learned policy will be deployed to." }, { "text": "To address this problem, we study a bi-objective formulation for model-based offline RL and develop an efficient method, Pareto policy pool (P3), that produces a pool of diverse policies on the Pareto front performing different levels of trade-offs, providing the flexibility to select the best policy for each realistic environment from the pool." }, { "text": "P3 provides a simple and principal approach that addresses the two major challenges in model-based offline RL: “model exploitation” and generalization to different unseen states." }, { "text": "On the D4RL benchmark, P3 substantially outperforms several recent baseline methods over multiple tasks and shows the potentiality of learning a generalizable policy when the quality of pre-collected experiences is low." } ]
[ { "text": "In this paper, we find that model-based offline RL’s performance significantly relies on the trade-off between model return and its uncertainty, while determining the optimal trade-off is challenging without access to the realistic environment." }, { "text": "To address the problem, we study a bi-objective formulation for model-based offline RL and develop an efficient method that produces a pool of diverse policies on the Pareto front performing different levels of trade-offs, which provides flexibility to select the best policy in the inference stage." }, { "text": "" }, { "text": "We extensively validate the efficacy of our method on the D4RL benchmark, where ours largely outperforms several recent baselines and exhibits promising results on low-quality datasets." } ]
YkiRt7L93m.jgDbnUD7s.01
We introduce a notion of projection between sets of probability measures supported on Euclidean spaces. The proposed definition is applicable between sets of general probability measures with different supports and possesses good computational and statistical properties. It also provides a unique solution to the projection problem under mild conditions and can replicate the geometric properties of the target measure, such as its shape and support. To achieve this, we work in the 2-Wasserstein space, that is, the set of all probability measures with finite second moments equipped with the 2-Wasserstein distance.
A notion of projection between sets of probability measures should be applicable between any set of general probability measures, replicate geometric properties of the target measure, and possess good computational and statistical properties. We introduce such a notion of projection between sets of general probability measures supported on Euclidean spaces. It provides a unique solution to the projection problem under mild conditions. To achieve this, we work in the 2-Wasserstein space, that is, the set of all probability measures with finite second moments equipped with the 2-Wasserstein distance.
{ "annotation": [ "Rewriting_medium" ], "instruction": "Please, make this paragraph easier to read.", "annotator": "annotator_01" }
{ "annotation": [ "Rewriting_heavy" ], "instruction": "Rewrite and reorganise this paragraph to improve the english and be more convincing, let the last sentence as it is.", "annotator": "annotator_07" }
YkiRt7L93m
jgDbnUD7s
1
[ { "text": "We introduce a notion of projection between sets of probability measures supported on Euclidean spaces. The proposed definition is applicable between sets of general probability measures with different supports and possesses good computational and statistical properties. " }, { "text": "It also provides a unique solution to the projection problem under mild conditions and can replicate the geometric properties of the target measure, such as its shape and support." }, { "text": "To achieve this, we work in the 2-Wasserstein space, that is, the set of all probability measures with finite second moments equipped with the 2-Wasserstein distance." } ]
[ { "text": "A notion of projection between sets of probability measures should be applicable between any set of general probability measures, replicate geometric properties of the target measure, and possess good computational and statistical properties. We introduce such a notion of projection between sets of general probability measures supported on Euclidean spaces." }, { "text": "It provides a unique solution to the projection problem under mild conditions." }, { "text": "To achieve this, we work in the 2-Wasserstein space, that is, the set of all probability measures with finite second moments equipped with the 2-Wasserstein distance." } ]
jzQGmT-R1q.ugUt9B3XaO.02
In Figure 2 we see that the networks trained in these two experiments both exhibit decreased ability to fit later target functions under a fixed optimization budget. This effect is strongest in small networks with ReLU activations, suggesting that some units may be saturating, but we see a similar trend across most architectures and prediction tasks. The sparse reward setting is particularly intriguing: we do not expect to see a monotone increase in error as the later label functions correspond to ‘easier’ learning problems (i.e. predicting the majority class will already yield reasonably low prediction error), but we do see that for equal difficulty, the network obtains greater error on the later target set than the earlier one, and this effect is significantly more pronounced than in the random labels tasks. This suggests that sparse reward signals can be particularly damaging to the ability of networks to fit new target functions.
In Figure 2 we see that most networks trained in these two experiments exhibit decreasing ability to fit later target functions under a fixed optimization budget. This effect is strongest in small networks with ReLU activations, suggesting that this capacity loss may be driven by saturated units and that this phenomenon will be easiest to detect in settings where the network architecture is not highly over-parameterized relative to the prediction task. The sparse reward setting is particularly intriguing: we do not expect to see a monotone increase in error as the later label functions correspond to ‘easier’ learning problems (i.e. predicting the majority class will already yield reasonably low prediction error), but we do see that for equal difficulty, the network obtains greater error on the later target set than the earlier one, and this effect is significantly more pronounced than in the random labels tasks. This suggests that sparse reward signals can be particularly damaging to the ability of networks to fit new target functions.
{ "annotation": [ "Content_substitution" ], "instruction": "", "annotator": "annotator_07" }
null
jzQGmT-R1q
ugUt9B3XaO
2
[ { "text": "In Figure 2 we see that the networks trained in these two experiments both exhibit decreased ability to fit later target functions under a fixed optimization budget." }, { "text": "This effect is strongest in small networks with ReLU activations, suggesting that some units may be saturating, but we see a similar trend across most architectures and prediction tasks." }, { "text": "The sparse reward setting is particularly intriguing: we do not expect to see a monotone increase in error as the later label functions correspond to ‘easier’ learning problems (i.e. predicting the majority class will already yield reasonably low prediction error), but we do see that for equal difficulty, the network obtains greater error on the later target set than the earlier one, and this effect is significantly more pronounced than in the random labels tasks." }, { "text": "This suggests that sparse reward signals can be particularly damaging to the ability of networks to fit new target functions." } ]
[ { "text": "In Figure 2 we see that most networks trained in these two experiments exhibit decreasing ability to fit later target functions under a fixed optimization budget." }, { "text": "This effect is strongest in small networks with ReLU activations, suggesting that this capacity loss may be driven by saturated units and that this phenomenon will be easiest to detect in settings where the network architecture is not highly over-parameterized relative to the prediction task." }, { "text": "The sparse reward setting is particularly intriguing: we do not expect to see a monotone increase in error as the later label functions correspond to ‘easier’ learning problems (i.e. predicting the majority class will already yield reasonably low prediction error), but we do see that for equal difficulty, the network obtains greater error on the later target set than the earlier one, and this effect is significantly more pronounced than in the random labels tasks." }, { "text": "This suggests that sparse reward signals can be particularly damaging to the ability of networks to fit new target functions." } ]
hegI87bI5S.fL6Q48sfx8.08
VZ249HR; 23.8” diagonal, 1920 × 1080 pixels) and its refresh rate was set at 75 Hz. We used an optical mouse, Logitech gaming mouse (G-PPD-002WLr; 1600 DPI). The mouse-cursor speed via the OS setting was set to the middle of the slider in the control display and “ Enhance pointer precision ” setting was turned on to match the participant’s usual settings. The experimental system was implemented with Hot soup processor 3.6 and used in full-screen mode.
VZ249HR; 23.8” diagonal, 1920 × 1080 pixels) and its refresh rate was set at 75 Hz. We used an optical mouse (Logitech gaming mouse, G-PPD-002WLr; 1600 DPI, and the mouse-cursor speed based on the OS setting was set to the middle of the slider in the control display and the “ Enhance pointer precision ” setting was turned on to match the usual settings of the participant.). The experimental system was implemented with Hot Soup Processor 3.6 and used in the full-screen mode 1 .
{ "annotation": [ "Rewriting_light" ], "instruction": "Improve the English of this paragraph", "annotator": "annotator_10" }
{ "annotation": [ "Rewriting_medium", "Rewriting_light" ], "instruction": "Slightly revise the linking between phrases.", "annotator": "annotator_07" }
hegI87bI5S
fL6Q48sfx8
8
[ { "text": "VZ249HR; 23.8” diagonal, 1920 × 1080 pixels) and its refresh rate was set at 75 Hz." }, { "text": "We used an optical mouse, Logitech gaming mouse (G-PPD-002WLr; 1600 DPI)." }, { "text": "The mouse-cursor speed via the OS setting was set to the middle of the slider in the control display and “ Enhance pointer precision ” setting was turned on to match the participant’s usual settings." }, { "text": "The experimental system was implemented with Hot soup processor 3.6 and used in full-screen mode." } ]
[ { "text": "VZ249HR; 23.8” diagonal, 1920 × 1080 pixels) and its refresh rate was set at 75 Hz." }, { "text": "We used an optical mouse (Logitech gaming mouse," }, { "text": "G-PPD-002WLr; 1600 DPI, and the mouse-cursor speed based on the OS setting was set to the middle of the slider in the control display and the “ Enhance pointer precision ” setting was turned on to match the usual settings of the participant.)." }, { "text": "The experimental system was implemented with Hot Soup Processor 3.6 and used in the full-screen mode 1 ." } ]
_nwyDQp-7.85dN7i1zNm.00
To this end, first studies in the context of meta-learning relied on probabilistic assumption (Baxter, 2000; Pentina & Lampert, 2014; Maurer et al., 2016; Amit & Meir, 2018; Yin et al., 2020) stating that meta-train and meta-test tasks distributions are all sampled i.i.d. from the same random distribution. This assumption leads to the bounds having the following form:
To this end, first studies in the context of meta-learning relied on probabilistic assumption (Baxter, 2000; Pentina & Lampert, 2014; Maurer et al., 2016; Amit & Meir, 2018; Yin et al., 2020) stating that meta-train and meta-test tasks distributions are all sampled i.i.d. from the same random distribution. Intuitively, this means that source and target tasks are independent, which does not reflect real-world applications of few-shot learning where the former are often different draws (without replacement) from the same dataset. Under this unrealistic assumption, the above-mentioned works obtained the bounds having the following form:
{ "annotation": [ "Content_addition" ], "instruction": "", "annotator": "annotator_07" }
null
_nwyDQp-7
85dN7i1zNm
0
[ { "text": "To this end, first studies in the context of meta-learning relied on probabilistic assumption (Baxter, 2000; Pentina & Lampert, 2014; Maurer et al., 2016; Amit & Meir, 2018;" }, { "text": "Yin et al., 2020) stating that meta-train and meta-test tasks distributions are all sampled i.i.d." }, { "text": "from the same random distribution." }, { "text": "" }, { "text": "This assumption leads to the bounds having the following form:" } ]
[ { "text": "To this end, first studies in the context of meta-learning relied on probabilistic assumption (Baxter, 2000; Pentina & Lampert, 2014; Maurer et al., 2016; Amit & Meir, 2018;" }, { "text": "Yin et al., 2020) stating that meta-train and meta-test tasks distributions are all sampled i.i.d." }, { "text": "from the same random distribution." }, { "text": "Intuitively, this means that source and target tasks are independent, which does not reflect real-world applications of few-shot learning where the former are often different draws (without replacement) from the same dataset." }, { "text": "Under this unrealistic assumption, the above-mentioned works obtained the bounds having the following form:" } ]
OV5v_wBMHk.bw4cqlpLh.02
Estimating ITE with observational data suffers from two primary issues: (1) missing counterfactuals, i . e ., we can only observe one factual outcome out of all potential outcomes; (2) treatment selection bias, i . e ., individuals have their preferences regarding treatment selection, making the population across different groups heterogeneous. To cope with missing counterfactuals, meta-learners (R et al., 2019) decompose the ITE estimation task into solvable subproblems. However, as shown in Section 2.1, the treatment selection bias makes it difficult to generalize the factual outcome estimators trained over the treated/untreated group to the entire population, and the ITE estimation is thus biased. Representation-based methods mitigate this selection bias by minimizing the distribution discrepancy between groups in the representation space. In particular, Uri et al.
Estimating ITE with observational data has two main challenges: (1) missing counterfactuals, i . e ., only one factual outcome out of all potential outcomes can be observed; (2) treatment selection bias, i . e ., individuals have their preferences for treatment selection, making units in different treatment groups heterogeneous. To handle missing counterfactuals, meta-learners (Künzel et al., 2019) decompose the ITE estimation task into solvable factual outcome estimation subproblems. However, the treatment selection bias makes it difficult to generalize the factual outcome estimators trained within respective treatment groups to the entire population; consequently, the derived ITE estimator is biased.
{ "annotation": [ "Unusable", "Rewriting_light" ], "instruction": "", "annotator": "annotator_07" }
null
OV5v_wBMHk
bw4cqlpLh
2
[ { "text": "Estimating ITE with observational data suffers from two primary issues: (1) missing counterfactuals, i ." }, { "text": "e ., we can only observe one factual outcome out of all potential outcomes; (2) treatment selection bias, i ." }, { "text": "e ., individuals have their preferences regarding treatment selection, making the population across different groups heterogeneous." }, { "text": "To cope with missing counterfactuals, meta-learners (R et al., 2019) decompose the ITE estimation task into solvable subproblems." }, { "text": "However, as shown in Section 2.1, the treatment selection bias makes it difficult to generalize the factual outcome estimators trained over the treated/untreated group to the entire population, and the ITE estimation is thus biased. Representation-based methods mitigate this selection bias by minimizing the distribution discrepancy between groups in the representation space. In particular, Uri et al." } ]
[ { "text": "Estimating ITE with observational data has two main challenges: (1) missing counterfactuals, i ." }, { "text": "e ., only one factual outcome out of all potential outcomes can be observed; (2) treatment selection bias, i ." }, { "text": "e ., individuals have their preferences for treatment selection, making units in different treatment groups heterogeneous." }, { "text": "To handle missing counterfactuals, meta-learners (Künzel et al., 2019) decompose the ITE estimation task into solvable factual outcome estimation subproblems." }, { "text": "However, the treatment selection bias makes it difficult to generalize the factual outcome estimators trained within respective treatment groups to the entire population; consequently, the derived ITE estimator is biased." } ]
aomiOZE_m2.rxb2TiQ6bq.07
We first give a brief view of the problem setting about deep CNN for image SR. We also observe that there exists heavy redundancy in the networks. To pursue more efficient image SR networks, we then propose structure-regularized pruning (SRP) method to compress them.
We first present an overview of the problem setting about deep CNN for image SR. It is also observed that excessive redundancy exists in the SR deep CNNs. Then we move on to proposing our structureregularized pruning (SRP) method attempting to achieve more efficient SR networks.
{ "annotation": [ "Rewriting_heavy" ], "instruction": "Can you paraphrase the last sentence?", "annotator": "annotator_09" }
{ "annotation": [ "Rewriting_heavy" ], "instruction": "Rewrite the last sentence preferring passive voice over active.", "annotator": "annotator_04" }
aomiOZE_m2
rxb2TiQ6bq
7
[ { "text": "We first give a brief view of the problem setting about deep CNN for image SR." }, { "text": "We also observe that there exists heavy redundancy in the networks." }, { "text": "To pursue more efficient image SR networks, we then propose structure-regularized pruning (SRP) method to compress them." } ]
[ { "text": "We first present an overview of the problem setting about deep CNN for image SR." }, { "text": "It is also observed that excessive redundancy exists in the SR deep CNNs." }, { "text": "Then we move on to proposing our structureregularized pruning (SRP) method attempting to achieve more efficient SR networks." } ]
nCTSF9BQJ.DGhBYSP_sR.02
Recently, deep learning has gained tremendous success in modeling proteins, making data-driven methods more appealing than ever (Rives et al., 2019; Jumper et al., 2021). Nevertheless, challenges exist for developing deep learning-based models to predict mutational effects on protein-protein binding. The major challenge is the scarcity of experimental data — only a few thousands of protein mutations annotated with the change in binding affinity are publicly available (Geng et al., 2019b). This hinders supervised learning as the insufficiency of training data tends to cause over-fitting. Another difficulty is the absence of the structure of mutated protein-protein complexes. Mutating amino acids on a protein complex leads to changes on sidechain conformations (rotamers) (Najmanovich et al., 2000; Gaudreault et al., 2012). They account for the change in binding free energy but we do not have the knowledge of how exactly the conformation changes upon mutation.
Recently, deep learning has shown significant promise in modeling proteins, making data-driven approaches more attractive than ever (Rives et al., 2019; Jumper et al., 2021). However, developing deep learning-based models to predict mutational effects on protein-protein binding is challenging due to the scarcity of experimental data. Only a few thousand protein mutations, annotated with changes in binding affinity, are publicly available (Geng et al., 2019b), making supervised learning challenging due to the potential for overfitting with insufficient training data. Another difficulty is the absence of the structure of mutated protein-protein complexes. Mutating amino acids on a protein complex leads to changes mainly in sidechain conformations (Najmanovich et al., 2000; Gaudreault et al., 2012), which contribute to the change in binding free energy. However, the exact conformational changes upon mutation are unknown.
{ "annotation": [ "Rewriting_medium" ], "instruction": "Rewrite the following paragraph using a more formal language.", "annotator": "annotator_01" }
{ "annotation": [ "Rewriting_medium" ], "instruction": "Rewrite this paragraph for better readability.", "annotator": "annotator_07" }
nCTSF9BQJ
DGhBYSP_sR
2
[ { "text": "Recently, deep learning has gained tremendous success in modeling proteins, making data-driven methods more appealing than ever (Rives et al., 2019; Jumper et al., 2021)." }, { "text": "Nevertheless, challenges exist for developing deep learning-based models to predict mutational effects on protein-protein binding." }, { "text": "The major challenge is the scarcity of experimental data — only a few thousands of protein mutations annotated with the change in binding affinity are publicly available (Geng et al., 2019b). This hinders supervised learning as the insufficiency of training data tends to cause over-fitting." }, { "text": "Another difficulty is the absence of the structure of mutated protein-protein complexes." }, { "text": "Mutating amino acids on a protein complex leads to changes on sidechain conformations (rotamers) (Najmanovich et al., 2000; Gaudreault et al., 2012)." }, { "text": "They account for the change in binding free energy but we do not have the knowledge of how exactly the conformation changes upon mutation." } ]
[ { "text": "Recently, deep learning has shown significant promise in modeling proteins, making data-driven approaches more attractive than ever (Rives et al., 2019; Jumper et al., 2021)." }, { "text": "However, developing deep learning-based models to predict mutational effects on protein-protein binding is challenging due to the scarcity of experimental data." }, { "text": "Only a few thousand protein mutations, annotated with changes in binding affinity, are publicly available (Geng et al., 2019b), making supervised learning challenging due to the potential for overfitting with insufficient training data." }, { "text": "Another difficulty is the absence of the structure of mutated protein-protein complexes." }, { "text": "Mutating amino acids on a protein complex leads to changes mainly in sidechain conformations (Najmanovich et al., 2000;" }, { "text": "Gaudreault et al., 2012), which contribute to the change in binding free energy. However, the exact conformational changes upon mutation are unknown." } ]
g5N2H6sr7.6J3ec8Dl3p.02
Kernel (MLG) (Kondor & Pan, 2016). In addition, we compare with four unsupervised graph-level representation learning methods: SUB2VEC (Adhikari et al., 2018), GRAPH2VEC (Narayanan et al., 2017), INFOGRAPH (Sun et al., 2020) and MVGRL (Hassani & Khasahmadi, 2020). We denote our framework using (1) GCN (Kipf & Welling, 2017) in the decoders as ALATION-GCN 1 , (2) inverse of GCN in Section 4.1 in the decoders as ALATION-INVERSE-GCN.
Kernel (MLG) (Kondor & Pan, 2016). In addition, we compare with four unsupervised graph-level representation learning methods: SUB2VEC (Adhikari et al., 2018), GRAPH2VEC (Narayanan et al., 2017), INFOGRAPH (Sun et al., 2020) and MVGRL (Hassani & Khasahmadi, 2020). We also include the results of recent supervised graph classification models: GCN (Kipf & Welling, 2017), GAT (Veliˇckovi´c et al., 2018), GIN (Xu et al., 2019b). We denote our framework using (1) GCN (Kipf & Welling, 2017) in the decoders as ALATION-GCN 2 , (2) inverse of GCN in Section 4.1 in the decoders as ALATION-INVERSE-GCN.
{ "annotation": [ "Content_addition" ], "instruction": "", "annotator": "annotator_07" }
null
g5N2H6sr7
6J3ec8Dl3p
2
[ { "text": "Kernel (MLG) (Kondor & Pan, 2016)." }, { "text": "In addition, we compare with four unsupervised graph-level representation learning methods: SUB2VEC (Adhikari et al., 2018), GRAPH2VEC" }, { "text": "(Narayanan et al., 2017), INFOGRAPH (Sun et al., 2020) and MVGRL (Hassani & Khasahmadi, 2020)." }, { "text": "" }, { "text": " We denote our framework using (1) GCN (Kipf & Welling, 2017) in the decoders as ALATION-GCN 1 , (2) inverse of GCN in Section 4.1 in the decoders as ALATION-INVERSE-GCN." } ]
[ { "text": "Kernel (MLG) (Kondor & Pan, 2016)." }, { "text": "In addition, we compare with four unsupervised graph-level representation learning methods: SUB2VEC (Adhikari et al., 2018), GRAPH2VEC" }, { "text": "(Narayanan et al., 2017), INFOGRAPH (Sun et al., 2020) and MVGRL (Hassani & Khasahmadi, 2020)." }, { "text": "We also include the results of recent supervised graph classification models: GCN (Kipf & Welling, 2017), GAT (Veliˇckovi´c et al., 2018)," }, { "text": "GIN (Xu et al., 2019b). We denote our framework using (1) GCN (Kipf & Welling, 2017) in the decoders as ALATION-GCN 2 , (2) inverse of GCN in Section 4.1 in the decoders as ALATION-INVERSE-GCN." } ]
hegI87bI5S.fL6Q48sfx8.11
We defined the notch position ( Position ) as the condition. Position = Inside indicated that the notch was placed between the start area and the target, and Position = Outside indicated that the notch was placed to the right of the target. When the angle of entry to a target adjacent to a top edge with respect to the target was based on they-axis, an equivalent effect was observed at the angles of entry that were lineally symmetric about the y-axis [3]. Therefore, the performance would be the same whether the target was to the left or right of the starting area. To avoid increasing the workload of the participant, we always placed the starting area to the left of the target.
We defined the notch position ( Position ) as the condition. Position = Inside indicates that the notch is placed between the start area and the target, and Position = Outside indicates that the notch is placed to the left of the target. An equivalent effect is observed at angles of entry that are lineally symmetric about the y-axis when the angle of entry the target adjacent to a top edge with respect to the target is based on the y-axis [3]. Therefore, the performance is the same whether the target is to the left or right of the starting area. We always place the starting area to the left of the target to avoid increasing the workload of the participant.
{ "annotation": [ "Rewriting_medium", "Content_substitution" ], "instruction": "", "annotator": "annotator_07" }
null
hegI87bI5S
fL6Q48sfx8
11
[ { "text": "We defined the notch position ( Position ) as the condition." }, { "text": "Position =" }, { "text": "Inside indicated that the notch was placed between the start area and the target, and Position = Outside indicated that the notch was placed to the right of the target." }, { "text": "When the angle of entry to a target adjacent to a top edge with respect to the target was based on they-axis, an equivalent effect was observed at the angles of entry that were lineally symmetric about the y-axis [3]." }, { "text": "Therefore, the performance would be the same whether the target was to the left or right of the starting area." }, { "text": "To avoid increasing the workload of the participant, we always placed the starting area to the left of the target." } ]
[ { "text": "We defined the notch position ( Position ) as the condition." }, { "text": "Position =" }, { "text": "Inside indicates that the notch is placed between the start area and the target, and Position = Outside indicates that the notch is placed to the left of the target." }, { "text": "An equivalent effect is observed at angles of entry that are lineally symmetric about the y-axis when the angle of entry the target adjacent to a top edge with respect to the target is based on the y-axis [3]." }, { "text": "Therefore, the performance is the same whether the target is to the left or right of the starting area." }, { "text": "We always place the starting area to the left of the target to avoid increasing the workload of the participant." } ]
aVemIPPM7t.-8hV3QV4L9.00
Experiments were conducted on a small number of n1-standard-96 Google Cloud Platform VM instances, with 48 CPU cores on an Intel Skylake processor and 360 GB of RAM. It takes less than a week of compute on a single n1-standard-96 instance to run all the experiments described in this paper.
Experiments were conducted on a workstation (Intel i9-7920X CPU with 64 GB of RAM), and a small number of r5.24xlarge AWS VM instances, with 48 CPU cores on an Intel Skylake processor and 768 GB of RAM. It takes less than a week of compute on a single r5.24xlarge instance to run all the experiments described in this paper.
{ "annotation": [ "Content_substitution" ], "instruction": "", "annotator": "annotator_07" }
null
aVemIPPM7t
-8hV3QV4L9
0
[ { "text": "Experiments were conducted on a small number of n1-standard-96 Google Cloud Platform VM instances, with 48 CPU cores on an Intel Skylake processor and 360 GB of RAM." }, { "text": "It takes less than a week of compute on a single n1-standard-96 instance to run all the experiments described in this paper." } ]
[ { "text": "Experiments were conducted on a workstation (Intel i9-7920X CPU with 64 GB of RAM), and a small number of r5.24xlarge AWS VM instances, with 48 CPU cores on an Intel Skylake processor and 768 GB of RAM." }, { "text": "It takes less than a week of compute on a single r5.24xlarge instance to run all the experiments described in this paper." } ]
SRquLaHRM4.vI2x5N-YHC.00
We solve this problem by introducing the optimal transport theory [51] and formulate the feature setsas a discrete probability distribution where each feature has an equal probability value. Furthermore,to reduce the computational cost and avoid the extra model parameters, we learn the prompts witha two-stage optimization strategy. At the first stage in the inner loop, we fix both visual and textfeatures and optimize the optimal transport problem by a fast Sinkhorn distances algorithm Then,in the outer loop, we fix all parameters of optimal transport and back-propagate the gradient to learnthe prompts with different characteristics. Compared with conventional distance (such as Euclideandistance of mean features), optimal transport can align different visual features for each local prompt,which is more robust to the visual misalignment and tolerates well feature shift [44]. It is because OTlearns an adaptive transport plan to align features, which achieves fine-grained matching across twomodalities. We conduct experiments on 11 datasets following the standard setting of CLIP [39] and CoOp [63] to evaluate our method. These experiments span the visual classification of generic objects,scenes, actions, fine-grained categories, and so on. The significant result improvement demonstratesthat PLOT can effectively learn representative and comprehensive prompts.
We solve this problem by introducing the optimal transport theory [50] and formulate the feature setsas a discrete probability distribution where each feature has an equal probability value. Furthermore,to reduce the computational cost and avoid the extra model parameters, we learn the prompts witha two-stage optimization strategy. At the first stage in the inner loop, we fix both visual and textfeatures and optimize the optimal transport problem by a fast Sinkhorn distances algorithm Then,in the outer loop, we fix all parameters of optimal transport and back-propagate the gradient to learnthe prompts with different characteristics. We conduct comprehensive experiments on 11 datasetsfollowing the standard setting of CLIP [39] and CoOp [62] to evaluate our method. These experimentsspan the visual classification on generic objects, scenes, actions, fine-grained categories and so on. The significant result improvement demonstrates that PLOT can effectively learn representative andcomprehensive prompts.
{ "annotation": [ "Content_deletion" ], "instruction": "Remove any unessential information in this paragraph.", "annotator": "annotator_03" }
{ "annotation": [ "Content_deletion", "Rewriting_light" ], "instruction": "Please exclude the content related to optimal transport.", "annotator": "annotator_09" }
SRquLaHRM4
vI2x5N-YHC
0
[ { "text": "We solve this problem by introducing the optimal transport theory [51] and formulate the feature setsas a discrete probability distribution where each feature has an equal probability value." }, { "text": "Furthermore,to reduce the computational cost and avoid the extra model parameters, we learn the prompts witha two-stage optimization strategy." }, { "text": "At the first stage in the inner loop, we fix both visual and textfeatures and optimize the optimal transport problem by a fast Sinkhorn distances algorithm" }, { "text": "Then,in the outer loop, we fix all parameters of optimal transport and back-propagate the gradient to learnthe prompts with different characteristics." }, { "text": "Compared with conventional distance (such as Euclideandistance of mean features), optimal transport can align different visual features for each local prompt,which is more robust to the visual misalignment and tolerates well feature shift [44]." }, { "text": "It is because OTlearns an adaptive transport plan to align features, which achieves fine-grained matching across twomodalities." }, { "text": "We conduct experiments on 11 datasets following the standard setting of CLIP [39] and CoOp [63] to evaluate our method." }, { "text": "These experiments span the visual classification of generic objects,scenes, actions, fine-grained categories, and so on." }, { "text": "The significant result improvement demonstratesthat PLOT can effectively learn representative and comprehensive prompts." } ]
[ { "text": "We solve this problem by introducing the optimal transport theory [50] and formulate the feature setsas a discrete probability distribution where each feature has an equal probability value." }, { "text": "Furthermore,to reduce the computational cost and avoid the extra model parameters, we learn the prompts witha two-stage optimization strategy." }, { "text": "At the first stage in the inner loop, we fix both visual and textfeatures and optimize the optimal transport problem by a fast Sinkhorn distances algorithm" }, { "text": "Then,in the outer loop, we fix all parameters of optimal transport and back-propagate the gradient to learnthe prompts with different characteristics." }, { "text": "" }, { "text": "" }, { "text": "We conduct comprehensive experiments on 11 datasetsfollowing the standard setting of CLIP [39] and CoOp [62] to evaluate our method." }, { "text": "These experimentsspan the visual classification on generic objects, scenes, actions, fine-grained categories and so on." }, { "text": "The significant result improvement demonstrates that PLOT can effectively learn representative andcomprehensive prompts." } ]
aomiOZE_m2.rxb2TiQ6bq.20
Model Size and Mult-Adds. Compared with recent works (e.g., MemNet, CARN, and IMDN), our SRPN-L has the least parameter number. We also provide operations number with Mult-Adds by setting the output size as 3 × 1280 × 720. Our SRPN-L operates less Mult-Adds than most compared methods. Those comparisons indicate that SRP reduces parameters and operations efficiently.
Model Size and Mult-Adds. Our SRPN-Lite has the fewest parameter number in comparison to recent efficient SR works such as MemNet, CARN, and IMDN. The comparison in terms of MultAdds (measured when the output size is set to 3 × 1,280 × 720) is also presented. As seen, our SRPNLite costs fewer Mult-Adds than most comparison methods. These results demonstrate the merits of SRP against other counterparts in striking a better network performance-complexity trade-off.
{ "annotation": [ "Rewriting_heavy" ], "instruction": "Give me a more formal version of this paragraph", "annotator": "annotator_01" }
{ "annotation": [ "Rewriting_medium" ], "instruction": "Rephrase the text and change SRPN-L to SRPN-Lite", "annotator": "annotator_06" }
aomiOZE_m2
rxb2TiQ6bq
20
[ { "text": "Model Size and Mult-Adds." }, { "text": "Compared with recent works (e.g., MemNet, CARN, and IMDN), our SRPN-L has the least parameter number." }, { "text": "We also provide operations number with Mult-Adds by setting the output size as 3 × 1280 × 720." }, { "text": "Our SRPN-L operates less Mult-Adds than most compared methods." }, { "text": "Those comparisons indicate that SRP reduces parameters and operations efficiently." } ]
[ { "text": "Model Size and Mult-Adds." }, { "text": "Our SRPN-Lite has the fewest parameter number in comparison to recent efficient SR works such as MemNet, CARN, and IMDN." }, { "text": "The comparison in terms of MultAdds (measured when the output size is set to 3 × 1,280 × 720) is also presented." }, { "text": "As seen, our SRPNLite costs fewer Mult-Adds than most comparison methods." }, { "text": "These results demonstrate the merits of SRP against other counterparts in striking a better network performance-complexity trade-off." } ]
MnewiFDvHZ.iAYttXl-uH.00
• Fixed constraints g t p x q “ g p x q , @ t, where the constraint functions are the same across the timebut they are not necessary to be known when making decision at round t . Note the setting ofknown and fixed constraints in [14, 17, 29, 33] is a special case of ours.• Adversarial constraints g t p x q , where the constraint function g t p x q is unknown when makingdecision at round t and can be arbitrarily and adversarially chosen, as in [24, 20, 30].
• Fixed constraints g t p x q “ g p x q , @ t, where the constraint function is known (fixed) when makingdecision at round t as in [15, 12, 30, 26]. • Adversarial constraints g t p x q , where the constraint function g t p x q is unknown when making decision at round t and can be arbitrarily and adversarially chosen, as in [22, 18, 27].
{ "annotation": [ "Concision" ], "instruction": "Make paragraph more concise", "annotator": "annotator_06" }
{ "annotation": [ "Concision" ], "instruction": "Make this paragraph shorter.", "annotator": "annotator_07" }
MnewiFDvHZ
iAYttXl-uH
0
[ { "text": "• Fixed constraints g t p x q “ g p x q , @ t, where the constraint functions are the same across the timebut they are not necessary to be known when making decision at round t . Note the setting ofknown and fixed constraints in [14, 17, 29, 33] is a special case of ours.• Adversarial constraints g t p x q , where the constraint function g t p x q is unknown when makingdecision at round t and can be arbitrarily and adversarially chosen, as in [24, 20, 30]." } ]
[ { "text": "• Fixed constraints g t p x q “ g p x q , @ t, where the constraint function is known (fixed) when makingdecision at round t as in [15, 12, 30, 26]. • Adversarial constraints g t p x q , where the constraint function g t p x q is unknown when making decision at round t and can be arbitrarily and adversarially chosen, as in [22, 18, 27]." } ]
3686sm4Cs.AJMXMDLVn.01
Results. Table 1(a) compares our SuperWeight Ensembles (SWE-HO) on the CIFAR-100 CIFAR100-C, and CIFAR-10 with prior work in efficient ensembling. Note that all the methods boost performance over a single model without requiring additional model parameters. However, our SuperWeight Ensembles outperforms all other methods on CIFAR-100 when using 36.5M parameters. shows that our approach outperforms or is on par with prior work in efficient ensembling. b) increases the number of parameters (without changing the architecture) using our approach compared to Deep Ensembles. See Section 4 for discussion
Results. Table 1(a) compares our SuperWeight Ensembles (SWE-HO) on the CIFAR-100 CIFAR100-C, and CIFAR-10 with prior work in efficient ensembling. Note that all the methods boost performance over a single model without requiring additional model parameters. However, our SuperWeight Ensembles outperforms all other methods on CIFAR-100 when using 36.5M parameters. Unlike methods like BatchEnsemble (BE) (Wen et al., 2020) and MIMO (Havasi et al., 2021), which shows that our approach outperforms or is on par with prior work in efficient ensembling. (b) increases the number of parameters (without changing the architecture) using our approach compared to Deep Ensembles. See Section 4 for discussion
{ "annotation": [ "Unusable" ], "instruction": "", "annotator": "annotator_02" }
{ "annotation": [ "Unusable" ], "instruction": "", "annotator": "annotator_07" }
3686sm4Cs
AJMXMDLVn
1
[ { "text": "Results." }, { "text": "Table 1(a) compares our SuperWeight Ensembles (SWE-HO) on the CIFAR-100 CIFAR100-C, and CIFAR-10 with prior work in efficient ensembling." }, { "text": "Note that all the methods boost performance over a single model without requiring additional model parameters." }, { "text": "However, our SuperWeight Ensembles outperforms all other methods on CIFAR-100 when using 36.5M parameters." }, { "text": " shows that our approach outperforms or is on par with prior work in efficient ensembling." }, { "text": "b) increases the number of parameters (without changing the architecture) using our approach compared to Deep Ensembles." }, { "text": "See Section 4 for discussion" } ]
[ { "text": "Results." }, { "text": "Table 1(a) compares our SuperWeight Ensembles (SWE-HO) on the CIFAR-100 CIFAR100-C, and CIFAR-10 with prior work in efficient ensembling." }, { "text": "Note that all the methods boost performance over a single model without requiring additional model parameters." }, { "text": "However, our SuperWeight Ensembles outperforms all other methods on CIFAR-100 when using 36.5M parameters." }, { "text": "Unlike methods like BatchEnsemble (BE) (Wen et al., 2020) and MIMO (Havasi et al., 2021), which shows that our approach outperforms or is on par with prior work in efficient ensembling." }, { "text": "(b) increases the number of parameters (without changing the architecture) using our approach compared to Deep Ensembles." }, { "text": "See Section 4 for discussion" } ]
OV5v_wBMHk.bw4cqlpLh.08
However, as neural network estimators mainly update parameters with stochastic gradient methods, only a subset of the representation’s distribution is accessible within each iteration. As such, a shortcut (Liuyi et al., 2018) is to calculate the group discrepancy at a stochastic mini-batch level:
However, since prevalent neural estimators mainly update parameters with stochastic gradient methods, only a fraction of the units is accessible within each iteration. A shortcut in this context is to calculate the group discrepancy at a stochastic mini-batch level:
{ "annotation": [ "Rewriting_light" ], "instruction": "check the wordings but keep the original content as much as possible", "annotator": "annotator_05" }
{ "annotation": [ "Rewriting_light" ], "instruction": "Improve the language to make it more formal.", "annotator": "annotator_07" }
OV5v_wBMHk
bw4cqlpLh
8
[ { "text": "However, as neural network estimators mainly update parameters with stochastic gradient methods, only a subset of the representation’s distribution is accessible within each iteration." }, { "text": "As such, a shortcut (Liuyi et al., 2018) is to calculate the group discrepancy at a stochastic mini-batch level:" } ]
[ { "text": "However, since prevalent neural estimators mainly update parameters with stochastic gradient methods, only a fraction of the units is accessible within each iteration." }, { "text": "A shortcut in this context is to calculate the group discrepancy at a stochastic mini-batch level:" } ]
5Eyr2crzI.s502diDSt.00
We also display the trade-off between inference speed and coverage from hierarchical refinement in Fig. From 16 upsampled points at the last iteration and lower, coverage performance starts to diminish while little speed gains are made. We still kept a relatively high N=64 in our model as we wanted to insure a wide coverage, and the time loss between 41 ms and 46 ms remains acceptable.
We also display the trade-off between inference speed and coverage from hierarchical refinement in Fig. 7, evaluated on the Interpret multi-agent dataset with marginal MissRate 6 . The curve is obtained setting the number N of upsampled points at the last refinement iteration from 2 to 128. From N = 16 and lower, coverage performance starts to diminish while little speed gains are made. We still kept a relatively high N=64 in our model as we wanted to insure a wide coverage, and the time loss between 41 ms and 46 ms remains acceptable.
{ "annotation": [ "Development" ], "instruction": "", "annotator": "annotator_07" }
null
5Eyr2crzI
s502diDSt
0
[ { "text": "We also display the trade-off between inference speed and coverage from hierarchical refinement in " }, { "text": "Fig." }, { "text": "From 16 upsampled points at the last iteration and lower, coverage performance starts to diminish while little speed gains are made." }, { "text": "We still kept a relatively high N=64 in our model as we wanted to insure a wide coverage, and the time loss between 41 ms and 46 ms remains acceptable." } ]
[ { "text": "We also display the trade-off between inference speed and coverage from hierarchical refinement in Fig." }, { "text": "7, evaluated on the Interpret multi-agent dataset with marginal MissRate 6 . The curve is obtained setting the number N of upsampled points at the last refinement iteration from 2 to 128." }, { "text": "From N = 16 and lower, coverage performance starts to diminish while little speed gains are made." }, { "text": "We still kept a relatively high N=64 in our model as we wanted to insure a wide coverage, and the time loss between 41 ms and 46 ms remains acceptable." } ]
atxti8SVk.3K9AmPwALM.15
Pascal: Image tag annotations. On Pascal VOC dataset, our method outperforms others by a large margin. Table 2 shows that, without additional saliency labels, our method still achieves SOTA. Compared to (Chang et al., 2020), we improves mIoU bya sizable 4 . 5% .
Pascal: Image tag annotations. Table 2 shows that, without using additional saliency labels, our method outperforms existing methods with saliency by 4 . 4% , and those without saliency by 5 .
{ "annotation": [ "Content_deletion", "Content_substitution" ], "instruction": "", "annotator": "annotator_02" }
{ "annotation": [ "Content_substitution" ], "instruction": "", "annotator": "annotator_07" }
atxti8SVk
3K9AmPwALM
15
[ { "text": "Pascal: Image tag annotations." }, { "text": "On Pascal VOC dataset, our method outperforms others by a large margin." }, { "text": "Table 2 shows that, without additional saliency labels, our method still achieves SOTA. Compared to (Chang et al., 2020), we improves mIoU bya sizable 4 . 5% ." } ]
[ { "text": "Pascal: Image tag annotations." }, { "text": "" }, { "text": "Table 2 shows that, without using additional saliency labels, our method outperforms existing methods with saliency by 4 . 4% , and those without saliency by 5 ." } ]
OzYyHKPyj7.O9Mk1uqXra.01
The stack of Joulin & Mikolov (2015) simulatespartial pushes and pops by making each stack element a convex combination, or “superposition,” of the elements immediately above and below it (resulting from pushing and popping, respectively). In this model, stack elements are again vectors, and 𝑎 𝑡 = ( a 𝑡 , v 𝑡 ) , where the vector a 𝑡 is a probability distribution over three stack operations: push a new vector, no-op, and pop the top vector; v 𝑡 is the vector to be pushed. The vector v 𝑡 can be learned or can be set to h 𝑡 (Yogatama et al., 2018). The stack reading is the top cell of the stack. This model has quadratic time and space complexity with respect to input length. We refer the reader to Appendix A.2 for full details.
The stack of Joulin & Mikolov (2015) simulates a combination of partial stack actions by computing three new, separate stacks: one with all cells shifted down (push), kept the same (no-op), and shifted up (pop). The new stack is then an element-wise interpolation (“superposition”) of these three stacks. In this model, stack elements are again vectors, and 𝑎 𝑡 = ( a 𝑡 , v 𝑡 ) , where the vector a 𝑡 is a probability distribution over three stack operations: push a new vector, no-op, and pop the top vector; v 𝑡 is the vector to be pushed. The vector v 𝑡 can be learned or can be set to h 𝑡 (Yogatama et al., 2018). The stack reading is the top cell of the stack. This model has quadratic time and space complexity with respect to input length. We refer the reader to Appendix A.2 for full details.
{ "annotation": [ "Content_addition" ], "instruction": "", "annotator": "annotator_10" }
{ "annotation": [ "Development" ], "instruction": "", "annotator": "annotator_03" }
OzYyHKPyj7
O9Mk1uqXra
1
[ { "text": "The stack of Joulin & Mikolov (2015) simulatespartial pushes and pops by making each stack element a convex combination, or “superposition,” of the elements immediately above and below it (resulting from pushing and popping, respectively)." }, { "text": "In this model, stack elements are again vectors, and 𝑎 𝑡 = ( a 𝑡 , v 𝑡 ) , where the vector a 𝑡 is a probability distribution over three stack operations: push a new vector, no-op, and pop the top vector; v 𝑡 is the vector to be pushed." }, { "text": "The vector v 𝑡 can be learned or can be set to h 𝑡 (Yogatama et al., 2018)." }, { "text": "The stack reading is the top cell of the stack." }, { "text": "This model has quadratic time and space complexity with respect to input length." }, { "text": "We refer the reader to Appendix A.2 for full details." } ]
[ { "text": "The stack of Joulin & Mikolov (2015) simulates a combination of partial stack actions by computing three new, separate stacks: one with all cells shifted down (push), kept the same (no-op), and shifted up (pop). The new stack is then an element-wise interpolation (“superposition”) of these three stacks." }, { "text": "In this model, stack elements are again vectors, and 𝑎 𝑡 = ( a 𝑡 , v 𝑡 ) , where the vector a 𝑡 is a probability distribution over three stack operations: push a new vector, no-op, and pop the top vector; v 𝑡 is the vector to be pushed." }, { "text": "The vector v 𝑡 can be learned or can be set to h 𝑡 (Yogatama et al., 2018)." }, { "text": "The stack reading is the top cell of the stack." }, { "text": "This model has quadratic time and space complexity with respect to input length." }, { "text": "We refer the reader to Appendix A.2 for full details." } ]
BkwlK_dPB.SJfZLu8oB.00
It is worth noting that while the sample complexity is bounded, the above result implies that its complexity varies according to problem-specific properties, which are encapsulated in the value of ˆ a and ˆ b . Intuitively, ˆ a depends on the scale of the problem such asvolume of the goal set |F RLgoal | and how complex and long the solution needs to be. ˆ b depends on the probability of sampling states that will expand the solution in the right direction. Therefore, ˆ b is a function of the dimensionality of S and the visibility of F , i.e. how constrained the tree expansion is. We refer the reader to Appendix S for more details on how the tail bound in Theorem 1 is derived.
It is worth noting that while the sample complexity is bounded, the above result implies that its complexity varies according to problem-specific properties, which are encapsulated in the value of ˆ a and ˆ b . Intuitively, ˆ a depends on the scale of the problem. It grows as |F RLgoal | becomes smaller or as the length of the solution trajectory becomes longer. ˆ b depends on the probability of sampling states that will expand the tree in the right direction. It therefore shrinks as the dimensionality of S increases. We refer the reader to Appendix S2 for more details on the meaning of ˆ a, ˆ b and the derivation of the tail bound in Theorem 1.
{ "annotation": [ "Development", "Rewriting_medium" ], "instruction": "", "annotator": "annotator_04" }
{ "annotation": [ "Rewriting_medium" ], "instruction": "Rephrase the text to make it more direct and readable when necessary.", "annotator": "annotator_07" }
BkwlK_dPB
SJfZLu8oB
0
[ { "text": "It is worth noting that while the sample complexity is bounded, the above result implies that its complexity varies according to problem-specific properties, which are encapsulated in the value of ˆ a and ˆ b ." }, { "text": "Intuitively, ˆ a depends on the scale of the problem such asvolume of the goal set |F RLgoal | and how complex and long the solution needs to be." }, { "text": "ˆ b depends on the probability of sampling states that will expand the solution in the right direction." }, { "text": "Therefore, ˆ b is a function of the dimensionality of S and the visibility of F , i.e. how constrained the tree expansion is." }, { "text": "We refer the reader to Appendix S for more details on how the tail bound in Theorem 1 is derived." } ]
[ { "text": "It is worth noting that while the sample complexity is bounded, the above result implies that its complexity varies according to problem-specific properties, which are encapsulated in the value of ˆ a and ˆ b ." }, { "text": "Intuitively, ˆ a depends on the scale of the problem. It grows as |F RLgoal | becomes smaller or as the length of the solution trajectory becomes longer." }, { "text": "ˆ b depends on the probability of sampling states that will expand the tree in the right direction." }, { "text": "It therefore shrinks as the dimensionality of S increases." }, { "text": "We refer the reader to Appendix S2 for more details on the meaning of ˆ a, ˆ b and the derivation of the tail bound in Theorem 1." } ]
URRc6L6nmE.yUoqIf6zGY.00
A less conservative approach consists of using the off-line data to train multiple neural networks, each for a certain region of the state space. Finally, the discontinuities of (4), (12) might be problematic and create chattering when implemented in real actuators. A continuous approximation that has shown to yield satisfying performance is the boundary-layer technique (Slotine et al.
A less conservative and more robust approach consists of using the off-line data to train multiple neural networks, each for a certain region of the state space; such an approach constitutes part of our future work. Finally, the discontinuities of (4), (12) might be problematic and create chattering when implemented in real actuators. A continuous approximation that has shown to yield satisfying performance is the boundary-layer technique (Slotine et al.
{ "annotation": [ "Development" ], "instruction": "", "annotator": "annotator_09" }
{ "annotation": [ "Development" ], "instruction": "", "annotator": "annotator_07" }
URRc6L6nmE
yUoqIf6zGY
0
[ { "text": "A less conservative approach consists of using the off-line data to train multiple neural networks, each for a certain region of the state space." }, { "text": "Finally, the discontinuities of (4), (12) might be problematic and create chattering when implemented in real actuators." }, { "text": "A continuous approximation that has shown to yield satisfying performance is the boundary-layer technique (Slotine et al." } ]
[ { "text": "A less conservative and more robust approach consists of using the off-line data to train multiple neural networks, each for a certain region of the state space; such an approach constitutes part of our future work." }, { "text": "Finally, the discontinuities of (4), (12) might be problematic and create chattering when implemented in real actuators." }, { "text": "A continuous approximation that has shown to yield satisfying performance is the boundary-layer technique (Slotine et al." } ]
kAwMEYEIN.RlDWAM6qF.00
HJB equation is stable only if p is sufficiently large. Such a theoretical finding reveals that the widelyused L 2 loss is not suitable for training PINN on high-dimensional HJB equations, while L ∞ loss is abetter choice. The theory also inspires us to develop a novel PINN training algorithm to minimize the L ∞ loss for HJB equations in a similar spirit to adversarial training. We believe this work provides important insights into the loss design in Physics-Informed deep learning.
HJB equation is stable only if p is sufficiently large. Such a theoretical finding reveals that the widelyused L 2 loss is not suitable for training PINN on high-dimensional HJB equations, while L ∞ loss isa better choice. The theory also inspires us to develop a novel PINN training algorithm to minimize the L ∞ loss for HJB equations in a similar spirit to adversarial training. One limitation of this workis that we only work on the HJB Equation. Theoretical investigation of other important equations canbe an exciting direction for future works. We believe this work provides important insights into the loss design in Physics-Informed deep learning.
{ "annotation": [ "Content_addition" ], "instruction": "", "annotator": "annotator_02" }
{ "annotation": [ "Content_addition" ], "instruction": "", "annotator": "annotator_07" }
kAwMEYEIN
RlDWAM6qF
0
[ { "text": "HJB equation is stable only if p is sufficiently large." }, { "text": "Such a theoretical finding reveals that the widelyused L 2 loss is not suitable for training PINN on high-dimensional HJB equations, while L ∞ loss is abetter choice." }, { "text": "The theory also inspires us to develop a novel PINN training algorithm to minimize the L ∞ loss for HJB equations in a similar spirit to adversarial training." }, { "text": "" }, { "text": "" }, { "text": "We believe this work provides important insights into the loss design in Physics-Informed deep learning." } ]
[ { "text": "HJB equation is stable only if p is sufficiently large." }, { "text": "Such a theoretical finding reveals that the widelyused L 2 loss is not suitable for training PINN on high-dimensional HJB equations, while L ∞ loss isa better choice." }, { "text": "The theory also inspires us to develop a novel PINN training algorithm to minimize the L ∞ loss for HJB equations in a similar spirit to adversarial training." }, { "text": "One limitation of this workis that we only work on the HJB Equation." }, { "text": "Theoretical investigation of other important equations canbe an exciting direction for future works." }, { "text": "We believe this work provides important insights into the loss design in Physics-Informed deep learning." } ]
YCmehaMzt.kHwUIOFr_.00
In addition, we combine EM and our proposed OPS together to craft a kind of composed unlearnable examples. Since OPS only modified a single pixel, after being applied to EM perturbed images, the imperceptibility can still be guaranteed. We evaluate the effectiveness of this composing method under different training strategies and find that it can always keep effective. Even if we use adversarial training and strong data augmentation like RandAugment, it is still able to degrade test accuracy to a relatively low level. Based on this property, we introduce CIFAR-10-S, where all the images are perturbed by the EM-OPS-composed noises. It can serve as a new benchmark to evaluate the abilities to learm critical information under the disturbance of composed non-semantic representations.
Naturally, for the purpose of complementing each other, we can combine EM and our proposed OPS together to craft a kind of ensemble shortcut. Since OPS only modified a single pixel, after being applied to EM perturbed images, the imperceptibility can still be guaranteed. We evaluate the effectiveness of this ensemble method under different training strategies and find that it can always keep effective. Even if we use adversarial training and strong data augmentation like RandAugment, it is still able to degrade test accuracy to a relatively low level. Based on this property, we introduce CIFAR-10-S, where all the images are perturbed by the EM-OPS-composed noises. It can serve as a new benchmark to evaluate the ability to learn critical information under the disturbance of composed non-semantic representations.
{ "annotation": [ "Rewriting_medium" ], "instruction": "Change the idea of \"composition\" to \"ensemble\" if this paragraph. Fix any spelling mistake.", "annotator": "annotator_03" }
{ "annotation": [ "Rewriting_medium" ], "instruction": "Rewrite the first sentence. Improve English in this paragraph.", "annotator": "annotator_07" }
YCmehaMzt
kHwUIOFr_
0
[ { "text": "In addition, we combine EM and our proposed OPS together to craft a kind of composed unlearnable examples." }, { "text": "Since OPS only modified a single pixel, after being applied to EM perturbed images, the imperceptibility can still be guaranteed." }, { "text": "We evaluate the effectiveness of this composing method under different training strategies and find that it can always keep effective." }, { "text": "Even if we use adversarial training and strong data augmentation like RandAugment, it is still able to degrade test accuracy to a relatively low level." }, { "text": "Based on this property, we introduce CIFAR-10-S, where all the images are perturbed by the EM-OPS-composed noises." }, { "text": "It can serve as a new benchmark to evaluate the abilities to learm critical information under the disturbance of composed non-semantic representations." } ]
[ { "text": "Naturally, for the purpose of complementing each other, we can combine EM and our proposed OPS together to craft a kind of ensemble shortcut." }, { "text": "Since OPS only modified a single pixel, after being applied to EM perturbed images, the imperceptibility can still be guaranteed." }, { "text": "We evaluate the effectiveness of this ensemble method under different training strategies and find that it can always keep effective." }, { "text": "Even if we use adversarial training and strong data augmentation like RandAugment, it is still able to degrade test accuracy to a relatively low level." }, { "text": "Based on this property, we introduce CIFAR-10-S, where all the images are perturbed by the EM-OPS-composed noises." }, { "text": "It can serve as a new benchmark to evaluate the ability to learn critical information under the disturbance of composed non-semantic representations." } ]
NcdK3bdqnA.kF_TmXY8G0.00
The results in Table 6 demonstrate that adopting image-specific linear projections outperforms directly sharing the contextual projections. The two types of image-specific linear projections do not lead to substantial performance differences. Thus, we take the strategy of only adding additional linear bias for augmented images and reuse contextual linear weights in generating visual attention keys and values for implementation convenience and parameter efficiency.
The results in Table 6 demonstrate that adopting image-specific projection bias outperforms directly sharing the contextual projection bias. Introducing additional image-specific linear projection weights does not lead to further performance increase. Thus, we take the strategy of only adding additional linear bias for augmented images and reuse contextual linear weights in generating visual attention keys and values for implementation convenience and parameter efficiency.
{ "annotation": [ "Rewriting_medium", "Content_substitution" ], "instruction": "", "annotator": "annotator_07" }
null
NcdK3bdqnA
kF_TmXY8G0
0
[ { "text": "The results in Table 6 demonstrate that adopting image-specific linear projections outperforms directly sharing the contextual projections." }, { "text": "The two types of image-specific linear projections do not lead to substantial performance differences." }, { "text": "Thus, we take the strategy of only adding additional linear bias for augmented images and reuse contextual linear weights in generating visual attention keys and values for implementation convenience and parameter efficiency." } ]
[ { "text": "The results in Table 6 demonstrate that adopting image-specific projection bias outperforms directly sharing the contextual projection bias." }, { "text": "Introducing additional image-specific linear projection weights does not lead to further performance increase." }, { "text": "Thus, we take the strategy of only adding additional linear bias for augmented images and reuse contextual linear weights in generating visual attention keys and values for implementation convenience and parameter efficiency." } ]
mS4xvgSiEH.i-a3xp3usm.00
The central novelty of our work is in realizing that by learning a discrete representation, we can perform structured search on two levels. To ensure that the discrete latent space is necessary, we introduce two ablative baselines, which replace the VQ-VAE with either a generic autoencoder or a VAE.
The central novelty of our work is in realizing that by learning a discrete representation, we can perform structured search on two levels. We introduce two ablative baselines, which replace the VQ-VAE with either a generic autoencoder or a VAE.
{ "annotation": [ "Concision" ], "instruction": "Make this paragraph more concise.", "annotator": "annotator_02" }
{ "annotation": [ "Concision" ], "instruction": "Make this paragraph shorter.", "annotator": "annotator_07" }
mS4xvgSiEH
i-a3xp3usm
0
[ { "text": "The central novelty of our work is in realizing that by learning a discrete representation, we can perform structured search on two levels." }, { "text": "To ensure that the discrete latent space is necessary, we introduce two ablative baselines, which replace the VQ-VAE with either a generic autoencoder or a VAE." } ]
[ { "text": "The central novelty of our work is in realizing that by learning a discrete representation, we can perform structured search on two levels." }, { "text": "We introduce two ablative baselines, which replace the VQ-VAE with either a generic autoencoder or a VAE." } ]
g5N2H6sr7.6J3ec8Dl3p.04
Running time We observe that our model runs significantly faster than INFOGRAPH and MVGRL, i.e., it is 10 times faster than INFOGRAPH and 15 times faster than MVGRL on PROTEINS. This is because our model neglects the tedious process of negative sampling used in both INFOGRAPH and MVGRL.
Running time We observe that our model runs significantly faster than INFOGRAPH and MVGRL. Our model takes 10s to train one epoch of PORTEINS on Tesla P40 24G, while INFOGRAPH needs 127s and MVGRL needs 193s. This is because our model neglects the tedious process of negative sampling used in both INFOGRAPH and MVGRL.
{ "annotation": [ "Rewriting_medium", "Development" ], "instruction": "", "annotator": "annotator_07" }
null
g5N2H6sr7
6J3ec8Dl3p
4
[ { "text": "Running time We observe that our model runs significantly faster than INFOGRAPH and MVGRL, i.e., it is 10 times faster than INFOGRAPH and 15 times faster than MVGRL on PROTEINS." }, { "text": "This is because our model neglects the tedious process of negative sampling used in both INFOGRAPH and MVGRL." } ]
[ { "text": "Running time We observe that our model runs significantly faster than INFOGRAPH and MVGRL. Our model takes 10s to train one epoch of PORTEINS on Tesla P40 24G, while INFOGRAPH needs 127s and MVGRL needs 193s." }, { "text": "This is because our model neglects the tedious process of negative sampling used in both INFOGRAPH and MVGRL." } ]
aomiOZE_m2.rxb2TiQ6bq.06
Neural Network Pruning. Pruning aims to remove parameters in a neural network without compromising its performance seriously (Reed, 1993; Sze et al., 2017). It mainly falls into two groups: filter pruning (a.k.a. structured pruning) 1 and weight-element pruning (a.k.a. unstructured pruning). The former aims to remove weights by filters (i.e., 4-d tensors), while the latter removes weights by single elements (i.e., a scalar). Structured pruning results in regular sparsity after pruning. It does not demand any special hardware features to achieve considerable practical acceleration. In contrast, unstructured pruning leads to irregular sparsity. Leveraging the irregular sparsity for acceleration typically demands special software libraries, while past works have shown the practical speedup is very limited (Wen et al., 2016), unless using customized hardware platform (Han et al., 2016a). In this paper, we focus on filter pruning for easy acceleration. Most efforts in pruning (mainly in classification task) have been spent on finding a better pruning criterion to select unimportant weights (Reed, 1993; Sze et al., 2017). Magnitude-based (Han et al., 2015; 2016b; Li et al., 2017) is the most prevailing criterion, which we will also employ to develop our method in this paper.As far as we know, no work before has managed to apply filter pruning to compressing image SR networks. This paper is meant to fill the blank.
Neural Network Pruning. Network pruning aims to eliminate redundant parameters in a neural network without compromising its performance seriously (Reed, 1993; Sze et al., 2017). The methodology of pruning mainly falls into two groups: filter pruning (or more generally known as structured pruning) * and weight-element pruning (also referred to as unstructured pruning). The former aims to remove weights by filters (i.e., 4-d tensors), while the latter removes weights by single elements (i.e., scalars). Structured pruning results in regular sparsity after pruning. It does not demand any special hardware features to achieve considerable practical acceleration. In contrast, unstructured pruning leads to irregular sparsity. Leveraging the irregular sparsity for acceleration typically demands special software supports, while past works have shown the practical speedup is very limited (Wen et al., 2016), unless using customized hardware platforms (Han et al., 2016a). In this paper, we tackle filter pruning instead of weight-element pruning for effortless acceleration. The major efforts in pruning (mainly in image classification) have been focusing on proposing a more sound pruning criterion to select unimportant weights (Reed, 1993; Sze et al., 2017). Criteria based on weight magnitude (Han et al., 2015; 2016b; Li et al., 2017) are the most prevailing ones, which we will also employ to develop our method in this paper.
{ "annotation": [ "Development", "Concision" ], "instruction": "", "annotator": "annotator_09" }
{ "annotation": [ "Concision", "Rewriting_medium" ], "instruction": "Rewrite the last sentence to make it more concise by removing shortcomings of other work.", "annotator": "annotator_04" }
aomiOZE_m2
rxb2TiQ6bq
6
[ { "text": "Neural Network Pruning." }, { "text": "Pruning aims to remove parameters in a neural network without compromising its performance seriously (Reed, 1993; Sze et al., 2017)." }, { "text": "It mainly falls into two groups: filter pruning (a.k.a. structured pruning) 1 and weight-element pruning (a.k.a. unstructured pruning)." }, { "text": "The former aims to remove weights by filters (i.e., 4-d tensors), while the latter removes weights by single elements (i.e., a scalar)." }, { "text": "Structured pruning results in regular sparsity after pruning." }, { "text": "It does not demand any special hardware features to achieve considerable practical acceleration." }, { "text": "In contrast, unstructured pruning leads to irregular sparsity." }, { "text": "Leveraging the irregular sparsity for acceleration typically demands special software libraries, while past works have shown the practical speedup is very limited (Wen et al., 2016), unless using customized hardware platform (Han et al., 2016a)." }, { "text": "In this paper, we focus on filter pruning for easy acceleration." }, { "text": "Most efforts in pruning (mainly in classification task) have been spent on finding a better pruning criterion to select unimportant weights (Reed, 1993; Sze et al., 2017)." }, { "text": "Magnitude-based (Han et al., 2015; 2016b; Li et al., 2017) is the most prevailing criterion, which we will also employ to develop our method in this paper.As far as we know, no work before has managed to apply filter pruning to compressing image SR networks. This paper is meant to fill the blank." } ]
[ { "text": "Neural Network Pruning." }, { "text": "Network pruning aims to eliminate redundant parameters in a neural network without compromising its performance seriously (Reed, 1993; Sze et al., 2017)." }, { "text": "The methodology of pruning mainly falls into two groups: filter pruning (or more generally known as structured pruning) * and weight-element pruning (also referred to as unstructured pruning)." }, { "text": "The former aims to remove weights by filters (i.e., 4-d tensors), while the latter removes weights by single elements (i.e., scalars)." }, { "text": "Structured pruning results in regular sparsity after pruning." }, { "text": "It does not demand any special hardware features to achieve considerable practical acceleration." }, { "text": "In contrast, unstructured pruning leads to irregular sparsity." }, { "text": "Leveraging the irregular sparsity for acceleration typically demands special software supports, while past works have shown the practical speedup is very limited (Wen et al., 2016), unless using customized hardware platforms (Han et al., 2016a)." }, { "text": "In this paper, we tackle filter pruning instead of weight-element pruning for effortless acceleration." }, { "text": "The major efforts in pruning (mainly in image classification) have been focusing on proposing a more sound pruning criterion to select unimportant weights (Reed, 1993; Sze et al., 2017)." }, { "text": "Criteria based on weight magnitude (Han et al., 2015; 2016b; Li et al., 2017) are the most prevailing ones, which we will also employ to develop our method in this paper." } ]
7_CwM-IzWd.zcm6f5HDI.21
Improved generalization performance We compare the generalization ability of the three algorithms (guided, random and vanilla). For each algorithm, we train three repetitions of each model using the same learning rate: 0.01, 0.1 and 0.01 for Colored-and-gray-MNIST, ModelNet40 and
Improved generalization performance We compare the generalization ability of multi-modal DNNs trained by the three algorithms (guided, random and vanilla) and the RUBi learning strategy (Cadene et al., 2019). For each algorithm, we train each model three times with the same learning rate. We use 0.01, 0.1 and 0.01 as learning rate for Colored-and-gray-MNIST, ModelNet40 and
{ "annotation": [ "Development", "Rewriting_medium" ], "instruction": "", "annotator": "annotator_06" }
{ "annotation": [ "Development", "Rewriting_medium" ], "instruction": "", "annotator": "annotator_08" }
7_CwM-IzWd
zcm6f5HDI
21
[ { "text": "Improved generalization performance We compare the generalization ability of the three algorithms (guided, random and vanilla)." }, { "text": "For each algorithm, we train three repetitions of each model using the same learning rate: 0.01, 0.1 and 0.01 for Colored-and-gray-MNIST, ModelNet40 and" } ]
[ { "text": "Improved generalization performance We compare the generalization ability of multi-modal DNNs trained by the three algorithms (guided, random and vanilla) and the RUBi learning strategy (Cadene et al., 2019)." }, { "text": "For each algorithm, we train each model three times with the same learning rate. We use 0.01, 0.1 and 0.01 as learning rate for Colored-and-gray-MNIST, ModelNet40 and" } ]
sIqSoZ9KiO.KLlOZMoJ9G.01
To study its performance impact in a more constrained setting, SDN was paired with a VAE architecturally much simpler than IAF-VAE. Apart from the implementation simplicity and shorter training time, a non-hierarchical VAE is more suitable for representation learning – there is a single stochastic vector and not a hierarchy of feature maps, which enables better control of the latent space. In particular, the gains in performance when using SDN were evaluated with respect to: (a) evidence lower bound (ELBO), as a proxy to measure how well an image distribution is approximated; (b) disentanglement of latent codes based on the corresponding metrics, to examine the effects of SDN decoder to the quality of learned latent representations.
To study its performance impact in a more constrained setting, SDN was paired with a VAE architecturally much simpler than IAF-VAE. Apart from the implementation simplicity and shorter training time, non-hierarchical VAE is more suitable for disentangled representation learning, at least in the sense of (Higgins et al., 2016) where the aim is to factorize the dimensions of a latent vector. In particular, the gains in performance when using SDN were evaluated with respect to: (a) evidence lower bound (ELBO), as a proxy to measure how well an image distribution is approximated; (b) disentanglement of latent codes based on the corresponding metrics, to examine the effects of SDN decoder to the quality of learned latent representations.
{ "annotation": [ "Rewriting_medium" ], "instruction": "Make sentence precise.", "annotator": "annotator_08" }
{ "annotation": [ "Rewriting_light" ], "instruction": "Rephrase the second sentence, mostly focusing on the second half.", "annotator": "annotator_07" }
sIqSoZ9KiO
KLlOZMoJ9G
1
[ { "text": "To study its performance impact in a more constrained setting, SDN was paired with a VAE architecturally much simpler than IAF-VAE." }, { "text": "Apart from the implementation simplicity and shorter training time, a non-hierarchical VAE is more suitable for representation learning – there is a single stochastic vector and not a hierarchy of feature maps, which enables better control of the latent space." }, { "text": "In particular, the gains in performance when using SDN were evaluated with respect to: (a) evidence lower bound (ELBO), as a proxy to measure how well an image distribution is approximated; (b) disentanglement of latent codes based on the corresponding metrics, to examine the effects of SDN decoder to the quality of learned latent representations." } ]
[ { "text": "To study its performance impact in a more constrained setting, SDN was paired with a VAE architecturally much simpler than IAF-VAE." }, { "text": "Apart from the implementation simplicity and shorter training time, non-hierarchical VAE is more suitable for disentangled representation learning, at least in the sense of (Higgins et al., 2016) where the aim is to factorize the dimensions of a latent vector." }, { "text": "In particular, the gains in performance when using SDN were evaluated with respect to: (a) evidence lower bound (ELBO), as a proxy to measure how well an image distribution is approximated; (b) disentanglement of latent codes based on the corresponding metrics, to examine the effects of SDN decoder to the quality of learned latent representations." } ]
q4rMz7ZfFG.uyxGiQeMP.01
We give two cases of the GraphCodeBERT output for this task in Figure 6. In the first example, the model successfully finds Python source code that correctly matches the sementic of the query “Scans through a string for substrings matched some patterns”. The source code finds all substrings by calling re.findall () build-in fucntion. In the second case, the query is “Combing the individual byte arrays into one array”, and the model searches a source code from Java candidate codes. As we can see, the source code concatenates multiple arrays into one array by calling System.arraycopy () build-in fucntion.
We use GraphCodeBERT to separately encode query and source code with data flow, and calculate inner product of their representations of the special token [ CLS ] as relevance scores to rank candidate codes. In the fine-turning step, we set the learning rate as 2e-5, the batch size as 32, the max sequence length of queries and codes as 128 and 256, and the max number of nodes as 64. We use the Adam optimizer to update model parameters and perform early stopping on the development set.
{ "annotation": [ "Rewriting_heavy", "Content_substitution" ], "instruction": "", "annotator": "annotator_10" }
{ "annotation": [ "Unusable" ], "instruction": "", "annotator": "annotator_03" }
q4rMz7ZfFG
uyxGiQeMP
1
[ { "text": "We give two cases of the GraphCodeBERT output for this task in Figure 6. In the first example, the model successfully finds Python source code that correctly matches the sementic of the query “Scans through a string for substrings matched some patterns”." }, { "text": "The source code finds all substrings by calling re.findall () build-in fucntion. In the second case, the query is “Combing the individual byte arrays into one array”, and the model searches a source code from Java candidate codes. As we can see, the source code concatenates multiple arrays into one array by calling System.arraycopy () build-in fucntion." } ]
[ { "text": "We use GraphCodeBERT to separately encode query and source code with data flow, and calculate inner product of their representations of the special token [ CLS ] as relevance scores to rank candidate codes." }, { "text": "In the fine-turning step, we set the learning rate as 2e-5, the batch size as 32, the max sequence length of queries and codes as 128 and 256, and the max number of nodes as 64. We use the Adam optimizer to update model parameters and perform early stopping on the development set." } ]

ParaRev: Building a dataset for Scientific Paragraph Revision annotated with revision instruction

About

This repository contains ParaRev, a dataset of 48k revised scientific paragraphs with an evaluation subset of 641 paragraphs manually annotated with revision instructions. The dataset is extracted from the CASIMIR corpus; the extraction and annotation process is described in:

ParaRev : Building a dataset for Scientific Paragraph Revision annotated with revision instruction (Jourdan et al., WRAICOGS 2025)

Content

The dataset is composed of two subsets:

  • pararev_full: The full dataset, composed of 48k pairs of revised paragraphs without annotation.
  • pararev_annot_subset: The manually annotated subset composed of 641 paragraphs; each paragraph has 2 annotations. These paragraphs are also included in pararev_full.

The data in pararev_full follow this distribution:

Distribution # chars Src # chars Tgt # words Src # words Tgt # sents Src # sents Tgt % words deleted % words added Lev dist
Min 47 48 7 7 1 1 0 0 0
Avg 680.16 715.58 125.54 132.99 5.26 5.50 21.54 25.63 194.80
Max 5202 5588 1003 1147 70 68 96.51 97.90 2265
Avg 374.11 394.20 69.04 73.32 3.07 3.19 18.19 18.15 160.10
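
The per-pair statistics above (word counts, % words deleted/added, character-level Levenshtein distance) can be reproduced approximately from a source/target paragraph pair. The sketch below is one plausible way to compute them, using word-level diffing via `difflib` and a standard character-level edit distance; the paper's exact counting procedure may differ:

```python
from difflib import SequenceMatcher

def levenshtein(a: str, b: str) -> int:
    """Character-level edit distance via the classic two-row DP recurrence."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def revision_stats(src: str, tgt: str) -> dict:
    """Approximate the table's per-pair columns for one source/target pair."""
    src_w, tgt_w = src.split(), tgt.split()
    opcodes = SequenceMatcher(a=src_w, b=tgt_w).get_opcodes()
    deleted = sum(i2 - i1 for op, i1, i2, j1, j2 in opcodes
                  if op in ("delete", "replace"))
    added = sum(j2 - j1 for op, i1, i2, j1, j2 in opcodes
                if op in ("insert", "replace"))
    return {
        "words_src": len(src_w),
        "words_tgt": len(tgt_w),
        "pct_deleted": 100 * deleted / len(src_w) if src_w else 0.0,
        "pct_added": 100 * added / len(tgt_w) if tgt_w else 0.0,
        "lev_dist": levenshtein(src, tgt),
    }
```

For example, `revision_stats("a b c d", "a b x d e")` reports 25% of source words deleted and 40% of target words added, since one word is replaced and one is inserted.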

The annotations in pararev_annot_subset follow this label distribution:

Label Rewriting light Rewriting medium Rewriting heavy Development Content add Content subs Concision Content del Unusable
Prct % 15.44 14.27 4.13 19.07 12.99 6.47 12.83 4.72 10.06
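
As the preview rows above show, each annotated pair carries two annotation dicts (annot_1, annot_2), either of which may be null or carry the label "Unusable". A minimal sketch of filtering usable pairs, assuming the field names shown in the viewer (which may not match the released files exactly):

```python
# Toy rows mirroring the annotated-subset preview above; field names are
# assumptions taken from the viewer, not a guaranteed schema.
rows = [
    {"id_paragraph": "a.b.00",
     "annot_1": {"annotation": ["Concision"], "annotator": "annotator_02"},
     "annot_2": {"annotation": ["Concision"], "annotator": "annotator_07"}},
    {"id_paragraph": "c.d.01",
     "annot_1": {"annotation": ["Unusable"], "annotator": "annotator_03"},
     "annot_2": None},  # annot_2 can be null, as in some preview rows
]

def is_usable(row: dict) -> bool:
    """Keep a pair only if it has at least one annotation and none is Unusable."""
    annots = [a for a in (row["annot_1"], row["annot_2"]) if a is not None]
    labels = [lab for a in annots for lab in a["annotation"]]
    return bool(labels) and "Unusable" not in labels

usable = [r for r in rows if is_usable(r)]
print([r["id_paragraph"] for r in usable])  # → ['a.b.00']
```

The same predicate applies unchanged to a pandas DataFrame via `df.apply(is_usable, axis=1)` if the subset is loaded through pandas.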

The following data fields are available:

  • WIP

Please cite this work as:

@inproceedings{jourdan-etal-2025-pararev,
    title = "{P}ara{R}ev : Building a dataset for Scientific Paragraph Revision annotated with revision instruction",
    author = "Jourdan, L{\'e}ane  and
      Boudin, Florian  and
      Dufour, Richard  and
      Hernandez, Nicolas  and
      Aizawa, Akiko",
    editor = "Zock, Michael  and
      Inui, Kentaro  and
      Yuan, Zheng",
    booktitle = "Proceedings of the First Workshop on Writing Aids at the Crossroads of AI, Cognitive Science and NLP (WRAICOGS 2025)",
    month = jan,
    year = "2025",
    address = "Abu Dhabi, UAE",
    publisher = "International Committee on Computational Linguistics",
    url = "https://aclanthology.org/2025.wraicogs-1.4/",
    pages = "35--44",
    abstract = "Revision is a crucial step in scientific writing, where authors refine their work to improve clarity, structure, and academic quality. Existing approaches to automated writing assistance often focus on sentence-level revisions, which fail to capture the broader context needed for effective modification. In this paper, we explore the impact of shifting from sentence-level to paragraph-level scope for the task of scientific text revision. The paragraph level definition of the task allows for more meaningful changes, and is guided by detailed revision instructions rather than general ones. To support this task, we introduce ParaRev, the first dataset of revised scientific paragraphs with an evaluation subset manually annotated with revision instructions. Our experiments demonstrate that using detailed instructions significantly improves the quality of automated revisions compared to general approaches, no matter the model or the metric considered."
}