id_paragraph
stringlengths
20
26
parag_1
stringlengths
101
3.02k
parag_2
stringlengths
173
2.77k
annot_1
dict
annot_2
dict
id_source
stringlengths
8
11
id_target
stringlengths
8
11
index_paragraph
int64
0
26
list_sentences_1
listlengths
1
36
list_sentences_2
listlengths
1
36
7_CwM-IzWd.zcm6f5HDI.04
During training, the parameters of the multi-modal DNN are updated by stochastic gradient descent (SGD) to minimize the loss: L = CE( y, ˆ y 0 ) + CE( y, ˆ y 1 ) , where CE stands for cross-entropy. We refer to each of the cross-entropy losses as a modality-specific loss. We train the model until the highest accuracy on D val is reached.
During training, the parameters of the multi-modal DNN are updated by stochastic gradient descent (SGD) to minimize the loss: L = CE( y, ˆ y 0 ) + CE( y, ˆ y 1 ) , where CE stands for cross-entropy. We refer to each of the cross-entropy losses as a modality-specific loss. We train the model until ˆ y = y for all samples in D train and take the checkpoint of it when ˆ y reaches the highest accuracy on D val .
{ "annotation": [ "Development" ], "instruction": "", "annotator": "annotator_08" }
{ "annotation": [ "Development" ], "instruction": "", "annotator": "annotator_02" }
7_CwM-IzWd
zcm6f5HDI
4
[ { "text": "During training, the parameters of the multi-modal DNN are updated by stochastic gradient descent (SGD) to minimize the loss: L = CE( y, ˆ y 0 )" }, { "text": "+ CE( y, ˆ y 1 ) , where CE stands for cross-entropy." }, { "text": "We refer to each of the cross-entropy losses as a modality-specific loss." }, { "text": "We train the model until the highest accuracy on D val is reached." } ]
[ { "text": "During training, the parameters of the multi-modal DNN are updated by stochastic gradient descent (SGD) to minimize the loss: L = CE( y, ˆ y 0 )" }, { "text": "+ CE( y, ˆ y 1 ) , where CE stands for cross-entropy." }, { "text": "We refer to each of the cross-entropy losses as a modality-specific loss." }, { "text": "We train the model until ˆ y = y for all samples in D train and take the checkpoint of it when ˆ y reaches the highest accuracy on D val ." } ]
hegI87bI5S.fL6Q48sfx8.09
The task was created with reference to the previous study [25]. Fig- ure 3 shows a schematic of the task. A pink circular start area (251-pixel radius) and a green target were displayed on a gray back- ground. First, participants clicked on the start area, and the cursor was fixed at the center of the start area. Assuming the initial position of the cursor may affect the cursor path andthe performance of pointing, we strictly fixed a starting position of the trial. Participants clicked again at the starting position, and the trial began. The start area disappeared as a feedback for the beginning of the trial. Partici- pants aimed at the target and ended the trial with the next click. If participants clicked correctly on the target, we marked the trial as a success; else, the trial was marked as a failure (error). We presented a sound feedback in response to the success or failure of the trial.
The task was created by referring to a previous study [28]. Figure 3 shows a schematic of the task. A pink circular start area (251-pixel radius) and a green target were displayed on a gray background. The participants clicked on the start area; the cursor positioned at the center of the start area. We strictly fixed the starting position of the cursor for the trial assuming that the initial position of the cursor can affect the cursor path and performance of pointing [28]. The trial started once the participant clicked on the starting position. The start area then disappeared, which acted as feedback to indicate the start of the trial. Participants aimed at the target and ended the trial with the next click. If participants clicked the target correctly, we marked the trial as a success; else, the trial was marked as a failure (error). We presented a sound feedback in response to the success or failure of the trial.
{ "annotation": [ "Rewriting_medium" ], "instruction": "Rewrite the middle part of the paragraph to make it more better. Replace some words in the paragraph.", "annotator": "annotator_10" }
{ "annotation": [ "Rewriting_medium" ], "instruction": "Slightly revise for readability, you can reorganise ideas in sentences if necessary.", "annotator": "annotator_07" }
hegI87bI5S
fL6Q48sfx8
9
[ { "text": "The task was created with reference to the previous study [25]." }, { "text": "Fig- ure 3 shows a schematic of the task." }, { "text": "A pink circular start area (251-pixel radius) and a green target were displayed on a gray back- ground." }, { "text": "First, participants clicked on the start area, and the cursor was fixed at the center of the start area." }, { "text": "Assuming the initial position of the cursor may affect the cursor path andthe performance of pointing, we strictly fixed a starting position of the trial." }, { "text": "Participants clicked again at the starting position, and the trial began." }, { "text": "The start area disappeared as a feedback for the beginning of the trial." }, { "text": "Partici- pants aimed at the target and ended the trial with the next click." }, { "text": "If participants clicked correctly on the target, we marked the trial as a success; else, the trial was marked as a failure (error)." }, { "text": "We presented a sound feedback in response to the success or failure of the trial." } ]
[ { "text": "The task was created by referring to a previous study [28]." }, { "text": "Figure 3 shows a schematic of the task." }, { "text": "A pink circular start area (251-pixel radius) and a green target were displayed on a gray background." }, { "text": "The participants clicked on the start area; the cursor positioned at the center of the start area." }, { "text": "We strictly fixed the starting position of the cursor for the trial assuming that the initial position of the cursor can affect the cursor path and performance of pointing [28]." }, { "text": "The trial started once the participant clicked on the starting position." }, { "text": "The start area then disappeared, which acted as feedback to indicate the start of the trial." }, { "text": "Participants aimed at the target and ended the trial with the next click." }, { "text": "If participants clicked the target correctly, we marked the trial as a success; else, the trial was marked as a failure (error)." }, { "text": "We presented a sound feedback in response to the success or failure of the trial." } ]
SyGfyinsH.I2YVGmIp0.00
A + C + D refers to our approach. In (b), we show the same ablations over the entire trajectory until t = 20 . As can be seen, using the calibrated predictor produces a large gain and using the direct bound produce a large gain on average; these gains are most noticeable in the tails. Using the accumulated confidence produces a smaller, but still significant, gain. In (c) and (d), we show how the sizes vary with (cid:15) and δ , respectively. The trends are similar those for ResNet.
A + C + D is our approach. As before, we omit results for the ablation using the VC generalization bound since n is so small that the bound does not hold for any k for the given (cid:15) and δ . In (b), we show the same ablations over the entire trajectory until t = 20 . As can be seen, using the calibrated predictor produces a large gain; these gains are most noticeable in the tails. Using the accumulated confidence produces a smaller, but still significant, gain. In (c) and (d), we show how the sizes vary with (cid:15) and δ , respectively. The trends are similar those for ResNet.
{ "annotation": [ "Content_addition", "Concision" ], "instruction": "", "annotator": "annotator_09" }
{ "annotation": [ "Development" ], "instruction": "", "annotator": "annotator_07" }
SyGfyinsH
I2YVGmIp0
0
[ { "text": "A + C + D refers to our approach." }, { "text": "" }, { "text": "In (b), we show the same ablations over the entire trajectory until t = 20 ." }, { "text": "As can be seen, using the calibrated predictor produces a large gain and using the direct bound produce a large gain on average; these gains are most noticeable in the tails." }, { "text": "Using the accumulated confidence produces a smaller, but still significant, gain." }, { "text": "In (c) and (d), we show how the sizes vary with (cid:15) and δ , respectively." }, { "text": "The trends are similar those for ResNet." } ]
[ { "text": "A + C + D is our approach." }, { "text": "As before, we omit results for the ablation using the VC generalization bound since n is so small that the bound does not hold for any k for the given (cid:15) and δ ." }, { "text": "In (b), we show the same ablations over the entire trajectory until t = 20 ." }, { "text": "As can be seen, using the calibrated predictor produces a large gain; these gains are most noticeable in the tails." }, { "text": "Using the accumulated confidence produces a smaller, but still significant, gain." }, { "text": "In (c) and (d), we show how the sizes vary with (cid:15) and δ , respectively." }, { "text": "The trends are similar those for ResNet." } ]
WldWha1MT.LL2ZsGpJga.03
A well-established metric to evaluate the topological performance of a segmentation network is the Betti number error, see appendix I, which compares the topological complexity of P and G . However, it is limited as it ignores the spatial correspondence ofthe topological features within their respective images (see Figure 2(b)).
Betti number error The Betti number error β err (see App. K) compares the topological complexity of the binarized prediction P and the ground truth G . However, it is limited as it only compares the number of topological features in both images, while ignoring their spatial correspondence (see Fig.
{ "annotation": [ "Unusable" ], "instruction": "", "annotator": "annotator_02" }
{ "annotation": [ "Rewriting_heavy" ], "instruction": "Rewrite this definition in a more direct and academic style.", "annotator": "annotator_07" }
WldWha1MT
LL2ZsGpJga
3
[ { "text": "A well-established metric to evaluate the topological performance of a segmentation network is the Betti number error, see appendix I, which compares the topological complexity of P and G ." }, { "text": "However, it is limited as it ignores the spatial correspondence ofthe topological features within their respective images (see Figure 2(b))." } ]
[ { "text": "Betti number error The Betti number error β err (see App. K) compares the topological complexity of the binarized prediction P and the ground truth G ." }, { "text": "However, it is limited as it only compares the number of topological features in both images, while ignoring their spatial correspondence (see Fig." } ]
7_CwM-IzWd.zcm6f5HDI.03
We implement the fusion module as a multi-modal transfer module (MMTM) (Joze et al., 2020). The first step in MMTM is to squeeze feature maps from each uni-modal branch to vector representations via global average pooling over spatial dimensions. Next we concatenate these representations and applya linear transformation to obtain cross-modal context representation. We predict channel-wise weights for each modality based this context representation through two independent fully-connected layers. Finally, these weights are used tore-calibrate the channel-wise features per modality.
We implement every fusion module by a multi-modal transfer module (MMTM) (Joze et al., 2020). Each MMTM connects two layers from the two uni-modal branches. There is first the global average pooling applied over spatial dimensions to transform feature maps into a vector. We concatenate the two vectors and apply linear transformation. We refer to its output as context representation. Next, for each uni-modal branch, we implement a fully connected layer on the context representation and get a vector with a dimension of the number of feature maps. Feature maps are re-scaled by this vector before passing to the next layer of the uni-modal branch.
{ "annotation": [ "Rewriting_heavy" ], "instruction": "Rearrange the structure to make the structure clearer.", "annotator": "annotator_08" }
{ "annotation": [ "Rewriting_heavy" ], "instruction": "Rewrite this paragraph completely to make it clearer.", "annotator": "annotator_02" }
7_CwM-IzWd
zcm6f5HDI
3
[ { "text": "We implement the fusion module as a multi-modal transfer module (MMTM) (Joze et al., 2020)." }, { "text": "The first step in MMTM is to squeeze feature maps from each uni-modal branch to vector representations via global average pooling over spatial dimensions." }, { "text": "Next we concatenate these representations and applya linear transformation to obtain cross-modal context representation." }, { "text": "We predict channel-wise weights for each modality based this context representation through two independent fully-connected layers. Finally, these weights are used tore-calibrate the channel-wise features per modality." } ]
[ { "text": "We implement every fusion module by a multi-modal transfer module (MMTM) (Joze et al., 2020)." }, { "text": "Each MMTM connects two layers from the two uni-modal branches. There is first the global average pooling applied over spatial dimensions to transform feature maps into a vector." }, { "text": "We concatenate the two vectors and apply linear transformation. We refer to its output as context representation." }, { "text": "Next, for each uni-modal branch, we implement a fully connected layer on the context representation and get a vector with a dimension of the number of feature maps. Feature maps are re-scaled by this vector before passing to the next layer of the uni-modal branch." } ]
uJRtLYIOIq.e9xxGlB_c.00
Lemma 1 implies the CPD kernels in Corollary 1 can be made PD if they are added with large enoughconstants; for example, c − ∥ x − x ′ ∥ p for large enough c . Although Lemma 1 does not have an explicit construction of c , thanks to the shift-invariant property of the Softmax normalization, we can leave it as an under-determined constant in our positional embedding design, which is Eq. (1) insection 4. Still, given a set of test points { x i } Ni =1 , one can do a geometric sequence search 1 to search for a c such that the N × N matrix c + ˜ k ( x i , x j )] Ni,j =1 ⪰ 0 . Hence, in this work, we do not need thevalue of c , but we can compute it if we do need its value, e.g., deriving the feature map of c + ˜ k .
Lemma 1 implies the CPD kernels in Corollary 1 can be made PD if a large enough constant is added. For example, c − ∥ x − x ′ ∥ p for large enough c . Although Lemma 1 does not have an explicit construction of c , thanks to the shift-invariant property of the Softmax normalization, we can leave it as an under-determined constant in our positional embedding design (Eq. (1) in section 4). Given a set of test points { x i } Ni =1 , one can do a geometric sequence search 1 to search for a c such that the N × N matrix [ c + ˜ k ( x i , x j )] Ni,j =1 ⪰ 0 . Hence, we do not need the value of c , but we can compute it if needed, e.g., deriving the feature map of c + ˜ k .
{ "annotation": [ "Concision" ], "instruction": "Rewrite some formulations, giving preference to shorter ones.", "annotator": "annotator_04" }
{ "annotation": [ "Concision" ], "instruction": "Shorten this paragraph a bit while keeping all the informations.", "annotator": "annotator_07" }
uJRtLYIOIq
e9xxGlB_c
0
[ { "text": "Lemma 1 implies the CPD kernels in Corollary 1 can be made PD if they are added with large enoughconstants;" }, { "text": "for example, c − ∥ x − x ′ ∥ p for large enough c ." }, { "text": "Although Lemma 1 does not have an explicit construction of c , thanks to the shift-invariant property of the Softmax normalization, we can leave it as an under-determined constant in our positional embedding design, which is Eq." }, { "text": "(1) insection 4." }, { "text": "Still, given a set of test points { x i } Ni =1 , one can do a geometric sequence search 1 to search for a c such that the N × N matrix" }, { "text": " c + ˜ k" }, { "text": "( x i , x j )]" }, { "text": "Ni,j =1 ⪰ 0 ." }, { "text": "Hence, in this work, we do not need thevalue of c , but we can compute it if we do need its value, e.g., deriving the feature map of c + ˜ k ." } ]
[ { "text": "Lemma 1 implies the CPD kernels in Corollary 1 can be made PD if a large enough constant is added." }, { "text": "For example, c − ∥ x − x ′ ∥ p for large enough c ." }, { "text": "Although Lemma 1 does not have an explicit construction of c , thanks to the shift-invariant property of the Softmax normalization, we can leave it as an under-determined constant in our positional embedding design (Eq." }, { "text": "(1) in section 4)." }, { "text": "Given a set of test points { x i } Ni =1 , one can do a geometric sequence search 1 to search for a c such that the" }, { "text": "N × N matrix [ c + ˜ k" }, { "text": "( x i , x j )]" }, { "text": "Ni,j =1 ⪰ 0 ." }, { "text": "Hence, we do not need the value of c , but we can compute it if needed, e.g., deriving the feature map of c + ˜ k ." } ]
xV0XmrSMtk.sYfR73R9z.02
Discrete Variational Auto-Encoder. In a discrete variational autoencoder (DVAE) (Rolfe, 2016), the network layers before the sampling solver represent the encoder and the layers after the sampling solver the decoder. We consider the task of training a DVAE on the M NIST dataset where the encoder maps the input image to a discrete distribution of k -hot binary vector of length 20 in the latent space and the decoder reconstructs the image.
Discrete Variational Auto-Encoder (DVAE). In a DVAE (Rolfe, 2016), the network layers before the sampling solver represent the encoder and the layers after the sampling solver the decoder. We consider the task of training a DVAE on the M NIST dataset where the encoder maps the input image to a discrete distribution of k -hot binary vector of length 20 in the latent space and the decoder reconstructs the image.
{ "annotation": [ "Concision" ], "instruction": "Make this paragraph more concise by introducing acronyms earlier.", "annotator": "annotator_02" }
{ "annotation": [ "Concision" ], "instruction": "Introduce the acronym DVAE earlier to avoid repeating it.", "annotator": "annotator_07" }
xV0XmrSMtk
sYfR73R9z
2
[ { "text": "Discrete Variational Auto-Encoder." }, { "text": "In a discrete variational autoencoder (DVAE) (Rolfe, 2016), the network layers before the sampling solver represent the encoder and the layers after the sampling solver the decoder." }, { "text": "We consider the task of training a DVAE on the M NIST dataset where the encoder maps the input image to a discrete distribution of k -hot binary vector of length 20 in the latent space and the decoder reconstructs the image." } ]
[ { "text": "Discrete Variational Auto-Encoder (DVAE)." }, { "text": "In a DVAE (Rolfe, 2016), the network layers before the sampling solver represent the encoder and the layers after the sampling solver the decoder." }, { "text": "We consider the task of training a DVAE on the M NIST dataset where the encoder maps the input image to a discrete distribution of k -hot binary vector of length 20 in the latent space and the decoder reconstructs the image." } ]
PDvmJtmgQb.gGrpxbc7UI.02
Other Uses of Public Data in DP Learning: The use of in-distribution public data has been extensively explored both theoretically and empirically. On the theoretical side, it has been shown (Alon et al., 2019; Bassily et al., 2020a) that a combination of private and public data samples can yield asymptotically better worst-case PAC learning guarantees than either on their own. Another line of work (Papernot et al., 2016; 2018; Bassily et al., 2018b; Dwork & Feldman, 2018; Nandi & Bassily, 2020) considers public data that is unlabelled, but otherwise comes from the same distribution as the private data; the primary goal is to use the private data to generate labels for the public data, which can then be used arbitrarily. So far only two papers have considered out-of-distribution data. Bassily et al. (2020c) assume that whether a data record is public or private depends on its label; e.g., the public data may contain many negative examples, but few positive examples. They show that halfspaces can be learned in this model. Liu et al. (2021) consider synthetic data generation and provide guarantees that depend on the R ´ enyi divergences between the public and private distributions. Abadi et al. and Tramer & Boneh (2020) provided techniques to effectively use out-of-distribution public data for pre-training for DP-SGD. However, they did not consider techniques to improve a pre-trained model using private and public data, which is the focus of our work.
Other Uses of Public Data in DP Learning: The use of in-distribution public data has been extensively explored both theoretically and empirically. On the theoretical side, it has been shown [3, 10] that a combination of private and public data samples can yield asymptotically better worst-case PAC learning guarantees than either on their own. Another line of work [8, 16, 29, 31, 32] considers public data that is unlabelled, but otherwise comes from the same distribution as the private data; the primary goal is to use the private data to generate labels for the public data, which can then be used arbitrarily. So far only two papers have considered out-of-distribution data. [12] assume that whether a data record is public or private depends on its label; e.g., the public data may contain many negative examples, but few positive examples. They show that halfspaces can be learned in this model. [26] consider synthetic data generation and provide guarantees that depend on the R ´ enyi divergences between the public and private distributions. [1] and [37] provided techniques to effectively use out-of-distribution public data for pre-training for DP-SGD. However, they did not consider techniques to improve a pre-trained model using private and public data, which is the focus of our work.
{ "annotation": [ "Unusable" ], "instruction": "", "annotator": "annotator_03" }
{ "annotation": [ "Unusable" ], "instruction": "I want to use numbers for in-text citations. ", "annotator": "annotator_09" }
PDvmJtmgQb
gGrpxbc7UI
2
[ { "text": "Other Uses of Public Data in DP Learning: The use of in-distribution public data has been extensively explored both theoretically and empirically." }, { "text": "On the theoretical side, it has been shown (Alon et al., 2019; Bassily et al., 2020a) that a combination of private and public data samples can yield asymptotically better worst-case PAC learning guarantees than either on their own." }, { "text": "Another line of work (Papernot et al., 2016; 2018; Bassily et al., 2018b; Dwork & Feldman, 2018; Nandi & Bassily, 2020) considers public data that is unlabelled, but otherwise comes from the same distribution as the private data; the primary goal is to use the private data to generate labels for the public data, which can then be used arbitrarily." }, { "text": "So far only two papers have considered out-of-distribution data." }, { "text": "Bassily et al." }, { "text": "(2020c) assume that whether a data record is public or private depends on its label; e.g., the public data may contain many negative examples, but few positive examples." }, { "text": "They show that halfspaces can be learned in this model." }, { "text": "Liu et al. (2021) consider synthetic data generation and provide guarantees that depend on the R ´ enyi divergences between the public and private distributions. Abadi et al." }, { "text": " and Tramer & Boneh (2020) provided techniques to effectively use out-of-distribution public data for pre-training for DP-SGD." }, { "text": "However, they did not consider techniques to improve a pre-trained model using private and public data, which is the focus of our work." } ]
[ { "text": "Other Uses of Public Data in DP Learning: The use of in-distribution public data has been extensively explored both theoretically and empirically." }, { "text": "On the theoretical side, it has been shown [3, 10] that a combination of private and public data samples can yield asymptotically better worst-case PAC learning guarantees than either on their own." }, { "text": "Another line of work [8, 16, 29, 31, 32] considers public data that is unlabelled, but otherwise comes from the same distribution as the private data; the primary goal is to use the private data to generate labels for the public data, which can then be used arbitrarily." }, { "text": "So far only two papers have considered out-of-distribution data." }, { "text": "" }, { "text": "[12] assume that whether a data record is public or private depends on its label; e.g., the public data may contain many negative examples, but few positive examples." }, { "text": "They show that halfspaces can be learned in this model." }, { "text": "[26] consider synthetic data generation and provide guarantees that depend on the R ´ enyi divergences between the public and private distributions." }, { "text": "[1] and [37] provided techniques to effectively use out-of-distribution public data for pre-training for DP-SGD." }, { "text": "However, they did not consider techniques to improve a pre-trained model using private and public data, which is the focus of our work." } ]
E2pFUCGYZ1.5hMS4Fg2b_b.00
ADO iterations in the Bayesian framework are shown in Sec. 3.3 and Appendix A.3. Finally, with theestimated posterior, the predictive uncertainty can be quantified by evaluating the identified systemwith an ensemble of parameters. To further improve the prediction capability, especially for chaoticsystems, we propose to leverage data assimilation techniques, which is shown in the green box anddiscussed in Sec.3.4 and Appendix A.5.
ADO iterations in the Bayesian framework are shown in Sec. 3.3 and supplemental materials. Finally,with the estimated posterior, the predictive uncertainty can be quantified by evaluating the identifiedsystem with an ensemble of parameters. To further improve the prediction capability, especially forchaotic systems, we propose to leverage data assimilation techniques, which is shown in the greenbox and discussed in Sec.3.4 and supplemental materials.
{ "annotation": [ "Rewriting_light" ], "instruction": "Use \"supplemental materials\" instead of \"Appendix\"", "annotator": "annotator_09" }
{ "annotation": [ "Rewriting_light" ], "instruction": "Lightly revise for readability.", "annotator": "annotator_07" }
E2pFUCGYZ1
5hMS4Fg2b_b
0
[ { "text": "ADO iterations in the Bayesian framework are shown in Sec." }, { "text": "3.3 and Appendix A.3." }, { "text": "Finally, with theestimated posterior, the predictive uncertainty can be quantified by evaluating the identified systemwith an ensemble of parameters." }, { "text": "To further improve the prediction capability, especially for chaoticsystems, we propose to leverage data assimilation techniques, which is shown in the green box anddiscussed in Sec.3.4 and Appendix A.5." } ]
[ { "text": "ADO iterations in the Bayesian framework are shown in Sec." }, { "text": "3.3 and supplemental materials." }, { "text": "Finally,with the estimated posterior, the predictive uncertainty can be quantified by evaluating the identifiedsystem with an ensemble of parameters." }, { "text": "To further improve the prediction capability, especially forchaotic systems, we propose to leverage data assimilation techniques, which is shown in the greenbox and discussed in Sec.3.4 and supplemental materials." } ]
MXi6uEx-hp.rdZfFcGyf9.14
AGILE clearly outperforms all the baselines demonstrating that relational knowledge of other available actions is crucial for an optimal policy. RecSim and Real RecSys : result trends are consistent with CREATE, but less pronounced for Real RecSys. Additionally, DQN is worse than CDQNbased architectures because the top-K greedy list action building ignores list interdependence.
AGILE outperforms all the baselines, demonstrating that relational knowledge of other available actions is crucial for an optimal policy. RecSim and Real RecSys : result trends are consistent with CREATE, but less pronounced for Real RecSys. Additionally, DQN is worse than CDQN-based architectures because the top-K greedy list-action ignores intra-list dependence.
{ "annotation": [ "Rewriting_light" ], "instruction": "Remove unnecessary words and fix the words if they are not in the correct form", "annotator": "annotator_10" }
{ "annotation": [ "Rewriting_light" ], "instruction": "Remove terms that might be considered biased. Make the writing more clear.", "annotator": "annotator_03" }
MXi6uEx-hp
rdZfFcGyf9
14
[ { "text": "AGILE clearly outperforms all the baselines demonstrating that relational knowledge of other available actions is crucial for an optimal policy." }, { "text": "RecSim and Real RecSys : result trends are consistent with CREATE, but less pronounced for Real RecSys." }, { "text": "Additionally, DQN is worse than CDQNbased architectures because the top-K greedy list action building ignores list interdependence." } ]
[ { "text": "AGILE outperforms all the baselines, demonstrating that relational knowledge of other available actions is crucial for an optimal policy." }, { "text": "RecSim and Real RecSys : result trends are consistent with CREATE, but less pronounced for Real RecSys." }, { "text": "Additionally, DQN is worse than CDQN-based architectures because the top-K greedy list-action ignores intra-list dependence." } ]
mFNezF8ubW.g-sOkbqBcm.00
Each concept in the hierarchy corresponds to one set of hidden nodes which are connected to the hidden nodes representing its children, if any. For example, if Mammal, Bird and Reptile are the descendant concept of Chordate, there will be all to all connections from the hidden nodes representing Chordate to those accounting for Mammal, Bird and Reptile. The hidden nodes of a concept is also connected to the output prediction node for the concept itself and those for each of its children category nodes. An additional type of connectivity constrains the concept and category predictions to follow the hierarchical organization of the ontology. We illustrate each of these connections below.
Each concept in the hierarchy corresponds to one set of hidden nodes that essentially represent the concept. These hidden nodes are connected to those representing its children, if any. For example, if Mammal, Bird and Reptile are the descendant concept of Chordate, there will be all to all connections from the hidden nodes representing Chordate to those accounting for Mammal, Bird and Reptile. Consequently, the hidden representation for a child concept is computed from that of its parent. Given the representation in capture in the hidden nodes, two types of output prediction nodes detects the presence of the concept itself and any children category in the input. An additional type of connectivity explicitly constrains the concept and category predictions to follow the hierarchical organization of the ontology. We illustrate each of these connections below.
{ "annotation": [ "Development" ], "instruction": "", "annotator": "annotator_07" }
null
mFNezF8ubW
g-sOkbqBcm
0
[ { "text": "Each concept in the hierarchy corresponds to one set of hidden nodes which are connected to the hidden nodes representing its children, if any." }, { "text": "For example, if Mammal, Bird and Reptile are the descendant concept of Chordate, there will be all to all connections from the hidden nodes representing Chordate to those accounting for Mammal, Bird and Reptile." }, { "text": "The hidden nodes of a concept is also connected to the output prediction node for the concept itself and those for each of its children category nodes." }, { "text": "An additional type of connectivity constrains the concept and category predictions to follow the hierarchical organization of the ontology." }, { "text": "We illustrate each of these connections below." } ]
[ { "text": "Each concept in the hierarchy corresponds to one set of hidden nodes that essentially represent the concept. These hidden nodes are connected to those representing its children, if any." }, { "text": "For example, if Mammal, Bird and Reptile are the descendant concept of Chordate, there will be all to all connections from the hidden nodes representing Chordate to those accounting for Mammal, Bird and Reptile." }, { "text": "Consequently, the hidden representation for a child concept is computed from that of its parent. Given the representation in capture in the hidden nodes, two types of output prediction nodes detects the presence of the concept itself and any children category in the input." }, { "text": "An additional type of connectivity explicitly constrains the concept and category predictions to follow the hierarchical organization of the ontology." }, { "text": "We illustrate each of these connections below." } ]
CVRUl83zah.I75TtW0V7.25
• Instead of using a relation network [] – expanding the set into the set of all pairs first – paired with FSPool, we skip the relation network entirely. This improves the complexity of the set encoder g used in iDSPN from O ( n 2 log n ) to O ( n log n ) . Using the relation network approach does improve our results slightly (e.g. 3.5 percentage points improvement on AP 0 . 125 for 128 × 128 ), but is also a bit slower. For simplicity, we therefore opted to not using relation networks. The architecture of the set encoder g is Linear( 19 → 512 )–ReLU–Linear( 512 → 512 )–FSPool (Zhang et al., 2020). The main difference to DSPN is that since there is no concatenation of pairs, so the input dimensionality is 19 instead of 38 and everything is applied on sets of size n rather than sets of size n 2 . • Instead of ResNet34 to encode the input image, we use the smaller ResNet18. This did not appear to affect results. • We increase the batch size from 32 to 128. There appeared to be no difference in results between the two, with 128 being faster by making better use of parallelization. • We use Nesterov’s Accelerated Gradient (Nesterov, 1983) with a momentum parameter of 0. instead of standard gradient descent without momentum. • Instead of fixing the number of iterations at 10 like DSPN, we set the number of iterations to 20 at the start of training and change it to 40 after 50 epochs. This had slightly better training loss than starting training with 40 iterations. We have tried a few other ways of increasing the number of iterations throughout training (going from 10 to 20 to 30 to 40 iterations, smooth increase from 1 to 40 over the epochs, randomly sampling an iteration between 20 and 40 every batch), which had little impact on results. • We drop the learning rate after 90 epochs from 1e-3 to 1e-4 for the last 10 epochs. This slightly improved training loss while also reducing variance in epoch-to-epoch validation loss. • In preliminary experiments, we rarely observed spikes in the training loss. Clipping the gradients in the inner optimization to a maximum L2 norm of 10 seemed to help.
• Instead of using a relation network (Santoro et al., 2017) – expanding the set into the set of all pairs first – paired with FSPool, we skip the relation network entirely. This improves the complexity of the set encoder g used in iDSPN from O ( n 2 log n ) to O ( n log n ) . Using the relation network approach would improve our results slightly (e.g. 3.5 percentage points improvement on AP 0 .for 128 × 128 ), but is also a bit slower. For simplicity, we therefore opted to not using relation networks. The architecture of the set encoder g in iDSPN is Linear( 19 → 512 )–ReLU–Linear( 512 → 512 )– FSPool. The main difference to DSPN is that since there is no concatenation of pairs, so the input dimensionality is 19 instead of 38 and everything is applied on sets of size n rather than sets of size n 2 . • Instead of ResNet34 to encode the input image, we use the smaller ResNet18. This did not appear to affect results. • Instead of using a learned initial set Y 0 as in DSPN, we find that it makes no difference to randomly sample the initial set for every example. We therefore use the latter for simplicity. In initial experiments we found that even initializing every element to 0 causes no problems. • We increase the batch size from 32 to 128. There appeared to be no difference in results between the two, with 128 being faster by making better use of parallelization. • We use Nesterov’s Accelerated Gradient (Nesterov, 1983) with a momentum parameter of 0. instead of standard gradient descent without momentum. • Instead of fixing the number of iterations at 10 like DSPN, we set the number of iterations to 20 at the start of training and change it to 40 after 50 epochs. This had slightly better training loss than starting training with 40 iterations. We have tried a few other ways of increasing the number of iterations throughout training (going from 10 to 20 to 30 to 40 iterations, smooth increase from 1 to 40 over the epochs, randomly sampling an iteration between 20 and 40 every batch), which had little impact on results. iDSPN training was stable in all of these configurations. • We drop the learning rate after 90 epochs from 1e-3 to 1e-4 for the last 10 epochs. This slightly improved training loss while also reducing variance in epoch-to-epoch validation loss. • In preliminary experiments, we rarely observed spikes in the training loss. Clipping the gradients in the inner optimization to a maximum L2 norm of 10 seemed to help.
{ "annotation": [ "Content_addition" ], "instruction": "", "annotator": "annotator_07" }
null
CVRUl83zah
I75TtW0V7
25
[ { "text": "• Instead of using a relation network [] – expanding the set into the set of all pairs first – paired with FSPool, we skip the relation network entirely." }, { "text": "This improves the complexity of the set encoder g used in iDSPN from O ( n 2 log n ) to O ( n log n ) ." }, { "text": "Using the relation network approach does improve our results slightly (e.g. 3.5 percentage points improvement on AP 0 . 125 for 128 × 128 ), but is also a bit slower." }, { "text": "For simplicity, we therefore opted to not using relation networks." }, { "text": "The architecture of the set encoder g is Linear( 19 → 512 )–ReLU–Linear( 512 → 512 )–FSPool (Zhang et al., 2020)." }, { "text": "The main difference to DSPN is that since there is no concatenation of pairs, so the input dimensionality is 19 instead of 38 and everything is applied on sets of size n rather than sets of size n 2 ." }, { "text": "• Instead of ResNet34 to encode the input image, we use the smaller ResNet18." }, { "text": "This did not appear to affect results." }, { "text": "" }, { "text": "" }, { "text": "" }, { "text": "• We increase the batch size from 32 to 128." }, { "text": "There appeared to be no difference in results between the two, with 128 being faster by making better use of parallelization." }, { "text": "• We use Nesterov’s Accelerated Gradient (Nesterov, 1983) with a momentum parameter of 0." }, { "text": "instead of standard gradient descent without momentum." }, { "text": "• Instead of fixing the number of iterations at 10 like DSPN, we set the number of iterations to 20 at the start of training and change it to 40 after 50 epochs." }, { "text": "This had slightly better training loss than starting training with 40 iterations." }, { "text": "We have tried a few other ways of increasing the number of iterations throughout training (going from 10 to 20 to 30 to 40 iterations, smooth increase from 1 to 40 over the epochs, randomly sampling an iteration between 20 and 40 every batch), which had little impact on results. " }, { "text": "• We drop the learning rate after 90 epochs from 1e-3 to 1e-4 for the last 10 epochs." }, { "text": "This slightly improved training loss while also reducing variance in epoch-to-epoch validation loss." }, { "text": "• In preliminary experiments, we rarely observed spikes in the training loss." }, { "text": "Clipping the gradients in the inner optimization to a maximum L2 norm of 10 seemed to help." } ]
[ { "text": "• Instead of using a relation network (Santoro et al., 2017) – expanding the set into the set of all pairs first – paired with FSPool, we skip the relation network entirely." }, { "text": "This improves the complexity of the set encoder g used in iDSPN from O ( n 2 log n ) to O ( n log n ) ." }, { "text": "Using the relation network approach would improve our results slightly (e.g. 3.5 percentage points improvement on AP 0 .for 128 × 128 ), but is also a bit slower." }, { "text": "For simplicity, we therefore opted to not using relation networks." }, { "text": "The architecture of the set encoder g in iDSPN is Linear( 19 → 512 )–ReLU–Linear( 512 → 512 )– FSPool." }, { "text": "The main difference to DSPN is that since there is no concatenation of pairs, so the input dimensionality is 19 instead of 38 and everything is applied on sets of size n rather than sets of size n 2 ." }, { "text": "• Instead of ResNet34 to encode the input image, we use the smaller ResNet18." }, { "text": "This did not appear to affect results." }, { "text": "• Instead of using a learned initial set Y 0 as in DSPN, we find that it makes no difference to randomly sample the initial set for every example." }, { "text": "We therefore use the latter for simplicity." }, { "text": "In initial experiments we found that even initializing every element to 0 causes no problems." }, { "text": "• We increase the batch size from 32 to 128." }, { "text": "There appeared to be no difference in results between the two, with 128 being faster by making better use of parallelization." }, { "text": "• We use Nesterov’s Accelerated Gradient (Nesterov, 1983) with a momentum parameter of 0." }, { "text": "instead of standard gradient descent without momentum." }, { "text": "• Instead of fixing the number of iterations at 10 like DSPN, we set the number of iterations to 20 at the start of training and change it to 40 after 50 epochs." }, { "text": "This had slightly better training loss than starting training with 40 iterations." }, { "text": "We have tried a few other ways of increasing the number of iterations throughout training (going from 10 to 20 to 30 to 40 iterations, smooth increase from 1 to 40 over the epochs, randomly sampling an iteration between 20 and 40 every batch), which had little impact on results. iDSPN training was stable in all of these configurations." }, { "text": "• We drop the learning rate after 90 epochs from 1e-3 to 1e-4 for the last 10 epochs." }, { "text": "This slightly improved training loss while also reducing variance in epoch-to-epoch validation loss." }, { "text": "• In preliminary experiments, we rarely observed spikes in the training loss." }, { "text": "Clipping the gradients in the inner optimization to a maximum L2 norm of 10 seemed to help." } ]
lLwt-9RJ2tm.XJsauLjck.03
That said, one might still question whether it is possible to match the solution quality of a givenψ -approximate offline algorithm for the maximization objectives in the models of computation we consider. We answer this in the affirmative for at least the dissimilarity objective of [15]; ourstructural decomposition of the cost function and its subsequent implications carry over identically. In particular for this cost function, our results imply (1 + o (1)) ψ -approximate algorithms for HCin weighted graphs that use ( i ) a single-pass and e
That said, one can further question whether it is possible to match the solution quality of any given ψ -approximate offline algorithm for the maximization objectives in the models of computation we consider. We answer this in the affirmative; we can in fact achieve even stronger performance guarantees for both objectives in the sublinear resource regime by exploiting the fact that their corresponding optimal hierarchies have large objective function values 5 , allowing us to tolerate even larger additive errors in our cut-sparsifiers. A straightforward application of our structural decomposition of the cost function along with its downstream implications in each of the three models of computation directly gives us (1 − o (1 /ψ )) ψ -approximate algorithms for both HC maximization objectives in weighted graphs that use ( i ) a single-pass and e
{ "annotation": [ "Development" ], "instruction": "", "annotator": "annotator_07" }
null
lLwt-9RJ2tm
XJsauLjck
3
[ { "text": "That said, one might still question whether it is possible to match the solution quality of a givenψ -approximate offline algorithm for the maximization objectives in the models of computation we consider." }, { "text": "We answer this in the affirmative for at least the dissimilarity objective of [15]; ourstructural decomposition of the cost function and its subsequent implications carry over identically. In particular for this cost function, our results imply (1 + o (1))" }, { "text": "ψ -approximate algorithms for HCin weighted graphs that use ( i ) a single-pass and e" } ]
[ { "text": "That said, one can further question whether it is possible to match the solution quality of any given ψ -approximate offline algorithm for the maximization objectives in the models of computation we consider." }, { "text": "We answer this in the affirmative; we can in fact achieve even stronger performance guarantees for both objectives in the sublinear resource regime by exploiting the fact that their corresponding optimal hierarchies have large objective function values 5 , allowing us to tolerate even larger additive errors in our cut-sparsifiers. A straightforward application of our structural decomposition of the cost function along with its downstream implications in each of the three models of computation directly gives us (1 − o (1 /ψ ))" }, { "text": "ψ -approximate algorithms for both HC maximization objectives in weighted graphs that use ( i ) a single-pass and e" } ]
9ALnOEcGN_.4eEIRZ-dm.00
We need to point out that the idea of designing a continuous space for combinatorial optimization problems has been tried by the heatmaps approaches in the literature [40, 25, 17, 36]. However, thereare several major distinctions between the existing methods and our proposed one. Previous workgenerates heatmaps based on supervised signals (each training graph is paired with its best solution)[4, 19], which are costly to obtain. DIMES is directly optimized with gradients estimated by the REINFORCE algorithm, which do not require supervised signals. As a result, DIMES can scale tolarge graphs with up to tens of thousands of nodes, and predict (nearly) optimal solutions withoutthe need for costly generation of supervised training data or human specification of problem-specificheuristics.
We need to point out that the idea of designing a continuous space for combinatorial optimization problems has been tried by the heatmaps approaches in the literature [40, 25, 17, 14, 36]. However, there are major distinctions between the existing methods and our DIMES. For instance, Fu et al. [17] learn to generate heatmaps via supervised learning (i.e., each training instance is paired with its best solution) [4, 19], which is very costly to obtain on large graphs. DIMES is directly optimized with gradients estimated by the REINFORCE algorithm without any supervision, so it can be trained on large graphs directly. As a result, DIMES can scale to large graphs with up to tens of thousands of nodes, and predict (nearly) optimal solutions without the need for costly generation of supervised training data or human specification of problem-specific heuristics.
{ "annotation": [ "Rewriting_medium", "Development" ], "instruction": "", "annotator": "annotator_07" }
null
9ALnOEcGN_
4eEIRZ-dm
0
[ { "text": "We need to point out that the idea of designing a continuous space for combinatorial optimization problems has been tried by the heatmaps approaches in the literature [40, 25, 17, 36]." }, { "text": "However, thereare several major distinctions between the existing methods and our proposed one." }, { "text": "Previous workgenerates heatmaps based on supervised signals (each training graph is paired with its best solution)[4, 19], which are costly to obtain." }, { "text": "DIMES is directly optimized with gradients estimated by the REINFORCE algorithm, which do not require supervised signals." }, { "text": "As a result, DIMES can scale tolarge graphs with up to tens of thousands of nodes, and predict (nearly) optimal solutions withoutthe need for costly generation of supervised training data or human specification of problem-specificheuristics." } ]
[ { "text": "We need to point out that the idea of designing a continuous space for combinatorial optimization problems has been tried by the heatmaps approaches in the literature [40, 25, 17, 14, 36]." }, { "text": "However, there are major distinctions between the existing methods and our DIMES. For instance, Fu et al." }, { "text": "[17] learn to generate heatmaps via supervised learning (i.e., each training instance is paired with its best solution) [4, 19], which is very costly to obtain on large graphs." }, { "text": "DIMES is directly optimized with gradients estimated by the REINFORCE algorithm without any supervision, so it can be trained on large graphs directly." }, { "text": "As a result, DIMES can scale to large graphs with up to tens of thousands of nodes, and predict (nearly) optimal solutions without the need for costly generation of supervised training data or human specification of problem-specific heuristics." } ]
atxti8SVk.3K9AmPwALM.16
Pascal: Scribble annotations. Table 3 shows that, without CRF post-processing, we get 74 . 1% mIoU, achieving 97 . 6% of full supervision performance; with CRF post-processing, we reach new SOTA: We get 75 . 9% mIoU, achieving 98 . 6% of full supervision performance.
Pascal: Scribble annotations. Table 3 shows that, our method consistently delivers the best performance among methods without or with CRF post-processing. We get 74 . 2% ( 76 . 1% ) mIoU, achieving 97 . 5% ( 98 . 4% ) of full supervision performance in these two categories respectively.
{ "annotation": [ "Content_substitution", "Rewriting_light" ], "instruction": "", "annotator": "annotator_02" }
{ "annotation": [ "Content_substitution", "Rewriting_medium" ], "instruction": "", "annotator": "annotator_07" }
atxti8SVk
3K9AmPwALM
16
[ { "text": "Pascal: Scribble annotations." }, { "text": "Table 3 shows that, without CRF post-processing, we get 74 . 1% mIoU, achieving 97 . 6% of full supervision performance; with CRF post-processing, we reach new" }, { "text": "SOTA: We get 75 ." }, { "text": "9% mIoU, achieving 98 ." }, { "text": "6% of full supervision performance." } ]
[ { "text": "Pascal: Scribble annotations." }, { "text": "Table 3 shows that, our method consistently delivers the best performance among methods without or with CRF post-processing." }, { "text": "We get 74 ." }, { "text": "2% ( 76 . 1% ) mIoU, achieving 97 ." }, { "text": "5% ( 98 . 4% ) of full supervision performance in these two categories respectively." } ]
ByZyHzZC-.HktKf7-AW.01
Our work is also related to other work on the importance of noise in SGDs, which have been previously explored. The main inspiration for having a learning rate schedule is to anneal noise (Bottou, 1998). Neelakantan et al. (2015) observe empirically that adding noise can aid optimization of very deep networks. Our analysis allows us to derive the impact of the gradient noise in the SGD stationary distribution. Additionally, our work also provides intuition toward explaining the recently proposed Cyclic Learning Rate (CLR) schedule (Smith, 2015). CLR schedules have demonstrated good optimization and generalization performances, but are grounded on empirical observation rather than on a theoretical understanding. We show that one can replace learning rate annealing with an equivalent batch size schedule. It suggests that the benefit of CLR relates to the noise that it induces and can be thought of as mixing in Monte Carlo Markov Chain (MCMC) methods. In the MCMC setting, annealing processes enable better mixing (Graham & Storkey, 2017).
Our work is also related to the importance of noise in SGD, which has been previously explored. The main inspiration behind learning rate schedule has been shown to be noise annealing (Bottou, 1998). Neelakantan et al. (2015) observe empirically that adding noise can aid optimization of very deep networks. Our analysis allows us to derive the impact of the gradient’s noise in the SGD stationary distribution. Additionally, our work also provides intuitions toward explaining the recently proposed Cyclic learning rate (CLR) schedule (Smith, 2015). Cyclic learning rate schedules have demonstrated good optimization and generalization performances, but are grounded on empirical observation. We also show that one can replace learning rate annealing with an equivalent batch size schedule. It suggests that the benefit of cyclic learning rate relates to the noise that it induces.
{ "annotation": [ "Content_deletion", "Rewriting_light" ], "instruction": "Remove unnecessary content in the last sentence.", "annotator": "annotator_09" }
{ "annotation": [ "Concision", "Rewriting_light" ], "instruction": "Make the last sentence shorter, only keep the main idea. Slightly concise this paragraph and improve the english.", "annotator": "annotator_07" }
ByZyHzZC-
HktKf7-AW
1
[ { "text": "Our work is also related to other work on the importance of noise in SGDs, which have been previously explored." }, { "text": "The main inspiration for having a learning rate schedule is to anneal noise (Bottou, 1998)." }, { "text": "Neelakantan et al." }, { "text": "(2015) observe empirically that adding noise can aid optimization of very deep networks." }, { "text": "Our analysis allows us to derive the impact of the gradient noise in the SGD stationary distribution." }, { "text": "Additionally, our work also provides intuition toward explaining the recently proposed Cyclic Learning Rate (CLR) schedule (Smith, 2015)." }, { "text": "CLR schedules have demonstrated good optimization and generalization performances, but are grounded on empirical observation rather than on a theoretical understanding." }, { "text": "We show that one can replace learning rate annealing with an equivalent batch size schedule." }, { "text": "It suggests that the benefit of CLR relates to the noise that it induces and can be thought of as mixing in Monte Carlo Markov Chain (MCMC) methods. In the MCMC setting, annealing processes enable better mixing (Graham & Storkey, 2017)." } ]
[ { "text": "Our work is also related to the importance of noise in SGD, which has been previously explored." }, { "text": "The main inspiration behind learning rate schedule has been shown to be noise annealing (Bottou, 1998)." }, { "text": "Neelakantan et al." }, { "text": "(2015) observe empirically that adding noise can aid optimization of very deep networks." }, { "text": "Our analysis allows us to derive the impact of the gradient’s noise in the SGD stationary distribution." }, { "text": "Additionally, our work also provides intuitions toward explaining the recently proposed Cyclic learning rate (CLR) schedule (Smith, 2015)." }, { "text": "Cyclic learning rate schedules have demonstrated good optimization and generalization performances, but are grounded on empirical observation." }, { "text": "We also show that one can replace learning rate annealing with an equivalent batch size schedule." }, { "text": "It suggests that the benefit of cyclic learning rate relates to the noise that it induces." } ]
u9NaukzyJ-.hh0KECXQLv.11
Design A supportstwo sorts of medication entries: drug or phys- ical activity. Each drug entry in the calendar is labelled with the name of the drug and suffixed with bracketed drug dosage. The suffix -WF indicates that the drug should be administered with food. Physical activity entries have a full-color fill, a dashed border, and a label indicating the name of the activity. All other calendar entries are represented with rectangles filled with different shades of grey.
Design A supports medication (or drug) entries and physical activ- ities. Each drug entry in the calendar is labelled with the name of the drug and suffixed with bracketed drug dosage. The suffix -WF indicates that the drug should be administered with food. Physical activity entries have a full-color fill, a dashed border, and a label indicating the name of the activity. All other calendar entries are represented with rectangles filled with different shades of grey.
{ "annotation": [ "Rewriting_medium" ], "instruction": "Make this paragraph a bit more fluid.", "annotator": "annotator_03" }
{ "annotation": [ "Rewriting_medium" ], "instruction": "I want to rewrite the first sentence.", "annotator": "annotator_09" }
u9NaukzyJ-
hh0KECXQLv
11
[ { "text": "Design A supportstwo sorts of medication entries: drug or phys- ical activity." }, { "text": "Each drug entry in the calendar is labelled with the name of the drug and suffixed with bracketed drug dosage." }, { "text": "The suffix -WF indicates that the drug should be administered with food." }, { "text": "Physical activity entries have a full-color fill, a dashed border, and a label indicating the name of the activity." }, { "text": "All other calendar entries are represented with rectangles filled with different shades of grey." } ]
[ { "text": "Design A supports medication (or drug) entries and physical activ- ities." }, { "text": "Each drug entry in the calendar is labelled with the name of the drug and suffixed with bracketed drug dosage." }, { "text": "The suffix -WF indicates that the drug should be administered with food." }, { "text": "Physical activity entries have a full-color fill, a dashed border, and a label indicating the name of the activity." }, { "text": "All other calendar entries are represented with rectangles filled with different shades of grey." } ]
CVRUl83zah.I75TtW0V7.04
Because g is permutation-invariant, any ordering for the elements in Y has the same value for L . In the forward pass of the model, the arg min is approximated by running a fixed number of gradient descent steps. In the backward pass,Zhang et al. (2019) backpropagate through the gradient descent iterations in order to compute the gradients of the training objective with respect to the input vector z and the parameters θ of the encoder.
Because g is permutation-invariant, any ordering for the elements in Y has the same value for L . In the forward pass of the model, the arg min is approximated by running a fixed number of gradient descent steps. In the backward pass, the goal is to differentiate Equation 7 with respect to the input vector z and the parameters θ of the encoder. To do this, Zhang et al. (2019) unroll the gradient descent applied in the forward pass and backpropagate through each gradient descent step.
{ "annotation": [ "Rewriting_medium" ], "instruction": "Add a sentence to explain the last sentence.", "annotator": "annotator_09" }
{ "annotation": [ "Rewriting_medium" ], "instruction": "Improve the logical flow of the last half of the paragraph.", "annotator": "annotator_07" }
CVRUl83zah
I75TtW0V7
4
[ { "text": "Because g is permutation-invariant, any ordering for the elements in Y has the same value for L ." }, { "text": "In the forward pass of the model, the arg min is approximated by running a fixed number of gradient descent steps." }, { "text": "In the backward pass,Zhang et al. (2019) backpropagate through the gradient descent iterations in order to compute the gradients of the training objective with respect to the input vector z and the parameters θ of the encoder. " } ]
[ { "text": "Because g is permutation-invariant, any ordering for the elements in Y has the same value for L ." }, { "text": "In the forward pass of the model, the arg min is approximated by running a fixed number of gradient descent steps." }, { "text": "In the backward pass, the goal is to differentiate Equation 7 with respect to the input vector z and the parameters θ of the encoder. To do this, Zhang et al. (2019) unroll the gradient descent applied in the forward pass and backpropagate through each gradient descent step." } ]
cW17DDjQa_.6iDdN7-bYz.00
We propose an algorithm to solve above optimization problem (3). The optimization problem contains non-continuous indicator function in constraint (3d, 3c), and non-convex constraint (3b), which make the problem difficult to solve. Therefore, we first reformulate the inequality constraints as soft regularizations and introduce Minimax optimization with dual variables. Then we tackle the non-differentiable objective function using self-defined numerical differentiation. At last, we summarize all the optimization details into a gradient-based optimization formulation.
To address the optimization problem (3), we adopts the alternating direction method of multipliers (ADMM) for the reformulation. In details, the optimization problem contains non-continuous indicator function in constraint (3c, 3d), and non-convex constraint (3b), which make the problem difficult to solve. Therefore, we first reformulate the inequality constraints as soft regularizations and introduce Minimax optimization with dual variables. Then we tackle the non-differentiable objective function using self-defined numerical differentiation. At last, we summarize all the optimization details into a gradient-based optimization formulation.
{ "annotation": [ "Development" ], "instruction": "", "annotator": "annotator_03" }
{ "annotation": [ "Development" ], "instruction": "", "annotator": "annotator_07" }
cW17DDjQa_
6iDdN7-bYz
0
[ { "text": "We propose an algorithm to solve above optimization problem (3)." }, { "text": "The optimization problem contains non-continuous indicator function in constraint (3d, 3c), and non-convex constraint (3b), which make the problem difficult to solve." }, { "text": "Therefore, we first reformulate the inequality constraints as soft regularizations and introduce Minimax optimization with dual variables." }, { "text": "Then we tackle the non-differentiable objective function using self-defined numerical differentiation." }, { "text": "At last, we summarize all the optimization details into a gradient-based optimization formulation." } ]
[ { "text": "To address the optimization problem (3), we adopts the alternating direction method of multipliers (ADMM) for the reformulation." }, { "text": "In details, the optimization problem contains non-continuous indicator function in constraint (3c, 3d), and non-convex constraint (3b), which make the problem difficult to solve." }, { "text": "Therefore, we first reformulate the inequality constraints as soft regularizations and introduce Minimax optimization with dual variables." }, { "text": "Then we tackle the non-differentiable objective function using self-defined numerical differentiation." }, { "text": "At last, we summarize all the optimization details into a gradient-based optimization formulation." } ]
33RNh69fYq.kMvWVl725x.02
Setup . Anomaly detection aims to detect whether an image contains anomalous regions. Theperformance is evaluated on MVTec-AD [3]. The image size is selected as 224 × 224 , and the size forresizing feature maps is set as 14 × 14 . The feature maps from stage-1 to stage-4 of EfficientNet-b4[37] respectively have the channel of 24, 32, 56, and 160, and they are resized and concatenated together to form a 272-channel feature map. The reduced channel dimension is set as 256. AdamWoptimizer [18] with weight decay 1 × 10 − 4 is used for training. Our model is trained for 1000 epochs on 8 GPUs (NVIDIA Tesla V100 16GB) with batch size 64. The learning rate is 1 × 10 − 4 initially, and dropped by 0.1 after 800 epochs. The neighbor size is set as 7 × 7. The jittering scale and jitteringprobability are chosen as 20 and 1, respectively. The evaluation is run with 5 random seeds.
Setup . Anomaly detection aims to detect whether an image contains anomalous regions. Theperformance is evaluated on MVTec-AD [4]. The image size is selected as 224 × 224 , and the size forresizing feature maps is set as 14 × 14 . The feature maps from stage-1 to stage-4 of EfficientNet-b4[39] are resized and concatenated together to form a 272-channel feature map. The reduced channel dimension is set as 256. AdamW optimizer [20] with weight decay 1 × 10 − 4 is used. Our model is trained for 1000 epochs on 8 GPUs (NVIDIA Tesla V100 16GB) with batch size 64. The learning rate is 1 × 10 − 4 initially, and dropped by 0.1 after 800 epochs. The neighbor size, jittering scale, andjittering probability are set as 7 × 7, 20, and 1, respectively. The evaluation is run with 5 random seeds.
{ "annotation": [ "Concision" ], "instruction": "Remove some details on model training to make the paragraph more concise.", "annotator": "annotator_04" }
{ "annotation": [ "Concision" ], "instruction": "Remove unnecessary details to shorten this paragraph.", "annotator": "annotator_07" }
33RNh69fYq
kMvWVl725x
2
[ { "text": "Setup ." }, { "text": "Anomaly detection aims to detect whether an image contains anomalous regions." }, { "text": "Theperformance is evaluated on MVTec-AD [3]." }, { "text": "The image size is selected as 224 × 224 , and the size forresizing feature maps is set as 14 × 14 ." }, { "text": "The feature maps from stage-1 to stage-4 of EfficientNet-b4[37] respectively have the channel of 24, 32, 56, and 160, and they are resized and concatenated together to form a 272-channel feature map." }, { "text": "The reduced channel dimension is set as 256." }, { "text": "AdamWoptimizer [18] with weight decay 1 × 10 − 4 is used for training." }, { "text": "Our model is trained for 1000 epochs on 8 GPUs (NVIDIA Tesla V100 16GB) with batch size 64." }, { "text": "The learning rate is 1 × 10 − 4 initially, and dropped by 0.1 after 800 epochs." }, { "text": "The neighbor size is set as 7 × 7. The jittering scale and jitteringprobability are chosen as 20 and 1, respectively." }, { "text": "The evaluation is run with 5 random seeds." } ]
[ { "text": "Setup ." }, { "text": "Anomaly detection aims to detect whether an image contains anomalous regions." }, { "text": "Theperformance is evaluated on MVTec-AD [4]." }, { "text": "The image size is selected as 224 × 224 , and the size forresizing feature maps is set as 14 × 14 ." }, { "text": "The feature maps from stage-1 to stage-4 of EfficientNet-b4[39] are resized and concatenated together to form a 272-channel feature map." }, { "text": "The reduced channel dimension is set as 256." }, { "text": "AdamW optimizer [20] with weight decay 1 × 10 − 4 is used." }, { "text": "Our model is trained for 1000 epochs on 8 GPUs (NVIDIA Tesla V100 16GB) with batch size 64." }, { "text": "The learning rate is 1 × 10 − 4 initially, and dropped by 0.1 after 800 epochs." }, { "text": "The neighbor size, jittering scale, andjittering probability are set as 7 × 7, 20, and 1, respectively." }, { "text": "The evaluation is run with 5 random seeds." } ]
MXi6uEx-hp.rdZfFcGyf9.21
In the experiment of Fig. 5, we found that in RecSim the relation of items is easy to model such that AGILE could not outperform the ablations whereas AGILE outperformed the ablations in CREATE and Grid World by correctly utilizing the action relation in decision-making. Then, we hypothesized that the existence of the complex relations between actionsin the environment (e.g., tools and activators in CREATE) injects the complex action relations in the environment. For instance, an appropriate pair of an activator and a tool to use in CREATE depends on the situation. To this end, we implemented the pre-defined pairings among items in RecSim such that clicks can only happen when the correct pairs of items are recommended. Since action relations are complex, AGILE is expected to outperform the ablations. Figure 14 shows that AGILE beats the baselines and in Fig. AGILE slightly but consistently outperforms the ablations. In Fig.16, AGILE outperformed AGILEGCN shows that a GAT is capable of modeling the action relations correctly and AGILE converging faster than AGILE Only Action shows that the intermediate list information is crucial to efficiently learn to attend the other half in the pairing of items.
In the experiment of Fig. 5, we found that in RecSim, the relation of items is easy to model such that AGILE could not outperform the ablations. In contrast, AGILE outperformed the ablations in CREATE and Grid World by correctly utilizing the action relation in decision-making. We hypothesize that these environments require complex relations between actions (e.g., tools and activators in CREATE). To this end, we implement the pre-defined pairings among items in RecSim such that clicks can only happen when the correct pairs of items are recommended. Since action relations are complex, AGILE is expected to outperform the ablations. Figure 14 shows that AGILE beats the baselines and in Fig.15 AGILE slightly but consistently outperforms the ablations. In Fig.16, AGILE outperforming AGILE-GCN shows that a GAT is capable of modeling the action relations correctly. AGILE converges faster than AGILE Only-Action. This shows that the state and the partially constructed list are crucial to learning to attend the other half in pairing items efficiently.
{ "annotation": [ "Rewriting_medium", "Content_deletion" ], "instruction": "Make this paragraph shorter and easier to understand", "annotator": "annotator_10" }
{ "annotation": [ "Concision" ], "instruction": "Simplify the less essential ideas of the paragraph to make it more concise.", "annotator": "annotator_03" }
MXi6uEx-hp
rdZfFcGyf9
21
[ { "text": "In the experiment of Fig. 5, we found that in RecSim the relation of items is easy to model such that " }, { "text": "AGILE could not outperform the ablations whereas AGILE outperformed the ablations in CREATE and Grid World by correctly utilizing the action relation in decision-making." }, { "text": "Then, we hypothesized that the existence of the complex relations between actionsin the environment (e.g., tools and activators in CREATE) injects the complex action relations in the environment. For instance, an appropriate pair of an activator and a tool to use in CREATE depends on the situation." }, { "text": "To this end, we implemented the pre-defined pairings among items in RecSim such that clicks can only happen when the correct pairs of items are recommended." }, { "text": "Since action relations are complex, AGILE is expected to outperform the ablations." }, { "text": "Figure 14 shows that AGILE beats the baselines and in Fig. AGILE slightly but consistently outperforms the ablations." }, { "text": "In Fig.16, AGILE outperformed AGILEGCN shows that a GAT is capable of modeling the action relations correctly and AGILE converging faster than AGILE Only Action shows that the intermediate list information is crucial to efficiently learn to attend the other half in the pairing of items." } ]
[ { "text": "In the experiment of Fig. 5, we found that in RecSim, the relation of items is easy to model such that AGILE could not outperform the ablations." }, { "text": "In contrast, AGILE outperformed the ablations in CREATE and Grid World by correctly utilizing the action relation in decision-making." }, { "text": "We hypothesize that these environments require complex relations between actions (e.g., tools and activators in CREATE)." }, { "text": "To this end, we implement the pre-defined pairings among items in RecSim such that clicks can only happen when the correct pairs of items are recommended." }, { "text": "Since action relations are complex, AGILE is expected to outperform the ablations." }, { "text": "Figure 14 shows that AGILE beats the baselines and in Fig.15 AGILE slightly but consistently outperforms the ablations." }, { "text": "In Fig.16, AGILE outperforming AGILE-GCN shows that a GAT is capable of modeling the action relations correctly. AGILE converges faster than AGILE Only-Action. This shows that the state and the partially constructed list are crucial to learning to attend the other half in pairing items efficiently." } ]
NwOG107NKJ.0PPYM22rdB.02
Influence on the Github platform can be quantified by the number of followers, stars, mentions, quotes, and up-votes received from other users. Social network metrics such as centrality indicate how broadly influence extends (e.g. geographic interest) Weber and Luo [2014]. Other features includeproject volume, documentation volume, presence ofsupporting files, codevolume and standardlibrary usage. The popularity velocity can be measured by (Total_Stars / project_life). Few studies have examined influence of user-popularity, repo-popularity, and triadic relationships in dynamic graphs.
Influence on the Github platform can be quantified by the number of followers, stars, mentions, quotes, and up-votes received from other users. Social network metrics such as centrality indicate how broadly influence extends (e.g. geographic interest) [Weber and Luo, 2014]. Other features include project size, file volume, critical folder, lines of code and calling of basic functions. The popularity rate can be measured by (Total_Stars / project_life). Few studies have examined influence of user-popularity, repo-popularity, and triadic relationships in dynamic graphs.
{ "annotation": [ "Rewriting_light" ], "instruction": "Make the use of a citation in the second sentence correct. Update the third sentence.", "annotator": "annotator_10" }
{ "annotation": [ "Rewriting_light" ], "instruction": "Improve the readability of this paragraph.", "annotator": "annotator_03" }
NwOG107NKJ
0PPYM22rdB
2
[ { "text": "Influence on the Github platform can be quantified by the number of followers, stars, mentions, quotes, and up-votes received from other users." }, { "text": "Social network metrics such as centrality indicate how broadly influence extends (e.g. geographic interest)" }, { "text": "Weber and Luo [2014]." }, { "text": "Other features includeproject volume, documentation volume, presence ofsupporting files, codevolume and standardlibrary usage." }, { "text": "The popularity velocity can be measured by (Total_Stars / project_life)." }, { "text": "Few studies have examined influence of user-popularity, repo-popularity, and triadic relationships in dynamic graphs." } ]
[ { "text": "Influence on the Github platform can be quantified by the number of followers, stars, mentions, quotes, and up-votes received from other users." }, { "text": "Social network metrics such as centrality indicate how broadly influence extends (e.g. geographic interest)" }, { "text": "[Weber and Luo, 2014]." }, { "text": "Other features include project size, file volume, critical folder, lines of code and calling of basic functions." }, { "text": "The popularity rate can be measured by (Total_Stars / project_life)." }, { "text": "Few studies have examined influence of user-popularity, repo-popularity, and triadic relationships in dynamic graphs." } ]
ByZyHzZC-.HktKf7-AW.00
The relationship between stochastic gradient descent (SGD) and sampling a posterior distribution via stochastic Langevin methods has been the subject of discussion in a number of papers (Chen et al., 2014; Ding et al., 2014; Vollmer et al., 2015; Welling & Teh, 2011; Shang et al., 2015; Sato & Nakagawa, 2014). In particular, Mandt et al. (2017) describe the dynamics of stochastic gradient descent (SGD) as a stochastic process that can be divided into three distinct phases. In the first phase, weights diffuse and move away from the initialization. In the second phase the gradient magnitude dominates the noise in the gradient estimate. In the final phase, the weights are near the optimum. (Shwartz-Ziv & Tishby, 2017) make related observations from an information theoretic point of view and suggest the diffusion behavior of the parameters in the last phase leads to the minimization of mutual information between the input and hidden representation. In a similar vein, we relate the SGD dynamics to the stationary distribution of the stochastic differential equation. Our derivation bears similarity with Mandt et al. However, while Mandt et al. (2017) aims at performing approximate Bayesian inference, our end goal is to analyse the stationary distribution reached by SGD.
The relationship between stochastic gradient descent (SGD) and sampling a posterior distribution via stochastic Langevin methods has been the subject of discussion in a number of papers (Chen et al., 2014; Ding et al., 2014; Vollmer et al., 2015; Welling & Teh, 2011; Shang et al., 2015; Sato & Nakagawa, 2014). In particular, Mandt et al. (2017) describe the dynamics of stochastic gradient descent (SGD) as a stochastic process that can be divided into three distinct phases. In the first phase, weights diffuse and move away from the initialization. In the second phase the gradient magnitude dominates the noise in the gradient estimate. In the final phase, the weights are near the optimum. (Shwartz-Ziv & Tishby, 2017) make related observations from an information theoretic point of view and suggest the diffusion behavior of the parameters in the last phase leads to the minimization of mutual information between the input and hidden representation. In a similar vein, we relate the SGD dynamics to the stationary distribution of the stochastic differential equation. Our derivation bears similarity with Mandt et al. However, while Mandt et al. (2017) study SGD as an approximate Bayesian inference method in the final phase of optimization in a locally convex setting, our end goal is to analyse the stationary distribution over the entire parameter space reached by SGD. Further, our analysis allows us to compare the probability of SGD ending up in one minima over another, which is novel in our case.
{ "annotation": [ "Development", "Content_addition" ], "instruction": "", "annotator": "annotator_09" }
{ "annotation": [ "Content_addition" ], "instruction": "", "annotator": "annotator_07" }
ByZyHzZC-
HktKf7-AW
0
[ { "text": "The relationship between stochastic gradient descent (SGD) and sampling a posterior distribution via stochastic Langevin methods has been the subject of discussion in a number of papers (Chen et al., 2014;" }, { "text": "Ding et al., 2014; Vollmer et al., 2015; Welling & Teh, 2011; Shang et al., 2015; Sato & Nakagawa, 2014)." }, { "text": "In particular, Mandt et al." }, { "text": "(2017) describe the dynamics of stochastic gradient descent (SGD) as a stochastic process that can be divided into three distinct phases." }, { "text": "In the first phase, weights diffuse and move away from the initialization." }, { "text": "In the second phase the gradient magnitude dominates the noise in the gradient estimate." }, { "text": "In the final phase, the weights are near the optimum." }, { "text": "(Shwartz-Ziv & Tishby, 2017) make related observations from an information theoretic point of view and suggest the diffusion behavior of the parameters in the last phase leads to the minimization of mutual information between the input and hidden representation." }, { "text": "In a similar vein, we relate the SGD dynamics to the stationary distribution of the stochastic differential equation." }, { "text": "Our derivation bears similarity with Mandt et al." }, { "text": "However, while Mandt et al." }, { "text": "(2017) aims at performing approximate Bayesian inference, our end goal is to analyse the stationary distribution reached by " }, { "text": "SGD." } ]
[ { "text": "The relationship between stochastic gradient descent (SGD) and sampling a posterior distribution via stochastic Langevin methods has been the subject of discussion in a number of papers (Chen et al., 2014;" }, { "text": "Ding et al., 2014; Vollmer et al., 2015; Welling & Teh, 2011; Shang et al., 2015; Sato & Nakagawa, 2014)." }, { "text": "In particular, Mandt et al." }, { "text": "(2017) describe the dynamics of stochastic gradient descent (SGD) as a stochastic process that can be divided into three distinct phases." }, { "text": "In the first phase, weights diffuse and move away from the initialization." }, { "text": "In the second phase the gradient magnitude dominates the noise in the gradient estimate." }, { "text": "In the final phase, the weights are near the optimum." }, { "text": "(Shwartz-Ziv & Tishby, 2017) make related observations from an information theoretic point of view and suggest the diffusion behavior of the parameters in the last phase leads to the minimization of mutual information between the input and hidden representation." }, { "text": "In a similar vein, we relate the SGD dynamics to the stationary distribution of the stochastic differential equation." }, { "text": "Our derivation bears similarity with Mandt et al." }, { "text": "However, while Mandt et al." }, { "text": "(2017) study SGD as an approximate Bayesian inference method in the final phase of optimization in a locally convex setting, our end goal is to analyse the stationary distribution over the entire parameter space reached by SGD." }, { "text": "Further, our analysis allows us to compare the probability of SGD ending up in one minima over another, which is novel in our case." } ]
7_CwM-IzWd.zcm6f5HDI.05
During training, the uni-modal branch largely focuses on the associated modality. The fusion modules generatecross-modal context information from the uni-modal branches and pass it back to them. Both ˆ y 0 and ˆ y 1 depend on information from both modalities. We end up with two functions, f 0 and f 1 , corresponding to the two uni-modal branches:
During training, each uni-modal branch largely focuses on its associate input modality. The fusion modules generate context representation using all modalities and feed such information to the unimodal branches. Both ˆ y 0 and ˆ y 1 depend on information from both modalities. We end up with two functions, f 0 and f 1 , corresponding to the two uni-modal branches:
{ "annotation": [ "Rewriting_medium" ], "instruction": "Make the sentence understandable.", "annotator": "annotator_08" }
{ "annotation": [ "Rewriting_light" ], "instruction": "Improve the wording of this paragraph.", "annotator": "annotator_02" }
7_CwM-IzWd
zcm6f5HDI
5
[ { "text": "During training, the uni-modal branch largely focuses on the associated modality." }, { "text": "The fusion modules generatecross-modal context information from the uni-modal branches and pass it back to them." }, { "text": "Both ˆ y 0 and ˆ y 1 depend on information from both modalities." }, { "text": "We end up with two functions, f 0 and f 1 , corresponding to the two uni-modal branches:" } ]
[ { "text": "During training, each uni-modal branch largely focuses on its associate input modality." }, { "text": "The fusion modules generate context representation using all modalities and feed such information to the unimodal branches." }, { "text": "Both ˆ y 0 and ˆ y 1 depend on information from both modalities." }, { "text": "We end up with two functions, f 0 and f 1 , corresponding to the two uni-modal branches:" } ]
eyheq0JfG.lDLi0nFVcl.00
For example, using mixup on top of random scaling and cropping improves the results by 0.4%. This suggests that thanks to the proposed methods, we are getting closer than ever to the capacity of a real-valued model (which is amenable to stronger augmentations).
For example, using mixup on top of random scaling and cropping improves the results by 0.4%. In comparison, when we trained Real-to-Bin Martinez et al. (2020) with mixup, the accuracy dropped by 0.25% for Stage I, and 0.8% for Stage II. This suggests that, thanks to the proposed methods, we are getting closer than ever to the capacity of a real-valued model (which is amenable to stronger augmentations).
{ "annotation": [ "Content_addition", "Development" ], "instruction": "", "annotator": "annotator_07" }
null
eyheq0JfG
lDLi0nFVcl
0
[ { "text": "For example, using mixup on top of random scaling and cropping improves the results by 0.4%." }, { "text": "" }, { "text": "" }, { "text": "This suggests that thanks to the proposed methods, we are getting closer than ever to the capacity of a real-valued model (which is amenable to stronger augmentations)." } ]
[ { "text": "For example, using mixup on top of random scaling and cropping improves the results by 0.4%." }, { "text": "In comparison, when we trained Real-to-Bin Martinez et al." }, { "text": "(2020) with mixup, the accuracy dropped by 0.25% for Stage I, and 0.8% for Stage II." }, { "text": "This suggests that, thanks to the proposed methods, we are getting closer than ever to the capacity of a real-valued model (which is amenable to stronger augmentations)." } ]
CVRUl83zah.I75TtW0V7.05
Equivariance of DSPN We now discuss the particular form of equivariance that DSPN takes, which has not been done before in the context of multisets. The gradient of the permutation-invariant encoder g is always multiset-equivariant, but depending on the encoder, it is not necessarily setequivariant. Zhang et al. find that FSPool-based encoders (Zhang et al., 2020) perform by far the best among the ones they have tried. With this type of encoder, DSPN becomes exclusively multiset-equivariant . This is due to the use of numerical sorting in FSPool: the Jacobian of sorting is exclusively multiset-equivariant. We prove this in Appendix A.
Equivariance of DSPN. We now discuss the particular form of equivariance that DSPN takes, which has not been done before in the context of multisets. The gradient of the permutation-invariant encoder g with respect to the set input Y is always multiset-equivariant, but depending on the encoder, it is not necessarily set-equivariant. Zhang et al. find that FSPool-based encoders (Zhang et al., 2020) achieved by far the best results among the encoders they have tried. With FSPool, DSPN becomes exclusively multiset-equivariant to its initialization Y 0 . This is due to the use of numerical sorting in FSPool: the Jacobian of sorting is exclusively multiset-equivariant (Appendix A).
{ "annotation": [ "Development" ], "instruction": "", "annotator": "annotator_09" }
{ "annotation": [ "Rewriting_light", "Development" ], "instruction": "", "annotator": "annotator_07" }
CVRUl83zah
I75TtW0V7
5
[ { "text": "Equivariance of DSPN We now discuss the particular form of equivariance that DSPN takes, which has not been done before in the context of multisets." }, { "text": "The gradient of the permutation-invariant encoder g is always multiset-equivariant, but depending on the encoder, it is not necessarily setequivariant." }, { "text": "Zhang et al." }, { "text": "find that FSPool-based encoders (Zhang et al., 2020) perform by far the best among the ones they have tried." }, { "text": "With this type of encoder, DSPN becomes exclusively multiset-equivariant ." }, { "text": "This is due to the use of numerical sorting in FSPool: the Jacobian of sorting is exclusively multiset-equivariant. We prove this in Appendix A." } ]
[ { "text": "Equivariance of DSPN. We now discuss the particular form of equivariance that DSPN takes, which has not been done before in the context of multisets." }, { "text": "The gradient of the permutation-invariant encoder g with respect to the set input Y is always multiset-equivariant, but depending on the encoder, it is not necessarily set-equivariant." }, { "text": "Zhang et al." }, { "text": "find that FSPool-based encoders (Zhang et al., 2020) achieved by far the best results among the encoders they have tried." }, { "text": "With FSPool, DSPN becomes exclusively multiset-equivariant to its initialization Y 0 ." }, { "text": "This is due to the use of numerical sorting in FSPool: the Jacobian of sorting is exclusively multiset-equivariant (Appendix A)." } ]
aomiOZE_m2.rxb2TiQ6bq.05
Lightweight Image SR Models. Recent years have been rising interest in investigating lightweight image SR models. These approaches try to design lightweight architectures, which mainly take advantage of recursive learning and channel splitting. Kim et al . firstly introduced recursive learning in DRCN to decrease model size (Kim et al., 2016b). Ahn et al . designed a cascading mechanism upon a residual network in CARN (Ahn et al., 2018). Hui et al . proposed a lightweight information multi-distillation network (IMDN) (Hui et al., 2019). Luo et al . designed the lattice block with butterfly structures (Luo et al., 2020). Recently, neural architecture search was introduced for image SR in FALSR (Chu et al., 2019a). Besides, model compression techniques, like knowledge distillation, have been investigated for image SR. He et al . proposed knowledge distillation based feature-affinity for efficient image SR (He et al., 2020). Lee et al . trained a teacher network to distill its knowledge to a student (Lee et al., 2020). Although those lightweight networks have achieved great progress, we still need to investigate deeper for more efficient image SR models.
Lightweight Image SR Models. Recent years have been rising interest in investigating lightweight image SR models. These approaches try to design lightweight architectures, which mainly take advantage of recursive learning and channel splitting. Kim et al . firstly decreased parameter number by utilizing recursive learning in DRCN (Kim et al., 2016b). Ahn et al . proposed CARN by designing a cascading mechanism upon a residual network (Ahn et al., 2018). Hui et al . proposed a lightweight information multi-distillation network (IMDN) (Hui et al., 2019). Luo et al . designed the lattice block with butterfly structures (Luo et al., 2020). Recently, neural architecture search was applied for image SR, like FALSR (Chu et al., 2019a). Also, model compression techniques have been explored for image SR. He et al . proposed knowledge distillation based feature-affinity for efficient image SR (He et al., 2020). Lee et al . distilled knowledge from a larger teacher network to a student one (Lee et al., 2020). Those lightweight image SR models have obtained great progress, but we still need to investigate deeper for more efficient image SR models.
{ "annotation": [ "Rewriting_medium", "Concision" ], "instruction": "Can you make my paragraph more concise?", "annotator": "annotator_09" }
{ "annotation": [ "Concision" ], "instruction": "Use shorter formulations and more direct language to make the paragraph more concise.", "annotator": "annotator_04" }
aomiOZE_m2
rxb2TiQ6bq
5
[ { "text": "Lightweight Image SR Models." }, { "text": "Recent years have been rising interest in investigating lightweight image SR models." }, { "text": "These approaches try to design lightweight architectures, which mainly take advantage of recursive learning and channel splitting." }, { "text": "Kim et al ." }, { "text": "firstly introduced recursive learning in DRCN to decrease model size (Kim et al., 2016b)." }, { "text": "Ahn et al ." }, { "text": "designed a cascading mechanism upon a residual network in CARN (Ahn et al., 2018)." }, { "text": "Hui et al ." }, { "text": "proposed a lightweight information multi-distillation network (IMDN) (Hui et al., 2019)." }, { "text": "Luo et al . designed the lattice block with butterfly structures (Luo et al., 2020)." }, { "text": "Recently, neural architecture search was introduced for image SR in FALSR (Chu et al., 2019a)." }, { "text": "Besides, model compression techniques, like knowledge distillation, have been investigated for image SR." }, { "text": "He et al ." }, { "text": "proposed knowledge distillation based feature-affinity for efficient image SR (He et al., 2020)." }, { "text": "Lee et al ." }, { "text": "trained a teacher network to distill its knowledge to a student (Lee et al., 2020)." }, { "text": "Although those lightweight networks have achieved great progress, we still need to investigate deeper for more efficient image SR models." } ]
[ { "text": "Lightweight Image SR Models." }, { "text": "Recent years have been rising interest in investigating lightweight image SR models." }, { "text": "These approaches try to design lightweight architectures, which mainly take advantage of recursive learning and channel splitting." }, { "text": "Kim et al ." }, { "text": "firstly decreased parameter number by utilizing recursive learning in DRCN (Kim et al., 2016b)." }, { "text": "Ahn et al ." }, { "text": "proposed CARN by designing a cascading mechanism upon a residual network (Ahn et al., 2018)." }, { "text": "Hui et al ." }, { "text": "proposed a lightweight information multi-distillation network (IMDN) (Hui et al., 2019)." }, { "text": "Luo et al . designed the lattice block with butterfly structures (Luo et al., 2020)." }, { "text": "Recently, neural architecture search was applied for image SR, like FALSR (Chu et al., 2019a)." }, { "text": "Also, model compression techniques have been explored for image SR." }, { "text": "He et al ." }, { "text": "proposed knowledge distillation based feature-affinity for efficient image SR (He et al., 2020)." }, { "text": "Lee et al ." }, { "text": "distilled knowledge from a larger teacher network to a student one (Lee et al., 2020)." }, { "text": "Those lightweight image SR models have obtained great progress, but we still need to investigate deeper for more efficient image SR models." } ]
gIp_U0JsFa.T3RdAsTpzN.00
Here, we present two case studies from the healthcare domain in dermatology and in clinical risk prediction using Electronic Health Records in which fairness does not transfer (Fig. 1). As per [32], we consider the setting of ‘dataset shift’, whereby a model is developed on the source data and tested on the target data 6 , which is a common setting in medical applications [78, 84, 32]. 5 A Algorithm 1 (Conditional) independence testing assessing the nature of shift S on a single variable U ∈ G .
Here, we present two case studies from the healthcare domain in dermatology and in clinical risk prediction using Electronic Health Records in which fairness does not transfer (Fig. 1). As per [27], we consider the setting of ‘dataset shift’, whereby a model is developed on the source data and tested on the target data 6 , which is a common setting in medical applications [66, 71, 27].
{ "annotation": [ "Unusable" ], "instruction": "", "annotator": "annotator_03" }
{ "annotation": [ "Unusable" ], "instruction": "", "annotator": "annotator_07" }
gIp_U0JsFa
T3RdAsTpzN
0
[ { "text": "Here, we present two case studies from the healthcare domain in dermatology and in clinical risk prediction using Electronic Health Records in which fairness does not transfer (Fig. 1)." }, { "text": "As per [32], we consider the setting of ‘dataset shift’, whereby a model is developed on the source data and tested on the target data 6 , which is a common setting in medical applications [78, 84, 32]. 5 A Algorithm 1 (Conditional) independence testing assessing the nature of shift S on a single variable U ∈ G ." } ]
[ { "text": "Here, we present two case studies from the healthcare domain in dermatology and in clinical risk prediction using Electronic Health Records in which fairness does not transfer (Fig. 1)." }, { "text": "As per [27], we consider the setting of ‘dataset shift’, whereby a model is developed on the source data and tested on the target data 6 , which is a common setting in medical applications [66, 71, 27]." } ]
7_CwM-IzWd.zcm6f5HDI.22
We report means and standard deviations of the models’ test accuracy in Table 1.[- -] The guided algorithm improves the models’ generalization performance over the vanilla algorithm in all four cases.It also outperforms the random algorithm, with the exception of ModelNet40, where their performances are very close.
We report means and standard deviations of the models’ test accuracies in Table 1.[- -] 3 RUBi does not show consistent improvement across tasks compared to the vanilla algorithm. The guided algorithm improves the models’ generalization performance over all three other methods in all four cases.
{ "annotation": [ "Content_substitution", "Development" ], "instruction": "", "annotator": "annotator_06" }
{ "annotation": [ "Content_substitution", "Rewriting_medium" ], "instruction": "", "annotator": "annotator_07" }
7_CwM-IzWd
zcm6f5HDI
22
[ { "text": "We report means and standard deviations of the models’ test accuracy in Table 1.[-\n-]" }, { "text": " The guided algorithm improves the models’ generalization performance over the vanilla algorithm in all four cases.It also outperforms the random algorithm, with the exception of ModelNet40, where their performances are very close." } ]
[ { "text": "We report means and standard deviations of the models’ test accuracies in Table 1.[-\n-]" }, { "text": "3 RUBi does not show consistent improvement across tasks compared to the vanilla algorithm. The guided algorithm improves the models’ generalization performance over all three other methods in all four cases." } ]
S1-LZxvKX.rJ009I8RX.03
Most closely related to our work are dynamic sparse reparameterization techniques that emerged only recently. Like ours, these methods adaptively alter, by certain heuristic rules, reparameterization during training. Sparse evolutionary training (Mocanu et al., 2018) used magnitude-based pruning and random growth at the end of each training epoch. NeST (Dai et al., 2017; 2018) iteratively grew and pruned parameters and neurons during training; parameter growth was guided by gradient and pruning by magnitude. Deep rewiring (Bellec et al., 2017) combined sparse reparameterization with stochastic parameter updates for training. These methods were mostly concerned with sparsifying fully connected layers and applied to relatively small and shallow networks. As will be discussed in Section 5, our method, more scalable and computationally efficient than these previous approaches, fully closed the generalization gap for the first time between training a compact sparse network and compression of a large deep CNN.
Most closely related to our work are dynamic sparse reparameterization techniques that emerged only recently. Like ours, these methods adaptively alter, by certain heuristic rules, reparameterization during training. Sparse evolutionary training (Mocanu et al., 2018) used magnitude-based pruning and random growth at the end of each training epoch. NeST (Dai et al., 2017; 2018) iteratively grew and pruned parameters and neurons during training; parameter growth was guided by gradient and pruning by magnitude. Deep rewiring (Bellec et al., 2017) combined sparse reparameterization with stochastic parameter updates for training. These methods were mostly concerned with sparsifying fully connected layers and applied to relatively small and shallow networks. We show that the method we propose in this paper is more scalable and computationally efficient than these previous approaches, while achieving better performance on deep convolutional networks.
{ "annotation": [ "Concision" ], "instruction": "Edit the last sentence of this paragraph to make it shorter and remove the reference to Section 5.", "annotator": "annotator_02" }
{ "annotation": [ "Concision" ], "instruction": "Rewrite the last sentence to make it more concise.", "annotator": "annotator_07" }
S1-LZxvKX
rJ009I8RX
3
[ { "text": "Most closely related to our work are dynamic sparse reparameterization techniques that emerged only recently." }, { "text": "Like ours, these methods adaptively alter, by certain heuristic rules, reparameterization during training." }, { "text": "Sparse evolutionary training (Mocanu et al., 2018) used magnitude-based pruning and random growth at the end of each training epoch." }, { "text": "NeST (Dai et al., 2017; 2018) iteratively grew and pruned parameters and neurons during training; parameter growth was guided by gradient and pruning by magnitude." }, { "text": "Deep rewiring (Bellec et al., 2017) combined sparse reparameterization with stochastic parameter updates for training." }, { "text": "These methods were mostly concerned with sparsifying fully connected layers and applied to relatively small and shallow networks." }, { "text": "As will be discussed in Section 5, our method, more scalable and computationally efficient than these previous approaches, fully closed the generalization gap for the first time between training a compact sparse network and compression of a large deep CNN." } ]
[ { "text": "Most closely related to our work are dynamic sparse reparameterization techniques that emerged only recently." }, { "text": "Like ours, these methods adaptively alter, by certain heuristic rules, reparameterization during training." }, { "text": "Sparse evolutionary training (Mocanu et al., 2018) used magnitude-based pruning and random growth at the end of each training epoch." }, { "text": "NeST (Dai et al., 2017; 2018) iteratively grew and pruned parameters and neurons during training; parameter growth was guided by gradient and pruning by magnitude." }, { "text": "Deep rewiring (Bellec et al., 2017) combined sparse reparameterization with stochastic parameter updates for training." }, { "text": "These methods were mostly concerned with sparsifying fully connected layers and applied to relatively small and shallow networks." }, { "text": "We show that the method we propose in this paper is more scalable and computationally efficient than these previous approaches, while achieving better performance on deep convolutional networks." } ]
XXtXW925iG.JHwYPw52XHb.00
In the previous section, we showed that the limiting diffusion exists when ⌘ and go to zero witha fixed ratio. However, the situation is more complicated in the general case, i.e. , the intrinsic LR ⌘ ! 0 while ⌘ varies and is only upper bounded by some constant. A concrete example is ⌘ ! 0and being fixed.
In the previous section, we showed that the limiting diffusion exists when η and λ go to zero witha fixed ratio. However, the situation is more complicated in the general case, i.e. , the intrinsic LR ηλ → 0 while ηλ is upper bounded by some constant. A concrete example is η → 0 and λ beingfixed.
{ "annotation": [ "Unusable" ], "instruction": "", "annotator": "annotator_07" }
null
XXtXW925iG
JHwYPw52XHb
0
[ { "text": "In the previous section, we showed that the limiting diffusion exists when ⌘ and \u0000 go to zero witha fixed ratio." }, { "text": "However, the situation is more complicated in the general case, i.e. , the intrinsic LR ⌘\u0000 ! 0 while ⌘\u0000 varies and is only upper bounded by some constant." }, { "text": "A concrete example is ⌘ ! 0and \u0000 being fixed." } ]
[ { "text": "In the previous section, we showed that the limiting diffusion exists when η and λ go to zero witha fixed ratio." }, { "text": "However, the situation is more complicated in the general case, i.e. , the intrinsic LR ηλ → 0 while ηλ is upper bounded by some constant." }, { "text": "A concrete example is η → 0 and λ beingfixed." } ]
aFWzpdwEna.MCecpd3utK.00
In this paper, we find that model-based offline RL’s performance significantly relies on the trade-off between model return and its uncertainty, while determining the optimal trade-off is usually challenging or intractable without access to the environment that the learned policy will be deployed to. To address this problem, we study a bi-objective formulation for model-based offline RL and develop an efficient method, Pareto policy pool (P3), that produces a pool of diverse policies on the Pareto front performing different levels of trade-offs, providing the flexibility to select the best policy for each realistic environment from the pool. P3 provides a simple and principal approach that addresses the two major challenges in model-based offline RL: “model exploitation” and generalization to different unseen states. On the D4RL benchmark, P3 substantially outperforms several recent baseline methods over multiple tasks and shows the potentiality of learning a generalizable policy when the quality of pre-collected experiences is low.
In this paper, we find that model-based offline RL’s performance significantly relies on the trade-off between model return and its uncertainty, while determining the optimal trade-off is challenging without access to the realistic environment. To address the problem, we study a bi-objective formulation for model-based offline RL and develop an efficient method that produces a pool of diverse policies on the Pareto front performing different levels of trade-offs, which provides flexibility to select the best policy in the inference stage. We extensively validate the efficacy of our method on the D4RL benchmark, where ours largely outperforms several recent baselines and exhibits promising results on low-quality datasets.
{ "annotation": [ "Concision", "Rewriting_heavy" ], "instruction": "Make this paragraph more concise by rewriting the second half.", "annotator": "annotator_02" }
{ "annotation": [ "Concision", "Content_deletion" ], "instruction": "Make this paragraph shorter.", "annotator": "annotator_07" }
aFWzpdwEna
MCecpd3utK
0
[ { "text": "In this paper, we find that model-based offline RL’s performance significantly relies on the trade-off between model return and its uncertainty, while determining the optimal trade-off is usually challenging or intractable without access to the environment that the learned policy will be deployed to." }, { "text": "To address this problem, we study a bi-objective formulation for model-based offline RL and develop an efficient method, Pareto policy pool (P3), that produces a pool of diverse policies on the Pareto front performing different levels of trade-offs, providing the flexibility to select the best policy for each realistic environment from the pool." }, { "text": "P3 provides a simple and principal approach that addresses the two major challenges in model-based offline RL: “model exploitation” and generalization to different unseen states." }, { "text": "On the D4RL benchmark, P3 substantially outperforms several recent baseline methods over multiple tasks and shows the potentiality of learning a generalizable policy when the quality of pre-collected experiences is low." } ]
[ { "text": "In this paper, we find that model-based offline RL’s performance significantly relies on the trade-off between model return and its uncertainty, while determining the optimal trade-off is challenging without access to the realistic environment." }, { "text": "To address the problem, we study a bi-objective formulation for model-based offline RL and develop an efficient method that produces a pool of diverse policies on the Pareto front performing different levels of trade-offs, which provides flexibility to select the best policy in the inference stage." }, { "text": "" }, { "text": "We extensively validate the efficacy of our method on the D4RL benchmark, where ours largely outperforms several recent baselines and exhibits promising results on low-quality datasets." } ]
YkiRt7L93m.jgDbnUD7s.01
We introduce a notion of projection between sets of probability measures supported on Euclidean spaces. The proposed definition is applicable between sets of general probability measures with different supports and possesses good computational and statistical properties. Italso provides a unique solution to the projection problem under mild conditions and can replicate the geometric properties of the target measure, such as its shape and support. To achieve this, we work in the 2Wasserstein space, that is, the set of all probability measures with finite second moments equipped with the 2 -Wasserstein distance.
A notion of projection between sets of probability measures should be applicable between any set of general probability measures, replicate geometric properties of the target measure, and possess good computational and statistical properties. We introduce such a notion of projection between sets of general probability measures supported on Euclidean spaces. It provides a unique solution to the projection problem under mild conditions. To achieve this, we work in the 2 -Wasserstein space, that is, the set of all probability measures with finite second moments equipped with the 2 -Wasserstein distance.
{ "annotation": [ "Rewriting_medium" ], "instruction": "Please, make this paragraph easier to read.", "annotator": "annotator_01" }
{ "annotation": [ "Rewriting_heavy" ], "instruction": "Rewrite and reorganise this paragraph to improve the english and be more convincing, let the last sentence as it is.", "annotator": "annotator_07" }
YkiRt7L93m
jgDbnUD7s
1
[ { "text": "We introduce a notion of projection between sets of probability measures supported on Euclidean spaces. The proposed definition is applicable between sets of general probability measures with different supports and possesses good computational and statistical properties. " }, { "text": "Italso provides a unique solution to the projection problem under mild conditions and can replicate the geometric properties of the target measure, such as its shape and support." }, { "text": "To achieve this, we work in the 2Wasserstein space, that is, the set of all probability measures with finite second moments equipped with the 2 -Wasserstein distance." } ]
[ { "text": "A notion of projection between sets of probability measures should be applicable between any set of general probability measures, replicate geometric properties of the target measure, and possess good computational and statistical properties. We introduce such a notion of projection between sets of general probability measures supported on Euclidean spaces." }, { "text": "It provides a unique solution to the projection problem under mild conditions." }, { "text": "To achieve this, we work in the 2 -Wasserstein space, that is, the set of all probability measures with finite second moments equipped with the 2 -Wasserstein distance." } ]
jzQGmT-R1q.ugUt9B3XaO.02
In Figure 2 we see that the networks trained in these two experiments both exhibit decreased ability to fit later target functions under a fixed optimization budget. This effect is strongest in small networks with ReLU activations, suggesting that some units may be saturating, but we see a similar trend across most architectures and prediction tasks. The sparse reward setting is particularly intriguing: we do not expect to see a monotone increase in error as the later label functions correspond to ‘easier’ learning problems (i.e. predicting the majority class will already yield reasonably low prediction error), but we do see that for equal difficulty, the network obtains greater error on the later target set than the earlier one, and this effect is significantly more pronounced than in the random labels tasks. This suggests that sparse reward signals can be particularly damaging to the ability of networks to fit new target functions.
In Figure 2 we see that most networks trained in these two experiments exhibit decreasing ability to fit later target functions under a fixed optimization budget. This effect is strongest in small networks with ReLU activations, suggesting that this capacity loss may be driven by saturated units and that this phenomenon will be easiest to detect in settings where the network architecture is not highly over-parameterized relative to the prediction task. The sparse reward setting is particularly intriguing: we do not expect to see a monotone increase in error as the later label functions correspond to ‘easier’ learning problems (i.e. predicting the majority class will already yield reasonably low prediction error), but we do see that for equal difficulty, the network obtains greater error on the later target set than the earlier one, and this effect is significantly more pronounced than in the random labels tasks. This suggests that sparse reward signals can be particularly damaging to the ability of networks to fit new target functions.
{ "annotation": [ "Content_substitution" ], "instruction": "", "annotator": "annotator_07" }
null
jzQGmT-R1q
ugUt9B3XaO
2
[ { "text": "In Figure 2 we see that the networks trained in these two experiments both exhibit decreased ability to fit later target functions under a fixed optimization budget." }, { "text": "This effect is strongest in small networks with ReLU activations, suggesting that some units may be saturating, but we see a similar trend across most architectures and prediction tasks." }, { "text": "The sparse reward setting is particularly intriguing: we do not expect to see a monotone increase in error as the later label functions correspond to ‘easier’ learning problems (i.e. predicting the majority class will already yield reasonably low prediction error), but we do see that for equal difficulty, the network obtains greater error on the later target set than the earlier one, and this effect is significantly more pronounced than in the random labels tasks." }, { "text": "This suggests that sparse reward signals can be particularly damaging to the ability of networks to fit new target functions." } ]
[ { "text": "In Figure 2 we see that most networks trained in these two experiments exhibit decreasing ability to fit later target functions under a fixed optimization budget." }, { "text": "This effect is strongest in small networks with ReLU activations, suggesting that this capacity loss may be driven by saturated units and that this phenomenon will be easiest to detect in settings where the network architecture is not highly over-parameterized relative to the prediction task." }, { "text": "The sparse reward setting is particularly intriguing: we do not expect to see a monotone increase in error as the later label functions correspond to ‘easier’ learning problems (i.e. predicting the majority class will already yield reasonably low prediction error), but we do see that for equal difficulty, the network obtains greater error on the later target set than the earlier one, and this effect is significantly more pronounced than in the random labels tasks." }, { "text": "This suggests that sparse reward signals can be particularly damaging to the ability of networks to fit new target functions." } ]
hegI87bI5S.fL6Q48sfx8.08
VZ249HR; 23.8” diagonal, 1920 × 1080 pixels) and its refresh rate was set at 75 Hz. We used an opticalmouse, Logitech gaming mouse (G-PPD-002WLr; 1600 DPI). The mouse-cursor speed via the OS setting was set to the middle of the slider in the control display and ” Enhance pointer precision ” setting was turned on to match the participant’s usual settings. The experimental system was implemented with Hot soup processor 3.6 and used in full-screen mode.
VZ249HR; 23.8” diagonal, 1920 × 1080 pixels) and its refresh rate was set at 75 Hz. We used an optical mouse (Logitech gaming mouse, G-PPD-002WLr; 1600 DPI, and the mouse-cursor speed based on the OS setting was set to the middle of the slider in the control display and the “ Enhance pointer precision ” setting was turned on to match the usual settings of the participant.). The experimental system was implemented with Hot Soup Processor 3.6 and used in the full-screen mode 1 .
{ "annotation": [ "Rewriting_light" ], "instruction": "Improve the English of this paragraph", "annotator": "annotator_10" }
{ "annotation": [ "Rewriting_medium", "Rewriting_light" ], "instruction": "Slightly revise the linking between phrases.", "annotator": "annotator_07" }
hegI87bI5S
fL6Q48sfx8
8
[ { "text": "VZ249HR; 23.8” diagonal, 1920 × 1080 pixels) and its refresh rate was set at 75 Hz." }, { "text": "We used an opticalmouse, Logitech gaming mouse (G-PPD-002WLr; 1600 DPI)." }, { "text": "The mouse-cursor speed via the OS setting was set to the middle of the slider in the control display and ” Enhance pointer precision ” setting was turned on to match the participant’s usual settings." }, { "text": "The experimental system was implemented with Hot soup processor 3.6 and used in full-screen mode." } ]
[ { "text": "VZ249HR; 23.8” diagonal, 1920 × 1080 pixels) and its refresh rate was set at 75 Hz." }, { "text": "We used an optical mouse (Logitech gaming mouse," }, { "text": "G-PPD-002WLr; 1600 DPI, and the mouse-cursor speed based on the OS setting was set to the middle of the slider in the control display and the “ Enhance pointer precision ” setting was turned on to match the usual settings of the participant.)." }, { "text": "The experimental system was implemented with Hot Soup Processor 3.6 and used in the full-screen mode 1 ." } ]
_nwyDQp-7.85dN7i1zNm.00
To this end, first studies in the context of meta-learning relied on probabilistic assumption (Baxter, 2000; Pentina & Lampert, 2014; Maurer et al., 2016; Amit & Meir, 2018; Yin et al., 2020) stating that meta-train and meta-test tasks distributions are all sampled i.i.d. from the same random distribution. This assumption leads to the bounds having the following form:
To this end, first studies in the context of meta-learning relied on probabilistic assumption (Baxter, 2000; Pentina & Lampert, 2014; Maurer et al., 2016; Amit & Meir, 2018; Yin et al., 2020) stating that meta-train and meta-test tasks distributions are all sampled i.i.d. from the same random distribution. Intuitively, this means that source and target tasks are independent, which does not reflect real-world applications of few-shot learning where the former are often different draws (without replacement) from the same dataset. Under this unrealistic assumption, the above-mentioned works obtained the bounds having the following form:
{ "annotation": [ "Content_addition" ], "instruction": "", "annotator": "annotator_07" }
null
_nwyDQp-7
85dN7i1zNm
0
[ { "text": "To this end, first studies in the context of meta-learning relied on probabilistic assumption (Baxter, 2000; Pentina & Lampert, 2014; Maurer et al., 2016; Amit & Meir, 2018;" }, { "text": "Yin et al., 2020) stating that meta-train and meta-test tasks distributions are all sampled i.i.d." }, { "text": "from the same random distribution." }, { "text": "" }, { "text": "This assumption leads to the bounds having the following form:" } ]
[ { "text": "To this end, first studies in the context of meta-learning relied on probabilistic assumption (Baxter, 2000; Pentina & Lampert, 2014; Maurer et al., 2016; Amit & Meir, 2018;" }, { "text": "Yin et al., 2020) stating that meta-train and meta-test tasks distributions are all sampled i.i.d." }, { "text": "from the same random distribution." }, { "text": "Intuitively, this means that source and target tasks are independent, which does not reflect real-world applications of few-shot learning where the former are often different draws (without replacement) from the same dataset." }, { "text": "Under this unrealistic assumption, the above-mentioned works obtained the bounds having the following form:" } ]
OV5v_wBMHk.bw4cqlpLh.02
Estimating ITE with observational data suffers from two primary issues: (1) missing counterfactuals, i . e ., we can only observe one factual outcome out of all potential outcomes; (2) treatment selection bias, i . e ., individuals have their preferences regarding treatment selection, making the population across different groups heterogeneous. To cope with missing counterfactuals, meta-learners (R et al., 2019) decompose the ITE estimation task into solvable subproblems. However,as shown in Section 2.1, the treatment selection bias makes it difficult to generalize the factual outcome estimators trained over the treated/untreated group to the entire population, and the ITE estimation isthus biased.Representation-based methods mitigate this selection bias by minimizing the distribution discrepancy between groups in the representation space. In particular, Uri et al.
Estimating ITE with observational data has two main challenges: (1) missing counterfactuals, i . e ., only one factual outcome out of all potential outcomes can be observed; (2) treatment selection bias, i . e ., individuals have their preferences for treatment selection, making units in different treatment groups heterogeneous. To handle missing counterfactuals, meta-learners (K¨unzel et al., 2019) decompose the ITE estimation task into solvable factual outcome estimation subproblems. However, the treatment selection bias makes it difficult to generalize the factual outcome estimators trained within respective treatment groups to the entire population; consequently, the derived ITE estimator is biased.
{ "annotation": [ "Unusable", "Rewriting_light" ], "instruction": "", "annotator": "annotator_07" }
null
OV5v_wBMHk
bw4cqlpLh
2
[ { "text": "Estimating ITE with observational data suffers from two primary issues: (1) missing counterfactuals, i ." }, { "text": "e ., we can only observe one factual outcome out of all potential outcomes; (2) treatment selection bias, i ." }, { "text": "e ., individuals have their preferences regarding treatment selection, making the population across different groups heterogeneous." }, { "text": "To cope with missing counterfactuals, meta-learners (R et al., 2019) decompose the ITE estimation task into solvable subproblems." }, { "text": "However,as shown in Section 2.1, the treatment selection bias makes it difficult to generalize the factual outcome estimators trained over the treated/untreated group to the entire population, and the ITE estimation isthus biased.Representation-based methods mitigate this selection bias by minimizing the distribution discrepancy between groups in the representation space. In particular, Uri et al." } ]
[ { "text": "Estimating ITE with observational data has two main challenges: (1) missing counterfactuals, i ." }, { "text": "e ., only one factual outcome out of all potential outcomes can be observed; (2) treatment selection bias, i ." }, { "text": "e ., individuals have their preferences for treatment selection, making units in different treatment groups heterogeneous." }, { "text": "To handle missing counterfactuals, meta-learners (K¨unzel et al., 2019) decompose the ITE estimation task into solvable factual outcome estimation subproblems." }, { "text": "However, the treatment selection bias makes it difficult to generalize the factual outcome estimators trained within respective treatment groups to the entire population; consequently, the derived ITE estimator is biased." } ]
aomiOZE_m2.rxb2TiQ6bq.07
We first give a brief view of the problem setting about deep CNN for image SR. We also observe that there exists heavy redundancy in the networks. To pursue more efficient image SR networks, we then propose structure-regularized pruning (SRP) method to compress them.
We first present an overview of the problem setting about deep CNN for image SR. It is also observed that excessive redundancy exists in the SR deep CNNs. Then we move on to proposing our structureregularized pruning (SRP) method attempting to achieve more efficient SR networks.
{ "annotation": [ "Rewriting_heavy" ], "instruction": "Can you paraphrase the last sentence?", "annotator": "annotator_09" }
{ "annotation": [ "Rewriting_heavy" ], "instruction": "Rewrite the last sentence preferring passive voice over active.", "annotator": "annotator_04" }
aomiOZE_m2
rxb2TiQ6bq
7
[ { "text": "We first give a brief view of the problem setting about deep CNN for image SR." }, { "text": "We also observe that there exists heavy redundancy in the networks." }, { "text": "To pursue more efficient image SR networks, we then propose structure-regularized pruning (SRP) method to compress them." } ]
[ { "text": "We first present an overview of the problem setting about deep CNN for image SR." }, { "text": "It is also observed that excessive redundancy exists in the SR deep CNNs." }, { "text": "Then we move on to proposing our structureregularized pruning (SRP) method attempting to achieve more efficient SR networks." } ]
nCTSF9BQJ.DGhBYSP_sR.02
Recently, deep learning has gained tremendous success in modeling proteins, making data-driven methods more appealing than ever (Rives et al., 2019; Jumper et al., 2021). Nevertheless, challenges exist for developing deep learning-based models to predict mutational effects on protein-protein binding. The major challenge is the scarcity of experimental data — only a few thousands of protein mutations annotated with the change in binding affinity are publicly available (Geng et al., 2019b). This hinders supervised learning as the insufficiency of training data tends to cause over-fitting. Another difficulty is the absence of the structure of mutated protein-protein complexes. Mutating amino acids on a protein complex leads to changes on sidechain conformations (rotamers) (Najmanovich et al., 2000; Gaudreault et al., 2012). They account for the change in binding free energy but we do not have the knowledge of how exactly the conformation changes upon mutation.
Recently, deep learning has shown significant promise in modeling proteins, making data-driven approaches more attractive than ever (Rives et al., 2019; Jumper et al., 2021). However, developing deep learning-based models to predict mutational effects on protein-protein binding is challenging due to the scarcity of experimental data. Only a few thousand protein mutations, annotated with changes in binding affinity, are publicly available (Geng et al., 2019b), making supervised learning challenging due to the potential for overfitting with insufficient training data. Another difficulty is the absence of the structure of mutated protein-protein complexes. Mutating amino acids on a protein complex leads to changes mainly in sidechain conformations (Najmanovich et al., 2000; Gaudreault et al., 2012), which contribute to the change in binding free energy. However, the exact conformational changes upon mutation are unknown.
{ "annotation": [ "Rewriting_medium" ], "instruction": "Rewrite the following paragraph using a more formal language.", "annotator": "annotator_01" }
{ "annotation": [ "Rewriting_medium" ], "instruction": "Rewrite this paragraph for better readability.", "annotator": "annotator_07" }
nCTSF9BQJ
DGhBYSP_sR
2
[ { "text": "Recently, deep learning has gained tremendous success in modeling proteins, making data-driven methods more appealing than ever (Rives et al., 2019; Jumper et al., 2021)." }, { "text": "Nevertheless, challenges exist for developing deep learning-based models to predict mutational effects on protein-protein binding." }, { "text": "The major challenge is the scarcity of experimental data — only a few thousands of protein mutations annotated with the change in binding affinity are publicly available (Geng et al., 2019b). This hinders supervised learning as the insufficiency of training data tends to cause over-fitting." }, { "text": "Another difficulty is the absence of the structure of mutated protein-protein complexes." }, { "text": "Mutating amino acids on a protein complex leads to changes on sidechain conformations (rotamers) (Najmanovich et al., 2000; Gaudreault et al., 2012)." }, { "text": "They account for the change in binding free energy but we do not have the knowledge of how exactly the conformation changes upon mutation." } ]
[ { "text": "Recently, deep learning has shown significant promise in modeling proteins, making data-driven approaches more attractive than ever (Rives et al., 2019; Jumper et al., 2021)." }, { "text": "However, developing deep learning-based models to predict mutational effects on protein-protein binding is challenging due to the scarcity of experimental data." }, { "text": "Only a few thousand protein mutations, annotated with changes in binding affinity, are publicly available (Geng et al., 2019b), making supervised learning challenging due to the potential for overfitting with insufficient training data." }, { "text": "Another difficulty is the absence of the structure of mutated protein-protein complexes." }, { "text": "Mutating amino acids on a protein complex leads to changes mainly in sidechain conformations (Najmanovich et al., 2000;" }, { "text": "Gaudreault et al., 2012), which contribute to the change in binding free energy. However, the exact conformational changes upon mutation are unknown." } ]
g5N2H6sr7.6J3ec8Dl3p.02
Kernel (MLG) (Kondor & Pan, 2016). In addition, we compare with four unsupervised graph-level representation learning methods: SUB2VEC (Adhikari et al., 2018), GRAPH2VEC (Narayanan et al., 2017), INFOGRAPH (Sun et al., 2020) and MVGRL (Hassani & Khasahmadi, 2020). We denote our framework using (1) GCN (Kipf & Welling, 2017) in the decoders as ALATION-GCN 1 , (2) inverse of GCN in Section 4.1 in the decoders as ALATION-INVERSE-GCN.
Kernel (MLG) (Kondor & Pan, 2016). In addition, we compare with four unsupervised graph-level representation learning methods: SUB2VEC (Adhikari et al., 2018), GRAPH2VEC (Narayanan et al., 2017), INFOGRAPH (Sun et al., 2020) and MVGRL (Hassani & Khasahmadi, 2020). We also include the results of recent supervised graph classification models: GCN (Kipf & Welling, 2017), GAT (Veliˇckovi´c et al., 2018), GIN (Xu et al., 2019b). We denote our framework using (1) GCN (Kipf & Welling, 2017) in the decoders as ALATION-GCN 2 , (2) inverse of GCN in Section 4.1 in the decoders as ALATION-INVERSE-GCN.
{ "annotation": [ "Content_addition" ], "instruction": "", "annotator": "annotator_07" }
null
g5N2H6sr7
6J3ec8Dl3p
2
[ { "text": "Kernel (MLG) (Kondor & Pan, 2016)." }, { "text": "In addition, we compare with four unsupervised graph-level representation learning methods: SUB2VEC (Adhikari et al., 2018), GRAPH2VEC" }, { "text": "(Narayanan et al., 2017), INFOGRAPH (Sun et al., 2020) and MVGRL (Hassani & Khasahmadi, 2020)." }, { "text": "" }, { "text": " We denote our framework using (1) GCN (Kipf & Welling, 2017) in the decoders as ALATION-GCN 1 , (2) inverse of GCN in Section 4.1 in the decoders as ALATION-INVERSE-GCN." } ]
[ { "text": "Kernel (MLG) (Kondor & Pan, 2016)." }, { "text": "In addition, we compare with four unsupervised graph-level representation learning methods: SUB2VEC (Adhikari et al., 2018), GRAPH2VEC" }, { "text": "(Narayanan et al., 2017), INFOGRAPH (Sun et al., 2020) and MVGRL (Hassani & Khasahmadi, 2020)." }, { "text": "We also include the results of recent supervised graph classification models: GCN (Kipf & Welling, 2017), GAT (Veliˇckovi´c et al., 2018)," }, { "text": "GIN (Xu et al., 2019b). We denote our framework using (1) GCN (Kipf & Welling, 2017) in the decoders as ALATION-GCN 2 , (2) inverse of GCN in Section 4.1 in the decoders as ALATION-INVERSE-GCN." } ]
hegI87bI5S.fL6Q48sfx8.11
We defined the notch position ( Position ) as the condition. Position = Inside indicated that the notch was placed between the start area and the target, and Position = Outside indicated that the notch was placed to the right of the target. When the angle of entry to a target adjacent to a top edge with respect to the target was based on they-axis, an equivalent effect was observed at the angles of entry that were lineally symmetric about the y-axis [3]. Therefore, the performance would be the same whether the target was to the left or right of the starting area. To avoid increasing the workload of the participant, we always placed the starting area to the left of the target.
We defined the notch position ( Position ) as the condition. Position = Inside indicates that the notch is placed between the start area and the target, and Position = Outside indicates that the notch is placed to the left of the target. An equivalent effect is observed at angles of entry that are lineally symmetric about the y-axis when the angle of entry the target adjacent to a top edge with respect to the target is based on the y-axis [3]. Therefore, the performance is the same whether the target is to the left or right of the starting area. We always place the starting area to the left of the target to avoid increasing the workload of the participant.
{ "annotation": [ "Rewriting_medium", "Content_substitution" ], "instruction": "", "annotator": "annotator_07" }
null
hegI87bI5S
fL6Q48sfx8
11
[ { "text": "We defined the notch position ( Position ) as the condition." }, { "text": "Position =" }, { "text": "Inside indicated that the notch was placed between the start area and the target, and Position = Outside indicated that the notch was placed to the right of the target." }, { "text": "When the angle of entry to a target adjacent to a top edge with respect to the target was based on they-axis, an equivalent effect was observed at the angles of entry that were lineally symmetric about the y-axis [3]." }, { "text": "Therefore, the performance would be the same whether the target was to the left or right of the starting area." }, { "text": "To avoid increasing the workload of the participant, we always placed the starting area to the left of the target." } ]
[ { "text": "We defined the notch position ( Position ) as the condition." }, { "text": "Position =" }, { "text": "Inside indicates that the notch is placed between the start area and the target, and Position = Outside indicates that the notch is placed to the left of the target." }, { "text": "An equivalent effect is observed at angles of entry that are lineally symmetric about the y-axis when the angle of entry the target adjacent to a top edge with respect to the target is based on the y-axis [3]." }, { "text": "Therefore, the performance is the same whether the target is to the left or right of the starting area." }, { "text": "We always place the starting area to the left of the target to avoid increasing the workload of the participant." } ]
aVemIPPM7t.-8hV3QV4L9.00
Experiments were conducted on a small number of n1-standard-96 Google Cloud Platform VM instances, with 48 CPU cores on an Intel Skylake processor and 360 GB of RAM. It takes less than a week of compute on a single n1-standard-96 instance to run all the experiments described in this paper.
Experiments were conducted on a workstation (Intel i9-7920X CPU with 64 GB of RAM), and a small number of r5.24xlarge AWS VM instances, with 48 CPU cores on an Intel Skylake processor and 768 GB of RAM. It takes less than a week of compute on a single r5.24xlarge instance to run all the experiments described in this paper.
{ "annotation": [ "Content_substitution" ], "instruction": "", "annotator": "annotator_07" }
null
aVemIPPM7t
-8hV3QV4L9
0
[ { "text": "Experiments were conducted on a small number of n1-standard-96 Google Cloud Platform VM instances, with 48 CPU cores on an Intel Skylake processor and 360 GB of RAM." }, { "text": "It takes less than a week of compute on a single n1-standard-96 instance to run all the experiments described in this paper." } ]
[ { "text": "Experiments were conducted on a workstation (Intel i9-7920X CPU with 64 GB of RAM), and a small number of r5.24xlarge AWS VM instances, with 48 CPU cores on an Intel Skylake processor and 768 GB of RAM." }, { "text": "It takes less than a week of compute on a single r5.24xlarge instance to run all the experiments described in this paper." } ]
SRquLaHRM4.vI2x5N-YHC.00
We solve this problem by introducing the optimal transport theory [51] and formulate the feature setsas a discrete probability distribution where each feature has an equal probability value. Furthermore,to reduce the computational cost and avoid the extra model parameters, we learn the prompts witha two-stage optimization strategy. At the first stage in the inner loop, we fix both visual and textfeatures and optimize the optimal transport problem by a fast Sinkhorn distances algorithm Then,in the outer loop, we fix all parameters of optimal transport and back-propagate the gradient to learnthe prompts with different characteristics. Compared with conventional distance (such as Euclideandistance of mean features), optimal transport can align different visual features for each local prompt,which is more robust to the visual misalignment and tolerates well feature shift [44]. It is because OTlearns an adaptive transport plan to align features, which achieves fine-grained matching across twomodalities. We conduct experiments on 11 datasets following the standard setting of CLIP [39] and CoOp [63] to evaluate our method. These experiments span the visual classification of generic objects,scenes, actions, fine-grained categories, and so on. The significant result improvement demonstratesthat PLOT can effectively learn representative and comprehensive prompts.
We solve this problem by introducing the optimal transport theory [50] and formulate the feature setsas a discrete probability distribution where each feature has an equal probability value. Furthermore,to reduce the computational cost and avoid the extra model parameters, we learn the prompts witha two-stage optimization strategy. At the first stage in the inner loop, we fix both visual and textfeatures and optimize the optimal transport problem by a fast Sinkhorn distances algorithm Then,in the outer loop, we fix all parameters of optimal transport and back-propagate the gradient to learnthe prompts with different characteristics. We conduct comprehensive experiments on 11 datasetsfollowing the standard setting of CLIP [39] and CoOp [62] to evaluate our method. These experimentsspan the visual classification on generic objects, scenes, actions, fine-grained categories and so on. The significant result improvement demonstrates that PLOT can effectively learn representative andcomprehensive prompts.
{ "annotation": [ "Content_deletion" ], "instruction": "Remove any unessential information in this paragraph.", "annotator": "annotator_03" }
{ "annotation": [ "Content_deletion", "Rewriting_light" ], "instruction": "Please exclude the content related to optimal transport.", "annotator": "annotator_09" }
SRquLaHRM4
vI2x5N-YHC
0
[ { "text": "We solve this problem by introducing the optimal transport theory [51] and formulate the feature setsas a discrete probability distribution where each feature has an equal probability value." }, { "text": "Furthermore,to reduce the computational cost and avoid the extra model parameters, we learn the prompts witha two-stage optimization strategy." }, { "text": "At the first stage in the inner loop, we fix both visual and textfeatures and optimize the optimal transport problem by a fast Sinkhorn distances algorithm" }, { "text": "Then,in the outer loop, we fix all parameters of optimal transport and back-propagate the gradient to learnthe prompts with different characteristics." }, { "text": "Compared with conventional distance (such as Euclideandistance of mean features), optimal transport can align different visual features for each local prompt,which is more robust to the visual misalignment and tolerates well feature shift [44]." }, { "text": "It is because OTlearns an adaptive transport plan to align features, which achieves fine-grained matching across twomodalities." }, { "text": "We conduct experiments on 11 datasets following the standard setting of CLIP [39] and CoOp [63] to evaluate our method." }, { "text": "These experiments span the visual classification of generic objects,scenes, actions, fine-grained categories, and so on." }, { "text": "The significant result improvement demonstratesthat PLOT can effectively learn representative and comprehensive prompts." } ]
[ { "text": "We solve this problem by introducing the optimal transport theory [50] and formulate the feature setsas a discrete probability distribution where each feature has an equal probability value." }, { "text": "Furthermore,to reduce the computational cost and avoid the extra model parameters, we learn the prompts witha two-stage optimization strategy." }, { "text": "At the first stage in the inner loop, we fix both visual and textfeatures and optimize the optimal transport problem by a fast Sinkhorn distances algorithm" }, { "text": "Then,in the outer loop, we fix all parameters of optimal transport and back-propagate the gradient to learnthe prompts with different characteristics." }, { "text": "" }, { "text": "" }, { "text": "We conduct comprehensive experiments on 11 datasetsfollowing the standard setting of CLIP [39] and CoOp [62] to evaluate our method." }, { "text": "These experimentsspan the visual classification on generic objects, scenes, actions, fine-grained categories and so on." }, { "text": "The significant result improvement demonstrates that PLOT can effectively learn representative andcomprehensive prompts." } ]
aomiOZE_m2.rxb2TiQ6bq.20
Model Size and Mult-Adds. Compared with recent works (e.g., MemNet, CARN, and IMDN), our SRPN-L has the least parameter number. We also provide operations number with Mult-Adds by setting the output size as 3 × 1280 × 720. Our SRPN-L operates less Mult-Adds than most compared methods. Those comparisons indicate that SRP reduces parameters and operations efficiently.
Model Size and Mult-Adds. Our SRPN-Lite has the fewest parameter number in comparison to recent efficient SR works such as MemNet, CARN, and IMDN. The comparison in terms of MultAdds (measured when the output size is set to 3 × 1,280 × 720) is also presented. As seen, our SRPNLite costs fewer Mult-Adds than most comparison methods. These results demonstrate the merits of SRP against other counterparts in striking a better network performance-complexity trade-off.
{ "annotation": [ "Rewriting_heavy" ], "instruction": "Give me a more formal version of this paragraph", "annotator": "annotator_01" }
{ "annotation": [ "Rewriting_medium" ], "instruction": "Rephrase the text and change SRPN-L to SRPN-Lite", "annotator": "annotator_06" }
aomiOZE_m2
rxb2TiQ6bq
20
[ { "text": "Model Size and Mult-Adds." }, { "text": "Compared with recent works (e.g., MemNet, CARN, and IMDN), our SRPN-L has the least parameter number." }, { "text": "We also provide operations number with Mult-Adds by setting the output size as 3 × 1280 × 720." }, { "text": "Our SRPN-L operates less Mult-Adds than most compared methods." }, { "text": "Those comparisons indicate that SRP reduces parameters and operations efficiently." } ]
[ { "text": "Model Size and Mult-Adds." }, { "text": "Our SRPN-Lite has the fewest parameter number in comparison to recent efficient SR works such as MemNet, CARN, and IMDN." }, { "text": "The comparison in terms of MultAdds (measured when the output size is set to 3 × 1,280 × 720) is also presented." }, { "text": "As seen, our SRPNLite costs fewer Mult-Adds than most comparison methods." }, { "text": "These results demonstrate the merits of SRP against other counterparts in striking a better network performance-complexity trade-off." } ]
MnewiFDvHZ.iAYttXl-uH.00
• Fixed constraints g t p x q “ g p x q , @ t, where the constraint functions are the same across the timebut they are not necessary to be known when making decision at round t . Note the setting ofknown and fixed constraints in [14, 17, 29, 33] is a special case of ours.• Adversarial constraints g t p x q , where the constraint function g t p x q is unknown when makingdecision at round t and can be arbitrarily and adversarially chosen, as in [24, 20, 30].
• Fixed constraints g t p x q “ g p x q , @ t, where the constraint function is known (fixed) when makingdecision at round t as in [15, 12, 30, 26]. • Adversarial constraints g t p x q , where the constraint function g t p x q is unknown when making decision at round t and can be arbitrarily and adversarially chosen, as in [22, 18, 27].
{ "annotation": [ "Concision" ], "instruction": "Make paragraph more concise", "annotator": "annotator_06" }
{ "annotation": [ "Concision" ], "instruction": "Make this paragraph shorter.", "annotator": "annotator_07" }
MnewiFDvHZ
iAYttXl-uH
0
[ { "text": "• Fixed constraints g t p x q “ g p x q , @ t, where the constraint functions are the same across the timebut they are not necessary to be known when making decision at round t . Note the setting ofknown and fixed constraints in [14, 17, 29, 33] is a special case of ours.• Adversarial constraints g t p x q , where the constraint function g t p x q is unknown when makingdecision at round t and can be arbitrarily and adversarially chosen, as in [24, 20, 30]." } ]
[ { "text": "• Fixed constraints g t p x q “ g p x q , @ t, where the constraint function is known (fixed) when makingdecision at round t as in [15, 12, 30, 26]. • Adversarial constraints g t p x q , where the constraint function g t p x q is unknown when making decision at round t and can be arbitrarily and adversarially chosen, as in [22, 18, 27]." } ]
3686sm4Cs.AJMXMDLVn.01
Results. Table 1(a) compares our SuperWeight Ensembles (SWE-HO) on the CIFAR-100 CIFAR100-C, and CIFAR-10 with prior work in efficient ensembling. Note that all the methods boost performance over a single model without requiring additional model parameters. However, our SuperWeight Ensembles outperforms all other methods on CIFAR-100 when using 36.5M parameters. shows that our approach outperforms or is on par with prior work in efficient ensembling. b) increases the number of parameters (without changing the architecture) using our approach compared to Deep Ensembles. See Section 4 for discussion
Results. Table 1(a) compares our SuperWeight Ensembles (SWE-HO) on the CIFAR-100 CIFAR100-C, and CIFAR-10 with prior work in efficient ensembling. Note that all the methods boost performance over a single model without requiring additional model parameters. However, our SuperWeight Ensembles outperforms all other methods on CIFAR-100 when using 36.5M parameters. Unlike methods like BatchEnsemble (BE) (Wen et al., 2020) and MIMO (Havasi et al., 2021), which shows that our approach outperforms or is on par with prior work in efficient ensembling. (b) increases the number of parameters (without changing the architecture) using our approach compared to Deep Ensembles. See Section 4 for discussion
{ "annotation": [ "Unusable" ], "instruction": "", "annotator": "annotator_02" }
{ "annotation": [ "Unusable" ], "instruction": "", "annotator": "annotator_07" }
3686sm4Cs
AJMXMDLVn
1
[ { "text": "Results." }, { "text": "Table 1(a) compares our SuperWeight Ensembles (SWE-HO) on the CIFAR-100 CIFAR100-C, and CIFAR-10 with prior work in efficient ensembling." }, { "text": "Note that all the methods boost performance over a single model without requiring additional model parameters." }, { "text": "However, our SuperWeight Ensembles outperforms all other methods on CIFAR-100 when using 36.5M parameters." }, { "text": " shows that our approach outperforms or is on par with prior work in efficient ensembling." }, { "text": "b) increases the number of parameters (without changing the architecture) using our approach compared to Deep Ensembles." }, { "text": "See Section 4 for discussion" } ]
[ { "text": "Results." }, { "text": "Table 1(a) compares our SuperWeight Ensembles (SWE-HO) on the CIFAR-100 CIFAR100-C, and CIFAR-10 with prior work in efficient ensembling." }, { "text": "Note that all the methods boost performance over a single model without requiring additional model parameters." }, { "text": "However, our SuperWeight Ensembles outperforms all other methods on CIFAR-100 when using 36.5M parameters." }, { "text": "Unlike methods like BatchEnsemble (BE) (Wen et al., 2020) and MIMO (Havasi et al., 2021), which shows that our approach outperforms or is on par with prior work in efficient ensembling." }, { "text": "(b) increases the number of parameters (without changing the architecture) using our approach compared to Deep Ensembles." }, { "text": "See Section 4 for discussion" } ]
OV5v_wBMHk.bw4cqlpLh.08
However, as neural network estimators mainly update parameters with stochastic gradient methods, only a subset of the representation’s distribution is accessible within each iteration. As such, a shortcut (Liuyi et al., 2018) is to calculate the group discrepancy at a stochastic mini-batch level:
However, since prevalent neural estimators mainly update parameters with stochastic gradient methods, only a fraction of the units is accessible within each iteration. A shortcut in this context is to calculate the group discrepancy at a stochastic mini-batch level:
{ "annotation": [ "Rewriting_light" ], "instruction": "check the wordings but keep the original content as much as possible", "annotator": "annotator_05" }
{ "annotation": [ "Rewriting_light" ], "instruction": "Improve the language to make it more formal.", "annotator": "annotator_07" }
OV5v_wBMHk
bw4cqlpLh
8
[ { "text": "However, as neural network estimators mainly update parameters with stochastic gradient methods, only a subset of the representation’s distribution is accessible within each iteration." }, { "text": "As such, a shortcut (Liuyi et al., 2018) is to calculate the group discrepancy at a stochastic mini-batch level:" } ]
[ { "text": "However, since prevalent neural estimators mainly update parameters with stochastic gradient methods, only a fraction of the units is accessible within each iteration." }, { "text": "A shortcut in this context is to calculate the group discrepancy at a stochastic mini-batch level:" } ]
5Eyr2crzI.s502diDSt.00
We also display the trade-off between inference speed and coverage from hierarchical refinement in Fig. From 16 upsampled points at the last iteration and lower, coverage performance starts to diminish while little speed gains are made. We still kept a relatively high N=64 in our model as we wanted to insure a wide coverage, and the time loss between 41 ms and 46 ms remains acceptable.
We also display the trade-off between inference speed and coverage from hierarchical refinement in Fig. 7, evaluated on the Interpret multi-agent dataset with marginal MissRate 6 . The curve is obtained setting the number N of upsampled points at the last refinement iteration from 2 to 128. From N = 16 and lower, coverage performance starts to diminish while little speed gains are made. We still kept a relatively high N=64 in our model as we wanted to insure a wide coverage, and the time loss between 41 ms and 46 ms remains acceptable.
{ "annotation": [ "Development" ], "instruction": "", "annotator": "annotator_07" }
null
5Eyr2crzI
s502diDSt
0
[ { "text": "We also display the trade-off between inference speed and coverage from hierarchical refinement in " }, { "text": "Fig." }, { "text": "From 16 upsampled points at the last iteration and lower, coverage performance starts to diminish while little speed gains are made." }, { "text": "We still kept a relatively high N=64 in our model as we wanted to insure a wide coverage, and the time loss between 41 ms and 46 ms remains acceptable." } ]
[ { "text": "We also display the trade-off between inference speed and coverage from hierarchical refinement in Fig." }, { "text": "7, evaluated on the Interpret multi-agent dataset with marginal MissRate 6 . The curve is obtained setting the number N of upsampled points at the last refinement iteration from 2 to 128." }, { "text": "From N = 16 and lower, coverage performance starts to diminish while little speed gains are made." }, { "text": "We still kept a relatively high N=64 in our model as we wanted to insure a wide coverage, and the time loss between 41 ms and 46 ms remains acceptable." } ]
atxti8SVk.3K9AmPwALM.15
Pascal: Image tag annotations. On Pascal VOC dataset, our method outperforms others by a large margin. Table 2 shows that, without additional saliency labels, our method still achieves SOTA. Compared to (Chang et al., 2020), we improves mIoU bya sizable 4 . 5% .
Pascal: Image tag annotations. Table 2 shows that, without using additional saliency labels, our method outperforms existing methods with saliency by 4 . 4% , and those without saliency by 5 .
{ "annotation": [ "Content_deletion", "Content_substitution" ], "instruction": "", "annotator": "annotator_02" }
{ "annotation": [ "Content_substitution" ], "instruction": "", "annotator": "annotator_07" }
atxti8SVk
3K9AmPwALM
15
[ { "text": "Pascal: Image tag annotations." }, { "text": "On Pascal VOC dataset, our method outperforms others by a large margin." }, { "text": "Table 2 shows that, without additional saliency labels, our method still achieves SOTA. Compared to (Chang et al., 2020), we improves mIoU bya sizable 4 . 5% ." } ]
[ { "text": "Pascal: Image tag annotations." }, { "text": "" }, { "text": "Table 2 shows that, without using additional saliency labels, our method outperforms existing methods with saliency by 4 . 4% , and those without saliency by 5 ." } ]
OzYyHKPyj7.O9Mk1uqXra.01
The stack of Joulin & Mikolov (2015) simulatespartial pushes and pops by making each stack element a convex combination, or “superposition,” of the elements immediately above and below it (resulting from pushing and popping, respectively). In this model, stack elements are again vectors, and 𝑎 𝑡 = ( a 𝑡 , v 𝑡 ) , where the vector a 𝑡 is a probability distribution over three stack operations: push a new vector, no-op, and pop the top vector; v 𝑡 is the vector to be pushed. The vector v 𝑡 can be learned or can be set to h 𝑡 (Yogatama et al., 2018). The stack reading is the top cell of the stack. This model has quadratic time and space complexity with respect to input length. We refer the reader to Appendix A.2 for full details.
The stack of Joulin & Mikolov (2015) simulates a combination of partial stack actions by computing three new, separate stacks: one with all cells shifted down (push), kept the same (no-op), and shifted up (pop). The new stack is then an element-wise interpolation (“superposition”) of these three stacks. In this model, stack elements are again vectors, and 𝑎 𝑡 = ( a 𝑡 , v 𝑡 ) , where the vector a 𝑡 is a probability distribution over three stack operations: push a new vector, no-op, and pop the top vector; v 𝑡 is the vector to be pushed. The vector v 𝑡 can be learned or can be set to h 𝑡 (Yogatama et al., 2018). The stack reading is the top cell of the stack. This model has quadratic time and space complexity with respect to input length. We refer the reader to Appendix A.2 for full details.
{ "annotation": [ "Content_addition" ], "instruction": "", "annotator": "annotator_10" }
{ "annotation": [ "Development" ], "instruction": "", "annotator": "annotator_03" }
OzYyHKPyj7
O9Mk1uqXra
1
[ { "text": "The stack of Joulin & Mikolov (2015) simulatespartial pushes and pops by making each stack element a convex combination, or “superposition,” of the elements immediately above and below it (resulting from pushing and popping, respectively)." }, { "text": "In this model, stack elements are again vectors, and 𝑎 𝑡 = ( a 𝑡 , v 𝑡 ) , where the vector a 𝑡 is a probability distribution over three stack operations: push a new vector, no-op, and pop the top vector; v 𝑡 is the vector to be pushed." }, { "text": "The vector v 𝑡 can be learned or can be set to h 𝑡 (Yogatama et al., 2018)." }, { "text": "The stack reading is the top cell of the stack." }, { "text": "This model has quadratic time and space complexity with respect to input length." }, { "text": "We refer the reader to Appendix A.2 for full details." } ]
[ { "text": "The stack of Joulin & Mikolov (2015) simulates a combination of partial stack actions by computing three new, separate stacks: one with all cells shifted down (push), kept the same (no-op), and shifted up (pop). The new stack is then an element-wise interpolation (“superposition”) of these three stacks." }, { "text": "In this model, stack elements are again vectors, and 𝑎 𝑡 = ( a 𝑡 , v 𝑡 ) , where the vector a 𝑡 is a probability distribution over three stack operations: push a new vector, no-op, and pop the top vector; v 𝑡 is the vector to be pushed." }, { "text": "The vector v 𝑡 can be learned or can be set to h 𝑡 (Yogatama et al., 2018)." }, { "text": "The stack reading is the top cell of the stack." }, { "text": "This model has quadratic time and space complexity with respect to input length." }, { "text": "We refer the reader to Appendix A.2 for full details." } ]
BkwlK_dPB.SJfZLu8oB.00
It is worth noting that while the sample complexity is bounded, the above result implies that its complexity varies according to problem-specific properties, which are encapsulated in the value of ˆ a and ˆ b . Intuitively, ˆ a depends on the scale of the problem such asvolume of the goal set |F RLgoal | and how complex and long the solution needs to be. ˆ b depends on the probability of sampling states that will expand the solution in the right direction. Therefore, ˆ b is a function of the dimensionality of S and the visibility of F , i.e. how constrained the tree expansion is. We refer the reader to Appendix S for more details on how the tail bound in Theorem 1 is derived.
It is worth noting that while the sample complexity is bounded, the above result implies that its complexity varies according to problem-specific properties, which are encapsulated in the value of ˆ a and ˆ b . Intuitively, ˆ a depends on the scale of the problem. It grows as |F RLgoal | becomes smaller or as the length of the solution trajectory becomes longer. ˆ b depends on the probability of sampling states that will expand the tree in the right direction. It therefore shrinks as the dimensionality of S increases. We refer the reader to Appendix S2 for more details on the meaning of ˆ a, ˆ b and the derivation of the tail bound in Theorem 1.
{ "annotation": [ "Development", "Rewriting_medium" ], "instruction": "", "annotator": "annotator_04" }
{ "annotation": [ "Rewriting_medium" ], "instruction": "Rephrase the text to make it more direct and readable when necessary.", "annotator": "annotator_07" }
BkwlK_dPB
SJfZLu8oB
0
[ { "text": "It is worth noting that while the sample complexity is bounded, the above result implies that its complexity varies according to problem-specific properties, which are encapsulated in the value of ˆ a and ˆ b ." }, { "text": "Intuitively, ˆ a depends on the scale of the problem such asvolume of the goal set |F RLgoal | and how complex and long the solution needs to be." }, { "text": "ˆ b depends on the probability of sampling states that will expand the solution in the right direction." }, { "text": "Therefore, ˆ b is a function of the dimensionality of S and the visibility of F , i.e. how constrained the tree expansion is." }, { "text": "We refer the reader to Appendix S for more details on how the tail bound in Theorem 1 is derived." } ]
[ { "text": "It is worth noting that while the sample complexity is bounded, the above result implies that its complexity varies according to problem-specific properties, which are encapsulated in the value of ˆ a and ˆ b ." }, { "text": "Intuitively, ˆ a depends on the scale of the problem. It grows as |F RLgoal | becomes smaller or as the length of the solution trajectory becomes longer." }, { "text": "ˆ b depends on the probability of sampling states that will expand the tree in the right direction." }, { "text": "It therefore shrinks as the dimensionality of S increases." }, { "text": "We refer the reader to Appendix S2 for more details on the meaning of ˆ a, ˆ b and the derivation of the tail bound in Theorem 1." } ]
URRc6L6nmE.yUoqIf6zGY.00
A less conservative approach consists of using the off-line data to train multiple neural networks, each for a certain region of the state space. Finally, the discontinuities of (4), (12) might be problematic and create chattering when implemented in real actuators. A continuous approximation that has shown to yield satisfying performance is the boundary-layer technique (Slotine et al.
A less conservative and more robust approach consists of using the off-line data to train multiple neural networks, each for a certain region of the state space; such an approach constitutes part of our future work. Finally, the discontinuities of (4), (12) might be problematic and create chattering when implemented in real actuators. A continuous approximation that has shown to yield satisfying performance is the boundary-layer technique (Slotine et al.
{ "annotation": [ "Development" ], "instruction": "", "annotator": "annotator_09" }
{ "annotation": [ "Development" ], "instruction": "", "annotator": "annotator_07" }
URRc6L6nmE
yUoqIf6zGY
0
[ { "text": "A less conservative approach consists of using the off-line data to train multiple neural networks, each for a certain region of the state space." }, { "text": "Finally, the discontinuities of (4), (12) might be problematic and create chattering when implemented in real actuators." }, { "text": "A continuous approximation that has shown to yield satisfying performance is the boundary-layer technique (Slotine et al." } ]
[ { "text": "A less conservative and more robust approach consists of using the off-line data to train multiple neural networks, each for a certain region of the state space; such an approach constitutes part of our future work." }, { "text": "Finally, the discontinuities of (4), (12) might be problematic and create chattering when implemented in real actuators." }, { "text": "A continuous approximation that has shown to yield satisfying performance is the boundary-layer technique (Slotine et al." } ]
kAwMEYEIN.RlDWAM6qF.00
HJB equation is stable only if p is sufficiently large. Such a theoretical finding reveals that the widelyused L 2 loss is not suitable for training PINN on high-dimensional HJB equations, while L ∞ loss is abetter choice. The theory also inspires us to develop a novel PINN training algorithm to minimize the L ∞ loss for HJB equations in a similar spirit to adversarial training. We believe this work provides important insights into the loss design in Physics-Informed deep learning.
HJB equation is stable only if p is sufficiently large. Such a theoretical finding reveals that the widelyused L 2 loss is not suitable for training PINN on high-dimensional HJB equations, while L ∞ loss isa better choice. The theory also inspires us to develop a novel PINN training algorithm to minimize the L ∞ loss for HJB equations in a similar spirit to adversarial training. One limitation of this workis that we only work on the HJB Equation. Theoretical investigation of other important equations canbe an exciting direction for future works. We believe this work provides important insights into the loss design in Physics-Informed deep learning.
{ "annotation": [ "Content_addition" ], "instruction": "", "annotator": "annotator_02" }
{ "annotation": [ "Content_addition" ], "instruction": "", "annotator": "annotator_07" }
kAwMEYEIN
RlDWAM6qF
0
[ { "text": "HJB equation is stable only if p is sufficiently large." }, { "text": "Such a theoretical finding reveals that the widelyused L 2 loss is not suitable for training PINN on high-dimensional HJB equations, while L ∞ loss is abetter choice." }, { "text": "The theory also inspires us to develop a novel PINN training algorithm to minimize the L ∞ loss for HJB equations in a similar spirit to adversarial training." }, { "text": "" }, { "text": "" }, { "text": "We believe this work provides important insights into the loss design in Physics-Informed deep learning." } ]
[ { "text": "HJB equation is stable only if p is sufficiently large." }, { "text": "Such a theoretical finding reveals that the widelyused L 2 loss is not suitable for training PINN on high-dimensional HJB equations, while L ∞ loss isa better choice." }, { "text": "The theory also inspires us to develop a novel PINN training algorithm to minimize the L ∞ loss for HJB equations in a similar spirit to adversarial training." }, { "text": "One limitation of this workis that we only work on the HJB Equation." }, { "text": "Theoretical investigation of other important equations canbe an exciting direction for future works." }, { "text": "We believe this work provides important insights into the loss design in Physics-Informed deep learning." } ]
YCmehaMzt.kHwUIOFr_.00
In addition, we combine EM and our proposed OPS together to craft a kind of composed unlearnable examples. Since OPS only modified a single pixel, after being applied to EM perturbed images, the imperceptibility can still be guaranteed. We evaluate the effectiveness of this composing method under different training strategies and find that it can always keep effective. Even if we use adversarial training and strong data augmentation like RandAugment, it is still able to degrade test accuracy to a relatively low level. Based on this property, we introduce CIFAR-10-S, where all the images are perturbed by the EM-OPS-composed noises. It can serve as a new benchmark to evaluate the abilities to learm critical information under the disturbance of composed non-semantic representations.
Naturally, for the purpose of complementing each other, we can combine EM and our proposed OPS together to craft a kind of ensemble shortcut. Since OPS only modified a single pixel, after being applied to EM perturbed images, the imperceptibility can still be guaranteed. We evaluate the effectiveness of this ensemble method under different training strategies and find that it can always keep effective. Even if we use adversarial training and strong data augmentation like RandAugment, it is still able to degrade test accuracy to a relatively low level. Based on this property, we introduce CIFAR-10-S, where all the images are perturbed by the EM-OPS-composed noises. It can serve as a new benchmark to evaluate the ability to learn critical information under the disturbance of composed non-semantic representations.
{ "annotation": [ "Rewriting_medium" ], "instruction": "Change the idea of \"composition\" to \"ensemble\" if this paragraph. Fix any spelling mistake.", "annotator": "annotator_03" }
{ "annotation": [ "Rewriting_medium" ], "instruction": "Rewrite the first sentence. Improve English in this paragraph.", "annotator": "annotator_07" }
YCmehaMzt
kHwUIOFr_
0
[ { "text": "In addition, we combine EM and our proposed OPS together to craft a kind of composed unlearnable examples." }, { "text": "Since OPS only modified a single pixel, after being applied to EM perturbed images, the imperceptibility can still be guaranteed." }, { "text": "We evaluate the effectiveness of this composing method under different training strategies and find that it can always keep effective." }, { "text": "Even if we use adversarial training and strong data augmentation like RandAugment, it is still able to degrade test accuracy to a relatively low level." }, { "text": "Based on this property, we introduce CIFAR-10-S, where all the images are perturbed by the EM-OPS-composed noises." }, { "text": "It can serve as a new benchmark to evaluate the abilities to learm critical information under the disturbance of composed non-semantic representations." } ]
[ { "text": "Naturally, for the purpose of complementing each other, we can combine EM and our proposed OPS together to craft a kind of ensemble shortcut." }, { "text": "Since OPS only modified a single pixel, after being applied to EM perturbed images, the imperceptibility can still be guaranteed." }, { "text": "We evaluate the effectiveness of this ensemble method under different training strategies and find that it can always keep effective." }, { "text": "Even if we use adversarial training and strong data augmentation like RandAugment, it is still able to degrade test accuracy to a relatively low level." }, { "text": "Based on this property, we introduce CIFAR-10-S, where all the images are perturbed by the EM-OPS-composed noises." }, { "text": "It can serve as a new benchmark to evaluate the ability to learn critical information under the disturbance of composed non-semantic representations." } ]
NcdK3bdqnA.kF_TmXY8G0.00
The results in Table 6 demonstrate that adopting image-specific linear projections outperforms directly sharing the contextual projections. The two types of image-specific linear projections do not lead to substantial performance differences. Thus, we take the strategy of only adding additional linear bias for augmented images and reuse contextual linear weights in generating visual attention keys and values for implementation convenience and parameter efficiency.
The results in Table 6 demonstrate that adopting image-specific projection bias outperforms directly sharing the contextual projection bias. Introducing additional image-specific linear projection weights does not lead to further performance increase. Thus, we take the strategy of only adding additional linear bias for augmented images and reuse contextual linear weights in generating visual attention keys and values for implementation convenience and parameter efficiency.
{ "annotation": [ "Rewriting_medium", "Content_substitution" ], "instruction": "", "annotator": "annotator_07" }
null
NcdK3bdqnA
kF_TmXY8G0
0
[ { "text": "The results in Table 6 demonstrate that adopting image-specific linear projections outperforms directly sharing the contextual projections." }, { "text": "The two types of image-specific linear projections do not lead to substantial performance differences." }, { "text": "Thus, we take the strategy of only adding additional linear bias for augmented images and reuse contextual linear weights in generating visual attention keys and values for implementation convenience and parameter efficiency." } ]
[ { "text": "The results in Table 6 demonstrate that adopting image-specific projection bias outperforms directly sharing the contextual projection bias." }, { "text": "Introducing additional image-specific linear projection weights does not lead to further performance increase." }, { "text": "Thus, we take the strategy of only adding additional linear bias for augmented images and reuse contextual linear weights in generating visual attention keys and values for implementation convenience and parameter efficiency." } ]
mS4xvgSiEH.i-a3xp3usm.00
The central novelty of our work is in realizing that by learning a discrete representation, we can perform structured search on two levels. To ensure that the discrete latent space is necessary, we introduce two ablative baselines, which replace the VQ-VAE with either a generic autoencoder or a VAE.
The central novelty of our work is in realizing that by learning a discrete representation, we can perform structured search on two levels. We introduce two ablative baselines, which replace the VQ-VAE with either a generic autoencoder or a VAE.
{ "annotation": [ "Concision" ], "instruction": "Make this paragraph more concise.", "annotator": "annotator_02" }
{ "annotation": [ "Concision" ], "instruction": "Make this paragraph shorter.", "annotator": "annotator_07" }
mS4xvgSiEH
i-a3xp3usm
0
[ { "text": "The central novelty of our work is in realizing that by learning a discrete representation, we can perform structured search on two levels." }, { "text": "To ensure that the discrete latent space is necessary, we introduce two ablative baselines, which replace the VQ-VAE with either a generic autoencoder or a VAE." } ]
[ { "text": "The central novelty of our work is in realizing that by learning a discrete representation, we can perform structured search on two levels." }, { "text": "We introduce two ablative baselines, which replace the VQ-VAE with either a generic autoencoder or a VAE." } ]
g5N2H6sr7.6J3ec8Dl3p.04
Running time We observe that our model runs significantly faster than INFOGRAPH and MVGRL, i.e., it is 10 times faster than INFOGRAPH and 15 times faster than MVGRL on PROTEINS. This is because our model neglects the tedious process of negative sampling used in both INFOGRAPH and MVGRL.
Running time We observe that our model runs significantly faster than INFOGRAPH and MVGRL. Our model takes 10s to train one epoch of PORTEINS on Tesla P40 24G, while INFOGRAPH needs 127s and MVGRL needs 193s. This is because our model neglects the tedious process of negative sampling used in both INFOGRAPH and MVGRL.
{ "annotation": [ "Rewriting_medium", "Development" ], "instruction": "", "annotator": "annotator_07" }
null
g5N2H6sr7
6J3ec8Dl3p
4
[ { "text": "Running time We observe that our model runs significantly faster than INFOGRAPH and MVGRL, i.e., it is 10 times faster than INFOGRAPH and 15 times faster than MVGRL on PROTEINS." }, { "text": "This is because our model neglects the tedious process of negative sampling used in both INFOGRAPH and MVGRL." } ]
[ { "text": "Running time We observe that our model runs significantly faster than INFOGRAPH and MVGRL. Our model takes 10s to train one epoch of PORTEINS on Tesla P40 24G, while INFOGRAPH needs 127s and MVGRL needs 193s." }, { "text": "This is because our model neglects the tedious process of negative sampling used in both INFOGRAPH and MVGRL." } ]
aomiOZE_m2.rxb2TiQ6bq.06
Neural Network Pruning. Pruning aims to remove parameters in a neural network without compromising its performance seriously (Reed, 1993; Sze et al., 2017). It mainly falls into two groups: filter pruning (a.k.a. structured pruning) 1 and weight-element pruning (a.k.a. unstructured pruning). The former aims to remove weights by filters (i.e., 4-d tensors), while the latter removes weights by single elements (i.e., a scalar). Structured pruning results in regular sparsity after pruning. It does not demand any special hardware features to achieve considerable practical acceleration. In contrast, unstructured pruning leads to irregular sparsity. Leveraging the irregular sparsity for acceleration typically demands special software libraries, while past works have shown the practical speedup is very limited (Wen et al., 2016), unless using customized hardware platform (Han et al., 2016a). In this paper, we focus on filter pruning for easy acceleration. Most efforts in pruning (mainly in classification task) have been spent on finding a better pruning criterion to select unimportant weights (Reed, 1993; Sze et al., 2017). Magnitude-based (Han et al., 2015; 2016b; Li et al., 2017) is the most prevailing criterion, which we will also employ to develop our method in this paper.As far as we know, no work before has managed to apply filter pruning to compressing image SR networks. This paper is meant to fill the blank.
Neural Network Pruning. Network pruning aims to eliminate redundant parameters in a neural network without compromising its performance seriously (Reed, 1993; Sze et al., 2017). The methodology of pruning mainly falls into two groups: filter pruning (or more generally known as structured pruning) * and weight-element pruning (also referred to as unstructured pruning). The former aims to remove weights by filters (i.e., 4-d tensors), while the latter removes weights by single elements (i.e., scalars). Structured pruning results in regular sparsity after pruning. It does not demand any special hardware features to achieve considerable practical acceleration. In contrast, unstructured pruning leads to irregular sparsity. Leveraging the irregular sparsity for acceleration typically demands special software supports, while past works have shown the practical speedup is very limited (Wen et al., 2016), unless using customized hardware platforms (Han et al., 2016a). In this paper, we tackle filter pruning instead of weight-element pruning for effortless acceleration. The major efforts in pruning (mainly in image classification) have been focusing on proposing a more sound pruning criterion to select unimportant weights (Reed, 1993; Sze et al., 2017). Criteria based on weight magnitude (Han et al., 2015; 2016b; Li et al., 2017) are the most prevailing ones, which we will also employ to develop our method in this paper.
{ "annotation": [ "Development", "Concision" ], "instruction": "", "annotator": "annotator_09" }
{ "annotation": [ "Concision", "Rewriting_medium" ], "instruction": "Rewrite the last sentence to make it more concise by removing shortcomings of other work.", "annotator": "annotator_04" }
aomiOZE_m2
rxb2TiQ6bq
6
[ { "text": "Neural Network Pruning." }, { "text": "Pruning aims to remove parameters in a neural network without compromising its performance seriously (Reed, 1993; Sze et al., 2017)." }, { "text": "It mainly falls into two groups: filter pruning (a.k.a. structured pruning) 1 and weight-element pruning (a.k.a. unstructured pruning)." }, { "text": "The former aims to remove weights by filters (i.e., 4-d tensors), while the latter removes weights by single elements (i.e., a scalar)." }, { "text": "Structured pruning results in regular sparsity after pruning." }, { "text": "It does not demand any special hardware features to achieve considerable practical acceleration." }, { "text": "In contrast, unstructured pruning leads to irregular sparsity." }, { "text": "Leveraging the irregular sparsity for acceleration typically demands special software libraries, while past works have shown the practical speedup is very limited (Wen et al., 2016), unless using customized hardware platform (Han et al., 2016a)." }, { "text": "In this paper, we focus on filter pruning for easy acceleration." }, { "text": "Most efforts in pruning (mainly in classification task) have been spent on finding a better pruning criterion to select unimportant weights (Reed, 1993; Sze et al., 2017)." }, { "text": "Magnitude-based (Han et al., 2015; 2016b; Li et al., 2017) is the most prevailing criterion, which we will also employ to develop our method in this paper.As far as we know, no work before has managed to apply filter pruning to compressing image SR networks. This paper is meant to fill the blank." } ]
[ { "text": "Neural Network Pruning." }, { "text": "Network pruning aims to eliminate redundant parameters in a neural network without compromising its performance seriously (Reed, 1993; Sze et al., 2017)." }, { "text": "The methodology of pruning mainly falls into two groups: filter pruning (or more generally known as structured pruning) * and weight-element pruning (also referred to as unstructured pruning)." }, { "text": "The former aims to remove weights by filters (i.e., 4-d tensors), while the latter removes weights by single elements (i.e., scalars)." }, { "text": "Structured pruning results in regular sparsity after pruning." }, { "text": "It does not demand any special hardware features to achieve considerable practical acceleration." }, { "text": "In contrast, unstructured pruning leads to irregular sparsity." }, { "text": "Leveraging the irregular sparsity for acceleration typically demands special software supports, while past works have shown the practical speedup is very limited (Wen et al., 2016), unless using customized hardware platforms (Han et al., 2016a)." }, { "text": "In this paper, we tackle filter pruning instead of weight-element pruning for effortless acceleration." }, { "text": "The major efforts in pruning (mainly in image classification) have been focusing on proposing a more sound pruning criterion to select unimportant weights (Reed, 1993; Sze et al., 2017)." }, { "text": "Criteria based on weight magnitude (Han et al., 2015; 2016b; Li et al., 2017) are the most prevailing ones, which we will also employ to develop our method in this paper." } ]
7_CwM-IzWd.zcm6f5HDI.21
Improved generalization performance We compare the generalization ability of the three algorithms (guided, random and vanilla). For each algorithm, we train three repetitions of each model using the same learning rate: 0.01, 0.1 and 0.01 for Colored-and-gray-MNIST, ModelNet40 and
Improved generalization performance We compare the generalization ability of multi-modal DNNs trained by the three algorithms (guided, random and vanilla) and the RUBi learning strategy (Cadene et al., 2019). For each algorithm, we train each model three times with the same learning rate. We use 0.01, 0.1 and 0.01 as learning rate for Colored-and-gray-MNIST, ModelNet40 and
{ "annotation": [ "Development", "Rewriting_medium" ], "instruction": "", "annotator": "annotator_06" }
{ "annotation": [ "Development", "Rewriting_medium" ], "instruction": "", "annotator": "annotator_08" }
7_CwM-IzWd
zcm6f5HDI
21
[ { "text": "Improved generalization performance We compare the generalization ability of the three algorithms (guided, random and vanilla)." }, { "text": "For each algorithm, we train three repetitions of each model using the same learning rate: 0.01, 0.1 and 0.01 for Colored-and-gray-MNIST, ModelNet40 and" } ]
[ { "text": "Improved generalization performance We compare the generalization ability of multi-modal DNNs trained by the three algorithms (guided, random and vanilla) and the RUBi learning strategy (Cadene et al., 2019)." }, { "text": "For each algorithm, we train each model three times with the same learning rate. We use 0.01, 0.1 and 0.01 as learning rate for Colored-and-gray-MNIST, ModelNet40 and" } ]
sIqSoZ9KiO.KLlOZMoJ9G.01
To study its performance impact in a more constrained setting, SDN was paired with a VAE architecturally much simpler than IAF-VAE. Apart from the implementation simplicity and shorter training time, a non-hierarchical VAE is more suitable for representation learning – there is a single stochastic vector and not a hierarchy of feature maps, which enables better control of the latent space. In particular, the gains in performance when using SDN were evaluated with respect to: (a) evidence lower bound (ELBO), as a proxy to measure how well an image distribution is approximated; (b) disentanglement of latent codes based on the corresponding metrics, to examine the effects of SDN decoder to the quality of learned latent representations.
To study its performance impact in a more constrained setting, SDN was paired with a VAE architecturally much simpler than IAF-VAE. Apart from the implementation simplicity and shorter training time, non-hierarchical VAE is more suitable for disentangled representation learning, at least in the sense of (Higgins et al., 2016) where the aim is to factorize the dimensions of a latent vector. In particular, the gains in performance when using SDN were evaluated with respect to: (a) evidence lower bound (ELBO), as a proxy to measure how well an image distribution is approximated; (b) disentanglement of latent codes based on the corresponding metrics, to examine the effects of SDN decoder to the quality of learned latent representations.
{ "annotation": [ "Rewriting_medium" ], "instruction": "Make sentence precise.", "annotator": "annotator_08" }
{ "annotation": [ "Rewriting_light" ], "instruction": "Rephrase the second sentence, mostly focusing on the second half.", "annotator": "annotator_07" }
sIqSoZ9KiO
KLlOZMoJ9G
1
[ { "text": "To study its performance impact in a more constrained setting, SDN was paired with a VAE architecturally much simpler than IAF-VAE." }, { "text": "Apart from the implementation simplicity and shorter training time, a non-hierarchical VAE is more suitable for representation learning – there is a single stochastic vector and not a hierarchy of feature maps, which enables better control of the latent space." }, { "text": "In particular, the gains in performance when using SDN were evaluated with respect to: (a) evidence lower bound (ELBO), as a proxy to measure how well an image distribution is approximated; (b) disentanglement of latent codes based on the corresponding metrics, to examine the effects of SDN decoder to the quality of learned latent representations." } ]
[ { "text": "To study its performance impact in a more constrained setting, SDN was paired with a VAE architecturally much simpler than IAF-VAE." }, { "text": "Apart from the implementation simplicity and shorter training time, non-hierarchical VAE is more suitable for disentangled representation learning, at least in the sense of (Higgins et al., 2016) where the aim is to factorize the dimensions of a latent vector." }, { "text": "In particular, the gains in performance when using SDN were evaluated with respect to: (a) evidence lower bound (ELBO), as a proxy to measure how well an image distribution is approximated; (b) disentanglement of latent codes based on the corresponding metrics, to examine the effects of SDN decoder to the quality of learned latent representations." } ]
q4rMz7ZfFG.uyxGiQeMP.01
We give two cases of the GraphCodeBERT output for this task in Figure 6. In the first example, the model successfully finds Python source code that correctly matches the sementic of the query “Scans through a string for substrings matched some patterns”. The source code finds all substrings by calling re.findall () build-in fucntion. In the second case, the query is “Combing the individual byte arrays into one array”, and the model searches a source code from Java candidate codes. As we can see, the source code concatenates multiple arrays into one array by calling System.arraycopy () build-in fucntion.
We use GraphCodeBERT to separately encode query and source code with data flow, and calculate inner product of their representations of the special token [ CLS ] as relevance scores to rank candidate codes. In the fine-turning step, we set the learning rate as 2e-5, the batch size as 32, the max sequence length of queries and codes as 128 and 256, and the max number of nodes as 64. We use the Adam optimizer to update model parameters and perform early stopping on the development set.
{ "annotation": [ "Rewriting_heavy", "Content_substitution" ], "instruction": "", "annotator": "annotator_10" }
{ "annotation": [ "Unusable" ], "instruction": "", "annotator": "annotator_03" }
q4rMz7ZfFG
uyxGiQeMP
1
[ { "text": "We give two cases of the GraphCodeBERT output for this task in Figure 6. In the first example, the model successfully finds Python source code that correctly matches the sementic of the query “Scans through a string for substrings matched some patterns”." }, { "text": "The source code finds all substrings by calling re.findall () build-in fucntion. In the second case, the query is “Combing the individual byte arrays into one array”, and the model searches a source code from Java candidate codes. As we can see, the source code concatenates multiple arrays into one array by calling System.arraycopy () build-in fucntion." } ]
[ { "text": "We use GraphCodeBERT to separately encode query and source code with data flow, and calculate inner product of their representations of the special token [ CLS ] as relevance scores to rank candidate codes." }, { "text": "In the fine-turning step, we set the learning rate as 2e-5, the batch size as 32, the max sequence length of queries and codes as 128 and 256, and the max number of nodes as 64. We use the Adam optimizer to update model parameters and perform early stopping on the development set." } ]
aomiOZE_m2.rxb2TiQ6bq.04
SRCNN. Tai et al . later introduced memory block in MemNet (Tai et al., 2017b) for deeper network structure. Lim et al . (Lim et al., 2017) simplified the residual block (He et al., 2016) and constructed deeper and wider networks with a large number of parameters. Zhang et al . (Zhang et al., 2018b) proposed an even deeper network, residual channel attention network (RCAN), where the attention mechanism was firstly introduced in image SR. Liu et al . proposed FRANet (Liu et al., 2020) to make the residual features more focused on critical spatial contents. Later, Zhang et al . (Zhang et al., 2019) proposed residual non-local attention for image restoration, including image SR. Mei et al . proposed CSNLN (Mei et al., 2020) by combining local, in-scale/cross-scale non-local feature correlations, and external statistics. Most of them have achieved state-of-the-art results with deeper and wider networks. However, they suffer from huge model size (i.e., network parameter number) and/or heavy computation operations (i.e., FLOPs).
SRCNN. Tai et al . later introduced memory block in MemNet (Tai et al., 2017b) for deeper network structure. Lim et al . (Lim et al., 2017) simplified the residual block (He et al., 2016) and constructed deeper and wider networks with a large number of parameters. Zhang et al . (Zhang et al., 2018b) proposed an even deeper network, residual channel attention network (RCAN), where the attention mechanism was firstly introduced in image SR. Liu et al . proposed FRANet (Liu et al., 2020) to make the residual features focus on critical spatial contents. Later, Zhang et al . (Zhang et al., 2019) proposed residual non-local attention for image restoration. Mei et al . proposed CSNLN (Mei et al., 2020) by combining local, in-scale/cross-scale non-local feature correlations, and external statistics. Most of those methods have achieved SOTA results. However, they suffer from huge model size (i.e., network parameter number) and/or heavy computation operations (i.e., FLOPs).
{ "annotation": [ "Concision" ], "instruction": "Be more concise.", "annotator": "annotator_09" }
{ "annotation": [ "Rewriting_light", "Concision" ], "instruction": "Use shorter formulations to make some sentences more concise.", "annotator": "annotator_04" }
aomiOZE_m2
rxb2TiQ6bq
4
[ { "text": "SRCNN." }, { "text": "Tai et al ." }, { "text": "later introduced memory block in MemNet (Tai et al., 2017b) for deeper network structure." }, { "text": "Lim et al ." }, { "text": "(Lim et al., 2017) simplified the residual block (He et al., 2016) and constructed deeper and wider networks with a large number of parameters." }, { "text": "Zhang et al ." }, { "text": "(Zhang et al., 2018b) proposed an even deeper network, residual channel attention network (RCAN), where the attention mechanism was firstly introduced in image SR." }, { "text": "Liu et al ." }, { "text": "proposed" }, { "text": "FRANet" }, { "text": "(Liu et al., 2020) to make the residual features more focused on critical spatial contents." }, { "text": "Later, Zhang et al ." }, { "text": "(Zhang et al., 2019) proposed residual non-local attention for image restoration, including image SR." }, { "text": "Mei et al ." }, { "text": "proposed CSNLN (Mei et al., 2020) by combining local, in-scale/cross-scale non-local feature correlations, and external statistics." }, { "text": "Most of them have achieved state-of-the-art results with deeper and wider networks." }, { "text": "However, they suffer from huge model size (i.e., network parameter number) and/or heavy computation operations (i.e., FLOPs)." } ]
[ { "text": "SRCNN." }, { "text": "Tai et al ." }, { "text": "later introduced memory block in MemNet (Tai et al., 2017b) for deeper network structure." }, { "text": "Lim et al ." }, { "text": "(Lim et al., 2017) simplified the residual block (He et al., 2016) and constructed deeper and wider networks with a large number of parameters." }, { "text": "Zhang et al ." }, { "text": "(Zhang et al., 2018b) proposed an even deeper network, residual channel attention network (RCAN), where the attention mechanism was firstly introduced in image SR." }, { "text": "Liu et al ." }, { "text": "proposed" }, { "text": "FRANet" }, { "text": "(Liu et al., 2020) to make the residual features focus on critical spatial contents." }, { "text": "Later, Zhang et al ." }, { "text": "(Zhang et al., 2019) proposed residual non-local attention for image restoration." }, { "text": "Mei et al ." }, { "text": "proposed CSNLN (Mei et al., 2020) by combining local, in-scale/cross-scale non-local feature correlations, and external statistics." }, { "text": "Most of those methods have achieved SOTA results." }, { "text": "However, they suffer from huge model size (i.e., network parameter number) and/or heavy computation operations (i.e., FLOPs)." } ]
MXi6uEx-hp.rdZfFcGyf9.16
In Figure 6, we analyze the agent performance qualitatively. (a) In CREATE, at t = 0 , the selected action spring in AGILE’s GAT attends to various other tools, especially covers all the tools that get activated with spring, such as trampoline. At t = 1 , the trampoline tool is selected with a strong attention on spring. This shows that for selecting trampoline, the agent checks for the presence of spring, so it is possible to place it before or after the trampoline. (b) In Grid World, we visualize the Summary-GAT ablation to see how summarizer utilizes attention. We consider the case where both dig − lava skills are available. The agent goes right, digs the orange lava, and is about to enter the pink lava. At this point, the Right action attends with a high weight to Dig − Pink skill, checking for its presence before making an irreversible decision of entering the lava. In contrast, the Utility Policy always follows the safe suboptimal path as it is blind to the knowledge of dig-skills before entering lava. Finally, in RecSim, we observe that the agent is able to maximize the CPR score by selecting 5 out of 6 items in the list from the same primary category. In contrast, Utility Policy cannot determine the most common category and is unable to maximize CPR well.
In Figure 6, we analyze the agent performance qualitatively. (a) In CREATE, at t = 0 , the selected action spring in AGILE’s GAT attends to various other tools, especially the tools that get activated with spring , such as trampoline . At t = 1 , the trampoline tool is selected with strong attention on spring . This shows that for selecting the trampoline , the agent checks for its activator, spring , to ensure that it is possible to place spring before or after the trampoline. (b) In Grid World, we visualize the inter-action attention in Summary-GAT ’s summarizer. We consider the case where both dig − lava skills are available. The agent goes right, digs the orange lava, and is about to enter the pink lava. At this point, the Right action attends with a large weight to the Dig − Pink skill, checking for its presence before making an irreversible decision of entering the lava. In contrast, the Utility Policy always follows the safe suboptimal path as it is blind to the knowledge of dig-skills before entering lava. (c) In RecSim, we observe that the agent can maximize the CPR score by selecting 5 out of 6 items in the list from the same primary category. In contrast, Utility Policy cannot determine the most common available category and is unable to maximize CPR.
{ "annotation": [ "Rewriting_medium" ], "instruction": "Make this paragraph better. Rewrite a sentence about the Grid World", "annotator": "annotator_10" }
{ "annotation": [ "Rewriting_medium" ], "instruction": "Improve the clarity in this paragraph.", "annotator": "annotator_03" }
MXi6uEx-hp
rdZfFcGyf9
16
[ { "text": "In Figure 6, we analyze the agent performance qualitatively." }, { "text": "(a) In CREATE, at t = 0 , the selected action spring in AGILE’s GAT attends to various other tools, especially covers all the tools that get activated with spring, such as trampoline." }, { "text": "At t = 1 , the trampoline tool is selected with a strong attention on spring." }, { "text": "This shows that for selecting trampoline, the agent checks for the presence of spring, so it is possible to place it before or after the trampoline." }, { "text": "(b) In Grid World, we visualize the Summary-GAT ablation to see how summarizer utilizes attention." }, { "text": "We consider the case where both dig − lava skills are available." }, { "text": "The agent goes right, digs the orange lava, and is about to enter the pink lava." }, { "text": "At this point, the Right action attends with a high weight to Dig − Pink skill, checking for its presence before making an irreversible decision of entering the lava." }, { "text": "In contrast, the Utility Policy always follows the safe suboptimal path as it is blind to the knowledge of dig-skills before entering lava." }, { "text": "Finally, in RecSim, we observe that the agent is able to maximize the CPR score by selecting 5 out of 6 items in the list from the same primary category." }, { "text": "In contrast, Utility Policy cannot determine the most common category and is unable to maximize CPR well." } ]
[ { "text": "In Figure 6, we analyze the agent performance qualitatively." }, { "text": "(a) In CREATE, at t = 0 , the selected action spring in AGILE’s GAT attends to various other tools, especially the tools that get activated with spring , such as trampoline ." }, { "text": "At t = 1 , the trampoline tool is selected with strong attention on spring ." }, { "text": "This shows that for selecting the trampoline , the agent checks for its activator, spring , to ensure that it is possible to place spring before or after the trampoline." }, { "text": "(b) In Grid World, we visualize the inter-action attention in Summary-GAT ’s summarizer." }, { "text": "We consider the case where both dig − lava skills are available." }, { "text": "The agent goes right, digs the orange lava, and is about to enter the pink lava." }, { "text": "At this point, the Right action attends with a large weight to the Dig − Pink skill, checking for its presence before making an irreversible decision of entering the lava." }, { "text": "In contrast, the Utility Policy always follows the safe suboptimal path as it is blind to the knowledge of dig-skills before entering lava." }, { "text": "(c) In RecSim, we observe that the agent can maximize the CPR score by selecting 5 out of 6 items in the list from the same primary category." }, { "text": "In contrast, Utility Policy cannot determine the most common available category and is unable to maximize CPR." } ]
aFzc_2nNz.WIdHkazOg.00
Further improvement is expected if γ is selected independently for each training sample as shown through the Sample-Dependent Focal Loss (FLSD-53) proposed in [19], which, however, is based on heuristics and, as shown in this paper, does not generalize well. In this paper, we propose a calibration-aware adaptive focal loss called AdaFocal that utilizes the calibration properties of focal (and inverse-focal loss) and adaptively modifies γ t for different groups of samples based on (1) γ t −from the previous step (2) the magnitude of the model’s under/over-confidence. We evaluate AdaFocal on various image recognition tasks and one NLP task, covering a variety of network architectures, to confirm the improvement in calibration while achieving similar levels of accuracy. Additionally, models trained with AdaFocal are shown to achieve a significant boost in out-of-distribution detection capability.
Further improvement is expected if γ is selected independently for each training sample (Sample-Dependent Focal Loss (FLSD-53) [19]). However, FLSD-53 is based on heuristics and does not generalize well. In this paper, we propose a calibration-aware adaptive focal loss called AdaFocal that utilizes the calibration properties of focal (and inverse-focal) loss and adaptively modifies γ t for different groups of samples based on γ t − 1 from the previous step and the knowledge of model’s under/over-confidence on the validation set. We evaluate AdaFocal on various image recognition and one NLP task, covering a wide variety of network architectures, to confirm the improvement in calibration while achieving similar levels of accuracy. Additionally, we show that models trained with AdaFocal achieve a significant boost in out-of-distribution detection.
{ "annotation": [ "Rewriting_medium" ], "instruction": "Make the ideas in these paragraph more modular and easier to understand.", "annotator": "annotator_03" }
{ "annotation": [ "Rewriting_light" ], "instruction": "Concise this academic paragraph a bit and smooth out the writing.", "annotator": "annotator_07" }
aFzc_2nNz
WIdHkazOg
0
[ { "text": "Further improvement is expected if γ is selected independently for each training sample as shown through the Sample-Dependent Focal Loss (FLSD-53) proposed in [19], which, however, is based on heuristics and, as shown in this paper, does not generalize well." }, { "text": "In this paper, we propose a calibration-aware adaptive focal loss called AdaFocal that utilizes the calibration properties of focal (and inverse-focal loss) and adaptively modifies γ t for different groups of samples based on (1) γ t −from the previous step (2) the magnitude of the model’s under/over-confidence." }, { "text": "We evaluate AdaFocal on various image recognition tasks and one NLP task, covering a variety of network architectures, to confirm the improvement in calibration while achieving similar levels of accuracy." }, { "text": "Additionally, models trained with AdaFocal are shown to achieve a significant boost in out-of-distribution detection capability." } ]
[ { "text": "Further improvement is expected if γ is selected independently for each training sample (Sample-Dependent Focal Loss (FLSD-53) [19]). However, FLSD-53 is based on heuristics and does not generalize well." }, { "text": "In this paper, we propose a calibration-aware adaptive focal loss called AdaFocal that utilizes the calibration properties of focal (and inverse-focal) loss and adaptively modifies γ t for different groups of samples based on γ t − 1 from the previous step and the knowledge of model’s under/over-confidence on the validation set." }, { "text": "We evaluate AdaFocal on various image recognition and one NLP task, covering a wide variety of network architectures, to confirm the improvement in calibration while achieving similar levels of accuracy." }, { "text": "Additionally, we show that models trained with AdaFocal achieve a significant boost in out-of-distribution detection." } ]
aomiOZE_m2.rxb2TiQ6bq.17
L 1 -norm aspruning criterion, the same as (Li et al., 2017). However, our results are significantly better than theirs. The reason is that they do not impose any regularization on the pruned structure, thus the kept feature map channels are misaligned in residual blocks after pruning. In contrast, our method does not have this problem, thanks to the proposed structure regularization.
L 1 -norm as the scoring criterion to select unimportant filters, same as (Li et al., 2017). Nevertheless, our method delivers significantly better results than theirs. The primary reason is that they do not impose any regularization on the pruned structure; the remaining feature maps are thus mismatched in residual blocks after pruning. In contrast, our method SRP is not bothered by this issue, showing the effectiveness of our proposed structure regularization.
{ "annotation": [ "Development" ], "instruction": "", "annotator": "annotator_01" }
{ "annotation": [ "Development", "Rewriting_light" ], "instruction": "", "annotator": "annotator_06" }
aomiOZE_m2
rxb2TiQ6bq
17
[ { "text": "L 1 -norm aspruning criterion, the same as (Li et al., 2017)." }, { "text": "However, our results are significantly better than theirs." }, { "text": "The reason is that they do not impose any regularization on the pruned structure, thus the kept feature map channels are misaligned in residual blocks after pruning." }, { "text": "In contrast, our method does not have this problem, thanks to the proposed structure regularization." } ]
[ { "text": "L 1 -norm as the scoring criterion to select unimportant filters, same as (Li et al., 2017)." }, { "text": "Nevertheless, our method delivers significantly better results than theirs." }, { "text": "The primary reason is that they do not impose any regularization on the pruned structure; the remaining feature maps are thus mismatched in residual blocks after pruning." }, { "text": "In contrast, our method SRP is not bothered by this issue, showing the effectiveness of our proposed structure regularization." } ]
ryaiZC9KQ.ryt3YptA7.00
DenseNet-169 in terms of feature sensitivity, error distribution and interactions between image parts, suggesting that modern DNNs approximately follow a similar bag-of-feature strategy.
ResNet-152 or DenseNet-169 in terms of feature sensitivity, error distribution and interactions between image parts. This suggests that the improvements of DNNs over previous bag-of-feature classifiers in the last few years is mostly achieved by better fine-tuning rather than by qualitatively different decision strategies.
{ "annotation": [ "Content_addition" ], "instruction": "", "annotator": "annotator_07" }
null
ryaiZC9KQ
ryt3YptA7
0
[ { "text": " DenseNet-169 in terms of feature sensitivity, error distribution and interactions between image parts, suggesting that modern DNNs approximately follow a similar bag-of-feature strategy." } ]
[ { "text": "ResNet-152 or DenseNet-169 in terms of feature sensitivity, error distribution and interactions between image parts. This suggests that the improvements of DNNs over previous bag-of-feature classifiers in the last few years is mostly achieved by better fine-tuning rather than by qualitatively different decision strategies." } ]
eYzycFMXwr.8-KFmZiCM.01
Normally, C a is too large because the size of batch is too large. In this case, we can reduce the size ofbatch and the number of the model partitions, and replicate each stage on the newly idle accelerator devices for data parallelism to increase the total batchsize to the original size, as shown in Figure 6. This not only reduces the C a, but also does not add new accelerator device andchange the batch size. Of course, we need to weigh the communication overhead between model parallelism ( C WPipe ) and data parallelism ( C DP ). When C WPipe is greater than C DP , we reduce the model-parallel batch size and the number of model partitions, and increase the number of data-parallel groups.
Normally, too large C a is caused by too large microbatch. we can proportionally reduce the depth of the pipeline while reducing the size of the microbatch, and then we proportionally increase the width of data parallelism to maintain the same global batch size, as shown in Figure 6. As a result, the size of the micro-batch becomes smaller, and the C a also decreases, while the number of accelerators and the global batch size remain unchanged. Of course, we need to weigh the communication overhead between model parallelism ( C WPipe ) and data parallelism ( C DP ) to choose the appropriate ratio of depth ( d ) and width ( w ). When C WPipe is greater than C DP , we reduce the value of d : w, d ∗ w = N GPU .
{ "annotation": [ "Development", "Rewriting_heavy" ], "instruction": "", "annotator": "annotator_03" }
{ "annotation": [ "Rewriting_heavy", "Content_substitution" ], "instruction": "", "annotator": "annotator_07" }
eYzycFMXwr
8-KFmZiCM
1
[ { "text": "Normally, C a is too large because the size of batch is too large." }, { "text": "In this case, we can reduce the size ofbatch and the number of the model partitions, and replicate each stage on the newly idle accelerator devices for data parallelism to increase the total batchsize to the original size, as shown in Figure 6." }, { "text": "This not only reduces the C a, but also does not add new accelerator device andchange the batch size." }, { "text": "Of course, we need to weigh the communication overhead between model parallelism ( C WPipe ) and data parallelism ( C DP )." }, { "text": "When C WPipe is greater than C DP , we reduce the model-parallel batch size and the number of model partitions, and increase the number of data-parallel groups." } ]
[ { "text": "Normally, too large C a is caused by too large microbatch." }, { "text": "we can proportionally reduce the depth of the pipeline while reducing the size of the microbatch, and then we proportionally increase the width of data parallelism to maintain the same global batch size, as shown in Figure 6." }, { "text": "As a result, the size of the micro-batch becomes smaller, and the C a also decreases, while the number of accelerators and the global batch size remain unchanged." }, { "text": "Of course, we need to weigh the communication overhead between model parallelism ( C WPipe ) and data parallelism ( C DP ) to choose the appropriate ratio of depth ( d ) and width ( w )." }, { "text": "When C WPipe is greater than C DP , we reduce the value of d : w, d ∗ w = N GPU ." } ]
NvI7ejSHFe.ppieLd2M4a.00
PAU (Molina et al., 2019) leverages Pad´e approximation to form its search space. Motivated by the connection between Swish and ReLU, ACON (Ma et al., 2021) is proposed as an smooth approximator to the general Maxout family activation functions (Goodfellow et al., 2013). Our work proposes to learn adaptive activation function as a weighted sum of candidate functions, whose weights can be adapted to the underlying physics laws when modelling different PDE systems. While learning combinations of activation functions has been studied for convolutional neural networks on image classification (Dushkoff & Ptucha, 2016; Qian et al., 2018; Manessi & Rozza, 2018), we would like to argue that it is non-trivial to explore this idea in the context of PINNs. First, different PDE systems have various characteristics, which are difficult to model accurately by a single activation function. In contrast, learning a combination of candidate functions makes it possible to embed prior knowledge about the physics system into neural networks by including activation functions with suitable properties. In addition, while previous methods (Dushkoff & Ptucha, 2016; Qian et al., 2018; Manessi & Rozza, 2018) experiment with limit choice of activation functions, we add most of commonly-used activation functions into the candidate function set to ensure its diversity and to avoid the repetitive evaluations of each candidate activation function for different PDEs.
PAU (Molina et al., 2019) leverages Pad´e approximation to form its search space. Motivated by the connection between Swish and ReLU, ACON (Ma et al., 2021) is proposed as a smooth approximator to the general Maxout family activation functions (Goodfellow et al., 2013). Our work proposes to learn an adaptive activation function as a weighted sum of candidate functions, whose weights can be adapted to the underlying physics laws when modelling different PDE systems. While similar ideas have been studied for convolutional neural networks in image classification (Dushkoff & Ptucha, 2016; Qian et al., 2018; Manessi & Rozza, 2018; S¨utfeld et al., 2020), some technical challenges remain unexplored in the context of PINNs, which have a higher demand for the smoothness and diversity of the candidate functions. First, the optimization of PDE-based constraints needs the activation function to provide higher-order derivatives, which causes the failure of widely-used ReLUs in PINNs. Second, unlike the image classification tasks, different PDE systems could have various characteristics, such as periodicity and rapid decay. This leads to a higher requirement for the diversity of the candidate functions. To overcome these challenges, we propose to build the candidate function set with simple elementary functions to embed the prior knowledge of physics systems, as well as commonly-used activation functions to ensure the diversity.
{ "annotation": [ "Rewriting_heavy", "Development" ], "instruction": "", "annotator": "annotator_07" }
null
NvI7ejSHFe
ppieLd2M4a
0
[ { "text": "PAU (Molina et al., 2019) leverages Pad´e approximation to form its search space." }, { "text": "Motivated by the connection between Swish and ReLU, ACON (Ma et al., 2021) is proposed as an smooth approximator to the general Maxout family activation functions (Goodfellow et al., 2013)." }, { "text": "Our work proposes to learn adaptive activation function as a weighted sum of candidate functions, whose weights can be adapted to the underlying physics laws when modelling different PDE systems." }, { "text": "While learning combinations of activation functions has been studied for convolutional neural networks on image classification (Dushkoff & Ptucha, 2016; Qian et al., 2018; Manessi & Rozza, 2018), we would like to argue that it is non-trivial to explore this idea in the context of PINNs." }, { "text": "First, different PDE systems have various characteristics, which are difficult to model accurately by a single activation function." }, { "text": "In contrast, learning a combination of candidate functions makes it possible to embed prior knowledge about the physics system into neural networks by including activation functions with suitable properties." }, { "text": "In addition, while previous methods (Dushkoff & Ptucha, 2016; Qian et al., 2018; Manessi & Rozza, 2018) experiment with limit choice of activation functions, we add most of commonly-used activation functions into the candidate function set to ensure its diversity and to avoid the repetitive evaluations of each candidate activation function for different PDEs." } ]
[ { "text": "PAU (Molina et al., 2019) leverages Pad´e approximation to form its search space." }, { "text": "Motivated by the connection between Swish and ReLU, ACON (Ma et al., 2021) is proposed as a smooth approximator to the general Maxout family activation functions (Goodfellow et al., 2013)." }, { "text": "Our work proposes to learn an adaptive activation function as a weighted sum of candidate functions, whose weights can be adapted to the underlying physics laws when modelling different PDE systems." }, { "text": "While similar ideas have been studied for convolutional neural networks in image classification (Dushkoff & Ptucha, 2016; Qian et al., 2018; Manessi & Rozza, 2018; S¨utfeld et al., 2020), some technical challenges remain unexplored in the context of PINNs, which have a higher demand for the smoothness and diversity of the candidate functions." }, { "text": "First, the optimization of PDE-based constraints needs the activation function to provide higher-order derivatives, which causes the failure of widely-used" }, { "text": "ReLUs in PINNs." }, { "text": "Second, unlike the image classification tasks, different PDE systems could have various characteristics, such as periodicity and rapid decay. This leads to a higher requirement for the diversity of the candidate functions. To overcome these challenges, we propose to build the candidate function set with simple elementary functions to embed the prior knowledge of physics systems, as well as commonly-used activation functions to ensure the diversity." } ]
PDvmJtmgQb.gGrpxbc7UI.01
In-distribution vs. Out-of-distribution Public Data: Prior works have considered both settings where the public data set D pub comes from the same distribution as the private data D priv (a.k.a. indistribution ) (Bassily et al., 2018a; Zhou et al., 2020; Kairouz et al., 2021a; Asi et al., 2021), and where the distributions are different (a.k.a. out-of-distribution ) (Abadi et al., 2016; Papernot et al., 2016; 2018; Liu et al., 2021). In principle, our algorithm can be used in out-of-distribution settings , but our results in this paper are for the in-distribution case. In the in-distribution setting, it is typical that there are fewer public data samples available than private data samples – i.e., n pub (cid:28) n priv – as it is harder to obtain public data sets than ones with privacy constraints attached. In-distribution public data could come from either altruistic opt-in users (Merriman, 2014; Avent et al., 2017) or from users who are incentivized to provide such data (e.g., mechanical turks). Out-of-distribution public data may be easier to obtain but can have various degrees of freedom; e.g., the domains of private and public data may not be identical, the representation of some classes may vary, the distributions can be mean shifted, etc. It is usually hard to quantify these degrees of freedom to the extent that we can provide precise guarantees. Hence, we leave this aspect for future exploration, and work with the idealized assumption that the public data comes from the same distribution as the private data, or, at least, that the differences between these two distributions are not material.
In-distribution vs. Out-of-distribution Public Data: Prior works have considered both settings where the public data set D pub comes from the same distribution as the private data D priv (a.k.a. in-distribution ) [4, 7, 23, 39], and where the distributions are different (a.k.a. out-of-distribution ) [1, 26, 31, 32]. In principle, our algorithm can be used in out-of-distribution settings , but our results in this paper are for the in-distribution case. In the in-distribution setting, it is typical that there are fewer public data samples available than private data samples – i.e., n pub (cid:28) n priv – as it is harder to obtain public data sets than ones with privacy constraints attached. In-distribution public data could come from either altruistic opt-in users [5, 28] or from users who are incentivized to provide such data (e.g., mechanical turks). Out-of-distribution public data may be easier to obtain but can have various degrees of freedom; e.g., the domains of private and public data may not be identical, the representation of some classes may vary, the distributions can be mean shifted, etc. It is usually hard to quantify these degrees of freedom to the extent that we can provide precise guarantees. Hence, we leave this aspect for future exploration, and work with the idealized assumption that the public data comes from the same distribution as the private data, or, at least, that the differences between these two distributions are not material.
{ "annotation": [ "Unusable" ], "instruction": "", "annotator": "annotator_03" }
{ "annotation": [ "Unusable" ], "instruction": "Convert in-text citations to numbers.", "annotator": "annotator_09" }
PDvmJtmgQb
gGrpxbc7UI
1
[ { "text": "In-distribution vs. Out-of-distribution Public Data: Prior works have considered both settings where the public data set D pub comes from the same distribution as the private data D priv (a.k.a. indistribution ) (Bassily et al., 2018a;" }, { "text": "Zhou et al., 2020; Kairouz et al., 2021a; Asi et al., 2021), and where the distributions are different (a.k.a. out-of-distribution ) (Abadi et al., 2016; Papernot et al., 2016; 2018; Liu et al., 2021)." }, { "text": "In principle, our algorithm can be used in out-of-distribution settings , but our results in this paper are for the in-distribution case." }, { "text": "In the in-distribution setting, it is typical that there are fewer public data samples available than private data samples – i.e., n pub (cid:28) n priv – as it is harder to obtain public data sets than ones with privacy constraints attached." }, { "text": "In-distribution public data could come from either altruistic opt-in users (Merriman, 2014; Avent et al., 2017) or from users who are incentivized to provide such data (e.g., mechanical turks)." }, { "text": "Out-of-distribution public data may be easier to obtain but can have various degrees of freedom; e.g., the domains of private and public data may not be identical, the representation of some classes may vary, the distributions can be mean shifted, etc." }, { "text": "It is usually hard to quantify these degrees of freedom to the extent that we can provide precise guarantees." }, { "text": "Hence, we leave this aspect for future exploration, and work with the idealized assumption that the public data comes from the same distribution as the private data, or, at least, that the differences between these two distributions are not material." } ]
[ { "text": "In-distribution vs. Out-of-distribution Public Data: Prior works have considered both settings where the public data set D pub comes from the same distribution as the private data D priv (a.k.a." }, { "text": "in-distribution ) [4, 7, 23, 39], and where the distributions are different (a.k.a. out-of-distribution ) [1, 26, 31, 32]." }, { "text": "In principle, our algorithm can be used in out-of-distribution settings , but our results in this paper are for the in-distribution case." }, { "text": "In the in-distribution setting, it is typical that there are fewer public data samples available than private data samples – i.e., n pub (cid:28) n priv – as it is harder to obtain public data sets than ones with privacy constraints attached." }, { "text": "In-distribution public data could come from either altruistic opt-in users [5, 28] or from users who are incentivized to provide such data (e.g., mechanical turks)." }, { "text": "Out-of-distribution public data may be easier to obtain but can have various degrees of freedom; e.g., the domains of private and public data may not be identical, the representation of some classes may vary, the distributions can be mean shifted, etc." }, { "text": "It is usually hard to quantify these degrees of freedom to the extent that we can provide precise guarantees." }, { "text": "Hence, we leave this aspect for future exploration, and work with the idealized assumption that the public data comes from the same distribution as the private data, or, at least, that the differences between these two distributions are not material." } ]
lLwt-9RJ2tm.XJsauLjck.01
Given such a sparsifier, by setting S = { u } and T = { v } , one can recover whether or not edge ( u, v ) is present in G for any u, v ∈ V . discount in its parent’s contribution to the cost, which after cascading gives a third view of Eq.
Given such a sparsifier, by setting S = { u } and T = { v } , one can recover whether or not edge ( u, v ) is present in G for any u, v ∈ V . is the second observation: the negative term w G ( S ∪ T, S ∪ T ) that internal node S contributes to the cost also appears as a positive term in its parent’s contribution to the cost. We can pass this term as a discount in its parent’s contribution to the cost, which after cascading gives a third view of Eq.
{ "annotation": [ "Unusable" ], "instruction": "", "annotator": "annotator_07" }
null
lLwt-9RJ2tm
XJsauLjck
1
[ { "text": "Given such a sparsifier, by setting S = { u } and T = { v } , one can recover whether or not edge ( u, v ) is present in G for any u, v ∈ V ." }, { "text": "" }, { "text": " discount in its parent’s contribution to the cost, which after cascading gives a third view of Eq." } ]
[ { "text": "Given such a sparsifier, by setting S = { u } and T = { v } , one can recover whether or not edge ( u, v ) is present in G for any u, v ∈ V ." }, { "text": "is the second observation: the negative term w G ( S ∪ T, S ∪ T ) that internal node S contributes to the cost also appears as a positive term in its parent’s contribution to the cost." }, { "text": "We can pass this term as a discount in its parent’s contribution to the cost, which after cascading gives a third view of Eq." } ]
isfcBsgB-H.SBe0hOLmg9.00
Line-Entry System) or GNN-based (Graph Neural Networks) MRL methods either take SMILES strings as input that have difficulty in encoding molecule structure information, or over-emphasize the importance of GNN architectures but neglect their generalization ability. Here we propose using chemical reactions to assist learning molecule representation. The key idea of our approach is to preserve the equivalence of molecules with respect to chemical reactions in the embedding space, i.e., forcing the sum of reactant embeddings and the sum of product embeddings to be equal for each chemical equation. This constraint is proven effective to 1) keep the embedding space well-organized and 2) improve the generalization ability of molecule embeddings. Moreover, our model can use any GNN as the molecule encoder and is thus agnostic to GNN architectures. Experimental results demonstrate that our method achieves state-of-the-art performance in a variety of downstream tasks, e.g., 17.4% absolute Hit@1 gain in chemical reaction prediction, 2.3% absolute AUC gain in molecule property prediction, and 18.5% relative RMSE gain in graph-edit-distance prediction, respectively, over the best baseline method. All experimental code is provided in the supplementary material.
Line-Entry System) or GNN-based (Graph Neural Networks) MRL methods either take SMILES strings as input that have difficulty in encoding molecule structure information, or over-emphasize the importance of GNN architectures but neglect their generalization ability. Here we propose using chemical reactions to assist learning molecule representation. The key idea of our approach is to preserve the equivalence of molecules with respect to chemical reactions in the embedding space, i.e., forcing the sum of reactant embeddings and the sum of product embeddings to be equal for each chemical equation. This constraint is proven effective to 1) keep the embedding space well-organized and 2) improve the generalization ability of molecule embeddings. Moreover, our model can use any GNN as the molecule encoder and is thus agnostic to GNN architectures. Experimental results demonstrate that our method achieves state-of-the-art performance in a variety of downstream tasks, e.g., reaction product prediction, molecule property prediction, reaction classification, and graph-edit-distance prediction. The code is available at https://github.com/hwwang55/MolR .
{ "annotation": [ "Concision" ], "instruction": "Remove unnecessary details on specific numerical performance of the model. Link to https://github.com/hwwang55/MolR instead of supplementary material.", "annotator": "annotator_03" }
{ "annotation": [ "Concision", "Content_substitution" ], "instruction": "Make the second last sentence from the end of this paragraph more concise by removing too precise details. For the last sentence, the code is now provided on github.", "annotator": "annotator_07" }
isfcBsgB-H
SBe0hOLmg9
0
[ { "text": "Line-Entry System)" }, { "text": "or GNN-based (Graph Neural Networks)" }, { "text": "MRL methods either take SMILES strings as input that have difficulty in encoding molecule structure information, or over-emphasize the importance of GNN architectures but neglect their generalization ability." }, { "text": "Here we propose using chemical reactions to assist learning molecule representation." }, { "text": "The key idea of our approach is to preserve the equivalence of molecules with respect to chemical reactions in the embedding space, i.e., forcing the sum of reactant embeddings and the sum of product embeddings to be equal for each chemical equation." }, { "text": "This constraint is proven effective to 1) keep the embedding space well-organized and 2) improve the generalization ability of molecule embeddings." }, { "text": "Moreover, our model can use any GNN as the molecule encoder and is thus agnostic to GNN architectures." }, { "text": "Experimental results demonstrate that our method achieves state-of-the-art performance in a variety of downstream tasks, e.g., 17.4% absolute Hit@1 gain in chemical reaction prediction, 2.3% absolute AUC gain in molecule property prediction, and 18.5% relative RMSE gain in graph-edit-distance prediction, respectively, over the best baseline method." }, { "text": "All experimental code is provided in the supplementary material." } ]
[ { "text": "Line-Entry System)" }, { "text": "or GNN-based (Graph Neural Networks)" }, { "text": "MRL methods either take SMILES strings as input that have difficulty in encoding molecule structure information, or over-emphasize the importance of GNN architectures but neglect their generalization ability." }, { "text": "Here we propose using chemical reactions to assist learning molecule representation." }, { "text": "The key idea of our approach is to preserve the equivalence of molecules with respect to chemical reactions in the embedding space, i.e., forcing the sum of reactant embeddings and the sum of product embeddings to be equal for each chemical equation." }, { "text": "This constraint is proven effective to 1) keep the embedding space well-organized and 2) improve the generalization ability of molecule embeddings." }, { "text": "Moreover, our model can use any GNN as the molecule encoder and is thus agnostic to GNN architectures." }, { "text": "Experimental results demonstrate that our method achieves state-of-the-art performance in a variety of downstream tasks, e.g., reaction product prediction, molecule property prediction, reaction classification, and graph-edit-distance prediction." }, { "text": "The code is available at https://github.com/hwwang55/MolR ." } ]
ZpvHK3zB43.QhVM4p3DKI.00
We have proposed FROB which uses the generated support boundary of the normal data distribution for few-shot OoD detection. FROB tackles the few-shot problem using classification with OoD detection. In real-world applications, in the wild, it is a challenge to robustly perform classification and few-shot OoD detection with high levels of reliability. The contribution of FROB is the combination of the generated boundary in a self-supervised learning manner and the imposition of low confidence at this learned boundary. To improve robustness, FROB generates strong adversarial samples on the boundary, and forces samples from OoD and on the boundary to be less confident. By including the boundary, FROB reduces the threshold linked to the model’s few-shot robustness. FROB redesigns, restructures, and streamlines OE to work even for zero-shots. FROB maintains the OoD performance approximately constant, independent of the few-shot number. The performance of FROB with the self-supervised learning boundary is robust and effective as the performance is approximately stable as the few-shot outliers decrease in number, while the performance of FROB without O ( z ) decreases as the few-shots decrease. The evaluation of FROB, on many sets, shows that it is effective, achieves competitive state-of-the-art performance, and outperforms benchmarks in the few-shot OoD detection setting in AUC-type metrics. In the future, in addition to confidence and class, FROB will also output important regions and bounding boxes around abnormal objects.
We have proposed FROB which uses the generated support boundary of the normal data distribution for few-shot OoD detection. FROB tackles the few-shot problem using classification with OoD detection. The contribution of FROB is the combination of the generated boundary in a self-supervised learning manner and the imposition of low confidence at this learned boundary. To improve robustness, FROB generates strong adversarial samples on the boundary, and forces samples from OoD and on the boundary to be less confident. By including the self-produced boundary, FROB reduces the threshold linked to the model’s few-shot robustness. FROB redesigns, restructures, and streamlines OE to work even for zero-shots. It robustly performs classification and few-shot OoD detection with a high level of reliability in real-world applications, in the wild. FROB maintains the OoD performance approximately constant, independent of the few-shot number. The performance of FROB with the self-supervised learning boundary is robust and effective, as the performance is approximately stable as the few-shot outliers decrease in number, while the performance of FROB without O ( z ) decreases as the few-shots decrease. The evaluation of FROB, on many sets, shows that it is effective, achieves competitive state-of-the-art performance, and outperforms benchmarks in the few-shot OoD detection setting in AUC-type metrics. In the future, in addition to confidence and the class, FROB will also output important regions and bounding boxes around abnormal objects.
{ "annotation": [ "Content_substitution" ], "instruction": "", "annotator": "annotator_02" }
{ "annotation": [ "Content_substitution" ], "instruction": "", "annotator": "annotator_07" }
ZpvHK3zB43
QhVM4p3DKI
0
[ { "text": "We have proposed FROB which uses the generated support boundary of the normal data distribution for few-shot OoD detection." }, { "text": "FROB tackles the few-shot problem using classification with OoD detection." }, { "text": "In real-world applications, in the wild, it is a challenge to robustly perform classification and few-shot OoD detection with high levels of reliability." }, { "text": "The contribution of FROB is the combination of the generated boundary in a self-supervised learning manner and the imposition of low confidence at this learned boundary." }, { "text": "To improve robustness, FROB generates strong adversarial samples on the boundary, and forces samples from OoD and on the boundary to be less confident." }, { "text": "By including the boundary, FROB reduces the threshold linked to the model’s few-shot robustness." }, { "text": "FROB redesigns, restructures, and streamlines OE to work even for zero-shots." }, { "text": "" }, { "text": "FROB maintains the OoD performance approximately constant, independent of the few-shot number." }, { "text": "The performance of FROB with the self-supervised learning boundary is robust and effective as the performance is approximately stable as the few-shot outliers decrease in number, while the performance of FROB without O ( z ) decreases as the few-shots decrease." }, { "text": "The evaluation of FROB, on many sets, shows that it is effective, achieves competitive state-of-the-art performance, and outperforms benchmarks in the few-shot OoD detection setting in AUC-type metrics." }, { "text": "In the future, in addition to confidence and class, FROB will also output important regions and bounding boxes around abnormal objects." } ]
[ { "text": "We have proposed FROB which uses the generated support boundary of the normal data distribution for few-shot OoD detection." }, { "text": "FROB tackles the few-shot problem using classification with OoD detection." }, { "text": "" }, { "text": "The contribution of FROB is the combination of the generated boundary in a self-supervised learning manner and the imposition of low confidence at this learned boundary." }, { "text": "To improve robustness, FROB generates strong adversarial samples on the boundary, and forces samples from OoD and on the boundary to be less confident." }, { "text": "By including the self-produced boundary, FROB reduces the threshold linked to the model’s few-shot robustness." }, { "text": "FROB redesigns, restructures, and streamlines OE to work even for zero-shots." }, { "text": "It robustly performs classification and few-shot OoD detection with a high level of reliability in real-world applications, in the wild." }, { "text": "FROB maintains the OoD performance approximately constant, independent of the few-shot number." }, { "text": "The performance of FROB with the self-supervised learning boundary is robust and effective, as the performance is approximately stable as the few-shot outliers decrease in number, while the performance of FROB without O ( z ) decreases as the few-shots decrease." }, { "text": "The evaluation of FROB, on many sets, shows that it is effective, achieves competitive state-of-the-art performance, and outperforms benchmarks in the few-shot OoD detection setting in AUC-type metrics." }, { "text": "In the future, in addition to confidence and the class, FROB will also output important regions and bounding boxes around abnormal objects." } ]
lLwt-9RJ2tm.XJsauLjck.02
Several other variations of this basic setup have been considered. For example, [12] have considered this problem in the presence of structural constraints. [11, 31, 34] considered a setting where vertices are embedded in a metric space and the similarity/dissimilarity between two vertices is given by their distances. The most relevant to our work amongst these is [34] which considered this metric embedded hierarchical clustering problem in a streaming setting. However, the stream in their setting is composed of vertices while edge weights can be directly inferred using distances between vertices; whereas the stream in our streaming setting is composed of edges while vertices are already known. Moreover, their study is only limited to the streaming setting. There has also been work on designing faster/parallel agglomerative algorithms such as single-linkage, average-linkage etc.[40, 17]. However, these algorithms are not known to achieve a good approximation factor for Dasgupta’s objective, which is the main focus of our paper. [27] studied the hierarchical clustering problem in an MPC setting. However, their work only considered the maximization objectives[32, 15], while our work is primarily focussed on the minimization objective of [16].
Several other variations of this basic setup have been considered. For example, [12] have considered this problem in the presence of structural constraints. [11, 34, 37] considered a setting where vertices are embedded in a metric space and the similarity/dissimilarity between two vertices is given by their distances. The most relevant to our work amongst these is [37] which considered this metric embedded hierarchical clustering problem in a streaming setting. However, the stream in their setting is composed of vertices while edge weights can be directly inferred using distances between vertices; whereas the stream in our streaming setting is composed of edges while vertices are already known. Moreover, their study is only limited to the streaming setting. There has also been work on designing faster/parallel agglomerative algorithms such as single-linkage, average-linkage etc. While these works share the same motivation as ours, namely, scaling HC algorithms to massive datasets, these results are largely orthogonal to ours. The primary philosophical difference is that these aforementioned works are aimed at speeding up/parallelizing very specific kinds of linkage based algorithms, while recovering the same or similar cluster trees (under very different notions of similarity) that would have been computed by the slower/sequential algorithm. Moreover, the specific algorithms considered in these works have no known approximation guarantees for Dasgupta’s objective. Our work on the other hand approaches this problem from an optimization perspective. Through data sparsification, we aim to recover a cluster tree with marginal loss in objective function value as compared to one computed over the entire (dense) input data by any given HC algorithm as a blackbox, achieving a speedup in runtime or reducing its memory requirement due to sparsity. [29] studied the hierarchical clustering problem in an MPC setting. However, their work only considered the maximization objectives [35, 17], while our work is primarily focussed on the minimization objective of [18].
{ "annotation": [ "Content_addition" ], "instruction": "", "annotator": "annotator_07" }
null
lLwt-9RJ2tm
XJsauLjck
2
[ { "text": "Several other variations of this basic setup have been considered." }, { "text": "For example, [12] have considered this problem in the presence of structural constraints." }, { "text": "[11, 31, 34] considered a setting where vertices are embedded in a metric space and the similarity/dissimilarity between two vertices is given by their distances." }, { "text": "The most relevant to our work amongst these is [34] which considered this metric embedded hierarchical clustering problem in a streaming setting." }, { "text": "However, the stream in their setting is composed of vertices while edge weights can be directly inferred using distances between vertices; whereas the stream in our streaming setting is composed of edges while vertices are already known." }, { "text": "Moreover, their study is only limited to the streaming setting." }, { "text": "There has also been work on designing faster/parallel agglomerative algorithms such as single-linkage, average-linkage etc.[40, 17]." }, { "text": "However, these algorithms are not known to achieve a good approximation factor for" }, { "text": "Dasgupta’s objective, which is the main focus of our paper." }, { "text": "[27] studied the hierarchical clustering problem in an MPC setting." }, { "text": "However, their work only considered the maximization objectives[32, 15], while our work is primarily focussed on the minimization objective of [16]." } ]
[ { "text": "Several other variations of this basic setup have been considered." }, { "text": "For example, [12] have considered this problem in the presence of structural constraints." }, { "text": "[11, 34, 37] considered a setting where vertices are embedded in a metric space and the similarity/dissimilarity between two vertices is given by their distances." }, { "text": "The most relevant to our work amongst these is [37] which considered this metric embedded hierarchical clustering problem in a streaming setting." }, { "text": "However, the stream in their setting is composed of vertices while edge weights can be directly inferred using distances between vertices; whereas the stream in our streaming setting is composed of edges while vertices are already known." }, { "text": "Moreover, their study is only limited to the streaming setting." }, { "text": "There has also been work on designing faster/parallel agglomerative algorithms such as single-linkage, average-linkage etc." }, { "text": "While these works share the same motivation as ours, namely, scaling HC algorithms to massive datasets, these results are largely orthogonal to ours." }, { "text": "The primary philosophical difference is that these aforementioned works are aimed at speeding up/parallelizing very specific kinds of linkage based algorithms, while recovering the same or similar cluster trees (under very different notions of similarity) that would have been computed by the slower/sequential algorithm. Moreover, the specific algorithms considered in these works have no known approximation guarantees for Dasgupta’s objective. Our work on the other hand approaches this problem from an optimization perspective. Through data sparsification, we aim to recover a cluster tree with marginal loss in objective function value as compared to one computed over the entire (dense) input data by any given HC algorithm as a blackbox, achieving a speedup in runtime or reducing its memory requirement due to sparsity." }, { "text": "[29] studied the hierarchical clustering problem in an MPC setting." }, { "text": "However, their work only considered the maximization objectives [35, 17], while our work is primarily focussed on the minimization objective of [18]." } ]
zzdwUcxTjWY.rVxmgW1FRK.00
Modern deep neural networks have achieved unprecedented success in known contexts for which they are trained, yet they do not necessarily know what they don’t know (Nguyen et al., 2015). In particular, neural networks have been shown to produce high posterior probability for out-of-distribution (OOD) test inputs, which should not be predicted by the model. Taking self-driving car as an example, an object detection model trained to recognize in-distribution objects ( e.g., cars, stop signs) can produce a high-confidence prediction for an unseen object of a moose (see Figure 1(a)). Such a failure case raises concerns in model reliability, and worse, may lead to a catastrophic effect when deployed in safety-critical applications.
Modern deep neural networks have achieved unprecedented success in known contexts for which they are trained, yet they often struggle to handle the unknowns. In particular, neural networks have been shown to produce high posterior probability for out-of-distribution (OOD) test inputs (Nguyen et al., 2015), which arise from unknown categories and should not be predicted by the model. Taking self-driving car as an example, an object detection model trained to recognize in-distribution objects ( e.g., cars, stop signs) can produce a high-confidence prediction for an unseen object of a moose; see Figure 1(a). Such a failure case raises concerns in model reliability, and worse, may lead to catastrophe when deployed in safety-critical applications.
{ "annotation": [ "Rewriting_light" ], "instruction": "Improve the English of this paragraph.", "annotator": "annotator_02" }
{ "annotation": [ "Rewriting_light" ], "instruction": "Make this paragraph more formal and fitting to academic style.", "annotator": "annotator_07" }
zzdwUcxTjWY
rVxmgW1FRK
0
[ { "text": "Modern deep neural networks have achieved unprecedented success in known contexts for which they are trained, yet they do not necessarily know what they don’t know (Nguyen et al., 2015). In particular, neural networks have been shown to produce high posterior probability for out-of-distribution (OOD) test inputs, which should not be predicted by the model." }, { "text": "Taking self-driving car as an example, an object detection model trained to recognize in-distribution objects ( e.g., cars, stop signs) can produce a high-confidence prediction for an unseen object of a moose (see Figure 1(a))." }, { "text": "Such a failure case raises concerns in model reliability, and worse, may lead to a catastrophic effect when deployed in safety-critical applications." } ]
[ { "text": "Modern deep neural networks have achieved unprecedented success in known contexts for which they are trained, yet they often struggle to handle the unknowns. In particular, neural networks have been shown to produce high posterior probability for out-of-distribution (OOD) test inputs (Nguyen et al., 2015), which arise from unknown categories and should not be predicted by the model." }, { "text": "Taking self-driving car as an example, an object detection model trained to recognize in-distribution objects ( e.g., cars, stop signs) can produce a high-confidence prediction for an unseen object of a moose; see Figure 1(a)." }, { "text": "Such a failure case raises concerns in model reliability, and worse, may lead to catastrophe when deployed in safety-critical applications." } ]
jP_amc4U0A.Y2t7AFVo5Z.00
We proposed a new inference lgorithm for distributions parametrized by normalizing flow models. The need for approximate inference is motivated by our theoretical hardness result for exact inference,which is surprising given that it applies to invertible models. We also presented a detailed empirical evaluation of our method with both quantitative and qualitative results on a wide range of tasks and datasets. Overall, we believe that the idea of a pre-generator creating structured noise is a useful and general method for leveraging pre-trained generators to solve new generative problems.
We proposed a new inference algorithm for distributions parametrized by a flow. The need for approximate inference is motivated by the hardness of exact inference. We also presented a detailed empirical evaluation of our method with both quantitative and qualitative results on a wide range of tasks and datasets. Overall, we believe that the idea of a pre-generator creating structured noise is a useful and general method for leveraging pre-trained generators to solve new generative problems.
{ "annotation": [ "Concision" ], "instruction": "Remove details which are unnecessary for the overall paragraph. Fix any spelling mistakes.", "annotator": "annotator_03" }
{ "annotation": [ "Concision" ], "instruction": "Correct and concise the two first sentences.", "annotator": "annotator_07" }
jP_amc4U0A
Y2t7AFVo5Z
0
[ { "text": "We proposed a new inference lgorithm for distributions parametrized by normalizing flow models." }, { "text": "The need for approximate inference is motivated by our theoretical hardness result for exact inference,which is surprising given that it applies to invertible models." }, { "text": "We also presented a detailed empirical evaluation of our method with both quantitative and qualitative results on a wide range of tasks and datasets." }, { "text": "Overall, we believe that the idea of a pre-generator creating structured noise is a useful and general method for leveraging pre-trained generators to solve new generative problems." } ]
[ { "text": "We proposed a new inference algorithm for distributions parametrized by a flow." }, { "text": "The need for approximate inference is motivated by the hardness of exact inference." }, { "text": "We also presented a detailed empirical evaluation of our method with both quantitative and qualitative results on a wide range of tasks and datasets." }, { "text": "Overall, we believe that the idea of a pre-generator creating structured noise is a useful and general method for leveraging pre-trained generators to solve new generative problems." } ]
fDUdAYCQqZy.0cNiGAHFml.02
We use the following toy example to further illustrate the trade-offs achieved by EVL. Consider a random generated MDP. When the operator can be applied exactly, the Bellman optimality operator is sufficient to learn the optimal value V ∗ . However, applying operators with an offline dataset raises a noise on the actual operator due to the estimation error with finite and biased data. We simulate this effect by adding random Gaussian noise to the operator. Applying the optimality operator on offline datasets can lead to severe overestimation due to the maximization bias and bootstrapping. The value estimation learned by EVL, on the contrary, achieves a trade-off between learning optimal policy and behavior cloning and can be close to the optimal value with proper chosen τ , as depicted in Figure 2. The estimation error can be significant when the dataset is small, and EVL needs a smaller τ to be more conservative and closer to behavior cloning. When the dataset is large, the estimation error becomes small, and we can use a larger τ to recover the optimal policy. However, the expectile operator in Equation 2 does not have a closed-form solution. In practice, we consider the one-step gradient expectile operator
We use the following toy example to further illustrate the trade-offs achieved by EVL. Consider a random generated MDP. When the operator can be applied exactly, the Bellman optimality operator is sufficient to learn the optimal value V ∗ . However, applying operators with an offline dataset raises a noise on the actual operator due to the estimation error with finite and biased data. We simulate this effect by adding random Gaussian noise to the operator. Applying the optimality operator on offline datasets can lead to severe overestimation due to the maximization bias and bootstrapping. The value estimation learned by EVL, on the contrary, achieves a trade-off between learning optimal policy and behavior cloning and can be close to the optimal value with proper chosen τ , as depicted in Figure 2. The noise upon the operator largely depends on the size of the dataset. Estimation error can be significant with insufficent data. In this case, we need a small τ to be conservative and be close to behavior cloning. When the dataset is large and we are able to have an accurate estimation for the operator, we can use a larger τ to recover the optimal policy. By adjusting τ , the expectile operator can accommodate variant types of datasets. However, the expectile operator in Equation 4 does not have a closed-form solution. In practice, we consider the one-step gradient expectile operator
{ "annotation": [ "Content_addition", "Development" ], "instruction": "", "annotator": "annotator_07" }
null
fDUdAYCQqZy
0cNiGAHFml
2
[ { "text": "We use the following toy example to further illustrate the trade-offs achieved by EVL." }, { "text": "Consider a random generated MDP." }, { "text": "When the operator can be applied exactly, the Bellman optimality operator is sufficient to learn the optimal value V ∗ ." }, { "text": "However, applying operators with an offline dataset raises a noise on the actual operator due to the estimation error with finite and biased data." }, { "text": "We simulate this effect by adding random Gaussian noise to the operator." }, { "text": "Applying the optimality operator on offline datasets can lead to severe overestimation due to the maximization bias and bootstrapping." }, { "text": "The value estimation learned by EVL, on the contrary, achieves a trade-off between learning optimal policy and behavior cloning and can be close to the optimal value with proper chosen τ , as depicted in Figure 2." }, { "text": "" }, { "text": "The estimation error can be significant when the dataset is small, and EVL needs a smaller τ to be more conservative and closer to behavior cloning." }, { "text": "When the dataset is large, the estimation error becomes small, and we can use a larger τ to recover the optimal policy." }, { "text": "" }, { "text": "However, the expectile operator in Equation 2 does not have a closed-form solution." }, { "text": "In practice, we consider the one-step gradient expectile operator" } ]
[ { "text": "We use the following toy example to further illustrate the trade-offs achieved by EVL." }, { "text": "Consider a random generated MDP." }, { "text": "When the operator can be applied exactly, the Bellman optimality operator is sufficient to learn the optimal value V ∗ ." }, { "text": "However, applying operators with an offline dataset raises a noise on the actual operator due to the estimation error with finite and biased data." }, { "text": "We simulate this effect by adding random Gaussian noise to the operator." }, { "text": "Applying the optimality operator on offline datasets can lead to severe overestimation due to the maximization bias and bootstrapping." }, { "text": "The value estimation learned by EVL, on the contrary, achieves a trade-off between learning optimal policy and behavior cloning and can be close to the optimal value with proper chosen τ , as depicted in Figure 2." }, { "text": "The noise upon the operator largely depends on the size of the dataset." }, { "text": "Estimation error can be significant with insufficent data. In this case, we need a small τ to be conservative and be close to behavior cloning." }, { "text": "When the dataset is large and we are able to have an accurate estimation for the operator, we can use a larger τ to recover the optimal policy." }, { "text": "By adjusting τ , the expectile operator can accommodate variant types of datasets." }, { "text": "However, the expectile operator in Equation 4 does not have a closed-form solution." }, { "text": "In practice, we consider the one-step gradient expectile operator" } ]
vokZIVWUXN.zMdXRtaisu.01
Xie et al., 2020; Sohn et al., 2020) (also known as input consistency regularization) or fit the unlabeled data on its predictions generated by a previously learned model (Lee, 2013; Chen et al., 2020b). It is interesting that UDA (Xie et al., 2020) reveals the crucial role of noise produced by advanced data augmentation methods. FixMatch (Sohn et al., 2020) usestwo versions of augmentation (weak augmentation and strong augmentation) and argues that predictions from weakly-augmented imagescan be used to supervise the output of strongly augmented data. SimCLRv2 (Chen et al., 2020b) first fine-tunes the pre-trained model from the labeled data and then distills on the unlabeled data. Self-Tuning (Wang et al., 2021) further improves data efficiency by a pseudo group contrast (PGC) mechanism but limited on classification setup. Moreover, various recent methods (van denOord et al., 2018; He et al., 2020; Wu et al., 2018; Hadsell et al., 2006; Tian et al., 2019; Chen et al., 2020a) improve data efficiency by self-supervised learning. However, most existing data-efficient methods focus on classification setup while rare attention has been paid to deep regression .
Xie et al., 2020; Sohn et al., 2020) (a.k.a. input consistency regularization) or fit the unlabeled data on its predictions generated by a previously learned model (Lee, 2013; Chen et al., 2020b). Further, Co-Training (Blum & Mitchell, 1998b), Deep Co-Training Qiao et al. and Tri-Training (Zhou & Li, 2005a) improve data efficiency from an interesting perspective of different views of classifiers. MixMatch (Berthelot et al., 2019), ReMixMatch (Berthelot et al., 2020) and UDA (Xie et al., 2020) reveal the crucial role of noise produced by advanced data augmentation methods. FixMatch (Sohn et al., 2020) uses predictions from weakly-augmented images to supervise the output of strongly augmented data. Meta Pseudo Labels (Pham et al., 2021) further improves data efficiency by making the teacher constantly adapted by the feedback of the student’s performance on the labeled dataset. SimCLRv2 (Chen et al., 2020b) first fine-tunes the pre-trained model from the labeled data and then distills on the unlabeled data. Self-Tuning (Wang et al., 2021) introduces a pseudo group contrast (PGC) mechanism but is limited on classification setup. Besides of involving unlabeled data from the same distribution, another promising direction for improving data efficiency is introducing a complementary perspective to further improve data efficiency by introducing a related but different domain (Long et al., 2015; Ganin & Lempitsky, 2015; Long et al., 2017; Saito et al., 2018b; Lee et al., 2019; Zhang et al., 2019; Saito et al., 2018a; 2019). Moreover, various recent methods (van den Oord et al., 2018; He et al., 2020; Wu et al., 2018; Hadsell et al., 2006; Tian et al., 2019; Chen et al., 2020a) improve data efficiency by self-supervised learning. However, most existing data-efficient methods focus on classification setup while rare attention has been paid to deep regression .
{ "annotation": [ "Content_addition" ], "instruction": "", "annotator": "annotator_02" }
{ "annotation": [ "Content_addition" ], "instruction": "", "annotator": "annotator_07" }
vokZIVWUXN
zMdXRtaisu
1
[ { "text": "Xie et al., 2020; Sohn et al., 2020) (also known as input consistency regularization) or fit the unlabeled data on its predictions generated by a previously learned model (Lee, 2013; Chen et al., 2020b)." }, { "text": "It is interesting that UDA (Xie et al., 2020) reveals the crucial role of noise produced by advanced data augmentation methods." }, { "text": "FixMatch (Sohn et al., 2020) usestwo versions of augmentation (weak augmentation and strong augmentation) and argues that predictions from weakly-augmented imagescan be used to supervise the output of strongly augmented data. " }, { "text": "SimCLRv2 (Chen et al., 2020b)" }, { "text": "first fine-tunes the pre-trained model from the labeled data and then distills on the unlabeled data." }, { "text": "Self-Tuning (Wang et al., 2021) further improves data efficiency by a pseudo group contrast (PGC) mechanism but limited on classification setup." }, { "text": " Moreover, various recent methods (van denOord et al., 2018;" }, { "text": " He et al., 2020; Wu et al., 2018; Hadsell et al., 2006; Tian et al., 2019; Chen et al., 2020a) improve data efficiency by self-supervised learning." }, { "text": "However, most existing data-efficient methods focus on classification setup while rare attention has been paid to deep regression ." } ]
[ { "text": "Xie et al., 2020; Sohn et al., 2020) (a.k.a. input consistency regularization) or fit the unlabeled data on its predictions generated by a previously learned model (Lee, 2013; Chen et al., 2020b)." }, { "text": "Further, Co-Training (Blum & Mitchell, 1998b), Deep Co-Training Qiao et al. and Tri-Training (Zhou & Li, 2005a) improve data efficiency from an interesting perspective of different views of classifiers. MixMatch (Berthelot et al., 2019), ReMixMatch (Berthelot et al., 2020) and UDA (Xie et al., 2020) reveal the crucial role of noise produced by advanced data augmentation methods." }, { "text": "FixMatch (Sohn et al., 2020) uses predictions from weakly-augmented images to supervise the output of strongly augmented data. Meta Pseudo Labels (Pham et al., 2021) further improves data efficiency by making the teacher constantly adapted by the feedback of the student’s performance on the labeled dataset." }, { "text": "SimCLRv2 (Chen et al., 2020b)" }, { "text": "first fine-tunes the pre-trained model from the labeled data and then distills on the unlabeled data." }, { "text": "Self-Tuning (Wang et al., 2021) introduces a pseudo group contrast (PGC) mechanism but is limited on classification setup." }, { "text": "Besides of involving unlabeled data from the same distribution, another promising direction for improving data efficiency is introducing a complementary perspective to further improve data efficiency by introducing a related but different domain (Long et al., 2015; Ganin & Lempitsky, 2015; Long et al., 2017; Saito et al., 2018b; Lee et al., 2019; Zhang et al., 2019; Saito et al., 2018a; 2019). Moreover, various recent methods (van den" }, { "text": "Oord et al., 2018; He et al., 2020; Wu et al., 2018; Hadsell et al., 2006; Tian et al., 2019; Chen et al., 2020a) improve data efficiency by self-supervised learning." }, { "text": "However, most existing data-efficient methods focus on classification setup while rare attention has been paid to deep regression ." } ]
tOMAf1V5dI.SNeLZ71pb5.00
CNN-based Architectures. Since AlexNet (Krizhevsky et al., 2012) won the ImageNet competition in 2012, the CNN-based architectures have gradually been utilized to automatically extract image features instead of hand-crafted features. Subsequently, the VGG network (Simonyan & Zisserman, 2015) is proposed, which purely uses a series of 3 × 3 convolution and fully connected layers, and obtains outstanding performance in image classification. Furthermore, ResNet (He et al., 2016) is proposed, which utilizes the residual connection to transfer features in different layers, thereby alleviating the problem of gradient vanishing and obtaining superior performance. After that, the residual module becomes an important component of the network design and is also employed in subsequent transformer-based architectures and MLP-based architectures. Some papers have made further improvements to the convolution operation in CNN-based architecture, such as dilated convolution (Yu & Koltun, 2016) and deformable convolution (Dai et al., 2017). EfficientNet (Tan & Le, 2019; 2021) introduces neural architecture search into CNN to search for a suitable structure.
CNN-based Architectures. Since AlexNet (Krizhevsky et al., 2012) won the ImageNet competition in 2012, the CNN-based architectures have gradually been utilized to automatically extract image features instead of hand-crafted features. Subsequently, the VGG network (Simonyan & Zisserman, 2015) is proposed, which purely uses a series of 3 × 3 convolution and fully connected layers. ResNet (He et al., 2016) utilizes the residual connection to transfer features in different layers, which alleviates the gradient vanishing and obtains superior performance. Some papers make further improvements to the convolution operation in CNN-based architecture, such as dilated convolution (Yu & Koltun, 2016) and deformable convolution (Dai et al., 2017). EfficientNet (Tan & Le, 2019; 2021) introduces neural architecture search into CNN to search for a suitable network structure.
{ "annotation": [ "Concision" ], "instruction": "Make this paragraph more concise.", "annotator": "annotator_02" }
{ "annotation": [ "Content_deletion", "Concision" ], "instruction": "Remove the sentence about the residual module. Make the paragraph more concise.", "annotator": "annotator_07" }
tOMAf1V5dI
SNeLZ71pb5
0
[ { "text": "CNN-based Architectures." }, { "text": "Since AlexNet (Krizhevsky et al., 2012) won the ImageNet competition in 2012, the CNN-based architectures have gradually been utilized to automatically extract image features instead of hand-crafted features." }, { "text": "Subsequently, the VGG network (Simonyan & Zisserman, 2015) is proposed, which purely uses a series of 3 × 3 convolution and fully connected layers, and obtains outstanding performance in image classification." }, { "text": "Furthermore, ResNet (He et al., 2016) is proposed, which utilizes the residual connection to transfer features in different layers, thereby alleviating the problem of gradient vanishing and obtaining superior performance." }, { "text": "After that, the residual module becomes an important component of the network design and is also employed in subsequent transformer-based architectures and MLP-based architectures." }, { "text": "Some papers have made further improvements to the convolution operation in CNN-based architecture, such as dilated convolution (Yu & Koltun, 2016) and deformable convolution (Dai et al., 2017)." }, { "text": "EfficientNet (Tan & Le, 2019; 2021) introduces neural architecture search into CNN to search for a suitable structure." } ]
[ { "text": "CNN-based Architectures." }, { "text": "Since AlexNet (Krizhevsky et al., 2012) won the ImageNet competition in 2012, the CNN-based architectures have gradually been utilized to automatically extract image features instead of hand-crafted features." }, { "text": "Subsequently, the VGG network (Simonyan & Zisserman, 2015) is proposed, which purely uses a series of 3 × 3 convolution and fully connected layers." }, { "text": "ResNet (He et al., 2016) utilizes the residual connection to transfer features in different layers, which alleviates the gradient vanishing and obtains superior performance." }, { "text": "" }, { "text": "Some papers make further improvements to the convolution operation in CNN-based architecture, such as dilated convolution (Yu & Koltun, 2016) and deformable convolution (Dai et al., 2017)." }, { "text": "EfficientNet (Tan & Le, 2019; 2021) introduces neural architecture search into CNN to search for a suitable network structure." } ]
CVRUl83zah.I75TtW0V7.13
In this section, we evaluate three different aspects of our contributions: the usefulness of exclusive multiset-equivariance (Subsection 4.1), the differences between our implicit and automatic differentiation (Subsection 4.2), and the applicability of iDSPN to a larger-scale dataset (Subsection 4.3). We provide detailed descriptions of the experimental procedure in Appendix D, show example inputs and outputs in Appendix E, and open-source the code to reproduce all experiments at https://github.com/<redacted>/<redacted> .
In this section, we evaluate three different aspects of our contributions: the usefulness of exclusive multiset-equivariance (Section 4.1), the differences between automatic and our approximate implicit differentiation (Section 4.2), and the applicability of iDSPN to a larger-scale dataset (Section 4.3). We provide detailed descriptions of the experimental procedure in Appendix D, show example inputs and outputs in Appendix E, and open-source the code to reproduce all experiments at https: //github.com/<redacted>/<redacted> and in the supplementary material.
{ "annotation": [ "Rewriting_light" ], "instruction": "Be clear about references.", "annotator": "annotator_09" }
{ "annotation": [ "Rewriting_medium", "Rewriting_light" ], "instruction": "Lightly clarify the text. Add a reference to appendix at the end.", "annotator": "annotator_07" }
CVRUl83zah
I75TtW0V7
13
[ { "text": "In this section, we evaluate three different aspects of our contributions: the usefulness of exclusive multiset-equivariance (Subsection 4.1), the differences between our implicit and automatic differentiation (Subsection 4.2), and the applicability of iDSPN to a larger-scale dataset (Subsection 4.3)." }, { "text": "We provide detailed descriptions of the experimental procedure in Appendix D, show example inputs and outputs in Appendix E, and open-source the code to reproduce all experiments at https://github.com/<redacted>/<redacted> ." } ]
[ { "text": "In this section, we evaluate three different aspects of our contributions: the usefulness of exclusive multiset-equivariance (Section 4.1), the differences between automatic and our approximate implicit differentiation (Section 4.2), and the applicability of iDSPN to a larger-scale dataset (Section 4.3)." }, { "text": "We provide detailed descriptions of the experimental procedure in Appendix D, show example inputs and outputs in Appendix E, and open-source the code to reproduce all experiments at https: //github.com/<redacted>/<redacted> and in the supplementary material." } ]
CzTbgFKuy.hfDu8DsDq6.02
Our main example willinstead be online job scheduling via minimizing the fractional makespan, following Lattanzi et al. They consider the problem of assigning each in a sequence of variablesized jobs to one of m machines [30, Section 3]. The authors provide an algorithm that uses predictions ˆw ∈ R m> 0 of “good" machine weights w ∈ R m> 0 to assign jobs based on how well ˆw corresponds to machine demand; the algorithm has a runtime guarantee of O (cid:0) log min { max i ˆw [ i ] / w [ i ] , m } (cid:1) . They also discus learning linear and more complicated predictors, but without guarantees. In this section we provide guarantees for the linear prediction setting in which we target the logarithm of the machine weights, which makes the problem convex. Note we assume features lie in the f -dimensional simplex, and for simplicity we only consider learning the linear transform from features to predictors and not the intercept, as the latter is subsumed by the former. For the online result, we use the parameter-free algorithm of Orabona and Pal [38], an OGD-type method that allows us to not assume any bound on the machine weights and thus compete with the optimal linear predictor in all of R m × f .
Our main example will be online job scheduling via minimizing the fractional makespan [30], where we must assign each in a sequence of variable-sized jobs to one of m machines. Lattanzi et al. [30] provide an algorithm that uses predictions ˆw ∈ R m> 0 of “good” machine weights w ∈ R m> 0 to assign jobs based on how well ˆw corresponds to machine demand; the method has a performance guarantee of O (log min { max i ˆw [ i ] w [ i ] , m } ) . They also discuss learning linear and other predictors, but without guarantees. We study linear prediction of the logarithm of the machine weights, which makes the problem convex, and assume features lie in the f -dimensional simplex. For simplicity we only consider learning the linear transform from features to predictors and not the intercept, as the former subsumes the latter. For the online result, we use KT-OCO [38, Algorithm 1], a parameter-free subgradient method with update x t +1 ← 1+ (cid:80) ts =1 (cid:104) g s , x s (cid:105) t + (cid:80) ts =1 g s for g s = ∇ U s ( x s ) ; it allows us to not assume any bound on the machine weights and thus to compete with the optimal linear predictor in all of R m × f .
{ "annotation": [ "Concision", "Development" ], "instruction": "", "annotator": "annotator_07" }
null
CzTbgFKuy
hfDu8DsDq6
2
[ { "text": "Our main example willinstead be online job scheduling via minimizing the fractional makespan, following Lattanzi et al. They consider the problem of assigning each in a sequence of variablesized jobs to one of m machines [30, Section 3]." }, { "text": "The authors provide an algorithm that uses predictions ˆw ∈ R m> 0 of “good\" machine weights w ∈ R m> 0 to assign jobs based on how well ˆw corresponds to machine demand; the algorithm has a runtime guarantee of O (cid:0) log min { max i ˆw" }, { "text": "[ i ] / w [ i ] , m } (cid:1) ." }, { "text": "They also discus learning linear and more complicated predictors, but without guarantees." }, { "text": "In this section we provide guarantees for the linear prediction setting in which we target the logarithm of the machine weights, which makes the problem convex." }, { "text": "Note we assume features lie in the f -dimensional simplex, and for simplicity we only consider learning the linear transform from features to predictors and not the intercept, as the latter is subsumed by the former." }, { "text": "For the online result, we use the parameter-free algorithm of Orabona and Pal [38], an OGD-type method that allows us to not assume any bound on the machine weights and thus compete with the optimal linear predictor in all of R m × f ." } ]
[ { "text": "Our main example will be online job scheduling via minimizing the fractional makespan [30], where we must assign each in a sequence of variable-sized jobs to one of m machines. Lattanzi et al." }, { "text": "[30] provide an algorithm that uses predictions ˆw ∈ R m> 0 of “good” machine weights w ∈ R m> 0 to assign jobs based on how well ˆw corresponds to machine demand; the method has a performance guarantee of O (log min { max i ˆw" }, { "text": "[ i ] w [ i ] , m } ) ." }, { "text": "They also discuss learning linear and other predictors, but without guarantees." }, { "text": "We study linear prediction of the logarithm of the machine weights, which makes the problem convex, and assume features lie in the f -dimensional simplex." }, { "text": "For simplicity we only consider learning the linear transform from features to predictors and not the intercept, as the former subsumes the latter." }, { "text": "For the online result, we use KT-OCO [38, Algorithm 1], a parameter-free subgradient method with update x t +1 ← 1+ (cid:80) ts =1 (cid:104) g s , x s (cid:105) t + (cid:80) ts =1 g s for g s = ∇ U s ( x s ) ; it allows us to not assume any bound on the machine weights and thus to compete with the optimal linear predictor in all of R m × f ." } ]
KUhhOtV2Yw.nPdxbHsbU.00
Generally, those statistical notions can be expressed in terms of different (conditional) independence statements between the involved random variables (Barocas et al., 2019): ¯ y ⊥ s (eq. 5), ¯ y ⊥ s | y (eq. 6–7), and y ⊥ s | ¯ y (eq. 8–9). If our training set has no positive outcome for the demographic s = 0 , i.e. M y =1 ,s =0 = ∅ , the true positive rate for this group will suffer, and therefore we will likely not be able to satisfy, among others, equality of true positive rate (eq. 6).
Generally, those statistical notions can be expressed in terms of different (conditional) independence statements between the involved random variables (Barocas et al., 2019): ¯ y ⊥ s (equation 5), ¯ y ⊥ s | y (equation 6 – equation 7), and y ⊥ s | ¯ y (equation 8 – equation 9). If our training set has no positive outcome for the demographic s = 0 , i.e. M y =1 ,s =0 = ∅ , the true positive rate for this group will suffer, and therefore we will likely not be able to satisfy, among others, equality of true positive rate.
{ "annotation": [ "Rewriting_light" ], "instruction": "Prefer extended forms over abbreviations of words.", "annotator": "annotator_04" }
{ "annotation": [ "Rewriting_light" ], "instruction": "Write the abbreviation in their full form.", "annotator": "annotator_07" }
KUhhOtV2Yw
nPdxbHsbU
0
[ { "text": "Generally, those statistical notions can be expressed in terms of different (conditional) independence statements between the involved random variables (Barocas et al., 2019): ¯ y ⊥ s (eq. 5), ¯ y ⊥ s | y (eq." }, { "text": "6–7), and y ⊥ s | ¯ y (eq. 8–9)." }, { "text": "If our training set has no positive outcome for the demographic s = 0 , i.e. M y =1 ,s =0 = ∅ , the true positive rate for this group will suffer, and therefore we will likely not be able to satisfy, among others, equality of true positive rate (eq. 6)." } ]
[ { "text": "Generally, those statistical notions can be expressed in terms of different (conditional) independence statements between the involved random variables (Barocas et al., 2019): ¯ y ⊥ s (equation 5), ¯ y ⊥ s | y (equation 6 – equation 7), and y ⊥ s |" }, { "text": "¯ y (equation 8 – equation 9)." }, { "text": "If our training set has no positive outcome for the demographic s = 0 , i.e. M y =1 ,s =0 = ∅ , the true positive rate for this group will suffer, and therefore we will likely not be able to satisfy, among others, equality of true positive rate." } ]
slsGUcTSZI.DH75WqDfD7.00
We highlight an adaptation of BN named as static Batch Normaliztion (sBN) for optimizing privacy constrained heterogeneous models. During the training phase, sBN does not track running estimates and simply normalize batch data. We do not track the local running statistics as the size of local models may also vary dynamically. This method is suitable for HeteroFL as every communication round is independent. After the training process finishes, the server sequentially query local clients and cumulatively update global BN statistics. Thus, this method greatly reduces the risk of leaking private data because the calculation of BN statistics and the optimization of parameters are isolated. We also empirically found this trick significantly outperforms other forms of normalization methods including the InstanceNorm (Ulyanov et al., 2016), GroupNorm (Wu & He, 2018) , and LayerNorm (Ba et al., 2016) as shown in Table 4 and Table 5.
We highlight an adaptation of BN named as static Batch Normaliztion (sBN) for optimizing privacy constrained heterogeneous models. During the training phase, sBN does not track running estimates and simply normalize batch data. We do not track the local running statistics as the size of local models may also vary dynamically. This method is suitable for HeteroFL as every communication round is independent. After the training process finishes, the server sequentially query local clients and cumulatively update global BN statistics. There exist privacy concerns about calculating global statistics cumulatively and we hope to address those issues in the future work. We also empirically found this trick significantly outperforms other forms of normalization methods including the InstanceNorm (Ulyanov et al., 2016), GroupNorm (Wu & He, 2018) , and LayerNorm (Ba et al., 2016) as shown in Table 4 and Table 5.
{ "annotation": [ "Content_substitution" ], "instruction": "", "annotator": "annotator_07" }
null
slsGUcTSZI
DH75WqDfD7
0
[ { "text": "We highlight an adaptation of BN named as static Batch Normaliztion (sBN) for optimizing privacy constrained heterogeneous models." }, { "text": "During the training phase, sBN does not track running estimates and simply normalize batch data." }, { "text": "We do not track the local running statistics as the size of local models may also vary dynamically." }, { "text": "This method is suitable for HeteroFL as every communication round is independent." }, { "text": "After the training process finishes, the server sequentially query local clients and cumulatively update global BN statistics." }, { "text": "Thus, this method greatly reduces the risk of leaking private data because the calculation of BN statistics and the optimization of parameters are isolated." }, { "text": "We also empirically found this trick significantly outperforms other forms of normalization methods including the InstanceNorm (Ulyanov et al., 2016), GroupNorm (Wu & He, 2018) , and LayerNorm (Ba et al., 2016) as shown in Table 4 and Table 5." } ]
[ { "text": "We highlight an adaptation of BN named as static Batch Normaliztion (sBN) for optimizing privacy constrained heterogeneous models." }, { "text": "During the training phase, sBN does not track running estimates and simply normalize batch data." }, { "text": "We do not track the local running statistics as the size of local models may also vary dynamically." }, { "text": "This method is suitable for HeteroFL as every communication round is independent." }, { "text": "After the training process finishes, the server sequentially query local clients and cumulatively update global BN statistics." }, { "text": "There exist privacy concerns about calculating global statistics cumulatively and we hope to address those issues in the future work." }, { "text": "We also empirically found this trick significantly outperforms other forms of normalization methods including the InstanceNorm (Ulyanov et al., 2016), GroupNorm (Wu & He, 2018) , and LayerNorm (Ba et al., 2016) as shown in Table 4 and Table 5." } ]
8_oadXCaRE.Kt4-LpYuM.01
Importantly, it is equipped with a simple normalization of the layer’s activations, and an optional temperature-scaling mechanism (Hinton et al., 2015), producing a soft WTA instead of selecting a single "hard" winner neuron. This allows us to prove formally that a SoftHebb layer is a generative mixture model that objectively minimizes its Kullback-Leibler (KL) divergence from the input distribution through Bayesian inference, thus providing a new formal ML-theoretic perspective of these networks. We complement our main results, which are theoretical, with experiments that are small-scale but produce intriguing results. As a generative model, SoftHebb has a broader scope than classification, but we test it in simulations on the tasks of recognizing MNIST handwritten digits and Fashion-MNIST fashion products. First, we confirm that SoftHebb is more accurate than a hard-WTA model. Second, we validate that it minimizes a loss function (cross-entropy) even though it has no access to it or to labels during learning. In addition, likely owing to its Bayesian and generative properties, the unsupervised WTA model outperforms a supervised two-layer perceptron in several aspects: learning speed and accuracy in the first presentation of the training dataset, robustness to noisy data, and increased robustness to one of the strongest white-box adversarial attacks, i.e. projected gradient descent (PGD) (Madry et al., 2017), and without any explicit defence. Interestingly, the SoftHebb model also exhibits inherent properties of deflection (Qin et al., 2020) of the adversarial attacks, and generates object interpolations.
Importantly, it is equipped with a simple normalization of the layer’s activations, and an optional temperature-scaling mechanism (Hinton et al., 2015), producing a soft WTA instead of selecting a single "hard" winner neuron. This allows us to prove formally that a SoftHebb layer is a generative mixture model that objectively minimizes its Kullback-Leibler (KL) divergence from the input distribution through Bayesian inference, thus providing a new formal ML-theoretic perspective of these networks. We complement our main results, which are theoretical, with experiments that are small-scale but produce intriguing results. As a generative model, SoftHebb has a broader scope than classification, but we test it on image classification tasks. Surprisingly, in addition to overcoming several inefficiencies of backpropagation, the unsupervised WTA model also outperforms a supervised two-layer perceptron in several aspects: learning speed and accuracy in the first presentation of the training dataset, robustness to noisy data and to one of the strongest white-box adversarial attacks, i.e. projected gradient descent (PGD) (Madry et al., 2017), and without any explicit defence. Interestingly, the SoftHebb model also exhibits inherent properties of deflection (Qin et al., 2020) of the adversarial attacks, and generates object interpolations.
{ "annotation": [ "Concision" ], "instruction": "Make this paragraph shorter by removing details.", "annotator": "annotator_04" }
{ "annotation": [ "Content_deletion", "Concision" ], "instruction": "Summarize the middle of the paragraph to make it shorter and more concise. Remove unnecessary details.", "annotator": "annotator_07" }
8_oadXCaRE
Kt4-LpYuM
1
[ { "text": "Importantly, it is equipped with a simple normalization of the layer’s activations, and an optional temperature-scaling mechanism (Hinton et al., 2015), producing a soft WTA instead of selecting a single \"hard\" winner neuron." }, { "text": "This allows us to prove formally that a SoftHebb layer is a generative mixture model that objectively minimizes its Kullback-Leibler (KL) divergence from the input distribution through Bayesian inference, thus providing a new formal ML-theoretic perspective of these networks." }, { "text": "We complement our main results, which are theoretical, with experiments that are small-scale but produce intriguing results." }, { "text": "As a generative model, SoftHebb has a broader scope than classification, but we test it in simulations on the tasks of recognizing MNIST handwritten digits and" }, { "text": "Fashion-MNIST fashion products. First, we confirm that SoftHebb is more accurate than a hard-WTA model. Second, we validate that it minimizes a loss function (cross-entropy) even though it has no access to it or to labels during learning. In addition, likely owing to its Bayesian and generative properties, the unsupervised WTA model outperforms a supervised two-layer perceptron in several aspects: learning speed and accuracy in the first presentation of the training dataset, robustness to noisy data, and increased robustness to one of the strongest white-box adversarial attacks, i.e. projected gradient descent (PGD) (Madry et al., 2017), and without any explicit defence." }, { "text": "Interestingly, the SoftHebb model also exhibits inherent properties of deflection (Qin et al., 2020) of the adversarial attacks, and generates object interpolations." } ]
[ { "text": "Importantly, it is equipped with a simple normalization of the layer’s activations, and an optional temperature-scaling mechanism (Hinton et al., 2015), producing a soft WTA instead of selecting a single \"hard\" winner neuron." }, { "text": "This allows us to prove formally that a SoftHebb layer is a generative mixture model that objectively minimizes its Kullback-Leibler (KL) divergence from the input distribution through Bayesian inference, thus providing a new formal ML-theoretic perspective of these networks." }, { "text": "We complement our main results, which are theoretical, with experiments that are small-scale but produce intriguing results." }, { "text": "As a generative model, SoftHebb has a broader scope than classification, but we test it on image classification tasks." }, { "text": "Surprisingly, in addition to overcoming several inefficiencies of backpropagation, the unsupervised WTA model also outperforms a supervised two-layer perceptron in several aspects: learning speed and accuracy in the first presentation of the training dataset, robustness to noisy data and to one of the strongest white-box adversarial attacks, i.e. projected gradient descent (PGD) (Madry et al., 2017), and without any explicit defence." }, { "text": "Interestingly, the SoftHebb model also exhibits inherent properties of deflection (Qin et al., 2020) of the adversarial attacks, and generates object interpolations." } ]
9wfZbn73om.FhHH15YtKt.02
Early works understand the InfoNCE loss based on maximizing the mutual information (MI) between positive samples (Oord et al., 2018; Bachman et al., 2019; Hjelm et al., 2018; Tian et al., 2019; 2020). However, a rigorous relationship between mutual information and the downstream classification error has not been established. Tschannen et al. (2019) also find that optimizing tighter bounds of MI does not imply better representations. Thus, MI may not fully explain the success of InfoNCE. Besides, Arora et al. (2019) directly analyze the generalization of InfoNCE loss based on the assumption that positive samples are drawn from the same latent classes, which is different from practical contrastive algorithms. Ash et al. (2021) study the role of negative samples in contrastive SSL, and show an interesting collision-coverage trade-off theoretically. Furthermore, HaoChen et al. (2021) study contrastive SSL from a matrix decomposition perspective, but it is only applicable to their spectral contrastive loss. The behavior of InfoNCE is also studied from the perspective of alignment and uniformity (Wang & Isola, 2020), sparse coding model (Wen & Li, 2021), and the “expansion” assumption (Wei et al., 2020).
Early works understand the InfoNCE loss based on maximizing the mutual information (MI) between positive samples (Oord et al., 2018; Bachman et al., 2019; Hjelm et al., 2018; Tian et al., 2019; 2020; Tschannen et al., 2019). However, a rigorous relationship between mutual information and downstream performance has not been established. Besides, Arora et al. (2019) directly analyze the generalization of InfoNCE loss based on the assumption that positive samples are drawn from the same latent classes, which is different from practical algorithms. Ash et al. (2021) study the role of negative samples and show an interesting collision-coverage trade-off theoretically. HaoChen et al. (2021) study contrastive SSL from a matrix decomposition perspective, but it is only applicable to their spectral contrastive loss. The behavior of InfoNCE is also studied from the perspective of alignment and uniformity (Wang & Isola, 2020), sparse coding model (Wen & Li, 2021), the expansion assumption (Wei et al., 2020), stochastic neighbor embedding (Hu et al., 2022), and augmentation robustness (Zhao et al., 2023).
{ "annotation": [ "Content_deletion", "Development" ], "instruction": "", "annotator": "annotator_07" }
null
9wfZbn73om
FhHH15YtKt
2
[ { "text": "Early works understand the InfoNCE loss based on maximizing the mutual information (MI) between positive samples (Oord et al., 2018; Bachman et al., 2019;" }, { "text": "Hjelm et al., 2018; Tian et al., 2019; 2020)." }, { "text": "However, a rigorous relationship between mutual information and the downstream classification error has not been established." }, { "text": "Tschannen et al." }, { "text": "(2019) also find that optimizing tighter bounds of MI does not imply better representations." }, { "text": "Thus, MI may not fully explain the success of InfoNCE." }, { "text": "Besides, Arora et al." }, { "text": "(2019) directly analyze the generalization of InfoNCE loss based on the assumption that positive samples are drawn from the same latent classes, which is different from practical contrastive algorithms." }, { "text": "Ash et al." }, { "text": "(2021) study the role of negative samples in contrastive SSL, and show an interesting collision-coverage trade-off theoretically." }, { "text": "Furthermore, HaoChen et al." }, { "text": "(2021) study contrastive SSL from a matrix decomposition perspective, but it is only applicable to their spectral contrastive loss." }, { "text": "The behavior of InfoNCE is also studied from the perspective of alignment and uniformity (Wang & Isola, 2020), sparse coding model (Wen & Li, 2021), and the “expansion” assumption (Wei et al., 2020)." } ]
[ { "text": "Early works understand the InfoNCE loss based on maximizing the mutual information (MI) between positive samples (Oord et al., 2018; Bachman et al., 2019;" }, { "text": "Hjelm et al., 2018; Tian et al., 2019; 2020; Tschannen et al., 2019)." }, { "text": "However, a rigorous relationship between mutual information and downstream performance has not been established." }, { "text": "" }, { "text": "" }, { "text": "" }, { "text": "Besides, Arora et al." }, { "text": "(2019) directly analyze the generalization of InfoNCE loss based on the assumption that positive samples are drawn from the same latent classes, which is different from practical algorithms." }, { "text": "Ash et al." }, { "text": "(2021) study the role of negative samples and show an interesting collision-coverage trade-off theoretically." }, { "text": "HaoChen et al." }, { "text": "(2021) study contrastive SSL from a matrix decomposition perspective, but it is only applicable to their spectral contrastive loss." }, { "text": "The behavior of InfoNCE is also studied from the perspective of alignment and uniformity (Wang & Isola, 2020), sparse coding model (Wen & Li, 2021), the expansion assumption (Wei et al., 2020), stochastic neighbor embedding (Hu et al., 2022), and augmentation robustness (Zhao et al., 2023)." } ]
CVRUl83zah.I75TtW0V7.02
Set prediction modelsmake use of set-to-set functions that are permutation-equivariant (Lee et al., 2019; Locatello et al., 2020; Carion et al., 2020; Kosiorek et al., 2020). This is desirable when processing sets because it prevents a function from relying on the arbitrary order of the set in its matrix representation. Permutation-equivariant functions can be easily composed to build larger models that remain equivariant, which fits well into deep learning architectures as building blocks.
Recent set prediction models (Lee et al., 2019; Locatello et al., 2020; Carion et al., 2020; Kosiorek et al., 2020) make use of set-to-set (permutation-equivariant list-to-list) functions to refine an initial set Y 0 , which is usually a randomly generated or learnable matrix. Permutation-equivariance is desirable when processing sets because it prevents a function from relying on the arbitrary order of the set in its matrix representation. Such functions can be easily composed to build larger models that remain equivariant, which fits well into deep learning architectures as building blocks.
{ "annotation": [ "Development" ], "instruction": "", "annotator": "annotator_07" }
null
CVRUl83zah
I75TtW0V7
2
[ { "text": "Set prediction modelsmake use of set-to-set functions that are permutation-equivariant (Lee et al., 2019; Locatello et al., 2020; Carion et al., 2020; Kosiorek et al., 2020)." }, { "text": "This is desirable when processing sets because it prevents a function from relying on the arbitrary order of the set in its matrix representation." }, { "text": "Permutation-equivariant functions can be easily composed to build larger models that remain equivariant, which fits well into deep learning architectures as building blocks." } ]
[ { "text": "Recent set prediction models (Lee et al., 2019; Locatello et al., 2020; Carion et al., 2020; Kosiorek et al., 2020) make use of set-to-set (permutation-equivariant list-to-list) functions to refine an initial set Y 0 , which is usually a randomly generated or learnable matrix." }, { "text": "Permutation-equivariance is desirable when processing sets because it prevents a function from relying on the arbitrary order of the set in its matrix representation." }, { "text": "Such functions can be easily composed to build larger models that remain equivariant, which fits well into deep learning architectures as building blocks." } ]
hegI87bI5S.fL6Q48sfx8.04
Hollinworth et al. found that senior citizens generally lose the cursor due to poor eyesight and sustained concentration [15]. Therefore, the implemented a Field Mouse (a mouse with a touch sensor attached) and proposed a technique wherein the cursor moves to the center of the screen when the user hold the mouse. This technique reduced the time required to search for the cursor, which in turn reduced the movement time. Stephane et al. focused on screen torus settings [16]. With this setting, for example, when the cursor reaches the right edge, it appears from the left edge. As a result, users can easily lose sight of the cursor when it warps. Therefore, they proposed a TorusDesktop technique that adds appropriate visual feedback between the time the cursor warping, and making it difficult to lose sight of the cursor.
Hollinworth et al. found that senior citizens lose the cursor be- cause of poor eyesight and sustained concentration, and therefore, they implemented a Field Mouse (a mouse with a touch sensor at- tached) and proposed a technique wherein the cursor moves to the center of the screen when the user holds the mouse [15]. This tech- nique help reduce the time required to search for the cursor, which in turn reduces the movement time. Stephane et al. focused on screen torus settings [16]. With this setting, when the cursor reaches the screen edge, it appears from the opposite end. For example, when the cursor reaches the right edge, it appears from the left edge. However, users can easily lose sight of the cursor when it warps around the edges. To overcome this issue, they proposed a TorusDesktop technique that adds appropriate visual feedback between the time the cursor warps. These studies focused on the user losing sight of the cursor, but these did not focus on a scenario where the cursor is hidden.
{ "annotation": [ "Rewriting_medium", "Development" ], "instruction": "", "annotator": "annotator_07" }
null
hegI87bI5S
fL6Q48sfx8
4
[ { "text": "Hollinworth et al." }, { "text": "found that senior citizens generally lose the cursor due to poor eyesight and sustained concentration [15]. Therefore, the implemented a Field Mouse (a mouse with a touch sensor attached) and proposed a technique wherein the cursor moves to the center of the screen when the user hold the mouse." }, { "text": "This technique reduced the time required to search for the cursor, which in turn reduced the movement time." }, { "text": "Stephane et al." }, { "text": "focused on screen torus settings [16]." }, { "text": "With this setting, for example, when the cursor reaches the right edge, it appears from the left edge." }, { "text": "As a result, users can easily lose sight of the cursor when it warps." }, { "text": "Therefore, they proposed a TorusDesktop technique that adds appropriate visual feedback between the time the cursor warping, and making it difficult to lose sight of the cursor." } ]
[ { "text": "Hollinworth et al." }, { "text": "found that senior citizens lose the cursor be- cause of poor eyesight and sustained concentration, and therefore, they implemented a Field Mouse (a mouse with a touch sensor at- tached) and proposed a technique wherein the cursor moves to the center of the screen when the user holds the mouse [15]." }, { "text": "This tech- nique help reduce the time required to search for the cursor, which in turn reduces the movement time." }, { "text": "Stephane et al." }, { "text": "focused on screen torus settings [16]." }, { "text": "With this setting, when the cursor reaches the screen edge, it appears from the opposite end. For example, when the cursor reaches the right edge, it appears from the left edge." }, { "text": "However, users can easily lose sight of the cursor when it warps around the edges." }, { "text": "To overcome this issue, they proposed a TorusDesktop technique that adds appropriate visual feedback between the time the cursor warps. These studies focused on the user losing sight of the cursor, but these did not focus on a scenario where the cursor is hidden." } ]
7_CwM-IzWd.zcm6f5HDI.11
To warm-up the model, we perform regular steps in the first epoch. We switch from regular steps to re-balancing steps if | d speed ( t ) | > α , where α is the imbalance parameter . The training takes Q re-balancing steps before returning to regular mode. We refer to Q as the re-balancing window size .
To warm-up the model, we perform only regular steps in the first training epoch. Then we switch from regular steps to re-balancing steps if | d speed ( t ) | > α , where α is a hyperparameter, referred to as the imbalance tolerance parameter . The training takes Q re-balancing steps before returning to regular mode. We refer to the hyperparameter Q as the re-balancing window size .
{ "annotation": [ "Development" ], "instruction": "Change the descriptions so that the hyperparameters can be easily referred to later", "annotator": "annotator_05" }
{ "annotation": [ "Development" ], "instruction": "", "annotator": "annotator_07" }
7_CwM-IzWd
zcm6f5HDI
11
[ { "text": "To warm-up the model, we perform regular steps in the first epoch." }, { "text": "We switch from regular steps to re-balancing steps if | d speed ( t ) | > α , where α is the imbalance parameter ." }, { "text": "The training takes Q re-balancing steps before returning to regular mode." }, { "text": "We refer to Q as the re-balancing window size ." } ]
[ { "text": "To warm-up the model, we perform only regular steps in the first training epoch." }, { "text": "Then we switch from regular steps to re-balancing steps if | d speed ( t ) | > α , where α is a hyperparameter, referred to as the imbalance tolerance parameter ." }, { "text": "The training takes Q re-balancing steps before returning to regular mode." }, { "text": "We refer to the hyperparameter Q as the re-balancing window size ." } ]
r1DvZQwjB.Hk8CzQDiB.00
Unlike other numerical methods such as finite differences and finite elements, the derivatives of the desired function can be analytically calculated to any order. This framework therefore, enables the solution of high order non-linear PDEs. The proposed algorithm is a unified formulation of both forward and inverse problems where the optimized loss function consists of few elements: fidelity terms of Land L ∞ norms, boundary conditions constraints and additional regularizers. This setting is flexible in the sense that regularizers can be tailored to specific problems. We demonstrate our method on several free shape 2D second order systems with application to Electrical Impedance Tomography (EIT), diffusion and wave equations.
Unlike other numerical methods such as finite differences and finite elements, the derivatives of the desired function can be analytically calculated to any order. This framework therefore, enables the solution of high order non-linear PDEs. The proposed algorithm is a unified formulation of both forward and inverse problems where the optimized loss function consists of few elements: fidelity terms of Land L ∞ norms that unlike previous methods promote a strong solution. Robust boundary conditions constraints and additional regularizers are included as well. This setting is flexible in the sense that regularizers can be tailored to specific problems. We demonstrate our method on several free shape 2D second order systems with application to Electrical Impedance Tomography (EIT), diffusion and wave equations.
{ "annotation": [ "Development" ], "instruction": "", "annotator": "annotator_07" }
null
r1DvZQwjB
Hk8CzQDiB
0
[ { "text": "Unlike other numerical methods such as finite differences and finite elements, the derivatives of the desired function can be analytically calculated to any order." }, { "text": "This framework therefore, enables the solution of high order non-linear PDEs." }, { "text": "The proposed algorithm is a unified formulation of both forward and inverse problems where the optimized loss function consists of few elements: fidelity terms of Land L ∞ norms, boundary conditions constraints and additional regularizers." }, { "text": "This setting is flexible in the sense that regularizers can be tailored to specific problems." }, { "text": "We demonstrate our method on several free shape 2D second order systems with application to Electrical Impedance Tomography (EIT), diffusion and wave equations." } ]
[ { "text": "Unlike other numerical methods such as finite differences and finite elements, the derivatives of the desired function can be analytically calculated to any order." }, { "text": "This framework therefore, enables the solution of high order non-linear PDEs." }, { "text": "The proposed algorithm is a unified formulation of both forward and inverse problems where the optimized loss function consists of few elements: fidelity terms of Land L ∞ norms that unlike previous methods promote a strong solution. Robust boundary conditions constraints and additional regularizers are included as well." }, { "text": "This setting is flexible in the sense that regularizers can be tailored to specific problems." }, { "text": "We demonstrate our method on several free shape 2D second order systems with application to Electrical Impedance Tomography (EIT), diffusion and wave equations." } ]
u9NaukzyJ-.hh0KECXQLv.17
The design of the calendar should avoid design elements that in- troduce clutter to the calendar ( DG3 ) . Design elements such as colors, sliders, labels, and markers should be carefully employed to avoid overwhelming the calendar. One of the reasons why Design B was preferred is because it is less cluttered: medication entries can be rendered effectively using position, shape, and size. The size of a medication entry should be as small as possible so as not to occupy too much space. Size should not be used to indicate either allowed or preferred administration time of a medication entry, and the size of the entry should also be uniform regardless of the length of the allowed period of administration. Using shapes with a colored outline and transparent fill was associated with less noise and hence preferred by the users. While the slider design was effective in com- municating both the allowed and preferred administration period, it made the entry occupy a lot of calendar space and was also misread by some participants as an indicator of delayed release for certain medications. Sliders should thus be avoided. Familiar icons such as tablets can be used to indicate medication entries.
The design of the calendar should avoid design elements that in- troduce clutter to the calendar ( DG3 ) . One of the reasons why Design B was preferred is because it is less cluttered: medication entries can be rendered effectively using position, shape, and size. The size of a medication entry should be as small as possible so as not to occupy too much space. Size should not be used to indicate either allowed or preferred administration time of a medication entry, and the size of the entry should also be uniform regardless of the length of the allowed period of administration. Using shapes with a colored outline and transparent fill was associated with less noise by participants. While the slider design was effective in communicating both the allowed and preferred administration period, it made the entry occupy a lot of calendar space and was also misread by some participants. Familiar icons such as tablets can be used to indicate medication entries.
{ "annotation": [ "Concision", "Content_deletion" ], "instruction": "Remove unnecessary details and explanations.", "annotator": "annotator_03" }
{ "annotation": [ "Content_deletion", "Development" ], "instruction": "", "annotator": "annotator_09" }
u9NaukzyJ-
hh0KECXQLv
17
[ { "text": "The design of the calendar should avoid design elements that in- troduce clutter to the calendar ( DG3 ) ." }, { "text": "Design elements such as colors, sliders, labels, and markers should be carefully employed to avoid overwhelming the calendar." }, { "text": "One of the reasons why Design B was preferred is because it is less cluttered: medication entries can be rendered effectively using position, shape, and size." }, { "text": "The size of a medication entry should be as small as possible so as not to occupy too much space." }, { "text": "Size should not be used to indicate either allowed or preferred administration time of a medication entry, and the size of the entry should also be uniform regardless of the length of the allowed period of administration." }, { "text": "Using shapes with a colored outline and transparent fill was associated with less noise and hence preferred by the users." }, { "text": "While the slider design was effective in com- municating both the allowed and preferred administration period, it made the entry occupy a lot of calendar space and was also misread by some participants as an indicator of delayed release for certain medications. Sliders should thus be avoided." }, { "text": "Familiar icons such as tablets can be used to indicate medication entries." } ]
[ { "text": "The design of the calendar should avoid design elements that in- troduce clutter to the calendar ( DG3 ) ." }, { "text": "" }, { "text": "One of the reasons why Design B was preferred is because it is less cluttered: medication entries can be rendered effectively using position, shape, and size." }, { "text": "The size of a medication entry should be as small as possible so as not to occupy too much space." }, { "text": "Size should not be used to indicate either allowed or preferred administration time of a medication entry, and the size of the entry should also be uniform regardless of the length of the allowed period of administration." }, { "text": "Using shapes with a colored outline and transparent fill was associated with less noise by participants." }, { "text": "While the slider design was effective in communicating both the allowed and preferred administration period, it made the entry occupy a lot of calendar space and was also misread by some participants." }, { "text": "Familiar icons such as tablets can be used to indicate medication entries." } ]
SkMm_pDYm.rkQWRbxAQ.00
S t +1 = f st ( S t , A t , U st ) . This is always possible using auto-regressive uniformization. The DAG G of the resulting SCM is shown in fig. 1. This procedure is closely related to the ‘reparameterization trick’ for models with lotion-scale distributions (Kingma & Welling, 2013; Rezende et al., 2014).
S t +1 = f st ( S t , A t , U st ) . This is always possible using auto-regressive uniformization, see Lemma 2 in the appendix. The DAG G of the resulting SCM is shown in fig. 1. This procedure is closely related to the ‘reparameterization trick’ for models with location-scale distributions (Kingma & Welling, 2013; Rezende et al., 2014).
{ "annotation": [ "Development", "Rewriting_light" ], "instruction": "", "annotator": "annotator_09" }
{ "annotation": [ "Development" ], "instruction": "", "annotator": "annotator_07" }
SkMm_pDYm
rkQWRbxAQ
0
[ { "text": "S t +1 =" }, { "text": "f st ( S t , A t , U st ) ." }, { "text": "This is always possible using auto-regressive uniformization." }, { "text": "The DAG G of the resulting SCM is shown in fig. 1." }, { "text": "This procedure is closely related to the ‘reparameterization trick’ for models with lotion-scale distributions (Kingma & Welling, 2013; Rezende et al., 2014)." } ]
[ { "text": "S t +1 =" }, { "text": "f st ( S t , A t , U st ) ." }, { "text": "This is always possible using auto-regressive uniformization, see Lemma 2 in the appendix." }, { "text": "The DAG G of the resulting SCM is shown in fig. 1." }, { "text": "This procedure is closely related to the ‘reparameterization trick’ for models with location-scale distributions (Kingma & Welling, 2013; Rezende et al., 2014)." } ]
nCTSF9BQJ.DGhBYSP_sR.12
The prior rotamers ˜ χ j are inaccurate or unknown in many cases. For example, if we mutate some amino acids in the protein complex, the rotamers of the mutated amino acids are unknown, and the rotamers of amino acids nearby the mutated ones are inaccurate because they are affected by the mutation. The probability density is defined over the d -dimensional torus T D = ( S 1 ) D , and we show below our proposed flow-based architecture to model the density.
The prior rotamers ˜ χ j are often inaccurate or unknown. For example, if we mutate some residues, the rotamers of the mutated residues are unknown, and the rotamers of residues nearby the mutated ones are inaccurate because they are affected by the mutation. The probability density is defined over the d -dimensional torus T D = ( S 1 ) D , and we describe below the flow-based architecture to model the density.
{ "annotation": [ "Rewriting_medium" ], "instruction": "Replace every apparition of \"amino acids\" or \"amino acids in the protein complex\" by \"residues\"", "annotator": "annotator_01" }
{ "annotation": [ "Concision", "Rewriting_medium" ], "instruction": "Replace occurrences of amino acids by residues. Make this paragraph a lit bit more concise.", "annotator": "annotator_07" }
nCTSF9BQJ
DGhBYSP_sR
12
[ { "text": "The prior rotamers ˜ χ" }, { "text": "j are inaccurate or unknown in many cases." }, { "text": "For example, if we mutate some amino acids in the protein complex, the rotamers of the mutated amino acids are unknown, and the rotamers of amino acids nearby the mutated ones are inaccurate because they are affected by the mutation." }, { "text": "The probability density is defined over the d -dimensional torus T D =" }, { "text": "( S 1 ) D , and we show below our proposed flow-based architecture to model the density." } ]
[ { "text": "The prior rotamers ˜ χ" }, { "text": "j are often inaccurate or unknown." }, { "text": "For example, if we mutate some residues, the rotamers of the mutated residues are unknown, and the rotamers of residues nearby the mutated ones are inaccurate because they are affected by the mutation." }, { "text": "The probability density is defined over the d -dimensional torus T D =" }, { "text": "( S 1 ) D , and we describe below the flow-based architecture to model the density." } ]
skR2qMboVK.lmwxQfhmln.01
Margin-Density (Nguyen & Smeulders, 2004). Scores candidates by the product of their margin and their density estimates, so as to increase diversity. The density is computed by first clustering the penultimate layer activations of all |Z| candidate points via K -means. Then, the density score of candidate x i is computed as: | C ( x i ) | / |Z| , where C ( x i ) is the cluster containing x i . We useclusters.
Margin-Density (Nguyen & Smeulders, 2004). Scores candidates by the product of their margin and their density estimates, so as to increase diversity. The density is computed by first clustering the penultimate layer activations of the current model on all |Z| candidate points via K -means. Then, the density score of candidate x i is computed as: | C ( x i ) | / |Z| , where C ( x i ) is the cluster containing x i . We use min { 20 , |Z|} clusters.
{ "annotation": [ "Content_addition" ], "instruction": "", "annotator": "annotator_02" }
{ "annotation": [ "Unusable" ], "instruction": "", "annotator": "annotator_07" }
skR2qMboVK
lmwxQfhmln
1
[ { "text": "Margin-Density (Nguyen & Smeulders, 2004)." }, { "text": "Scores candidates by the product of their margin and their density estimates, so as to increase diversity." }, { "text": "The density is computed by first clustering the penultimate layer activations of all |Z| candidate points via K -means." }, { "text": "Then, the density score of candidate x i is computed as: | C ( x i ) | / |Z| , where C ( x i ) is the cluster containing x i ." }, { "text": "We useclusters." } ]
[ { "text": "Margin-Density (Nguyen & Smeulders, 2004)." }, { "text": "Scores candidates by the product of their margin and their density estimates, so as to increase diversity." }, { "text": "The density is computed by first clustering the penultimate layer activations of the current model on all |Z| candidate points via K -means." }, { "text": "Then, the density score of candidate x i is computed as: | C ( x i ) | / |Z| , where C ( x i ) is the cluster containing x i ." }, { "text": "We use min { 20 , |Z|} clusters." } ]
IoTyuVEanE.Et-c0vQfeb.04
Parsing signal from noise is critical to learning from weak and rule-based supervision. Accordingly, we compare ReGAL’s ability to that of our baselines in accurately classifying instances based on a set of seed rules, which are shown in Table 1. For each dataset, we provide exactly one seed LF foreach class. Each seed LF contains exactly six single-token keywords. If any of these keywords is found in document d i , the LF assigns its label; otherwise, it abstains from labeling d i .
Parsing signal from noise is critical to learning from weak and rule-based supervision. Accordingly, we compare ReGAL’s ability to that of our baselines in accurately classifying instances based on a set of seed rules, which are shown in Table 1. We provided exactly each class with exactly one labeling function consisting of six keywords or phrases adapted from [16]. If any of these keywords is found in document d i , the LF assigns its label; otherwise, it abstains from labeling d i .
{ "annotation": [ "Development", "Rewriting_light" ], "instruction": "", "annotator": "annotator_06" }
{ "annotation": [ "Rewriting_medium", "Development" ], "instruction": "", "annotator": "annotator_08" }
IoTyuVEanE
Et-c0vQfeb
4
[ { "text": "Parsing signal from noise is critical to learning from weak and rule-based supervision." }, { "text": "Accordingly, we compare ReGAL’s ability to that of our baselines in accurately classifying instances based on a set of seed rules, which are shown in Table 1." }, { "text": "For each dataset, we provide exactly one seed LF foreach class. Each seed LF contains exactly six single-token keywords." }, { "text": "If any of these keywords is found in document d" }, { "text": "i , the LF assigns its label; otherwise, it abstains from labeling d i ." } ]
[ { "text": "Parsing signal from noise is critical to learning from weak and rule-based supervision." }, { "text": "Accordingly, we compare ReGAL’s ability to that of our baselines in accurately classifying instances based on a set of seed rules, which are shown in Table 1." }, { "text": "We provided exactly each class with exactly one labeling function consisting of six keywords or phrases adapted from [16]." }, { "text": "If any of these keywords is found in document d" }, { "text": "i , the LF assigns its label; otherwise, it abstains from labeling d i ." } ]
LC37_sQl_t.XlHDVLz97W.01
In this paper, we introduce ZeroC, a new framework for zero-shot concept recognition and acquisition at inference time. Our experiments show that in a challenging grid-world domain, ZeroC is able to recognize complex, hierarchical concepts composed of English characters in a grid-world in a zero-shot manner, being given a high-level, symbolic specification of their structures, and after being trained with simpler concepts. In addition, we demonstrate that an independently trained ZeroC is able to transfer hierarchical concepts across different domains at inference. Although this work is evaluated only in grid-world domain, we are the first to address this difficult challenge, and hope that this work will make a useful step in the development of composable neural systems, capable of zero-shot concept recognition and acquisition and hence suitable for more diverse tasks.
In this paper, we introduce ZeroC, a new framework for zero-shot concept recognition and acquisition at inference time. Our experiments show that in a challenging grid-world domain, ZeroC is able to recognize complex, hierarchical concepts composed of English characters in a grid-world in a zero-shot manner, being given a high-level, symbolic specification of their structures, and after being trained with simpler concepts. In addition, we demonstrate that an independently trained ZeroC is able to transfer hierarchical concepts across different domains at inference. Although this work is evaluated only in grid-world visual domain, we are the first to address this difficult challenge. We are also excited to see its potential application in broader domains, e.g. in AI for scientific discovery, where it may infer novel patterns and concepts from data in a zero-shot manner. We hope that this work will make a useful step in the development of composable neural systems, capable of zero-shot concept recognition and acquisition and hence suitable for more diverse tasks.
{ "annotation": [ "Development" ], "instruction": "", "annotator": "annotator_06" }
{ "annotation": [ "Content_addition", "Rewriting_light" ], "instruction": "", "annotator": "annotator_08" }
LC37_sQl_t
XlHDVLz97W
1
[ { "text": "In this paper, we introduce ZeroC, a new framework for zero-shot concept recognition and acquisition at inference time." }, { "text": "Our experiments show that in a challenging grid-world domain, ZeroC is able to recognize complex, hierarchical concepts composed of English characters in a grid-world in a zero-shot manner, being given a high-level, symbolic specification of their structures, and after being trained with simpler concepts." }, { "text": "In addition, we demonstrate that an independently trained ZeroC is able to transfer hierarchical concepts across different domains at inference." }, { "text": "Although this work is evaluated only in grid-world domain, we are the first to address this difficult challenge, and hope that this work will make a useful step in the development of composable neural systems, capable of zero-shot concept recognition and acquisition and hence suitable for more diverse tasks." } ]
[ { "text": "In this paper, we introduce ZeroC, a new framework for zero-shot concept recognition and acquisition at inference time." }, { "text": "Our experiments show that in a challenging grid-world domain, ZeroC is able to recognize complex, hierarchical concepts composed of English characters in a grid-world in a zero-shot manner, being given a high-level, symbolic specification of their structures, and after being trained with simpler concepts." }, { "text": "In addition, we demonstrate that an independently trained ZeroC is able to transfer hierarchical concepts across different domains at inference." }, { "text": "Although this work is evaluated only in grid-world visual domain, we are the first to address this difficult challenge. We are also excited to see its potential application in broader domains, e.g. in AI for scientific discovery, where it may infer novel patterns and concepts from data in a zero-shot manner. We hope that this work will make a useful step in the development of composable neural systems, capable of zero-shot concept recognition and acquisition and hence suitable for more diverse tasks." } ]
CVRUl83zah.I75TtW0V7.17
Attention on all AP metrics. It is better at the attribute classification ignoring the 3d coordinates (96. → 98.8) and improves especially for the stricter AP thresholds like AP 0 . 125 (7.9 → 76.9). Note that a stricter AP threshold is always upper bounded by the looser AP threshold, so iDSPN is guaranteed to be better than Slot Attention † on AP 0 . We observed some overfitting when using 128x image inputs, which did not show up in preliminary experiments when training an autoencoder with the ground-truth set as input. We reduced this overfitting significantly by increasing the image size to 256x256 while keeping the latent vector size the same, which results in further performance improvements. We thus believe that the overfitting is due to the ResNet18 image encoder rather than iDSPN.
Attention on all AP metrics. It is better at attribute classification when ignoring the 3d coordinates (AP ∞ , 96.4% → 98.8%) and improves especially for the metrics with stricter 3d coordinate thresholds (AP 0 . 125 , 7.9% → 76.9%). This is despite Slot Attention † using a three times higher weight on the loss for the coordinates than iDSPN. Note that a stricter AP threshold is always upper bounded by a looser AP threshold, so iDSPN is guaranteed to be better than Slot Attention † on AP 0 . We observe some overfitting with 128x128 image inputs (1.6e-4 train loss, 5.4e-4 validation loss), which did not appear in preliminary experiments when training an autoencoder with the ground-truth set as input. We reduce this generalization gap by increasing the image size to 256x256 while keeping the latent vector size the same, which results in further performance improvements (1.1e-4 train loss, 2.5e-4 validation loss). We thus believe that the overfitting is due to the ResNet18 image encoder rather than iDSPN.
{ "annotation": [ "Development", "Rewriting_light" ], "instruction": "", "annotator": "annotator_07" }
null
CVRUl83zah
I75TtW0V7
17
[ { "text": "Attention on all AP metrics." }, { "text": "It is better at the attribute classification ignoring the 3d coordinates (96." }, { "text": "→ 98.8) and improves especially for the stricter AP thresholds like AP 0 ." }, { "text": "125 (7.9 → 76.9)." }, { "text": " Note that a stricter AP threshold is always upper bounded by the looser AP threshold, so iDSPN is guaranteed to be better than Slot Attention † on AP 0 ." }, { "text": "We observed some overfitting when using 128x image inputs, which did not show up in preliminary experiments when training an autoencoder with the ground-truth set as input." }, { "text": "We reduced this overfitting significantly by increasing the image size to 256x256 while keeping the latent vector size the same, which results in further performance improvements." }, { "text": "We thus believe that the overfitting is due to the ResNet18 image encoder rather than iDSPN." } ]
[ { "text": "Attention on all AP metrics." }, { "text": "It is better at attribute classification when ignoring the 3d coordinates (AP ∞ , 96.4%" }, { "text": "→ 98.8%) and improves especially for the metrics with stricter 3d coordinate thresholds (AP 0 ." }, { "text": "125 , 7.9% → 76.9%)." }, { "text": "This is despite Slot Attention † using a three times higher weight on the loss for the coordinates than iDSPN. Note that a stricter AP threshold is always upper bounded by a looser AP threshold, so iDSPN is guaranteed to be better than Slot Attention † on AP 0 ." }, { "text": "We observe some overfitting with 128x128 image inputs (1.6e-4 train loss, 5.4e-4 validation loss), which did not appear in preliminary experiments when training an autoencoder with the ground-truth set as input." }, { "text": "We reduce this generalization gap by increasing the image size to 256x256 while keeping the latent vector size the same, which results in further performance improvements (1.1e-4 train loss, 2.5e-4 validation loss)." }, { "text": "We thus believe that the overfitting is due to the ResNet18 image encoder rather than iDSPN." } ]
SyF8k7bCW.HytIRPamf.01
Less Constraints: During encoding, the explicit word order information used inRNN will help the vector representation capture more of the temporally-specific relationships among words, but this same constraint (if using RNN as the decoder) could be an inappropriate constraint in the decoding process.
The results are presented in the Table 1. Generally, the three different decoding settings didn’t make much of a difference in terms of the performance on selected downstream tasks, with RNN or CNN as the decoder. The results tell us that, in terms of learning good sentence representations, the autoregressive decoder doesn’t require the correct ground-truth words as the inputs.
{ "annotation": [ "Unusable" ], "instruction": "", "annotator": "annotator_03" }
{ "annotation": [ "Rewriting_heavy" ], "instruction": "Can you reformulate my entire paragraph?", "annotator": "annotator_09" }
SyF8k7bCW
HytIRPamf
1
[ { "text": "Less Constraints: During encoding, the explicit word order information used inRNN will help the vector representation capture more of the temporally-specific relationships among words, but this same constraint (if using RNN as the decoder) could be an inappropriate constraint in the decoding process." } ]
[ { "text": "The results are presented in the Table 1. Generally, the three different decoding settings didn’t make much of a difference in terms of the performance on selected downstream tasks, with RNN or CNN as the decoder. The results tell us that, in terms of learning good sentence representations, the autoregressive decoder doesn’t require the correct ground-truth words as the inputs." } ]
lLwt-9RJ2tm.XJsauLjck.00
Unfortunately, the distortion in w G ( S, T ) can be very large depending on the quantities on the right, and the cumulative error in cost G ( T ) blows up with the depth of the tree which is even worse. Hereis the second observation: the negative term w G ( S ∪ T, S ∪ T ) that internal node S contributes to thecost also appears as a positive term in its parent’s contribution to the cost. We can pass this term as a since there always exists an optimal hierarchy that is binary.
Unfortunately, the distortion in w G ( S, T ) can be very large depending on the quantities on the right, and the cumulative error in cost G ( T ) blows up with the depth of the tree which is even worse. Here optimal hierarchy that is binary.
{ "annotation": [ "Unusable" ], "instruction": "", "annotator": "annotator_07" }
null
lLwt-9RJ2tm
XJsauLjck
0
[ { "text": "Unfortunately, the distortion in w G ( S, T ) can be very large depending on the quantities on the right, and the cumulative error in cost G ( T ) blows up with the depth of the tree which is even worse." }, { "text": "Hereis the second observation: the negative term w G ( S ∪ T, S ∪ T ) that internal node S contributes to thecost also appears as a positive term in its parent’s contribution to the cost." }, { "text": "We can pass this term as a since there always exists an optimal hierarchy that is binary." } ]
[ { "text": "Unfortunately, the distortion in w G ( S, T ) can be very large depending on the quantities on the right, and the cumulative error in cost G ( T ) blows up with the depth of the tree which is even worse." }, { "text": "" }, { "text": "Here optimal hierarchy that is binary." } ]
p8yrWJS4W.eHA5NswPr.02
Results. Fig. 4 shows that certain alterations—such as completely removing articles from the evaluated text—have almost no impact on the divergence between our reference and test corpora for various ∆ . In fact, text without any articles is judged as better than GPT-2 XL ’s by most of the cluster-based divergences. Further, while this perturbation undoubtedly affects the text’s fluency, it has less of an effect on this divergence than, e.g., truncating texts. This is arguably undesirable: A metric of text quality should place more emphasis on fluency than surface statistics, such as length.
Results. Fig. 4 shows that certain alterations to the evaluated text—such as completely removing articles—have almost no impact on its divergences from the reference corpora for various ∆ . In fact, text without any articles is judged as better than GPT-2 XL ’s by all of the cluster-based divergences (see Fig. 9 for a zoomed in version). Further, while this perturbation undoubtedly affects the text’s fluency, it has less of an effect on ∆ than, e.g., truncating texts. This is arguably undesirable: A metric of text quality should place more emphasis on fluency than surface statistics, such as length.
{ "annotation": [ "Rewriting_light" ], "instruction": "Make the concepts a bit more specific, such that some vague ideas are more clear.", "annotator": "annotator_03" }
{ "annotation": [ "Rewriting_light" ], "instruction": "Revise the writing for better readability.", "annotator": "annotator_07" }
p8yrWJS4W
eHA5NswPr
2
[ { "text": "Results." }, { "text": "Fig." }, { "text": "4 shows that certain alterations—such as completely removing articles from the evaluated text—have almost no impact on the divergence between our reference and test corpora for various ∆ ." }, { "text": "In fact, text without any articles is judged as better than GPT-2 XL ’s by most of the cluster-based divergences." }, { "text": " Further, while this perturbation undoubtedly affects the text’s fluency, it has less of an effect on this divergence than, e.g., truncating texts." }, { "text": "This is arguably undesirable: A metric of text quality should place more emphasis on fluency than surface statistics, such as length." } ]
[ { "text": "Results." }, { "text": "Fig." }, { "text": "4 shows that certain alterations to the evaluated text—such as completely removing articles—have almost no impact on its divergences from the reference corpora for various ∆ ." }, { "text": "In fact, text without any articles is judged as better than GPT-2 XL ’s by all of the cluster-based divergences (see Fig." }, { "text": "9 for a zoomed in version). Further, while this perturbation undoubtedly affects the text’s fluency, it has less of an effect on ∆ than, e.g., truncating texts." }, { "text": "This is arguably undesirable: A metric of text quality should place more emphasis on fluency than surface statistics, such as length." } ]
rkwFe19K7.BJCfw3tCm.00
We follow several rules when selecting victim nodes. First, the attack must be successful on the victim node to fool the model. Next, we try our best to find successful attacks on victim nodes with different node degree to evaluate diverse victim nodes’ properties. Finally, we choose victim nodes among those with the same degree uniformly at random to perform the detection. We observe that without considering detection, the Multi-edges direct attack is the most successful attacking model, followed by Single-edge attack and finally Multi-edges indirect attack. Therefore, we selected 20, 10, 6 victim nodes respectively for these three attack methods. The selected victim node degrees are shown in the appendix.
We follow several rules when selecting victim nodes. First, the attack must be successful on the victim node to fool the model. Next, we try our best to find successful attacks on victim nodes with different node degree to evaluate diverse victim nodes’ properties. Finally, we choose victim nodes among those with the same degree uniformly at random to perform the detection. We observe that without considering detection, the Multi-edges direct attack is the most successful attacking model, followed by Single-edge attack and finally Multi-edges indirect attack. Therefore, we selected 20, 10, 6 victim nodes respectively for these three attack methods on real-world data. For synthetic data, we simply pick two victim nodes, one with the smallest degree and the other with the largest degree. The selected victim node degrees are shown in the appendix.
{ "annotation": [ "Content_addition" ], "instruction": "", "annotator": "annotator_02" }
{ "annotation": [ "Development" ], "instruction": "", "annotator": "annotator_07" }
rkwFe19K7
BJCfw3tCm
0
[ { "text": "We follow several rules when selecting victim nodes." }, { "text": "First, the attack must be successful on the victim node to fool the model." }, { "text": "Next, we try our best to find successful attacks on victim nodes with different node degree to evaluate diverse victim nodes’ properties." }, { "text": "Finally, we choose victim nodes among those with the same degree uniformly at random to perform the detection." }, { "text": "We observe that without considering detection, the Multi-edges direct attack is the most successful attacking model, followed by Single-edge attack and finally Multi-edges indirect attack." }, { "text": "Therefore, we selected 20, 10, 6 victim nodes respectively for these three attack methods." }, { "text": "" }, { "text": "The selected victim node degrees are shown in the appendix." } ]
[ { "text": "We follow several rules when selecting victim nodes." }, { "text": "First, the attack must be successful on the victim node to fool the model." }, { "text": "Next, we try our best to find successful attacks on victim nodes with different node degree to evaluate diverse victim nodes’ properties." }, { "text": "Finally, we choose victim nodes among those with the same degree uniformly at random to perform the detection." }, { "text": "We observe that without considering detection, the Multi-edges direct attack is the most successful attacking model, followed by Single-edge attack and finally Multi-edges indirect attack." }, { "text": "Therefore, we selected 20, 10, 6 victim nodes respectively for these three attack methods on real-world data." }, { "text": "For synthetic data, we simply pick two victim nodes, one with the smallest degree and the other with the largest degree." }, { "text": "The selected victim node degrees are shown in the appendix." } ]
B1SkMaDvr.W2MCLgZGr.00
In this paper, we prove new generalization bounds for convolutional networks that take account of this effect. As in earlier analyses for the fully connected case, our bounds are in terms of the distance from the initial weights, and the number of parameters. Additionally, our bounds are “size-free”, in the sense that they are independent of the number of pixels in the input, or the height and width of the hidden feature maps.
In this paper, we prove new generalization bounds for convolutional networks that take account of this effect. As in earlier analyses for the fully connected case, our bounds are in terms of the distance from the initial weights, and the number of parameters. Additionally, our bounds independent of the number of pixels in the input, or the height and width of the hidden feature maps.
{ "annotation": [ "Concision" ], "instruction": "Make the ideas more concise.", "annotator": "annotator_03" }
{ "annotation": [ "Concision" ], "instruction": "Remove unnecessary details.", "annotator": "annotator_07" }
B1SkMaDvr
W2MCLgZGr
0
[ { "text": "In this paper, we prove new generalization bounds for convolutional networks that take account of this effect." }, { "text": "As in earlier analyses for the fully connected case, our bounds are in terms of the distance from the initial weights, and the number of parameters." }, { "text": "Additionally, our bounds are “size-free”, in the sense that they are independent of the number of pixels in the input, or the height and width of the hidden feature maps." } ]
[ { "text": "In this paper, we prove new generalization bounds for convolutional networks that take account of this effect." }, { "text": "As in earlier analyses for the fully connected case, our bounds are in terms of the distance from the initial weights, and the number of parameters." }, { "text": "Additionally, our bounds independent of the number of pixels in the input, or the height and width of the hidden feature maps." } ]