Field              Type    Length / range
id_paragraph       string  length 20–26
parag_1            string  length 101–3.02k
parag_2            string  length 173–2.77k
annot_1            dict
annot_2            dict
id_source          string  length 8–11
id_target          string  length 8–11
index_paragraph    int64   range 0–26
list_sentences_1   list    length 1–36
list_sentences_2   list    length 1–36
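As a minimal sketch of how one record of this dataset hangs together, the Python below models a row using the field names from the schema above. The example values are illustrative placeholders adapted from the records shown (the full paragraph texts are truncated here), and the `RevisionRecord` class and its `validate` helper are assumptions for illustration, not the dataset's actual loading API.

```python
# Sketch: one record of the paragraph-revision dataset, using the schema's
# field names. RevisionRecord and validate() are illustrative, not official.
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class RevisionRecord:
    id_paragraph: str            # e.g. "hegI87bI5S.fL6Q48sfx8.09"
    parag_1: str                 # original paragraph text
    parag_2: str                 # revised paragraph text
    annot_1: Optional[dict]      # first annotator's labels (may be null)
    annot_2: Optional[dict]      # second annotator's labels (may be null)
    id_source: str
    id_target: str
    index_paragraph: int
    list_sentences_1: list = field(default_factory=list)
    list_sentences_2: list = field(default_factory=list)

    def validate(self) -> bool:
        # In the records above, id_paragraph is "<id_source>.<id_target>.<index>".
        src, tgt, idx = self.id_paragraph.rsplit(".", 2)
        return (src == self.id_source
                and tgt == self.id_target
                and int(idx) == self.index_paragraph)


rec = RevisionRecord(
    id_paragraph="hegI87bI5S.fL6Q48sfx8.09",
    parag_1="The task was created with reference to the previous study [25]. ...",
    parag_2="The task was created by referring to a previous study [28]. ...",
    annot_1={"annotation": ["Rewriting_medium"],
             "instruction": "Rewrite the middle part of the paragraph ...",
             "annotator": "annotator_10"},
    annot_2=None,
    id_source="hegI87bI5S",
    id_target="fL6Q48sfx8",
    index_paragraph=9,
    list_sentences_1=[{"text": "The task was created with reference to the previous study [25]."}],
    list_sentences_2=[{"text": "The task was created by referring to a previous study [28]."}],
)
assert rec.validate()
```

This mirrors the redundancy visible in the records: `id_paragraph` concatenates `id_source`, `id_target`, and `index_paragraph`, which a consistency check can exploit.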
7_CwM-IzWd.zcm6f5HDI.04
During training, the parameters of the multi-modal DNN are updated by stochastic gradient descent (SGD) to minimize the loss: L = CE( y, ˆ y 0 ) + CE( y, ˆ y 1 ) , where CE stands for cross-entropy. We refer to each of the cross-entropy losses as a modality-specific loss. We train the model until the highest accuracy o...
During training, the parameters of the multi-modal DNN are updated by stochastic gradient descent (SGD) to minimize the loss: L = CE( y, ˆ y 0 ) + CE( y, ˆ y 1 ) , where CE stands for cross-entropy. We refer to each of the cross-entropy losses as a modality-specific loss. We train the model until ˆ y = y for all samples...
{ "annotation": [ "Development" ], "instruction": "", "annotator": "annotator_08" }
{ "annotation": [ "Development" ], "instruction": "", "annotator": "annotator_02" }
7_CwM-IzWd
zcm6f5HDI
4
[ { "text": "During training, the parameters of the multi-modal DNN are updated by stochastic gradient descent (SGD) to minimize the loss: L = CE( y, ˆ y 0 )" }, { "text": "+ CE( y, ˆ y 1 ) , where CE stands for cross-entropy." }, { "text": "We refer to each of the cross-entropy losses as a modali...
[ { "text": "During training, the parameters of the multi-modal DNN are updated by stochastic gradient descent (SGD) to minimize the loss: L = CE( y, ˆ y 0 )" }, { "text": "+ CE( y, ˆ y 1 ) , where CE stands for cross-entropy." }, { "text": "We refer to each of the cross-entropy losses as a modali...
hegI87bI5S.fL6Q48sfx8.09
The task was created with reference to the previous study [25]. Fig- ure 3 shows a schematic of the task. A pink circular start area (251-pixel radius) and a green target were displayed on a gray back- ground. First, participants clicked on the start area, and the cursor was fixed at the center of the start area. Assum...
The task was created by referring to a previous study [28]. Figure 3 shows a schematic of the task. A pink circular start area (251-pixel radius) and a green target were displayed on a gray background. The participants clicked on the start area; the cursor positioned at the center of the start area. We strictly fixed t...
{ "annotation": [ "Rewriting_medium" ], "instruction": "Rewrite the middle part of the paragraph to make it more better. Replace some words in the paragraph.", "annotator": "annotator_10" }
{ "annotation": [ "Rewriting_medium" ], "instruction": "Slightly revise for readability, you can reorganise ideas in sentences if necessary.", "annotator": "annotator_07" }
hegI87bI5S
fL6Q48sfx8
9
[ { "text": "The task was created with reference to the previous study [25]." }, { "text": "Fig- ure 3 shows a schematic of the task." }, { "text": "A pink circular start area (251-pixel radius) and a green target were displayed on a gray back- ground." }, { "text": "First, participants cl...
[ { "text": "The task was created by referring to a previous study [28]." }, { "text": "Figure 3 shows a schematic of the task." }, { "text": "A pink circular start area (251-pixel radius) and a green target were displayed on a gray background." }, { "text": "The participants clicked on th...
SyGfyinsH.I2YVGmIp0.00
A + C + D refers to our approach. In (b), we show the same ablations over the entire trajectory until t = 20 . As can be seen, using the calibrated predictor produces a large gain and using the direct bound produce a large gain on average; these gains are most noticeable in the tails. Using the accumulated confidence pr...
A + C + D is our approach. As before, we omit results for the ablation using the VC generalization bound since n is so small that the bound does not hold for any k for the given (cid:15) and δ . In (b), we show the same ablations over the entire trajectory until t = 20 . As can be seen, using the calibrated predictor p...
{ "annotation": [ "Content_addition", "Concision" ], "instruction": "", "annotator": "annotator_09" }
{ "annotation": [ "Development" ], "instruction": "", "annotator": "annotator_07" }
SyGfyinsH
I2YVGmIp0
0
[ { "text": "A + C + D refers to our approach." }, { "text": "" }, { "text": "In (b), we show the same ablations over the entire trajectory until t = 20 ." }, { "text": "As can be seen, using the calibrated predictor produces a large gain and using the direct bound produce a large gain on ...
[ { "text": "A + C + D is our approach." }, { "text": "As before, we omit results for the ablation using the VC generalization bound since n is so small that the bound does not hold for any k for the given (cid:15) and δ ." }, { "text": "In (b), we show the same ablations over the entire trajector...
WldWha1MT.LL2ZsGpJga.03
A well-established metric to evaluate the topological performance of a segmentation network is the Betti number error, see appendix I, which compares the topological complexity of P and G . However, it is limited as it ignores the spatial correspondence ofthe topological features within their respective images (see F...
Betti number error The Betti number error β err (see App. K) compares the topological complexity of the binarized prediction P and the ground truth G . However, it is limited as it only compares the number of topological features in both images, while ignoring their spatial correspondence (see Fig.
{ "annotation": [ "Unusable" ], "instruction": "", "annotator": "annotator_02" }
{ "annotation": [ "Rewriting_heavy" ], "instruction": "Rewrite this definition in a more direct and academic style.", "annotator": "annotator_07" }
WldWha1MT
LL2ZsGpJga
3
[ { "text": "A well-established metric to evaluate the topological performance of a segmentation network is the Betti number error, see appendix I, which compares the topological complexity of P and G ." }, { "text": "However, it is limited as it ignores the spatial correspondence ofthe topological feat...
[ { "text": "Betti number error The Betti number error β err (see App. K) compares the topological complexity of the binarized prediction P and the ground truth G ." }, { "text": "However, it is limited as it only compares the number of topological features in both images, while ignoring their spatial cor...
7_CwM-IzWd.zcm6f5HDI.03
We implement the fusion module as a multi-modal transfer module (MMTM) (Joze et al., 2020). The first step in MMTM is to squeeze feature maps from each uni-modal branch to vector representations via global average pooling over spatial dimensions. Next we concatenate these representations and applya linear transformatio...
We implement every fusion module by a multi-modal transfer module (MMTM) (Joze et al., 2020). Each MMTM connects two layers from the two uni-modal branches. There is first the global average pooling applied over spatial dimensions to transform feature maps into a vector. We concatenate the two vectors and apply linear t...
{ "annotation": [ "Rewriting_heavy" ], "instruction": "Rearrange the structure to make the structure clearer.", "annotator": "annotator_08" }
{ "annotation": [ "Rewriting_heavy" ], "instruction": "Rewrite this paragraph completely to make it clearer.", "annotator": "annotator_02" }
7_CwM-IzWd
zcm6f5HDI
3
[ { "text": "We implement the fusion module as a multi-modal transfer module (MMTM) (Joze et al., 2020)." }, { "text": "The first step in MMTM is to squeeze feature maps from each uni-modal branch to vector representations via global average pooling over spatial dimensions." }, { "text": "Next we ...
[ { "text": "We implement every fusion module by a multi-modal transfer module (MMTM) (Joze et al., 2020)." }, { "text": "Each MMTM connects two layers from the two uni-modal branches. There is first the global average pooling applied over spatial dimensions to transform feature maps into a vector." }, ...
uJRtLYIOIq.e9xxGlB_c.00
Lemma 1 implies the CPD kernels in Corollary 1 can be made PD if they are added with large enoughconstants; for example, c − ∥ x − x ′ ∥ p for large enough c . Although Lemma 1 does not have an explicit construction of c , thanks to the shift-invariant property of the Softmax normalization, we can leave it as an under-...
Lemma 1 implies the CPD kernels in Corollary 1 can be made PD if a large enough constant is added. For example, c − ∥ x − x ′ ∥ p for large enough c . Although Lemma 1 does not have an explicit construction of c , thanks to the shift-invariant property of the Softmax normalization, we can leave it as an under-determine...
{ "annotation": [ "Concision" ], "instruction": "Rewrite some formulations, giving preference to shorter ones.", "annotator": "annotator_04" }
{ "annotation": [ "Concision" ], "instruction": "Shorten this paragraph a bit while keeping all the informations.", "annotator": "annotator_07" }
uJRtLYIOIq
e9xxGlB_c
0
[ { "text": "Lemma 1 implies the CPD kernels in Corollary 1 can be made PD if they are added with large enoughconstants;" }, { "text": "for example, c − ∥ x − x ′ ∥ p for large enough c ." }, { "text": "Although Lemma 1 does not have an explicit construction of c , thanks to the shift-invariant pr...
[ { "text": "Lemma 1 implies the CPD kernels in Corollary 1 can be made PD if a large enough constant is added." }, { "text": "For example, c − ∥ x − x ′ ∥ p for large enough c ." }, { "text": "Although Lemma 1 does not have an explicit construction of c , thanks to the shift-invariant property of...
xV0XmrSMtk.sYfR73R9z.02
Discrete Variational Auto-Encoder. In a discrete variational autoencoder (DVAE) (Rolfe, 2016), the network layers before the sampling solver represent the encoder and the layers after the sampling solver the decoder. We consider the task of training a DVAE on the M NIST dataset where the encoder maps the input image to...
Discrete Variational Auto-Encoder (DVAE). In a DVAE (Rolfe, 2016), the network layers before the sampling solver represent the encoder and the layers after the sampling solver the decoder. We consider the task of training a DVAE on the M NIST dataset where the encoder maps the input image to a discrete distribution of ...
{ "annotation": [ "Concision" ], "instruction": "Make this paragraph more concise by introducing acronyms earlier.", "annotator": "annotator_02" }
{ "annotation": [ "Concision" ], "instruction": "Introduce the acronym DVAE earlier to avoid repeating it.", "annotator": "annotator_07" }
xV0XmrSMtk
sYfR73R9z
2
[ { "text": "Discrete Variational Auto-Encoder." }, { "text": "In a discrete variational autoencoder (DVAE) (Rolfe, 2016), the network layers before the sampling solver represent the encoder and the layers after the sampling solver the decoder." }, { "text": "We consider the task of training a DVA...
[ { "text": "Discrete Variational Auto-Encoder (DVAE)." }, { "text": "In a DVAE (Rolfe, 2016), the network layers before the sampling solver represent the encoder and the layers after the sampling solver the decoder." }, { "text": "We consider the task of training a DVAE on the M NIST dataset wher...
PDvmJtmgQb.gGrpxbc7UI.02
Other Uses of Public Data in DP Learning: The use of in-distribution public data has been extensively explored both theoretically and empirically. On the theoretical side, it has been shown (Alon et al., 2019; Bassily et al., 2020a) that a combination of private and public data samples can yield asymptotically better w...
Other Uses of Public Data in DP Learning: The use of in-distribution public data has been extensively explored both theoretically and empirically. On the theoretical side, it has been shown [3, 10] that a combination of private and public data samples can yield asymptotically better worst-case PAC learning guarantees t...
{ "annotation": [ "Unusable" ], "instruction": "", "annotator": "annotator_03" }
{ "annotation": [ "Unusable" ], "instruction": "I want to use numbers for in-text citations. ", "annotator": "annotator_09" }
PDvmJtmgQb
gGrpxbc7UI
2
[ { "text": "Other Uses of Public Data in DP Learning: The use of in-distribution public data has been extensively explored both theoretically and empirically." }, { "text": "On the theoretical side, it has been shown (Alon et al., 2019; Bassily et al., 2020a) that a combination of private and public data...
[ { "text": "Other Uses of Public Data in DP Learning: The use of in-distribution public data has been extensively explored both theoretically and empirically." }, { "text": "On the theoretical side, it has been shown [3, 10] that a combination of private and public data samples can yield asymptotically b...
E2pFUCGYZ1.5hMS4Fg2b_b.00
ADO iterations in the Bayesian framework are shown in Sec. 3.3 and Appendix A.3. Finally, with theestimated posterior, the predictive uncertainty can be quantified by evaluating the identified systemwith an ensemble of parameters. To further improve the prediction capability, especially for chaoticsystems, we propose t...
ADO iterations in the Bayesian framework are shown in Sec. 3.3 and supplemental materials. Finally,with the estimated posterior, the predictive uncertainty can be quantified by evaluating the identifiedsystem with an ensemble of parameters. To further improve the prediction capability, especially forchaotic systems, we...
{ "annotation": [ "Rewriting_light" ], "instruction": "Use \"supplemental materials\" instead of \"Appendix\"", "annotator": "annotator_09" }
{ "annotation": [ "Rewriting_light" ], "instruction": "Lightly revise for readability.", "annotator": "annotator_07" }
E2pFUCGYZ1
5hMS4Fg2b_b
0
[ { "text": "ADO iterations in the Bayesian framework are shown in Sec." }, { "text": "3.3 and Appendix A.3." }, { "text": "Finally, with theestimated posterior, the predictive uncertainty can be quantified by evaluating the identified systemwith an ensemble of parameters." }, { "text": "T...
[ { "text": "ADO iterations in the Bayesian framework are shown in Sec." }, { "text": "3.3 and supplemental materials." }, { "text": "Finally,with the estimated posterior, the predictive uncertainty can be quantified by evaluating the identifiedsystem with an ensemble of parameters." }, { ...
MXi6uEx-hp.rdZfFcGyf9.14
AGILE clearly outperforms all the baselines demonstrating that relational knowledge of other available actions is crucial for an optimal policy. RecSim and Real RecSys : result trends are consistent with CREATE, but less pronounced for Real RecSys. Additionally, DQN is worse than CDQNbased architectures because the top...
AGILE outperforms all the baselines, demonstrating that relational knowledge of other available actions is crucial for an optimal policy. RecSim and Real RecSys : result trends are consistent with CREATE, but less pronounced for Real RecSys. Additionally, DQN is worse than CDQN-based architectures because the top-K gre...
{ "annotation": [ "Rewriting_light" ], "instruction": "Remove unnecessary words and fix the words if they are not in the correct form", "annotator": "annotator_10" }
{ "annotation": [ "Rewriting_light" ], "instruction": "Remove terms that might be considered biased. Make the writing more clear.", "annotator": "annotator_03" }
MXi6uEx-hp
rdZfFcGyf9
14
[ { "text": "AGILE clearly outperforms all the baselines demonstrating that relational knowledge of other available actions is crucial for an optimal policy." }, { "text": "RecSim and Real RecSys : result trends are consistent with CREATE, but less pronounced for Real RecSys." }, { "text": "Additi...
[ { "text": "AGILE outperforms all the baselines, demonstrating that relational knowledge of other available actions is crucial for an optimal policy." }, { "text": "RecSim and Real RecSys : result trends are consistent with CREATE, but less pronounced for Real RecSys." }, { "text": "Additionally,...
mFNezF8ubW.g-sOkbqBcm.00
Each concept in the hierarchy corresponds to one set of hidden nodes which are connected to the hidden nodes representing its children, if any. For example, if Mammal, Bird and Reptile are the descendant concept of Chordate, there will be all to all connections from the hidden nodes representing Chordate to those acc...
Each concept in the hierarchy corresponds to one set of hidden nodes that essentially represent the concept. These hidden nodes are connected to those representing its children, if any. For example, if Mammal, Bird and Reptile are the descendant concept of Chordate, there will be all to all connections from the hidden ...
{ "annotation": [ "Development" ], "instruction": "", "annotator": "annotator_07" }
null
mFNezF8ubW
g-sOkbqBcm
0
[ { "text": "Each concept in the hierarchy corresponds to one set of hidden nodes which are connected to the hidden nodes representing its children, if any." }, { "text": "For example, if Mammal, Bird and Reptile are the descendant concept of Chordate, there will be all to all connections from the hidde...
[ { "text": "Each concept in the hierarchy corresponds to one set of hidden nodes that essentially represent the concept. These hidden nodes are connected to those representing its children, if any." }, { "text": "For example, if Mammal, Bird and Reptile are the descendant concept of Chordate, there will ...
CVRUl83zah.I75TtW0V7.25
• Instead of using a relation network [] – expanding the set into the set of all pairs first – paired with FSPool, we skip the relation network entirely. This improves the complexity of the set encoder g used in iDSPN from O ( n 2 log n ) to O ( n log n ) . Using the relation network approach does improve our results sl...
• Instead of using a relation network (Santoro et al., 2017) – expanding the set into the set of all pairs first – paired with FSPool, we skip the relation network entirely. This improves the complexity of the set encoder g used in iDSPN from O ( n 2 log n ) to O ( n log n ) . Using the relation network approach would i...
{ "annotation": [ "Content_addition" ], "instruction": "", "annotator": "annotator_07" }
null
CVRUl83zah
I75TtW0V7
25
[ { "text": "• Instead of using a relation network [] – expanding the set into the set of all pairs first – paired with FSPool, we skip the relation network entirely." }, { "text": "This improves the complexity of the set encoder g used in iDSPN from O ( n 2 log n ) to O ( n log n ) ." }, { "text":...
[ { "text": "• Instead of using a relation network (Santoro et al., 2017) – expanding the set into the set of all pairs first – paired with FSPool, we skip the relation network entirely." }, { "text": "This improves the complexity of the set encoder g used in iDSPN from O ( n 2 log n ) to O ( n log n ) ." ...
lLwt-9RJ2tm.XJsauLjck.03
That said, one might still question whether it is possible to match the solution quality of a givenψ -approximate offline algorithm for the maximization objectives in the models of computation we consider. We answer this in the affirmative for at least the dissimilarity objective of [15]; ourstructural decomposition of ...
That said, one can further question whether it is possible to match the solution quality of any given ψ -approximate offline algorithm for the maximization objectives in the models of computation we consider. We answer this in the affirmative; we can in fact achieve even stronger performance guarantees for both objective...
{ "annotation": [ "Development" ], "instruction": "", "annotator": "annotator_07" }
null
lLwt-9RJ2tm
XJsauLjck
3
[ { "text": "That said, one might still question whether it is possible to match the solution quality of a givenψ -approximate offline algorithm for the maximization objectives in the models of computation we consider." }, { "text": "We answer this in the affirmative for at least the dissimilarity objective...
[ { "text": "That said, one can further question whether it is possible to match the solution quality of any given ψ -approximate offline algorithm for the maximization objectives in the models of computation we consider." }, { "text": "We answer this in the affirmative; we can in fact achieve even stronger...
9ALnOEcGN_.4eEIRZ-dm.00
We need to point out that the idea of designing a continuous space for combinatorial optimization problems has been tried by the heatmaps approaches in the literature [40, 25, 17, 36]. However, thereare several major distinctions between the existing methods and our proposed one. Previous workgenerates heatmaps based ...
We need to point out that the idea of designing a continuous space for combinatorial optimization problems has been tried by the heatmaps approaches in the literature [40, 25, 17, 14, 36]. However, there are major distinctions between the existing methods and our DIMES. For instance, Fu et al. [17] learn to generate he...
{ "annotation": [ "Rewriting_medium", "Development" ], "instruction": "", "annotator": "annotator_07" }
null
9ALnOEcGN_
4eEIRZ-dm
0
[ { "text": "We need to point out that the idea of designing a continuous space for combinatorial optimization problems has been tried by the heatmaps approaches in the literature [40, 25, 17, 36]." }, { "text": "However, thereare several major distinctions between the existing methods and our proposed o...
[ { "text": "We need to point out that the idea of designing a continuous space for combinatorial optimization problems has been tried by the heatmaps approaches in the literature [40, 25, 17, 14, 36]." }, { "text": "However, there are major distinctions between the existing methods and our DIMES. For ins...
atxti8SVk.3K9AmPwALM.16
Pascal: Scribble annotations. Table 3 shows that, without CRF post-processing, we get 74 . 1% mIoU, achieving 97 . 6% of full supervision performance; with CRF post-processing, we reach new SOTA: We get 75 . 9% mIoU, achieving 98 . 6% of full supervision performance.
Pascal: Scribble annotations. Table 3 shows that, our method consistently delivers the best performance among methods without or with CRF post-processing. We get 74 . 2% ( 76 . 1% ) mIoU, achieving 97 . 5% ( 98 . 4% ) of full supervision performance in these two categories respectively.
{ "annotation": [ "Content_substitution", "Rewriting_light" ], "instruction": "", "annotator": "annotator_02" }
{ "annotation": [ "Content_substitution", "Rewriting_medium" ], "instruction": "", "annotator": "annotator_07" }
atxti8SVk
3K9AmPwALM
16
[ { "text": "Pascal: Scribble annotations." }, { "text": "Table 3 shows that, without CRF post-processing, we get 74 . 1% mIoU, achieving 97 . 6% of full supervision performance; with CRF post-processing, we reach new" }, { "text": "SOTA: We get 75 ." }, { "text": "9% mIoU, achieving 98 ....
[ { "text": "Pascal: Scribble annotations." }, { "text": "Table 3 shows that, our method consistently delivers the best performance among methods without or with CRF post-processing." }, { "text": "We get 74 ." }, { "text": "2% ( 76 . 1% ) mIoU, achieving 97 ." }, { "text": "5% ( 9...
ByZyHzZC-.HktKf7-AW.01
Our work is also related to other work on the importance of noise in SGDs, which have been previously explored. The main inspiration for having a learning rate schedule is to anneal noise (Bottou, 1998). Neelakantan et al. (2015) observe empirically that adding noise can aid optimization of very deep networks. Our ana...
Our work is also related to the importance of noise in SGD, which has been previously explored. The main inspiration behind learning rate schedule has been shown to be noise annealing (Bottou, 1998). Neelakantan et al. (2015) observe empirically that adding noise can aid optimization of very deep networks. Our analysis...
{ "annotation": [ "Content_deletion", "Rewriting_light" ], "instruction": "Remove unnecessary content in the last sentence.", "annotator": "annotator_09" }
{ "annotation": [ "Concision", "Rewriting_light" ], "instruction": "Make the last sentence shorter, only keep the main idea. Slightly concise this paragraph and improve the english.", "annotator": "annotator_07" }
ByZyHzZC-
HktKf7-AW
1
[ { "text": "Our work is also related to other work on the importance of noise in SGDs, which have been previously explored." }, { "text": "The main inspiration for having a learning rate schedule is to anneal noise (Bottou, 1998)." }, { "text": "Neelakantan et al." }, { "text": "(2015) o...
[ { "text": "Our work is also related to the importance of noise in SGD, which has been previously explored." }, { "text": "The main inspiration behind learning rate schedule has been shown to be noise annealing (Bottou, 1998)." }, { "text": "Neelakantan et al." }, { "text": "(2015) observ...
u9NaukzyJ-.hh0KECXQLv.11
Design A supportstwo sorts of medication entries: drug or phys- ical activity. Each drug entry in the calendar is labelled with the name of the drug and suffixed with bracketed drug dosage. The suffix -WF indicates that the drug should be administered with food. Physical activity entries have a full-color fill, a dashe...
Design A supports medication (or drug) entries and physical activ- ities. Each drug entry in the calendar is labelled with the name of the drug and suffixed with bracketed drug dosage. The suffix -WF indicates that the drug should be administered with food. Physical activity entries have a full-color fill, a dashed bor...
{ "annotation": [ "Rewriting_medium" ], "instruction": "Make this paragraph a bit more fluid.", "annotator": "annotator_03" }
{ "annotation": [ "Rewriting_medium" ], "instruction": "I want to rewrite the first sentence.", "annotator": "annotator_09" }
u9NaukzyJ-
hh0KECXQLv
11
[ { "text": "Design A supportstwo sorts of medication entries: drug or phys- ical activity." }, { "text": "Each drug entry in the calendar is labelled with the name of the drug and suffixed with bracketed drug dosage." }, { "text": "The suffix -WF indicates that the drug should be administered wit...
[ { "text": "Design A supports medication (or drug) entries and physical activ- ities." }, { "text": "Each drug entry in the calendar is labelled with the name of the drug and suffixed with bracketed drug dosage." }, { "text": "The suffix -WF indicates that the drug should be administered with foo...
CVRUl83zah.I75TtW0V7.04
Because g is permutation-invariant, any ordering for the elements in Y has the same value for L . In the forward pass of the model, the arg min is approximated by running a fixed number of gradient descent steps. In the backward pass,Zhang et al. (2019) backpropagate through the gradient descent iterations in order to c...
Because g is permutation-invariant, any ordering for the elements in Y has the same value for L . In the forward pass of the model, the arg min is approximated by running a fixed number of gradient descent steps. In the backward pass, the goal is to differentiate Equation 7 with respect to the input vector z and the par...
{ "annotation": [ "Rewriting_medium" ], "instruction": "Add a sentence to explain the last sentence.", "annotator": "annotator_09" }
{ "annotation": [ "Rewriting_medium" ], "instruction": "Improve the logical flow of the last half of the paragraph.", "annotator": "annotator_07" }
CVRUl83zah
I75TtW0V7
4
[ { "text": "Because g is permutation-invariant, any ordering for the elements in Y has the same value for L ." }, { "text": "In the forward pass of the model, the arg min is approximated by running a fixed number of gradient descent steps." }, { "text": "In the backward pass,Zhang et al. (2019) ba...
[ { "text": "Because g is permutation-invariant, any ordering for the elements in Y has the same value for L ." }, { "text": "In the forward pass of the model, the arg min is approximated by running a fixed number of gradient descent steps." }, { "text": "In the backward pass, the goal is to differ...
cW17DDjQa_.6iDdN7-bYz.00
We propose an algorithm to solve above optimization problem (3). The optimization problem contains non-continuous indicator function in constraint (3d, 3c), and non-convex constraint (3b), which make the problem difficult to solve. Therefore, we first reformulate the inequality constraints as soft regularizations and int...
To address the optimization problem (3), we adopts the alternating direction method of multipliers (ADMM) for the reformulation. In details, the optimization problem contains non-continuous indicator function in constraint (3c, 3d), and non-convex constraint (3b), which make the problem difficult to solve. Therefore, we...
{ "annotation": [ "Development" ], "instruction": "", "annotator": "annotator_03" }
{ "annotation": [ "Development" ], "instruction": "", "annotator": "annotator_07" }
cW17DDjQa_
6iDdN7-bYz
0
[ { "text": "We propose an algorithm to solve above optimization problem (3)." }, { "text": "The optimization problem contains non-continuous indicator function in constraint (3d, 3c), and non-convex constraint (3b), which make the problem difficult to solve." }, { "text": "Therefore, we first refor...
[ { "text": "To address the optimization problem (3), we adopts the alternating direction method of multipliers (ADMM) for the reformulation." }, { "text": "In details, the optimization problem contains non-continuous indicator function in constraint (3c, 3d), and non-convex constraint (3b), which make th...
33RNh69fYq.kMvWVl725x.02
Setup . Anomaly detection aims to detect whether an image contains anomalous regions. Theperformance is evaluated on MVTec-AD [3]. The image size is selected as 224 × 224 , and the size forresizing feature maps is set as 14 × 14 . The feature maps from stage-1 to stage-4 of EfficientNet-b4[37] respectively have the cha...
Setup . Anomaly detection aims to detect whether an image contains anomalous regions. Theperformance is evaluated on MVTec-AD [4]. The image size is selected as 224 × 224 , and the size forresizing feature maps is set as 14 × 14 . The feature maps from stage-1 to stage-4 of EfficientNet-b4[39] are resized and concatena...
{ "annotation": [ "Concision" ], "instruction": "Remove some details on model training to make the paragraph more concise.", "annotator": "annotator_04" }
{ "annotation": [ "Concision" ], "instruction": "Remove unnecessary details to shorten this paragraph.", "annotator": "annotator_07" }
33RNh69fYq
kMvWVl725x
2
[ { "text": "Setup ." }, { "text": "Anomaly detection aims to detect whether an image contains anomalous regions." }, { "text": "Theperformance is evaluated on MVTec-AD [3]." }, { "text": "The image size is selected as 224 × 224 , and the size forresizing feature maps is set as 14 × 14 ." ...
[ { "text": "Setup ." }, { "text": "Anomaly detection aims to detect whether an image contains anomalous regions." }, { "text": "Theperformance is evaluated on MVTec-AD [4]." }, { "text": "The image size is selected as 224 × 224 , and the size forresizing feature maps is set as 14 × 14 ." ...
MXi6uEx-hp.rdZfFcGyf9.21
In the experiment of Fig. 5, we found that in RecSim the relation of items is easy to model such that AGILE could not outperform the ablations whereas AGILE outperformed the ablations in CREATE and Grid World by correctly utilizing the action relation in decision-making. Then, we hypothesized that the existence of the ...
In the experiment of Fig. 5, we found that in RecSim, the relation of items is easy to model such that AGILE could not outperform the ablations. In contrast, AGILE outperformed the ablations in CREATE and Grid World by correctly utilizing the action relation in decision-making. We hypothesize that these environments re...
{ "annotation": [ "Rewriting_medium", "Content_deletion" ], "instruction": "Make this paragraph shorter and easier to understand", "annotator": "annotator_10" }
{ "annotation": [ "Concision" ], "instruction": "Simplify the less essential ideas of the paragraph to make it more concise.", "annotator": "annotator_03" }
MXi6uEx-hp
rdZfFcGyf9
21
[ { "text": "In the experiment of Fig. 5, we found that in RecSim the relation of items is easy to model such that " }, { "text": "AGILE could not outperform the ablations whereas AGILE outperformed the ablations in CREATE and Grid World by correctly utilizing the action relation in decision-making." },...
[ { "text": "In the experiment of Fig. 5, we found that in RecSim, the relation of items is easy to model such that AGILE could not outperform the ablations." }, { "text": "In contrast, AGILE outperformed the ablations in CREATE and Grid World by correctly utilizing the action relation in decision-making....
NwOG107NKJ.0PPYM22rdB.02
Influence on the Github platform can be quantified by the number of followers, stars, mentions, quotes, and up-votes received from other users. Social network metrics such as centrality indicate how broadly influence extends (e.g. geographic interest) Weber and Luo [2014]. Other features includeproject volume, document...
Influence on the Github platform can be quantified by the number of followers, stars, mentions, quotes, and up-votes received from other users. Social network metrics such as centrality indicate how broadly influence extends (e.g. geographic interest) [Weber and Luo, 2014]. Other features include project size, file vol...
{ "annotation": [ "Rewriting_light" ], "instruction": "Make the use of a citation in the second sentence correct. Update the third sentence.", "annotator": "annotator_10" }
{ "annotation": [ "Rewriting_light" ], "instruction": "Improve the readability of this paragraph.", "annotator": "annotator_03" }
NwOG107NKJ
0PPYM22rdB
2
[ { "text": "Influence on the Github platform can be quantified by the number of followers, stars, mentions, quotes, and up-votes received from other users." }, { "text": "Social network metrics such as centrality indicate how broadly influence extends (e.g. geographic interest)" }, { "text": "Web...
[ { "text": "Influence on the Github platform can be quantified by the number of followers, stars, mentions, quotes, and up-votes received from other users." }, { "text": "Social network metrics such as centrality indicate how broadly influence extends (e.g. geographic interest)" }, { "text": "[We...
ByZyHzZC-.HktKf7-AW.00
The relationship between stochastic gradient descent (SGD) and sampling a posterior distribution via stochastic Langevin methods has been the subject of discussion in a number of papers (Chen et al., 2014; Ding et al., 2014; Vollmer et al., 2015; Welling & Teh, 2011; Shang et al., 2015; Sato & Nakagawa, 2014). In parti...
The relationship between stochastic gradient descent (SGD) and sampling a posterior distribution via stochastic Langevin methods has been the subject of discussion in a number of papers (Chen et al., 2014; Ding et al., 2014; Vollmer et al., 2015; Welling & Teh, 2011; Shang et al., 2015; Sato & Nakagawa, 2014). In parti...
{ "annotation": [ "Development", "Content_addition" ], "instruction": "", "annotator": "annotator_09" }
{ "annotation": [ "Content_addition" ], "instruction": "", "annotator": "annotator_07" }
ByZyHzZC-
HktKf7-AW
0
[ { "text": "The relationship between stochastic gradient descent (SGD) and sampling a posterior distribution via stochastic Langevin methods has been the subject of discussion in a number of papers (Chen et al., 2014;" }, { "text": "Ding et al., 2014; Vollmer et al., 2015; Welling & Teh, 2011; Shang et a...
[ { "text": "The relationship between stochastic gradient descent (SGD) and sampling a posterior distribution via stochastic Langevin methods has been the subject of discussion in a number of papers (Chen et al., 2014;" }, { "text": "Ding et al., 2014; Vollmer et al., 2015; Welling & Teh, 2011; Shang et a...
7_CwM-IzWd.zcm6f5HDI.05
During training, the uni-modal branch largely focuses on the associated modality. The fusion modules generatecross-modal context information from the uni-modal branches and pass it back to them. Both ˆ y 0 and ˆ y 1 depend on information from both modalities. We end up with two functions, f 0 and f 1 , corresponding to...
During training, each uni-modal branch largely focuses on its associate input modality. The fusion modules generate context representation using all modalities and feed such information to the unimodal branches. Both ˆ y 0 and ˆ y 1 depend on information from both modalities. We end up with two functions, f 0 and f 1 ,...
{ "annotation": [ "Rewriting_medium" ], "instruction": "Make the sentence understandable.", "annotator": "annotator_08" }
{ "annotation": [ "Rewriting_light" ], "instruction": "Improve the wording of this paragraph.", "annotator": "annotator_02" }
7_CwM-IzWd
zcm6f5HDI
5
[ { "text": "During training, the uni-modal branch largely focuses on the associated modality." }, { "text": "The fusion modules generatecross-modal context information from the uni-modal branches and pass it back to them." }, { "text": "Both ˆ y 0 and ˆ y 1 depend on information from both modalit...
[ { "text": "During training, each uni-modal branch largely focuses on its associate input modality." }, { "text": "The fusion modules generate context representation using all modalities and feed such information to the unimodal branches." }, { "text": "Both ˆ y 0 and ˆ y 1 depend on information ...
eyheq0JfG.lDLi0nFVcl.00
For example, using mixup on top of random scaling and cropping improves the results by 0.4%. This suggests that thanks to the proposed methods, we are getting closer than ever to the capacity of a real-valued model (which is amenable to stronger augmentations).
For example, using mixup on top of random scaling and cropping improves the results by 0.4%. In comparison, when we trained Real-to-Bin Martinez et al. (2020) with mixup, the accuracy dropped by 0.25% for Stage I, and 0.8% for Stage II. This suggests that, thanks to the proposed methods, we are getting closer than ever...
{ "annotation": [ "Content_addition", "Development" ], "instruction": "", "annotator": "annotator_07" }
null
eyheq0JfG
lDLi0nFVcl
0
[ { "text": "For example, using mixup on top of random scaling and cropping improves the results by 0.4%." }, { "text": "" }, { "text": "" }, { "text": "This suggests that thanks to the proposed methods, we are getting closer than ever to the capacity of a real-valued model (which is amena...
[ { "text": "For example, using mixup on top of random scaling and cropping improves the results by 0.4%." }, { "text": "In comparison, when we trained Real-to-Bin Martinez et al." }, { "text": "(2020) with mixup, the accuracy dropped by 0.25% for Stage I, and 0.8% for Stage II." }, { "tex...
CVRUl83zah.I75TtW0V7.05
Equivariance of DSPN We now discuss the particular form of equivariance that DSPN takes, which has not been done before in the context of multisets. The gradient of the permutation-invariant encoder g is always multiset-equivariant, but depending on the encoder, it is not necessarily setequivariant. Zhang et al. find t...
Equivariance of DSPN. We now discuss the particular form of equivariance that DSPN takes, which has not been done before in the context of multisets. The gradient of the permutation-invariant encoder g with respect to the set input Y is always multiset-equivariant, but depending on the encoder, it is not necessarily se...
{ "annotation": [ "Development" ], "instruction": "", "annotator": "annotator_09" }
{ "annotation": [ "Rewriting_light", "Development" ], "instruction": "", "annotator": "annotator_07" }
CVRUl83zah
I75TtW0V7
5
[ { "text": "Equivariance of DSPN We now discuss the particular form of equivariance that DSPN takes, which has not been done before in the context of multisets." }, { "text": "The gradient of the permutation-invariant encoder g is always multiset-equivariant, but depending on the encoder, it is not nece...
[ { "text": "Equivariance of DSPN. We now discuss the particular form of equivariance that DSPN takes, which has not been done before in the context of multisets." }, { "text": "The gradient of the permutation-invariant encoder g with respect to the set input Y is always multiset-equivariant, but dependin...
aomiOZE_m2.rxb2TiQ6bq.05
Lightweight Image SR Models. Recent years have been rising interest in investigating lightweight image SR models. These approaches try to design lightweight architectures, which mainly take advantage of recursive learning and channel splitting. Kim et al . firstly introduced recursive learning in DRCN to decrease model ...
Lightweight Image SR Models. Recent years have been rising interest in investigating lightweight image SR models. These approaches try to design lightweight architectures, which mainly take advantage of recursive learning and channel splitting. Kim et al . firstly decreased parameter number by utilizing recursive learni...
{ "annotation": [ "Rewriting_medium", "Concision" ], "instruction": "Can you make my paragraph more concise?", "annotator": "annotator_09" }
{ "annotation": [ "Concision" ], "instruction": "Use shorter formulations and more direct language to make the paragraph more concise.", "annotator": "annotator_04" }
aomiOZE_m2
rxb2TiQ6bq
5
[ { "text": "Lightweight Image SR Models." }, { "text": "Recent years have been rising interest in investigating lightweight image SR models." }, { "text": "These approaches try to design lightweight architectures, which mainly take advantage of recursive learning and channel splitting." }, { ...
[ { "text": "Lightweight Image SR Models." }, { "text": "Recent years have been rising interest in investigating lightweight image SR models." }, { "text": "These approaches try to design lightweight architectures, which mainly take advantage of recursive learning and channel splitting." }, { ...
gIp_U0JsFa.T3RdAsTpzN.00
Here, we present two case studies from the healthcare domain in dermatology and in clinical risk prediction using Electronic Health Records in which fairness does not transfer (Fig. 1). As per [32], we consider the setting of ‘dataset shift’, whereby a model is developed on the source data and tested on the target data...
Here, we present two case studies from the healthcare domain in dermatology and in clinical risk prediction using Electronic Health Records in which fairness does not transfer (Fig. 1). As per [27], we consider the setting of ‘dataset shift’, whereby a model is developed on the source data and tested on the target data...
{ "annotation": [ "Unusable" ], "instruction": "", "annotator": "annotator_03" }
{ "annotation": [ "Unusable" ], "instruction": "", "annotator": "annotator_07" }
gIp_U0JsFa
T3RdAsTpzN
0
[ { "text": "Here, we present two case studies from the healthcare domain in dermatology and in clinical risk prediction using Electronic Health Records in which fairness does not transfer (Fig. 1)." }, { "text": "As per [32], we consider the setting of ‘dataset shift’, whereby a model is developed on the...
[ { "text": "Here, we present two case studies from the healthcare domain in dermatology and in clinical risk prediction using Electronic Health Records in which fairness does not transfer (Fig. 1)." }, { "text": "As per [27], we consider the setting of ‘dataset shift’, whereby a model is developed on the...
7_CwM-IzWd.zcm6f5HDI.22
We report means and standard deviations of the models’ test accuracy in Table 1.[- -] The guided algorithm improves the models’ generalization performance over the vanilla algorithm in all four cases.It also outperforms the random algorithm, with the exception of ModelNet40, where their performances are very close.
We report means and standard deviations of the models’ test accuracies in Table 1.[- -] 3 RUBi does not show consistent improvement across tasks compared to the vanilla algorithm. The guided algorithm improves the models’ generalization performance over all three other methods in all four cases.
{ "annotation": [ "Content_substitution", "Development" ], "instruction": "", "annotator": "annotator_06" }
{ "annotation": [ "Content_substitution", "Rewriting_medium" ], "instruction": "", "annotator": "annotator_07" }
7_CwM-IzWd
zcm6f5HDI
22
[ { "text": "We report means and standard deviations of the models’ test accuracy in Table 1.[-\n-]" }, { "text": " The guided algorithm improves the models’ generalization performance over the vanilla algorithm in all four cases.It also outperforms the random algorithm, with the exception of ModelNet40, ...
[ { "text": "We report means and standard deviations of the models’ test accuracies in Table 1.[-\n-]" }, { "text": "3 RUBi does not show consistent improvement across tasks compared to the vanilla algorithm. The guided algorithm improves the models’ generalization performance over all three other methods...
S1-LZxvKX.rJ009I8RX.03
Most closely related to our work are dynamic sparse reparameterization techniques that emerged only recently. Like ours, these methods adaptively alter, by certain heuristic rules, reparameterization during training. Sparse evolutionary training (Mocanu et al., 2018) used magnitude-based pruning and random growth at th...
Most closely related to our work are dynamic sparse reparameterization techniques that emerged only recently. Like ours, these methods adaptively alter, by certain heuristic rules, reparameterization during training. Sparse evolutionary training (Mocanu et al., 2018) used magnitude-based pruning and random growth at th...
{ "annotation": [ "Concision" ], "instruction": "Edit the last sentence of this paragraph to make it shorter and remove the reference to Section 5.", "annotator": "annotator_02" }
{ "annotation": [ "Concision" ], "instruction": "Rewrite the last sentence to make it more concise.", "annotator": "annotator_07" }
S1-LZxvKX
rJ009I8RX
3
[ { "text": "Most closely related to our work are dynamic sparse reparameterization techniques that emerged only recently." }, { "text": "Like ours, these methods adaptively alter, by certain heuristic rules, reparameterization during training." }, { "text": "Sparse evolutionary training (Mocanu e...
[ { "text": "Most closely related to our work are dynamic sparse reparameterization techniques that emerged only recently." }, { "text": "Like ours, these methods adaptively alter, by certain heuristic rules, reparameterization during training." }, { "text": "Sparse evolutionary training (Mocanu e...
XXtXW925iG.JHwYPw52XHb.00
In the previous section, we showed that the limiting diffusion exists when ⌘ and go to zero witha fixed ratio. However, the situation is more complicated in the general case, i.e. , the intrinsic LR ⌘ ! 0 while ⌘ varies and is only upper bounded by some constant. A concrete example is ⌘ ! 0and being fixed.
In the previous section, we showed that the limiting diffusion exists when η and λ go to zero witha fixed ratio. However, the situation is more complicated in the general case, i.e. , the intrinsic LR ηλ → 0 while ηλ is upper bounded by some constant. A concrete example is η → 0 and λ beingfixed.
{ "annotation": [ "Unusable" ], "instruction": "", "annotator": "annotator_07" }
null
XXtXW925iG
JHwYPw52XHb
0
[ { "text": "In the previous section, we showed that the limiting diffusion exists when ⌘ and \u0000 go to zero witha fixed ratio." }, { "text": "However, the situation is more complicated in the general case, i.e. , the intrinsic LR ⌘\u0000 ! 0 while ⌘\u0000 varies and is only upper bounded by some consta...
[ { "text": "In the previous section, we showed that the limiting diffusion exists when η and λ go to zero witha fixed ratio." }, { "text": "However, the situation is more complicated in the general case, i.e. , the intrinsic LR ηλ → 0 while ηλ is upper bounded by some constant." }, { "text": "A c...
aFWzpdwEna.MCecpd3utK.00
In this paper, we find that model-based offline RL’s performance significantly relies on the trade-off between model return and its uncertainty, while determining the optimal trade-off is usually challenging or intractable without access to the environment that the learned policy will be deployed to. To address this probl...
In this paper, we find that model-based offline RL’s performance significantly relies on the trade-off between model return and its uncertainty, while determining the optimal trade-off is challenging without access to the realistic environment. To address the problem, we study a bi-objective formulation for model-based of...
{ "annotation": [ "Concision", "Rewriting_heavy" ], "instruction": "Make this paragraph more concise by rewriting the second half.", "annotator": "annotator_02" }
{ "annotation": [ "Concision", "Content_deletion" ], "instruction": "Make this paragraph shorter.", "annotator": "annotator_07" }
aFWzpdwEna
MCecpd3utK
0
[ { "text": "In this paper, we find that model-based offline RL’s performance significantly relies on the trade-off between model return and its uncertainty, while determining the optimal trade-off is usually challenging or intractable without access to the environment that the learned policy will be deployed to." ...
[ { "text": "In this paper, we find that model-based offline RL’s performance significantly relies on the trade-off between model return and its uncertainty, while determining the optimal trade-off is challenging without access to the realistic environment." }, { "text": "To address the problem, we study a b...
YkiRt7L93m.jgDbnUD7s.01
We introduce a notion of projection between sets of probability measures supported on Euclidean spaces. The proposed definition is applicable between sets of general probability measures with different supports and possesses good computational and statistical properties. Italso provides a unique solution to the projecti...
A notion of projection between sets of probability measures should be applicable between any set of general probability measures, replicate geometric properties of the target measure, and possess good computational and statistical properties. We introduce such a notion of projection between sets of general probability ...
{ "annotation": [ "Rewriting_medium" ], "instruction": "Please, make this paragraph easier to read.", "annotator": "annotator_01" }
{ "annotation": [ "Rewriting_heavy" ], "instruction": "Rewrite and reorganise this paragraph to improve the english and be more convincing, let the last sentence as it is.", "annotator": "annotator_07" }
YkiRt7L93m
jgDbnUD7s
1
[ { "text": "We introduce a notion of projection between sets of probability measures supported on Euclidean spaces. The proposed definition is applicable between sets of general probability measures with different supports and possesses good computational and statistical properties. " }, { "text": "Italso...
[ { "text": "A notion of projection between sets of probability measures should be applicable between any set of general probability measures, replicate geometric properties of the target measure, and possess good computational and statistical properties. We introduce such a notion of projection between sets of g...
jzQGmT-R1q.ugUt9B3XaO.02
In Figure 2 we see that the networks trained in these two experiments both exhibit decreased ability to fit later target functions under a fixed optimization budget. This effect is strongest in small networks with ReLU activations, suggesting that some units may be saturating, but we see a similar trend across most archi...
In Figure 2 we see that most networks trained in these two experiments exhibit decreasing ability to fit later target functions under a fixed optimization budget. This effect is strongest in small networks with ReLU activations, suggesting that this capacity loss may be driven by saturated units and that this phenomenon ...
{ "annotation": [ "Content_substitution" ], "instruction": "", "annotator": "annotator_07" }
null
jzQGmT-R1q
ugUt9B3XaO
2
[ { "text": "In Figure 2 we see that the networks trained in these two experiments both exhibit decreased ability to fit later target functions under a fixed optimization budget." }, { "text": "This effect is strongest in small networks with ReLU activations, suggesting that some units may be saturating, bu...
[ { "text": "In Figure 2 we see that most networks trained in these two experiments exhibit decreasing ability to fit later target functions under a fixed optimization budget." }, { "text": "This effect is strongest in small networks with ReLU activations, suggesting that this capacity loss may be driven by...
hegI87bI5S.fL6Q48sfx8.08
VZ249HR; 23.8” diagonal, 1920 × 1080 pixels) and its refresh rate was set at 75 Hz. We used an opticalmouse, Logitech gaming mouse (G-PPD-002WLr; 1600 DPI). The mouse-cursor speed via the OS setting was set to the middle of the slider in the control display and ” Enhance pointer precision ” setting was turned on to mat...
VZ249HR; 23.8” diagonal, 1920 × 1080 pixels) and its refresh rate was set at 75 Hz. We used an optical mouse (Logitech gaming mouse, G-PPD-002WLr; 1600 DPI, and the mouse-cursor speed based on the OS setting was set to the middle of the slider in the control display and the “ Enhance pointer precision ” setting was tur...
{ "annotation": [ "Rewriting_light" ], "instruction": "Improve the English of this paragraph", "annotator": "annotator_10" }
{ "annotation": [ "Rewriting_medium", "Rewriting_light" ], "instruction": "Slightly revise the linking between phrases.", "annotator": "annotator_07" }
hegI87bI5S
fL6Q48sfx8
8
[ { "text": "VZ249HR; 23.8” diagonal, 1920 × 1080 pixels) and its refresh rate was set at 75 Hz." }, { "text": "We used an opticalmouse, Logitech gaming mouse (G-PPD-002WLr; 1600 DPI)." }, { "text": "The mouse-cursor speed via the OS setting was set to the middle of the slider in the control displ...
[ { "text": "VZ249HR; 23.8” diagonal, 1920 × 1080 pixels) and its refresh rate was set at 75 Hz." }, { "text": "We used an optical mouse (Logitech gaming mouse," }, { "text": "G-PPD-002WLr; 1600 DPI, and the mouse-cursor speed based on the OS setting was set to the middle of the slider in the cont...
_nwyDQp-7.85dN7i1zNm.00
To this end, first studies in the context of meta-learning relied on probabilistic assumption (Baxter, 2000; Pentina & Lampert, 2014; Maurer et al., 2016; Amit & Meir, 2018; Yin et al., 2020) stating that meta-train and meta-test tasks distributions are all sampled i.i.d. from the same random distribution. This assumpti...
To this end, first studies in the context of meta-learning relied on probabilistic assumption (Baxter, 2000; Pentina & Lampert, 2014; Maurer et al., 2016; Amit & Meir, 2018; Yin et al., 2020) stating that meta-train and meta-test tasks distributions are all sampled i.i.d. from the same random distribution. Intuitively, ...
{ "annotation": [ "Content_addition" ], "instruction": "", "annotator": "annotator_07" }
null
_nwyDQp-7
85dN7i1zNm
0
[ { "text": "To this end, first studies in the context of meta-learning relied on probabilistic assumption (Baxter, 2000; Pentina & Lampert, 2014; Maurer et al., 2016; Amit & Meir, 2018;" }, { "text": "Yin et al., 2020) stating that meta-train and meta-test tasks distributions are all sampled i.i.d." }, ...
[ { "text": "To this end, first studies in the context of meta-learning relied on probabilistic assumption (Baxter, 2000; Pentina & Lampert, 2014; Maurer et al., 2016; Amit & Meir, 2018;" }, { "text": "Yin et al., 2020) stating that meta-train and meta-test tasks distributions are all sampled i.i.d." }, ...
OV5v_wBMHk.bw4cqlpLh.02
Estimating ITE with observational data suffers from two primary issues: (1) missing counterfactuals, i . e ., we can only observe one factual outcome out of all potential outcomes; (2) treatment selection bias, i . e ., individuals have their preferences regarding treatment selection, making the population across diffe...
Estimating ITE with observational data has two main challenges: (1) missing counterfactuals, i . e ., only one factual outcome out of all potential outcomes can be observed; (2) treatment selection bias, i . e ., individuals have their preferences for treatment selection, making units in different treatment groups hete...
{ "annotation": [ "Unusable", "Rewriting_light" ], "instruction": "", "annotator": "annotator_07" }
null
OV5v_wBMHk
bw4cqlpLh
2
[ { "text": "Estimating ITE with observational data suffers from two primary issues: (1) missing counterfactuals, i ." }, { "text": "e ., we can only observe one factual outcome out of all potential outcomes; (2) treatment selection bias, i ." }, { "text": "e ., individuals have their preferences ...
[ { "text": "Estimating ITE with observational data has two main challenges: (1) missing counterfactuals, i ." }, { "text": "e ., only one factual outcome out of all potential outcomes can be observed; (2) treatment selection bias, i ." }, { "text": "e ., individuals have their preferences for tre...
aomiOZE_m2.rxb2TiQ6bq.07
We first give a brief view of the problem setting about deep CNN for image SR. We also observe that there exists heavy redundancy in the networks. To pursue more efficient image SR networks, we then propose structure-regularized pruning (SRP) method to compress them.
We first present an overview of the problem setting about deep CNN for image SR. It is also observed that excessive redundancy exists in the SR deep CNNs. Then we move on to proposing our structureregularized pruning (SRP) method attempting to achieve more efficient SR networks.
{ "annotation": [ "Rewriting_heavy" ], "instruction": "Can you paraphrase the last sentence?", "annotator": "annotator_09" }
{ "annotation": [ "Rewriting_heavy" ], "instruction": "Rewrite the last sentence preferring passive voice over active.", "annotator": "annotator_04" }
aomiOZE_m2
rxb2TiQ6bq
7
[ { "text": "We first give a brief view of the problem setting about deep CNN for image SR." }, { "text": "We also observe that there exists heavy redundancy in the networks." }, { "text": "To pursue more efficient image SR networks, we then propose structure-regularized pruning (SRP) method to co...
[ { "text": "We first present an overview of the problem setting about deep CNN for image SR." }, { "text": "It is also observed that excessive redundancy exists in the SR deep CNNs." }, { "text": "Then we move on to proposing our structureregularized pruning (SRP) method attempting to achieve more...
nCTSF9BQJ.DGhBYSP_sR.02
Recently, deep learning has gained tremendous success in modeling proteins, making data-driven methods more appealing than ever (Rives et al., 2019; Jumper et al., 2021). Nevertheless, challenges exist for developing deep learning-based models to predict mutational effects on protein-protein binding. The major challeng...
Recently, deep learning has shown significant promise in modeling proteins, making data-driven approaches more attractive than ever (Rives et al., 2019; Jumper et al., 2021). However, developing deep learning-based models to predict mutational effects on protein-protein binding is challenging due to the scarcity of exp...
{ "annotation": [ "Rewriting_medium" ], "instruction": "Rewrite the following paragraph using a more formal language.", "annotator": "annotator_01" }
{ "annotation": [ "Rewriting_medium" ], "instruction": "Rewrite this paragraph for better readability.", "annotator": "annotator_07" }
nCTSF9BQJ
DGhBYSP_sR
2
[ { "text": "Recently, deep learning has gained tremendous success in modeling proteins, making data-driven methods more appealing than ever (Rives et al., 2019; Jumper et al., 2021)." }, { "text": "Nevertheless, challenges exist for developing deep learning-based models to predict mutational effects on p...
[ { "text": "Recently, deep learning has shown significant promise in modeling proteins, making data-driven approaches more attractive than ever (Rives et al., 2019; Jumper et al., 2021)." }, { "text": "However, developing deep learning-based models to predict mutational effects on protein-protein binding...
g5N2H6sr7.6J3ec8Dl3p.02
Kernel (MLG) (Kondor & Pan, 2016). In addition, we compare with four unsupervised graph-level representation learning methods: SUB2VEC (Adhikari et al., 2018), GRAPH2VEC (Narayanan et al., 2017), INFOGRAPH (Sun et al., 2020) and MVGRL (Hassani & Khasahmadi, 2020). We denote our framework using (1) GCN (Kipf & Welling, ...
Kernel (MLG) (Kondor & Pan, 2016). In addition, we compare with four unsupervised graph-level representation learning methods: SUB2VEC (Adhikari et al., 2018), GRAPH2VEC (Narayanan et al., 2017), INFOGRAPH (Sun et al., 2020) and MVGRL (Hassani & Khasahmadi, 2020). We also include the results of recent supervised graph ...
{ "annotation": [ "Content_addition" ], "instruction": "", "annotator": "annotator_07" }
null
g5N2H6sr7
6J3ec8Dl3p
2
[ { "text": "Kernel (MLG) (Kondor & Pan, 2016)." }, { "text": "In addition, we compare with four unsupervised graph-level representation learning methods: SUB2VEC (Adhikari et al., 2018), GRAPH2VEC" }, { "text": "(Narayanan et al., 2017), INFOGRAPH (Sun et al., 2020) and MVGRL (Hassani & Khasahmad...
[ { "text": "Kernel (MLG) (Kondor & Pan, 2016)." }, { "text": "In addition, we compare with four unsupervised graph-level representation learning methods: SUB2VEC (Adhikari et al., 2018), GRAPH2VEC" }, { "text": "(Narayanan et al., 2017), INFOGRAPH (Sun et al., 2020) and MVGRL (Hassani & Khasahmad...
hegI87bI5S.fL6Q48sfx8.11
We defined the notch position ( Position ) as the condition. Position = Inside indicated that the notch was placed between the start area and the target, and Position = Outside indicated that the notch was placed to the right of the target. When the angle of entry to a target adjacent to a top edge with respect to the ...
We defined the notch position ( Position ) as the condition. Position = Inside indicates that the notch is placed between the start area and the target, and Position = Outside indicates that the notch is placed to the left of the target. An equivalent effect is observed at angles of entry that are lineally symmetric ab...
{ "annotation": [ "Rewriting_medium", "Content_substitution" ], "instruction": "", "annotator": "annotator_07" }
null
hegI87bI5S
fL6Q48sfx8
11
[ { "text": "We defined the notch position ( Position ) as the condition." }, { "text": "Position =" }, { "text": "Inside indicated that the notch was placed between the start area and the target, and Position = Outside indicated that the notch was placed to the right of the target." }, { ...
[ { "text": "We defined the notch position ( Position ) as the condition." }, { "text": "Position =" }, { "text": "Inside indicates that the notch is placed between the start area and the target, and Position = Outside indicates that the notch is placed to the left of the target." }, { "te...
aVemIPPM7t.-8hV3QV4L9.00
Experiments were conducted on a small number of n1-standard-96 Google Cloud Platform VM instances, with 48 CPU cores on an Intel Skylake processor and 360 GB of RAM. It takes less than a week of compute on a single n1-standard-96 instance to run all the experiments described in this paper.
Experiments were conducted on a workstation (Intel i9-7920X CPU with 64 GB of RAM), and a small number of r5.24xlarge AWS VM instances, with 48 CPU cores on an Intel Skylake processor and 768 GB of RAM. It takes less than a week of compute on a single r5.24xlarge instance to run all the experiments described in this pa...
{ "annotation": [ "Content_substitution" ], "instruction": "", "annotator": "annotator_07" }
null
aVemIPPM7t
-8hV3QV4L9
0
[ { "text": "Experiments were conducted on a small number of n1-standard-96 Google Cloud Platform VM instances, with 48 CPU cores on an Intel Skylake processor and 360 GB of RAM." }, { "text": "It takes less than a week of compute on a single n1-standard-96 instance to run all the experiments described i...
[ { "text": "Experiments were conducted on a workstation (Intel i9-7920X CPU with 64 GB of RAM), and a small number of r5.24xlarge AWS VM instances, with 48 CPU cores on an Intel Skylake processor and 768 GB of RAM." }, { "text": "It takes less than a week of compute on a single r5.24xlarge instance to ru...
SRquLaHRM4.vI2x5N-YHC.00
We solve this problem by introducing the optimal transport theory [51] and formulate the feature setsas a discrete probability distribution where each feature has an equal probability value. Furthermore,to reduce the computational cost and avoid the extra model parameters, we learn the prompts witha two-stage optimizat...
We solve this problem by introducing the optimal transport theory [50] and formulate the feature setsas a discrete probability distribution where each feature has an equal probability value. Furthermore,to reduce the computational cost and avoid the extra model parameters, we learn the prompts witha two-stage optimizat...
{ "annotation": [ "Content_deletion" ], "instruction": "Remove any unessential information in this paragraph.", "annotator": "annotator_03" }
{ "annotation": [ "Content_deletion", "Rewriting_light" ], "instruction": "Please exclude the content related to optimal transport.", "annotator": "annotator_09" }
SRquLaHRM4
vI2x5N-YHC
0
[ { "text": "We solve this problem by introducing the optimal transport theory [51] and formulate the feature setsas a discrete probability distribution where each feature has an equal probability value." }, { "text": "Furthermore,to reduce the computational cost and avoid the extra model parameters, we l...
[ { "text": "We solve this problem by introducing the optimal transport theory [50] and formulate the feature setsas a discrete probability distribution where each feature has an equal probability value." }, { "text": "Furthermore,to reduce the computational cost and avoid the extra model parameters, we l...
aomiOZE_m2.rxb2TiQ6bq.20
Model Size and Mult-Adds. Compared with recent works (e.g., MemNet, CARN, and IMDN), our SRPN-L has the least parameter number. We also provide operations number with Mult-Adds by setting the output size as 3 × 1280 × 720. Our SRPN-L operates less Mult-Adds than most compared methods. Those comparisons indicate that S...
Model Size and Mult-Adds. Our SRPN-Lite has the fewest parameter number in comparison to recent efficient SR works such as MemNet, CARN, and IMDN. The comparison in terms of MultAdds (measured when the output size is set to 3 × 1,280 × 720) is also presented. As seen, our SRPNLite costs fewer Mult-Adds than most compari...
{ "annotation": [ "Rewriting_heavy" ], "instruction": "Give me a more formal version of this paragraph", "annotator": "annotator_01" }
{ "annotation": [ "Rewriting_medium" ], "instruction": "Rephrase the text and change SRPN-L to SRPN-Lite", "annotator": "annotator_06" }
aomiOZE_m2
rxb2TiQ6bq
20
[ { "text": "Model Size and Mult-Adds." }, { "text": "Compared with recent works (e.g., MemNet, CARN, and IMDN), our SRPN-L has the least parameter number." }, { "text": "We also provide operations number with Mult-Adds by setting the output size as 3 × 1280 × 720." }, { "text": "Our SRPN...
[ { "text": "Model Size and Mult-Adds." }, { "text": "Our SRPN-Lite has the fewest parameter number in comparison to recent efficient SR works such as MemNet, CARN, and IMDN." }, { "text": "The comparison in terms of MultAdds (measured when the output size is set to 3 × 1,280 × 720) is also present...
MnewiFDvHZ.iAYttXl-uH.00
• Fixed constraints g t p x q “ g p x q , @ t, where the constraint functions are the same across the timebut they are not necessary to be known when making decision at round t . Note the setting ofknown and fixed constraints in [14, 17, 29, 33] is a special case of ours.• Adversarial constraints g t p x q , where the ...
• Fixed constraints g t p x q “ g p x q , @ t, where the constraint function is known (fixed) when makingdecision at round t as in [15, 12, 30, 26]. • Adversarial constraints g t p x q , where the constraint function g t p x q is unknown when making decision at round t and can be arbitrarily and adversarially chosen, as...
{ "annotation": [ "Concision" ], "instruction": "Make paragraph more concise", "annotator": "annotator_06" }
{ "annotation": [ "Concision" ], "instruction": "Make this paragraph shorter.", "annotator": "annotator_07" }
MnewiFDvHZ
iAYttXl-uH
0
[ { "text": "• Fixed constraints g t p x q “ g p x q , @ t, where the constraint functions are the same across the timebut they are not necessary to be known when making decision at round t . Note the setting ofknown and fixed constraints in [14, 17, 29, 33] is a special case of ours.• Adversarial constraints g t...
[ { "text": "• Fixed constraints g t p x q “ g p x q , @ t, where the constraint function is known (fixed) when makingdecision at round t as in [15, 12, 30, 26]. • Adversarial constraints g t p x q , where the constraint function g t p x q is unknown when making decision at round t and can be arbitrarily and adver...
3686sm4Cs.AJMXMDLVn.01
Results. Table 1(a) compares our SuperWeight Ensembles (SWE-HO) on the CIFAR-100 CIFAR100-C, and CIFAR-10 with prior work in efficient ensembling. Note that all the methods boost performance over a single model without requiring additional model parameters. However, our SuperWeight Ensembles outperforms all other metho...
Results. Table 1(a) compares our SuperWeight Ensembles (SWE-HO) on the CIFAR-100 CIFAR100-C, and CIFAR-10 with prior work in efficient ensembling. Note that all the methods boost performance over a single model without requiring additional model parameters. However, our SuperWeight Ensembles outperforms all other metho...
{ "annotation": [ "Unusable" ], "instruction": "", "annotator": "annotator_02" }
{ "annotation": [ "Unusable" ], "instruction": "", "annotator": "annotator_07" }
3686sm4Cs
AJMXMDLVn
1
[ { "text": "Results." }, { "text": "Table 1(a) compares our SuperWeight Ensembles (SWE-HO) on the CIFAR-100 CIFAR100-C, and CIFAR-10 with prior work in efficient ensembling." }, { "text": "Note that all the methods boost performance over a single model without requiring additional model parameter...
[ { "text": "Results." }, { "text": "Table 1(a) compares our SuperWeight Ensembles (SWE-HO) on the CIFAR-100 CIFAR100-C, and CIFAR-10 with prior work in efficient ensembling." }, { "text": "Note that all the methods boost performance over a single model without requiring additional model parameter...
OV5v_wBMHk.bw4cqlpLh.08
However, as neural network estimators mainly update parameters with stochastic gradient methods, only a subset of the representation’s distribution is accessible within each iteration. As such, a shortcut (Liuyi et al., 2018) is to calculate the group discrepancy at a stochastic mini-batch level:
However, since prevalent neural estimators mainly update parameters with stochastic gradient methods, only a fraction of the units is accessible within each iteration. A shortcut in this context is to calculate the group discrepancy at a stochastic mini-batch level:
{ "annotation": [ "Rewriting_light" ], "instruction": "check the wordings but keep the original content as much as possible", "annotator": "annotator_05" }
{ "annotation": [ "Rewriting_light" ], "instruction": "Improve the language to make it more formal.", "annotator": "annotator_07" }
OV5v_wBMHk
bw4cqlpLh
8
[ { "text": "However, as neural network estimators mainly update parameters with stochastic gradient methods, only a subset of the representation’s distribution is accessible within each iteration." }, { "text": "As such, a shortcut (Liuyi et al., 2018) is to calculate the group discrepancy at a stochasti...
[ { "text": "However, since prevalent neural estimators mainly update parameters with stochastic gradient methods, only a fraction of the units is accessible within each iteration." }, { "text": "A shortcut in this context is to calculate the group discrepancy at a stochastic mini-batch level:" } ]
5Eyr2crzI.s502diDSt.00
We also display the trade-off between inference speed and coverage from hierarchical refinement in Fig. From 16 upsampled points at the last iteration and lower, coverage performance starts to diminish while little speed gains are made. We still kept a relatively high N=64 in our model as we wanted to insure a wide cov...
We also display the trade-off between inference speed and coverage from hierarchical refinement in Fig. 7, evaluated on the Interpret multi-agent dataset with marginal MissRate 6 . The curve is obtained setting the number N of upsampled points at the last refinement iteration from 2 to 128. From N = 16 and lower, coverag...
{ "annotation": [ "Development" ], "instruction": "", "annotator": "annotator_07" }
null
5Eyr2crzI
s502diDSt
0
[ { "text": "We also display the trade-off between inference speed and coverage from hierarchical refinement in " }, { "text": "Fig." }, { "text": "From 16 upsampled points at the last iteration and lower, coverage performance starts to diminish while little speed gains are made." }, { "te...
[ { "text": "We also display the trade-off between inference speed and coverage from hierarchical refinement in Fig." }, { "text": "7, evaluated on the Interpret multi-agent dataset with marginal MissRate 6 . The curve is obtained setting the number N of upsampled points at the last refinement iteration fro...
atxti8SVk.3K9AmPwALM.15
Pascal: Image tag annotations. On Pascal VOC dataset, our method outperforms others by a large margin. Table 2 shows that, without additional saliency labels, our method still achieves SOTA. Compared to (Chang et al., 2020), we improves mIoU bya sizable 4 . 5% .
Pascal: Image tag annotations. Table 2 shows that, without using additional saliency labels, our method outperforms existing methods with saliency by 4 . 4% , and those without saliency by 5 .
{ "annotation": [ "Content_deletion", "Content_substitution" ], "instruction": "", "annotator": "annotator_02" }
{ "annotation": [ "Content_substitution" ], "instruction": "", "annotator": "annotator_07" }
atxti8SVk
3K9AmPwALM
15
[ { "text": "Pascal: Image tag annotations." }, { "text": "On Pascal VOC dataset, our method outperforms others by a large margin." }, { "text": "Table 2 shows that, without additional saliency labels, our method still achieves SOTA. Compared to (Chang et al., 2020), we improves mIoU bya sizable ...
[ { "text": "Pascal: Image tag annotations." }, { "text": "" }, { "text": "Table 2 shows that, without using additional saliency labels, our method outperforms existing methods with saliency by 4 . 4% , and those without saliency by 5 ." } ]
OzYyHKPyj7.O9Mk1uqXra.01
The stack of Joulin & Mikolov (2015) simulatespartial pushes and pops by making each stack element a convex combination, or “superposition,” of the elements immediately above and below it (resulting from pushing and popping, respectively). In this model, stack elements are again vectors, and 𝑎 𝑡 = ( a 𝑡 , v 𝑡 ) , ...
The stack of Joulin & Mikolov (2015) simulates a combination of partial stack actions by computing three new, separate stacks: one with all cells shifted down (push), kept the same (no-op), and shifted up (pop). The new stack is then an element-wise interpolation (“superposition”) of these three stacks. In this model, ...
{ "annotation": [ "Content_addition" ], "instruction": "", "annotator": "annotator_10" }
{ "annotation": [ "Development" ], "instruction": "", "annotator": "annotator_03" }
OzYyHKPyj7
O9Mk1uqXra
1
[ { "text": "The stack of Joulin & Mikolov (2015) simulatespartial pushes and pops by making each stack element a convex combination, or “superposition,” of the elements immediately above and below it (resulting from pushing and popping, respectively)." }, { "text": "In this model, stack elements are aga...
[ { "text": "The stack of Joulin & Mikolov (2015) simulates a combination of partial stack actions by computing three new, separate stacks: one with all cells shifted down (push), kept the same (no-op), and shifted up (pop). The new stack is then an element-wise interpolation (“superposition”) of these three stac...
BkwlK_dPB.SJfZLu8oB.00
It is worth noting that while the sample complexity is bounded, the above result implies that its complexity varies according to problem-specific properties, which are encapsulated in the value of ˆ a and ˆ b . Intuitively, ˆ a depends on the scale of the problem such asvolume of the goal set |F RLgoal | and how complex...
It is worth noting that while the sample complexity is bounded, the above result implies that its complexity varies according to problem-specific properties, which are encapsulated in the value of ˆ a and ˆ b . Intuitively, ˆ a depends on the scale of the problem. It grows as |F RLgoal | becomes smaller or as the length...
{ "annotation": [ "Development", "Rewriting_medium" ], "instruction": "", "annotator": "annotator_04" }
{ "annotation": [ "Rewriting_medium" ], "instruction": "Rephrase the text to make it more direct and readable when necessary.", "annotator": "annotator_07" }
BkwlK_dPB
SJfZLu8oB
0
[ { "text": "It is worth noting that while the sample complexity is bounded, the above result implies that its complexity varies according to problem-specific properties, which are encapsulated in the value of ˆ a and ˆ b ." }, { "text": "Intuitively, ˆ a depends on the scale of the problem such asvolume o...
[ { "text": "It is worth noting that while the sample complexity is bounded, the above result implies that its complexity varies according to problem-specific properties, which are encapsulated in the value of ˆ a and ˆ b ." }, { "text": "Intuitively, ˆ a depends on the scale of the problem. It grows as |F...
URRc6L6nmE.yUoqIf6zGY.00
A less conservative approach consists of using the off-line data to train multiple neural networks, each for a certain region of the state space. Finally, the discontinuities of (4), (12) might be problematic and create chattering when implemented in real actuators. A continuous approximation that has shown to yield s...
A less conservative and more robust approach consists of using the off-line data to train multiple neural networks, each for a certain region of the state space; such an approach constitutes part of our future work. Finally, the discontinuities of (4), (12) might be problematic and create chattering when implemented in...
{ "annotation": [ "Development" ], "instruction": "", "annotator": "annotator_09" }
{ "annotation": [ "Development" ], "instruction": "", "annotator": "annotator_07" }
URRc6L6nmE
yUoqIf6zGY
0
[ { "text": "A less conservative approach consists of using the off-line data to train multiple neural networks, each for a certain region of the state space." }, { "text": "Finally, the discontinuities of (4), (12) might be problematic and create chattering when implemented in real actuators." }, { ...
[ { "text": "A less conservative and more robust approach consists of using the off-line data to train multiple neural networks, each for a certain region of the state space; such an approach constitutes part of our future work." }, { "text": "Finally, the discontinuities of (4), (12) might be problematic...
kAwMEYEIN.RlDWAM6qF.00
HJB equation is stable only if p is sufficiently large. Such a theoretical finding reveals that the widelyused L 2 loss is not suitable for training PINN on high-dimensional HJB equations, while L ∞ loss is abetter choice. The theory also inspires us to develop a novel PINN training algorithm to minimize the L ∞ loss f...
HJB equation is stable only if p is sufficiently large. Such a theoretical finding reveals that the widelyused L 2 loss is not suitable for training PINN on high-dimensional HJB equations, while L ∞ loss isa better choice. The theory also inspires us to develop a novel PINN training algorithm to minimize the L ∞ loss f...
{ "annotation": [ "Content_addition" ], "instruction": "", "annotator": "annotator_02" }
{ "annotation": [ "Content_addition" ], "instruction": "", "annotator": "annotator_07" }
kAwMEYEIN
RlDWAM6qF
0
[ { "text": "HJB equation is stable only if p is sufficiently large." }, { "text": "Such a theoretical finding reveals that the widelyused L 2 loss is not suitable for training PINN on high-dimensional HJB equations, while L ∞ loss is abetter choice." }, { "text": "The theory also inspires us to d...
[ { "text": "HJB equation is stable only if p is sufficiently large." }, { "text": "Such a theoretical finding reveals that the widelyused L 2 loss is not suitable for training PINN on high-dimensional HJB equations, while L ∞ loss isa better choice." }, { "text": "The theory also inspires us to d...
YCmehaMzt.kHwUIOFr_.00
In addition, we combine EM and our proposed OPS together to craft a kind of composed unlearnable examples. Since OPS only modified a single pixel, after being applied to EM perturbed images, the imperceptibility can still be guaranteed. We evaluate the effectiveness of this composing method under different training st...
Naturally, for the purpose of complementing each other, we can combine EM and our proposed OPS together to craft a kind of ensemble shortcut. Since OPS only modified a single pixel, after being applied to EM perturbed images, the imperceptibility can still be guaranteed. We evaluate the effectiveness of this ensemble m...
{ "annotation": [ "Rewriting_medium" ], "instruction": "Change the idea of \"composition\" to \"ensemble\" if this paragraph. Fix any spelling mistake.", "annotator": "annotator_03" }
{ "annotation": [ "Rewriting_medium" ], "instruction": "Rewrite the first sentence. Improve English in this paragraph.", "annotator": "annotator_07" }
YCmehaMzt
kHwUIOFr_
0
[ { "text": "In addition, we combine EM and our proposed OPS together to craft a kind of composed unlearnable examples." }, { "text": "Since OPS only modified a single pixel, after being applied to EM perturbed images, the imperceptibility can still be guaranteed." }, { "text": "We evaluate the e...
[ { "text": "Naturally, for the purpose of complementing each other, we can combine EM and our proposed OPS together to craft a kind of ensemble shortcut." }, { "text": "Since OPS only modified a single pixel, after being applied to EM perturbed images, the imperceptibility can still be guaranteed." }, ...
NcdK3bdqnA.kF_TmXY8G0.00
The results in Table 6 demonstrate that adopting image-specific linear projections outperforms directly sharing the contextual projections. The two types of image-specific linear projections do not lead to substantial performance differences. Thus, we take the strategy of only adding additional linear bias for augmente...
The results in Table 6 demonstrate that adopting image-specific projection bias outperforms directly sharing the contextual projection bias. Introducing additional image-specific linear projection weights does not lead to further performance increase. Thus, we take the strategy of only adding additional linear bias for...
{ "annotation": [ "Rewriting_medium", "Content_substitution" ], "instruction": "", "annotator": "annotator_07" }
null
NcdK3bdqnA
kF_TmXY8G0
0
[ { "text": "The results in Table 6 demonstrate that adopting image-specific linear projections outperforms directly sharing the contextual projections." }, { "text": "The two types of image-specific linear projections do not lead to substantial performance differences." }, { "text": "Thus, we tak...
[ { "text": "The results in Table 6 demonstrate that adopting image-specific projection bias outperforms directly sharing the contextual projection bias." }, { "text": "Introducing additional image-specific linear projection weights does not lead to further performance increase." }, { "text": "Thu...
mS4xvgSiEH.i-a3xp3usm.00
The central novelty of our work is in realizing that by learning a discrete representation, we can perform structured search on two levels. To ensure that the discrete latent space is necessary, we introduce two ablative baselines, which replace the VQ-VAE with either a generic autoencoder or a VAE.
The central novelty of our work is in realizing that by learning a discrete representation, we can perform structured search on two levels. We introduce two ablative baselines, which replace the VQ-VAE with either a generic autoencoder or a VAE.
{ "annotation": [ "Concision" ], "instruction": "Make this paragraph more concise.", "annotator": "annotator_02" }
{ "annotation": [ "Concision" ], "instruction": "Make this paragraph shorter.", "annotator": "annotator_07" }
mS4xvgSiEH
i-a3xp3usm
0
[ { "text": "The central novelty of our work is in realizing that by learning a discrete representation, we can perform structured search on two levels." }, { "text": "To ensure that the discrete latent space is necessary, we introduce two ablative baselines, which replace the VQ-VAE with either a generic...
[ { "text": "The central novelty of our work is in realizing that by learning a discrete representation, we can perform structured search on two levels." }, { "text": "We introduce two ablative baselines, which replace the VQ-VAE with either a generic autoencoder or a VAE." } ]
g5N2H6sr7.6J3ec8Dl3p.04
Running time We observe that our model runs significantly faster than INFOGRAPH and MVGRL, i.e., it is 10 times faster than INFOGRAPH and 15 times faster than MVGRL on PROTEINS. This is because our model neglects the tedious process of negative sampling used in both INFOGRAPH and MVGRL.
Running time We observe that our model runs significantly faster than INFOGRAPH and MVGRL. Our model takes 10s to train one epoch of PORTEINS on Tesla P40 24G, while INFOGRAPH needs 127s and MVGRL needs 193s. This is because our model neglects the tedious process of negative sampling used in both INFOGRAPH and MVGRL.
{ "annotation": [ "Rewriting_medium", "Development" ], "instruction": "", "annotator": "annotator_07" }
null
g5N2H6sr7
6J3ec8Dl3p
4
[ { "text": "Running time We observe that our model runs significantly faster than INFOGRAPH and MVGRL, i.e., it is 10 times faster than INFOGRAPH and 15 times faster than MVGRL on PROTEINS." }, { "text": "This is because our model neglects the tedious process of negative sampling used in both INFOGRAPH a...
[ { "text": "Running time We observe that our model runs significantly faster than INFOGRAPH and MVGRL. Our model takes 10s to train one epoch of PORTEINS on Tesla P40 24G, while INFOGRAPH needs 127s and MVGRL needs 193s." }, { "text": "This is because our model neglects the tedious process of negative sam...
aomiOZE_m2.rxb2TiQ6bq.06
Neural Network Pruning. Pruning aims to remove parameters in a neural network without compromising its performance seriously (Reed, 1993; Sze et al., 2017). It mainly falls into two groups: filter pruning (a.k.a. structured pruning) 1 and weight-element pruning (a.k.a. unstructured pruning). The former aims to remove we...
Neural Network Pruning. Network pruning aims to eliminate redundant parameters in a neural network without compromising its performance seriously (Reed, 1993; Sze et al., 2017). The methodology of pruning mainly falls into two groups: filter pruning (or more generally known as structured pruning) * and weight-element pr...
{ "annotation": [ "Development", "Concision" ], "instruction": "", "annotator": "annotator_09" }
{ "annotation": [ "Concision", "Rewriting_medium" ], "instruction": "Rewrite the last sentence to make it more concise by removing shortcomings of other work.", "annotator": "annotator_04" }
aomiOZE_m2
rxb2TiQ6bq
6
[ { "text": "Neural Network Pruning." }, { "text": "Pruning aims to remove parameters in a neural network without compromising its performance seriously (Reed, 1993; Sze et al., 2017)." }, { "text": "It mainly falls into two groups: filter pruning (a.k.a. structured pruning) 1 and weight-element pr...
[ { "text": "Neural Network Pruning." }, { "text": "Network pruning aims to eliminate redundant parameters in a neural network without compromising its performance seriously (Reed, 1993; Sze et al., 2017)." }, { "text": "The methodology of pruning mainly falls into two groups: filter pruning (or mo...
7_CwM-IzWd.zcm6f5HDI.21
Improved generalization performance We compare the generalization ability of the three algorithms (guided, random and vanilla). For each algorithm, we train three repetitions of each model using the same learning rate: 0.01, 0.1 and 0.01 for Colored-and-gray-MNIST, ModelNet40 and
Improved generalization performance We compare the generalization ability of multi-modal DNNs trained by the three algorithms (guided, random and vanilla) and the RUBi learning strategy (Cadene et al., 2019). For each algorithm, we train each model three times with the same learning rate. We use 0.01, 0.1 and 0.01 as l...
{ "annotation": [ "Development", "Rewriting_medium" ], "instruction": "", "annotator": "annotator_06" }
{ "annotation": [ "Development", "Rewriting_medium" ], "instruction": "", "annotator": "annotator_08" }
7_CwM-IzWd
zcm6f5HDI
21
[ { "text": "Improved generalization performance We compare the generalization ability of the three algorithms (guided, random and vanilla)." }, { "text": "For each algorithm, we train three repetitions of each model using the same learning rate: 0.01, 0.1 and 0.01 for Colored-and-gray-MNIST, ModelNet40...
[ { "text": "Improved generalization performance We compare the generalization ability of multi-modal DNNs trained by the three algorithms (guided, random and vanilla) and the RUBi learning strategy (Cadene et al., 2019)." }, { "text": "For each algorithm, we train each model three times with the same lea...
sIqSoZ9KiO.KLlOZMoJ9G.01
To study its performance impact in a more constrained setting, SDN was paired with a VAE architecturally much simpler than IAF-VAE. Apart from the implementation simplicity and shorter training time, a non-hierarchical VAE is more suitable for representation learning – there is a single stochastic vector and not a hie...
To study its performance impact in a more constrained setting, SDN was paired with a VAE architecturally much simpler than IAF-VAE. Apart from the implementation simplicity and shorter training time, non-hierarchical VAE is more suitable for disentangled representation learning, at least in the sense of (Higgins et al....
{ "annotation": [ "Rewriting_medium" ], "instruction": "Make sentence precise.", "annotator": "annotator_08" }
{ "annotation": [ "Rewriting_light" ], "instruction": "Rephrase the second sentence, mostly focusing on the second half.", "annotator": "annotator_07" }
sIqSoZ9KiO
KLlOZMoJ9G
1
[ { "text": "To study its performance impact in a more constrained setting, SDN was paired with a VAE architecturally much simpler than IAF-VAE." }, { "text": "Apart from the implementation simplicity and shorter training time, a non-hierarchical VAE is more suitable for representation learning – there i...
[ { "text": "To study its performance impact in a more constrained setting, SDN was paired with a VAE architecturally much simpler than IAF-VAE." }, { "text": "Apart from the implementation simplicity and shorter training time, non-hierarchical VAE is more suitable for disentangled representation learning...
q4rMz7ZfFG.uyxGiQeMP.01
We give two cases of the GraphCodeBERT output for this task in Figure 6. In the first example, the model successfully finds Python source code that correctly matches the sementic of the query “Scans through a string for substrings matched some patterns”. The source code finds all substrings by calling re.findall () build-...
We use GraphCodeBERT to separately encode query and source code with data flow, and calculate inner product of their representations of the special token [ CLS ] as relevance scores to rank candidate codes. In the fine-turning step, we set the learning rate as 2e-5, the batch size as 32, the max sequence length of querie...
{ "annotation": [ "Rewriting_heavy", "Content_substitution" ], "instruction": "", "annotator": "annotator_10" }
{ "annotation": [ "Unusable" ], "instruction": "", "annotator": "annotator_03" }
q4rMz7ZfFG
uyxGiQeMP
1
[ { "text": "We give two cases of the GraphCodeBERT output for this task in Figure 6. In the first example, the model successfully finds Python source code that correctly matches the sementic of the query “Scans through a string for substrings matched some patterns”." }, { "text": "The source code finds all ...
[ { "text": "We use GraphCodeBERT to separately encode query and source code with data flow, and calculate inner product of their representations of the special token [ CLS ] as relevance scores to rank candidate codes." }, { "text": "In the fine-turning step, we set the learning rate as 2e-5, the batch siz...
aomiOZE_m2.rxb2TiQ6bq.04
SRCNN. Tai et al . later introduced memory block in MemNet (Tai et al., 2017b) for deeper network structure. Lim et al . (Lim et al., 2017) simplified the residual block (He et al., 2016) and constructed deeper and wider networks with a large number of parameters. Zhang et al . (Zhang et al., 2018b) proposed an even dee...
SRCNN. Tai et al . later introduced memory block in MemNet (Tai et al., 2017b) for deeper network structure. Lim et al . (Lim et al., 2017) simplified the residual block (He et al., 2016) and constructed deeper and wider networks with a large number of parameters. Zhang et al . (Zhang et al., 2018b) proposed an even dee...
{ "annotation": [ "Concision" ], "instruction": "Be more concise.", "annotator": "annotator_09" }
{ "annotation": [ "Rewriting_light", "Concision" ], "instruction": "Use shorter formulations to make some sentences more concise.", "annotator": "annotator_04" }
aomiOZE_m2
rxb2TiQ6bq
4
[ { "text": "SRCNN." }, { "text": "Tai et al ." }, { "text": "later introduced memory block in MemNet (Tai et al., 2017b) for deeper network structure." }, { "text": "Lim et al ." }, { "text": "(Lim et al., 2017) simplified the residual block (He et al., 2016) and constructed deeper...
[ { "text": "SRCNN." }, { "text": "Tai et al ." }, { "text": "later introduced memory block in MemNet (Tai et al., 2017b) for deeper network structure." }, { "text": "Lim et al ." }, { "text": "(Lim et al., 2017) simplified the residual block (He et al., 2016) and constructed deeper...
MXi6uEx-hp.rdZfFcGyf9.16
In Figure 6, we analyze the agent performance qualitatively. (a) In CREATE, at t = 0 , the selected action spring in AGILE’s GAT attends to various other tools, especially covers all the tools that get activated with spring, such as trampoline. At t = 1 , the trampoline tool is selected with a strong attention on sprin...
In Figure 6, we analyze the agent performance qualitatively. (a) In CREATE, at t = 0 , the selected action spring in AGILE’s GAT attends to various other tools, especially the tools that get activated with spring , such as trampoline . At t = 1 , the trampoline tool is selected with strong attention on spring . This sh...
{ "annotation": [ "Rewriting_medium" ], "instruction": "Make this paragraph better. Rewrite a sentence about the Grid World", "annotator": "annotator_10" }
{ "annotation": [ "Rewriting_medium" ], "instruction": "Improve the clarity in this paragraph.", "annotator": "annotator_03" }
MXi6uEx-hp
rdZfFcGyf9
16
[ { "text": "In Figure 6, we analyze the agent performance qualitatively." }, { "text": "(a) In CREATE, at t = 0 , the selected action spring in AGILE’s GAT attends to various other tools, especially covers all the tools that get activated with spring, such as trampoline." }, { "text": "At t = 1 ,...
[ { "text": "In Figure 6, we analyze the agent performance qualitatively." }, { "text": "(a) In CREATE, at t = 0 , the selected action spring in AGILE’s GAT attends to various other tools, especially the tools that get activated with spring , such as trampoline ." }, { "text": "At t = 1 , the tram...
aFzc_2nNz.WIdHkazOg.00
Further improvement is expected if γ is selected independently for each training sample as shown through the Sample-Dependent Focal Loss (FLSD-53) proposed in [19], which, however, is based on heuristics and, as shown in this paper, does not generalize well. In this paper, we propose a calibration-aware adaptive focal ...
Further improvement is expected if γ is selected independently for each training sample (Sample-Dependent Focal Loss (FLSD-53) [19]). However, FLSD-53 is based on heuristics and does not generalize well. In this paper, we propose a calibration-aware adaptive focal loss called AdaFocal that utilizes the calibration prop...
{ "annotation": [ "Rewriting_medium" ], "instruction": "Make the ideas in these paragraph more modular and easier to understand.", "annotator": "annotator_03" }
{ "annotation": [ "Rewriting_light" ], "instruction": "Concise this academic paragraph a bit and smooth out the writing.", "annotator": "annotator_07" }
aFzc_2nNz
WIdHkazOg
0
[ { "text": "Further improvement is expected if γ is selected independently for each training sample as shown through the Sample-Dependent Focal Loss (FLSD-53) proposed in [19], which, however, is based on heuristics and, as shown in this paper, does not generalize well." }, { "text": "In this paper, we p...
[ { "text": "Further improvement is expected if γ is selected independently for each training sample (Sample-Dependent Focal Loss (FLSD-53) [19]). However, FLSD-53 is based on heuristics and does not generalize well." }, { "text": "In this paper, we propose a calibration-aware adaptive focal loss called A...
aomiOZE_m2.rxb2TiQ6bq.17
L 1 -norm aspruning criterion, the same as (Li et al., 2017). However, our results are significantly better than theirs. The reason is that they do not impose any regularization on the pruned structure, thus the kept feature map channels are misaligned in residual blocks after pruning. In contrast, our method does no...
L 1 -norm as the scoring criterion to select unimportant filters, same as (Li et al., 2017). Nevertheless, our method delivers significantly better results than theirs. The primary reason is that they do not impose any regularization on the pruned structure; the remaining feature maps are thus mismatched in residual bloc...
{ "annotation": [ "Development" ], "instruction": "", "annotator": "annotator_01" }
{ "annotation": [ "Development", "Rewriting_light" ], "instruction": "", "annotator": "annotator_06" }
aomiOZE_m2
rxb2TiQ6bq
17
[ { "text": "L 1 -norm aspruning criterion, the same as (Li et al., 2017)." }, { "text": "However, our results are significantly better than theirs." }, { "text": "The reason is that they do not impose any regularization on the pruned structure, thus the kept feature map channels are misaligned ...
[ { "text": "L 1 -norm as the scoring criterion to select unimportant filters, same as (Li et al., 2017)." }, { "text": "Nevertheless, our method delivers significantly better results than theirs." }, { "text": "The primary reason is that they do not impose any regularization on the pruned structure...
ryaiZC9KQ.ryt3YptA7.00
DenseNet-169 in terms of feature sensitivity, error distribution and interactions between image parts, suggesting that modern DNNs approximately follow a similar bag-of-feature strategy.
ResNet-152 or DenseNet-169 in terms of feature sensitivity, error distribution and interactions between image parts. This suggests that the improvements of DNNs over previous bag-of-feature classifiers in the last few years is mostly achieved by better fine-tuning rather than by qualitatively different decision strategie...
{ "annotation": [ "Content_addition" ], "instruction": "", "annotator": "annotator_07" }
null
ryaiZC9KQ
ryt3YptA7
0
[ { "text": " DenseNet-169 in terms of feature sensitivity, error distribution and interactions between image parts, suggesting that modern DNNs approximately follow a similar bag-of-feature strategy." } ]
[ { "text": "ResNet-152 or DenseNet-169 in terms of feature sensitivity, error distribution and interactions between image parts. This suggests that the improvements of DNNs over previous bag-of-feature classifiers in the last few years is mostly achieved by better fine-tuning rather than by qualitatively different...
eYzycFMXwr.8-KFmZiCM.01
Normally, C a is too large because the size of batch is too large. In this case, we can reduce the size ofbatch and the number of the model partitions, and replicate each stage on the newly idle accelerator devices for data parallelism to increase the total batchsize to the original size, as shown in Figure 6. This ...
Normally, too large C a is caused by too large microbatch. we can proportionally reduce the depth of the pipeline while reducing the size of the microbatch, and then we proportionally increase the width of data parallelism to maintain the same global batch size, as shown in Figure 6. As a result, the size of the micro-...
{ "annotation": [ "Development", "Rewriting_heavy" ], "instruction": "", "annotator": "annotator_03" }
{ "annotation": [ "Rewriting_heavy", "Content_substitution" ], "instruction": "", "annotator": "annotator_07" }
eYzycFMXwr
8-KFmZiCM
1
[ { "text": "Normally, C a is too large because the size of batch is too large." }, { "text": "In this case, we can reduce the size ofbatch and the number of the model partitions, and replicate each stage on the newly idle accelerator devices for data parallelism to increase the total batchsize to the ...
[ { "text": "Normally, too large C a is caused by too large microbatch." }, { "text": "we can proportionally reduce the depth of the pipeline while reducing the size of the microbatch, and then we proportionally increase the width of data parallelism to maintain the same global batch size, as shown in Fig...
NvI7ejSHFe.ppieLd2M4a.00
PAU (Molina et al., 2019) leverages Pad´e approximation to form its search space. Motivated by the connection between Swish and ReLU, ACON (Ma et al., 2021) is proposed as an smooth approximator to the general Maxout family activation functions (Goodfellow et al., 2013). Our work proposes to learn adaptive activation ...
PAU (Molina et al., 2019) leverages Pad´e approximation to form its search space. Motivated by the connection between Swish and ReLU, ACON (Ma et al., 2021) is proposed as a smooth approximator to the general Maxout family activation functions (Goodfellow et al., 2013). Our work proposes to learn an adaptive activation...
{ "annotation": [ "Rewriting_heavy", "Development" ], "instruction": "", "annotator": "annotator_07" }
null
NvI7ejSHFe
ppieLd2M4a
0
[ { "text": "PAU (Molina et al., 2019) leverages Pad´e approximation to form its search space." }, { "text": "Motivated by the connection between Swish and ReLU, ACON (Ma et al., 2021) is proposed as an smooth approximator to the general Maxout family activation functions (Goodfellow et al., 2013)." }, ...
[ { "text": "PAU (Molina et al., 2019) leverages Pad´e approximation to form its search space." }, { "text": "Motivated by the connection between Swish and ReLU, ACON (Ma et al., 2021) is proposed as a smooth approximator to the general Maxout family activation functions (Goodfellow et al., 2013)." }, ...
PDvmJtmgQb.gGrpxbc7UI.01
In-distribution vs. Out-of-distribution Public Data: Prior works have considered both settings where the public data set D pub comes from the same distribution as the private data D priv (a.k.a. indistribution ) (Bassily et al., 2018a; Zhou et al., 2020; Kairouz et al., 2021a; Asi et al., 2021), and where the distribut...
In-distribution vs. Out-of-distribution Public Data: Prior works have considered both settings where the public data set D pub comes from the same distribution as the private data D priv (a.k.a. in-distribution ) [4, 7, 23, 39], and where the distributions are different (a.k.a. out-of-distribution ) [1, 26, 31, 32]. In...
{ "annotation": [ "Unusable" ], "instruction": "", "annotator": "annotator_03" }
{ "annotation": [ "Unusable" ], "instruction": "Convert in-text citations to numbers.", "annotator": "annotator_09" }
PDvmJtmgQb
gGrpxbc7UI
1
[ { "text": "In-distribution vs. Out-of-distribution Public Data: Prior works have considered both settings where the public data set D pub comes from the same distribution as the private data D priv (a.k.a. indistribution ) (Bassily et al., 2018a;" }, { "text": "Zhou et al., 2020; Kairouz et al., 2021a; ...
[ { "text": "In-distribution vs. Out-of-distribution Public Data: Prior works have considered both settings where the public data set D pub comes from the same distribution as the private data D priv (a.k.a." }, { "text": "in-distribution ) [4, 7, 23, 39], and where the distributions are different (a.k.a....
lLwt-9RJ2tm.XJsauLjck.01
Given such a sparsifier, by setting S = { u } and T = { v } , one can recover whether or not edge ( u, v ) is present in G for any u, v ∈ V . discount in its parent’s contribution to the cost, which after cascading gives a third view of Eq.
Given such a sparsifier, by setting S = { u } and T = { v } , one can recover whether or not edge ( u, v ) is present in G for any u, v ∈ V . is the second observation: the negative term w G ( S ∪ T, S ∪ T ) that internal node S contributes to the cost also appears as a positive term in its parent’s contribution to the ...
{ "annotation": [ "Unusable" ], "instruction": "", "annotator": "annotator_07" }
null
lLwt-9RJ2tm
XJsauLjck
1
[ { "text": "Given such a sparsifier, by setting S = { u } and T = { v } , one can recover whether or not edge ( u, v ) is present in G for any u, v ∈ V ." }, { "text": "" }, { "text": " discount in its parent’s contribution to the cost, which after cascading gives a third view of Eq." } ]
[ { "text": "Given such a sparsifier, by setting S = { u } and T = { v } , one can recover whether or not edge ( u, v ) is present in G for any u, v ∈ V ." }, { "text": "is the second observation: the negative term w G ( S ∪ T, S ∪ T ) that internal node S contributes to the cost also appears as a positive...
isfcBsgB-H.SBe0hOLmg9.00
Line-Entry System) or GNN-based (Graph Neural Networks) MRL methods either take SMILES strings as input that have difficulty in encoding molecule structure information, or over-emphasize the importance of GNN architectures but neglect their generalization ability. Here we propose using chemical reactions to assist learn...
Line-Entry System) or GNN-based (Graph Neural Networks) MRL methods either take SMILES strings as input that have difficulty in encoding molecule structure information, or over-emphasize the importance of GNN architectures but neglect their generalization ability. Here we propose using chemical reactions to assist learn...
{ "annotation": [ "Concision" ], "instruction": "Remove unnecessary details on specific numerical performance of the model. Link to https://github.com/hwwang55/MolR instead of supplementary material.", "annotator": "annotator_03" }
{ "annotation": [ "Concision", "Content_substitution" ], "instruction": "Make the second last sentence from the end of this paragraph more concise by removing too precise details. For the last sentence, the code is now provided on github.", "annotator": "annotator_07" }
isfcBsgB-H
SBe0hOLmg9
0
[ { "text": "Line-Entry System)" }, { "text": "or GNN-based (Graph Neural Networks)" }, { "text": "MRL methods either take SMILES strings as input that have difficulty in encoding molecule structure information, or over-emphasize the importance of GNN architectures but neglect their generalization ...
[ { "text": "Line-Entry System)" }, { "text": "or GNN-based (Graph Neural Networks)" }, { "text": "MRL methods either take SMILES strings as input that have difficulty in encoding molecule structure information, or over-emphasize the importance of GNN architectures but neglect their generalization ...
ZpvHK3zB43.QhVM4p3DKI.00
We have proposed FROB which uses the generated support boundary of the normal data distribution for few-shot OoD detection. FROB tackles the few-shot problem using classification with OoD detection. In real-world applications, in the wild, it is a challenge to robustly perform classification and few-shot OoD detection wi...
We have proposed FROB which uses the generated support boundary of the normal data distribution for few-shot OoD detection. FROB tackles the few-shot problem using classification with OoD detection. The contribution of FROB is the combination of the generated boundary in a self-supervised learning manner and the imposit...
{ "annotation": [ "Content_substitution" ], "instruction": "", "annotator": "annotator_02" }
{ "annotation": [ "Content_substitution" ], "instruction": "", "annotator": "annotator_07" }
ZpvHK3zB43
QhVM4p3DKI
0
[ { "text": "We have proposed FROB which uses the generated support boundary of the normal data distribution for few-shot OoD detection." }, { "text": "FROB tackles the few-shot problem using classification with OoD detection." }, { "text": "In real-world applications, in the wild, it is a challeng...
[ { "text": "We have proposed FROB which uses the generated support boundary of the normal data distribution for few-shot OoD detection." }, { "text": "FROB tackles the few-shot problem using classification with OoD detection." }, { "text": "" }, { "text": "The contribution of FROB is the c...
lLwt-9RJ2tm.XJsauLjck.02
Several other variations of this basic setup have been considered. For example, [12] have considered this problem in the presence of structural constraints. [11, 31, 34] considered a setting where vertices are embedded in a metric space and the similarity/dissimilarity between two vertices is given by their distances. ...
Several other variations of this basic setup have been considered. For example, [12] have considered this problem in the presence of structural constraints. [11, 34, 37] considered a setting where vertices are embedded in a metric space and the similarity/dissimilarity between two vertices is given by their distances. ...
{ "annotation": [ "Content_addition" ], "instruction": "", "annotator": "annotator_07" }
null
lLwt-9RJ2tm
XJsauLjck
2
[ { "text": "Several other variations of this basic setup have been considered." }, { "text": "For example, [12] have considered this problem in the presence of structural constraints." }, { "text": "[11, 31, 34] considered a setting where vertices are embedded in a metric space and the similarity...
[ { "text": "Several other variations of this basic setup have been considered." }, { "text": "For example, [12] have considered this problem in the presence of structural constraints." }, { "text": "[11, 34, 37] considered a setting where vertices are embedded in a metric space and the similarity...
zzdwUcxTjWY.rVxmgW1FRK.00
Modern deep neural networks have achieved unprecedented success in known contexts for which they are trained, yet they do not necessarily know what they don’t know (Nguyen et al., 2015). In particular, neural networks have been shown to produce high posterior probability for out-of-distribution (OOD) test inputs, which...
Modern deep neural networks have achieved unprecedented success in known contexts for which they are trained, yet they often struggle to handle the unknowns. In particular, neural networks have been shown to produce high posterior probability for out-of-distribution (OOD) test inputs (Nguyen et al., 2015), which arise ...
{ "annotation": [ "Rewriting_light" ], "instruction": "Improve the English of this paragraph.", "annotator": "annotator_02" }
{ "annotation": [ "Rewriting_light" ], "instruction": "Make this paragraph more formal and fitting to academic style.", "annotator": "annotator_07" }
zzdwUcxTjWY
rVxmgW1FRK
0
[ { "text": "Modern deep neural networks have achieved unprecedented success in known contexts for which they are trained, yet they do not necessarily know what they don’t know (Nguyen et al., 2015). In particular, neural networks have been shown to produce high posterior probability for out-of-distribution (OOD)...
[ { "text": "Modern deep neural networks have achieved unprecedented success in known contexts for which they are trained, yet they often struggle to handle the unknowns. In particular, neural networks have been shown to produce high posterior probability for out-of-distribution (OOD) test inputs (Nguyen et al., ...
jP_amc4U0A.Y2t7AFVo5Z.00
We proposed a new inference lgorithm for distributions parametrized by normalizing flow models. The need for approximate inference is motivated by our theoretical hardness result for exact inference,which is surprising given that it applies to invertible models. We also presented a detailed empirical evaluation of our m...
We proposed a new inference algorithm for distributions parametrized by a flow. The need for approximate inference is motivated by the hardness of exact inference. We also presented a detailed empirical evaluation of our method with both quantitative and qualitative results on a wide range of tasks and datasets. Overall...
{ "annotation": [ "Concision" ], "instruction": "Remove details which are unnecessary for the overall paragraph. Fix any spelling mistakes.", "annotator": "annotator_03" }
{ "annotation": [ "Concision" ], "instruction": "Correct and concise the two first sentences.", "annotator": "annotator_07" }
jP_amc4U0A
Y2t7AFVo5Z
0
[ { "text": "We proposed a new inference lgorithm for distributions parametrized by normalizing flow models." }, { "text": "The need for approximate inference is motivated by our theoretical hardness result for exact inference,which is surprising given that it applies to invertible models." }, { "t...
[ { "text": "We proposed a new inference algorithm for distributions parametrized by a flow." }, { "text": "The need for approximate inference is motivated by the hardness of exact inference." }, { "text": "We also presented a detailed empirical evaluation of our method with both quantitative and q...
fDUdAYCQqZy.0cNiGAHFml.02
We use the following toy example to further illustrate the trade-offs achieved by EVL. Consider a random generated MDP. When the operator can be applied exactly, the Bellman optimality operator is sufficient to learn the optimal value V ∗ . However, applying operators with an offline dataset raises a noise on the actual ...
We use the following toy example to further illustrate the trade-offs achieved by EVL. Consider a random generated MDP. When the operator can be applied exactly, the Bellman optimality operator is sufficient to learn the optimal value V ∗ . However, applying operators with an offline dataset raises a noise on the actual ...
{ "annotation": [ "Content_addition", "Development" ], "instruction": "", "annotator": "annotator_07" }
null
fDUdAYCQqZy
0cNiGAHFml
2
[ { "text": "We use the following toy example to further illustrate the trade-offs achieved by EVL." }, { "text": "Consider a random generated MDP." }, { "text": "When the operator can be applied exactly, the Bellman optimality operator is sufficient to learn the optimal value V ∗ ." }, { "...
[ { "text": "We use the following toy example to further illustrate the trade-offs achieved by EVL." }, { "text": "Consider a random generated MDP." }, { "text": "When the operator can be applied exactly, the Bellman optimality operator is sufficient to learn the optimal value V ∗ ." }, { "...
vokZIVWUXN.zMdXRtaisu.01
Xie et al., 2020; Sohn et al., 2020) (also known as input consistency regularization) or fit the unlabeled data on its predictions generated by a previously learned model (Lee, 2013; Chen et al., 2020b). It is interesting that UDA (Xie et al., 2020) reveals the crucial role of noise produced by advanced data augmentatio...
Xie et al., 2020; Sohn et al., 2020) (a.k.a. input consistency regularization) or fit the unlabeled data on its predictions generated by a previously learned model (Lee, 2013; Chen et al., 2020b). Further, Co-Training (Blum & Mitchell, 1998b), Deep Co-Training Qiao et al. and Tri-Training (Zhou & Li, 2005a) improve data...
{ "annotation": [ "Content_addition" ], "instruction": "", "annotator": "annotator_02" }
{ "annotation": [ "Content_addition" ], "instruction": "", "annotator": "annotator_07" }
vokZIVWUXN
zMdXRtaisu
1
[ { "text": "Xie et al., 2020; Sohn et al., 2020) (also known as input consistency regularization) or fit the unlabeled data on its predictions generated by a previously learned model (Lee, 2013; Chen et al., 2020b)." }, { "text": "It is interesting that UDA (Xie et al., 2020) reveals the crucial role of n...
[ { "text": "Xie et al., 2020; Sohn et al., 2020) (a.k.a. input consistency regularization) or fit the unlabeled data on its predictions generated by a previously learned model (Lee, 2013; Chen et al., 2020b)." }, { "text": "Further, Co-Training (Blum & Mitchell, 1998b), Deep Co-Training Qiao et al. and Tr...
tOMAf1V5dI.SNeLZ71pb5.00
CNN-based Architectures. Since AlexNet (Krizhevsky et al., 2012) won the ImageNet competition in 2012, the CNN-based architectures have gradually been utilized to automatically extract image features instead of hand-crafted features. Subsequently, the VGG network (Simonyan & Zisserman, 2015) is proposed, which purely u...
CNN-based Architectures. Since AlexNet (Krizhevsky et al., 2012) won the ImageNet competition in 2012, the CNN-based architectures have gradually been utilized to automatically extract image features instead of hand-crafted features. Subsequently, the VGG network (Simonyan & Zisserman, 2015) is proposed, which purely u...
{ "annotation": [ "Concision" ], "instruction": "Make this paragraph more concise.", "annotator": "annotator_02" }
{ "annotation": [ "Content_deletion", "Concision" ], "instruction": "Remove the sentence about the residual module. Make the paragraph more concise.", "annotator": "annotator_07" }
tOMAf1V5dI
SNeLZ71pb5
0
[ { "text": "CNN-based Architectures." }, { "text": "Since AlexNet (Krizhevsky et al., 2012) won the ImageNet competition in 2012, the CNN-based architectures have gradually been utilized to automatically extract image features instead of hand-crafted features." }, { "text": "Subsequently, the VGG...
[ { "text": "CNN-based Architectures." }, { "text": "Since AlexNet (Krizhevsky et al., 2012) won the ImageNet competition in 2012, the CNN-based architectures have gradually been utilized to automatically extract image features instead of hand-crafted features." }, { "text": "Subsequently, the VGG...
CVRUl83zah.I75TtW0V7.13
In this section, we evaluate three different aspects of our contributions: the usefulness of exclusive multiset-equivariance (Subsection 4.1), the differences between our implicit and automatic differentiation (Subsection 4.2), and the applicability of iDSPN to a larger-scale dataset (Subsection 4.3). We provide deta...
In this section, we evaluate three different aspects of our contributions: the usefulness of exclusive multiset-equivariance (Section 4.1), the differences between automatic and our approximate implicit differentiation (Section 4.2), and the applicability of iDSPN to a larger-scale dataset (Section 4.3). We provide det...
{ "annotation": [ "Rewriting_light" ], "instruction": "Be clear about references.", "annotator": "annotator_09" }
{ "annotation": [ "Rewriting_medium", "Rewriting_light" ], "instruction": "Lightly clarify the text. Add a reference to appendix at the end.", "annotator": "annotator_07" }
CVRUl83zah
I75TtW0V7
13
[ { "text": "In this section, we evaluate three different aspects of our contributions: the usefulness of exclusive multiset-equivariance (Subsection 4.1), the differences between our implicit and automatic differentiation (Subsection 4.2), and the applicability of iDSPN to a larger-scale dataset (Subsection 4....
[ { "text": "In this section, we evaluate three different aspects of our contributions: the usefulness of exclusive multiset-equivariance (Section 4.1), the differences between automatic and our approximate implicit differentiation (Section 4.2), and the applicability of iDSPN to a larger-scale dataset (Section 4...
CzTbgFKuy.hfDu8DsDq6.02
Our main example willinstead be online job scheduling via minimizing the fractional makespan, following Lattanzi et al. They consider the problem of assigning each in a sequence of variablesized jobs to one of m machines [30, Section 3]. The authors provide an algorithm that uses predictions ˆw ∈ R m> 0 of “good" machi...
Our main example will be online job scheduling via minimizing the fractional makespan [30], where we must assign each in a sequence of variable-sized jobs to one of m machines. Lattanzi et al. [30] provide an algorithm that uses predictions ˆw ∈ R m> 0 of “good” machine weights w ∈ R m> 0 to assign jobs based on how we...
{ "annotation": [ "Concision", "Development" ], "instruction": "", "annotator": "annotator_07" }
null
CzTbgFKuy
hfDu8DsDq6
2
[ { "text": "Our main example willinstead be online job scheduling via minimizing the fractional makespan, following Lattanzi et al. They consider the problem of assigning each in a sequence of variablesized jobs to one of m machines [30, Section 3]." }, { "text": "The authors provide an algorithm that us...
[ { "text": "Our main example will be online job scheduling via minimizing the fractional makespan [30], where we must assign each in a sequence of variable-sized jobs to one of m machines. Lattanzi et al." }, { "text": "[30] provide an algorithm that uses predictions ˆw ∈ R m> 0 of “good” machine weights...
KUhhOtV2Yw.nPdxbHsbU.00
Generally, those statistical notions can be expressed in terms of different (conditional) independence statements between the involved random variables (Barocas et al., 2019): ¯ y ⊥ s (eq. 5), ¯ y ⊥ s | y (eq. 6–7), and y ⊥ s | ¯ y (eq. 8–9). If our training set has no positive outcome for the demographic s = 0 , i.e. ...
Generally, those statistical notions can be expressed in terms of different (conditional) independence statements between the involved random variables (Barocas et al., 2019): ¯ y ⊥ s (equation 5), ¯ y ⊥ s | y (equation 6 – equation 7), and y ⊥ s | ¯ y (equation 8 – equation 9). If our training set has no positive outc...
{ "annotation": [ "Rewriting_light" ], "instruction": "Prefer extended forms over abbreviations of words.", "annotator": "annotator_04" }
{ "annotation": [ "Rewriting_light" ], "instruction": "Write the abbreviation in their full form.", "annotator": "annotator_07" }
KUhhOtV2Yw
nPdxbHsbU
0
[ { "text": "Generally, those statistical notions can be expressed in terms of different (conditional) independence statements between the involved random variables (Barocas et al., 2019): ¯ y ⊥ s (eq. 5), ¯ y ⊥ s | y (eq." }, { "text": "6–7), and y ⊥ s | ¯ y (eq. 8–9)." }, { "text": "If our train...
[ { "text": "Generally, those statistical notions can be expressed in terms of different (conditional) independence statements between the involved random variables (Barocas et al., 2019): ¯ y ⊥ s (equation 5), ¯ y ⊥ s | y (equation 6 – equation 7), and y ⊥ s |" }, { "text": "¯ y (equation 8 – equation 9)...
slsGUcTSZI.DH75WqDfD7.00
We highlight an adaptation of BN named as static Batch Normaliztion (sBN) for optimizing privacy constrained heterogeneous models. During the training phase, sBN does not track running estimates and simply normalize batch data. We do not track the local running statistics as the size of local models may also vary dynam...
We highlight an adaptation of BN named as static Batch Normaliztion (sBN) for optimizing privacy constrained heterogeneous models. During the training phase, sBN does not track running estimates and simply normalize batch data. We do not track the local running statistics as the size of local models may also vary dynam...
{ "annotation": [ "Content_substitution" ], "instruction": "", "annotator": "annotator_07" }
null
slsGUcTSZI
DH75WqDfD7
0
[ { "text": "We highlight an adaptation of BN named as static Batch Normaliztion (sBN) for optimizing privacy constrained heterogeneous models." }, { "text": "During the training phase, sBN does not track running estimates and simply normalize batch data." }, { "text": "We do not track the local r...
[ { "text": "We highlight an adaptation of BN named as static Batch Normaliztion (sBN) for optimizing privacy constrained heterogeneous models." }, { "text": "During the training phase, sBN does not track running estimates and simply normalize batch data." }, { "text": "We do not track the local r...
8_oadXCaRE.Kt4-LpYuM.01
Importantly, it is equipped with a simple normalization of the layer’s activations, and an optional temperature-scaling mechanism (Hinton et al., 2015), producing a soft WTA instead of selecting a single "hard" winner neuron. This allows us to prove formally that a SoftHebb layer is a generative mixture model that obje...
Importantly, it is equipped with a simple normalization of the layer’s activations, and an optional temperature-scaling mechanism (Hinton et al., 2015), producing a soft WTA instead of selecting a single "hard" winner neuron. This allows us to prove formally that a SoftHebb layer is a generative mixture model that obje...
{ "annotation": [ "Concision" ], "instruction": "Make this paragraph shorter by removing details.", "annotator": "annotator_04" }
{ "annotation": [ "Content_deletion", "Concision" ], "instruction": "Summarize the middle of the paragraph to make it shorter and more concise. Remove unnecessary details.", "annotator": "annotator_07" }
8_oadXCaRE
Kt4-LpYuM
1
[ { "text": "Importantly, it is equipped with a simple normalization of the layer’s activations, and an optional temperature-scaling mechanism (Hinton et al., 2015), producing a soft WTA instead of selecting a single \"hard\" winner neuron." }, { "text": "This allows us to prove formally that a SoftHebb l...
[ { "text": "Importantly, it is equipped with a simple normalization of the layer’s activations, and an optional temperature-scaling mechanism (Hinton et al., 2015), producing a soft WTA instead of selecting a single \"hard\" winner neuron." }, { "text": "This allows us to prove formally that a SoftHebb l...
9wfZbn73om.FhHH15YtKt.02
Early works understand the InfoNCE loss based on maximizing the mutual information (MI) between positive samples (Oord et al., 2018; Bachman et al., 2019; Hjelm et al., 2018; Tian et al., 2019; 2020). However, a rigorous relationship between mutual information and the downstream classification error has not been establ...
Early works understand the InfoNCE loss based on maximizing the mutual information (MI) between positive samples (Oord et al., 2018; Bachman et al., 2019; Hjelm et al., 2018; Tian et al., 2019; 2020; Tschannen et al., 2019). However, a rigorous relationship between mutual information and downstream performance has not ...
{ "annotation": [ "Content_deletion", "Development" ], "instruction": "", "annotator": "annotator_07" }
null
9wfZbn73om
FhHH15YtKt
2
[ { "text": "Early works understand the InfoNCE loss based on maximizing the mutual information (MI) between positive samples (Oord et al., 2018; Bachman et al., 2019;" }, { "text": "Hjelm et al., 2018; Tian et al., 2019; 2020)." }, { "text": "However, a rigorous relationship between mutual inform...
[ { "text": "Early works understand the InfoNCE loss based on maximizing the mutual information (MI) between positive samples (Oord et al., 2018; Bachman et al., 2019;" }, { "text": "Hjelm et al., 2018; Tian et al., 2019; 2020; Tschannen et al., 2019)." }, { "text": "However, a rigorous relationsh...
CVRUl83zah.I75TtW0V7.02
Set prediction modelsmake use of set-to-set functions that are permutation-equivariant (Lee et al., 2019; Locatello et al., 2020; Carion et al., 2020; Kosiorek et al., 2020). This is desirable when processing sets because it prevents a function from relying on the arbitrary order of the set in its matrix representation...
Recent set prediction models (Lee et al., 2019; Locatello et al., 2020; Carion et al., 2020; Kosiorek et al., 2020) make use of set-to-set (permutation-equivariant list-to-list) functions to refine an initial set Y 0 , which is usually a randomly generated or learnable matrix. Permutation-equivariance is desirable when ...
{ "annotation": [ "Development" ], "instruction": "", "annotator": "annotator_07" }
null
CVRUl83zah
I75TtW0V7
2
[ { "text": "Set prediction modelsmake use of set-to-set functions that are permutation-equivariant (Lee et al., 2019; Locatello et al., 2020; Carion et al., 2020; Kosiorek et al., 2020)." }, { "text": "This is desirable when processing sets because it prevents a function from relying on the arbitrary ord...
[ { "text": "Recent set prediction models (Lee et al., 2019; Locatello et al., 2020; Carion et al., 2020; Kosiorek et al., 2020) make use of set-to-set (permutation-equivariant list-to-list) functions to refine an initial set Y 0 , which is usually a randomly generated or learnable matrix." }, { "text": "P...
hegI87bI5S.fL6Q48sfx8.04
Hollinworth et al. found that senior citizens generally lose the cursor due to poor eyesight and sustained concentration [15]. Therefore, the implemented a Field Mouse (a mouse with a touch sensor attached) and proposed a technique wherein the cursor moves to the center of the screen when the user hold the mouse. This ...
Hollinworth et al. found that senior citizens lose the cursor because of poor eyesight and sustained concentration, and therefore, they implemented a Field Mouse (a mouse with a touch sensor attached) and proposed a technique wherein the cursor moves to the center of the screen when the user holds the mouse [15]. T...
{ "annotation": [ "Rewriting_medium", "Development" ], "instruction": "", "annotator": "annotator_07" }
null
hegI87bI5S
fL6Q48sfx8
4
[ { "text": "Hollinworth et al." }, { "text": "found that senior citizens generally lose the cursor due to poor eyesight and sustained concentration [15]. Therefore, the implemented a Field Mouse (a mouse with a touch sensor attached) and proposed a technique wherein the cursor moves to the center of the ...
[ { "text": "Hollinworth et al." }, { "text": "found that senior citizens lose the cursor because of poor eyesight and sustained concentration, and therefore, they implemented a Field Mouse (a mouse with a touch sensor attached) and proposed a technique wherein the cursor moves to the center of the sc...
7_CwM-IzWd.zcm6f5HDI.11
To warm-up the model, we perform regular steps in the first epoch. We switch from regular steps to re-balancing steps if | d speed ( t ) | > α , where α is the imbalance parameter . The training takes Q re-balancing steps before returning to regular mode. We refer to Q as the re-balancing window size .
To warm-up the model, we perform only regular steps in the first training epoch. Then we switch from regular steps to re-balancing steps if | d speed ( t ) | > α , where α is a hyperparameter, referred to as the imbalance tolerance parameter . The training takes Q re-balancing steps before returning to regular mode. We ...
{ "annotation": [ "Development" ], "instruction": "Change the descriptions so that the hyperparameters can be easily referred to later", "annotator": "annotator_05" }
{ "annotation": [ "Development" ], "instruction": "", "annotator": "annotator_07" }
7_CwM-IzWd
zcm6f5HDI
11
[ { "text": "To warm-up the model, we perform regular steps in the first epoch." }, { "text": "We switch from regular steps to re-balancing steps if | d speed ( t ) | > α , where α is the imbalance parameter ." }, { "text": "The training takes Q re-balancing steps before returning to regular mo...
[ { "text": "To warm-up the model, we perform only regular steps in the first training epoch." }, { "text": "Then we switch from regular steps to re-balancing steps if | d speed ( t ) | > α , where α is a hyperparameter, referred to as the imbalance tolerance parameter ." }, { "text": "The training...
r1DvZQwjB.Hk8CzQDiB.00
Unlike other numerical methods such as finite differences and finite elements, the derivatives of the desired function can be analytically calculated to any order. This framework therefore, enables the solution of high order non-linear PDEs. The proposed algorithm is a unified formulation of both forward and inverse probl...
Unlike other numerical methods such as finite differences and finite elements, the derivatives of the desired function can be analytically calculated to any order. This framework therefore, enables the solution of high order non-linear PDEs. The proposed algorithm is a unified formulation of both forward and inverse probl...
{ "annotation": [ "Development" ], "instruction": "", "annotator": "annotator_07" }
null
r1DvZQwjB
Hk8CzQDiB
0
[ { "text": "Unlike other numerical methods such as finite differences and finite elements, the derivatives of the desired function can be analytically calculated to any order." }, { "text": "This framework therefore, enables the solution of high order non-linear PDEs." }, { "text": "The proposed al...
[ { "text": "Unlike other numerical methods such as finite differences and finite elements, the derivatives of the desired function can be analytically calculated to any order." }, { "text": "This framework therefore, enables the solution of high order non-linear PDEs." }, { "text": "The proposed al...
u9NaukzyJ-.hh0KECXQLv.17
The design of the calendar should avoid design elements that introduce clutter to the calendar ( DG3 ) . Design elements such as colors, sliders, labels, and markers should be carefully employed to avoid overwhelming the calendar. One of the reasons why Design B was preferred is because it is less cluttered: medicati...
The design of the calendar should avoid design elements that introduce clutter to the calendar ( DG3 ) . One of the reasons why Design B was preferred is because it is less cluttered: medication entries can be rendered effectively using position, shape, and size. The size of a medication entry should be as small as p...
{ "annotation": [ "Concision", "Content_deletion" ], "instruction": "Remove unnecessary details and explanations.", "annotator": "annotator_03" }
{ "annotation": [ "Content_deletion", "Development" ], "instruction": "", "annotator": "annotator_09" }
u9NaukzyJ-
hh0KECXQLv
17
[ { "text": "The design of the calendar should avoid design elements that introduce clutter to the calendar ( DG3 ) ." }, { "text": "Design elements such as colors, sliders, labels, and markers should be carefully employed to avoid overwhelming the calendar." }, { "text": "One of the reasons why...
[ { "text": "The design of the calendar should avoid design elements that introduce clutter to the calendar ( DG3 ) ." }, { "text": "" }, { "text": "One of the reasons why Design B was preferred is because it is less cluttered: medication entries can be rendered effectively using position, shape...
SkMm_pDYm.rkQWRbxAQ.00
S t +1 = f st ( S t , A t , U st ) . This is always possible using auto-regressive uniformization. The DAG G of the resulting SCM is shown in fig. 1. This procedure is closely related to the ‘reparameterization trick’ for models with lotion-scale distributions (Kingma & Welling, 2013; Rezende et al., 2014).
S t +1 = f st ( S t , A t , U st ) . This is always possible using auto-regressive uniformization, see Lemma 2 in the appendix. The DAG G of the resulting SCM is shown in fig. 1. This procedure is closely related to the ‘reparameterization trick’ for models with location-scale distributions (Kingma & Welling, 2013; Reze...
{ "annotation": [ "Development", "Rewriting_light" ], "instruction": "", "annotator": "annotator_09" }
{ "annotation": [ "Development" ], "instruction": "", "annotator": "annotator_07" }
SkMm_pDYm
rkQWRbxAQ
0
[ { "text": "S t +1 =" }, { "text": "f st ( S t , A t , U st ) ." }, { "text": "This is always possible using auto-regressive uniformization." }, { "text": "The DAG G of the resulting SCM is shown in fig. 1." }, { "text": "This procedure is closely related to the ‘reparameterization...
[ { "text": "S t +1 =" }, { "text": "f st ( S t , A t , U st ) ." }, { "text": "This is always possible using auto-regressive uniformization, see Lemma 2 in the appendix." }, { "text": "The DAG G of the resulting SCM is shown in fig. 1." }, { "text": "This procedure is closely relat...
nCTSF9BQJ.DGhBYSP_sR.12
The prior rotamers ˜ χ j are inaccurate or unknown in many cases. For example, if we mutate some amino acids in the protein complex, the rotamers of the mutated amino acids are unknown, and the rotamers of amino acids nearby the mutated ones are inaccurate because they are affected by the mutation. The probability den...
The prior rotamers ˜ χ j are often inaccurate or unknown. For example, if we mutate some residues, the rotamers of the mutated residues are unknown, and the rotamers of residues nearby the mutated ones are inaccurate because they are affected by the mutation. The probability density is defined over the d -dimensional t...
{ "annotation": [ "Rewriting_medium" ], "instruction": "Replace every apparition of \"amino acids\" or \"amino acids in the protein complex\" by \"residues\"", "annotator": "annotator_01" }
{ "annotation": [ "Concision", "Rewriting_medium" ], "instruction": "Replace occurrences of amino acids by residues. Make this paragraph a lit bit more concise.", "annotator": "annotator_07" }
nCTSF9BQJ
DGhBYSP_sR
12
[ { "text": "The prior rotamers ˜ χ" }, { "text": "j are inaccurate or unknown in many cases." }, { "text": "For example, if we mutate some amino acids in the protein complex, the rotamers of the mutated amino acids are unknown, and the rotamers of amino acids nearby the mutated ones are inaccura...
[ { "text": "The prior rotamers ˜ χ" }, { "text": "j are often inaccurate or unknown." }, { "text": "For example, if we mutate some residues, the rotamers of the mutated residues are unknown, and the rotamers of residues nearby the mutated ones are inaccurate because they are affected by the mutat...
skR2qMboVK.lmwxQfhmln.01
Margin-Density (Nguyen & Smeulders, 2004). Scores candidates by the product of their margin and their density estimates, so as to increase diversity. The density is computed by first clustering the penultimate layer activations of all |Z| candidate points via K -means. Then, the density score of candidate x i is comput...
Margin-Density (Nguyen & Smeulders, 2004). Scores candidates by the product of their margin and their density estimates, so as to increase diversity. The density is computed by first clustering the penultimate layer activations of the current model on all |Z| candidate points via K -means. Then, the density score of can...
{ "annotation": [ "Content_addition" ], "instruction": "", "annotator": "annotator_02" }
{ "annotation": [ "Unusable" ], "instruction": "", "annotator": "annotator_07" }
skR2qMboVK
lmwxQfhmln
1
[ { "text": "Margin-Density (Nguyen & Smeulders, 2004)." }, { "text": "Scores candidates by the product of their margin and their density estimates, so as to increase diversity." }, { "text": "The density is computed by first clustering the penultimate layer activations of all |Z| candidate points...
[ { "text": "Margin-Density (Nguyen & Smeulders, 2004)." }, { "text": "Scores candidates by the product of their margin and their density estimates, so as to increase diversity." }, { "text": "The density is computed by first clustering the penultimate layer activations of the current model on all ...
IoTyuVEanE.Et-c0vQfeb.04
Parsing signal from noise is critical to learning from weak and rule-based supervision. Accordingly, we compare ReGAL’s ability to that of our baselines in accurately classifying instances based on a set of seed rules, which are shown in Table 1. For each dataset, we provide exactly one seed LF foreach class. Each seed...
Parsing signal from noise is critical to learning from weak and rule-based supervision. Accordingly, we compare ReGAL’s ability to that of our baselines in accurately classifying instances based on a set of seed rules, which are shown in Table 1. We provided exactly each class with exactly one labeling function consist...
{ "annotation": [ "Development", "Rewriting_light" ], "instruction": "", "annotator": "annotator_06" }
{ "annotation": [ "Rewriting_medium", "Development" ], "instruction": "", "annotator": "annotator_08" }
IoTyuVEanE
Et-c0vQfeb
4
[ { "text": "Parsing signal from noise is critical to learning from weak and rule-based supervision." }, { "text": "Accordingly, we compare ReGAL’s ability to that of our baselines in accurately classifying instances based on a set of seed rules, which are shown in Table 1." }, { "text": "For each...
[ { "text": "Parsing signal from noise is critical to learning from weak and rule-based supervision." }, { "text": "Accordingly, we compare ReGAL’s ability to that of our baselines in accurately classifying instances based on a set of seed rules, which are shown in Table 1." }, { "text": "We provi...
LC37_sQl_t.XlHDVLz97W.01
In this paper, we introduce ZeroC, a new framework for zero-shot concept recognition and acquisition at inference time. Our experiments show that in a challenging grid-world domain, ZeroC is able to recognize complex, hierarchical concepts composed of English characters in a grid-world in a zero-shot manner, being give...
In this paper, we introduce ZeroC, a new framework for zero-shot concept recognition and acquisition at inference time. Our experiments show that in a challenging grid-world domain, ZeroC is able to recognize complex, hierarchical concepts composed of English characters in a grid-world in a zero-shot manner, being give...
{ "annotation": [ "Development" ], "instruction": "", "annotator": "annotator_06" }
{ "annotation": [ "Content_addition", "Rewriting_light" ], "instruction": "", "annotator": "annotator_08" }
LC37_sQl_t
XlHDVLz97W
1
[ { "text": "In this paper, we introduce ZeroC, a new framework for zero-shot concept recognition and acquisition at inference time." }, { "text": "Our experiments show that in a challenging grid-world domain, ZeroC is able to recognize complex, hierarchical concepts composed of English characters in a gr...
[ { "text": "In this paper, we introduce ZeroC, a new framework for zero-shot concept recognition and acquisition at inference time." }, { "text": "Our experiments show that in a challenging grid-world domain, ZeroC is able to recognize complex, hierarchical concepts composed of English characters in a gr...
CVRUl83zah.I75TtW0V7.17
Attention on all AP metrics. It is better at the attribute classification ignoring the 3d coordinates (96. → 98.8) and improves especially for the stricter AP thresholds like AP 0 . 125 (7.9 → 76.9). Note that a stricter AP threshold is always upper bounded by the looser AP threshold, so iDSPN is guaranteed to be bett...
Attention on all AP metrics. It is better at attribute classification when ignoring the 3d coordinates (AP ∞ , 96.4% → 98.8%) and improves especially for the metrics with stricter 3d coordinate thresholds (AP 0 . 125 , 7.9% → 76.9%). This is despite Slot Attention † using a three times higher weight on the loss for the ...
{ "annotation": [ "Development", "Rewriting_light" ], "instruction": "", "annotator": "annotator_07" }
null
CVRUl83zah
I75TtW0V7
17
[ { "text": "Attention on all AP metrics." }, { "text": "It is better at the attribute classification ignoring the 3d coordinates (96." }, { "text": "→ 98.8) and improves especially for the stricter AP thresholds like AP 0 ." }, { "text": "125 (7.9 → 76.9)." }, { "text": " Note th...
[ { "text": "Attention on all AP metrics." }, { "text": "It is better at attribute classification when ignoring the 3d coordinates (AP ∞ , 96.4%" }, { "text": "→ 98.8%) and improves especially for the metrics with stricter 3d coordinate thresholds (AP 0 ." }, { "text": "125 , 7.9% → 76.9%)....
SyF8k7bCW.HytIRPamf.01
Less Constraints: During encoding, the explicit word order information used inRNN will help the vector representation capture more of the temporally-specific relationships among words, but this same constraint (if using RNN as the decoder) could be an inappropriate constraint in the decoding process.
The results are presented in the Table 1. Generally, the three different decoding settings didn’t make much of a difference in terms of the performance on selected downstream tasks, with RNN or CNN as the decoder. The results tell us that, in terms of learning good sentence representations, the autoregressive decoder d...
{ "annotation": [ "Unusable" ], "instruction": "", "annotator": "annotator_03" }
{ "annotation": [ "Rewriting_heavy" ], "instruction": "Can you reformulate my entire paragraph?", "annotator": "annotator_09" }
SyF8k7bCW
HytIRPamf
1
[ { "text": "Less Constraints: During encoding, the explicit word order information used inRNN will help the vector representation capture more of the temporally-specific relationships among words, but this same constraint (if using RNN as the decoder) could be an inappropriate constraint in the decoding process...
[ { "text": "The results are presented in the Table 1. Generally, the three different decoding settings didn’t make much of a difference in terms of the performance on selected downstream tasks, with RNN or CNN as the decoder. The results tell us that, in terms of learning good sentence representations, the autor...
lLwt-9RJ2tm.XJsauLjck.00
Unfortunately, the distortion in w G ( S, T ) can be very large depending on the quantities on the right, and the cumulative error in cost G ( T ) blows up with the depth of the tree which is even worse. Hereis the second observation: the negative term w G ( S ∪ T, S ∪ T ) that internal node S contributes to thecost al...
Unfortunately, the distortion in w G ( S, T ) can be very large depending on the quantities on the right, and the cumulative error in cost G ( T ) blows up with the depth of the tree which is even worse. Here optimal hierarchy that is binary.
{ "annotation": [ "Unusable" ], "instruction": "", "annotator": "annotator_07" }
null
lLwt-9RJ2tm
XJsauLjck
0
[ { "text": "Unfortunately, the distortion in w G ( S, T ) can be very large depending on the quantities on the right, and the cumulative error in cost G ( T ) blows up with the depth of the tree which is even worse." }, { "text": "Hereis the second observation: the negative term w G ( S ∪ T, S ∪ T ) that...
[ { "text": "Unfortunately, the distortion in w G ( S, T ) can be very large depending on the quantities on the right, and the cumulative error in cost G ( T ) blows up with the depth of the tree which is even worse." }, { "text": "" }, { "text": "Here optimal hierarchy that is binary." } ]
p8yrWJS4W.eHA5NswPr.02
Results. Fig. 4 shows that certain alterations—such as completely removing articles from the evaluated text—have almost no impact on the divergence between our reference and test corpora for various ∆ . In fact, text without any articles is judged as better than GPT-2 XL ’s by most of the cluster-based divergences. Fu...
Results. Fig. 4 shows that certain alterations to the evaluated text—such as completely removing articles—have almost no impact on its divergences from the reference corpora for various ∆ . In fact, text without any articles is judged as better than GPT-2 XL ’s by all of the cluster-based divergences (see Fig. 9 for a ...
{ "annotation": [ "Rewriting_light" ], "instruction": "Make the concepts a bit more specific, such that some vague ideas are more clear.", "annotator": "annotator_03" }
{ "annotation": [ "Rewriting_light" ], "instruction": "Revise the writing for better readability.", "annotator": "annotator_07" }
p8yrWJS4W
eHA5NswPr
2
[ { "text": "Results." }, { "text": "Fig." }, { "text": "4 shows that certain alterations—such as completely removing articles from the evaluated text—have almost no impact on the divergence between our reference and test corpora for various ∆ ." }, { "text": "In fact, text without any ar...
[ { "text": "Results." }, { "text": "Fig." }, { "text": "4 shows that certain alterations to the evaluated text—such as completely removing articles—have almost no impact on its divergences from the reference corpora for various ∆ ." }, { "text": "In fact, text without any articles is judg...
rkwFe19K7.BJCfw3tCm.00
We follow several rules when selecting victim nodes. First, the attack must be successful on the victim node to fool the model. Next, we try our best to find successful attacks on victim nodes with different node degree to evaluate diverse victim nodes’ properties. Finally, we choose victim nodes among those with the sa...
We follow several rules when selecting victim nodes. First, the attack must be successful on the victim node to fool the model. Next, we try our best to find successful attacks on victim nodes with different node degree to evaluate diverse victim nodes’ properties. Finally, we choose victim nodes among those with the sa...
{ "annotation": [ "Content_addition" ], "instruction": "", "annotator": "annotator_02" }
{ "annotation": [ "Development" ], "instruction": "", "annotator": "annotator_07" }
rkwFe19K7
BJCfw3tCm
0
[ { "text": "We follow several rules when selecting victim nodes." }, { "text": "First, the attack must be successful on the victim node to fool the model." }, { "text": "Next, we try our best to find successful attacks on victim nodes with different node degree to evaluate diverse victim nodes’ pr...
[ { "text": "We follow several rules when selecting victim nodes." }, { "text": "First, the attack must be successful on the victim node to fool the model." }, { "text": "Next, we try our best to find successful attacks on victim nodes with different node degree to evaluate diverse victim nodes’ pr...
B1SkMaDvr.W2MCLgZGr.00
In this paper, we prove new generalization bounds for convolutional networks that take account of this effect. As in earlier analyses for the fully connected case, our bounds are in terms of the distance from the initial weights, and the number of parameters. Additionally, our bounds are “size-free”, in the sense that ...
In this paper, we prove new generalization bounds for convolutional networks that take account of this effect. As in earlier analyses for the fully connected case, our bounds are in terms of the distance from the initial weights, and the number of parameters. Additionally, our bounds independent of the number of pixels...
{ "annotation": [ "Concision" ], "instruction": "Make the ideas more concise.", "annotator": "annotator_03" }
{ "annotation": [ "Concision" ], "instruction": "Remove unnecessary details.", "annotator": "annotator_07" }
B1SkMaDvr
W2MCLgZGr
0
[ { "text": "In this paper, we prove new generalization bounds for convolutional networks that take account of this effect." }, { "text": "As in earlier analyses for the fully connected case, our bounds are in terms of the distance from the initial weights, and the number of parameters." }, { "tex...
[ { "text": "In this paper, we prove new generalization bounds for convolutional networks that take account of this effect." }, { "text": "As in earlier analyses for the fully connected case, our bounds are in terms of the distance from the initial weights, and the number of parameters." }, { "tex...